Comparison of radar view synthesis for a dynamic scene with a moving vehicle (orange box). RF4D successfully renders the moving object, whereas Radar Fields fails to recover it.
Neural fields (NFs) have achieved remarkable success in scene reconstruction and novel view synthesis. However, existing NF approaches that rely on RGB or LiDAR inputs often struggle under adverse weather conditions, limiting their robustness in real-world outdoor environments such as autonomous driving. In contrast, millimeter-wave radar is inherently resilient to environmental variations, yet its integration with NFs remains largely underexplored. Moreover, outdoor driving scenes frequently involve dynamic objects, making spatiotemporal modeling crucial for temporally consistent novel view synthesis. To address these challenges, we present RF4D, a radar-based neural field framework tailored for novel view synthesis in outdoor dynamic scenes. RF4D explicitly incorporates temporal information into its representation, enabling more accurate modeling of object motion. A dedicated scene flow module further predicts temporal offsets between adjacent frames, enforcing temporal occupancy coherence during dynamic scene reconstruction. In addition, we propose a radar-specific power rendering formulation grounded in radar sensing physics, improving both synthesis accuracy and interpretability. Extensive experiments on public radar datasets demonstrate that RF4D substantially outperforms existing methods in radar measurement synthesis and occupancy estimation accuracy, with particularly strong gains in dynamic outdoor environments.
Overview of the proposed RF4D framework. Given a 3D query point $x$ at time $t$ and view direction $\mathbf{d}$, RF4D first predicts two radar-specific physical quantities: occupancy $\alpha$ and radar cross-section (RCS) $\sigma$, using neural radar fields. The occupancy $\alpha$ indicates whether the point is physically occupied, and the RCS $\sigma$ represents its reflectivity. These quantities are combined through the radar-specific power rendering to estimate the received radar power. During training, the rendered power is supervised by ground-truth radar measurements, and the scene flow module enforces temporal consistency by predicting motion offsets and warping points to adjacent frames to regularize occupancy over time.
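The combination of occupancy and RCS into received power can be illustrated with a minimal volumetric sketch. This is an assumption about the rendering formulation, not RF4D's published equations: it composites per-sample RCS along a ray with occupancy-based transmittance weights (as in standard volume rendering) and applies the two-way $r^4$ range attenuation from the classical radar equation. The function name `render_radar_power` and the transmit-power scale `p_t` are hypothetical.

```python
import numpy as np

def render_radar_power(alpha, sigma, r, p_t=1.0):
    """Hedged sketch of volumetric radar power rendering along one ray.

    alpha : per-sample occupancy in [0, 1], shape [N]
    sigma : per-sample radar cross-section (RCS), shape [N]
    r     : range of each sample from the sensor, shape [N]
    p_t   : transmit-power scale (hypothetical constant)

    The transmittance weighting mirrors NeRF-style alpha compositing;
    the 1/r**4 factor follows the classical radar equation. The exact
    formulation used by RF4D may differ -- this is an assumption.
    """
    # Transmittance: probability the beam reaches sample i unoccluded
    # (product of "pass-through" probabilities of all earlier samples).
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    # Per-sample contribution weight, as in alpha compositing.
    weights = trans * alpha
    # Received power: RCS scaled by two-way free-space attenuation.
    return p_t * np.sum(weights * sigma / r**4)
```

For example, a single fully occupied sample at unit range contributes its full RCS, while an empty ray returns zero power; in training, such rendered powers would be compared against the measured radar returns.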
Radar robustness and generalization across weather conditions. While LiDAR point clouds degrade severely in snow, radar measurements remain stable. RF4D accurately reconstructs radar measurements, maintaining consistent performance across different weather conditions.
Our method reconstructs full 3D occupancy geometry from sparse and low-resolution radar data, capturing both moving vehicles and static objects present in the scene. LiDAR point clouds and scene images are provided for reference only.
@article{zhang2025rf4d,
title={RF4D: Neural Radar Fields for Novel View Synthesis in Outdoor Dynamic Scenes},
author={Zhang, Jiarui and Li, Zhihao and Wang, Chong and Wen, Bihan},
journal={arXiv preprint arXiv:2505.20967},
year={2025}
}