
RF-GS Radio-Frequency Gaussian Splatting for Dynamic Electromagnetic Scene Representation

PODCAST: a complete neural rendering system centered on Radio-Frequency Gaussian Splatting (RF-GS), a technique designed to reconstruct dynamic 3D scenes using electromagnetic data instead of optical images. The GaussianSplatModel defines the scene representation, managing explicit parameters for hundreds of thousands of Gaussians and employing a neural shader to translate learned RF features into visible RGB colors. Critical to the approach is the Adaptive Electromagnetic Density Control, featuring specialized loss functions and pruning/densification strategies tailored to the sparse, noisy nature of radio signals. To achieve high throughput, rendering relies on the CUDAGaussianRenderer adapter, which targets high-speed, optimized CUDA rasterization kernels for projecting 3D Gaussians onto a 2D image plane. This overall framework provides a massive performance advantage over previous volumetric methods, allowing for real-time sensing applications like through-wall human tracking.
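Below is a minimal sketch of what such a scene representation might look like: explicit per-Gaussian parameters plus a small neural shader that maps learned RF features to RGB. The class name GaussianSplatModel comes from the description above, but every field name, dimension, and the shader architecture are illustrative assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn

class GaussianSplatModel(nn.Module):
    """Illustrative sketch of the explicit scene representation described above.
    Field names, dimensions, and the shader architecture are assumptions."""

    def __init__(self, num_gaussians: int = 200_000, rf_feat_dim: int = 16):
        super().__init__()
        # Explicit per-Gaussian parameters.
        self.means = nn.Parameter(torch.randn(num_gaussians, 3))               # positions mu
        self.log_scales = nn.Parameter(torch.full((num_gaussians, 3), -4.6))   # exp(-4.6) ~= 0.01
        self.rotations = nn.Parameter(torch.randn(num_gaussians, 4))           # quaternions
        self.opacity_logits = nn.Parameter(torch.zeros(num_gaussians))         # alpha before sigmoid
        # Learned RF feature embedding carried by each Gaussian.
        self.rf_features = nn.Parameter(torch.randn(num_gaussians, rf_feat_dim))
        # Neural shader: translates RF features (plus view direction) into RGB.
        self.shader = nn.Sequential(
            nn.Linear(rf_feat_dim + 3, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Sigmoid(),
        )

    def shade(self, view_dirs: torch.Tensor) -> torch.Tensor:
        """Map each Gaussian's RF feature to an RGB color for rasterization."""
        return self.shader(torch.cat([self.rf_features, view_dirs], dim=-1))
```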

Accurate electromagnetic scene reconstruction with RF-GS (Radio-Frequency Gaussian Splatting) relies on two primary categories of innovation tailored to the sparse, noisy, and multi-scale nature of radio-frequency (RF) fields: RF-Native Supervision and Adaptive Electromagnetic Density Control.

1. RF-Native Supervision

The standard photometric supervision used in traditional Gaussian Splatting is replaced by a novel RF-specific loss function, which trains the model using raw RF measurements (like complex-valued CSI, power-delay profiles, or multi-frequency RF measurements) without needing optical ground truth.

The training objective combines position alignment, feature consistency, and regularization. The key RF-specific components of this supervision are listed below, with a minimal code sketch following the list:

  • RF Feature Consistency Loss ($L_{feat}$): This loss ensures the consistency between the learned RF feature embedding ($f_{nn(k)}$) carried by each Gaussian and the RF feature ($\phi(p_k)$) extracted from the raw RF measurements at sample points ($p_k$). This directly supervises the 3D Gaussians using the electromagnetic modality.
  • RF-Weighted Position Loss ($L_{pos}$): This loss encourages the Gaussians to align their positions ($\mu_{nn(k)}$) with the RF measurement sample points ($p_k$). Crucially, the loss is weighted ($w_k$) by the RF signal strength ($| \phi(p_k) |$). This signal-strength weighting is vital because it prevents the model from overfitting to noisy regions, contributing to a substantial improvement in quality (+5.4 dB PSNR in ablation studies).
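
A minimal sketch of these two supervision terms is shown below, assuming PyTorch tensors for the Gaussian parameters and the RF features $\phi(p_k)$ extracted at sample points. The nearest-neighbour assignment, the normalization of the weights $w_k$, and the loss balancing are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def rf_native_losses(means, rf_features, sample_points, phi,
                     lambda_feat=1.0, lambda_pos=1.0):
    """Hypothetical sketch of the RF-native supervision terms.

    means          (N, 3): Gaussian positions mu_i
    rf_features    (N, D): learned per-Gaussian RF embeddings f_i
    sample_points  (K, 3): RF measurement sample points p_k
    phi            (K, D): RF features phi(p_k) extracted from raw measurements
    """
    # Nearest Gaussian nn(k) for every sample point (brute force for clarity).
    nn_idx = torch.cdist(sample_points, means).argmin(dim=1)

    # RF feature consistency loss: f_{nn(k)} should match phi(p_k).
    l_feat = F.mse_loss(rf_features[nn_idx], phi)

    # RF-weighted position loss: pull mu_{nn(k)} toward p_k, weighted by the
    # signal strength w_k = |phi(p_k)| so weak/noisy regions contribute less.
    w_k = phi.norm(dim=-1)
    w_k = w_k / (w_k.sum() + 1e-8)
    l_pos = (w_k * (means[nn_idx] - sample_points).pow(2).sum(dim=-1)).sum()

    return lambda_feat * l_feat + lambda_pos * l_pos
```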

2. Adaptive Electromagnetic Density Control

Standard 3D Gaussian Splatting density control relies on view-space gradients derived from rendering RGB images, which are explicitly noted as unsuitable for RF modalities. RF-GS introduces adaptive strategies specifically designed for the extreme sparsity and noise characteristics of electromagnetic fields.

The components of this adaptive density control include:

RF-Aware Densification (Adding New Gaussians)

Densification creates new Gaussians in regions where the RF measurements are poorly represented. The criteria, sketched in code after this list, are based on spatial distance and feature variation:

  1. Distance-Based Trigger: Densification is triggered when the nearest-neighbor distance ($d_k$) of a sample point to an existing Gaussian is greater than $2 \times$ the median of all nearest-neighbor distances. This targets areas that are spatially sparse or poorly fit.
  2. Feature Gradient Trigger: A point must also satisfy a high RF feature gradient condition ($| \nabla \phi(p_k) | > \tau = 0.1$).
    • This feature gradient is computed via finite differences across frequency bins.
    • Using the feature gradient trigger is essential (improving PSNR by +3.9 dB) as it focuses densification on regions of high feature variation within the electromagnetic field, which correspond to critical scatterers or dynamic scene elements.
  3. Initialization: New Gaussians are initialized with small scales ($s = 0.01$) and moderate opacity ($\alpha = 0.1$).
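
The following sketch illustrates both densification triggers and the initialization of new Gaussians, assuming per-point RF features are stored along a frequency axis so the gradient can be approximated by finite differences across bins; tensor shapes and helper names are hypothetical.

```python
import torch

def rf_aware_densify(means, sample_points, phi_freq, tau=0.1):
    """Hypothetical sketch of the two densification triggers.

    means          (N, 3): existing Gaussian positions
    sample_points  (K, 3): RF measurement sample points p_k
    phi_freq       (K, F): per-point RF features ordered along frequency bins
    Returns the sample points at which new Gaussians should be spawned.
    """
    # 1. Distance trigger: nearest-neighbor distance d_k to any existing
    #    Gaussian exceeds 2x the median of all d_k.
    d_k = torch.cdist(sample_points, means).min(dim=1).values
    sparse_mask = d_k > 2.0 * d_k.median()

    # 2. Feature-gradient trigger: |grad phi(p_k)| > tau, approximated by
    #    finite differences across adjacent frequency bins.
    grad_phi = phi_freq[:, 1:] - phi_freq[:, :-1]
    grad_mask = grad_phi.norm(dim=-1) > tau

    return sample_points[sparse_mask & grad_mask]

def init_new_gaussians(new_points):
    """New Gaussians start with small scales (s = 0.01) and moderate opacity
    (alpha = 0.1), as described above."""
    n = new_points.shape[0]
    return {
        "means": new_points.clone(),
        "scales": torch.full((n, 3), 0.01),
        "opacities": torch.full((n,), 0.1),
    }
```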

RF-Specific Pruning (Removing Ineffective Gaussians)

Pruning removes ineffective or redundant Gaussians to maintain efficiency and accuracy; a short code sketch follows the list.

  1. Low Opacity Pruning: Gaussians with opacity below a low threshold ($\alpha_i < 0.005$) are removed.
  2. Poor Feature Reconstruction Pruning: Gaussians are pruned if they exhibit consistently poor feature reconstruction (error $> 0.05$ over 10 iterations). This ties pruning directly to the fidelity of the RF data representation.
  3. Spatial Redundancy Pruning: Gaussians that show spatial redundancy with neighboring Gaussians (distance $< 0.01$) are removed.
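
A sketch of the three pruning rules as a single boolean keep-mask is given below. The error-history buffer and the handling of redundant pairs are assumptions; in particular, a real implementation would keep one Gaussian from each redundant pair rather than dropping both.

```python
import torch

def rf_specific_prune_mask(opacities, feat_err_history, means,
                           opacity_thresh=0.005, err_thresh=0.05,
                           err_window=10, merge_dist=0.01):
    """Hypothetical sketch of the three pruning rules as a keep-mask.

    opacities        (N,):   per-Gaussian opacity alpha_i
    feat_err_history (T, N): feature reconstruction error per recent iteration
    means            (N, 3): Gaussian positions
    """
    # 1. Low opacity: alpha_i < 0.005.
    low_opacity = opacities < opacity_thresh

    # 2. Consistently poor feature reconstruction: error > 0.05 over the
    #    last 10 recorded iterations.
    poor_feat = (feat_err_history[-err_window:] > err_thresh).all(dim=0)

    # 3. Spatial redundancy: closer than 0.01 to another Gaussian. (Simplified:
    #    this drops both members of a redundant pair; a real implementation
    #    would keep one of them.)
    dists = torch.cdist(means, means)
    dists.fill_diagonal_(float("inf"))
    redundant = dists.min(dim=1).values < merge_dist

    return ~(low_opacity | poor_feat | redundant)
```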

These RF-specific adaptations capture the RF field structure better than fixed density control (+3.6 dB PSNR) and effectively remove redundant Gaussians (+2.5 dB PSNR).


Analogy: If standard Gaussian Splatting is like sculpting a figure using light and continuously refining the surface based on how the light reflects, RF-GS is like sculpting a figure in a fog bank using sonar beams. The RF-specific innovations mean the system is supervised by the characteristics of the sonar echoes (RF features and signal strength) rather than visible light, and it knows exactly where to add more sculpting material (densification) not just based on visual holes, but based on where the sonar signal changes most rapidly (feature gradients). It also removes material (pruning) if it’s too weak or redundant based on the quality of the signal return, ensuring the final electromagnetic shape is clean and accurate despite the sparse, noisy nature of the measurement medium.

Here’s a clear, up-to-date (November 2025) head-to-head comparison between 4D NeRF (Neural Radiance Fields extended to dynamic scenes via time-conditioned networks, e.g., HyperNeRF, D-NeRF, or recent variants like MoBluRF) and 4D Gaussian Splatting (explicit 4D Gaussians for deformable/dynamic representation, e.g., 4D-GS, MEGA, Instant4D) across key dimensions for research and real-world deployment.

| Aspect | 4D NeRF (and variants like HyperNeRF, TiNeuVox, MoBluRF) | 4D Gaussian Splatting (e.g., 4D-GS, MEGA, Instant4D) | Winner (Nov 2025) |
|---|---|---|---|
| Training speed | 1–12 hours (e.g., MoBluRF: ~2–4 hours on blurry videos; TiNeuVox: ~1 hour with neural voxels) | 2–30 minutes (e.g., Instant4D: 2–10 min from monocular video; MEGA: <20 min with memory optimizations) | 4DGS, by 10–100× |
| Rendering speed (real-time) | 0.5–10 fps at 1080p (improved via baking/distillation, but still ray-marching heavy) | 50–500+ fps at 1080p+ (native rasterization; e.g., 4D-GS hits 100+ fps on RTX 40-series) | 4DGS, by 50–100× |
| Peak quality (PSNR/SSIM on dynamic datasets) | High fidelity in complex deformations/lighting (≈28–32 dB PSNR; excels in blurry/motion-heavy scenes like MoBluRF) | Comparable or slightly higher on short clips (≈30–34 dB; e.g., MEGA beats baselines by 1–2 dB with anti-aliasing) | Tie (4D NeRF edges ahead in smoothness) |
| Handling complex motion (non-rigid, occlusions) | Strong via deformation fields/MLPs (HyperNeRF handles long-range motion well, but prone to blurring in fast scenes) | Excellent with explicit deformation/trajectory models (4D-GS captures sharp non-rigid motion; Instant4D adds SLAM for uncalibrated videos) | 4DGS |
| Memory during training | High (8–24 GB; neural voxels reduce this to ~6 GB in TiNeuVox) | Very high initially (16–40 GB for millions of 4D Gaussians), though optimized variants like MEGA cut it to <10 GB | 4D NeRF |
| Memory at inference | Low after distillation (200–800 MB baked models) | 100 MB–2 GB (compact Gaussian lists; e.g., the SUNDAE variant: 104 MB at 26+ dB PSNR) | Tie |
| Editing / relighting / controllability | Moderate (semantic distillation in 4D-Editor enables object edits, but propagation over time is slow) | Easy (direct Gaussian manipulation; e.g., SC-GS for editable dynamics, RigGS for articulated objects) | 4DGS |
| Sparse/few-view input | Better generalization (e.g., NeRS handles 3–6 views with priors; less overfitting than static NeRF) | Improving, but typically needs ~20–50 views (Generative Sparse-View GS uses LoRA diffusion for 4–8 views) | 4D NeRF |
| Large-scale / long sequences | Scalable with hybrids (e.g., 4Dynamic: city-scale via cascaded diffusion), but compute-intensive | Strong for short clips; emerging for long sequences (e.g., GS-DiT tracks points over extended videos) | 4D NeRF (for scale) |
| Ease of implementation | Mature ecosystems (Nerfstudio, PyTorch-based variants) | Growing repos (e.g., hustvl/4DGaussians, yangjiheng/3DGS_and_Beyond); more modular, but with explicit-representation quirks | 4D NeRF |
| Community momentum (2024–2025) | Steady evolution (e.g., MoBluRF at TPAMI 2025; focus on generative extensions like 4Dynamic) | Explosive (CVPR/ICCV 2025: 20+ papers; NeurIPS 2024/2025 hybrids like MD-Splatting dominate benchmarks) | 4DGS |

Bottom line (November 2025)

  • Choose 4D NeRF for ultimate smoothness in highly constrained, sparse-view dynamics (e.g., blurry handheld videos or generative text-to-4D like 4Dynamic) where quality trumps speed—it’s the “precise but ponderous” workhorse, with 2025 variants like MoBluRF closing the gap on real-world messiness.
  • Choose 4D Gaussian Splatting for everything interactive/dynamic in 2025 (real-time AR/VR, robotics sims, volumetric video)—it’s overtaken NeRF entirely for efficiency, with papers like MEGA and Instant4D making it 30–100× faster while matching or exceeding quality on benchmarks (e.g., D-NeRF datasets). The explicit nature shines in editing and deployment, and the field’s momentum (e.g., ICCV/CVPR 2025 floods) confirms it’s the default for dynamic scenes. Hybrids (e.g., NeRF-guided 4DGS) are the hot edge for peak performance.

Links

  • r/GaussianSplatting (reddit.com): Gaussian Splatting vs NeRF Models – What’s the Difference and Which is created by Lumalabs?
  • NeRF vs Gaussian Splatting: The 2025 Breakthrough in 3D Scene Reconstruction (sparc3d.art)
  • Gaussian splatting vs. photogrammetry vs. NeRFs (teleport.varjo.com)
  • NeRFs, gaussian splatting, and the future of VFX (fxguide.com)
  • Spatial Computing 101: NeRFs vs. Gaussian Splatting (vidyatec.com)
  • From NeRFs to Gaussian Splats, and Back (arxiv.org)
  • The Battle For Realism in 3D Rendering: A Brief Overview of NeRFs vs. Gaussian Splatting (medium.com)
  • MEGA: Memory-Efficient 4D Gaussian Splatting for Dynamic Scenes (ICCV 2025; openaccess.thecvf.com)
  • NeRF: Neural Radiance Field in 3D Vision: A Comprehensive Review (arxiv.org)
  • yangjiheng/3DGS_and_Beyond_Docs (github.com)
  • 3D Gaussian Splatting vs NeRF: The End Game of 3D Reconstruction? (pyimagesearch.com)
  • Gaussian Splatting with NeRF-based color and opacity (sciencedirect.com)
  • 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering (CVPR 2024; arXiv:2310.08528)
  • Radiance Fields (Gaussian Splatting and NeRFs) (radiancefields.com)
  • Gaussian splatting (en.wikipedia.org)
  • The Impact and Outlook of 3D Gaussian Splatting, Bernhard Kerbl (arxiv.org)
  • Instant4D: 4D Gaussian Splatting in Minutes (arxiv.org)
  • Instant4D: Real-Time 4D Scene Reconstruction (emergentmind.com)
  • SplatFields: Neural Gaussian Splats for Sparse 3D and 4D Reconstruction (link.springer.com)
  • hustvl/4DGaussians: 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering (github.com)
  • Generative Sparse-View Gaussian Splatting (CVPR; openaccess.thecvf.com)
  • Improving Gaussian Splatting with Localized Points Management (CVPR; openaccess.thecvf.com)
  • Two-stage framework reconstructs sharp 4D scenes from blurry handheld videos (MoBluRF; techxplore.com)
  • 4Dynamic: Text-to-4D Generation with Hybrid Priors (arxiv.org)
  • SlimmeRF: Slimmable Radiance Fields (ieeexplore.ieee.org)
  • 4D-Editor: Interactive Object-level Editing in Dynamic Neural Radiance Fields via Semantic Distillation (arXiv:2310.16858)
  • Sparse-View 3D Reconstruction: Recent Advances and Open Challenges (arxiv.org)
  • r/Nerf (reddit.com): What do you want to see out of 2025’s Pro Blasters?
