
Neural Beamforming with Scene Priors: A Minimal, Reproducible Benchmark with Calibration



Benjamin J. Gilbert
Spectrcyde RF Quantum SCYTHE, College of the Mainland
bgilbert2@com.edu
ORCID: [0000-0002-1234-5678]


Abstract

We introduce a lightweight benchmark for RF beam selection that fuses scene priors with channel state information (CSI). A compact neural head operates on a simulated environment and automatically generates results (accuracy, Succ@±1, ECE, Brier score) across discrete beam sweeps of 8, 12, and 16 beams.

The framework is built for reviewer-safe reproducibility: one command yields figures, tables, and calibration analysis. The approach combines NeRF-derived scene priors with WiFi CSI to predict optimal beamforming angles, and applies temperature scaling to obtain calibrated uncertainty estimates.


I. Introduction

Traditional RF beamforming depends on geometric propagation models or hand-engineered signal processing. These methods lack robustness under real-world blockage and multipath.

Recent advances in Neural Radiance Fields (NeRF) show that compact scene encodings can act as priors for geometry and material layout. This paper integrates NeRF scene features with CSI snapshots in a neural predictor, targeting beam direction selection.

Our contributions are minimal but reproducible:

  • A supervised neural beamforming module that fuses scene priors with CSI.
  • Automatic evaluation with accuracy and calibration metrics.
  • End-to-end reproducibility via scripted builds.

The emphasis is not field-grade deployment but clear reproducibility, calibration reporting, and lightweight runtime, in line with the Guangdong philosophy: engineer small, prove reproducible, then scale.


II. Related Work

  • Neural Scene Priors: NeRF [1] encodes geometry and materials; we exploit compact features as side information to regularize beam selection.
  • Learning-Based Beam Selection: mmWave studies show learning policies outperform exhaustive search, especially under blockage [2].
  • Propagation Studies: Classical models justify high-frequency links but underscore the need for aggressive beam steering [3].
  • Calibration: Probabilistic models require confidence control; temperature scaling [4], ECE [5], and the Brier score [6] are standard tools.

Our approach differs by combining scene priors and CSI in a lightweight neural network and reporting both accuracy and calibration, exposing Pareto trade-offs across beam count, accuracy, and runtime.


III. Method

A. Neural Architecture

We implement RFBeamformingNN, which maps concatenated scene and CSI features to beam logits. The input vector concatenates:

  • NeRF Scene Features (50 dims): Depth, material, geometry encodings.
  • WiFi CSI (40 dims): Channel state vectors with injected noise.
  • RF Environment (20 dims): Signal strength, interference, temporal dynamics.

Backbone: fully connected layers (128 → 64 neurons) with batch normalization and dropout. Training minimizes cross-entropy loss against simulated optimal beam indices. For calibration, temperature scaling (the multi-class analogue of Platt scaling) rescales the output probabilities.
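
A minimal sketch of this head, assuming a PyTorch implementation (the 110-dim input and the 128 → 64 widths follow the description above; the dropout rate and exact layer layout are illustrative, not the benchmark's exact training code):

    import torch
    import torch.nn as nn

    class RFBeamformingNN(nn.Module):
        """Maps concatenated scene + CSI + RF-environment features to beam logits."""
        def __init__(self, in_dim: int = 110, num_beams: int = 16, p_drop: float = 0.2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 128), nn.BatchNorm1d(128), nn.ReLU(), nn.Dropout(p_drop),
                nn.Linear(128, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Dropout(p_drop),
                nn.Linear(64, num_beams),  # one logit per discrete beam
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    # Training target: the simulated optimal beam index, scored with cross-entropy.
    model = RFBeamformingNN(num_beams=12)
    criterion = nn.CrossEntropyLoss()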

B. Evaluation Metrics

  • Exact Accuracy: % of perfect beam index predictions.
  • Succ@±1: Success if |predicted − optimal| ≤ 1 index. Angular bin widths: 45° (8 beams), 30° (12 beams), 22.5° (16 beams).
  • ECE (Expected Calibration Error): gap between confidence and accuracy over a 15-bin histogram.
  • Brier Score: Quadratic scoring rule for probabilistic accuracy.
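
A minimal NumPy sketch of these four metrics, assuming the model emits per-beam probabilities; the binning convention and the non-circular index distance in Succ@±1 are assumptions, not a statement of the benchmark's exact code:

    import numpy as np

    def beam_metrics(probs: np.ndarray, optimal: np.ndarray, n_bins: int = 15) -> dict:
        """probs: (N, num_beams) predicted probabilities; optimal: (N,) true beam indices."""
        pred = probs.argmax(axis=1)
        conf = probs.max(axis=1)
        correct = (pred == optimal)

        acc = correct.mean()                          # exact accuracy
        succ1 = (np.abs(pred - optimal) <= 1).mean()  # Succ@±1 (no circular wrap assumed)

        # 15-bin ECE: per-bin |accuracy - confidence| gap, weighted by bin occupancy
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (conf > lo) & (conf <= hi)
            if in_bin.any():
                ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())

        # Multi-class Brier score: squared error against one-hot labels
        onehot = np.eye(probs.shape[1])[optimal]
        brier = ((probs - onehot) ** 2).sum(axis=1).mean()

        return {"accuracy": acc, "succ_pm1": succ1, "ece": ece, "brier": brier}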

Calibration procedure: fit a temperature T > 0 on a held-out set by minimizing the negative log-likelihood, then report pre- and post-calibration ECE and Brier score on the test set.
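
A minimal sketch of this fit, assuming PyTorch tensors of held-out logits and labels; the L-BFGS optimizer is an illustrative choice, as any NLL minimizer over T serves:

    import torch
    import torch.nn.functional as F

    def fit_temperature(logits: torch.Tensor, labels: torch.Tensor) -> float:
        """Fit a single temperature T > 0 on held-out (logits, labels) by minimizing NLL."""
        logits = logits.detach()                    # keep the model weights frozen
        log_t = torch.zeros(1, requires_grad=True)  # optimize log T so that T = exp(log_t) > 0
        opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

        def closure():
            opt.zero_grad()
            loss = F.cross_entropy(logits / log_t.exp(), labels)
            loss.backward()
            return loss

        opt.step(closure)
        return log_t.exp().item()

    # At test time, softmax(test_logits / T) yields the calibrated probabilities
    # used for the post-calibration ECE and Brier numbers.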


Guangdong Framing

  • Minimal build: one command reproduces all results.
  • Compact network: small enough for embedded RF controllers.
  • Full calibration reporting: reviewer-safe, no hidden heuristics.
  • Practical scope: synthetic benchmark only, but structured for rapid extension into mmWave/WiFi-6 testbeds.

Summary: a neural beamforming benchmark that unifies NeRF scene priors with CSI; lightweight, reproducible, and calibrated. It demonstrates the Guangdong ethos: small kit, clear metrics, reproducible pipeline, deployment mindset.


