
DOMA: Neural Motion Prediction for RF Targets


Benjamin J. Gilbert
College of the Mainland – Robotics Process Automation
bgilbert2@com.edu


Abstract

We present a lightweight Dynamic Object Motion Analysis (DOMA) model for short-horizon motion prediction of RF targets using kinematic traces. The system trains a compact multilayer perceptron (MLP) on synthetic trajectories and evaluates via autoregressive rollouts, reporting Average Displacement Error (ADE) and Final Displacement Error (FDE).

The build pipeline auto-generates all figures and tables, ensuring reviewer-safe reproducibility. DOMA achieves ADE competitive with classical Kalman filter baselines while training in under a second, supporting its use as a reproducible neural baseline for real-time RF pipelines.

Index Terms— motion prediction, RF tracking, machine learning, Kalman filter, reproducibility


I. Introduction

Short-horizon motion prediction is crucial for beam steering, proactive handover, and target tracking in RF systems. Classical methods like the Kalman filter assume linear dynamics with Gaussian noise, but real-world trajectories include acceleration bursts, turns, and environmental constraints.

We explore whether a small neural predictor can learn richer dynamics while preserving runtime efficiency. The DOMA model predicts the next-step position autoregressively, balancing accuracy with computational simplicity. Results show performance competitive with constant-velocity and constant-acceleration Kalman filters, with stable training and sub-second build times.


II. Method

A. Problem Formulation

Let the state be (x_t, y_t, z_t, t) in ENU coordinates (meters). The network predicts the next-step position:

  p̂_{t+1} = f_θ(x_t, y_t, z_t, t)

For horizon H, predictions roll out autoregressively:

  p̂_{t+h} = f_θ(p̂_{t+h-1}, t+h-1),  h > 1
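The rollout reduces to a short loop; f_theta below is a stand-in for any trained one-step predictor, and the dt/time-feature handling is an assumption since the paper does not specify how the time input advances:

```python
from typing import Callable, List, Tuple

State = Tuple[float, float, float]  # (x, y, z) in ENU meters

def rollout(f_theta: Callable[[State, float], State],
            p_t: State, t: float, horizon: int,
            dt: float = 1.0) -> List[State]:
    """Autoregressively roll the one-step predictor f_theta forward.

    Each predicted position is fed back as the next input, so errors
    compound with horizon -- exactly the behavior ADE/FDE@20 measures.
    """
    preds = []
    p, clock = p_t, t
    for _ in range(horizon):
        p = f_theta(p, clock)   # one-step prediction
        clock += dt             # advance the time feature (assumed unit step)
        preds.append(p)
    return preds
```

Feeding predictions back rather than ground truth is what makes the evaluation a true open-loop rollout.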

B. Architecture

  • Input: 4D state (position + time).
  • Hidden layers: 2 fully connected (64 units), SiLU activation, batch normalization.
  • Output: 6D vector (position and velocity).
  • Training: Mean-squared-error (MSE) loss with Adam optimizer (lr=1e−3, batch=64).
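As a minimal numpy sketch of this forward pass (batch normalization omitted for brevity, weight initialization illustrative), note that the listed layer sizes alone account for the ≈5k parameters reported in Table I:

```python
import numpy as np

rng = np.random.default_rng(0)

def silu(z: np.ndarray) -> np.ndarray:
    """SiLU / swish activation: z * sigmoid(z)."""
    return z / (1.0 + np.exp(-z))

# Layer sizes from the text: 4-D input, two 64-unit hidden layers, 6-D output.
sizes = [(4, 64), (64, 64), (64, 6)]
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n)) for m, n in sizes]

def forward(x: np.ndarray) -> np.ndarray:
    """Forward pass of the DOMA MLP (batch norm folded out / omitted)."""
    h = x
    for W, b in params[:-1]:
        h = silu(h @ W + b)     # hidden layers with SiLU
    W, b = params[-1]
    return h @ W + b            # linear output head

# Parameter count: (4*64+64) + (64*64+64) + (64*6+6) = 4,870
```

The count 320 + 4,160 + 390 = 4,870 matches the Params column for DOMA-64.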

C. Metrics

We report Average Displacement Error (ADE) and Final Displacement Error (FDE):

  ADE = (1/H) Σ_{h=1}^{H} ‖p̂_{t+h} − p_{t+h}‖₂

  FDE = ‖p̂_{t+H} − p_{t+H}‖₂
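Both metrics reduce to a few lines over an H-step rollout; a minimal sketch:

```python
import numpy as np

def ade_fde(pred: np.ndarray, truth: np.ndarray):
    """ADE and FDE for one rollout; pred and truth are (H, 3) in meters.

    ADE averages the per-step L2 displacement error over the horizon;
    FDE is the error at the final step only.
    """
    errs = np.linalg.norm(pred - truth, axis=1)  # per-step L2 error
    return errs.mean(), errs[-1]
```

In practice both are averaged over many evaluation windows, which is how the ADE@20/FDE@20 columns in Table I are produced.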


III. Experimental Setup

  • Dataset: 4000 timesteps, split 80/20.
  • Trajectories: Constant velocity, smooth acceleration, random walk perturbations, bounded dynamics.
  • Baselines: KF-CV (constant velocity) and KF-CA (constant acceleration) with tuned process noise.
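A hypothetical generator in this spirit (the paper lists the trajectory families but not the generator's constants, so accel_std, bound, and dt below are illustrative values, not the paper's settings):

```python
import numpy as np

def make_trajectory(n_steps: int = 4000, dt: float = 1.0,
                    accel_std: float = 0.05, bound: float = 500.0,
                    seed: int = 0) -> np.ndarray:
    """Synthetic ENU trace combining the listed families: a constant-velocity
    seed, smooth random-walk acceleration, and bounded dynamics via clipping.
    """
    rng = np.random.default_rng(seed)
    pos = np.zeros(3)
    vel = rng.standard_normal(3)                      # constant-velocity seed
    out = np.empty((n_steps, 3))
    for t in range(n_steps):
        vel += accel_std * rng.standard_normal(3)     # random-walk acceleration
        pos = np.clip(pos + vel * dt, -bound, bound)  # bounded dynamics
        out[t] = pos
    return out
```

An 80/20 split of the 4000 steps then yields the train/test partition described above.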

IV. Results

TABLE I – Single-Model Performance (Hidden Dim=64)

Model     Params   Train (s)   ADE@20 (m)   FDE@20 (m)
DOMA-64   4,870    0.62        9.032        54.686
KF-CV     n/a      n/a         10.827       24.327
KF-CA     n/a      n/a         8.038        17.157

DOMA trains in <1 second and achieves ADE between the two tuned filters, though KF-CA retains the best ADE and FDE at this horizon. The neural model's advantage is adaptability to nonlinear dynamics without hand-tuned motion assumptions.
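For readers reimplementing the baselines, a per-axis constant-velocity Kalman filter can be sketched as follows; the process and measurement noise values q and r are illustrative, not the paper's tuned settings:

```python
import numpy as np

def kf_cv_predict(track: np.ndarray, horizon: int,
                  q: float = 1e-2, r: float = 1.0) -> np.ndarray:
    """KF-CV baseline: filter the observed (T, D) track per axis with a
    2-state [position, velocity] model, then extrapolate `horizon` steps."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # CV transition, dt = 1
    H = np.array([[1.0, 0.0]])               # observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    preds = np.empty((horizon, track.shape[1]))
    for ax in range(track.shape[1]):
        x = np.array([track[0, ax], 0.0])
        P = np.eye(2)
        for z in track[1:, ax]:              # filter the observed history
            x = F @ x
            P = F @ P @ F.T + Q              # time update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
            x = x + K @ (np.array([z]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        for h in range(horizon):             # open-loop extrapolation
            x = F @ x
            preds[h, ax] = x[0]
    return preds
```

KF-CA differs only in carrying a third (acceleration) state per axis.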

V. Discussion
  • Minimal build: one script regenerates all figures/tables.
  • Compact architecture: ≈5k parameters, trainable in sub-second time.
  • Practical positioning: not to replace Kalman filters, but to provide a neural drop-in baseline for RF motion benchmarks.
  • Deployment mindset: engineered small, interpretable, reproducible, extendable to mmWave tracking or UAV handover pipelines.

⚙️ Summary: DOMA is a tiny but reproducible neural benchmark for RF motion prediction: a simple kit, rapid to build, competitive with classical filters, and engineered for deployment extension.

