Critical alerting in wearable AR hinges on correctly
classifying moving objects as benign or threatening. RF sensors
can track the motion of drones, vehicles and personnel, but classification
from motion alone produces false critical alerts when trajectories deviate
from simple heuristics. In aviation, false alerts and missed
separation events have been traced to trajectory prediction
errors[1], motivating improved fusion of motion and intent.
Recent work in autonomous driving demonstrates that jointly
reasoning about high-level behavior and long-term trajectories
improves prediction accuracy and reduces reaction time[2].
We propose Threat-Layer Fusion, a framework that combines
dynamic occupancy motion analysis (DOMA) trajectories with
heuristic and machine-learning intent classifiers to reduce false
critical alerts in AR overlays. Our prototype processes RF
motion tracks from a wearable radar, estimates intent (approach,
hover, depart) using trajectory features and a neural intent net,
and fuses these layers to prioritize alerts. Experiments on a
scenario catalogue with drones, vehicles and personnel show that
Threat-Layer Fusion reduces false critical alerts by 37% while
incurring only 10 ms of additional latency.
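
As a rough illustration of the fusion step described above, the sketch below pairs a rule-based intent estimate derived from trajectory features (range rate and speed) with a stubbed probability output standing in for the neural intent net, and raises a critical alert only when both layers agree on an approaching target. All class names, thresholds, and the agreement rule are illustrative assumptions for exposition, not the prototype's actual implementation.

```python
# Hypothetical sketch of Threat-Layer Fusion: a heuristic intent estimate from
# trajectory features is combined with a (stubbed) learned intent probability,
# and the alert level is downgraded unless both layers indicate "approach".
# Names, thresholds, and the fusion rule are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List, Tuple

INTENTS = ("approach", "hover", "depart")

@dataclass
class Track:
    positions: List[Tuple[float, float]]  # (x, y) in metres, oldest first
    dt: float                             # sampling interval in seconds

def trajectory_features(track: Track) -> Tuple[float, float]:
    """Return (range_rate, speed) from the last two samples of an RF track."""
    (x0, y0), (x1, y1) = track.positions[-2], track.positions[-1]
    r0 = (x0 ** 2 + y0 ** 2) ** 0.5
    r1 = (x1 ** 2 + y1 ** 2) ** 0.5
    range_rate = (r1 - r0) / track.dt      # negative => closing on the wearer
    speed = (((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5) / track.dt
    return range_rate, speed

def heuristic_intent(range_rate: float, speed: float) -> str:
    """Rule-based intent label: approach, hover, or depart."""
    if speed < 0.5:
        return "hover"
    return "approach" if range_rate < 0 else "depart"

def fuse_alert(track: Track, nn_probs: Dict[str, float]) -> str:
    """Fuse heuristic and learned intent layers into an alert level.

    nn_probs maps each intent label to a probability from the intent net
    (stubbed here); a critical alert requires both layers to agree on
    'approach' with sufficient confidence.
    """
    rr, sp = trajectory_features(track)
    rule = heuristic_intent(rr, sp)
    learned = max(nn_probs, key=nn_probs.get)
    if rule == learned == "approach" and nn_probs["approach"] > 0.6:
        return "CRITICAL"
    if "approach" in (rule, learned):
        return "CAUTION"
    return "INFO"

if __name__ == "__main__":
    # Drone closing on the wearer at roughly 25 m/s.
    drone = Track(positions=[(40.0, 30.0), (38.0, 28.5)], dt=0.1)
    print(fuse_alert(drone, {"approach": 0.8, "hover": 0.15, "depart": 0.05}))
```

One natural design choice in such a sketch is requiring agreement between the heuristic and learned layers before escalating: disagreement yields a lower-priority caution rather than a critical overlay, which is one plausible mechanism for the reduction in false critical alerts reported above.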