RF-augmented reality (RF-AR) wearables enable
detection of threats, casualties and anomalies, but they rely on
a set of alert thresholds tailored to mission context and user
physiology. At present, these thresholds are either hard-coded
or tuned manually, which limits adaptability across individuals
and environments. Centralizing raw RF biomarker data to train
adaptive models raises privacy and compliance concerns, since
collecting vitals and location data may constitute a search under U.S. law [1], [2].
Federated learning offers a solution: devices collaboratively train
a shared model while keeping their data local [3], [4]. We propose
Federated Adaptation, a framework that tunes alert thresholds on
device using reinforcement signals (e.g., user acknowledgments)
and aggregates the resulting updates via a federated server. Our
contributions are:
- We design a personalized threshold adaptation algorithm that leverages local feedback to adjust detection sensitivity and uses federated averaging to produce a global base model (a minimal sketch of this loop follows the list).
- We integrate the algorithm with our Glass platform and evaluate it on Jetson and Pixel hardware under variable mission conditions, measuring false positive/negative rates, latency and energy.
- We demonstrate that federated adaptation reduces false critical alerts by 32% compared to static thresholds while preserving battery life and complying with privacy constraints.
- We provide a reproducible benchmark harness with JSON metrics, standardized traces and one-command figure generation using OpenBench-AR.