{"id":3557,"date":"2025-09-19T19:46:47","date_gmt":"2025-09-19T19:46:47","guid":{"rendered":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=3557"},"modified":"2025-09-19T21:54:02","modified_gmt":"2025-09-19T21:54:02","slug":"on-device-rf-filtering-compression-for-wearables","status":"publish","type":"post","link":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=3557","title":{"rendered":"On-Device RF Filtering &amp; Compression for Wearables"},"content":{"rendered":"\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-spectrcyde wp-block-embed-spectrcyde\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"RFARxrErR5\"><a href=\"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?page_id=3483\">On-Device RF Filtering &amp; Compression for Wearables<\/a><\/blockquote>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/mastodon.social\/@Bgilbert1984\"><img data-opt-id=219741116  fetchpriority=\"high\" decoding=\"async\" width=\"871\" height=\"889\" src=\"https:\/\/ml6vmqguit1n.i.optimole.com\/w:auto\/h:auto\/q:mauto\/f:best\/http:\/\/172-234-197-23.ip.linodeusercontent.com\/wp-content\/uploads\/2025\/09\/image-79.png\" alt=\"\" class=\"wp-image-3552\" srcset=\"https:\/\/ml6vmqguit1n.i.optimole.com\/w:871\/h:889\/q:mauto\/f:best\/https:\/\/172-234-197-23.ip.linodeusercontent.com\/wp-content\/uploads\/2025\/09\/image-79.png 871w, 
https:\/\/ml6vmqguit1n.i.optimole.com\/w:294\/h:300\/q:mauto\/f:best\/https:\/\/172-234-197-23.ip.linodeusercontent.com\/wp-content\/uploads\/2025\/09\/image-79.png 294w, https:\/\/ml6vmqguit1n.i.optimole.com\/w:768\/h:784\/q:mauto\/f:best\/https:\/\/172-234-197-23.ip.linodeusercontent.com\/wp-content\/uploads\/2025\/09\/image-79.png 768w\" sizes=\"(max-width: 871px) 100vw, 871px\" \/><\/a><\/figure>\n\n\n\n<p>Head-mounted augmented-reality (AR) devices are increasingly used by first responders and military medics to visualize radio-frequency (RF) tracks, casualty vitals and threat signatures in real time. These platforms operate under severe resource constraints: the computational budget is on the order of tens of milliseconds, the power budget is under one watt, and the thermal headroom is limited by the user\u2019s skin. Prior work demonstrated that RF\u2013AR situational awareness can be achieved within \u223c200 ms end-to-end on uncompressed networks. However, the neural networks used for classification and localization are heavily over-parameterized, leading to energy-intensive inference and lengthy stalls on battery-powered wearables. To tackle this problem, we present a pipeline for on-device RF filtering and compression that combines quantization, sparsity and knowledge distillation to shrink models without compromising mission utility. Quantization reduces the precision of weights and activations, lowering memory footprints and enabling faster integer arithmetic [1], while magnitude-based pruning removes unimportant parameters and accelerates inference [2]. Recent studies show that pruning and quantization jointly diminish computational and memory requirements [3] but must be applied carefully because their effects are non-orthogonal [4]. 
We further employ teacher\u2013student knowledge distillation, transferring knowledge from a high-capacity \u201cteacher\u201d network to a lightweight \u201cstudent\u201d model [5], [6]. Our experiments on Jetson-class edge devices and Pixel-8 smartphones sweep multiple quantization bit-widths and sparsity levels, producing accuracy\u2013latency\u2013power Pareto curves. At 50 ms median latency and 0.9 W average power, our distilled INT8\/70% sparse student attains within 1% of baseline accuracy, yielding &gt;5\u00d7 energy savings. Hardware-aware model compression techniques [7] and adaptive bit-width selection [8] enable deployment on resource-constrained wearable platforms. We release our code, datasets and measurement harness to foster reproducible research in RF\u2013AR compression.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Head-mounted augmented-reality (AR) devices are increasingly used by first responders and military medics to visualize radio-frequency (RF) tracks, casualty vitals and threat signatures in real time. 
These platforms operate under severe resource constraints: the computational budget is on the order of tens of milliseconds, the power budget is under one watt, and the thermal headroom is limited by the user\u2019s skin.&hellip;&nbsp;<a href=\"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=3557\" rel=\"bookmark\"><span class=\"screen-reader-text\">On-Device RF Filtering &amp; Compression for Wearables<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":3552,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"categories":[10],"tags":[],"class_list":["post-3557","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-signal_scythe"],"_links":{"self":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/3557","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3557"}],"version-history":[{"count":2,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/3557\/revisions"}],"predecessor-version":[{"id":3559,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/3557\/
revisions\/3559"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/media\/3552"}],"wp:attachment":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3557"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3557"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3557"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
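The abstract's compression recipe has three ingredients: magnitude-based pruning, INT8 quantization, and teacher-student distillation. A minimal, framework-free NumPy sketch of all three follows; the function names, the toy weight matrix, and the Hinton-style distillation loss formulation are illustrative assumptions, not the paper's released code or measurement harness.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.7):
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w), axis=None)[k - 1]  # k-th smallest magnitude
    return w * (np.abs(w) > thresh)

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: floats -> [-127, 127]."""
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """alpha * T^2 * KL(teacher || student at temperature T)
       + (1 - alpha) * cross-entropy against hard labels."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    p_hard = softmax(student_logits)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * (T ** 2) * kl + (1 - alpha) * ce

# Toy demo: prune a random "layer" to ~70% sparsity, then quantize to INT8.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
w_pruned = magnitude_prune(w, 0.7)
q, s = quantize_int8(w_pruned)
w_hat = dequantize(q, s)
print("sparsity:", np.mean(w_pruned == 0))          # ~0.70
print("max quant error:", np.max(np.abs(w_hat - w_pruned)))
```

Note the ordering: pruning before quantization lets the scale adapt to the surviving weights, which matters because (as the abstract warns) the two techniques are non-orthogonal.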