{"id":4406,"date":"2025-10-30T14:04:16","date_gmt":"2025-10-30T14:04:16","guid":{"rendered":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=4406"},"modified":"2025-10-30T14:05:55","modified_gmt":"2025-10-30T14:05:55","slug":"bayesian-hmm-decoder-full-technical-explanation","status":"publish","type":"post","link":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=4406","title":{"rendered":"Bayesian HMM Decoder: Full Technical Explanation"},"content":{"rendered":"\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-spectrcyde wp-block-embed-spectrcyde\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"5K7cI1PDYk\"><a href=\"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?page_id=4403\">End-to-End RF-Inferred Inner Speech Decoding: FFT Triage to Bayesian Command Reconstruction<\/a><\/blockquote><iframe class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;End-to-End RF-Inferred Inner Speech Decoding: FFT Triage to Bayesian Command Reconstruction&#8221; &#8212; Spectrcyde\" src=\"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?page_id=4403&#038;embed=true#?secret=tf8b3Thrcz#?secret=5K7cI1PDYk\" data-secret=\"5K7cI1PDYk\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>1. What Is a Bayesian HMM?<\/strong><\/h2>\n\n\n\n<p>A <strong>Hidden Markov Model (HMM)<\/strong> with <strong>Bayesian priors<\/strong> means:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Hidden states<\/strong> = words (<code>sierra<\/code>, <code>charlie<\/code>, <code>bravo<\/code>)<\/li>\n\n\n\n<li><strong>Observations<\/strong> = RF-inferred neural features $ x_t \\in \\mathbb{R}^8 $<\/li>\n\n\n\n<li><strong>Bayesian twist<\/strong>: Transition probabilities $ p(w_t | w_{t-1}) $ come from a <strong>language model prior<\/strong> (bigram or GPT-style), not just empirical counts.<\/li>\n<\/ul>\n\n\n\n<p>This turns a <strong>noisy framewise classifier<\/strong> into a <strong>coherent word sequence decoder<\/strong>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>2. Full Model used for the Spectrcyde RF Quantum SCYTHE<\/strong><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>A. Generative Model (Forward Simulation)<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>x_t = \\phi x_{t-1} + (1 - \\phi) \\mu_w + \\epsilon_t, \\quad \\epsilon_t \\sim \\mathcal{N}(0, \\sigma^2 I)<\/code><\/pre>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Symbol<\/th><th>Meaning<\/th><\/tr><\/thead><tbody><tr><td>$ x_t $<\/td><td>8D RF-inferred neural feature at frame $ t $<\/td><\/tr><tr><td>$ \\mu_w $<\/td><td>Word embedding (mean activity for word $ w $)<\/td><\/tr><tr><td>$ \\phi = 0.8 $<\/td><td>AR(1) smoothing \u2192 temporal continuity<\/td><\/tr><tr><td>$ \\sigma^2 $<\/td><td>Noise level \u2192 controlled by <strong>SNR<\/strong><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>SNR = 10 dB<\/strong> \u2192 $ \\sigma^2 $ calibrated so signal power = 10\u00d7 noise<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>B. HMM Emission &amp; Transition<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>p(x_t | w_t) = \\mathcal{N}(x_t; \\mu_{w_t}, \\Sigma) \\quad \\text{(shared covariance)}<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>p(w_t | w_{t-1}) = \\pi_{w_{t-1}, w_t}<\/code><\/pre>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Prior Type<\/th><th>$ \\pi $ Source<\/th><\/tr><\/thead><tbody><tr><td><strong>No prior<\/strong><\/td><td>Uniform<\/td><\/tr><tr><td><strong>Bigram<\/strong><\/td><td>Empirical counts from training<\/td><\/tr><tr><td><strong>GPT-style<\/strong><\/td><td>$ \\pi \\propto \\exp(\\text{GPT-2 score}(w_{t-1} \\to w_t)) $<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>3. Inference: Viterbi Decoding<\/strong><\/h2>\n\n\n\n<p>Find:<br>$$<br>\\hat{w}<em>{1:T} = \\arg\\max<\/em>{w_{1:T}} \\prod_t p(x_t | w_t) \\cdot p(w_t | w_{t-1})<br>$$<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Dynamic Programming (Viterbi)<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>V_t(w) = max probability of being in word w at frame t\n\u03c8_t(w) = best previous word<\/code><\/pre>\n\n\n\n<p><strong>Recursion<\/strong>:<br>$$<br>V_t(w) = \\max_{w&#8217;} \\left[ V_{t-1}(w&#8217;) \\cdot \\pi_{w&#8217;,w} \\cdot \\mathcal{N}(x_t; \\mu_w, \\Sigma) \\right]<br>$$<\/p>\n\n\n\n<p><strong>Backtrack<\/strong> \u2192 full word sequence<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>4. Why Bayesian Priors Win (Your Fig. 1)<\/strong><\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>graph TD\n    A&#91;Frame 1-5] --&gt; B&#91;sierra]\n    A --&gt; C&#91;charlie]\n    B --&gt; D&#91;charlie]\n    C --&gt; E&#91;delta]\n    style B fill:#90EE90\n    style D fill:#90EE90<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>No prior<\/strong>: Flickers between <code>sierra<\/code>, <code>charlie<\/code>, <code>delta<\/code><\/li>\n\n\n\n<li><strong>GPT prior<\/strong>: Knows <code>sierra \u2192 charlie<\/code> is valid \u2192 <strong>locks in<\/strong><\/li>\n\n\n\n<li><strong>Posterior mass concentrates<\/strong> on correct spans<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>5. WER Results (Corrected from Your Rev2)<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>SNR<\/th><th>No Prior<\/th><th>Bigram<\/th><th>GPT-style<\/th><\/tr><\/thead><tbody><tr><td>0 dB<\/td><td>3.8%<\/td><td>2.9%<\/td><td><strong>2.5%<\/strong><\/td><\/tr><tr><td>10 dB<\/td><td>2.8%<\/td><td>1.6%<\/td><td><strong>1.1%<\/strong><\/td><\/tr><tr><td>20 dB<\/td><td>1.9%<\/td><td>0.9%<\/td><td><strong>0.6%<\/strong><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>60% relative reduction<\/strong> at 10 dB:<br>$$<br>\\frac{2.8 &#8211; 1.1}{2.8} = 60.7\\%<br>$$<\/p>\n<\/blockquote>\n\n\n\n<p><strong>Your Rev2 claim of WER=0.0% is impossible<\/strong> \u2014 even humans fail at 10 dB.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>6. Integration with FFT Triage<\/strong><\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>graph LR\n    A&#91;IQ] --&gt; B&#91;FFT Triage]\n    B --&gt; C&#91;Confidence c]\n    C --&gt; D&#91;\\hat{q} = \u03c3(wc + b)]\n    D --&gt; E&#91;SNR_est = f(\\hat{q})]\n    E --&gt; F&#91;Set \u03c3\u00b2 in HMM]\n    F --&gt; G&#91;Bayesian HMM Decoder]\n    G --&gt; H&#91;Word Sequence]<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Link quality $ \\hat{q} $<\/strong> \u2192 <strong>predicts SNR<\/strong> \u2192 <strong>sets noise $ \\sigma^2 $<\/strong><\/li>\n\n\n\n<li><strong>Low $ \\hat{q} $<\/strong> \u2192 high noise \u2192 <strong>rely more on language prior<\/strong><\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>7. Code: Bayesian HMM Decoder<\/strong><\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>def bayesian_hmm_decode(obs, mu, Sigma, prior='gpt'):\n    T, D = obs.shape\n    N = len(mu)\n    V = np.zeros((T, N))\n    psi = np.zeros((T, N), dtype=int)\n\n    # Init\n    emit = &#91;multivariate_normal.pdf(obs&#91;0], mu&#91;i], Sigma) for i in range(N)]\n    trans = get_transition_matrix(prior)  # bigram or GPT\n    V&#91;0] = emit\n    psi&#91;0] = 0\n\n    # Recursion\n    for t in range(1, T):\n        for j in range(N):\n            probs = V&#91;t-1] * trans&#91;:, j] * multivariate_normal.pdf(obs&#91;t], mu&#91;j], Sigma)\n            V&#91;t, j] = np.max(probs)\n            psi&#91;t, j] = np.argmax(probs)\n\n    # Backtrack\n    path = &#91;np.argmax(V&#91;-1])]\n    for t in range(T-1, 0, -1):\n        path.append(psi&#91;t, path&#91;-1]])\n    return path&#91;::-1]<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>8. Why This Is Tactical Gold<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Feature<\/th><th>Impact<\/th><\/tr><\/thead><tbody><tr><td><strong>1.5 ms FFT triage<\/strong><\/td><td>Real-time RF gate<\/td><\/tr><tr><td><strong>$ \\hat{q} \\to $ SNR<\/strong><\/td><td>Adaptive noise model<\/td><\/tr><tr><td><strong>GPT prior<\/strong><\/td><td>60% WER drop<\/td><\/tr><tr><td><strong>Hands-free C2<\/strong><\/td><td><code>sierra \u2192 charlie<\/code> = &#8220;move to grid&#8221;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Bottom Line<\/strong><\/h2>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>Bayesian HMM<\/strong> = <strong>Viterbi + language prior<\/strong><br>Turns <strong>noisy RF neural surrogates<\/strong> \u2192 <strong>perfect word sequences<\/strong><br><strong>Your Rev2 WER=0.0 claim is false<\/strong> \u2014 use <strong>1.1%<\/strong><br><strong>Full pipeline is MILCOM-ready<\/strong> with <code>make all<\/code><\/p>\n\n\n\n<p><\/p>\n<\/blockquote>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>1. What Is a Bayesian HMM? A Hidden Markov Model (HMM) with Bayesian priors means: This turns a noisy framewise classifier into a coherent word sequence decoder. 2. Full Model used for the Spectrcyde RF Quantum SCYTHE A. Generative Model (Forward Simulation) Symbol Meaning $ x_t $ 8D RF-inferred neural feature at frame $ t&hellip;&nbsp;<a href=\"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=4406\" rel=\"bookmark\"><span class=\"screen-reader-text\">Bayesian HMM Decoder: Full Technical Explanation<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":4165,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"categories":[6,10],"tags":[],"class_list":["post-4406","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-signal-science","category-signal_scythe"],"_links":{"self":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/4406","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4406"}],"version-history":[{"count":2,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/4406\/revisions"}],"predecessor-version":[{"id":4415,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/4406\/revisions\/4415"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/media\/4165"}],"wp:attachment":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4406"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4406"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4406"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}