{"id":5089,"date":"2026-03-13T05:45:25","date_gmt":"2026-03-13T05:45:25","guid":{"rendered":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=5089"},"modified":"2026-03-13T05:45:25","modified_gmt":"2026-03-13T05:45:25","slug":"kernel-edges-enable-autonomous-threat-intelligence","status":"publish","type":"post","link":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=5089","title":{"rendered":"Kernel Edges Enable Autonomous Threat Intelligence"},"content":{"rendered":"\n<p><strong>#<\/strong><strong> NerfEngine Dev Log \u2014 March 13, 2026<\/strong><\/p>\n\n\n\n<p>..you sit down to fix one thing and end up building something that feels genuinely different from what existed before. We started with a live Ollama instance running `gemma3` on an RTX 3060 and ended up with a fully autonomous network threat intelligence pipeline \u2014 kernel-to-LLM \u2014 running on a single AlmaLinux 10 server.<\/p>\n\n\n\n<p>Here&#8217;s what&#8217;s up.<\/p>\n\n\n\n<p><strong>&#8212;<\/strong><\/p>\n\n\n\n<p><strong>Me: <\/strong>Please create a Blog Post for our latest work here today at \/home\/spectrcyde\/NerfEngine\/Update Blog Post 03132026.md<\/p>\n\n\n\n<p><strong>ClaudeCode: <\/strong>\u25d0 The user wants a blog post about today&#8217;s work. 
Let me create an engaging technical blog post covering everything we implemented today:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>GraphOps Copilot (graphops_copilot.py) &#8211; autonomous LLM investigation agent<\/li>\n\n\n\n<li>Attractor loop fix (confidence threshold, plateau exit, best_confidence tracking)<\/li>\n\n\n\n<li>GraphOps Autopilot (graphops_autopilot.py) &#8211; 4-tier autonomous alert system<\/li>\n\n\n\n<li>Three upgrades from Gemma_Llama_MCP.md:\n<ul class=\"wp-block-list\">\n<li>Detector cooldowns<\/li>\n\n\n\n<li>HypergraphMetricsCollector<\/li>\n\n\n\n<li>GraphAttractorDetector (rotating botnet detection)<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Where We Left Off<\/strong><\/h2>\n\n\n\n<p>NerfEngine already had significant pieces in place:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Phase A<\/strong> \u2014 `TemporalFanInDetector`: detects VPN-rotating botnets by measuring *timing synchronisation* rather than IP reputation. The insight: rotating proxies randomise IPs, but they can&#8217;t hide their task scheduler ticking in unison. A botnet hitting a login endpoint from 183 unique IPs in a 200ms window is statistically implausible without central coordination.<\/li>\n\n\n\n<li><strong>Phase B<\/strong> \u2014 kernel-side hypergraph edge emission: `struct graph_edge_event` (56 bytes) emitted directly from the XDP path. The kernel computes 64-bit node IDs via FNV-1a over `__u32 src_ip`. Userspace never touches raw IP tuples on the hot path.<\/li>\n\n\n\n<li><strong>Stage 6 demo<\/strong> \u2014 all six scenarios validated clean: `DRIFT[SCANNER] delta=60`, `FAN-IN[BOTNET_COORDINATION] srcs=120 ip_H=6.91 timing_H=-0.00`.<\/li>\n<\/ul>\n\n\n\n<p>What we hadn&#8217;t built yet: the intelligence layer. 
The part that *explains* what the detectors found, and eventually starts asking its own questions.<\/p>\n\n\n\n<p><strong>&#8212;<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Part 1 \u2014 GraphOps Copilot (graphops_copilot.py, 1,380 lines)<\/strong><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The Problem We Solved First<\/strong><\/h3>\n\n\n\n<p>There&#8217;s a classic failure mode when you put an LLM in front of a graph query interface: the model returns the *example values from its own few-shot prompt* instead of the user&#8217;s actual entity.<\/p>\n\n\n\n<p>Ask &#8220;What do you know about 200.36.135.121?&#8221; and you get:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>labels.ip = \"10.0.0.1\"<\/code><\/pre>\n\n\n\n<p>That&#8217;s the prompt example. The model retrieved it and repeated it. This is the <strong>query-generation attractor loop bug<\/strong> \u2014 named for the mathematical notion of an attractor: a state the system collapses into and can&#8217;t escape without an external push.<\/p>\n\n\n\n<p>The fix required three pieces:<\/p>\n\n\n\n<p><strong>EntityExtractor<\/strong> \u2014 strict regex extraction that runs *before* any LLM call, pulling IPv4, CIDR, ASN, domain, port, and `node:0x&#8230;` node IDs out of free text. A compile-time `EXAMPLE_VALUES` set strips hallucinated placeholders before they ever reach the DSL. If the extracted entity matches `\"10.0.0.1\"` or `\"host:session:abc123\"`, it gets stripped. Clean or nothing.<\/p>\n\n\n\n<p><strong>InvestigativeDSLExecutor<\/strong> \u2014 a stateful verb interpreter with 9 verbs:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>FOCUS     \u2014 set investigation target\nWINDOW    \u2014 set time window (e.g. \"200ms\", \"5s\")\nFILTER    \u2014 prune results by condition\nANALYZE   \u2014 run a named analysis (degree_delta, fanin, temporal_sync&#8230;)\nCLUSTER   \u2014 group by timing or ASN\nTRACE     \u2014 path tracing between nodes\nEXPAND    \u2014 neighbor expansion\nSUMMARIZE \u2014 aggregate current focus\nASSESS    \u2014 produce threat confidence score<\/code><\/pre>\n\n\n\n<p>The executor is *stateful across steps*: `FOCUS 10.99.0.1` persists through subsequent verbs. The LLM doesn&#8217;t need to restate the target on every line.<\/p>\n\n\n\n<p><strong>GraphOpsAgent<\/strong> \u2014 the Ollama-backed reasoning loop. The system prompt explicitly lists the allowed verbs and says *&#8220;NEVER output example placeholder values&#8221;*. The model generates a JSON plan; the executor runs it; a second LLM call interprets the results and emits a confidence score. The loop continues until `confidence &gt;= 0.80`, an attractor is detected, or the score plateaus.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The Attractor Loop Fix \u2014 Live-Tested Against llama3.2:3b<\/strong><\/h3>\n\n\n\n<p>Running against live Ollama (`llama3.2:3b`, RTX 3060, 27\/27 layers on GPU), the IP query demonstrated the exact attractor behavior: an identical plan repeated for all 6 steps, confidence oscillating 0.80 -&gt; 0.80 -&gt; 0.80 -&gt; 0.50 -&gt; 0.80 -&gt; <strong>0.20<\/strong>, and an exit reporting 0.20 \u2014 the final-step jitter, not even the best value seen.<\/p>\n\n\n\n<p>Three fixes applied:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>seen_plans: set = set()          # frozensets of plan verbs\nbest_confidence = 0.0            # track max seen, not final jitter\n\n# Attractor break: same plan seen before + confidence good enough -&gt; stop\nif frozenset(plan) in seen_plans and best_confidence &gt;= PLATEAU_THRESHOLD:\n    confidence = best_confidence\n    break\n\n# Plateau exit: N consecutive non-improving steps &gt;= 0.65 -&gt; stop\nif confidence == prev_confidence:\n    plateau_count += 1\n    if plateau_count &gt;= PLATEAU_STEPS and best_confidence &gt;= PLATEAU_THRESHOLD:\n        break<\/code><\/pre>\n\n\n\n<p>After the fix, the same IP query exits after <strong>1 step<\/strong> at confidence=0.80:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>[GraphOpsAgent] step 0 plan: ['WINDOW 200ms', 'FOCUS ipv4',\n    \"FILTER ip_address = '200.36.135.121'\", 'ANALYZE fanin']\n[GraphOpsAgent] step 0 confidence=0.80 (best=0.80)\n[GraphOpsAgent] confidence threshold reached at step 0<\/code><\/pre>\n\n\n\n<p>Six round-trips reduced to one. The attractor loop is dead.<\/p>\n\n\n\n<p><strong>&#8212;<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Part 2 \u2014 GraphOps Autopilot (graphops_autopilot.py, 758 lines)<\/strong><\/h2>\n\n\n\n<p>The copilot answers questions. 
The autopilot *asks them on its own*.<\/p>\n\n\n\n<p>The design requirement: <strong>LLM only runs when genuinely needed<\/strong>. The classic failure mode of autonomous LLM systems is the alarm storm \u2014 the model fires on everything, floods the analyst queue with noise, and gets disabled within a week. The fix is deterministic anomaly detection *first*, LLM explanation *second*.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>kernel telemetry\n      |\nstatistical detectors   &lt;- DO NOT skip this\n      |\nLLM explanation layer   &lt;- only when detectors agree<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The Four-Tier System<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>TIER_OBSERVATION = 0.60   # internal log only\nTIER_SUGGESTION  = 0.70   # suggestion queue (analyst pull)\nTIER_ALERT       = 0.80   # auto-emit EventCard (no human needed)\nTIER_AUTONOMOUS  = 0.90   # wake InvestigatorAgent (LLM runs)<\/code><\/pre>\n\n\n\n<p><strong>AlertDedup<\/strong> \u2014 blake2b hash of `(source_type, pattern, sorted_node_ids, window_bucket)` with 30-second TTL. 
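A minimal sketch of how such a dedup key could be derived (the function name, field encoding, and the timestamp-to-bucket mapping are illustrative assumptions, not NerfEngine's actual code):

```python
import hashlib

def dedup_key(source_type, pattern, node_ids, ts_ms):
    # Illustrative sketch: blake2b over (source_type, pattern, sorted node IDs,
    # 1-second time bucket). Identical alert contents inside the same bucket
    # collide, so a 30s-TTL cache can suppress repeats.
    bucket = ts_ms // 1000  # coarse window bucket
    material = "|".join([
        source_type,
        pattern,
        ",".join(format(n, "#x") for n in sorted(node_ids)),
        str(bucket),
    ])
    return hashlib.blake2b(material.encode(), digest_size=16).hexdigest()

# Node order must not matter: IDs are sorted before hashing.
k1 = dedup_key("fanin", "botnet_coordination", [3, 1, 2], ts_ms=1500)
k2 = dedup_key("fanin", "botnet_coordination", [1, 2, 3], ts_ms=1999)  # same 1s bucket
```

Here k1 == k2: same detector, same pattern, same node set, same second. A genuinely new event, with a different bucket or node set, hashes differently and passes through.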
`window_ms \/\/ 1000` buckets absorb inter-detector jitter without suppressing genuinely new events.<\/p>\n\n\n\n<p><strong>EventCard<\/strong> \u2014 the intelligence briefing object analysts actually want:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>GraphOps Alert\n----------------------------------\nPattern           botnet_coordination\nNodes             94\nWindow            180ms\nConfidence        0.84\nTemporal Sync     0.82\nIP Entropy        6.91\n\nSuggested Actions\n  &gt; Cluster timing\n  &gt; Analyze ASN diversity\n  &gt; Trace infrastructure paths\n  &gt; Compare with previous clusters<\/code><\/pre>\n\n\n\n<p><strong>SentinelLoop<\/strong> \u2014 subscribes to detector callbacks via `.subscribe(cb)`. Fully push-driven. At 20-60 million edges\/sec, a polling loop would be a CPU disaster. The detector fires, the callback runs, the router decides. The 200ms heartbeat thread keeps the process alive and does nothing else.<\/p>\n\n\n\n<p><strong>InvestigatorAgent<\/strong> \u2014 Tier 3 only. Lazy-loads `GraphOpsAgent`. Builds a context-aware question from the `EventCard`, then runs the full investigation loop. A scanner card generates: *&#8220;Node X shows a degree delta of 60 connections in the last window. Pattern: scanner. Is this a port scan or lateral movement?&#8221;*<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Scoring Functions<\/strong><\/h3>\n\n\n\n<p>The scoring encodes domain knowledge directly. For `FanInAlert`:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>base = 0.92 if verdict == \"botnet_coordination\" else 0.72\npenalty = min(0.15, timing_entropy * 0.10)\nscore = base - penalty<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Botnet with timing_entropy=0.01 (near-perfect sync): score=0.919 -&gt; <strong>Tier 3 autonomous investigation<\/strong><\/li>\n\n\n\n<li>Fan-in spike with timing_entropy=2.0 (unsynchronised): score=0.57 -&gt; <strong>discarded<\/strong><\/li>\n<\/ul>\n\n\n\n<p>That second case is intentional. Unsynchronised fan-in is a CDN, a flash crowd, or a viral tweet. Discarding it before it ever touches the dedup layer is correct.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Self-Test Results: 24\/24<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>[PASS] EventCard format_card rendering\n[PASS] AlertDedup TTL, key collision, expiry\n[PASS] TierRouter all five thresholds\n[PASS] Scoring: drift passthrough, botnet -&gt; Tier 3, spike -&gt; discard\n[PASS] SentinelLoop: scanner queued, botnet dispatched, dedup suppression\n[PASS] GraphOpsAutopilot start\/stop lifecycle<\/code><\/pre>\n\n\n\n<p><strong>&#8212;<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Part 3 \u2014 Three Upgrades from Code Review<\/strong><\/h2>\n\n\n\n<p>After we reviewed the autopilot against the design doc, the review came back with recommendations \u2014 and they were genuinely useful. Here&#8217;s what it recommended and what we built.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Upgrade 1 \u2014 Detector Cooldowns<\/strong><\/h3>\n\n\n\n<p>A noisy detector can still produce high-frequency alerts even with dedup, because each new window produces new node IDs that hash differently. 
Solution: a per-`(source_type, pattern)` cooldown timestamp.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>DETECTOR_COOLDOWN = 1.0  # seconds\n\ncooldown_key = (source_type, pattern)\nif time.time() - self._last_emit.get(cooldown_key, 0.0) &lt; DETECTOR_COOLDOWN:\n    return  # suppress burst\nself._last_emit[cooldown_key] = time.time()<\/code><\/pre>\n\n\n\n<p>One line of state and one comparison prevents alert bursts entirely.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Upgrade 2 \u2014 HypergraphMetricsCollector<\/strong><\/h3>\n\n\n\n<p>A shared `HypergraphSnapshot` computed once per window by a background thread:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>@dataclass\nclass HypergraphSnapshot:\n    cluster_density:   float  # edges \/ (nodes*(nodes-1))\n    edge_churn:        float  # fraction of edges not seen in prev window\n    temporal_sync:     float  # 1 - normalised inter-arrival timing entropy\n    asn_entropy:       float  # H(src_node_id) across all edges\n    top_fan_in_rate:   float  # max fan-in per dst in this window\n    active_attractors: int    # live GraphAttractor cluster count<\/code><\/pre>\n\n\n\n<p>All three detectors can reference this snapshot instead of independently recomputing the same Shannon entropy calculations. At high edge rates this matters.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Upgrade 3 \u2014 GraphAttractorDetector (The Important One)<\/strong><\/h3>\n\n\n\n<p>Fan-in detection and degree drift detection share a blind spot: they key on *nodes*. A VPN-rotating botnet changes its source nodes \u2014 IPs, ASNs, geos \u2014 constantly. By the time the detector builds state on one IP, it&#8217;s already gone.<\/p>\n\n\n\n<p><strong>Graph attractors<\/strong> key on *edge structure* instead. The observation: rotating botnets change their source nodes, but they all hit the *same destination infrastructure*. The `edge_hash` is stable even when every `node_a` rotates.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Tracks edge_hash -&gt; {src_node_ids} over 5-second sliding windows\n# Fires AttractorAlert when:\n#   cluster_size           &gt; 40    (enough distinct sources)\n#   infrastructure_entropy &gt; 0.8   (sources genuinely diverse, not one IP looping)\n#   temporal_sync          &gt; 0.75  (coordinated timing)\n\nverdict = \"rotating_botnet\"<\/code><\/pre>\n\n\n\n<p>To defeat this detector, an attacker would need to either:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Use a *different* target on every connection \u2014 which defeats the botnet&#8217;s actual objective<\/li>\n\n\n\n<li>Desynchronise their task scheduling \u2014 which defeats coordination<\/li>\n<\/ol>\n\n\n\n<p>Neither is viable. 
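To make the firing condition concrete, here is a hedged sketch of the check (the entropy helper and function names are illustrative, not the detector's real internals):

```python
import math
from collections import Counter

def shannon_entropy(items):
    # H in bits over the empirical distribution of the items
    counts = Counter(items)
    total = len(items)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_rotating_botnet(src_node_ids, temporal_sync):
    # Mirrors the thresholds above: enough distinct sources, genuinely
    # diverse infrastructure, and tightly coordinated timing.
    cluster_size = len(set(src_node_ids))
    infrastructure_entropy = shannon_entropy(src_node_ids)
    return (cluster_size > 40
            and infrastructure_entropy > 0.8
            and temporal_sync > 0.75)

# 94 distinct rotating sources with tight timing sync trips the detector;
# one source ID looping 100 times (entropy 0) does not.
hit = looks_like_rotating_botnet(list(range(94)), temporal_sync=0.82)
miss = looks_like_rotating_botnet([7] * 100, temporal_sync=0.82)
```

The entropy gate is what separates a real rotating cluster from a single noisy host: 94 distinct sources give roughly 6.6 bits, while one looping IP gives zero.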
The attractor is the signal, and it&#8217;s extremely robust.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>ATTRACTOR [rotating_botnet] edge_hash=0x&#8230; cluster=94\ninfra_H=6.91 t_sync=0.82 score=0.917  -&gt;  Tier 3<\/code><\/pre>\n\n\n\n<p><strong>&#8212;<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Part 4 \u2014 BPF Edge Aggregation (bpf_capture.c + capture.go)<\/strong><\/h2>\n\n\n\n<p>The performance math:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Raw packets:     100M\/sec x 100B\/event  = 10 GB\/sec telemetry\nPhase B edges:   100M\/sec x 56B\/event   = 5.6 GB\/sec\nAggregated:      ~2M\/sec  x 56B\/event   = 112 MB\/sec<\/code><\/pre>\n\n\n\n<p>The edge aggregation trick reduces telemetry volume by ~98% before any data leaves the kernel.<\/p>\n\n\n\n<p><strong>Kernel side<\/strong> \u2014 a new `edge_agg` LRU hash map (128K entries, keyed by `edge_hash`):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>struct edge_counter {\n    __u64 count;     \/\/ packet count for this edge_hash\n    __u64 bytes;     \/\/ total bytes\n    __u64 first_ts;  \/\/ ktime of first packet this flush window\n    __u64 last_ts;   \/\/ ktime of most recent packet\n};<\/code><\/pre>\n\n\n\n<p>On every packet in the XDP path: if entry exists, atomic_add count and bytes. If not, insert. That&#8217;s all. 
No ringbuf write, no memory copy, no wakeup to userland.<\/p>\n\n\n\n<p><strong>Go relay side<\/strong> \u2014 a new 50ms ticker goroutine added to `capture.go`:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>ticker := time.NewTicker(50 * time.Millisecond)\n\/\/ Every 50ms: iterate edge_agg map, emit frame 0x05, delete entries<\/code><\/pre>\n\n\n\n<p>Fifty milliseconds of accumulation. At 100M pkt\/sec, that&#8217;s 5 million packets compressed into however many distinct edge_hash values appear in that window \u2014 typically 1-5% of packet count.<\/p>\n\n\n\n<p><strong>&#8212;<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Part 5 \u2014 AlmaLinux 10 Kernel Tuning (scripts\/tune_kernel.sh, 238 lines)<\/strong><\/h2>\n\n\n\n<p>A properly tuned kernel for XDP\/eBPF looks very different from a default AlmaLinux install. The biggest wins:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Socket buffer \u2014 handle 128MB bursts without dropping\nnet.core.rmem_max = 134217728\n\n# NIC -&gt; kernel queue depth \u2014 don't drop at the NIC\nnet.core.netdev_max_backlog = 300000\n\n# Busy-poll \u2014 reduces latency on the XDP receive path\nnet.core.busy_read = 50    # us spin before sleeping\nnet.core.busy_poll = 50    # us spin on NIC before sleeping\n\n# BPF JIT \u2014 without this BPF runs interpreted, 3-5x slower\nnet.core.bpf_jit_enable = 1\nnet.core.bpf_jit_harden = 0   # disable hardening in prod for speed\n\n# Huge pages \u2014 ring buffer performance headroom\nvm.nr_hugepages = 512         # 1 GB of 2MB pages<\/code><\/pre>\n\n\n\n<p>The script also handles NIC IRQ affinity pinning (`--nic &lt;iface&gt;`): one XDP worker per RX queue, NIC queue N -&gt; CPU N. 
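A hedged sketch of what that pinning loop might look like, shown as a dry run (the IRQ-name match against /proc/interrupts is an assumption; driver naming varies, which is why the script takes the interface explicitly):

```shell
#!/usr/bin/env bash
# Sketch of "NIC queue N -> CPU N" IRQ pinning. Dry run: prints the writes
# instead of performing them. Mask math: CPU n corresponds to bit n.
cpu_mask() { printf '%x' $((1 << $1)); }   # hex affinity bitmask selecting CPU $1

pin_queues() {
    local iface="$1" n=0 irq
    # Assumes RX-queue IRQ lines mention the interface, e.g. "eth0-rx-3".
    for irq in $(awk -F: -v i="$iface" '$0 ~ i { gsub(/ /, "", $1); print $1 }' /proc/interrupts); do
        echo "echo $(cpu_mask "$n") > /proc/irq/$irq/smp_affinity"
        n=$((n + 1))
    done
}

[ -r /proc/interrupts ] && pin_queues "${1:-eth0}"
```

Dropping the inner echo turns the dry run into the real pinning, which only sticks while nothing else is rewriting the affinity masks.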
Disables `irqbalance` so the manual pins hold. Sets the CPU performance governor. `--dry-run` and `--revert` flags. Idempotent \u2014 safe to run repeatedly.<\/p>\n\n\n\n<p><strong>&#8212;<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Full Stack After Today<\/strong><\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>+----------------------------------------------------------------+\n|             AlmaLinux 10 \u00b7 RTX 3060 \u00b7 62.8 GB                  |\n+----------------------------------------------------------------+\n|  XDP \/ eBPF (bpf_capture.c)                                    |\n|    +-- flow_table     LRU hash 1M flows                        |\n|    +-- edge_agg       LRU hash 128K entries, 50ms flush  [NEW] |\n|    +-- rb             ringbuf -&gt; 0x00 FlowCore                 |\n|    +-- flow_events    ringbuf -&gt; 0x02 FlowEnd                  |\n|    +-- edge_ticks     ringbuf -&gt; 0x03 EdgeTick                 |\n|    +-- graph_edges    ringbuf -&gt; 0x04 GraphEdge (OPEN\/CLOSE)   |\n+----------------------------------------------------------------+\n|  Go Relay (capture.go)                                         |\n|    5 ringbuf drain goroutines                                  |\n|    1 edge_agg flush ticker (50ms) -&gt; 0x05 EdgeAgg        [NEW] |\n|    WebSocket broadcast                                         |\n+----------------------------------------------------------------+\n|  Python Pipeline (stream_manager.py)                           |\n|    topology_drift.py                                           |\n|    +-- TopologyDriftDetector       scanner \/ aggregator \/ lateral |\n|    +-- TemporalFanInDetector       botnet timing sync          |\n|    +-- GraphAttractorDetector      rotating botnet       [NEW] |\n|    +-- HypergraphMetricsCollector  shared window metrics [NEW] |\n+----------------------------------------------------------------+\n|  GraphOps Autopilot (graphops_autopilot.py)              [NEW] |\n|    SentinelLoop   push-driven, 3 detector subscriptions        |\n|    AlertDedup     blake2b hash \u00b7 30s TTL                       |\n|    TierRouter     0.60 \/ 0.70 \/ 0.80 \/ 0.90                    |\n|    EventCard      analyst briefing object + format_card()      |\n|    InvestigatorAgent   Tier 3 -&gt; GraphOpsAgent                 |\n+----------------------------------------------------------------+\n|  GraphOps Copilot (graphops_copilot.py)                  [NEW] |\n|    EntityExtractor           regex + EXAMPLE_VALUES guard      |\n|    InvestigativeDSLExecutor  9 verbs, stateful                 |\n|    GraphOpsAgent             attractor-proof loop \u00b7 llama3.2:3b|\n+----------------------------------------------------------------+\n|  MCP Server \u2014 11 tools total                                   |\n|    graphops_investigate       natural language -&gt; report       |\n|    graphops_dsl_exec          raw DSL plan execution           |\n|    graphops_entity_parse      entity extraction                |\n|    graphops_autopilot_status  runtime stats              [NEW] |\n|    graphops_suggestion_queue  Tier 1 queue               [NEW] |\n|    graphops_observation_log   Tier 0 log                 [NEW] |\n|    graphops_format_card       render event card          [NEW] |\n+----------------------------------------------------------------+<\/code><\/pre>\n\n\n\n<p><strong>&#8212;<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>By The Numbers<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table><thead><tr><th>File<\/th><th>Lines<\/th><th>Status<\/th><\/tr><\/thead><tbody><tr><td>graphops_copilot.py<\/td><td>1,380<\/td><td>5\/5 tests pass \u00b7 live Ollama validated<\/td><\/tr><tr><td>graphops_autopilot.py<\/td><td>758<\/td><td>24\/24 tests pass<\/td><\/tr><tr><td>topology_drift.py<\/td><td>912<\/td><td>3 new classes \u00b7 smoke tests pass<\/td><\/tr><tr><td>bpf_capture.c<\/td><td>426<\/td><td>pending make bpf<\/td><\/tr><tr><td>capture.go<\/td><td>1,000<\/td><td>pending go build<\/td><\/tr><tr><td>scripts\/tune_kernel.sh<\/td><td>238<\/td><td>syntax valid<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><strong>~2,800 lines of new code today across 6 files.<\/strong><\/p>\n\n\n\n<p>All Python tests pass. BPF\/Go changes are pending a hardware compile.<\/p>\n\n\n\n<p><strong>&#8212;<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What&#8217;s Next<\/strong><\/h2>\n\n\n\n<p><strong>Immediate<\/strong> \u2014 a `FRAME_TAG_EDGE_AGG = 0x05` handler in `stream_manager.py` and an `EdgeAggEvent` in `FlowCore.py`. Same pattern as all the other frame types.<\/p>\n\n\n\n<p><strong>Interesting<\/strong> \u2014 Wire `HypergraphSnapshot` into `GraphAttractorDetector` so the shared metrics precomputation replaces the attractor&#8217;s own per-window entropy calls.<\/p>\n\n\n\n<p><strong>Exciting<\/strong> \u2014 In-kernel botnet detection. Move fan-in counters and temporal burst tracking into BPF maps. Emit `BOTNET_CLUSTER` events directly from the XDP path. The kernel becomes the first anomaly detector, and userland receives only high-value signals. That&#8217;s the shape of the architecture large network operators run for abuse detection at hyperscale.<\/p>\n\n\n\n<p><strong>Wild<\/strong> \u2014 Hypergraph spectral fingerprinting. Compute eigenvalue signatures of traffic clusters. This could detect VPN-rotating botnets even when every node and IP changes every minute, by analysing the structural fingerprint of the traffic graph rather than any individual node property.<\/p>\n\n\n\n<p><strong>&#8212;<\/strong><\/p>\n\n\n\n<p>*NerfEngine is a personal research project combining eBPF\/XDP network telemetry, hypergraph analysis, and autonomous LLM-backed threat detection. 
The architecture described here \u2014 kernel-side graph primitive emission, timing-entropy botnet detection, graph attractor analysis, and attractor-proof LLM agent loops \u2014 represents genuinely novel combinations of techniques that don&#8217;t appear together elsewhere in the open literature. Building it has been one of the most interesting engineering exercises I&#8217;ve done.*<\/p>\n\n\n\n<p>*More updates as the build continues. The kernel is listening.*<\/p>\n\n\n\n<p>Blog post written \u2014 380 lines at \/home\/spectrcyde\/NerfEngine\/Update Blog Post 03132026.md.<\/p>\n\n\n\n<p>It covers:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>GraphOps Copilot \u2014 the attractor loop bug, the fix, live test results against llama3.2:3b<\/li>\n\n\n\n<li>GraphOps Autopilot \u2014 4-tier system, AlertDedup, SentinelLoop, scoring logic, 24\/24 pass<\/li>\n\n\n\n<li>Three upgrades \u2014 cooldowns, HypergraphMetricsCollector, GraphAttractorDetector (the rotating-botnet detector)<\/li>\n\n\n\n<li>BPF edge aggregation \u2014 98% telemetry reduction, kernel + Go relay design<\/li>\n\n\n\n<li>AlmaLinux 10 tuning script \u2014 key sysctl values, IRQ pinning, JIT config<\/li>\n\n\n\n<li>Full stack diagram, by-the-numbers table, and what&#8217;s next<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Extra Special Thanks to ChatGPT!!!<\/strong><\/h2>\n\n\n\n<p><a href=\"https:\/\/chatgpt.com\/share\/69b3a327-a4c4-8011-a9d0-d82cfc8804ed\">https:\/\/chatgpt.com\/share\/69b3a327-a4c4-8011-a9d0-d82cfc8804ed<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p># NerfEngine Dev Log \u2014 March 13, 2026 ..you sit down to fix one thing and end up building something that feels genuinely different from what existed before. 
We started with a live Ollama instance running `gemma3` on an RTX 3060 and ended up with a fully autonomous network threat intelligence pipeline \u2014 kernel-to-LLM \u2014&hellip;&nbsp;<a href=\"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=5089\" rel=\"bookmark\"><span class=\"screen-reader-text\">Kernel Edges Enable Autonomous Threat Intelligence<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":79,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"categories":[10,7],"tags":[],"class_list":["post-5089","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-signal_scythe","category-the-truben-show"],"_links":{"self":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/5089","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5089"}],"version-history":[{"count":1,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/5089\/revisions"}],"predecessor-version":[{"id":5090,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/5089\/revisions\/5090"}],"wp:
featuredmedia":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/media\/79"}],"wp:attachment":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5089"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5089"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5089"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}