{"id":1204,"date":"2025-07-07T09:01:38","date_gmt":"2025-07-07T09:01:38","guid":{"rendered":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=1204"},"modified":"2025-07-07T17:06:45","modified_gmt":"2025-07-07T17:06:45","slug":"rf-signal-models-visualization","status":"publish","type":"post","link":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=1204","title":{"rendered":"RF Signal Models &amp; Visualization"},"content":{"rendered":"\n<figure class=\"wp-block-audio\"><audio controls src=\"http:\/\/172-234-197-23.ip.linodeusercontent.com\/wp-content\/uploads\/2025\/07\/RF-Signal-Models-and-Visualization.mp3\"><\/audio><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>PODCAST: SCYTHE <strong>Radio Frequency (RF) Signal Intelligence system<\/strong>, focusing on&nbsp;<strong>machine learning models for signal analysis<\/strong>&nbsp;and&nbsp;<strong>visualization tools<\/strong>. Specifically, the various&nbsp;<strong>neural network architectures<\/strong>, including&nbsp;<strong>Convolutional Neural Networks (CNNs)<\/strong>,&nbsp;<strong>Long Short-Term Memory (LSTM) networks<\/strong>,&nbsp;<strong>Residual Networks (ResNet)<\/strong>, and&nbsp;<strong>Transformer models<\/strong>, designed for classifying different types of RF signals. It also presents a&nbsp;<strong>hierarchical classification approach<\/strong>&nbsp;that uses a general model followed by specialized models for more precise identification. 
Furthermore, the source code details a&nbsp;<strong>LatentAggregator<\/strong>&nbsp;for combining spectral and packet metadata, and a comprehensive&nbsp;<strong>SignalVisualizer<\/strong>&nbsp;class for plotting RF spectrums, waterfall displays, modulation characteristics, and signal features.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img data-opt-id=953986358  fetchpriority=\"high\" decoding=\"async\" width=\"789\" height=\"797\" src=\"https:\/\/ml6vmqguit1n.i.optimole.com\/w:auto\/h:auto\/q:mauto\/f:best\/https:\/\/172-234-197-23.ip.linodeusercontent.com\/wp-content\/uploads\/2025\/07\/image-143.png\" alt=\"\" class=\"wp-image-1206\" srcset=\"https:\/\/ml6vmqguit1n.i.optimole.com\/w:789\/h:797\/q:mauto\/f:best\/https:\/\/172-234-197-23.ip.linodeusercontent.com\/wp-content\/uploads\/2025\/07\/image-143.png 789w, https:\/\/ml6vmqguit1n.i.optimole.com\/w:297\/h:300\/q:mauto\/f:best\/https:\/\/172-234-197-23.ip.linodeusercontent.com\/wp-content\/uploads\/2025\/07\/image-143.png 297w, https:\/\/ml6vmqguit1n.i.optimole.com\/w:768\/h:776\/q:mauto\/f:best\/https:\/\/172-234-197-23.ip.linodeusercontent.com\/wp-content\/uploads\/2025\/07\/image-143.png 768w\" sizes=\"(max-width: 789px) 100vw, 789px\" \/><\/figure>\n\n\n\n<p>The sources describe several machine learning (ML) models designed for analyzing and classifying RF signals, each employing a distinct architecture tailored to different aspects of signal data.<\/p>\n\n\n\n<p>Here&#8217;s how each model specifically analyzes and classifies RF signals:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>SpectralCNN (Flexible CNN model for classifying spectral images of RF signals)<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>Input<\/strong>: This model is designed to classify <strong>spectral images of RF signals<\/strong>. 
The input <code>x<\/code> to its <code>forward<\/code> method is processed by 1D convolutional layers.<\/li>\n\n\n\n<li><strong>Analysis Process<\/strong>:\n<ol class=\"wp-block-list\">\n<li>The signal data first passes through three sequential blocks, each consisting of a <strong>1D Convolutional layer<\/strong> (<code>conv1<\/code>, <code>conv2<\/code>, <code>conv3<\/code>), followed by <strong>Batch Normalization<\/strong> (<code>bn1<\/code>, <code>bn2<\/code>, <code>bn3<\/code>), a <strong>ReLU activation function<\/strong>, and finally a <strong>Max Pooling layer<\/strong> (<code>pool<\/code>). This process progressively extracts features and reduces the dimensionality of the spectral data.<\/li>\n\n\n\n<li>After these convolutional and pooling operations, the data is <strong>flattened<\/strong> (<code>x.view(x.size(0), -1)<\/code>) to prepare it for the fully connected layers.<\/li>\n\n\n\n<li>The flattened features then go through a <strong>fully connected layer<\/strong> (<code>fc1<\/code>) with a ReLU activation, followed by a <strong>Dropout layer<\/strong> (<code>dropout<\/code>) for regularization.<\/li>\n<\/ol>\n<\/li>\n\n\n\n<li><strong>Classification<\/strong>: The final output is generated by a second <strong>fully connected layer<\/strong> (<code>fc2<\/code>), which projects the processed features onto the <code>num_classes<\/code> dimensions, yielding classification scores.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>SignalLSTM (Flexible LSTM model for classifying time-series RF signal patterns)<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>Input<\/strong>: This model is built for classifying <strong>time-series RF signal patterns<\/strong>. 
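The SpectralCNN pipeline described above (three conv/batch-norm/ReLU/pool blocks, flatten, fc1 with dropout, fc2) can be sketched in PyTorch. Channel counts, kernel sizes, and the input length below are illustrative assumptions; the source excerpt does not list them.

```python
import torch
import torch.nn as nn

class SpectralCNNSketch(nn.Module):
    """Minimal sketch of the SpectralCNN pipeline: three conv/bn/relu/pool
    blocks, flatten, fc1 + dropout, fc2. Layer sizes are assumed, not sourced."""
    def __init__(self, num_classes: int = 11, input_length: int = 256):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 16, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm1d(16)
        self.conv2 = nn.Conv1d(16, 32, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm1d(32)
        self.conv3 = nn.Conv1d(32, 64, kernel_size=3, padding=1)
        self.bn3 = nn.BatchNorm1d(64)
        self.pool = nn.MaxPool1d(2)
        self.dropout = nn.Dropout(0.5)
        # After three /2 poolings, the sequence length is input_length // 8.
        self.fc1 = nn.Linear(64 * (input_length // 8), 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):  # x: [batch, 1, input_length]
        x = self.pool(torch.relu(self.bn1(self.conv1(x))))
        x = self.pool(torch.relu(self.bn2(self.conv2(x))))
        x = self.pool(torch.relu(self.bn3(self.conv3(x))))
        x = x.view(x.size(0), -1)               # flatten
        x = self.dropout(torch.relu(self.fc1(x)))
        return self.fc2(x)                      # logits over num_classes
```

A batch of four 256-bin spectra, `SpectralCNNSketch()(torch.randn(4, 1, 256))`, yields a `[4, 11]` tensor of class logits.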
The input <code>x<\/code> is expected to have a shape of <code>[batch_size, seq_len, input_size]<\/code>, where <code>input_size<\/code> defaults to 2 (likely representing I\/Q components).<\/li>\n\n\n\n<li><strong>Analysis Process<\/strong>:\n<ol class=\"wp-block-list\">\n<li>The time-series input is fed into an <strong>LSTM layer<\/strong> (<code>lstm<\/code>), which processes the sequence data, capturing temporal dependencies and patterns.<\/li>\n\n\n\n<li>The output of the LSTM then passes through an <strong>Attention mechanism<\/strong> (<code>attention<\/code>). This mechanism computes <code>attention_weights<\/code> using a softmax function, allowing the model to focus on the most relevant parts of the time series.<\/li>\n\n\n\n<li>An <code>attention_output<\/code> is generated by taking a weighted sum of the LSTM&#8217;s outputs based on these attention weights. This effectively aggregates information from the sequence, emphasizing important time steps.<\/li>\n\n\n\n<li>This aggregated output then goes through a <strong>fully connected layer<\/strong> (<code>fc1<\/code>) with ReLU activation and a <strong>Dropout layer<\/strong> (<code>dropout<\/code>).<\/li>\n<\/ol>\n<\/li>\n\n\n\n<li><strong>Classification<\/strong>: A final <strong>fully connected layer<\/strong> (<code>fc2<\/code>) outputs the classification scores across the <code>num_classes<\/code>.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>TemporalCNN (Temporal 1D CNN for classifying time-domain RF signals)<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>Input<\/strong>: This model is designed for <strong>time-domain RF signals<\/strong>. It expects input <code>x<\/code> to be in the <code>[batch, channels, seq_len]<\/code> format, but it can <strong>automatically transpose<\/strong> the input if it&#8217;s provided as <code>[batch, seq_len, channels]<\/code>. 
<code>input_channels<\/code> typically defaults to 2 (e.g., I\/Q data).<\/li>\n\n\n\n<li><strong>Analysis Process<\/strong>:\n<ol class=\"wp-block-list\">\n<li>The (potentially transposed) input <code>x<\/code> undergoes a series of three convolutional blocks, similar to <code>SpectralCNN<\/code>. Each block consists of a <strong>1D Convolutional layer<\/strong> (<code>conv1<\/code>, <code>conv2<\/code>, <code>conv3<\/code>), <strong>Batch Normalization<\/strong> (<code>bn1<\/code>, <code>bn2<\/code>, <code>bn3<\/code>), a <strong>ReLU activation function<\/strong>, and a <strong>Max Pooling layer<\/strong> (<code>pool<\/code>). This extracts features from the temporal signal data.<\/li>\n\n\n\n<li>After feature extraction, the output is <strong>flattened<\/strong> to a 1D vector (<code>x.view(x.size(0), -1)<\/code>).<\/li>\n\n\n\n<li>This flattened representation is then fed into a <strong>fully connected layer<\/strong> (<code>fc1<\/code>) with ReLU activation, followed by a <strong>Dropout layer<\/strong> (<code>dropout<\/code>).<\/li>\n<\/ol>\n<\/li>\n\n\n\n<li><strong>Classification<\/strong>: The final output, representing the classification scores, is produced by a second <strong>fully connected layer<\/strong> (<code>fc2<\/code>).<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>ResNetRF (ResNet-style model for RF signal classification)<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>Input<\/strong>: This model is generally for <strong>RF signal classification<\/strong>, implying it processes 1D signal data (e.g., spectral or temporal) with an input channel of 1.<\/li>\n\n\n\n<li><strong>Analysis Process<\/strong>:\n<ol class=\"wp-block-list\">\n<li>The input first goes through an <strong>initial convolutional layer<\/strong> (<code>conv1<\/code>), <strong>Batch Normalization<\/strong> (<code>bn1<\/code>), ReLU activation, and a <strong>Max Pooling layer<\/strong> (<code>maxpool<\/code>).<\/li>\n\n\n\n<li>The core of the ResNetRF model lies in its 
<strong>Residual Blocks<\/strong> (<code>_make_residual_block<\/code> function creates these layers). Each <code>ResidualBlock<\/code> contains two 1D convolutional layers, each followed by batch normalization and ReLU activation, but crucially includes a <strong>shortcut connection<\/strong> (<code>identity<\/code>) that adds the input of the block directly to its output. This structure helps in learning deeper features by alleviating the vanishing gradient problem in deep networks.<\/li>\n\n\n\n<li>After passing through multiple layers of these residual blocks (<code>layer1<\/code>, <code>layer2<\/code>, <code>layer3<\/code>), the features are aggregated using <strong>Adaptive Average Pooling<\/strong> (<code>avgpool<\/code>), which reduces each feature map to a single value.<\/li>\n\n\n\n<li>The output is then flattened.<\/li>\n<\/ol>\n<\/li>\n\n\n\n<li><strong>Classification<\/strong>: The final classification scores are generated by a <strong>fully connected layer<\/strong> (<code>fc<\/code>).<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>SignalTransformer (Transformer model for RF signal classification)<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>Input<\/strong>: This model uses a Transformer architecture for RF signal classification, taking input <code>x<\/code> with shape <code>[batch, seq_len, features]<\/code>. The <code>input_dim<\/code> (number of features per time step\/token) defaults to 258.<\/li>\n\n\n\n<li><strong>Analysis Process<\/strong>:\n<ol class=\"wp-block-list\">\n<li>The input features are first transformed into a higher-dimensional representation by an <strong>embedding layer<\/strong> (<code>embedding<\/code>).<\/li>\n\n\n\n<li><strong>Positional Encoding<\/strong> (<code>pos_encoder<\/code>) is then added to these embeddings. 
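One standard choice for such an encoding is the sinusoidal scheme; this is an assumption here, since the source does not show which scheme <code>pos_encoder</code> implements:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Classic sin/cos positional encoding (Vaswani et al.); assumed, since
    the source does not specify the pos_encoder scheme. d_model must be even."""
    positions = np.arange(seq_len)[:, None]       # [seq_len, 1]
    dims = np.arange(d_model // 2)[None, :]       # [1, d_model/2]
    angles = positions / (10000.0 ** (2 * dims / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                  # even dims: sine
    pe[:, 1::2] = np.cos(angles)                  # odd dims: cosine
    return pe  # added element-wise to the [seq_len, d_model] embeddings
```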
This is critical for Transformer models as it injects information about the relative or absolute position of the features within the sequence, which is otherwise lost due to the permutation-invariant nature of the attention mechanism.<\/li>\n\n\n\n<li>The enriched embeddings are processed by a <strong>Transformer Encoder<\/strong> (<code>transformer_encoder<\/code>), which consists of multiple <code>TransformerEncoderLayer<\/code> instances. These layers utilize <strong>multi-head self-attention mechanisms<\/strong> to weigh the importance of different parts of the input sequence when processing each element, capturing long-range dependencies effectively.<\/li>\n\n\n\n<li>After the Transformer encoder, <strong>Global Pooling<\/strong> (mean over the sequence dimension) is applied to aggregate the information from all tokens in the sequence into a fixed-size representation.<\/li>\n<\/ol>\n<\/li>\n\n\n\n<li><strong>Classification<\/strong>: A final <strong>fully connected layer<\/strong> (<code>fc<\/code>) then maps this aggregated representation to the <code>num_classes<\/code> to produce the classification scores.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p>In all these models, once the final layer produces raw scores (logits), these are typically converted into <strong>probabilities<\/strong> using a <strong>Softmax function<\/strong>. The class with the highest probability is then chosen as the final <strong>classification<\/strong>, and its probability represents the <strong>confidence<\/strong> of that classification. 
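The logits-to-confidence step, and the confidence gate the hierarchical classifier applies on top of it, can be sketched in plain Python; the 0.8 threshold and the specialist lookup are illustrative assumptions:

```python
import math

def softmax(logits):
    """Convert raw scores (logits) to probabilities."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels):
    """Return (predicted class, confidence) from raw scores."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

def hierarchical_classify(x, general_model, specialists, labels, threshold=0.8):
    """Sketch of the two-stage scheme: general model first; if confident,
    hand off to a specialized model for refinement (threshold is assumed)."""
    label, conf = classify(general_model(x), labels)
    if conf >= threshold and label in specialists:
        return classify(specialists[label](x), labels)
    return label, conf
```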
The <code>HierarchicalMLClassifier<\/code> extends this by initially classifying with a general model and then, if confidence is high, applying a more specialized model for refinement.<\/p>\n\n\n\n<p>The primary methods for visualizing and interpreting complex RF signal data, as described in the sources, involve generating various types of plots and extracting key characteristics, as well as employing specialized models for anomaly detection.<\/p>\n\n\n\n<p>Here are the key methods:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Spectrum Plotting<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>What it visualizes<\/strong>: This method plots a <strong>spectrum with frequencies against power in dB<\/strong>. It can include optional peak markers and annotations for specific frequency bands. Frequencies are often converted to MHz for readability.<\/li>\n\n\n\n<li><strong>How it aids interpretation<\/strong>: It helps in <strong>identifying the power distribution across different frequencies<\/strong>. Users can see <strong>peaks<\/strong>, which often indicate the presence of a signal, and interpret their power levels. Annotations can highlight common frequency bands (e.g., FM Radio, VHF Amateur, GPS) to provide context for the observed signals.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Waterfall Displays<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>What it visualizes<\/strong>: A waterfall display shows a <strong>series of spectrum lines over time<\/strong>, with the latest spectrum typically at the top. It uses <strong>color intensity to represent signal power<\/strong>, often with custom colormaps like &#8220;rf_quantum&#8221; or &#8220;thermal&#8221;.<\/li>\n\n\n\n<li><strong>How it aids interpretation<\/strong>: This visualization is crucial for observing <strong>how signals change over time<\/strong>. 
It allows for the identification of <strong>intermittent signals, frequency hopping, or signal fading<\/strong>, providing a dynamic view of the RF environment. The <code>update_waterfall<\/code> method continuously adds new spectrum lines to build this temporal representation.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Modulation-Specific Visualizations<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>What it visualizes<\/strong>: These plots are tailored to reveal characteristics of the signal&#8217;s modulation type. They extract and display <strong>I (in-phase) and Q (quadrature) components, amplitude, and phase information<\/strong> from complex IQ data.<\/li>\n\n\n\n<li><strong>How it aids interpretation<\/strong>:\n<ul class=\"wp-block-list\">\n<li>For <strong>FM\/PM (Frequency\/Phase Modulation)<\/strong>, plots include <strong>IQ plots<\/strong>, <strong>unwrapped phase<\/strong>, and <strong>instantaneous frequency<\/strong>, alongside amplitude. This helps in understanding phase and frequency deviations.<\/li>\n\n\n\n<li>For <strong>AM (Amplitude Modulation)<\/strong>, the focus is on <strong>amplitude envelope, I, and Q components over time<\/strong>, in addition to the IQ plot. This highlights amplitude variations.<\/li>\n\n\n\n<li>For <strong>PSK\/QAM (Phase\/Quadrature Amplitude Modulation)<\/strong>, visualizations include <strong>IQ constellations<\/strong>, <strong>phase histograms<\/strong>, and <strong>eye diagrams<\/strong> for I and Q components. These are vital for analyzing digital signal quality, symbol timing, and symbol detection.<\/li>\n\n\n\n<li>For unknown modulation types, a generic display of IQ plot, I vs. time, Q vs. time, and amplitude vs. 
time is provided.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Signal Characteristic Plots<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>What it visualizes<\/strong>: This method generates plots based on <strong>extracted signal features<\/strong> (e.g., bandwidth, peak power, mean power, spectral flatness, crest factor, modulation confidence, skewness, kurtosis, variance, AM\/FM index, IQ correlation).<\/li>\n\n\n\n<li><strong>How it aids interpretation<\/strong>: It provides a <strong>statistical summary of the signal<\/strong>, allowing for quantitative interpretation of its properties. For example, bar charts can show signal statistics, modulation confidence for a detected modulation type, and higher-order statistics (like skewness and kurtosis), which are important for understanding the shape and distribution of signal components.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Latent Aggregation and Anomaly Detection<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>What it interprets<\/strong>: While not a direct visual method, the <code>LatentAggregator<\/code> interprets RF signal data by <strong>combining FFT (spectrum), Ghost Imaging, and packet metadata<\/strong>. It uses a model to compute a <strong>reconstruction error score<\/strong> for spectral data.<\/li>\n\n\n\n<li><strong>How it aids interpretation<\/strong>: This process is used for <strong>anomaly detection<\/strong>. A high reconstruction error score indicates a &#8220;ghost anomaly,&#8221; suggesting an unusual or potentially malicious signal. It also performs <strong>orbital mimic detection<\/strong>, analyzing if a signal&#8217;s fingerprint matches known orbital patterns. 
This provides <strong>alerts<\/strong> about unusual or impersonating RF signals based on computed confidence scores.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p>In summary, the models and tools described provide a comprehensive suite for analyzing RF signals, ranging from raw spectral and temporal data visualization to high-level feature extraction and anomaly detection, crucial for understanding complex RF environments.<\/p>\n\n\n\n<p>Based on the sources and our conversation history, the main visualization types for complex RF signal data are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Spectrum Plotting<\/strong>: This method involves plotting a <strong>spectrum with frequencies against power in dB<\/strong>. It can include optional peak markers and annotations for specific frequency bands, with frequencies often converted to MHz for better readability. This visualization helps in identifying the power distribution across different frequencies and observing signal peaks and their power levels. Custom annotations can be added to highlight frequency bands, such as FM Radio, VHF Amateur, Weather Radio, UHF Amateur, ISM 915, GSM 900, ADS-B, and GPS L1.<\/li>\n\n\n\n<li><strong>Waterfall Displays<\/strong>: This visualization shows a <strong>series of spectrum lines over time<\/strong>, where the latest spectrum is typically at the top. Signal power is represented by color intensity, often using custom colormaps like &#8220;rf_quantum&#8221; or &#8220;thermal&#8221;. Waterfall displays are crucial for observing how signals change over time, allowing for the identification of intermittent signals, frequency hopping, or signal fading, thus providing a dynamic view of the RF environment.<\/li>\n\n\n\n<li><strong>Modulation-Specific Visualizations<\/strong>: These plots are tailored to reveal characteristics based on the signal&#8217;s modulation type. 
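As noted earlier, these views are all derived from complex IQ samples. A minimal NumPy sketch of that derivation (the sample rate <code>fs</code> is an assumed parameter):

```python
import numpy as np

def iq_views(iq: np.ndarray, fs: float):
    """Derive the per-sample quantities plotted for modulation analysis
    from complex IQ data."""
    i, q = iq.real, iq.imag
    amplitude = np.abs(iq)                          # envelope (AM analysis)
    phase = np.unwrap(np.angle(iq))                 # unwrapped phase (PM analysis)
    inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz (FM analysis)
    return i, q, amplitude, phase, inst_freq
```

For a pure 1 kHz tone sampled at 48 kHz, the amplitude is constant and the instantaneous frequency recovers 1 kHz.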
They extract and display <strong>I (in-phase) and Q (quadrature) components, amplitude, and phase information<\/strong> from complex IQ data.\n<ul class=\"wp-block-list\">\n<li>For <strong>FM\/PM (Frequency\/Phase Modulation)<\/strong>, plots include IQ plots, unwrapped phase, instantaneous frequency, and amplitude over time.<\/li>\n\n\n\n<li>For <strong>AM (Amplitude Modulation)<\/strong>, the focus is on the amplitude envelope, I, and Q components over time, in addition to the IQ plot.<\/li>\n\n\n\n<li>For <strong>PSK\/QAM (Phase\/Quadrature Amplitude Modulation)<\/strong>, visualizations include IQ constellations, phase histograms, and eye diagrams for I and Q components, which are vital for analyzing digital signal quality and timing.<\/li>\n\n\n\n<li>For unknown modulation types, a generic display includes IQ plot, I vs. time, Q vs. time, and amplitude vs. time.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Signal Characteristic Plots<\/strong>: This method generates plots based on <strong>extracted numerical signal features<\/strong>, such as bandwidth, peak power, mean power, spectral flatness, crest factor, modulation confidence, skewness, kurtosis, variance, AM index, FM index, and IQ correlation. These plots, often presented as bar charts, provide a statistical summary and quantitative interpretation of the signal&#8217;s properties.<\/li>\n<\/ul>\n\n\n\n<p>Spectrum data is primarily visualized through two main methods: <strong>Spectrum Plotting<\/strong> and <strong>Waterfall Displays<\/strong>. These visualizations allow for the interpretation of power distribution across frequencies and the dynamic behavior of signals over time.<\/p>\n\n\n\n<p>Here&#8217;s how each method visualizes spectrum data:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Spectrum Plotting<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>What it visualizes<\/strong>: This method plots a <strong>spectrum with frequencies against power in dB<\/strong>. 
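The dB and MHz conversions underlying these axes are standard transformations (this is not code from the source):

```python
import numpy as np

def to_plot_units(freqs_hz: np.ndarray, power_linear: np.ndarray):
    """Convert a linear power spectrum to dB and frequencies to MHz,
    the axes used by the spectrum plot described above."""
    power_db = 10.0 * np.log10(np.maximum(power_linear, 1e-12))  # floor avoids log(0)
    freqs_mhz = freqs_hz / 1e6
    return freqs_mhz, power_db
```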
Frequencies are often converted to megahertz (MHz) for better readability.<\/li>\n\n\n\n<li><strong>Key elements and interpretation<\/strong>:\n<ul class=\"wp-block-list\">\n<li>It typically shows the <strong>main spectrum line<\/strong>, representing the signal&#8217;s power at different frequencies.<\/li>\n\n\n\n<li><strong>Optional peak markers<\/strong> can be added to highlight strong signal presences. These markers can also be annotated with the frequency of the peak.<\/li>\n\n\n\n<li><strong>Annotations<\/strong> are crucial for providing context. These can include:\n<ul class=\"wp-block-list\">\n<li>Specific frequencies marked with vertical lines and labels.<\/li>\n\n\n\n<li><strong>Frequency bands<\/strong> (e.g., FM Radio, VHF Amateur Radio, NOAA Weather, GSM 900, GPS L1, ADS-B) highlighted with colored spans, indicating common uses or regulatory allocations.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>This visualization helps in <strong>identifying the power distribution across different frequencies<\/strong>, allowing users to easily discern signal peaks and their power levels, and understand which frequency bands they occupy.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Waterfall Displays<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>What it visualizes<\/strong>: A waterfall display shows a <strong>series of spectrum lines stacked over time<\/strong>, typically with the most recent spectrum appearing at the top. 
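The "latest line at the top" behaviour of <code>update_waterfall</code> can be sketched as a fixed-depth rolling buffer (the buffer shape is an assumption):

```python
import numpy as np

def update_waterfall(waterfall: np.ndarray, spectrum: np.ndarray) -> np.ndarray:
    """Return the buffer with the new spectrum at row 0; the oldest row
    drops off the bottom. waterfall: [history_depth, n_bins]."""
    return np.vstack([spectrum[None, :], waterfall[:-1]])
```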
<strong>Color intensity is used to represent signal power<\/strong>, often employing custom colormaps such as &#8220;rf_quantum&#8221; or &#8220;thermal&#8221;.<\/li>\n\n\n\n<li><strong>Key elements and interpretation<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Each horizontal line in the display represents a spectrum captured at a specific moment in time.<\/li>\n\n\n\n<li>The <strong>evolution of the spectrum over time<\/strong> is shown vertically, providing a dynamic view of the RF environment.<\/li>\n\n\n\n<li>This visualization is crucial for observing <strong>how signals change over time<\/strong>. It allows for the identification of <strong>intermittent signals<\/strong>, <strong>frequency hopping<\/strong>, or <strong>signal fading<\/strong>, which might not be apparent from a static spectrum plot.<\/li>\n\n\n\n<li>The <code>update_waterfall<\/code> method continuously adds new spectrum lines to build this temporal representation.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p>The <code>SignalTransformer<\/code> is a <strong>Transformer model designed for RF signal classification<\/strong>. It is one of several machine learning models, including <code>SpectralCNN<\/code>, <code>SignalLSTM<\/code>, <code>TemporalCNN<\/code>, and <code>ResNetRF<\/code>, that can be created to classify RF signals based on their characteristics.<\/p>\n\n\n\n<p>The <code>SignalTransformer<\/code> is initialized with parameters such as <code>input_dim<\/code>, <code>num_classes<\/code>, <code>nhead<\/code>, and <code>num_layers<\/code>, indicating its role in categorizing signals into a defined number of classes, typically 11 by default. Its <code>forward<\/code> method processes input data through an embedding layer, positional encoding, and a Transformer encoder, finally passing the output through a classification layer. 
This architecture is designed to identify and categorize different types of RF signals.<\/p>\n\n\n\n<p>The <code>SpectralCNN<\/code> is a <strong>flexible CNN model designed for classifying spectral images of RF signals<\/strong>.<\/p>\n\n\n\n<p>Specifically:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>It is a type of machine learning model intended for <strong>RF signal classification<\/strong>.<\/li>\n\n\n\n<li>It is initialized with a <code>num_classes<\/code> parameter, defaulting to 11, which represents the number of different signal categories it can classify into.<\/li>\n\n\n\n<li>The <code>SpectralCNN<\/code> processes input data, typically spectral images, through convolutional layers, pooling layers, batch normalization, and fully connected layers to perform its classification task.<\/li>\n\n\n\n<li>It can be used as a general model for <strong>initial classification<\/strong> within a hierarchical classification system. When loaded, it can adapt to different numbers of classes than it was originally trained with.<\/li>\n<\/ul>\n\n\n\n<p>The <code>TemporalCNN<\/code> is designed to <strong>classify time-domain RF signals<\/strong>.<\/p>\n\n\n\n<p>It is explicitly described as a &#8220;Temporal 1D CNN for classifying time-domain RF signals&#8221;. Similar to other classification models discussed, such as <code>SpectralCNN<\/code> and <code>SignalTransformer<\/code>, the <code>TemporalCNN<\/code> is initialized with a <code>num_classes<\/code> parameter, which defaults to 11, indicating it categorizes signals into a predefined number of classes. Its architecture includes convolutional layers, pooling layers, batch normalization, and fully connected layers to process time-domain input data for classification. 
The <code>forward<\/code> method specifically handles input data in the shape of <code>[batch, channels, seq_len]<\/code>.<\/p>\n\n\n\n<p>The <code>SignalLSTM<\/code> is designed to <strong>classify time-series RF signal patterns<\/strong>.<\/p>\n\n\n\n<p>Similar to other machine learning models discussed, such as <code>SpectralCNN<\/code>, <code>SignalTransformer<\/code>, <code>TemporalCNN<\/code>, and <code>ResNetRF<\/code>, the <code>SignalLSTM<\/code> is used for <strong>RF signal classification<\/strong>. It is initialized with a <code>num_classes<\/code> parameter, which defaults to 11, indicating its role in categorizing signals into a predefined number of classes. The model processes input data, which is expected to be in the shape <code>[batch_size, seq_len, input_size]<\/code>, through LSTM layers, an attention mechanism, and fully connected layers to perform its classification task.<\/p>\n\n\n\n<p>The <code>LatentAggregator<\/code>&#8216;s primary function is to <strong>combine Fast Fourier Transform (FFT) data, Ghost Imaging results, and Packet Metadata into a single latent fusion layer<\/strong>. It serves as an integration point for various types of RF signal analysis, particularly for <strong>anomaly detection and correlation<\/strong>.<\/p>\n\n\n\n<p>Here are its key functionalities:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Data Aggregation<\/strong>: It explicitly combines <code>fft_bins<\/code> (spectral data) from &#8220;signal_spectrum&#8221; messages and <code>packet_info<\/code> from &#8220;packet_metadata&#8221; messages, storing them in an internal <code>buffer<\/code> associated with a <code>signal_id<\/code>. 
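The per-signal buffering described here can be sketched as a dictionary keyed by <code>signal_id</code>; the message field names follow the source, but the class shape itself is an assumption:

```python
from collections import defaultdict

class LatentBufferSketch:
    """Accumulate spectral and packet features per signal_id, then summarize."""
    def __init__(self):
        self.buffer = defaultdict(dict)

    def on_signal_spectrum(self, signal_id, fft_bins):
        self.buffer[signal_id]["fft_bins"] = fft_bins

    def on_packet_metadata(self, signal_id, packet_info):
        self.buffer[signal_id]["packet_info"] = packet_info

    def latent_summary(self, signal_id):
        entry = self.buffer.get(signal_id, {})
        # Publish only once both feature sources have arrived (assumed policy).
        if "fft_bins" in entry and "packet_info" in entry:
            return {"signal_id": signal_id, **entry}
        return None
```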
It can then publish a &#8220;latent_summary&#8221; containing these aggregated features.<\/li>\n\n\n\n<li><strong>Spectral Analysis via Ghost Imaging<\/strong>:\n<ul class=\"wp-block-list\">\n<li>It uses a pre-configured <code>CompiledGhostDetectorSingleton<\/code> model to process incoming spectrum data (<code>fft_bins<\/code>).<\/li>\n\n\n\n<li>This model performs a <strong>reconstruction<\/strong> of the spectrum (<code>recon<\/code>) and calculates a <code>reconstruction_error_score<\/code> (anomaly score) based on the input spectrum and its reconstruction.<\/li>\n\n\n\n<li>These ghost analysis results, including the <code>ghost_recon_spectrum<\/code> and <code>reconstruction_error_score<\/code>, are stored in its buffer.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Orbital Mimic Detection<\/strong>:\n<ul class=\"wp-block-list\">\n<li>If <code>enable_orbital_detection<\/code> is true, it initializes and utilizes an <code>OrbitalMimicDetector<\/code>.<\/li>\n\n\n\n<li>This detector performs specialized <code>orbital_analysis<\/code> on the signal&#8217;s spectral data.<\/li>\n\n\n\n<li>If an orbital mimic is detected, it publishes an &#8220;orbital_mimic_alert&#8221; which includes details like <code>signal_id<\/code>, <code>alert_type<\/code>, <code>matched_fingerprint<\/code>, <code>match_confidence<\/code>, <code>ghost_score<\/code>, and <code>timestamp<\/code>.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Anomaly Alerting<\/strong>:\n<ul class=\"wp-block-list\">\n<li>It defines an <code>anomaly_threshold<\/code>.<\/li>\n\n\n\n<li>If the calculated <code>reconstruction_error_score<\/code> for a signal exceeds this threshold, it publishes a &#8220;signal_alert&#8221; with an <code>alert_type<\/code> of &#8220;ghost_anomaly&#8221;. 
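The reconstruction-error score and threshold gate can be sketched as follows; mean-squared error and the 0.1 threshold are assumptions, since the source only names the score and the threshold:

```python
import numpy as np

def ghost_anomaly_alert(spectrum, recon, signal_id, anomaly_threshold=0.1):
    """Score how poorly the model reconstructed the spectrum; a high score
    suggests an unusual signal, so emit a ghost_anomaly alert."""
    score = float(np.mean((np.asarray(spectrum) - np.asarray(recon)) ** 2))
    if score > anomaly_threshold:
        return {"signal_id": signal_id,
                "alert_type": "ghost_anomaly",
                "confidence": score}
    return None  # below threshold: no alert
```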
This alert also includes the <code>confidence<\/code> (score), <code>timestamp<\/code>, and whether an <code>orbital_mimic<\/code> was detected.<\/li>\n\n\n\n<li>PyTorch is a prerequisite for the <code>LatentAggregator<\/code> to function.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p>Waterfall colors refer to the <strong>custom colormaps used to represent signal power in waterfall displays<\/strong>. These colors are crucial for visualizing how signal strength changes over time in a dynamic RF environment.<\/p>\n\n\n\n<p>The <code>SignalVisualizer<\/code> module defines specific colormaps for this purpose:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>&#8220;rf_quantum&#8221;<\/strong>: This colormap is defined by specific red, green, and blue color transitions at different data values.\n<ul class=\"wp-block-list\">\n<li>Red component: Starts at 0.0, rises to 0.5 at 0.4, then to 1.0 at 0.6, and stays at 1.0.<\/li>\n\n\n\n<li>Green component: Starts at 0.0, rises to 0.5 at 0.6, then to 1.0 at 0.8, and stays at 1.0.<\/li>\n\n\n\n<li>Blue component: Starts at 0.0, rises to 0.5 at 0.2, then to 1.0 at 0.4, and then declines to 0.0 at 0.8 and stays at 0.0.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>&#8220;thermal&#8221;<\/strong>: This colormap also has defined red, green, and blue transitions, designed to evoke a thermal imaging effect.\n<ul class=\"wp-block-list\">\n<li>Red component: Starts at 0.0, then jumps to 1.0 at 0.66, and stays at 1.0.<\/li>\n\n\n\n<li>Green component: Starts at 0.0, rises to 0.5 at 0.33, then to 1.0 at 0.66, and then drops to 0.0 at 1.0.<\/li>\n\n\n\n<li>Blue component: Starts at 0.0, rises to 1.0 at 0.33, and then drops to 0.0 at 0.66 and stays at 0.0.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p>These custom colormaps are registered with Matplotlib and can be selected via the <code>colormap<\/code> configuration parameter when initializing the <code>SignalVisualizer<\/code>, with &#8220;rf_quantum&#8221; being the default. 
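The &#8220;rf_quantum&#8221; transitions listed above map directly onto Matplotlib&#8217;s <code>LinearSegmentedColormap</code> segment format; registering it (Matplotlib &#8805; 3.6 API) makes it selectable by name, as the configuration parameter suggests:

```python
import matplotlib
from matplotlib.colors import LinearSegmentedColormap

# Each tuple is (position, value_below, value_above); values follow the
# rf_quantum transitions described above.
segments = {
    "red":   [(0.0, 0.0, 0.0), (0.4, 0.5, 0.5), (0.6, 1.0, 1.0), (1.0, 1.0, 1.0)],
    "green": [(0.0, 0.0, 0.0), (0.6, 0.5, 0.5), (0.8, 1.0, 1.0), (1.0, 1.0, 1.0)],
    "blue":  [(0.0, 0.0, 0.0), (0.2, 0.5, 0.5), (0.4, 1.0, 1.0),
              (0.8, 0.0, 0.0), (1.0, 0.0, 0.0)],
}
rf_quantum = LinearSegmentedColormap("rf_quantum", segments)
matplotlib.colormaps.register(rf_quantum, force=True)  # usable via cmap="rf_quantum"
```

After registration, a waterfall render can pass <code>cmap="rf_quantum"</code> to <code>imshow</code>.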
When plotting a waterfall display, the <code>imshow<\/code> function uses the chosen colormap to render the <code>waterfall_data<\/code>, where signal power is represented by color intensity.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>PODCAST: SCYTHE Radio Frequency (RF) Signal Intelligence system, focusing on&nbsp;machine learning models for signal analysis&nbsp;and&nbsp;visualization tools. Specifically, the various&nbsp;neural network architectures, including&nbsp;Convolutional Neural Networks (CNNs),&nbsp;Long Short-Term Memory (LSTM) networks,&nbsp;Residual Networks (ResNet), and&nbsp;Transformer models, designed for classifying different types of RF signals. It also presents a&nbsp;hierarchical classification approach&nbsp;that uses a general model followed by specialized models&hellip;&nbsp;<a href=\"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=1204\" rel=\"bookmark\"><span class=\"screen-reader-text\">RF Signal Models &amp; Visualization<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":1222,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"categories":[14,10],"tags":[],"class_list":["post-1204","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-podcast","category-signal_scythe"],"_links":{"self":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/1204","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest
_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1204"}],"version-history":[{"count":5,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/1204\/revisions"}],"predecessor-version":[{"id":1224,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/1204\/revisions\/1224"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/media\/1222"}],"wp:attachment":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1204"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1204"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1204"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}