Temporal RF models typically require fixed-length IQ
sequences, yet real-world bursts arrive at variable durations and
sampling rates. In RF–QUANTUM–SCYTHE, the temporal input
builder _create_temporal_input normalizes each complex
IQ stream to a configured sequence length before feeding recurrent
and transformer-style encoders.
This paper compares three practical IQ length normalization
policies—evenly spaced downsampling, windowed pooling, and
strided crops—in a shared RF modulation classification stack.
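The three policies can be sketched roughly as follows. This is a minimal illustration with hypothetical helper names, not the actual _create_temporal_input implementation, which may differ in details such as padding and window alignment:

```python
import numpy as np

def evenly_spaced_downsample(iq: np.ndarray, L: int) -> np.ndarray:
    # Pick L evenly spaced sample indices spanning the whole burst.
    idx = np.linspace(0, len(iq) - 1, L).round().astype(int)
    return iq[idx]

def windowed_pool(iq: np.ndarray, L: int) -> np.ndarray:
    # Split the burst into L contiguous windows and average each one,
    # so every output sample summarizes a local segment.
    return np.array([w.mean() for w in np.array_split(iq, L)])

def strided_crop(iq: np.ndarray, L: int, stride: int = 2) -> np.ndarray:
    # Take every `stride`-th sample from the start of the burst,
    # zero-padding if the burst is too short to fill L samples.
    crop = iq[: L * stride : stride]
    if len(crop) < L:
        crop = np.pad(crop, (0, L - len(crop)))
    return crop
```

Each helper maps a variable-length complex IQ vector to a fixed length L; for example, a 1000-sample burst and L=128 yield a 128-sample sequence under all three policies.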
We sweep sequence length from very short (tens of samples) to
long (hundreds to thousands of samples) and quantify the trade-off between
aliasing distortion and classification accuracy. On synthetic RF scenarios, we find that simple evenly spaced downsampling achieves
near-baseline accuracy at modest lengths, while aggressive strided
cropping reduces computation but risks discarding informative
structure. The windowed pooling policy provides a middle ground,
smoothing local variations at the cost of mild aliasing. Concretely,
evenly spaced downsampling retains up to 89.2% accuracy at
L=128, while more aggressive crops and pools give up a few
percentage points of accuracy in exchange for lower temporal
resolution and cost. We release a harness and figure-generation
scripts so new policies and lengths can be evaluated without
modifying the LaTeX sources.
Index Terms—Automatic modulation classification, RF machine
learning, IQ processing, sequence length, downsampling.