We quantify when a parent
HierarchicalMLClassifier beats a flat ensemble and
vice versa. We report per-class win profiles, confusion
deltas, and latency trade-offs, with code paths mapped to
super().classify_signal() vs the ensemble voting block.
We find that a hierarchical classifier is never worse than a flat ensemble
of identical capacity on RML2016.10a, with strict gains on higher-order modulations and at high SNR.
All results are on the standard RML2016.10a dataset [1], filtered to BPSK, QPSK, 8PSK, 16QAM, and 64QAM, yielding 20,000 test examples (4,000 per class) evenly distributed across −10 to +18 dB SNR. We instrument the classifier to expose both paths in a single pass. For each signal we record the hierarchical prediction, the flat-ensemble prediction, and the confidence and latency of each path. Per-class wins count cases where one path is correct and the other is not.
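A minimal sketch of this single-pass instrumentation in Python; the callable names, the (label, confidence) return shape, and the (signal, label, snr) tuple layout are illustrative assumptions, with hierarchical_path standing in for the super().classify_signal() route and flat_path for the ensemble voting block:

import time
from collections import defaultdict

def evaluate_both_paths(hierarchical_path, flat_path, dataset):
    """Single-pass instrumentation of both code paths.

    hierarchical_path and flat_path are callables wrapping the
    super().classify_signal() route and the ensemble voting block,
    respectively; each is assumed to return a (label, confidence) pair.
    dataset is assumed to yield (signal, label, snr) tuples.
    """
    records = []
    wins = defaultdict(lambda: {"hier": 0, "flat": 0})
    for signal, label, snr in dataset:
        t0 = time.perf_counter()
        hier_pred, hier_conf = hierarchical_path(signal)
        t1 = time.perf_counter()
        flat_pred, flat_conf = flat_path(signal)
        t2 = time.perf_counter()
        records.append({
            "label": label, "snr": snr,
            "hier_pred": hier_pred, "hier_conf": hier_conf, "hier_latency": t1 - t0,
            "flat_pred": flat_pred, "flat_conf": flat_conf, "flat_latency": t2 - t1,
        })
        # A per-class win is counted only when exactly one path is correct.
        if hier_pred == label and flat_pred != label:
            wins[label]["hier"] += 1
        elif flat_pred == label and hier_pred != label:
            wins[label]["flat"] += 1
    return records, dict(wins)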
VI. REPRODUCIBILITY
Run make in paper_Hier_vs_Flat_Ensembles/.
Provide your dataset and model:
DATASET_FUNC="my_dataset_module:iter_eval" \
CLASSIFIER_SPEC="ensemble_ml_classifier:EnsembleMLClassifier" \
make eval
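A minimal sketch of an iter_eval entry point compatible with the command above, assuming the standard RML2016.10a pickle layout (a dict keyed by (modulation, snr) holding 2x128 IQ frames) and the same (signal, label, snr) tuple shape as in the instrumentation sketch; the file name and label strings are assumptions:

# my_dataset_module.py -- illustrative DATASET_FUNC target.
import pickle

# RML2016.10a labels the QAM classes "QAM16"/"QAM64"; remap as needed.
KEEP = {"BPSK", "QPSK", "8PSK", "QAM16", "QAM64"}

def iter_eval(path="RML2016.10a_dict.pkl"):
    """Yield (signal, label, snr) for the five classes at -10 dB and above."""
    with open(path, "rb") as f:
        data = pickle.load(f, encoding="latin1")   # the dataset pickle is Python 2 era
    for (mod, snr), frames in sorted(data.items()):
        if mod not in KEEP or snr < -10:
            continue
        for signal in frames:                      # each frame is a 2x128 IQ array
            yield signal, mod, snr

CLASSIFIER_SPEC follows the same module:name convention and can point at any classifier class exposing the evaluated interface.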
