We quantify when a parent
HierarchicalMLClassifier beats a flat ensemble and
vice versa. We report per-class win profiles, confusion
deltas, and latency trade-offs, with code paths mapped to
super().classify_signal() versus the ensemble voting block.
We find a hierarchical classifier is never worse than a flat ensemble
of identical capacity on RML2016.10a, with strict gains on higher-order modulations and at high SNR.
I. METHOD
a) Dataset.: All results are on the standard
RML2016.10a dataset [1], filtered to BPSK, QPSK, 8PSK,
16QAM, 64QAM, yielding 20,000 test examples (4,000 per
class) evenly distributed across −10 to +18 dB SNR.
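A minimal sketch of this filtering step follows, assuming the commonly distributed RML2016.10a_dict.pkl layout (a pickle of a dict keyed by (modulation, SNR) tuples, each value an array of I/Q examples); the file name is an assumption, and QAM16/QAM64 are that file's labels for 16QAM/64QAM.

    import pickle
    import numpy as np

    # Assumed key names in the pickle; QAM16/QAM64 correspond to 16QAM/64QAM.
    KEEP_MODS = {"BPSK", "QPSK", "8PSK", "QAM16", "QAM64"}
    MIN_SNR = -10  # dB; the full dataset spans -20 to +18 dB in 2 dB steps

    # File name is illustrative; the dict maps (modulation, snr) -> (n, 2, 128) array.
    with open("RML2016.10a_dict.pkl", "rb") as f:
        raw = pickle.load(f, encoding="latin1")

    X, y, snr = [], [], []
    for (mod, db), samples in raw.items():
        if mod in KEEP_MODS and db >= MIN_SNR:
            X.append(samples)
            y.extend([mod] * len(samples))
            snr.extend([db] * len(samples))

    X = np.concatenate(X)              # (N, 2, 128) I/Q examples
    y, snr = np.array(y), np.array(snr)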
We instrument the classifier to expose both paths in a single pass.
For each signal, we record (1) the hierarchical prediction, (2) the
flat-ensemble prediction, and (3) the confidence and latency of each
path. Per-class wins count cases where one path is correct and the
other is not.
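The sketch below illustrates this single-pass instrumentation and the win count, assuming each path can be wrapped as a callable returning a (label, confidence) pair; hierarchical_path and flat_path are placeholder names standing in for super().classify_signal() and the ensemble voting block.

    import time
    from collections import Counter

    def classify_both(signal, hierarchical_path, flat_path):
        # Run both decision paths on the same signal and time each one.
        t0 = time.perf_counter()
        hier_label, hier_conf = hierarchical_path(signal)
        t1 = time.perf_counter()
        flat_label, flat_conf = flat_path(signal)
        t2 = time.perf_counter()
        return {
            "hier": (hier_label, hier_conf, t1 - t0),  # prediction, confidence, latency
            "flat": (flat_label, flat_conf, t2 - t1),
        }

    def per_class_wins(records, truths):
        # A "win" for one path is a signal it classifies correctly
        # while the other path misclassifies it.
        wins = {"hier": Counter(), "flat": Counter()}
        for rec, truth in zip(records, truths):
            hier_ok = rec["hier"][0] == truth
            flat_ok = rec["flat"][0] == truth
            if hier_ok and not flat_ok:
                wins["hier"][truth] += 1
            elif flat_ok and not hier_ok:
                wins["flat"][truth] += 1
        return wins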