Mixture of Experts Dispatching for Scalable Communication Pipelines

We study a Mixture-of-Experts (MoE) dispatcher for message-oriented middleware with sparse activation (top-k), load-aware gating, and performance-adaptive scoring. Against round-robin and random baselines, the MoE dispatcher improves throughput, lowers latency, balances load more evenly, and keeps gating overhead low via simple multiplicative gating with online EWMA performance updates.
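
To illustrate the gating rule summarized above, the following is a minimal sketch, not the paper's implementation: each expert's score is the product of a load term and an online EWMA of its observed latency, and the top-k experts by score receive the message. The class name `MoEDispatcher`, the smoothing factor `alpha`, and the helper `record_latency` are assumptions introduced for this example.

```python
import random

class MoEDispatcher:
    """Hypothetical sketch of multiplicative gating with online EWMA updates."""

    def __init__(self, num_experts, k=2, alpha=0.2):
        self.k = k                                # top-k sparse activation
        self.alpha = alpha                        # EWMA smoothing factor (assumed value)
        self.load = [0] * num_experts             # in-flight messages per expert
        self.ewma_latency = [1.0] * num_experts   # smoothed latency estimate (ms)

    def scores(self):
        # Multiplicative gate: prefer lightly loaded experts with low observed latency.
        return [1.0 / ((1 + l) * t) for l, t in zip(self.load, self.ewma_latency)]

    def dispatch(self):
        # Select the top-k experts by score (sparse activation).
        s = self.scores()
        chosen = sorted(range(len(s)), key=lambda i: s[i], reverse=True)[: self.k]
        for i in chosen:
            self.load[i] += 1
        return chosen

    def record_latency(self, expert, observed_ms):
        # Online EWMA update from the observed service latency, then release the slot.
        self.ewma_latency[expert] = (
            self.alpha * observed_ms + (1 - self.alpha) * self.ewma_latency[expert]
        )
        self.load[expert] -= 1


if __name__ == "__main__":
    d = MoEDispatcher(num_experts=4, k=2)
    for _ in range(5):
        for e in d.dispatch():
            d.record_latency(e, observed_ms=random.uniform(1.0, 10.0))
    print("EWMA latencies:", [round(x, 2) for x in d.ewma_latency])
```

The multiplicative form keeps the gate cheap: scoring is a single pass over the experts, and the EWMA update is O(1) per completed message, which is consistent with the low gating overhead claimed above.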