We study a Mixture-of-Experts (MoE) dispatcher for message-oriented middleware with sparse activation (top-k), load-aware gating, and performance-adaptive scoring. Against round-robin and random baselines, the MoE dispatcher improves throughput, lowers latency, balances load, and reduces gating overhead via simple multiplicative gating with online EWMA performance updates.
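A minimal sketch of the dispatch loop described above, combining multiplicative gating (affinity times EWMA performance with a load discount), top-k sparse activation, and online EWMA updates. Names such as `Expert`, `ALPHA`, and `TOP_K`, the specific load penalty, and the performance signal are illustrative assumptions, not the paper's exact formulation.

```python
import random
from dataclasses import dataclass

ALPHA = 0.2   # EWMA smoothing factor (assumed value)
TOP_K = 2     # number of experts activated per message (assumed value)

@dataclass
class Expert:
    affinity: float   # static routing affinity for this message class
    perf: float = 1.0 # EWMA of observed performance (e.g., inverse latency)
    load: int = 0     # messages currently queued at this expert

def gate_scores(experts):
    """Multiplicative gating: affinity x performance x load discount."""
    return [e.affinity * e.perf / (1.0 + e.load) for e in experts]

def dispatch(experts, k=TOP_K):
    """Sparse activation: route the message to the top-k experts by gate score."""
    scores = gate_scores(experts)
    ranked = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)
    chosen = ranked[:k]
    for i in chosen:
        experts[i].load += 1
    return chosen

def report(expert, observed_perf):
    """Online EWMA performance update after the expert finishes a message."""
    expert.perf = ALPHA * observed_perf + (1 - ALPHA) * expert.perf
    expert.load = max(0, expert.load - 1)

if __name__ == "__main__":
    pool = [Expert(affinity=random.uniform(0.5, 1.0)) for _ in range(4)]
    chosen = dispatch(pool)
    print("dispatched to experts:", chosen)
    for i in chosen:
        report(pool[i], observed_perf=random.uniform(0.8, 1.2))
```

The multiplicative form keeps the gate cheap to evaluate per message, which is consistent with the low gating overhead reported against the round-robin and random baselines.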