Mixture of Experts Dispatching for Scalable Communication Pipelines
We study a Mixture-of-Experts (MoE) dispatcher for message-oriented middleware with sparse activation (top-k), load-aware gating, and performance-adaptive scoring. Against round-robin and random baselines, the MoE dispatcher improves throughput, lowers latency, balances load, and reduces gating overhead via simple multiplicative gating with online EWMA performance updates.
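To make the gating concrete, the following is a minimal Python sketch of a top-k, load-aware dispatcher with multiplicative gating and an online EWMA performance update, under stated assumptions: the class and method names (Expert, MoEDispatcher, gate, dispatch, report), the smoothing factor alpha, and the exact score form (affinity x load factor x EWMA performance) are illustrative choices, not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class Expert:
    """A downstream consumer with tracked load and performance (illustrative)."""
    name: str
    load: int = 0       # in-flight messages currently assigned to this expert
    perf: float = 1.0   # EWMA of observed throughput (assumed unit: msgs/s)


class MoEDispatcher:
    """Sparse (top-k) dispatcher with multiplicative gating.

    Gate score = affinity * load_factor * perf_ewma; the multiplicative
    combination follows the abstract, the individual terms are assumptions.
    """

    def __init__(self, experts, k=2, alpha=0.2):
        self.experts = experts
        self.k = k          # sparse activation: only top-k experts are considered
        self.alpha = alpha  # EWMA smoothing factor for performance updates

    def gate(self, affinity):
        """Score every expert and return the top-k by multiplicative gate."""
        scored = []
        for expert, a in zip(self.experts, affinity):
            load_factor = 1.0 / (1.0 + expert.load)   # load-aware penalty
            scored.append((a * load_factor * expert.perf, expert))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [expert for _, expert in scored[: self.k]]

    def dispatch(self, message, affinity):
        """Route the message to the best-scoring expert among the top-k."""
        chosen = self.gate(affinity)[0]
        chosen.load += 1
        return chosen

    def report(self, expert, observed_throughput):
        """Online EWMA update of an expert's performance after completion."""
        expert.perf = (1 - self.alpha) * expert.perf + self.alpha * observed_throughput
        expert.load = max(0, expert.load - 1)


# Hypothetical usage: four workers, one message routed and then reported.
experts = [Expert(f"worker-{i}") for i in range(4)]
dispatcher = MoEDispatcher(experts, k=2)
target = dispatcher.dispatch("msg-1", affinity=[0.9, 0.5, 0.7, 0.3])
dispatcher.report(target, observed_throughput=1.2)
```

In this sketch the load penalty and the EWMA performance term act multiplicatively on the gate score, so a fast but overloaded expert and a lightly loaded but slow expert are both down-weighted; this is one plausible reading of the load-aware, performance-adaptive scoring the abstract describes.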