EMAFusion: A Self-Optimizing System for Seamless LLM Selection and Integration

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges of high deployment cost and low robustness in large language model (LLM) systems, this paper proposes a self-optimizing multi-level fusion framework that enables query-driven dynamic model selection and collaborative execution. The method introduces a novel taxonomy-plus-learning dual-routing mechanism coupled with a multi-criterion cascaded decision architecture, supporting label-free adaptive optimization and fine-grained cost–accuracy trade-offs. Specifically, it integrates taxonomy-based routing, a lightweight learning-based router, multi-judge confidence scoring, and cascaded inference. Experiments demonstrate that the framework achieves 94.3% accuracy—2.6 percentage points higher than the best single-model baseline—while reducing cost to only one-fourth of the baseline average and one-twentieth of GPT-4’s. It significantly outperforms pure taxonomy routing (88.1%) and pure learning-based routing (91.7%), validating its effectiveness in balancing efficiency, accuracy, and adaptability.

📝 Abstract
While recent advances in large language models (LLMs) have significantly enhanced performance across diverse natural language tasks, the high computational and financial costs associated with their deployment remain substantial barriers. Existing routing strategies partially alleviate this challenge by assigning queries to cheaper or specialized models, but they frequently rely on extensive labeled data or fragile task-specific heuristics. Conversely, fusion techniques aggregate multiple LLM outputs to boost accuracy and robustness, yet they often exacerbate cost and may reinforce shared biases. We introduce EMAFusion, a new framework that self-optimizes for seamless LLM selection and reliable execution for a given query. Specifically, EMAFusion integrates a taxonomy-based router for familiar query types, a learned router for ambiguous inputs, and a cascading approach that progressively escalates from cheaper to more expensive models based on multi-judge confidence evaluations. Through extensive evaluations, we find EMAFusion outperforms the best individual models by over 2.6 percentage points (94.3% vs. 91.7%), while being 4X cheaper than the average cost. EMAFusion further achieves a remarkable 17.1 percentage point improvement over models like GPT-4 at less than 1/20th the cost. Our combined routing approach delivers 94.3% accuracy compared to taxonomy-based (88.1%) and learned model predictor-based (91.7%) methods alone, demonstrating the effectiveness of our unified strategy. Finally, EMAFusion supports flexible cost-accuracy trade-offs, allowing users to balance their budgetary constraints and performance needs.
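The abstract describes a dual routing stage: a taxonomy-based router handles familiar query types, and a learned router takes over for ambiguous inputs. A minimal sketch of that idea is below; all names (`QUERY_TAXONOMY`, `classify`, `learned_route`, the model identifiers) are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Optional

# Taxonomy routing: map known query categories to a preferred model.
QUERY_TAXONOMY = {
    "arithmetic": "small-math-model",
    "translation": "mid-translation-model",
}

def classify(query: str) -> Optional[str]:
    """Toy classifier: keyword matching stands in for a real taxonomy tagger."""
    if any(tok.isdigit() for tok in query.split()):
        return "arithmetic"
    if query.lower().startswith("translate"):
        return "translation"
    return None  # unfamiliar query type -> treated as ambiguous

def learned_route(query: str) -> str:
    """Placeholder for a lightweight learned router (e.g., a small classifier)."""
    return "large-general-model"

def route(query: str) -> str:
    category = classify(query)
    if category in QUERY_TAXONOMY:       # familiar type: taxonomy routing wins
        return QUERY_TAXONOMY[category]
    return learned_route(query)          # ambiguous input: defer to learned router
```

The point of the split is that cheap, deterministic taxonomy lookups cover the common cases, while the learned router only has to handle the residual ambiguous traffic.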
Problem

Research questions and friction points this paper is trying to address.

Reducing high computational and financial costs of LLM deployment
Improving accuracy and robustness in LLM selection and integration
Balancing cost-accuracy trade-offs for flexible user needs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-optimizing LLM selection via multi-router integration
Cascading model escalation based on confidence evaluations
Flexible cost-accuracy trade-offs for budget-aware deployment
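The cascading escalation and cost-accuracy knob above can be sketched as follows. Cheap models are tried first, multiple judges score each answer, and the query escalates only when average judge confidence falls below a threshold; raising or lowering that threshold is the budget/accuracy trade-off. The `models`/`judges` callables are hypothetical stand-ins, not the paper's code.

```python
from statistics import mean

def cascade(query, models, judges, threshold=0.8):
    """Run a cost-ordered cascade with multi-judge confidence scoring.

    models: list of (name, cost, answer_fn), cheapest first.
    judges: list of score_fn(query, answer) -> float in [0, 1].
    Returns (model_name, answer, total_cost_spent).
    """
    total_cost = 0.0
    name, answer = None, None
    for name, cost, answer_fn in models:
        answer = answer_fn(query)
        total_cost += cost
        confidence = mean(judge(query, answer) for judge in judges)
        if confidence >= threshold:        # judges agree: stop early, save cost
            break
    return name, answer, total_cost        # otherwise we fell through to the priciest model
```

A lower threshold accepts cheaper answers more readily (lower cost, lower accuracy); a higher threshold escalates more queries to expensive models (higher cost, higher accuracy).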
Authors
Soham Shah (Ema Unlimited, Inc.)
Kumar Shridhar (ETH Zurich)
Surojit Chatterjee (Ema Unlimited, Inc.)
Souvik Sen (Ema Unlimited, Inc.)

Topics: NLP, Deep Learning, Machine Learning