🤖 AI Summary
This work addresses the susceptibility of large language models to hallucination and systematic bias, problems that are exacerbated in Mixture-of-Experts (MoE) architectures by uneven expert activation. To mitigate them, the authors propose Council Mode, a multi-agent collaborative decision-making framework structured into three stages: intelligent triage for query routing, parallel generation by heterogeneous state-of-the-art models, and a structured consensus mechanism that integrates overlapping, divergent, and unique perspectives. By explicitly modeling the distribution of multi-agent opinions, Council Mode improves output truthfulness and fairness: it reduces hallucination rates by 35.9% (relative) on HaluEval, improves TruthfulQA scores by 7.8 points, and substantially lowers cross-domain bias variance.
📝 Abstract
Large Language Models (LLMs), particularly those employing Mixture-of-Experts (MoE) architectures, have achieved remarkable capabilities across diverse natural language processing tasks. However, these models frequently suffer from hallucinations -- generating plausible but factually incorrect content -- and exhibit systematic biases that are amplified by uneven expert activation during inference. In this paper, we propose the Council Mode, a novel multi-agent consensus framework that addresses these limitations by dispatching queries to multiple heterogeneous frontier LLMs in parallel and synthesizing their outputs through a dedicated consensus model. The Council pipeline operates in three phases: (1) an intelligent triage classifier that routes queries based on complexity, (2) parallel expert generation across architecturally diverse models, and (3) a structured consensus synthesis that explicitly identifies agreement, disagreement, and unique findings before producing the final response. We implement and evaluate this architecture within an open-source AI workspace. Our comprehensive evaluation across multiple benchmarks demonstrates that the Council Mode achieves a 35.9% relative reduction in hallucination rates on the HaluEval benchmark and a 7.8-point improvement on TruthfulQA compared to the best-performing individual model, while maintaining significantly lower bias variance across domains. We provide the mathematical formulation of the consensus mechanism, detail the system architecture, and present extensive empirical results with ablation studies.
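The three-phase pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the word-count triage heuristic, the expert callables, and the set-based claim comparison are all placeholder assumptions standing in for the learned triage classifier, the frontier LLMs, and the dedicated consensus model.

```python
# Hypothetical sketch of the Council pipeline: triage -> parallel experts -> consensus.
# Everything here is a simplified stand-in for the components named in the abstract.
from concurrent.futures import ThreadPoolExecutor

def triage(query: str) -> str:
    """Phase 1: route queries by complexity. A crude word-count proxy (assumption)
    replaces the paper's learned triage classifier."""
    return "council" if len(query.split()) > 5 else "single"

def run_council(query: str, experts: dict) -> dict:
    """Phase 2: dispatch the query to heterogeneous experts in parallel.
    Each expert is modeled as a callable returning a set of claims."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda fn: fn(query), experts.values())
        return dict(zip(experts.keys(), results))

def consensus(answers: dict) -> dict:
    """Phase 3: partition the pooled claims into agreement (all experts),
    unique findings (exactly one expert), and disagreement (the remainder)."""
    claim_sets = [set(a) for a in answers.values()]
    all_claims = set().union(*claim_sets)
    agreement = set.intersection(*claim_sets)
    unique = {c for c in all_claims if sum(c in s for s in claim_sets) == 1}
    disagreement = all_claims - agreement - unique
    return {"agree": agreement, "disagree": disagreement, "unique": unique}

# Usage with toy experts (hypothetical names and claims):
experts = {
    "model_a": lambda q: {"claim1", "claim2"},
    "model_b": lambda q: {"claim1", "claim3"},
    "model_c": lambda q: {"claim1", "claim2"},
}
verdict = consensus(run_council("a sufficiently long and complex query here", experts))
```

In this toy run, `claim1` lands in the agreement set, `claim3` is a unique finding, and `claim2` falls into disagreement; the consensus model would then weigh these partitions when composing the final response.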