Balancing Information Accuracy and Response Timeliness in Networked LLMs

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the trade-off between information accuracy and response latency in networked large language model (LLM) systems, this paper proposes a joint optimization framework based on a three-tier architecture comprising end users, a central task processor, and multiple groups of topic-specialized LLMs. The framework integrates dynamic task routing, binary classification-based query preprocessing, and multi-model response aggregation to achieve coordinated optimization. Its key contribution is the identification and empirical validation of a novel phenomenon: topic-specialized LLMs with comparable individual performance yield significantly higher accuracy gains under response aggregation. Simulation results demonstrate that the proposed method consistently achieves higher aggregated accuracy than any single constituent model while maintaining low end-to-end latency. Notably, accuracy improvements are most pronounced when baseline model capabilities are similar—establishing a new paradigm for efficient collaboration within specialized LLM ensembles.
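The paper's headline finding, that aggregation helps most when the constituent models have similar standalone accuracy, can be illustrated with a small exact computation. The sketch below assumes majority voting over independent binary classifiers, which is a common aggregation rule for true/false queries; the paper's exact aggregation scheme may differ.

```python
from itertools import product

def majority_vote_accuracy(accs):
    """Exact probability that a strict majority of independent binary
    classifiers with individual accuracies `accs` answers correctly."""
    m = len(accs)
    total = 0.0
    # Enumerate every correct/incorrect pattern across the m models.
    for outcome in product([True, False], repeat=m):
        p = 1.0
        for correct, acc in zip(outcome, accs):
            p *= acc if correct else (1 - acc)
        if sum(outcome) > m / 2:  # strict majority is correct
            total += p
    return total

# Similar models: aggregation beats every individual model (0.784 > 0.7).
similar = majority_vote_accuracy([0.7, 0.7, 0.7])

# Dissimilar models: aggregation can fall below the best model (0.792 < 0.9).
dissimilar = majority_vote_accuracy([0.9, 0.6, 0.6])
```

With three equally capable models at 70% accuracy, the majority vote reaches about 78.4%, while mixing a 90% model with two 60% models yields only about 79.2%, below the strongest model alone, matching the paper's observation.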

📝 Abstract
Recent advancements in Large Language Models (LLMs) have transformed many fields including scientific discovery, content generation, biomedical text mining, and educational technology. However, the substantial requirements for training data, computational resources, and energy consumption pose significant challenges for their practical deployment. A promising alternative is to leverage smaller, specialized language models and aggregate their outputs to improve overall response quality. In this work, we investigate a networked LLM system composed of multiple users, a central task processor, and clusters of topic-specialized LLMs. Each user submits categorical binary (true/false) queries, which are routed by the task processor to a selected cluster of $m$ LLMs. After gathering individual responses, the processor returns a final aggregated answer to the user. We characterize both the information accuracy and response timeliness in this setting, and formulate a joint optimization problem to balance these two competing objectives. Our extensive simulations demonstrate that the aggregated responses consistently achieve higher accuracy than those of individual LLMs. Notably, this improvement is more significant when the participating LLMs exhibit similar standalone performance.
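The pipeline in the abstract (user query → task processor → cluster of $m$ LLMs → aggregated answer) can be sketched as a Monte Carlo simulation. This is an illustrative model, not the paper's simulation setup: each LLM is treated as a Bernoulli oracle with a fixed accuracy, the processor queries the cluster in parallel, and end-to-end latency is taken as the slowest model's response time; all parameter values below are assumptions.

```python
import random

def simulate_cluster(accs, latencies, n_queries=100_000, seed=0):
    """Monte Carlo sketch of a networked LLM cluster answering binary
    (true/false) queries. Each of the m models answers correctly with its
    own probability; the task processor aggregates by majority vote.
    Returns (aggregated accuracy, per-query latency under parallel calls).
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_queries):
        # +1 for a correct vote, -1 for an incorrect one.
        votes = sum(1 if rng.random() < a else -1 for a in accs)
        if votes > 0:
            correct += 1
    # Parallel fan-out: the processor waits for the slowest model.
    return correct / n_queries, max(latencies)

acc, latency = simulate_cluster(accs=[0.7, 0.7, 0.7],
                                latencies=[0.2, 0.3, 0.5])
```

Under these assumed parameters the simulated aggregated accuracy settles near 0.78, above the 0.7 of any single model, while latency is pinned to the slowest cluster member, which is the accuracy/timeliness tension the paper's joint optimization targets.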
Problem

Research questions and friction points this paper is trying to address.

Balancing accuracy and timeliness in networked LLM systems
Optimizing aggregated responses from specialized LLM clusters
Improving query accuracy via collaborative smaller LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Networked LLM system with specialized clusters
Aggregated responses for higher accuracy
Joint optimization of accuracy and timeliness