Power and Limitations of Aggregation in Compound AI Systems

📅 2026-02-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether aggregating the outputs of multiple copies of the same model in a compound AI system can unlock outputs beyond what any single query elicits, thereby expanding the set of outputs the system designer can control. Modeling the designer's partial steering of each agent through its reward-function specification within a stylized principal-agent framework, the work identifies three aggregation mechanisms (feasibility expansion, support expansion, and binding set contraction), proves that any elicitability-expanding aggregation operation must implement at least one of them, and shows that strengthened versions of these mechanisms provide necessary and sufficient conditions that fully characterize elicitability expansion. Combining this theoretical analysis with experiments on large language models in a toy reference-generation task, the paper delineates both the power and the fundamental limits of aggregation in overcoming constraints imposed by model capabilities and prompt engineering.

📝 Abstract
When designing compound AI systems, a common approach is to query multiple copies of the same model and aggregate the responses to produce a synthesized output. Given the homogeneity of these models, this raises the question of whether aggregation unlocks access to a greater set of outputs than querying a single model. In this work, we investigate the power and limitations of aggregation within a stylized principal-agent framework. This framework models how the system designer can partially steer each agent's output through its reward function specification, but still faces limitations due to prompt engineering ability and model capabilities. Our analysis uncovers three natural mechanisms -- feasibility expansion, support expansion, and binding set contraction -- through which aggregation expands the set of outputs that are elicitable by the system designer. We prove that any aggregation operation must implement one of these mechanisms in order to be elicitability-expanding, and that strengthened versions of these mechanisms provide necessary and sufficient conditions that fully characterize elicitability-expansion. Finally, we provide an empirical illustration of our findings for LLMs deployed in a toy reference-generation task. Altogether, our results take a step towards characterizing when compound AI systems can overcome limitations in model capabilities and in prompt engineering.
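The abstract's setup can be sketched in miniature: an agent best-responds to a designer-chosen reward over a fixed set of outputs it is capable of producing, and an aggregation step combines several such responses. The names below (`SUPPORT`, `query`, `aggregate_concat`) are illustrative inventions, not the paper's formalism; the point is only that an aggregated output can fall outside any single agent's output set, a toy instance of feasibility expansion.

```python
# Toy sketch (assumed names, not the paper's formal model): each homogeneous
# agent returns the reward-maximizing output from a fixed support set that
# models its capability limits.

SUPPORT = ["alpha", "beta", "gamma"]  # hypothetical outputs one model can emit

def query(reward):
    """Agent best-responds: returns the reward-maximizing output in SUPPORT."""
    return max(SUPPORT, key=reward)

def aggregate_concat(reward_1, reward_2):
    """Aggregate two queries by concatenation, one simple aggregation rule."""
    return query(reward_1) + " " + query(reward_2)

# A single query can only elicit elements of SUPPORT.
single = query(lambda s: s == "beta")
assert single in SUPPORT

# Aggregation elicits "alpha beta", which no single query can produce:
# the elicitable set has expanded beyond the individual model's support.
combined = aggregate_concat(lambda s: s == "alpha", lambda s: s == "beta")
assert combined not in SUPPORT
```

Concatenation is just one convenient aggregation rule; the paper's characterization covers aggregation operations in general, and the mechanisms it names (support expansion, binding set contraction) would require different toy constructions to illustrate.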
Problem

Research questions and friction points this paper is trying to address.

aggregation
compound AI systems
elicitable outputs
model homogeneity
output expansion
Innovation

Methods, ideas, or system contributions that make the work stand out.

aggregation
compound AI systems
elicitability
principal-agent framework
output expansion