ARCANE: A Multi-Agent Framework for Interpretable and Configurable Alignment

📅 2025-12-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the challenge of aligning large language model (LLM) agents with stakeholder preferences in long-horizon tasks, this paper proposes ARCANE, a runtime-configurable and interpretable alignment framework. It formalizes alignment as a multi-agent collaborative task and dynamically models preferences as natural-language rubrics: weighted sets of verifiable scoring criteria generated on the fly from task context. Methodologically, it frames rubric learning as a utility-theoretic reconstruction problem, enabling real-time preference updates without fine-tuning, and trains with a regularized Group-Sequence Policy Optimization (GSPO) procedure that balances interpretability, faithfulness, and computational efficiency. The framework is trained and validated on a corpus of 219 human-labeled rubrics derived from the GDPVal benchmark. Experiments show that the generated rubrics are concise and human-readable and that they enable flexible trade-offs across dimensions such as correctness and conciseness, significantly enhancing the transparency, auditability, and adaptability of the alignment process.
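
Conceptually, a rubric in this framing is a weighted set of verifiable natural-language criteria whose verdicts are aggregated into a utility-style score. The sketch below is a minimal illustration of that idea only; every name in it (`Criterion`, `rubric_score`, the example criteria) is hypothetical and not the paper's API.

```python
from dataclasses import dataclass


@dataclass
class Criterion:
    """One verifiable natural-language check, e.g. 'cites at least one source'."""
    description: str
    weight: float  # stakeholder-configurable importance


def rubric_score(criteria: list[Criterion], verdicts: dict[str, bool]) -> float:
    """Aggregate per-criterion verdicts into a utility-style scalar reward.

    `verdicts` maps each criterion description to whether a judge (an LLM
    or a programmatic check) found it satisfied in the agent's output.
    """
    total = sum(c.weight for c in criteria)
    if total == 0:
        return 0.0
    satisfied = sum(c.weight for c in criteria if verdicts.get(c.description, False))
    return satisfied / total


# Preferences shift at interaction time by re-weighting, not retraining:
rubric = [
    Criterion("Answer is factually correct", weight=1.0),
    Criterion("Answer is under 100 words", weight=0.3),
]
```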

๐Ÿ“ Abstract
As agents based on large language models are increasingly deployed to long-horizon tasks, maintaining their alignment with stakeholder preferences becomes critical. Effective alignment in such settings requires reward models that are interpretable so that stakeholders can understand and audit model objectives. Moreover, reward models must be capable of steering agents at interaction time, allowing preference shifts to be incorporated without retraining. We introduce ARCANE, a framework that frames alignment as a multi-agent collaboration problem that dynamically represents stakeholder preferences as natural-language rubrics: weighted sets of verifiable criteria that can be generated on-the-fly from task context. Inspired by utility theory, we formulate rubric learning as a reconstruction problem and apply a regularized Group-Sequence Policy Optimization (GSPO) procedure that balances interpretability, faithfulness, and computational efficiency. Using a corpus of 219 labeled rubrics derived from the GDPVal benchmark, we evaluate ARCANE on challenging tasks requiring multi-step reasoning and tool use. The learned rubrics produce compact, legible evaluations and enable configurable trade-offs (e.g., correctness vs. conciseness) without retraining. Our results show that rubric-based reward models offer a promising path toward interpretable, test-time adaptive alignment for complex, long-horizon AI systems.
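
The abstract names regularized Group-Sequence Policy Optimization but does not reproduce the objective. For orientation, the published GSPO formulation optimizes a clipped, length-normalized sequence-level importance ratio with group-normalized rewards; one plausible reading of "regularized" here is an added penalty on rubric complexity. In the sketch below, the ratio \(s_i\) and advantage \(\hat{A}_i\) follow standard GSPO, while \(\lambda\) and \(R(\cdot)\) are my assumed notation for the regularizer, not taken from the paper:

```latex
% Sketch of a regularized GSPO objective over a group of G sampled responses.
% s_i and \hat{A}_i follow the published GSPO formulation; the penalty term
% \lambda R(\text{rubric}) is a guess at where the paper's regularizer enters.
\[
  s_i(\theta) = \left(
    \frac{\pi_\theta(y_i \mid x)}{\pi_{\theta_\text{old}}(y_i \mid x)}
  \right)^{1/|y_i|},
  \qquad
  \hat{A}_i = \frac{r_i - \operatorname{mean}(r_{1:G})}{\operatorname{std}(r_{1:G})}
\]
\[
  J(\theta) = \mathbb{E}\!\left[
    \frac{1}{G} \sum_{i=1}^{G}
    \min\!\big( s_i(\theta)\,\hat{A}_i,\;
    \operatorname{clip}\!\big(s_i(\theta),\, 1-\varepsilon,\, 1+\varepsilon\big)\,\hat{A}_i \big)
  \right]
  \;-\; \lambda\, R(\text{rubric})
\]
```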
Problem

Research questions and friction points this paper is trying to address.

Develop interpretable reward models for long-horizon AI agent alignment
Enable real-time preference steering without retraining through configurable rubrics
Balance interpretability, faithfulness, and efficiency in multi-agent collaboration frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework dynamically represents preferences as natural-language rubrics
Uses regularized Group-Sequence Policy Optimization for interpretable reward learning
Enables configurable trade-offs without retraining via compact rubric evaluations (see the usage sketch after this list)
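
A hedged usage sketch of the "configurable trade-offs without retraining" claim, reusing the hypothetical `Criterion` and `rubric_score` helpers from the earlier sketch: shifting weight between correctness and conciseness criteria changes the reward an agent output receives, with no gradient update anywhere.

```python
# Reuses the hypothetical Criterion / rubric_score helpers sketched above.
verdicts = {
    "Answer is factually correct": True,   # judged satisfied
    "Answer is under 100 words": False,    # judged violated
}

accuracy_first = [
    Criterion("Answer is factually correct", weight=1.0),
    Criterion("Answer is under 100 words", weight=0.2),
]
brevity_first = [
    Criterion("Answer is factually correct", weight=1.0),
    Criterion("Answer is under 100 words", weight=1.0),
]

print(rubric_score(accuracy_first, verdicts))  # ≈ 0.83: correctness dominates
print(rubric_score(brevity_first, verdicts))   # 0.50: the brevity miss now costs half
```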
Charlie Masters
DeepFlow, London, United Kingdom
Marta Grześkiewicz
University of Cambridge, United Kingdom
Stefano V. Albrecht
School of Informatics, University of Edinburgh
Artificial Intelligence · Autonomous Agents · Multi-Agent Systems · Reinforcement Learning