AI Summary
To address the vulnerability of error cascading in large language model-driven multi-agent systems (MAS), this paper proposes an unsupervised metacognitive self-correction framework. The method models anomaly detection as a history-conditioned execution-step reconstruction task, incorporating prototype priors to enhance representation stability under sparse contextual conditions. It further integrates causal consistency modeling with an anomaly scoring mechanism to enable fine-grained, real-time, step-level error identification and targeted correction: a correction agent dynamically intervenes via next-step reconstruction and prototype-guided embedding learning. Evaluated on the Who&When benchmark, the framework achieves up to an 8.47% improvement in error-detection AUC-ROC and delivers consistent end-to-end performance gains across diverse MAS architectures, while incurring negligible computational overhead.
Abstract
Large language model (LLM)-based multi-agent systems (MAS) excel at collaborative problem solving but remain brittle to cascading errors: a single faulty step can propagate across agents and derail the trajectory. In this paper, we present MASC, a metacognitive framework that endows MAS with real-time, unsupervised, step-level error detection and self-correction. MASC reframes detection as history-conditioned anomaly scoring via two complementary designs: (1) Next-Execution Reconstruction, which predicts the embedding of the next step from the query and interaction history to capture causal consistency, and (2) Prototype-Guided Enhancement, which learns a prototype prior over normal-step embeddings and uses it to stabilize reconstruction and anomaly scoring under sparse context (e.g., early steps). When an anomalous step is flagged, MASC triggers a correction agent to revise the acting agent's output before information flows downstream. On the Who&When benchmark, MASC consistently outperforms all baselines, improving step-level error detection by up to 8.47% AUC-ROC. When plugged into diverse MAS frameworks, it delivers consistent end-to-end gains across architectures, confirming that metacognitive monitoring and targeted correction can mitigate error propagation with minimal overhead.
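The scoring-and-correction loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the Euclidean distances, the mixing weight `alpha`, and the threshold are all assumptions; in MASC the reconstructor and prototype are learned, and embeddings come from the underlying LLM pipeline.

```python
import numpy as np

def anomaly_score(predicted_step, actual_step, prototype, alpha=0.7):
    """Blend a causal-consistency term with a prototype prior (illustrative).

    predicted_step : embedding reconstructed from the query + interaction history
    actual_step    : embedding of the step the acting agent actually produced
    prototype      : prior over normal-step embeddings (assumed precomputed here)
    alpha          : assumed mixing weight between the two error terms
    """
    recon_err = np.linalg.norm(predicted_step - actual_step)  # causal consistency
    proto_err = np.linalg.norm(prototype - actual_step)       # deviation from normal prototype
    return alpha * recon_err + (1 - alpha) * proto_err

def flag_and_correct(score, threshold, correct_fn, step_output):
    # If the step is flagged as anomalous, a correction agent revises the
    # output before it flows downstream; otherwise it passes through unchanged.
    return correct_fn(step_output) if score > threshold else step_output
```

A step whose actual embedding matches both the history-conditioned prediction and the prototype scores zero and passes through; a step that drifts from both accrues a high score and is routed to the correction agent. The prototype term matters most early in a trajectory, when the history is too sparse for reconstruction alone to be reliable.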