🤖 AI Summary
This work addresses the challenge of overly aggressive or insufficient intervention in language model error correction. We propose MERA, a selective and adaptive mechanistic intervention framework. Its core innovation lies in jointly optimising intervention directions (derived from mechanistic activation analysis) and confidence-aware intervention decisions, while incorporating an active abstention mechanism that refrains from correction when reliability is low. MERA is the first method to combine provably improved performance with theoretically grounded abstention guarantees, enabling dynamic calibration of intervention timing and intensity; it is also modular and compatible with existing steering techniques. Extensive experiments across multiple models and datasets demonstrate that MERA significantly outperforms baselines, achieving safe, robust, and non-degrading error correction.
📝 Abstract
We introduce Mechanistic Error Reduction with Abstention (MERA), a principled framework for steering language models (LMs) to mitigate errors through selective, adaptive interventions. Unlike existing methods that rely on fixed, manually tuned steering strengths, which often result in under- or over-steering, MERA (i) optimises the intervention direction, and (ii) calibrates when and how much to steer, thereby provably improving performance or abstaining when no confident correction is possible. Experiments across diverse datasets and LM families demonstrate safe, effective, and non-degrading error correction, and show that MERA outperforms existing baselines. Moreover, MERA can be applied on top of existing steering techniques to further enhance their performance, establishing it as a general-purpose and efficient approach to mechanistic activation steering.
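To make the abstract's core idea concrete, the sketch below illustrates confidence-gated activation steering with abstention in the general spirit described above. It is a minimal, hypothetical illustration, not MERA's actual algorithm: the function name, the threshold `tau`, the strength schedule, and the cap `max_strength` are all assumptions introduced for exposition, and the paper's calibrated decision rule and optimised direction are replaced by placeholder inputs.

```python
import torch


def confidence_gated_steering(
    activation: torch.Tensor,   # hidden activation at the steered layer, shape (d,)
    direction: torch.Tensor,    # candidate error-correction direction, shape (d,)
    confidence: float,          # calibrated confidence that steering helps, in [0, 1]
    tau: float = 0.7,           # abstention threshold (hypothetical value)
    max_strength: float = 4.0,  # cap on steering magnitude (hypothetical value)
) -> tuple[torch.Tensor, bool]:
    """Apply a confidence-scaled steering update, or abstain.

    Returns the (possibly unchanged) activation and a flag indicating
    whether an intervention was applied.
    """
    # Abstain: below the threshold, leave the activation untouched so that
    # behaviour on inputs the model already handles correctly is not degraded.
    if confidence < tau:
        return activation, False

    # Scale intervention strength with confidence above the threshold, so
    # reliable corrections steer harder and borderline ones steer gently.
    strength = max_strength * (confidence - tau) / (1.0 - tau)
    unit_dir = direction / direction.norm()
    return activation + strength * unit_dir, True


# Toy usage: random tensors stand in for real model activations and directions.
if __name__ == "__main__":
    torch.manual_seed(0)
    h = torch.randn(768)
    v = torch.randn(768)
    steered, applied = confidence_gated_steering(h, v, confidence=0.9)
    print(applied, (steered - h).norm().item())
```

In this reading, "when to steer" corresponds to the abstention test and "how much to steer" to the confidence-dependent strength; MERA's contribution, per the abstract, is to make both decisions principled and calibrated rather than fixed by hand.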