Constructive Circuit Amplification: Improving Math Reasoning in LLMs via Targeted Sub-Network Updates

📅 2025-12-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited mathematical reasoning capabilities of large language models (LLMs) and the degradation of generalization caused by full-parameter fine-tuning, this paper proposes the “Constructive Circuit Amplification” paradigm. It first identifies critical tokens and their associated functionally sparse subnetworks, termed *reasoning circuits*, via causal tracing; it then applies gradient masking so that parameter updates are confined exclusively to the selected circuits. This approach decouples task-specific enhancement from multi-task robustness, circumventing global fine-tuning. Evaluated on multiple mainstream LLMs, the method achieves up to an 11.4% absolute gain in mathematical reasoning accuracy while modifying only 1.59% of parameters, with near-zero degradation on general benchmarks such as MMLU. To our knowledge, this is the first systematic framework for targeted, interpretable, and minimally invasive capability enhancement of LLMs via sparse subnetwork intervention.
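The gradient-masking step can be pictured with a minimal sketch. This is not the paper's implementation: parameters and gradients are plain Python lists, the hypothetical `circuit_mask` marks which components belong to the identified circuit, and the update is a single SGD step whose gradients are zeroed outside the circuit.

```python
def masked_sgd_step(params, grads, circuit_mask, lr=0.1):
    """Apply an SGD update only to parameters inside the circuit.

    params       -- list of parameter values
    grads        -- list of gradients, same length as params
    circuit_mask -- list of 0/1 flags; 1 = parameter is in the circuit
    """
    return [
        p - lr * g if m else p  # parameters outside the circuit stay frozen
        for p, g, m in zip(params, grads, circuit_mask)
    ]

# Toy usage: only the first and third parameters belong to the circuit,
# so only they move; the second parameter is untouched.
params = [1.0, 2.0, 3.0]
grads = [0.5, 0.5, 0.5]
mask = [1, 0, 1]
updated = masked_sgd_step(params, grads, mask, lr=0.1)
# updated is approximately [0.95, 2.0, 2.95]
```

In a real model the same idea is typically realized by multiplying each gradient tensor by a binary mask before the optimizer step, which leaves the optimizer itself unchanged.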

📝 Abstract
Prior studies investigating the internal workings of LLMs have uncovered sparse subnetworks, often referred to as circuits, that are responsible for performing specific tasks. Additionally, it has been shown that model performance improvement through fine-tuning often results from the strengthening of existing circuits in the model. Taken together, these findings suggest the possibility of intervening directly on such circuits to make precise, task-targeted updates. Motivated by these findings, we propose a novel method called Constructive Circuit Amplification which identifies pivotal tokens from model reasoning traces as well as model components responsible for the desired task, and updates only those components. Applied to mathematical reasoning, it improves accuracy by up to +11.4% across multiple models while modifying as little as 1.59% of model components, with minimal impact on other abilities as measured by MMLU, TriviaQA, and TruthfulQA. These results demonstrate that targeted capabilities can be reliably enhanced by selectively updating a sparse set of model components.
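One way to picture the circuit-identification step is ablation-based importance scoring, a toy simplification of causal tracing: knock out one component at a time, measure how much a task score drops, and keep the top-scoring sparse subset as the "circuit". Everything below is illustrative; the toy additive model and the `contributions` values are assumptions, not the paper's procedure.

```python
def score_components(contributions):
    """Score each component by how much the toy additive output drops
    when that component is ablated (removed). Larger drop = more important."""
    full = sum(contributions)
    scores = []
    for i in range(len(contributions)):
        ablated = sum(c for j, c in enumerate(contributions) if j != i)
        scores.append(full - ablated)  # causal effect of removing component i
    return scores

def select_circuit(scores, k):
    """Indices of the k highest-scoring components: the sparse circuit."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

# Toy: four components with unequal contributions to the task output.
contributions = [0.1, 0.9, 0.05, 0.6]
scores = score_components(contributions)
circuit = select_circuit(scores, k=2)  # indices of the two key components
```

With the toy values above, components 1 and 3 dominate, so the selected circuit covers only half the components, mirroring the paper's point that a small fraction of the model carries the targeted capability.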
Problem

Research questions and friction points this paper is trying to address.

LLMs exhibit limited accuracy on mathematical reasoning tasks.
Full-parameter fine-tuning improves one task at the cost of degraded generalization.
It is unclear how to enhance a targeted capability without harming other abilities.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies pivotal tokens and task-specific model components
Updates only targeted sub-networks for precise task improvement
Enhances math reasoning accuracy with minimal component modifications