Answer, Assemble, Ace: Understanding How LMs Answer Multiple Choice Questions

📅 2024-07-21
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work investigates how large language models (LLMs) perform formatted multiple-choice question answering (MCQA), and in particular the performance degradation induced by perturbing the order of answer options. Method: Activation patching, vocabulary-space projection, and inter-layer causal attribution are used to localize MCQA decision-making to a sparse set of self-attention heads in middle transformer layers. These heads amplify the token-level probability of the predicted answer symbol, making the symbol prediction causally attributable to them. A synthetic task is designed to disentangle sources of model error, and answer logit margins are tracked across training steps. Contribution/Results: Formatted MCQA capability emerges progressively, with logit differences between answer-choice tokens growing over the course of training. The study also characterizes "symbol adaptation": models differ in how they adjust to alternative answer symbols, yet MCQA success consistently depends on a small subset of attention heads with distinct roles.
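As an illustration of the vocabulary-space projection mentioned above (often called the "logit lens"), the sketch below projects a hidden state through a toy unembedding matrix to read off answer-symbol probabilities. All dimensions, weights, and the 4-token vocabulary are hypothetical stand-ins, not the paper's setup.

```python
import numpy as np

def project_to_vocab(hidden_state, unembedding):
    """Project a hidden state into vocabulary space and softmax ('logit lens')."""
    logits = hidden_state @ unembedding      # (d_model,) @ (d_model, vocab)
    exp = np.exp(logits - logits.max())      # numerically stable softmax
    return exp / exp.sum()

# Toy setup: a 4-token "vocabulary" standing in for answer symbols A-D.
rng = np.random.default_rng(0)
d_model, vocab = 8, 4
W_U = rng.normal(size=(d_model, vocab))      # hypothetical unembedding matrix
h = rng.normal(size=d_model)                 # hidden state at some middle layer

probs = project_to_vocab(h, W_U)             # distribution over A-D
```

Applied at successive layers of a real model, this reveals where the predicted answer symbol's probability begins to rise.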

๐Ÿ“ Abstract
Multiple-choice question answering (MCQA) is a key competence of performant transformer language models that is tested by mainstream benchmarks. However, recent evidence shows that models can have quite a range of performance, particularly when the task format is diversified slightly (such as by shuffling answer choice order). In this work we ask: how do successful models perform formatted MCQA? We employ vocabulary projection and activation patching methods to localize key hidden states that encode relevant information for predicting the correct answer. We find that the prediction of a specific answer symbol is causally attributed to a few middle layers, and specifically their multi-head self-attention mechanisms. We show that subsequent layers increase the probability of the predicted answer symbol in vocabulary space, and that this probability increase is associated with a sparse set of attention heads with unique roles. We additionally uncover differences in how different models adjust to alternative symbols. Finally, we demonstrate that a synthetic task can disentangle sources of model error to pinpoint when a model has learned formatted MCQA, and show that logit differences between answer choice tokens continue to grow over the course of training.
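Activation patching, named in the abstract, swaps an activation from a "clean" run into a "corrupted" run to measure how much that activation causally determines the output. A minimal sketch on a toy two-layer network (the network, weights, and inputs are all hypothetical):

```python
import numpy as np

def run(x, weights, patch=None):
    """Toy 2-layer network; optionally replace the layer-0 activation."""
    h = np.tanh(weights[0] @ x)      # layer-0 activation
    if patch is not None:
        h = patch                    # patch in an activation from another run
    return weights[1] @ h, h

rng = np.random.default_rng(0)
W = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
x_clean = np.array([1.0, 0.0, -1.0])     # e.g. original answer order
x_corrupt = np.array([0.0, 1.0, 0.0])    # e.g. shuffled answer order

_, h_clean = run(x_clean, W)
out_patched, _ = run(x_corrupt, W, patch=h_clean)
out_clean, _ = run(x_clean, W)
# Patching the full layer-0 activation restores the clean output exactly,
# since (in this toy) everything downstream flows through that activation.
```

In the paper's setting the same idea is applied per layer and per attention head, so only components whose patched activation restores the correct answer are flagged as causally relevant.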
Problem

Research questions and friction points this paper is trying to address.

Understand how language models perform multiple-choice question answering.
Identify key hidden states and mechanisms for correct answer prediction.
Analyze model adjustments to different answer symbols and error sources.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vocabulary projection identifies key hidden states encoding the answer.
Activation patching causally localizes prediction to a few middle layers.
A sparse set of attention heads with distinct roles drives the answer probability increase.
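The growing "logit differences between answer choice tokens" can be tracked with a simple margin metric; a sketch with hypothetical token ids:

```python
import numpy as np

def answer_logit_margin(logits, answer_ids, correct_idx):
    """Logit of the correct answer symbol minus the max logit of the others."""
    answer_logits = logits[answer_ids]
    others = np.delete(answer_logits, correct_idx)
    return answer_logits[correct_idx] - others.max()

# Toy final-position logits; ids 0-3 stand in for symbols "A"-"D".
logits = np.array([0.1, 2.0, -1.0, 0.5, 3.0])
margin = answer_logit_margin(logits, answer_ids=[0, 1, 2, 3], correct_idx=1)
# margin = 2.0 - 0.5 = 1.5; tracked across training checkpoints, a growing
# margin signals that formatted MCQA has been learned.
```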