Interpretable Language Modeling via Induction-head Ngram Models

📅 2024-10-31
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Large Transformer models achieve high performance but offer poor interpretability, limiting their deployment in high-stakes domains such as neuroscience. To address this, we propose Induction-Gram, a language model that augments a modern n-gram model with a hand-engineered "induction head". The induction head uses a custom neural similarity metric to retrieve context-aware next-word candidates, enabling fine-grained, n-gram-level tracing of each generated token. The approach pairs strong interpretability with competitive performance: it improves next-word prediction by up to 26 percentage points over interpretable baselines, yields a 20% relative gain in the correlation of predicted fMRI language responses, and supports speculative decoding to speed up inference with large models.
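The core retrieval step can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: Induction-Gram uses a learned neural similarity metric for fuzzy matching, which is replaced here by exact suffix matching, and all names are illustrative.

```python
# Minimal sketch of induction-head style next-word retrieval.
# Idea: find earlier occurrences of the context's own suffix and
# predict the tokens that followed them (longer matches preferred).
from collections import Counter

def induction_predict(context, max_n=5, min_n=2):
    """Return a distribution over tokens that followed earlier
    occurrences of the context's trailing n-gram."""
    for n in range(max_n, min_n - 1, -1):   # prefer longer matches
        if len(context) <= n:
            continue
        suffix = tuple(context[-n:])
        followers = Counter(
            context[i + n]
            for i in range(len(context) - n)
            if tuple(context[i:i + n]) == suffix
        )
        if followers:
            total = sum(followers.values())
            return {tok: c / total for tok, c in followers.items()}
    return {}  # the full method would fall back to a base n-gram model

toks = "the cat sat on the mat and the cat sat on the".split()
print(induction_predict(toks))  # → {'mat': 1.0}
```

Because every prediction comes from a concrete match inside the input context, each generated token can be traced back to the n-gram that produced it, which is the source of the method's interpretability.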

📝 Abstract
Recent large language models (LLMs) have excelled across a wide range of tasks, but their use in high-stakes and compute-limited settings has intensified the demand for interpretability and efficiency. We address this need by proposing Induction-head ngram models (Induction-Gram), a method that builds an efficient, interpretable LM by bolstering modern ngram models with a hand-engineered "induction head". This induction head uses a custom neural similarity metric to efficiently search the model's input context for potential next-word completions. This process enables Induction-Gram to provide ngram-level grounding for each generated token. Moreover, experiments show that this simple method significantly improves next-word prediction over baseline interpretable models (up to 26%p) and can be used to speed up LLM inference for large models through speculative decoding. We further study Induction-Gram in a natural-language neuroscience setting, where the goal is to predict the next fMRI response in a sequence. It again provides a significant improvement over interpretable models (20% relative increase in the correlation of predicted fMRI responses), potentially enabling deeper scientific investigation of language selectivity in the brain. The code is available at https://github.com/ejkim47/induction-gram.
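The speculative-decoding use mentioned above can be sketched as a draft-and-verify loop: a cheap model (here, the role Induction-Gram would play) proposes several tokens, and the large model accepts the prefix it agrees with. This is a greedy, deterministic simplification, not the paper's implementation; `draft` and `verify` are hypothetical interfaces.

```python
# Hedged sketch of speculative decoding with a cheap draft model.
# draft(ctx)  -> next token from the cheap model (e.g. Induction-Gram)
# verify(ctx) -> next token the large model would emit
def speculative_decode(draft, verify, context, k=4, steps=8):
    """Generate at least `steps` tokens, drafting k at a time."""
    out = list(context)
    while len(out) < len(context) + steps:
        proposal = []
        for _ in range(k):                  # draft k tokens cheaply
            proposal.append(draft(out + proposal))
        for tok in proposal:                # large model checks each draft
            if verify(out) == tok:
                out.append(tok)             # accepted: token came for free
            else:
                out.append(verify(out))     # rejected: take the LLM's token
                break                       # re-draft from the new prefix
    return out[len(context):]

# Toy demo: both models emit the parity of the context length,
# so every drafted token is accepted.
agree = lambda ctx: len(ctx) % 2
print(speculative_decode(agree, agree, [0], k=2, steps=4))  # → [1, 0, 1, 0]
```

The speedup comes from the acceptance rate: the better the cheap model predicts the large model's next tokens, the fewer sequential large-model calls are needed per generated token.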
Problem

Research questions and friction points this paper is trying to address.

Addressing interpretability limitations in high-stakes transformer applications
Developing interpretable next-token prediction via generalized induction heads
Bridging performance gaps between interpretable models and black-box LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses interpretable Generalized Induction-Head Model
Combines n-gram matching with neural similarity
Applies to language modeling and fMRI prediction