In-Context Compositional Learning via Sparse Coding Transformer

📅 2025-11-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Transformers exhibit limited compositional generalization—particularly in tasks requiring structured rule inference from contextual examples—due to their lack of explicit structural inductive bias. To address this, we propose Sparse-Attention: a reformulation of standard self-attention as a sparse representation framework grounded in learnable encoding/decoding dictionaries. By imposing sparsity constraints on attention coefficients, Sparse-Attention explicitly models the combinatorial structure of inputs; context-dependent sparse linear combinations then enable systematic compositionality. Crucially, this approach enhances structural relational modeling without altering the underlying network architecture. On the S-RAVEN and RAVEN benchmarks—canonical diagnostic suites for compositional reasoning—Sparse-Attention achieves substantial improvements over standard Transformers and demonstrates robust generalization across diverse compositional inference tasks. These results empirically validate sparse representations as an effective structural inductive bias for enhancing compositional generalization in attention-based models.

📝 Abstract
Transformer architectures have achieved remarkable success across language, vision, and multimodal tasks, and there is growing demand for them to address in-context compositional learning tasks. In these tasks, models solve the target problems by inferring compositional rules from context examples, which are composed of basic components structured by underlying rules. However, some of these tasks remain challenging for Transformers, which are not inherently designed to handle compositional tasks and offer limited structural inductive bias. In this work, inspired by the principle of sparse coding, we propose a reformulation of the attention block to enhance its capability on compositional tasks. In sparse coding, data are represented as sparse combinations of dictionary atoms with coefficients that capture their compositional rules. Specifically, we reinterpret the attention block as a mapping of inputs into outputs through projections onto two sets of learned dictionary atoms: an encoding dictionary and a decoding dictionary. The encoding dictionary decomposes the input into a set of coefficients, which represent the compositional structure of the input. To enhance structured representations, we impose sparsity on these coefficients. The sparse coefficients are then used to linearly combine the decoding dictionary atoms to generate the output. Furthermore, to support compositional generalization, we propose estimating the coefficients of the target problem as a linear combination of the coefficients obtained from the context examples. We demonstrate the effectiveness of our approach on the S-RAVEN and RAVEN datasets. For certain compositional generalization tasks, our method maintains performance even when standard Transformers fail, owing to its ability to learn and apply compositional rules.
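The encode → sparsify → decode pipeline in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the dictionary sizes, the hard top-k sparsification, and all names are assumptions made for the sketch.

```python
import numpy as np

def sparse_attention(X, D_enc, D_dec, k_active=2):
    """Map inputs to outputs via sparse codes over two learned dictionaries.

    X:        (n, d) input tokens
    D_enc:    (m, d) encoding dictionary atoms
    D_dec:    (m, d) decoding dictionary atoms
    k_active: nonzero coefficients kept per token (top-k hard
              sparsification is an illustrative choice, not the paper's).
    """
    scores = X @ D_enc.T / np.sqrt(X.shape[-1])       # (n, m) projections
    # Keep only the k_active largest scores per row, zero out the rest.
    thresh = np.sort(scores, axis=-1)[:, -k_active][:, None]
    masked = np.where(scores >= thresh, scores, -np.inf)
    coeffs = np.exp(masked - masked.max(-1, keepdims=True))
    coeffs /= coeffs.sum(-1, keepdims=True)           # sparse rows, sum to 1
    return coeffs @ D_dec, coeffs                     # (n, d) outputs, codes

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
D_enc = rng.normal(size=(16, 8))
D_dec = rng.normal(size=(16, 8))
out, coeffs = sparse_attention(X, D_enc, D_dec)
print(out.shape, (coeffs > 0).sum(axis=-1))  # (4, 8), 2 active atoms per token
```

The sparse coefficient rows play the role of the attention weights; because only a few atoms are active per token, the code makes the combinatorial structure of each input explicit.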
Problem

Research questions and friction points this paper is trying to address.

Transformers struggle with in-context compositional learning tasks
Limited structural inductive bias hinders compositional rule inference
Can sparse coding give attention the structural bias needed for compositional generalization?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse coding reformulates attention mechanism
Imposes sparsity on compositional input coefficients
Estimates target coefficients from context examples
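The last point, predicting the target problem's sparse code from the context examples' codes, amounts to a weighted combination of coefficient vectors. In the sketch below the combination weights are given explicitly as a toy assumption; in the paper they are produced by the model.

```python
import numpy as np

def estimate_target_code(context_codes, weights):
    """Estimate the target's sparse code as a linear combination
    of the sparse codes inferred from the context examples.

    context_codes: (c, m) coefficient vectors of the context examples
    weights:       (c,)   combination weights (learned in the paper;
                          fixed here purely for illustration)
    """
    return weights @ context_codes  # (m,)

codes = np.array([[0.7, 0.3, 0.0, 0.0],
                  [0.0, 0.6, 0.4, 0.0]])
w = np.array([0.5, 0.5])
print(estimate_target_code(codes, w))  # [0.35 0.45 0.2  0.  ]
```

Because the combination is over codes rather than raw inputs, rules shared across context examples are reflected in overlapping active atoms, which is what lets the estimate transfer to the unseen target.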