Schema for In-Context Learning

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional in-context learning (ICL) lacks explicit mechanisms for abstract knowledge retrieval and transfer, hindering large language models’ ability to implicitly construct and leverage schema-based reasoning representations. To address this, we propose Schema-Activated ICL (SA-ICL), the first ICL framework integrating cognitive science’s schema theory. SA-ICL introduces lightweight, structured abstract reasoning templates that explicitly model task-invariant patterns, unifying chain-of-thought and schema-guided reasoning strategies. Built upon Transformer architectures, it extracts cognitive primitives from demonstrations and dynamically activates task-adaptive schemas to enhance inference. Evaluated on the GPQA chemistry/physics subset, SA-ICL achieves up to a 36.19% absolute improvement in single-shot accuracy, substantially reduces dependence on demonstration count, and improves both interpretability and human-like reasoning fidelity.

📝 Abstract
In-Context Learning (ICL) enables transformer-based language models to adapt to new tasks by conditioning on demonstration examples. However, traditional example-driven in-context learning lacks explicit modules for knowledge retrieval and transfer at the abstraction level. Inspired by cognitive science, specifically schema theory, which holds that humans interpret new information by activating pre-existing mental frameworks (schemas) to structure understanding, we introduce SCHEMA ACTIVATED IN CONTEXT LEARNING (SA-ICL). This framework extracts a representation of the building blocks of cognition underlying the reasoning process instilled by prior examples, creating an abstracted schema: a lightweight, structured template of key inferential steps and their relationships, which is then used to augment a model's reasoning process when presented with a novel question. We demonstrate that a broad range of large language models (LLMs) lack the capacity to form and utilize internal schema-based learning representations implicitly, but instead benefit significantly from explicit schema-based scaffolding. Across chemistry and physics questions from the GPQA dataset, our experiments show that SA-ICL consistently boosts performance, by up to 36.19 percent, when the single demonstration example is of high quality, while simultaneously reducing reliance on the number of demonstrations and enhancing interpretability. SCHEMA ACTIVATED IN CONTEXT LEARNING not only bridges disparate ICL strategies ranging from pattern priming to Chain-of-Thought prompting, but also paves a new path for enhancing human-like reasoning in LLMs.
Problem

Research questions and friction points this paper is trying to address.

Traditional in-context learning lacks explicit knowledge abstraction modules
Large language models struggle with implicit schema-based reasoning formation
Current methods require many demonstrations and offer limited interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Schema Activated In-Context Learning framework introduced
Abstract schema extracted from prior examples used
Explicit schema-based scaffolding boosts model performance
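The schema-activation idea described above can be sketched in a few lines: extract an abstract reasoning template from a demonstration, then use it to scaffold a prompt for a novel question. This is a minimal illustrative sketch, not the authors' implementation; in SA-ICL the extraction step would itself be performed by an LLM pass, and the `Schema` fields and prompt layout here are assumptions.

```python
# Hypothetical sketch of schema-activated prompting (SA-ICL).
# The Schema fields and prompt layout are illustrative assumptions,
# not code from the paper.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Schema:
    """A lightweight, structured template of key inferential steps."""
    task_pattern: str                                # task-invariant pattern label
    steps: List[str] = field(default_factory=list)   # ordered abstract reasoning steps


def extract_schema(demonstration: str) -> Schema:
    """Placeholder extraction: in SA-ICL an LLM pass would abstract the
    demonstration's reasoning into a schema. Here we simply tag each
    line of the worked solution as an abstract step."""
    steps = [f"Step: {line.strip()}"
             for line in demonstration.splitlines() if line.strip()]
    return Schema(task_pattern="worked-example reasoning", steps=steps)


def build_prompt(schema: Schema, question: str) -> str:
    """Activate the schema as explicit scaffolding for a novel question."""
    scaffold = "\n".join(schema.steps)
    return (
        f"Reasoning schema ({schema.task_pattern}):\n{scaffold}\n\n"
        f"Apply the schema above to answer:\n{question}"
    )


demo = "Identify knowns\nChoose governing equation\nSolve and check units"
prompt = build_prompt(
    extract_schema(demo),
    "A 2 kg mass accelerates at 3 m/s^2; find the net force.",
)
print(prompt)
```

The key design point the paper argues for is that this scaffolding must be explicit: the schema is materialized as text in the prompt rather than left for the model to form implicitly from raw demonstrations.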
Pan Chen
Department of Computer Science, University of Toronto, Sandford Fleming Building, 10 King’s College Road, ON M5S 3G4, Toronto, Canada
Shaohong Chen
Department of Computer Science, University of Toronto, Sandford Fleming Building, 10 King’s College Road, ON M5S 3G4, Toronto, Canada
Mark Wang
Department of Computer Science, University of Toronto, Sandford Fleming Building, 10 King’s College Road, ON M5S 3G4, Toronto, Canada
Shi Xuan Leong
School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore, Singapore
Priscilla Fung
Department of Psychology, University of Toronto, Sidney Smith Hall, 100 St. George Street, ON M5S 3G3, Toronto, Canada
Varinia Bernales
University of Toronto
Theoretical Chemistry · Catalysis · Green Chemistry
Alan Aspuru-Guzik
Professor of Chemistry and Computer Science, University of Toronto (starting July 1, 2018)
Theoretical Chemistry · Quantum Information Science · Physical Chemistry · Energy Materials · Machine Learning