Blackbird Language Matrices: A Framework to Investigate the Linguistic Competence of Language Models

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models can recognize linguistic structures and systematic patterns, and whether they distinguish between linguistic and reasoning errors. To this end, the authors introduce the Blackbird Language Matrices task—a structured multiple-choice framework inspired by intelligence tests—and construct a multilayered dataset that combines synthetically designed and natural language instances. This dataset enables a systematic evaluation of models' linguistic competence and their capacity for systematic generalization. Experimental results show that models can identify grammatical objects and their attributes, achieve high performance by exploiting systematic patterns across sentences, and remain robust in multilingual settings. This work establishes a novel evaluation paradigm for assessing linguistic understanding and interpretability in language models.

📝 Abstract
This article describes a novel language task, the Blackbird Language Matrices (BLM) task, inspired by intelligence tests, and illustrates the BLM datasets, their construction and benchmarking, and targeted experiments on chunking and systematicity. BLMs are multiple-choice problems, structured at multiple levels: within each sentence, across the input sequence, and within each candidate answer. Because of their rich structure, these curated but naturalistic datasets are key to answering some core questions about the abilities of current large language models: Do LLMs detect linguistic objects and their properties? Do they detect and use systematic patterns across sentences? Are they more prone to linguistic or reasoning errors, and how do these interact? We show that BLMs, while challenging, can be solved at good levels of performance, in more than one language, with simple baseline models or, at better performance levels, with more tailored models. We show that the models' representations contain the grammatical objects and attributes relevant to solving a linguistic task. We also show that these solutions are reached by detecting systematic patterns across sentences. The paper supports the view that curated, structured datasets enable multi-faceted investigations of the properties of language and of large language models. Because they present a curated, articulated structure, because they comprise both learning contexts and expected answers, and because they are partly built by hand, BLMs fall into the category of datasets that can support explainability investigations and help ask why large language models behave the way they do.
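To make the task structure concrete, the multiple-choice setup described in the abstract can be sketched as a simple data structure: a context sequence of sentences exhibiting a systematic pattern, a set of candidate answers, and the index of the correct continuation. The class and field names below are illustrative assumptions, not the authors' actual data format, and the toy agreement example is invented for demonstration.

```python
# Hypothetical sketch of a BLM-style item; names and fields are
# illustrative assumptions, not the paper's actual data schema.
from dataclasses import dataclass


@dataclass
class BLMItem:
    """One matrix problem: an ordered context sequence that exhibits a
    systematic pattern, plus multiple-choice candidate continuations."""
    context: list[str]      # ordered input sentences
    candidates: list[str]   # candidate answers, exactly one correct
    answer_index: int       # index of the correct candidate


def accuracy(items: list[BLMItem], predict) -> float:
    """Fraction of items where predict(item) returns the correct index."""
    correct = sum(predict(item) == item.answer_index for item in items)
    return correct / len(items)


# Toy item: a subject-verb agreement pattern across the sequence.
item = BLMItem(
    context=[
        "The cat sleeps.",
        "The cats sleep.",
        "The dog sleeps.",
    ],
    candidates=["The dogs sleep.", "The dogs sleeps."],
    answer_index=0,
)

# A trivial baseline that always picks the first candidate.
print(accuracy([item], lambda it: 0))  # prints 1.0 on this single toy item
```

A real evaluation would replace the trivial baseline with a model that scores each candidate given the context, but the containerized item format above is enough to express both the benchmarking and the error-analysis splits the paper describes.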
Problem

Research questions and friction points this paper is trying to address.

linguistic competence
systematicity
language models
chunking
explainability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Blackbird Language Matrices
systematicity
linguistic competence
structured datasets
explainability