Parallelograms Strike Back: LLMs Generate Better Analogies than People

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether the classic “parallelogram” model of analogical reasoning is genuinely flawed or merely constrained by human limitations in generating relationally consistent analogies. By comparing human and large language model (LLM) performance on four-term analogy tasks—and integrating analyses of GloVe embedding geometry, human quality ratings, and lexical frequency statistics—the research demonstrates that LLM-generated analogies adhere more closely to the parallelogram structure and achieve higher overall quality. This advantage stems not from enhanced local semantic similarity but from superior relational consistency and more effective utilization of low-frequency vocabulary. The findings suggest that the parallelogram model remains valid; observed human underperformance primarily reflects cognitive constraints rather than inherent deficiencies in the model itself.

📝 Abstract
Four-term word analogies (A:B::C:D) are classically modeled geometrically as "parallelograms," yet recent work suggests this model poorly captures how humans produce analogies, with simple local-similarity heuristics often providing a better account (Peterson et al., 2020). But does the parallelogram model fail because it is a bad model of analogical relations, or because people are not very good at generating relation-preserving analogies? We compared human and large language model (LLM) analogy completions on the same set of analogy problems from Peterson et al. (2020). We find that LLM-generated analogies are reliably judged as better than human-generated ones, and are also more closely aligned with the parallelogram structure in a distributional embedding space (GloVe). Crucially, we show that the improvement over human analogies was driven by greater parallelogram alignment and reduced reliance on accessible words rather than enhanced sensitivity to local similarity. Moreover, the LLM advantage is driven not by uniformly superior responses by LLMs, but by humans producing a long tail of weak completions: when only modal (most frequent) responses by both systems are compared, the LLM advantage disappears. However, greater parallelogram alignment and lower word frequency continue to predict which LLM completions are rated higher than those of humans. Overall, these results suggest that the parallelogram model is not a poor account of word analogy. Rather, humans may often fail to produce completions that satisfy this relational constraint, whereas LLMs do so more consistently.
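The parallelogram model described in the abstract completes A:B::C:D by finding the word D whose embedding is closest to B - A + C. A minimal sketch of that completion rule, using small hand-made vectors rather than real GloVe embeddings (the words, vectors, and function names here are illustrative assumptions, not from the paper):

```python
import numpy as np

# Toy "embeddings" (hypothetical 3-d vectors, not real GloVe).
emb = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([1.0, 1.0, 0.2]),
    "king":  np.array([0.2, 0.0, 1.0]),
    "queen": np.array([0.2, 1.0, 1.0]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def parallelogram_complete(a, b, c, vocab):
    """Return the word D (excluding A, B, C) whose embedding is most
    similar to emb[B] - emb[A] + emb[C] -- the parallelogram rule."""
    target = emb[b] - emb[a] + emb[c]
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

print(parallelogram_complete("man", "woman", "king", emb))  # → queen
```

With real GloVe vectors the same rule is applied over the full vocabulary; the paper's analyses measure how closely human and LLM completions align with this target direction.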
Problem

Research questions and friction points this paper is trying to address.

word analogies
parallelogram model
large language models
human analogy generation
relational reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

parallelogram model
large language models
word analogies
distributional semantics
relational reasoning