Diagnosing Structural Failures in LLM-Based Evidence Extraction for Meta-Analysis

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models struggle to reliably extract structured evidence for systematic reviews and meta-analyses, often exhibiting critical structural errors such as role reversal, cross-analysis binding drift, and misattribution of effect sizes. To address this, the work proposes a structured diagnostic framework that systematically evaluates model performance under single- and multi-document long-context settings, incrementally increasing relational and numerical complexity through pattern-constrained queries. Using a curated, expert-annotated corpus spanning five domains, a unified query suite, and a standardized evaluation protocol, the study finds that while models perform adequately on isolated attribute extraction, their accuracy declines sharply on tasks requiring joint binding of roles, methods, and effect sizes. Full association-tuple extraction succeeds with near-zero reliability, and upstream extraction errors are substantially amplified during downstream evidence aggregation.

📝 Abstract
Systematic reviews and meta-analyses rely on converting narrative articles into structured, numerically grounded study records. Despite rapid advances in large language models (LLMs), it remains unclear whether they can meet the structural requirements of this process, which hinge on preserving roles, methods, and effect-size attribution across documents rather than on recognizing isolated entities. We propose a structural, diagnostic framework that evaluates LLM-based evidence extraction as a progression of schema-constrained queries with increasing relational and numerical complexity, enabling precise identification of failure points beyond atom-level extraction. Using a manually curated corpus spanning five scientific domains, together with a unified query suite and evaluation protocol, we evaluate two state-of-the-art LLMs under both per-document and long-context, multi-document input regimes. Across domains and models, performance remains moderate for single-property queries but degrades sharply once tasks require stable binding between variables, roles, statistical methods, and effect sizes. Full meta-analytic association tuples are extracted with near-zero reliability, and long-context inputs further exacerbate these failures. Downstream aggregation amplifies even minor upstream errors, rendering corpus-level statistics unreliable. Our analysis shows that these limitations stem not from entity recognition errors, but from systematic structural breakdowns, including role reversals, cross-analysis binding drift, instance compression in dense result sections, and numeric misattribution, indicating that current LLMs lack the structural fidelity, relational binding, and numerical grounding required for automated meta-analysis. The code and data are publicly available at GitHub (https://github.com/zhiyintan/LLM-Meta-Analysis).
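As an illustrative sketch only (not the authors' actual annotation schema; all field names here are assumptions), the "meta-analytic association tuple" the abstract describes — a joint binding of variables, roles, statistical method, and effect size — can be modeled as a record whose correctness is all-or-nothing:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssociationTuple:
    # Hypothetical schema illustrating the joint bindings evaluated in the
    # paper; field names are illustrative assumptions, not the paper's format.
    exposure: str        # independent variable (role: predictor)
    outcome: str         # dependent variable (role: outcome)
    method: str          # statistical method, e.g. "odds ratio"
    effect_size: float   # reported numeric effect
    ci_lower: float      # lower bound of confidence interval
    ci_upper: float      # upper bound of confidence interval

t = AssociationTuple("smoking", "lung cancer", "odds ratio", 2.3, 1.8, 2.9)

# A single role reversal (swapping exposure and outcome) produces a
# different, wrong record even though every entity was recognized correctly
# — the kind of structural failure the paper distinguishes from
# entity-recognition errors.
reversed_t = AssociationTuple(t.outcome, t.exposure, t.method,
                              t.effect_size, t.ci_lower, t.ci_upper)
assert t != reversed_t
```

This all-fields-must-bind property is why per-field accuracy can look moderate while full-tuple accuracy collapses: one drifted binding or misattributed number invalidates the whole record.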
Problem

Research questions and friction points this paper is trying to address.

structural failure
evidence extraction
meta-analysis
large language models
relational binding
Innovation

Methods, ideas, or system contributions that make the work stand out.

structural diagnosis
evidence extraction
relational binding
numerical grounding
meta-analysis