LiTEx: A Linguistic Taxonomy of Explanations for Understanding Within-Label Variation in Natural Language Inference

📅 2025-05-28
🤖 AI Summary
Natural language inference (NLI) exhibits within-label variation: annotators agree on the label for a premise-hypothesis pair yet diverge substantially in their free-text explanations and highlighted evidence spans, a phenomenon that has lacked systematic linguistic characterization. Method: The authors propose LITEX, a linguistically informed taxonomy for free-text explanations covering dimensions such as inferential grounds, logical relations, and semantic focus; they manually annotate a subset of e-SNLI with it and validate its reliability (Cohen's κ > 0.8). Contribution/Results: Conditioning explanation generation on LITEX yields explanations that are linguistically closer to human ones than generation from labels or highlights alone, improving BERTScore similarity with human annotations by 12.7%. The analysis further reveals misalignment among labels, annotator highlights, and the underlying reasoning, pointing toward more interpretable NLI modeling.
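The taxonomy's reliability is reported as Cohen's κ > 0.8. As a reminder of what that statistic measures, here is a minimal pure-Python sketch of Cohen's κ for two annotators; the label sequences below are hypothetical illustrations, not data from the paper:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' nominal label sequences."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items with identical labels.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: probability both pick the same label independently.
    ca, cb = Counter(a), Counter(b)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical NLI labels from two annotators (illustrative only).
ann1 = ["entailment", "neutral", "contradiction", "entailment", "neutral"]
ann2 = ["entailment", "neutral", "contradiction", "neutral", "neutral"]
print(cohens_kappa(ann1, ann2))  # ≈ 0.69
```

κ corrects raw agreement for agreement expected by chance, which is why values above 0.8 are conventionally read as strong reliability.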

📝 Abstract
There is increasing evidence of Human Label Variation (HLV) in Natural Language Inference (NLI), where annotators assign different labels to the same premise-hypothesis pair. However, within-label variation (cases where annotators agree on the same label but provide divergent reasoning) poses an additional and mostly overlooked challenge. Several NLI datasets contain highlighted words in the NLI item as explanations, but the same spans on the NLI item can be highlighted for different reasons, as evidenced by free-text explanations, which offer a window into annotators' reasoning. To systematically understand this problem and gain insight into the rationales behind NLI labels, we introduce LITEX, a linguistically-informed taxonomy for categorizing free-text explanations. Using this taxonomy, we annotate a subset of the e-SNLI dataset, validate the taxonomy's reliability, and analyze how it aligns with NLI labels, highlights, and explanations. We further assess the taxonomy's usefulness in explanation generation, demonstrating that conditioning generation on LITEX yields explanations that are linguistically closer to human explanations than those generated using only labels or highlights. Our approach thus not only captures within-label variation but also shows how taxonomy-guided generation for reasoning can bridge the gap between human and model explanations more effectively than existing strategies.
Problem

Research questions and friction points this paper is trying to address.

Understanding within-label variation in NLI annotations
Categorizing free-text explanations using a linguistic taxonomy
Improving explanation generation with taxonomy-guided reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linguistic taxonomy for free-text explanations
Annotation of an e-SNLI subset with the taxonomy
Taxonomy-guided explanation generation
Pingjun Hong
MaiNLP, Center for Information and Language Processing, LMU Munich, Germany
Beiduo Chen
ELLIS PhD Student, Ludwig-Maximilians-Universität München
Linguistics, Natural Language Processing
Siyao Peng
MaiNLP, Center for Information and Language Processing, LMU Munich, Germany
Marie-Catherine de Marneffe
FNRS, CENTAL, UCLouvain, Belgium
Barbara Plank
Professor, LMU Munich, Visiting Prof ITU Copenhagen
Natural Language Processing, Computational Linguistics, Machine Learning, Transfer Learning