Can LLM-Generated Textual Explanations Enhance Model Classification Performance? An Empirical Study

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Manual annotation of textual explanations for interpretable NLP is costly and inherently unscalable. Method: We propose a multi-LLM collaborative framework for automated explanation generation to enhance natural language inference (NLI) classifiers. It integrates outputs from multiple state-of-the-art large language models to produce high-quality, faithful reasoning rationales; employs NLG evaluation metrics to assess explanation quality; and fine-tunes downstream NLI classifiers—specifically on the SNLI and MNLI benchmarks—using these generated explanations as auxiliary supervision. Contribution/Results: Explanations automatically generated by LLMs significantly improve pre-trained NLI model performance, matching the efficacy of human-annotated explanations. This work provides the first empirical validation of the effectiveness and scalability of *automatically generated* explanations for model enhancement. By eliminating reliance on manual annotation, it establishes a novel, scalable paradigm for interpretable NLP.
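As a rough illustration of the auxiliary-supervision setup described above, an NLI example can be augmented with a generated explanation before fine-tuning. The function name and the `[SEP]`/`[EXP]` input template below are assumptions for illustration, not the paper's exact format:

```python
# Sketch: augmenting an SNLI/MNLI-style example with an LLM-generated
# explanation so it can serve as auxiliary supervision for a classifier.
# The joining template is a hypothetical choice, not the paper's.

def augment_with_explanation(premise: str, hypothesis: str, explanation: str) -> str:
    """Join premise, hypothesis, and generated rationale into one input string."""
    return f"{premise} [SEP] {hypothesis} [EXP] {explanation}"

example = augment_with_explanation(
    "A man is playing a guitar on stage.",
    "A musician is performing.",
    "Playing a guitar on stage is a form of musical performance.",
)
```

The augmented string would then be tokenized and fed to the NLI classifier during fine-tuning in place of the plain premise-hypothesis pair.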

📝 Abstract
In the rapidly evolving field of Explainable Natural Language Processing (NLP), textual explanations, i.e., human-like rationales, are pivotal for explaining model predictions and enriching datasets with interpretable labels. Traditional approaches rely on human annotation, which is costly, labor-intensive, and impedes scalability. In this work, we present an automated framework that leverages multiple state-of-the-art large language models (LLMs) to generate high-quality textual explanations. We rigorously assess the quality of these LLM-generated explanations using a comprehensive suite of Natural Language Generation (NLG) metrics. Furthermore, we investigate the downstream impact of these explanations on the performance of pre-trained language models (PLMs) and LLMs across natural language inference tasks on two diverse benchmark datasets. Our experiments demonstrate that automated explanations exhibit highly competitive effectiveness compared to human-annotated explanations in improving model performance. Our findings underscore a promising avenue for scalable, automated LLM-based textual explanation generation for extending NLP datasets and enhancing model performance.
Problem

Research questions and friction points this paper is trying to address.

Evaluating whether LLM-generated explanations improve classification performance
Automating textual explanation generation to replace costly human annotation
Assessing the impact of automated explanations on downstream task performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging multiple LLMs for explanation generation
Automated framework replaces costly human annotation
Comprehensive NLG metrics assess explanation quality
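The paper assesses explanation quality with a suite of NLG metrics. As a minimal stand-in for that idea (the paper's actual metric suite is broader), a unigram-overlap F1 score in the style of ROUGE-1 can be sketched as:

```python
# Illustrative ROUGE-1-style unigram-overlap F1 between a generated
# explanation and a human-written reference. This is only a minimal
# sketch of NLG-metric-based quality assessment, not the paper's metrics.
from collections import Counter

def unigram_f1(generated: str, reference: str) -> float:
    gen = Counter(generated.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((gen & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(gen.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A perfect match scores 1.0 and disjoint texts score 0.0; real evaluations would combine several such metrics (e.g., BLEU, ROUGE, BERTScore) rather than rely on one.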