Investigating Training and Generalization in Faithful Self-Explanations of Large Language Models

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing self-explanations generated by large language models (LLMs) often lack faithfulness, i.e., they diverge from the model's actual reasoning process, and they generalize poorly across tasks and explanation styles. Method: To address this, we propose a feature-attribution-based method that automatically constructs pseudo-faithful, one-word constrained explanations without human annotation. The approach supports diverse explanation styles (e.g., reasons, steps, keywords) and unseen tasks. These pseudo-faithful self-explanations are then used for continual learning on instruction-tuned models so that the models internalize faithful explanation generation. Contribution/Results: Experiments demonstrate significant improvements in explanation faithfulness across multiple classification tasks and heterogeneous explanation styles. The improvements also generalize to multi-word explanations and unseen tasks, suggesting that the training paradigm confers a robust, task-agnostic faithfulness capability without requiring style- or task-specific supervision.

📝 Abstract
Large language models have the potential to generate explanations for their own predictions in a variety of styles based on user instructions. Recent research has examined whether these self-explanations faithfully reflect the models' actual behavior and has found that they often lack faithfulness. However, the question of how to improve faithfulness remains underexplored. Moreover, because different explanation styles have superficially distinct characteristics, it is unclear whether improvements observed in one style also arise when using other styles. This study analyzes the effects of training for faithful self-explanations and the extent to which these effects generalize, using three classification tasks and three explanation styles. We construct one-word constrained explanations that are likely to be faithful using a feature attribution method, and use these pseudo-faithful self-explanations for continual learning on instruction-tuned models. Our experiments demonstrate that training can improve self-explanation faithfulness across all classification tasks and explanation styles, and that these improvements also show signs of generalization to multi-word settings and to unseen tasks. Furthermore, we find consistent cross-style generalization among the three styles, suggesting that training may contribute to a broader improvement in faithful self-explanation ability.
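The pipeline described in the abstract (attribute, pick the top word, format a training example) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a toy bag-of-words linear classifier stands in for the LLM being explained, attribution reduces to the word's weight (as in input-times-gradient for a linear model), and all names, vocabularies, and prompt templates below are hypothetical.

```python
import numpy as np

# Toy stand-in for the model under explanation: a linear sentiment
# classifier over a tiny vocabulary. Positive weight -> "positive" class.
VOCAB = ["the", "movie", "was", "wonderful", "terrible", "plot"]
WEIGHTS = np.array([0.0, 0.1, 0.0, 2.0, -2.0, 0.05])

def attribution_scores(words):
    """Feature attribution per input word. For a linear model,
    input-times-gradient reduces to each word's weight (its logit
    contribution); out-of-vocabulary words get zero."""
    return [WEIGHTS[VOCAB.index(w)] if w in VOCAB else 0.0 for w in words]

def pseudo_faithful_explanation(words, label):
    """Select the single word with the largest attribution toward the
    predicted label: a one-word constrained explanation that is likely
    to be faithful by construction."""
    scores = attribution_scores(words)
    signed = scores if label == "positive" else [-s for s in scores]
    return words[int(np.argmax(signed))]

def make_training_example(sentence, label, style="keyword"):
    """Format a (prompt, target) pair for continual learning on an
    instruction-tuned model; the style tag mirrors the paper's idea of
    training with multiple explanation styles."""
    words = sentence.split()
    key = pseudo_faithful_explanation(words, label)
    prompt = (f"Classify the sentiment and give a one-word {style} "
              f"explanation.\nText: {sentence}")
    target = f"Label: {label}\nExplanation: {key}"
    return prompt, target

prompt, target = make_training_example("the movie was wonderful", "positive")
print(target)  # -> Label: positive / Explanation: wonderful
```

For the negative class the sign flip selects the most negatively weighted word, so "the movie was terrible" yields "terrible" as its one-word explanation; in the paper's setting the attribution step would instead query the actual model's feature importances.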
Problem

Research questions and friction points this paper is trying to address.

Improving faithfulness in self-explanations of large language models
Examining generalization of training effects across explanation styles
Assessing cross-task generalization in faithful self-explanation ability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using pseudo-faithful self-explanations from feature attribution for continual learning
Training improves faithfulness across tasks and explanation styles
Cross-style generalization suggests broader improvement in explanation ability