Breaking Free from MMI: A New Frontier in Rationalization by Probing Input Utilization

📅 2025-03-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the diminishing marginal returns observed in Maximum Mutual Information (MMI)-based rationale extraction methods for explainable AI. We propose a novel weight-space-driven paradigm that abandons the conventional prediction-reconstruction objective and instead leverages the low-rank structure of neural network weight matrices to measure the representational norm of input subsets within the model's capability space, thereby identifying the actual input fragments upon which the model critically depends. Our approach is the first to theoretically uncover the intrinsic cause of MMI decay and to establish a rationale extraction framework grounded in weight usability as the core selection criterion. Extensive evaluation across four text classification benchmarks and one graph classification benchmark demonstrates consistent and significant improvements over MMI and its variants. Notably, on several tasks, our method matches or even surpasses the performance of Llama-3.1-8B-Instruct, validating its lightweight design, broad applicability, and superior explanatory power.

📝 Abstract
Extracting a small subset of crucial rationales from the full input is a key problem in explainability research. The most widely used fundamental criterion for rationale extraction is the maximum mutual information (MMI) criterion. In this paper, we first demonstrate that MMI suffers from diminishing marginal returns: once part of the rationale has been identified, finding the remaining portions contributes only marginally to increasing the mutual information, making it difficult for MMI to locate the rest. In contrast to MMI, which aims to reproduce the prediction, we seek to identify the parts of the input that the network can actually utilize. This is achieved by comparing how well different rationale candidates match the capability space of the weight matrix. The weight matrix of a neural network is typically low-rank, meaning that linear combinations of its column vectors can cover only part of the directions in a high-dimensional space (here, the dimensionality of an input vector). If an input is fully utilized by the network, it generally matches these directions (e.g., a portion of a hypersphere), resulting in a representation with a high norm. Conversely, if an input falls primarily outside (orthogonal to) these directions, its representation norm will approach zero, and it behaves like noise that the network cannot effectively utilize. Building on this, we propose using the norms of rationale candidates as an alternative objective to MMI. Through experiments on four text classification datasets and one graph classification dataset using three network architectures (GRUs, BERT, and GCN), we show that our method outperforms MMI and its improved variants in identifying better rationales. We also compare our method with a representative LLM (llama-3.1-8b-instruct) and find that our simple method achieves comparable results and can sometimes even outperform it.
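The low-rank intuition in the abstract can be illustrated with a toy sketch (hypothetical matrices and candidate vectors, not the paper's implementation): an input aligned with the row space of a low-rank weight matrix W yields a high-norm representation, while an input orthogonal to it maps to nearly zero, so ranking candidates by representation norm favors inputs the network can actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-rank weight matrix: rank 4 acting on a 16-dim input.
d, r = 16, 4
U = rng.standard_normal((d, r))
W = rng.standard_normal((d, r)) @ U.T  # shape (d, d), rank <= r

# Orthonormal basis of the directions W can "use" (its row space).
row_basis, _ = np.linalg.qr(U)

# Candidate 1: a unit vector inside the row space (utilizable input).
x_aligned = row_basis @ rng.standard_normal(r)
x_aligned /= np.linalg.norm(x_aligned)

# Candidate 2: a unit vector orthogonal to the row space (noise-like input).
v = rng.standard_normal(d)
x_orth = v - row_basis @ (row_basis.T @ v)
x_orth /= np.linalg.norm(x_orth)

print(np.linalg.norm(W @ x_aligned))  # high norm: the network utilizes it
print(np.linalg.norm(W @ x_orth))     # near zero: behaves like noise

# Norm-based selection: among candidates, prefer the one whose
# representation norm under W is largest.
candidates = [x_aligned, x_orth]
best = max(candidates, key=lambda x: np.linalg.norm(W @ x))
```

In the actual method this norm comparison is applied to rationale candidates (subsets of the input) against the trained network's weights; the toy vectors here only demonstrate why the norm separates utilizable inputs from noise-like ones.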
Problem

Research questions and friction points this paper is trying to address.

Overcoming diminishing returns in MMI for rationale extraction.
Identifying input parts effectively utilized by neural networks.
Proposing norm-based rationale selection to outperform MMI.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probes input utilization via weight matrix analysis.
Uses rationale candidate norms as objective.
Outperforms MMI in rationale identification tasks.
Wei Liu
School of Computer Science and Technology, HUST
Zhiying Deng
Faculty of Artificial Intelligence in Education, Central China Normal University
Zhongyu Niu
School of Computer Science and Technology, HUST
Jun Wang
iWudao Tech
Haozhao Wang
Huazhong University of Science and Technology
Cloud-edge Distributed Learning · Federated Learning · AI Security · Multi-modal LLM Agent
Zhigang Zeng
Huazhong University of Science and Technology
Stability analysis · Memristor · Computational intelligence · Associative memories · Neural Networks
Ruixuan Li
School of Computer Science and Technology, HUST