VITAL: More Understandable Feature Visualization through Distribution Alignment and Relevant Information Flow

📅 2025-03-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing feature visualization methods often produce blurry images with repetitive textures and artifacts, resulting in poor interpretability. To address this, we propose a joint optimization framework that (i) enforces distribution alignment between synthesized and real-data features across network layers via KL-divergence constraints, and (ii) models gradient-sensitive information-flow paths by introducing inter-layer correlation-weighted guidance for backpropagation-based optimization. This is the first approach to synergistically integrate distribution alignment and information-flow modeling for feature visualization, effectively mitigating semantic ambiguity and texture redundancy. We validate our method on mainstream architectures, including ResNet and ViT, demonstrating qualitatively sharper, more prototypical, and semantically coherent activation visualizations. Quantitatively, it outperforms state-of-the-art methods across established metrics, including Inception Score and human evaluation scores.
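The distribution-alignment term can be illustrated with a minimal sketch: approximate each layer's features by per-channel Gaussians and penalize the KL divergence between the statistics of synthesized features and precomputed real-data statistics. The function names, the diagonal-Gaussian assumption, and the closed-form per-channel KL below are illustrative assumptions, not the paper's actual implementation.

```python
import math

def channel_stats(feats):
    """Per-channel mean and (biased) variance over a batch of feature vectors.
    `feats` is a list of equal-length lists (batch of flattened features)."""
    n = len(feats)
    dims = len(feats[0])
    mu = [sum(f[c] for f in feats) / n for c in range(dims)]
    var = [sum((f[c] - mu[c]) ** 2 for f in feats) / n for c in range(dims)]
    return mu, var

def gaussian_kl(mu_s, var_s, mu_r, var_r, eps=1e-5):
    """KL( N(mu_s, var_s) || N(mu_r, var_r) ) per channel, summed over channels.
    A sketch of a distribution-alignment loss: zero when synthesized statistics
    match the real-data statistics, positive otherwise. (Hypothetical helper,
    not the paper's API.)"""
    total = 0.0
    for ms, vs, mr, vr in zip(mu_s, var_s, mu_r, var_r):
        vs, vr = vs + eps, vr + eps  # avoid log/division by zero
        total += 0.5 * (math.log(vr / vs) + (vs + (ms - mr) ** 2) / vr - 1.0)
    return total
```

In practice such a term would be accumulated over several network layers and added to the activation-maximization objective, so the optimizer is pulled toward images whose intermediate features look statistically like real data.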

📝 Abstract
Neural networks are widely adopted to solve complex and challenging tasks. Especially in high-stakes decision-making, understanding their reasoning process is crucial, yet proves challenging for modern deep networks. Feature visualization (FV) is a powerful tool to decode what information neurons are responding to and hence to better understand the reasoning behind such networks. In particular, in FV we generate human-understandable images that reflect the information detected by neurons of interest. However, current methods often yield unrecognizable visualizations, exhibiting repetitive patterns and visual artifacts that are hard to understand for a human. To address these problems, we propose to guide FV through statistics of real image features combined with measures of relevant network flow to generate prototypical images. Our approach yields human-understandable visualizations that both qualitatively and quantitatively improve over state-of-the-art FVs across various architectures. As such, it can be used to decode which information the network uses, complementing mechanistic circuits that identify where it is encoded. Code is available at: https://github.com/adagorgun/VITAL
Problem

Research questions and friction points this paper is trying to address.

Improving feature visualization clarity in neural networks
Reducing repetitive patterns in neuron response images
Enhancing human understanding of network decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns feature visualization with real image statistics
Measures relevant network flow for better interpretability
Generates prototypical images to improve understanding
Ada Gorgun
Max Planck Institute for Informatics, Saarland Informatics Campus, Germany
B. Schiele
Max Planck Institute for Informatics, Saarland Informatics Campus, Germany
Jonas Fischer
Group Leader, Max-Planck-Institute for Informatics
Machine Learning · XAI · Computational Biology