AudioGenX: Explainability on Text-to-Audio Generative Models

📅 2025-02-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Text-to-audio generation (TAG) models suffer from a lack of interpretability regarding how input text tokens influence the generated audio. This work introduces the first faithfulness-oriented explainability framework tailored for audio generation, delivering verifiable token-level attributions over audio tokens via joint optimization of factual attribution and counterfactual masking objectives. Our method employs a differentiable Explainer network trained with a multi-objective loss and incorporates novel XAI evaluation metrics specifically designed for audio modalities. Extensive experiments across multiple state-of-the-art TAG models demonstrate that our approach substantially outperforms existing baselines: quantitative evaluation shows a 23.6% improvement in faithfulness, while human evaluation confirms high consistency, comprehensibility, and credibility of the attributions. The core contribution is the first end-to-end, token-level, and empirically verifiable interpretability framework for audio generation.

Technology Category

Application Category

📝 Abstract
Text-to-audio generation models (TAG) have achieved significant advances in generating audio conditioned on text descriptions. However, a critical challenge lies in the lack of transparency regarding how each textual input impacts the generated audio. To address this issue, we introduce AudioGenX, an Explainable AI (XAI) method that provides explanations for text-to-audio generation models by highlighting the importance of input tokens. AudioGenX optimizes an Explainer by leveraging factual and counterfactual objective functions to provide faithful explanations at the audio token level. This method offers a detailed and comprehensive understanding of the relationship between text inputs and audio outputs, enhancing both the explainability and trustworthiness of TAG models. Extensive experiments demonstrate the effectiveness of AudioGenX in producing faithful explanations, benchmarked against existing methods using novel evaluation metrics specifically designed for audio generation tasks.
Problem

Research questions and friction points this paper is trying to address.

Lack of transparency in text-to-audio generation models
Understanding how textual inputs impact generated audio
Providing faithful explanations for audio token importance
Innovation

Methods, ideas, or system contributions that make the work stand out.

XAI method for text-to-audio generation models
Optimizes Explainer with factual and counterfactual objectives
Provides token-level explanations for audio generation
🔎 Similar Papers
No similar papers found.