When Audio Generators Become Good Listeners: Generative Features for Understanding Tasks

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Discriminative audio representations often lose fine-grained spatiotemporal detail and struggle to balance perceptual fidelity with semantic abstraction. Method: We propose a generative-discriminative feature fusion paradigm. We systematically characterize the advantages of generative features (learned by models such as diffusion models or VAEs) in time-frequency locality and structural fidelity, and establish their complementarity with discriminative features. A multi-task collaborative optimization framework dynamically fuses generative features (capturing fine-grained acoustic structure) with discriminative features (encoding high-level semantics). Contribution/Results: Extensive evaluation across audio classification, event tagging, and fine-grained description tasks demonstrates consistent gains. Notably, on audio captioning, our method improves BLEU-4 by +2.1 and SPICE by +3.4, confirming its dual capability in perceptual precision and semantic robustness. This work opens a new direction for audio representation learning.
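The summary describes dynamically fusing generative features (fine-grained acoustic structure) with discriminative features (high-level semantics), but does not give the paper's architecture. As an illustration only, one common way to realize such dynamic fusion is a learned per-dimension gate over the two feature streams; all names, shapes, and the gating form below are assumptions, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(gen_feat, disc_feat, W_gate, b_gate):
    """Convexly combine the two streams with a learned, input-dependent gate.

    gen_feat, disc_feat: (T, D) frame-level features.
    The gate lies in (0, 1) per dimension, so each fused value is an
    interpolation between the generative and discriminative feature.
    """
    concat = np.concatenate([gen_feat, disc_feat], axis=-1)  # (T, 2D)
    gate = sigmoid(concat @ W_gate + b_gate)                 # (T, D)
    return gate * gen_feat + (1.0 - gate) * disc_feat        # (T, D)

T, D = 4, 8                                # illustrative frames x feature dim
gen_feat = rng.standard_normal((T, D))     # stand-in for diffusion/VAE latents
disc_feat = rng.standard_normal((T, D))    # stand-in for a discriminative encoder
W_gate = rng.standard_normal((2 * D, D)) * 0.1  # would be learned in training
b_gate = np.zeros(D)

fused = gated_fusion(gen_feat, disc_feat, W_gate, b_gate)
```

In a real system the gate parameters would be trained jointly with the downstream task heads; here they are random, so the snippet only shows shapes and the interpolation property.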

📝 Abstract
This work pioneers the use of generative features for enhancing audio understanding. Unlike conventional discriminative features, which directly optimize the posterior and thus emphasize semantic abstraction while losing fine-grained detail, audio generation models inherently encode both spatiotemporal perception (capturing local acoustic texture across time and frequency) and semantic priors (knowing what to generate). This motivates us to bridge these complementary strengths. We provide a systematic investigation of their differences and complementary relationships, and ultimately propose an effective fusion strategy. Experiments across multiple tasks, including sound event classification, tagging, and especially the fine-grained task of audio captioning, demonstrate consistent performance gains. Beyond empirical improvements, this work introduces a new perspective on audio representation learning, highlighting that generative-discriminative complementarity can provide both detailed perception and semantic awareness for audio understanding.
Problem

Research questions and friction points this paper is trying to address.

Exploring generative features to enhance audio understanding tasks
Bridging generative spatiotemporal perception with semantic prior knowledge
Developing fusion strategy for detailed perception and semantic awareness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using generative features for audio understanding tasks
Fusing generative and discriminative audio model strengths
Providing detailed perception and semantic awareness
Zeyu Xie
Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology, Peking University, Shenzhen
Chenxing Li
Tencent AI Lab, Seattle
Xuenan Xu
Shanghai Jiao Tong University
audio generation, audio understanding, speech synthesis
Mengyue Wu
Shanghai Jiao Tong University
Speech perception and production, affective computing, audio cognition
Wenfu Wang
Tencent AI Lab, Seattle
Ruibo Fu
Associate Professor, CASIA
AIGC, LMM, Intelligent speech interaction, Deepfake detection
Meng Yu
Tencent AI Lab, Seattle
Dong Yu
Tencent AI Lab, Seattle
Yuexian Zou
Peking University Shenzhen Graduate School
Machine Learning, Speech Processing, Image Processing