🤖 AI Summary
Discriminative audio representations often lose fine-grained spatiotemporal details and struggle to balance perceptual fidelity with semantic abstraction. Method: We propose a generative-discriminative feature fusion paradigm. We systematically characterize the advantages of generative features, learned by models such as diffusion models or VAEs, in time-frequency locality and structural fidelity, and establish their complementarity with discriminative features. A multi-task collaborative optimization framework then dynamically fuses generative features (capturing fine-grained acoustic structure) with discriminative features (encoding high-level semantics). Contribution/Results: Evaluation across audio classification, event tagging, and fine-grained description tasks demonstrates consistent gains. Notably, on audio captioning our method improves BLEU-4 by +2.1 and SPICE by +3.4, confirming its dual capability in perceptual precision and semantic robustness. This work opens a new direction for audio representation learning.
📝 Abstract
This work pioneers the use of generative features for enhancing audio understanding. Unlike conventional discriminative features, which directly optimize the posterior and thus emphasize semantic abstraction at the cost of fine-grained detail, audio generation models inherently encode both spatiotemporal perception (capturing local acoustic texture across time and frequency) and semantic priors (knowing what to generate). This motivates us to bridge these complementary strengths. We provide a systematic investigation of their differences and complementary relationships, and ultimately propose an effective fusion strategy. Experiments across multiple tasks, including sound event classification, tagging, and particularly the fine-grained task of audio captioning, demonstrate consistent performance gains. Beyond empirical improvements, this work introduces a new perspective on audio representation learning: generative-discriminative complementarity can provide both detailed perception and semantic awareness for audio understanding.
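The abstract does not spell out the fusion mechanism, but a "dynamic fusion" of generative and discriminative streams is commonly realized as a learned gate over projected features. The sketch below is purely illustrative: all names (`dynamic_fuse`, `W_gate`) and dimensions are assumptions, not details from the paper, and the weights are random stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: T frames, generative dim Dg, discriminative dim Dd,
# shared fusion dim Df (all hypothetical).
T, Dg, Dd, Df = 10, 64, 32, 48

# Stand-ins for per-frame features: generative (e.g. a diffusion/VAE encoder,
# fine-grained time-frequency detail) and discriminative (high-level semantics).
gen_feats = rng.standard_normal((T, Dg))
disc_feats = rng.standard_normal((T, Dd))

# Random stand-ins for learned projection and gating weights.
W_g = rng.standard_normal((Dg, Df)) * 0.1
W_d = rng.standard_normal((Dd, Df)) * 0.1
W_gate = rng.standard_normal((Dg + Dd, Df)) * 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dynamic_fuse(g, d):
    """Gated fusion: a per-dimension gate, conditioned on both streams,
    interpolates between the projected generative and discriminative features."""
    g_proj = g @ W_g                                          # (T, Df)
    d_proj = d @ W_d                                          # (T, Df)
    gate = sigmoid(np.concatenate([g, d], axis=-1) @ W_gate)  # (T, Df), in (0, 1)
    return gate * g_proj + (1.0 - gate) * d_proj

fused = dynamic_fuse(gen_feats, disc_feats)
print(fused.shape)  # (10, 48)
```

In a real system the projections and gate would be trained jointly under the multi-task objective (classification, tagging, captioning), letting each task pull the gate toward the stream it benefits from most.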