🤖 AI Summary
This work addresses the challenge of efficient, accurate, and interpretable attribute extraction in low-resource audio classification, where conventional human-driven approaches suffer from low throughput. The authors integrate multimodal large language models (MLLMs) into AdaFlock, an adaptive audio attribute discovery pipeline, replacing its human participants. By leveraging dynamic prompt engineering, the resulting method automatically uncovers interpretable audio attributes and constructs an attribute-based ensemble classifier, all without human intervention. Evaluated across multiple audio classification tasks, the method outperforms direct MLLM predictions in most cases in both accuracy and interpretability. Notably, the entire training process requires only 11 minutes, demonstrating substantial improvements in computational efficiency, classification performance, and model transparency.
📄 Abstract
In predictive modeling for low-resource audio classification, extracting accurate and interpretable attributes is critical. Particularly in high-reliability applications, interpretable audio attributes are indispensable. While human-driven attribute discovery is effective, its low throughput becomes a bottleneck. We propose a method for adaptively discovering interpretable audio attributes using Multimodal Large Language Models (MLLMs). By replacing the human participants in the AdaFlock framework with MLLMs, our method achieves significantly faster attribute discovery. It dynamically identifies salient acoustic characteristics via prompting and constructs an attribute-based ensemble classifier. Experimental results across various audio tasks demonstrate that our method outperforms direct MLLM prediction in the majority of evaluated cases. The entire training completes within 11 minutes, demonstrating a practical, adaptive solution that surpasses conventional human-reliant approaches.
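To make the idea of an attribute-based ensemble concrete, the sketch below shows one plausible way such a classifier could combine per-attribute decisions by majority vote. The attribute names, threshold rules, and feature values are illustrative placeholders invented for this example; the paper's actual attributes are discovered by MLLM prompting, not hand-written.

```python
# Hypothetical sketch of an attribute-based ensemble classifier:
# each discovered attribute yields a weak per-attribute predictor,
# and their votes are combined by simple majority. All names and
# thresholds here are assumptions, not the paper's actual attributes.
from collections import Counter

# Illustrative attributes an MLLM might surface for a two-class
# bird-call task, each mapped to a threshold rule on a precomputed feature.
ATTRIBUTE_RULES = {
    "high_pitched":  lambda f: "songbird" if f["pitch_hz"] > 2000 else "owl",
    "rhythmic":      lambda f: "songbird" if f["tempo_var"] < 0.3 else "owl",
    "long_duration": lambda f: "owl" if f["duration_s"] > 1.5 else "songbird",
}

def ensemble_predict(features):
    """Majority vote over the per-attribute weak predictors."""
    votes = [rule(features) for rule in ATTRIBUTE_RULES.values()]
    return Counter(votes).most_common(1)[0][0]

clip = {"pitch_hz": 3100, "tempo_var": 0.1, "duration_s": 0.8}
print(ensemble_predict(clip))  # all three attribute rules vote "songbird"
```

Because each vote is tied to a named attribute, a prediction can be explained by listing which attributes fired, which is what makes this style of classifier interpretable.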