🤖 AI Summary
Problem: Current audio large language models (ALLMs) lack systematic, modality-specific trustworthiness evaluation frameworks addressing their unique risks.
Method: We introduce the first multidimensional benchmark covering fairness, hallucination, safety, privacy, robustness, and speaker authentication—built upon 4,420+ real-world audio-text samples and 18 experimental configurations. Our methodology includes multimodal audio-text data curation, design of nine audio-specific evaluation metrics, a scalable automated scoring pipeline, and real-scenario-driven adversarial testing.
Contribution/Results: We formally define and quantify audio-specific trustworthiness risks for the first time and open-source an extensible, automated ALLM trustworthiness evaluation platform. Experiments empirically demonstrate pervasive hallucinations, privacy leakage, and speaker misidentification in mainstream ALLMs under high-risk audio conditions, providing an empirical foundation and technical support for trustworthy deployment.
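The automated scoring pipeline described above can be pictured as a loop that feeds each curated audio-text sample to the model under test and scores the response per trustworthiness dimension. The following is a minimal sketch under assumptions: `query_allm` and `judge_score` are hypothetical stand-ins for the model call and the automated judge, not AudioTrust's actual API.

```python
from dataclasses import dataclass

# The six trustworthiness dimensions named in the paper.
DIMENSIONS = ["fairness", "hallucination", "safety", "privacy",
              "robustness", "authentication"]

@dataclass
class Sample:
    dimension: str   # which trustworthiness dimension this sample probes
    audio_path: str  # path to the audio clip
    prompt: str      # accompanying text prompt

def query_allm(sample: Sample) -> str:
    # Stub: a real pipeline would send the audio and prompt to the ALLM
    # under test and return its textual response.
    return f"response to {sample.prompt}"

def judge_score(dimension: str, response: str) -> float:
    # Stub automated judge: a real pipeline would apply the
    # dimension-specific metric and return a score in [0, 1].
    return 1.0 if response else 0.0

def evaluate(samples: list[Sample]) -> dict[str, float]:
    # Collect per-dimension scores, then average each dimension.
    scores: dict[str, list[float]] = {d: [] for d in DIMENSIONS}
    for s in samples:
        response = query_allm(s)
        scores[s.dimension].append(judge_score(s.dimension, response))
    return {d: sum(v) / len(v) for d, v in scores.items() if v}
```

A real harness would replace the stubs with API calls to each open- or closed-source ALLM and with the benchmark's judge models, then report the per-dimension averages.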
📝 Abstract
The rapid advancement and expanding applications of Audio Large Language Models (ALLMs) demand a rigorous understanding of their trustworthiness. However, systematic evaluation of these models, particularly with respect to risks unique to the audio modality, remains largely unexplored. Existing evaluation frameworks focus primarily on the text modality or cover only a restricted set of safety dimensions, failing to account for the characteristics and application scenarios unique to audio. We introduce AudioTrust, the first multifaceted trustworthiness evaluation framework and benchmark designed specifically for ALLMs. AudioTrust enables assessment across six key dimensions: fairness, hallucination, safety, privacy, robustness, and authentication. To evaluate these dimensions comprehensively, AudioTrust is structured around 18 distinct experimental setups. At its core is a meticulously constructed dataset of over 4,420 audio/text samples drawn from real-world scenarios (e.g., daily conversations, emergency calls, voice assistant interactions), designed to probe the multifaceted trustworthiness of ALLMs. For assessment, the benchmark defines nine audio-specific evaluation metrics, and we employ a large-scale automated pipeline for objective, scalable scoring of model outputs. Experimental results reveal the trustworthiness boundaries and limitations of current state-of-the-art open-source and closed-source ALLMs when confronted with various high-risk audio scenarios, offering valuable insights for the secure and trustworthy deployment of future audio models. Our platform and benchmark are available at https://github.com/JusperLee/AudioTrust.