🤖 AI Summary
Current large language models (LLMs) largely overlook the multimodal scenario in which a spoken instruction and the surrounding acoustic signal arrive together and must be understood jointly. This work introduces Solla, a framework designed to understand speech-based questions while concurrently hearing the acoustic context, enabling end-to-end joint modeling of spoken queries and audio perception. Methodologically, Solla incorporates an audio tagging module to identify and represent audio events, and an ASR-assisted prediction method to improve comprehension of spoken content. The authors further construct SA-Eval, a multi-task benchmark for joint speech-audio understanding featuring diverse speaking styles and two difficulty levels of acoustic conditions. Experiments show that Solla performs on par with or outperforms baseline models on all three SA-Eval tasks (audio event classification, audio captioning, and audio question answering) across both the easy and hard test sets, underscoring its effectiveness in jointly understanding speech and audio.
📝 Abstract
Large Language Models (LLMs) have recently shown remarkable ability to process not only text but also multimodal inputs such as speech and audio. However, most existing models primarily focus on analyzing input signals using text instructions, overlooking scenarios in which speech instructions and audio are mixed and serve jointly as inputs to the model. To address these challenges, we introduce Solla, a novel framework designed to understand speech-based questions and hear the acoustic context concurrently. Solla incorporates an audio tagging module to effectively identify and represent audio events, as well as an ASR-assisted prediction method to improve comprehension of spoken content. To rigorously evaluate Solla and other publicly available models, we propose a new benchmark dataset called SA-Eval, which includes three tasks: audio event classification, audio captioning, and audio question answering. SA-Eval contains diverse speech instructions with various speaking styles and two difficulty levels, easy and hard, to capture the range of real-world acoustic conditions. Experimental results show that Solla performs on par with or outperforms baseline models on both the easy and hard test sets, underscoring its effectiveness in jointly understanding speech and audio.
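The abstract names two auxiliary mechanisms: an audio tagging module that surfaces detected audio events, and an ASR-assisted prediction method that surfaces a transcript of the spoken question. The toy Python sketch below only illustrates the general idea of feeding both signals to a language model as text; every function name here (`tag_audio`, `transcribe`, `build_prompt`) is hypothetical and does not reflect Solla's actual architecture, which fuses these signals inside the model rather than via a text prompt.

```python
def tag_audio(audio_events):
    """Stand-in for an audio tagging module: map detected events to text tags.
    (Hypothetical; a real tagger would operate on the raw waveform.)"""
    return ", ".join(sorted(set(audio_events)))

def transcribe(speech):
    """Stand-in for an ASR module. For illustration, 'speech' is already text."""
    return speech.strip().lower()

def build_prompt(speech, audio_events):
    """Combine the ASR transcript and audio tags into a single input, so a
    downstream model conditions on both the spoken question and the acoustic
    context instead of the question alone."""
    tags = tag_audio(audio_events)
    transcript = transcribe(speech)
    return f"[AUDIO TAGS: {tags}] [QUESTION: {transcript}]"

# Example: a spoken question asked while a dog barks in the rain.
prompt = build_prompt("What animal is making that sound?",
                      ["dog_bark", "rain", "dog_bark"])
print(prompt)
# → [AUDIO TAGS: dog_bark, rain] [QUESTION: what animal is making that sound?]
```

The point of the sketch is the failure mode it avoids: a model that sees only the question has no way to answer "what animal is making that sound?", whereas one that also receives the audio tags (or, in Solla, learned audio representations) can ground its answer in the acoustic scene.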