🤖 AI Summary
This study addresses a critical limitation in current large audio language models, which prioritize semantic understanding while neglecting paralinguistic cues such as intonation and emotion, thereby impairing contextual awareness. The work presents the first systematic investigation into the hierarchical processing of semantic and paralinguistic information within these models and introduces Paralinguistic-Enhanced Fine-Tuning (PE-FT), a novel protocol that integrates selective layer fine-tuning with a dual-level classification head. PE-FT significantly enhances paralinguistic perception without increasing computational overhead. Experimental results demonstrate that PE-FT outperforms full-model fine-tuning across multiple metrics, achieving more efficient and effective context-aware spoken language interaction.
📝 Abstract
Large Audio Language Models (LALMs) have extended interaction with humans to the speech modality, which offers great interactive potential because paralinguistic cues implicitly indicate the user's context. However, under the current content-centred paradigm, LALMs usually neglect such paralinguistic cues and respond solely on the basis of query content. In this work, to resurface paralinguistic awareness in LALMs, we introduce five diverse layer-wise analyses that jointly identify paralinguistic layers and semantic-understanding layers. Based on these insights, we propose a paralinguistic-enhanced fine-tuning (PE-FT) protocol to equip LALMs with paralinguistic-aware capabilities, comprising (1) selective-layer fine-tuning and (2) an auxiliary dual-level classification head. Our experiments demonstrate that the PE-FT protocol efficiently and effectively resurfaces paralinguistic awareness, even surpassing the performance of the all-layer fine-tuning strategy.
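To make the protocol concrete, the following is a minimal pure-Python sketch of the selective-layer idea: freeze every transformer layer by default, unfreeze only the layers that the layer-wise analysis flagged as paralinguistic, and attach an auxiliary head with two classification levels. All names, layer indices, and the coarse/fine split of the dual-level head are illustrative assumptions, not the paper's actual implementation; in a real framework the `trainable` flag would correspond to setting `requires_grad` on the layer's parameters.

```python
# Sketch of PE-FT: selective-layer fine-tuning + an auxiliary
# dual-level classification head. Names and indices are hypothetical.

class Layer:
    """Stand-in for one transformer layer of an LALM."""
    def __init__(self, idx):
        self.idx = idx
        self.trainable = False  # frozen by default

def apply_selective_layer_finetuning(layers, selected_indices):
    """Unfreeze only the layers identified as paralinguistic-sensitive."""
    for layer in layers:
        layer.trainable = layer.idx in selected_indices
    return layers

class DualLevelHead:
    """Auxiliary head predicting at two levels, e.g. a coarse
    paralinguistic category plus a fine-grained label (assumed design)."""
    def __init__(self, coarse_labels, fine_labels):
        self.coarse_labels = coarse_labels
        self.fine_labels = fine_labels

    def decode(self, coarse_idx, fine_idx):
        return self.coarse_labels[coarse_idx], self.fine_labels[fine_idx]

# Hypothetical outcome of the five layer-wise analyses:
# middle layers 4-7 carry paralinguistic information.
PARALINGUISTIC_LAYERS = {4, 5, 6, 7}

model = [Layer(i) for i in range(12)]
apply_selective_layer_finetuning(model, PARALINGUISTIC_LAYERS)
head = DualLevelHead(["emotion", "intonation"], ["happy", "sad", "rising", "falling"])

trainable = [layer.idx for layer in model if layer.trainable]
print(trainable)          # → [4, 5, 6, 7]
print(head.decode(0, 1))  # → ('emotion', 'sad')
```

Only the selected layers receive gradient updates, which is why the abstract can claim efficiency gains over all-layer fine-tuning: most parameters stay frozen while the dual-level head supplies the paralinguistic training signal.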