Resurfacing Paralinguistic Awareness in Large Audio Language Models

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a critical limitation in current large audio language models, which prioritize semantic understanding while neglecting paralinguistic cues such as intonation and emotion, thereby impairing contextual awareness. The work presents the first systematic investigation into the hierarchical processing of semantic and paralinguistic information within these models and introduces Paralinguistic-Enhanced Fine-Tuning (PE-FT), a novel protocol that integrates selective layer fine-tuning with a dual-level classification head. PE-FT significantly enhances paralinguistic perception without increasing computational overhead. Experimental results demonstrate that PE-FT outperforms full-model fine-tuning across multiple metrics, achieving more efficient and effective context-aware spoken language interaction.

📝 Abstract
Large Audio Language Models (LALMs) have extended human interaction to the speech modality, which introduces great interactive potential because paralinguistic cues implicitly indicate the user's context. However, under the current content-centred paradigm, LALMs usually neglect such paralinguistic cues and respond solely to the query content. In this work, to resurface paralinguistic awareness in LALMs, we introduce five diverse layer-wise analyses that jointly identify paralinguistic layers and semantic-understanding layers. Based on these insights, we propose a paralinguistic-enhanced fine-tuning (PE-FT) protocol to equip LALMs with paralinguistic-aware capabilities, comprising (1) selective-layer fine-tuning and (2) an auxiliary dual-level classification head. Our experiments demonstrate that the PE-FT protocol efficiently and effectively resurfaces paralinguistic awareness, even surpassing all-layer fine-tuning.
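The core mechanics of PE-FT, as the abstract describes them, can be sketched in a few lines: freeze the backbone, unfreeze only the layers identified as paralinguistic, and attach an auxiliary head that classifies at two levels. The sketch below is a minimal toy illustration, not the paper's implementation; the layer count, the 2-way coarse / 7-way fine label split, and the choice of layers 1-2 as "paralinguistic" are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a LALM backbone: a stack of square linear layers
# (hypothetical sizes, not the paper's architecture).
n_layers, d = 6, 8
weights = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_layers)]

# Auxiliary dual-level classification head: one coarse and one fine
# classifier over the same representation (2-way / 7-way is illustrative).
coarse_head = rng.normal(scale=0.1, size=(d, 2))
fine_head = rng.normal(scale=0.1, size=(d, 7))

# Selective-layer fine-tuning: only layers flagged as paralinguistic by
# the layer-wise analysis receive gradient updates; the rest stay frozen.
paralinguistic_layers = {1, 2}  # hypothetical output of the analysis
trainable = [i in paralinguistic_layers for i in range(n_layers)]

def sgd_step(weights, grads, lr=0.01):
    """Apply one gradient step, skipping frozen (non-selected) layers."""
    return [w - lr * g if t else w
            for w, g, t in zip(weights, grads, trainable)]

# One dummy update: only the selected layers change.
grads = [np.ones((d, d)) for _ in range(n_layers)]
new_weights = sgd_step(weights, grads)
changed = [not np.allclose(w, nw) for w, nw in zip(weights, new_weights)]
```

In a real framework this freezing would be expressed by toggling `requires_grad` on the selected layers' parameters, which is also why the abstract's efficiency claim holds: the optimizer touches only a small subset of the model.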
Problem

Research questions and friction points this paper is trying to address.

paralinguistic awareness
Large Audio Language Models
speech modality
user context
content-centred paradigm
Innovation

Methods, ideas, or system contributions that make the work stand out.

Paralinguistic Awareness
Large Audio Language Models
Layer-wise Analysis
Selective-layer Fine-tuning
Dual-level Classification
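The layer-wise analysis the paper builds on is, at its simplest, a probing exercise: train a lightweight classifier on each layer's hidden states for a paralinguistic label and see which layers carry the signal. The sketch below uses synthetic data and a closed-form least-squares probe; it is an illustration of the general probing idea under assumed shapes, not a reproduction of the paper's five analyses.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_layers = 200, 16, 4
y = rng.integers(0, 2, n)  # binary paralinguistic label, e.g. emotion polarity

# Synthetic per-layer hidden states: only layer 2 encodes the label
# (planted signal, purely for illustration).
hidden = [rng.normal(size=(n, d)) for _ in range(n_layers)]
hidden[2][:, 0] += 3.0 * (2 * y - 1)

def probe_accuracy(X, y):
    """Fit a least-squares linear probe on one half, score on the other."""
    Xtr, Xte, ytr, yte = X[:100], X[100:], y[:100], y[100:]
    w, *_ = np.linalg.lstsq(Xtr, 2 * ytr - 1, rcond=None)
    pred = (Xte @ w > 0).astype(int)
    return float((pred == yte).mean())

# The layer whose hidden states best predict the label is the
# candidate "paralinguistic layer" to unfreeze during PE-FT.
accs = [probe_accuracy(h, y) for h in hidden]
best = int(np.argmax(accs))
```

A selective fine-tuning protocol would then unfreeze the layers where probe accuracy peaks, while leaving semantic-understanding layers untouched.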
Authors
Hao Yang
Department of Data Science & AI, Monash University, Australia
Minghan Wang
Department of Data Science & AI, Monash University, Australia
Tongtong Wu
Department of Data Science & AI, Monash University, Australia
Lizhen Qu
Department of Data Science & AI, Monash University, Australia
Ehsan Shareghi
Monash University, Natural Language Processing
Gholamreza Haffari
Department of Data Science & AI, Monash University, Australia