🤖 AI Summary
Deploying speech emotion captioning systems on resource-constrained edge devices faces significant challenges, including high computational overhead, elevated privacy risks, and the limited ability of small models to capture fine-grained emotional semantics. This work proposes a novel edge–cloud collaborative inference framework that, for the first time, introduces token-level speculative decoding to this task. The approach employs a lightweight edge model guided by uncertainty estimates to generate initial captions and selectively uploads only high-uncertainty token chunks to a powerful cloud model for verification. Evaluated on the MER2024 benchmark, the method achieves up to a 62.7% improvement in BLEU score over baseline approaches, reduces latency by 1.4× compared to pure edge execution, and increases token throughput by 8.5×, thereby enabling a tunable trade-off among efficiency, output quality, and privacy preservation.
📝 Abstract
Speech Emotion Captioning (SEC) leverages large audio-language models to generate rich, context-aware affective descriptions from speech. However, real-world deployment remains challenging due to the substantial computational demands on resource-constrained edge devices and the privacy risks of transmitting biometric audio. While smaller audio-language models enable efficient on-device SEC, their limited capacity often weakens subtle paralinguistic modeling and fine-grained affective grounding. We propose an edge–cloud collaborative framework based on Uncertainty-Guided Speculative Decoding (UGSD). A lightweight edge model drafts captions locally, and only high-uncertainty token blocks are selectively escalated to a stronger cloud verifier for validation. Experiments on the MER2024 benchmark demonstrate substantial BLEU improvements of up to 62.7%. UGSD further achieves 1.4× lower latency and 8.5× higher token throughput compared to an edge-only model. These results empirically characterize the quality-efficiency-privacy trade-off in deployable SEC systems.
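To make the decoding loop concrete, here is a minimal sketch of uncertainty-gated chunk escalation. It is illustrative only: the `entropy` threshold `tau`, the chunk granularity, and the `draft_step` / `cloud_verify` interfaces are assumptions for this sketch, not APIs from the paper.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a draft model's next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def ugsd_decode(draft_step, cloud_verify, n_chunks, chunk_size, tau):
    """Draft caption chunks on the edge; upload only uncertain chunks.

    draft_step(ctx)          -> (token, probs) from the small edge model
    cloud_verify(ctx, chunk) -> verified/corrected chunk from the cloud model
    tau                      -> entropy threshold (nats) triggering escalation
    Returns the final caption and the number of chunks uploaded.
    """
    caption, uploaded = [], 0
    for _ in range(n_chunks):
        chunk, uncertain = [], False
        for _ in range(chunk_size):
            token, probs = draft_step(caption + chunk)
            chunk.append(token)
            if entropy(probs) > tau:   # edge model is not confident here
                uncertain = True
        if uncertain:                  # escalate only this token block
            chunk = cloud_verify(caption, chunk)
            uploaded += 1
        caption.extend(chunk)
    return caption, uploaded
```

In this view, `tau` is the knob behind the quality-efficiency-privacy trade-off: a higher threshold keeps more token blocks on-device (better privacy, lower quality), while a lower one escalates more blocks to the cloud verifier.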