🤖 AI Summary
Current speech technologies (ASR/TTS) are largely confined to explicit semantic understanding, failing to model suprasemantic communicative signals—such as emotion, contextual dynamics, and implicit meaning—thereby limiting the naturalness and depth of human–machine interaction. To address this, we propose the “Beyond-Semantic Speech” (BoSS) paradigm and introduce a five-level spoken interaction capability framework (L1–L5) that systematically characterizes multidimensional implicit signals in speech. Integrating cognitive association theory with machine learning, we formalize the modeling of temporally dynamic, context-dependent non-semantic information. Empirical evaluation reveals that state-of-the-art spoken language models perform significantly below human baselines on BoSS tasks, exposing critical bottlenecks in situational awareness and interactional richness. This work establishes a novel paradigm, a principled framework, and a verifiable evaluation methodology for human-like speech intelligence.
📝 Abstract
Human communication involves more than explicit semantics, with implicit signals and contextual cues playing a critical role in shaping meaning. However, modern speech technologies, such as Automatic Speech Recognition (ASR) and Text-to-Speech (TTS), often fail to capture these beyond-semantic dimensions. To better characterize and benchmark the progression of speech intelligence, we introduce Spoken Interaction System Capability Levels (L1–L5), a hierarchical framework illustrating the evolution of spoken dialogue systems from basic command recognition to human-like social interaction. To support these advanced capabilities, we propose Beyond-Semantic Speech (BoSS), which refers to the set of information in speech communication that encompasses but transcends explicit semantics. It conveys emotions and contexts, and modifies or extends meanings through multidimensional features such as affective cues, contextual dynamics, and implicit semantics, thereby enhancing the understanding of communicative intentions and scenarios. We present a formalized framework for BoSS, leveraging cognitive relevance theories and machine learning models to analyze temporal and contextual speech dynamics. Our evaluation of BoSS-related attributes across five dimensions reveals that current spoken language models (SLMs) struggle to fully interpret beyond-semantic signals. These findings highlight the need for advancing BoSS research to enable richer, more context-aware human-machine communication.