🤖 AI Summary
Large language models (LLMs) generate text that lacks the human-like disfluencies of spontaneous speech—fillers, repetitions, and self-corrections—so speech synthesized from it sounds less spontaneous, limiting the anthropomorphism of conversational AI agents.
Method: This paper proposes a controllable disfluency modeling framework. It is the first to treat discrete spoken disfluencies as a learnable, post-hoc enhancement module for LLMs; it combines parameter-efficient LoRA fine-tuning of the LLM with a text-to-speech (TTS) model that supports prosodic and paralinguistic phenomena; and it enables fine-grained control by injecting disfluency patterns through prompt engineering.
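To make the pipeline's first stage concrete, here is a minimal rule-based sketch of the kind of text transformation the LoRA-fine-tuned LLM is trained to produce—inserting fillers and word repetitions into an otherwise fluent utterance. The function name, filler list, and probabilities are illustrative assumptions, not taken from the paper, which learns these patterns rather than applying fixed rules.

```python
import random

# Hypothetical filler inventory; the paper's fine-tuned LLM would learn
# where and how often disfluencies occur instead of using fixed rules.
FILLERS = ["um", "uh", "you know"]

def inject_disfluencies(text: str, p_filler: float = 0.2,
                        p_repeat: float = 0.1, seed: int = 0) -> str:
    """Insert filler words and immediate word repetitions into `text`.

    A deterministic (seeded) stand-in for the learned disfluency model:
    each word may be preceded by a filler and/or repeated once.
    """
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if rng.random() < p_filler:
            out.append(rng.choice(FILLERS) + ",")  # filler before the word
        out.append(word)
        if rng.random() < p_repeat:
            out.append(word)  # repetition, e.g. "the the"
    return " ".join(out)

print(inject_disfluencies("I think we should leave early tomorrow"))
```

The disfluent text would then be passed to a TTS model capable of rendering such phenomena naturally, which is the second stage of the proposed approach.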
Contribution/Results: User studies demonstrate significant improvements in perceived spontaneity and overall speech naturalness—achieving state-of-the-art performance—while incurring only marginal intelligibility degradation. The approach establishes a novel paradigm for humanizing conversational AI speech through structured, controllable disfluency synthesis.
📝 Abstract
Disfluencies are a natural feature of spontaneous human speech but are typically absent from the outputs of Large Language Models (LLMs). This absence can diminish the perceived naturalness of synthesized speech, which is an important criterion when building conversational agents that aim to mimic human behaviours. We show how the insertion of disfluencies can alleviate this shortcoming. The proposed approach involves (1) fine-tuning an LLM with Low-Rank Adaptation (LoRA) to incorporate various types of disfluencies into LLM-generated utterances and (2) synthesizing those utterances using a text-to-speech model that supports the generation of speech phenomena such as disfluencies. We evaluated the quality of the generated speech across two metrics: intelligibility and perceived spontaneity. We demonstrate through a user study that the insertion of disfluencies significantly increases the perceived spontaneity of the generated speech. This increase, however, came with a slight reduction in intelligibility.