🤖 AI Summary
Current LLM-based chatbots produce fluent language but struggle to dynamically model fine-grained character traits, such as affective tone, contextual awareness, and personality evolution, because these traits are implicit, time-varying, and difficult to learn from synthetic data. To address this, we propose the Verbal Variational Auto-Encoding (V-VAE) framework, the first to construct an interpretable, intervenable three-dimensional latent space that disentangles talking style, interaction patterns, and personal attributes, enabling dynamic and controllable human-like generation. We also release HumanChatData, a high-quality human-annotated dialogue dataset, and HumanChatBench, a dedicated evaluation benchmark. Experiments show that our method significantly outperforms state-of-the-art baselines on both HumanChatBench and DialogBench, validating the critical role of fine-grained controllability and authentic, high-fidelity training data in advancing human-like conversational performance.
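The summary names the three latent dimensions but not their form. As a rough illustrative sketch (all field names and values below are assumptions, not the paper's actual schema), such an interpretable, intervenable latent state could be represented as plain structured data:

```python
from dataclasses import dataclass, asdict

@dataclass
class LatentProfile:
    """Illustrative three-dimensional latent control state.

    Each dimension holds human-readable labels, so the state can be
    inspected (interpretable) and edited before generation (intervenable).
    """
    talking_style: dict         # e.g. tone, formality
    interaction_patterns: dict  # e.g. initiative, turn length
    personal_attributes: dict   # e.g. interests, temperament

profile = LatentProfile(
    talking_style={"tone": "warm", "formality": "casual"},
    interaction_patterns={"initiative": "proactive", "turn_length": "short"},
    personal_attributes={"interests": "cycling", "temperament": "easygoing"},
)
profile.talking_style["tone"] = "blunt"  # intervene on one dimension only
print(asdict(profile))
```

Keeping each dimension separate is what makes disentangled intervention possible: one trait can be edited without disturbing the others.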
📝 Abstract
With the continued proliferation of Large Language Model (LLM) based chatbots, there is a growing demand for responses that are not only linguistically fluent but also consistently aligned with persona-specific traits throughout a conversation. However, existing role-play and persona-based chat approaches rely heavily on static role descriptions, a coarse-grained signal space, and low-quality synthetic data, and thus fail to capture the dynamic, fine-grained details of human-like chat. Human-like chat requires modeling subtle latent traits, such as emotional tone, situational awareness, and evolving personality, which are difficult to predefine and cannot be easily learned from synthetic or distillation-based data. To address these limitations, we propose a Verbal Variational Auto-Encoding (V-VAE) framework comprising a variational auto-encoding module and a fine-grained control space, which dynamically adapts dialogue behaviour based on interpretable latent variables spanning talking style, interaction patterns, and personal attributes. We also construct a high-quality dataset, HumanChatData, and a benchmark, HumanChatBench, to address the scarcity of high-quality data in the human-like chat domain. Experiments show that LLMs based on V-VAE consistently outperform standard baselines on HumanChatBench and DialogBench, further demonstrating the effectiveness of V-VAE and HumanChatData.
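The abstract leaves the encode/decode loop implicit. As a minimal sketch of one plausible reading (not the paper's actual implementation), an LLM can act as a verbal encoder that infers the interpretable latent variables from the dialogue history as text, and as a decoder that generates the next turn conditioned on them; `call_llm` below is a hypothetical placeholder for any chat-completion API:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a chat-completion API call."""
    raise NotImplementedError("wire up an LLM client here")

def encode(history: list[str]) -> dict:
    """Verbal encoder: infer interpretable latent variables as JSON text."""
    prompt = (
        "Read the dialogue below and output a JSON object with keys "
        "'talking_style', 'interaction_patterns', and 'personal_attributes'.\n\n"
        + "\n".join(history)
    )
    return json.loads(call_llm(prompt))

def decode(history: list[str], latent: dict) -> str:
    """Decoder: generate the next turn conditioned on the latent state."""
    prompt = (
        f"Persona state: {json.dumps(latent)}\n"
        "Write the next reply, staying consistent with this state.\n\n"
        + "\n".join(history)
    )
    return call_llm(prompt)

def respond(history: list[str]) -> str:
    # Re-inferring the latent state each turn lets traits evolve over the
    # conversation; editing it before decoding enables direct intervention.
    latent = encode(history)
    return decode(history, latent)
```

Because the latent state is plain text rather than an opaque vector, it can be inspected or edited between the encode and decode steps, which is what would make such a control space interpretable and intervenable.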