🤖 AI Summary
This work proposes a method to improve next-turn response prediction in dialogue by modeling the implicit chains of thought underlying human conversations. Treating the chain of thought as a latent variable, the authors derive and optimize a variational lower bound on the log-likelihood of real dialogue data, yielding a distribution-matching objective. Compared with reward-based approaches that rely on large language models as judges (LLM-as-a-judge), this objective more effectively enhances the human-likeness of generated responses. Experimental results show that the method significantly outperforms existing baselines in both log-likelihood and human-preference win rate, confirming the effectiveness of combining implicit chains of thought with a distribution-matching objective.
📝 Abstract
To predict what someone will say is to model how they think. We study this through next-turn dialogue prediction: given a conversation, predict the next utterance produced by a person. We compare learning approaches along two dimensions: (1) whether the model is allowed to think before responding, and (2) how learning is rewarded: either through an LLM-as-a-judge that scores semantic similarity and information completeness relative to the ground-truth response, or by directly maximizing the log-probability of the true human dialogue. We find that optimizing for judge-based rewards indeed increases judge scores throughout training; however, it decreases the likelihood assigned to ground-truth human responses and decreases the win rate when human judges choose the most human-like response between a real and a synthetic option. This failure is amplified when the model is allowed to think before answering. In contrast, by directly maximizing the log-probability of observed human responses, the model learns to better predict what people actually say, improving on both log-probability and win-rate evaluations. Treating chain-of-thought as a latent variable, we derive a lower bound on the log-probability. Optimizing this objective yields the best results on all our evaluations. These results suggest that thinking helps primarily when trained with a distribution-matching objective grounded in real human dialogue, and that scaling this approach to broader conversational data may produce models with a more nuanced understanding of human behavior.
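For intuition, the latent-chain-of-thought bound described above takes the standard variational (ELBO) form. The sketch below uses our own notation (context $x$, human response $y$, latent chain of thought $z$, model $p_\theta$, approximate posterior $q_\phi$); the paper's exact parameterization may differ:

```latex
% Marginal log-likelihood of the human response, with the
% chain of thought z marginalized out:
\log p_\theta(y \mid x) = \log \sum_{z} p_\theta(z \mid x)\, p_\theta(y \mid x, z)

% Introducing an approximate posterior q_\phi(z | x, y) and applying
% Jensen's inequality gives the evidence lower bound (ELBO):
\log p_\theta(y \mid x)
  \;\ge\; \mathbb{E}_{q_\phi(z \mid x, y)}\!\left[\log p_\theta(y \mid x, z)\right]
  \;-\; \mathrm{KL}\!\left(q_\phi(z \mid x, y) \,\|\, p_\theta(z \mid x)\right)
```

Maximizing this bound trains the model to produce chains of thought under which the observed human response becomes likely, while the KL term keeps those latent thoughts consistent with what the model would generate from the context alone, which is how the objective stays grounded in real human dialogue rather than judge scores.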