🤖 AI Summary
This work addresses the challenge of aligning dialogue agents using only a single session-level global feedback signal—such as an overall rating or multimodal cues (e.g., voice tone, facial expressions)—without relying on handcrafted reward functions or per-turn annotations. We propose the first LLM-driven reward decomposition framework: a frozen large language model performs reasoning-based decomposition of global feedback into turn-level implicit rewards; this is integrated with natural-language-based multimodal behavior encoding, lightweight reward model distillation, and RLHF fine-tuning. Crucially, the decomposer itself requires no training, since the LLM remains frozen, and the framework unifies modeling across both textual and multimodal feedback. Human evaluations demonstrate significant improvements in dialogue quality, outperforming existing reward decomposition approaches. These results validate the efficacy and generalizability of frozen LLMs as powerful, reasoning-capable reward decomposers.
📝 Abstract
We propose a large-language-model-based reward decomposition framework for aligning dialogue agents using only a single session-level feedback signal. We leverage the reasoning capabilities of a frozen, pretrained large language model (LLM) to infer fine-grained turn-level implicit rewards by decomposing global, session-level feedback. The first, text-only variant prompts the LLM to perform reward decomposition using only the dialogue transcript. The second, multimodal variant incorporates additional behavioral cues, such as pitch, gaze, and facial affect, expressed as natural-language descriptions. The inferred turn-level rewards are distilled into a lightweight reward model, which we use for RL-based fine-tuning of the dialogue generation policy. We evaluate both variants against state-of-the-art reward decomposition methods and demonstrate notable improvements in human evaluations of conversation quality, suggesting that LLMs are strong reward decomposers that obviate the need for manual reward shaping and granular human feedback.
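To make the decomposition step concrete, here is a minimal sketch of how a frozen LLM might be prompted to split a session-level rating into turn-level rewards, and how its free-text answer could be parsed. The prompt wording, the `Turn <i>: <reward>` answer format, and both function names are illustrative assumptions, not the paper's actual implementation; the LLM call itself is omitted.

```python
import re


def build_decomposition_prompt(turns, session_score):
    """Build a hypothetical reward-decomposition prompt for a frozen LLM.

    turns: list of (speaker, utterance) pairs for one session.
    session_score: the single global feedback signal (e.g., a 1-5 rating).
    """
    transcript = "\n".join(
        f"Turn {i + 1} ({speaker}): {utterance}"
        for i, (speaker, utterance) in enumerate(turns)
    )
    return (
        f"The following dialogue received an overall rating of {session_score}/5.\n"
        f"{transcript}\n"
        "Assign each turn a reward in [0, 1] that best explains the overall "
        "rating. Answer with one 'Turn <i>: <reward>' line per turn."
    )


def parse_turn_rewards(llm_output):
    """Parse 'Turn i: r' lines from the LLM's response into {turn: reward}.

    These parsed turn-level rewards would then serve as distillation
    targets for the lightweight reward model.
    """
    return {
        int(m.group(1)): float(m.group(2))
        for m in re.finditer(r"Turn\s+(\d+):\s*([01](?:\.\d+)?)", llm_output)
    }
```

For the multimodal variant, the same prompt template could simply interleave natural-language behavior descriptions (e.g., "speaker's pitch rises, gaze averted") with the transcript lines, which is what makes a single text-only decomposer applicable to both settings.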