🤖 AI Summary
To address the dual challenges of limited interpretability of agent decisions and the high cost of reward annotation in human–AI coexistence scenarios, this paper proposes a model-agnostic framework for generating natural language explanations. Methodologically, it pioneers the deep integration of flow-matching generative models with the latent representations of large language models (LLMs), explicitly embedding linguistic explanation cues into the reward modeling process to enable the automatic generation of semantically aligned, dense rewards. Crucially, it requires no human reward annotations and jointly optimizes the explanation-generation and reinforcement learning objectives end to end. Empirically, the approach significantly improves explanation plausibility and faithfulness across diverse RL and LLM benchmarks while simultaneously enhancing downstream task performance. It demonstrates strong generalization and training efficiency, offering a scalable solution for interpretable, annotation-free reward learning.
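One plausible reading of this joint objective, as a sketch under our own assumptions (the summary does not state the exact loss terms or weights), is a weighted sum of the policy's RL loss, the flow-matching loss that trains the reward generator, and an explanation-generation loss:

$$
\mathcal{L}_{\text{total}} \;=\; \mathcal{L}_{\text{RL}} \;+\; \lambda_{\text{fm}}\,\mathcal{L}_{\text{FM}} \;+\; \lambda_{\text{exp}}\,\mathcal{L}_{\text{explain}},
$$

where $\lambda_{\text{fm}}$ and $\lambda_{\text{exp}}$ are hypothetical trade-off weights, not values given in the paper.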
📝 Abstract
As humans increasingly share environments with diverse agents powered by RL, LLMs, and beyond, the ability to explain their policies in natural language will be vital for reliable coexistence. In this paper, we build a model-agnostic explanation generator based on an LLM. The technical novelty is that the rewards for training this LLM are produced by a generative flow-matching model. This model has a specially designed structure, with a hidden layer merged with an LLM, that channels the linguistic cues of explanations into the generation of appropriate rewards. Experiments on both RL and LLM tasks demonstrate that our method generates dense and effective rewards while saving on expensive human feedback; it thereby enables effective explanations and even improves decision accuracy on the original tasks.
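To make the mechanism concrete, below is a minimal, hypothetical sketch of conditional flow matching for reward generation, where the condition is a pooled LLM hidden state of the trajectory-plus-explanation sequence. All names (`RewardFlow`, `flow_matching_loss`, `sample_reward`) and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: a conditional flow-matching model that transports
# Gaussian noise to a dense reward embedding, conditioned on an LLM hidden
# state carrying the explanation's linguistic cues.
import torch
import torch.nn as nn

class RewardFlow(nn.Module):
    """Velocity field v_theta(x_t, t, c) for conditional flow matching."""
    def __init__(self, reward_dim: int, hidden_dim: int, llm_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(reward_dim + 1 + llm_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, reward_dim),
        )

    def forward(self, x_t, t, cond):
        # cond: pooled LLM hidden state of the (trajectory, explanation) input
        return self.net(torch.cat([x_t, t, cond], dim=-1))

def flow_matching_loss(model, x1, cond):
    """Standard conditional flow-matching objective with a linear interpolant:
    x_t = (1 - t) * x0 + t * x1, target velocity = x1 - x0."""
    x0 = torch.randn_like(x1)                        # noise endpoint
    t = torch.rand(x1.size(0), 1, device=x1.device)  # uniform time in [0, 1)
    x_t = (1 - t) * x0 + t * x1
    v_pred = model(x_t, t, cond)
    return ((v_pred - (x1 - x0)) ** 2).mean()

@torch.no_grad()
def sample_reward(model, cond, steps: int = 20):
    """Integrate the learned ODE from noise to a reward embedding (Euler)."""
    x = torch.randn(cond.size(0), model.net[-1].out_features, device=cond.device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((cond.size(0), 1), i * dt, device=cond.device)
        x = x + dt * model(x, t, cond)
    return x
```

In this reading, `cond` could be, for example, the mean-pooled last-layer hidden state of the explanation LLM, which is how linguistic cues would enter the reward model without human annotation; the sampled embedding would then be mapped to a scalar reward for policy optimization. The exact conditioning, merging, and reward-readout scheme in the paper may differ.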