🤖 AI Summary
Large language models (LLMs) lack the capacity for progressive Bayesian updating over latent variables, such as user preferences, resulting in probabilistic reasoning during multi-turn interactions that is markedly inferior to that of humans. To address this, we propose the Bayesian Teaching Paradigm (BTP), a supervised fine-tuning framework that trains LLMs to emulate the belief-updating behavior of an optimal Bayesian reasoner, thereby endowing them with generalizable rather than task-specific probabilistic inference capabilities. Our method integrates principled Bayesian modeling, cross-task generalization evaluation, and human-subject comparison experiments. Results demonstrate substantial performance gains on recommendation tasks, strong zero-shot generalization to unseen tasks, belief-update trajectories that closely approximate the theoretical optimum, consistent superiority over state-of-the-art LLM baselines, and, on several metrics, performance surpassing that of human participants.
📝 Abstract
Artificial intelligence systems based on large language models (LLMs) are increasingly used as agents that interact with users and with the world. To do so successfully, LLMs need to construct internal representations of the world and form probabilistic beliefs about those representations. To provide a user with personalized recommendations, for example, an LLM needs to gradually infer the user's preferences over the course of multiple interactions. To evaluate whether contemporary LLMs are able to do so, we use the Bayesian inference framework from probability theory, which specifies the optimal way to update an agent's beliefs as new information arrives. We first show that LLMs do not update their beliefs as the Bayesian framework prescribes, and that consequently their predictions fail to improve as expected when more information becomes available, to an even greater extent than we observe in humans. To address this issue, we teach LLMs to reason in a Bayesian manner by training them to mimic the predictions of an optimal Bayesian model. We find that this approach not only significantly improves an LLM's performance on the particular recommendation task it is trained on, but also enables generalization to other tasks, suggesting that the method endows the LLM with broader Bayesian reasoning skills. More generally, our results indicate that LLMs can learn reasoning strategies effectively and generalize those skills to new domains, which in part explains their empirical success.
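The belief updating the abstract describes follows Bayes' rule: the posterior over latent hypotheses is proportional to the likelihood of the new observation times the prior. A minimal sketch of one such update over user-preference hypotheses (the hypotheses and likelihood values here are purely illustrative, not taken from the paper's tasks):

```python
import numpy as np

def bayes_update(prior: np.ndarray, likelihoods: np.ndarray) -> np.ndarray:
    """One step of Bayesian belief updating: posterior ∝ likelihood × prior."""
    unnormalized = prior * likelihoods
    return unnormalized / unnormalized.sum()

# Hypothetical latent preference hypotheses:
#   H0 = "user prefers comedies", H1 = "user prefers dramas".
prior = np.array([0.5, 0.5])

# New observation: the user liked a drama.
# Assumed observation model P(liked drama | H) for each hypothesis.
likelihoods = np.array([0.2, 0.8])

posterior = bayes_update(prior, likelihoods)
# posterior → [0.2, 0.8]: belief shifts toward H1 after one interaction.
```

Repeating this update turn after turn, with each new observation's posterior serving as the next turn's prior, yields the progressive inference of user preferences that an optimal Bayesian reasoner performs and that the paper trains LLMs to emulate.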