🤖 AI Summary
This paper addresses a fundamental design trade-off in large language model (LLM) policy preference modeling: whether LLMs should act as “delegates” (mirroring users’ immediately expressed preferences) or “trustees” (exercising judgment to serve users’ long-term welfare). Existing work relies largely on behavioral cloning and neglects these normative considerations.
Method: The paper proposes and empirically compares two alignment paradigms: behavioral cloning to instantiate the delegate role, and a time-weighted utility framework to formalize trustee decision-making—each evaluated against expert consensus on policy reasonableness.
Contribution/Results: The trustee paradigm achieves significantly higher alignment with expert recommendations on consensus-driven issues, improving policy reasonableness; however, on non-consensus issues, it amplifies model prior biases, exposing an inherent tension between autonomy and paternalism. This is the first systematic study to reveal the temporal utility trade-offs in LLM value alignment and their associated risks, providing foundational theoretical insights and empirical benchmarks for AI governance.
📝 Abstract
Large language models (LLMs) have shown promising accuracy in predicting survey responses and policy preferences, which has increased interest in their potential to represent human interests in various domains. Most existing research has focused on behavioral cloning, effectively evaluating how well models reproduce individuals' expressed preferences. Drawing on theories of political representation, we highlight an underexplored design trade-off: whether AI systems should act as delegates, mirroring expressed preferences, or as trustees, exercising judgment about what best serves an individual's interests. This trade-off is closely related to issues of LLM sycophancy, where models can encourage behavior or validate beliefs that may be aligned with a user's short-term preferences but are detrimental to their long-term interests. Through a series of experiments simulating votes on various policy issues in the U.S. context, we apply a temporal utility framework that weighs short- and long-term interests (simulating a trustee role) and compare voting outcomes to behavior-cloning models (simulating a delegate). We find that trustee-style predictions weighted toward long-term interests produce policy decisions that align more closely with expert consensus on well-understood issues, but also show greater bias toward models' default stances on topics lacking clear agreement. These findings reveal a fundamental trade-off in designing AI systems to represent human interests. Delegate models better preserve user autonomy but may diverge from well-supported policy positions, while trustee models can promote welfare on well-understood issues yet risk paternalism and bias on subjective topics.
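To make the delegate/trustee distinction concrete, the temporal weighting idea can be sketched as a simple convex combination of short- and long-term utilities. This is a hypothetical illustration only: the paper's actual formulation is not given in this summary, and the function names (`trustee_utility`, `vote`), the specific weight `w_long`, and the example utility values are all assumptions introduced here for exposition.

```python
# Hypothetical sketch of a time-weighted utility vote. The blend weight
# w_long interpolates between a pure delegate (w_long = 0, mirroring the
# user's immediate preference) and a pure trustee (w_long = 1, optimizing
# only long-term welfare). Not the paper's actual formulation.

def trustee_utility(u_short: float, u_long: float, w_long: float) -> float:
    """Blend short- and long-term utilities with weight w_long in [0, 1]."""
    if not 0.0 <= w_long <= 1.0:
        raise ValueError("w_long must lie in [0, 1]")
    return (1.0 - w_long) * u_short + w_long * u_long

def vote(u_short_yes: float, u_long_yes: float,
         u_short_no: float, u_long_no: float,
         w_long: float = 0.8) -> str:
    """Cast a vote for whichever option has higher time-weighted utility."""
    yes = trustee_utility(u_short_yes, u_long_yes, w_long)
    no = trustee_utility(u_short_no, u_long_no, w_long)
    return "yes" if yes >= no else "no"

# Example: a policy the user dislikes now (-0.2) but that serves their
# long-term interests (0.9). A trustee weighting flips the outcome that
# a delegate would produce.
print(vote(-0.2, 0.9, 0.3, -0.5, w_long=0.8))  # trustee votes "yes"
print(vote(-0.2, 0.9, 0.3, -0.5, w_long=0.0))  # delegate votes "no"
```

The example shows the trade-off the abstract describes: as `w_long` grows, decisions track judged long-term welfare rather than expressed preference, which is exactly where the autonomy-versus-paternalism tension enters.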