🤖 AI Summary
To address the challenge of dynamically aligning large language models (LLMs) with users’ diverse and evolving values and preferences in decision support, this paper proposes ALIGN, a framework for prompt-based alignment of LLM decision-makers to sets of fine-grained attributes. ALIGN employs a modular architecture supporting robust configuration management, structured output generation with reasoning, and multiple algorithm implementations with swappable LLM backbones, enabling personalized, value-driven decision modeling. Its novelty lies in combining a qualitative, side-by-side comparison interface with quantitative alignment analyses, demonstrated in two high-stakes domains: demographic alignment for public opinion surveys and value alignment for medical triage decision-making. The framework is open source and is intended to enable new research on reliable, responsible, and personalized LLM-based decision-making in risk-sensitive applications.
📝 Abstract
Large language models (LLMs) are increasingly being used as decision aids. However, users have diverse values and preferences that can affect their decision-making, which requires novel methods for LLM alignment and personalization. Existing LLM comparison tools largely focus on benchmarking tasks, such as knowledge-based question answering. In contrast, our proposed ALIGN system focuses on dynamic personalization of LLM-based decision-makers through prompt-based alignment to a set of fine-grained attributes. Key features of our system include robust configuration management, structured output generation with reasoning, and several algorithm implementations with swappable LLM backbones, enabling different types of analyses. Our user interface enables a qualitative, side-by-side comparison of LLMs and their alignment to various attributes, with a modular backend for easy algorithm integration. Additionally, we perform a quantitative analysis comparing alignment approaches in two different domains: demographic alignment for public opinion surveys and value alignment for medical triage decision-making. The entire ALIGN framework is open source and will enable new research on reliable, responsible, and personalized LLM-based decision-makers.
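The core idea of prompt-based alignment to fine-grained attributes with swappable LLM backbones can be illustrated with a minimal sketch. All names below (`AttributeAlignedDecider`, `echo_backend`, the attribute keys, and the JSON response format) are hypothetical illustrations, not the actual ALIGN API:

```python
# Minimal sketch of attribute-conditioned prompting with a swappable backend.
# Hypothetical names throughout; not the ALIGN implementation.
from dataclasses import dataclass, field
from typing import Callable, Dict

# A "backend" is any callable mapping a prompt to a completion, so local
# models and hosted APIs are interchangeable behind the same interface.
Backend = Callable[[str], str]

@dataclass
class AttributeAlignedDecider:
    backend: Backend
    # Fine-grained attribute profile to align to, e.g. {"risk_tolerance": "low"}.
    attributes: Dict[str, str] = field(default_factory=dict)

    def build_prompt(self, scenario: str) -> str:
        # Inject the target attribute profile into the prompt, then request
        # a structured decision with an explicit reasoning field.
        profile = "\n".join(f"- {k}: {v}" for k, v in sorted(self.attributes.items()))
        return (
            "You are a decision-maker with the following attribute profile:\n"
            f"{profile}\n\n"
            f"Scenario: {scenario}\n"
            'Respond as JSON: {"reasoning": "...", "decision": "..."}'
        )

    def decide(self, scenario: str) -> str:
        return self.backend(self.build_prompt(scenario))

# Stub backend so the sketch runs end to end without a real LLM.
def echo_backend(prompt: str) -> str:
    return '{"reasoning": "stub", "decision": "treat_immediately"}'

decider = AttributeAlignedDecider(
    backend=echo_backend,
    attributes={"risk_tolerance": "low", "fairness": "high"},
)
print(decider.decide("Two patients need the last dose of medication."))
```

Because the backend is just a callable, swapping model providers means swapping one function, while the attribute profile and structured-output contract stay fixed; this mirrors, at toy scale, the kind of modular backend the system description implies.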