ALIGN: Prompt-based Attribute Alignment for Reliable, Responsible, and Personalized LLM-based Decision-Making

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of dynamically aligning large language models (LLMs) with users' diverse and evolving values and preferences in decision support, this paper proposes ALIGN, a fine-grained attribute alignment framework grounded in prompt engineering. ALIGN employs a modular architecture supporting dynamic prompt configuration, structured output generation with reasoning, and swappable LLM backends, enabling value-driven, personalized decision modeling. Its novelty lies in combining a qualitative side-by-side comparison interface with quantitative alignment evaluation metrics, validated through empirical studies in two high-stakes domains: public opinion analysis and medical triage. The open-source implementation demonstrates ALIGN's effectiveness on demographic and value alignment tasks, improving the reliability, interpretability, and accountability of LLM-based decisions. The framework provides a scalable, principled methodology for responsible, personalized AI-assisted decision-making in risk-sensitive applications.

📝 Abstract
Large language models (LLMs) are increasingly being used as decision aids. However, users have diverse values and preferences that can affect their decision-making, which requires novel methods for LLM alignment and personalization. Existing LLM comparison tools largely focus on benchmarking tasks, such as knowledge-based question answering. In contrast, our proposed ALIGN system focuses on dynamic personalization of LLM-based decision-makers through prompt-based alignment to a set of fine-grained attributes. Key features of our system include robust configuration management, structured output generation with reasoning, and several algorithm implementations with swappable LLM backbones, enabling different types of analyses. Our user interface enables a qualitative, side-by-side comparison of LLMs and their alignment to various attributes, with a modular backend for easy algorithm integration. Additionally, we perform a quantitative analysis comparing alignment approaches in two different domains: demographic alignment for public opinion surveys and value alignment for medical triage decision-making. The entire ALIGN framework is open source and will enable new research on reliable, responsible, and personalized LLM-based decision-makers.
Problem

Research questions and friction points this paper is trying to address.

Aligning LLMs with diverse user values and preferences
Personalizing LLM-based decision-making via prompt-based attribute alignment
Enabling reliable and responsible LLM decisions through dynamic alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt-based alignment for personalized LLM decisions
Modular backend with swappable LLM backbones
Qualitative and quantitative alignment analysis tools
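The innovations above can be illustrated with a minimal sketch of prompt-based attribute alignment with a swappable backend. The actual ALIGN API is not described in this summary, so every name below (`AlignmentConfig`, `build_aligned_prompt`, `decide`) is hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch: fine-grained attributes are injected into the prompt,
# and the LLM backend is a swappable callable (prompt text -> model output).

@dataclass
class AlignmentConfig:
    attributes: Dict[str, str]  # fine-grained attribute -> target value

def build_aligned_prompt(config: AlignmentConfig, question: str) -> str:
    """Render the attribute set into a prompt requesting structured output."""
    attr_lines = "\n".join(f"- {k}: {v}" for k, v in sorted(config.attributes.items()))
    return (
        "You are a decision aid. Align your answer to these attributes:\n"
        f"{attr_lines}\n\n"
        f"Question: {question}\n"
        'Respond as JSON with fields "decision" and "reasoning".'
    )

def decide(config: AlignmentConfig, question: str,
           backend: Callable[[str], str]) -> str:
    """Swappable-backend call: any LLM client fitting the signature works."""
    return backend(build_aligned_prompt(config, question))

# Stub backend so the sketch runs without an actual LLM: echoes the last
# prompt line, standing in for a real model client.
echo_backend = lambda prompt: prompt.splitlines()[-1]

cfg = AlignmentConfig(attributes={"age_group": "65+", "risk_tolerance": "low"})
print(decide(cfg, "Which triage category applies?", echo_backend))
```

Because the backend is just a callable, swapping model providers means passing a different function, which is one plausible reading of the "swappable LLM backbones" design noted above.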