LLM-Powered Preference Elicitation in Combinatorial Assignment

📅 2025-02-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the high cognitive burden that traditional multi-round preference elicitation (PE) imposes on users in combinatorial assignment. We propose using large language models (LLMs) as proxies for human preferences, enabling one-shot, high-fidelity elicitation. To handle the new challenges this introduces, including LLM response variability and added computational cost, we present the first LLM-adapted preference proxy framework, which integrates combinatorial optimization, preference modeling, and course allocation mechanisms. The method eliminates iterative human-in-the-loop queries by parsing natural-language preference reports directly into utility representations. On real-world course allocation tasks, it improves allocative efficiency by up to 20%, and the results are robust across models (GPT, Claude, etc.) and tolerant of fluctuations in LLM output quality.

📝 Abstract
We study the potential of large language models (LLMs) as proxies for humans to simplify preference elicitation (PE) in combinatorial assignment. While traditional PE methods rely on iterative queries to capture preferences, LLMs offer a one-shot alternative with reduced human effort. We propose a framework for LLM proxies that can work in tandem with SOTA ML-powered preference elicitation schemes. Our framework handles the novel challenges introduced by LLMs, such as response variability and increased computational costs. We experimentally evaluate the efficiency of LLM proxies against human queries in the well-studied course allocation domain, and we investigate the model capabilities required for success. We find that our approach improves allocative efficiency by up to 20%, and these results are robust across different LLMs and to differences in quality and accuracy of reporting.
Problem

Research questions and friction points this paper is trying to address.

Traditional multi-round PE imposes a high cognitive burden on users in combinatorial assignment
Can LLMs serve as one-shot proxies for human preferences within SOTA ML-powered PE schemes?
LLM proxies introduce new challenges: response variability and increased computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

First LLM-adapted preference proxy framework, working in tandem with SOTA ML-powered PE schemes
Eliminates iterative human queries by parsing natural-language preference reports into utility representations
Improves allocative efficiency by up to 20%, robust across LLMs and across reporting quality
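The core idea above (an LLM answering value queries on behalf of a user, with aggregation to absorb response variability) can be illustrated with a minimal sketch. The `query_llm` function below is a hypothetical stand-in for a real model call, here a naive keyword matcher; the paper's actual prompts and interface are not reproduced here.

```python
import statistics

def query_llm(report: str, bundle: list[str]) -> float:
    """Hypothetical stand-in for an LLM call: score a bundle of courses
    against a free-text preference report by naive keyword matching."""
    liked = [w.strip().lower() for w in report.split(",")]
    return float(sum(1 for course in bundle if course.lower() in liked))

def proxy_value(report: str, bundle: list[str], samples: int = 3) -> float:
    """One-shot LLM preference proxy for a value query: ask the
    (potentially stochastic) model several times and take the median,
    a simple way to mitigate response variability."""
    answers = [query_llm(report, bundle) for _ in range(samples)]
    return statistics.median(answers)

report = "machine learning, game theory, databases"
print(proxy_value(report, ["Machine Learning", "Compilers"]))  # 1.0
```

In the actual framework, such value queries would feed an ML-powered elicitation scheme rather than being used directly, and aggregation over repeated samples is one plausible way to trade extra computation for robustness.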