🤖 AI Summary
This paper investigates the reasoning capabilities of large language models (LLMs) in preference-constrained many-to-one matching problems, exemplified by college admissions. To this end, we introduce the first systematic benchmark for matching markets, comprising 369 instances evaluated along three dimensions: feasibility, stability, and optimality. We conduct extensive experiments across both reasoning-oriented models (e.g., QwQ, GPT-oss) and standard autoregressive LLMs (e.g., Llama, Qwen, Mistral), comparing prompting strategies including chain-of-thought, role prompting, in-context learning, and self-feedback iterative prompting. Results show that reasoning models significantly outperform traditional LLMs; however, no prompting strategy is consistently best, and iterative prompting in particular is non-monotonic, exhibiting performance inflection points beyond which accuracy degrades. Our work uncovers both the promise and the limitations of LLMs in structured constraint reasoning, establishing a novel empirical benchmark and actionable insights for AI-augmented decision-making in matching markets.
📝 Abstract
Recent advances in reasoning with large language models (LLMs) have demonstrated strong performance on complex mathematical tasks, including combinatorial optimization. Techniques such as Chain-of-Thought and In-Context Learning have further enhanced this capability, making LLMs powerful and accessible tools for a wide range of users, including non-experts. However, applying LLMs to matching problems, which require reasoning under preferential and structural constraints, remains underexplored. To address this gap, we introduce a novel benchmark of 369 instances of the College Admission Problem, a canonical matching problem with preferences, to evaluate LLMs along three key dimensions: feasibility, stability, and optimality. We employ this benchmark to assess the performance of several open-weight LLMs. Our results first reveal that while LLMs can satisfy certain constraints, they struggle to meet all evaluation criteria consistently. They also show that reasoning LLMs, such as QwQ and GPT-oss, significantly outperform traditional models such as Llama, Qwen, and Mistral, defined here as models used without any dedicated reasoning mechanism. Moreover, we observe that LLMs react differently to the various prompting strategies tested, which include Chain-of-Thought, In-Context Learning, and role-based prompting, with no single prompt consistently offering the best performance. Finally, we report the performance of iterative prompting with auto-generated feedback and show that it is not monotonic: it can peak early and then decline significantly in later attempts. Overall, this work offers a new perspective on model reasoning performance and the effectiveness of prompting strategies in combinatorial optimization problems with preferential constraints.
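To make the evaluation criteria concrete, below is a minimal Python sketch (not the paper's benchmark code) of the stability criterion for a many-to-one college admissions matching: a matching is stable when no student and college would jointly prefer to deviate from it. All names, preference lists, and capacities here are illustrative, and complete preference lists are assumed.

```python
# Illustrative stability check for many-to-one matching (college admissions).
# Instance data below is made up; complete preference lists are assumed.

def blocking_pairs(student_prefs, college_prefs, capacity, match):
    """Return (student, college) pairs that would jointly deviate from `match`.

    `match` maps each student to a college (or None if unassigned).
    A pair (s, c) blocks when s prefers c to their assignment, and c either
    has a free seat or prefers s to its worst currently admitted student.
    """
    admitted = {c: [s for s, m in match.items() if m == c] for c in college_prefs}
    blocks = []
    for s, ranking in student_prefs.items():
        for c in ranking:                      # colleges in s's preference order
            if match.get(s) == c:              # reached s's own college: stop
                break
            if len(admitted[c]) < capacity[c]:
                blocks.append((s, c))          # free seat: c happily takes s
            else:
                worst = max(admitted[c], key=college_prefs[c].index)
                if college_prefs[c].index(s) < college_prefs[c].index(worst):
                    blocks.append((s, c))      # c would drop `worst` for s
    return blocks

student_prefs = {"s1": ["c1", "c2"], "s2": ["c1", "c2"], "s3": ["c1", "c2"]}
college_prefs = {"c1": ["s1", "s2", "s3"], "c2": ["s2", "s1", "s3"]}
capacity = {"c1": 1, "c2": 2}

stable = {"s1": "c1", "s2": "c2", "s3": "c2"}
unstable = {"s1": "c2", "s2": "c1", "s3": "c2"}

print(blocking_pairs(student_prefs, college_prefs, capacity, stable))    # []
print(blocking_pairs(student_prefs, college_prefs, capacity, unstable))  # [('s1', 'c1')]
```

Feasibility, in this setting, amounts to respecting capacities and assigning students only to colleges on their preference lists; optimality (e.g., the student-optimal stable matching produced by Gale-Shapley deferred acceptance) is a stronger property that this check alone does not verify.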