🤖 AI Summary
Existing methods for generating distractors for mathematical multiple-choice questions (MCQs) struggle to align LLM outputs with students' authentic error patterns. To address this, we propose LookAlike, a training paradigm that first exploits the model's self-inconsistency to automatically mine high-quality synthetic preference pairs, then alternates supervised fine-tuning (SFT) with direct preference optimization (DPO) so that generated distractors track the targeted student misconceptions while training remains stable. Evaluated under an LLM-as-a-judge framework on over 1,400 real-world mathematical MCQs, the method achieves 51.6% accuracy on distractor generation and 57.2% on error generation, versus 45.6% and 47.7% for the prior state of the art. The core contribution is the fully automated construction of error-consistent preference data without human annotation, combined with a stable alternating SFT/DPO optimization scheme.
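To make the mining step concrete, here is a minimal Python sketch under one reading of the summary: sample several distractors per (question, misconception) item, have a judge label each candidate, and pair judge-approved generations with judge-rejected ones. The helpers `generate_distractor` and `judge_matches_error`, the prompt format, and the sample count are hypothetical stand-ins, not interfaces from the paper.

```python
from typing import Callable

def mine_preference_pairs(
    items: list[dict],
    generate_distractor: Callable[[str, str], str],
    judge_matches_error: Callable[[str, str, str], bool],
    samples_per_item: int = 8,
) -> list[dict]:
    """Turn the model's self-inconsistency into (prompt, chosen, rejected) triples."""
    pairs = []
    for item in items:
        q, err = item["question"], item["misconception"]
        candidates = [generate_distractor(q, err) for _ in range(samples_per_item)]
        # Judge each candidate once: does it reflect the target misconception?
        labeled = [(c, judge_matches_error(q, err, c)) for c in candidates]
        good = [c for c, ok in labeled if ok]
        bad = [c for c, ok in labeled if not ok]
        # An item contributes pairs only when the model disagrees with itself,
        # i.e. it produced both on-error and off-error distractors.
        prompt = f"Question: {q}\nMisconception: {err}\nDistractor:"
        pairs.extend(
            {"prompt": prompt, "chosen": g, "rejected": b}
            for g in good for b in bad
        )
    return pairs
```

Note that only items where the model is inconsistent with itself yield pairs, which is what lets the dispreferred samples come for free, with no human annotation.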
📝 Abstract
Large language models (LLMs) are increasingly used to generate distractors for multiple-choice questions (MCQs), especially in domains like math education. However, existing approaches struggle to ensure that the generated distractors are consistent with common student errors. We propose LookAlike, a method that improves error-distractor consistency via preference optimization. Our two main innovations are: (a) mining synthetic preference pairs from model inconsistencies, and (b) alternating supervised fine-tuning (SFT) with Direct Preference Optimization (DPO) to stabilize training. Unlike prior work that relies on heuristics or manually annotated preference data, LookAlike uses its own generation inconsistencies as dispreferred samples, enabling scalable and stable training. Evaluated on a real-world dataset of 1,400+ math MCQs, LookAlike achieves 51.6% accuracy in distractor generation and 57.2% in error generation under LLM-as-a-judge evaluation, outperforming an existing state-of-the-art method (45.6% / 47.7%). These improvements highlight the effectiveness of preference-based regularization and inconsistency mining for generating consistent math MCQ distractors at scale.
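The alternating schedule could look like the sketch below, assuming the Hugging Face TRL library for the SFT and DPO phases. The base model, the toy datasets, the round count, and the trainer keyword arguments (which vary across TRL versions) are illustrative assumptions, not details taken from the paper.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer, SFTConfig, SFTTrainer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# SFT data: gold error/distractor pairs written out in full.
sft_data = Dataset.from_list([
    {"text": "Question: 1/2 + 1/3 = ?\n"
             "Misconception: adds numerators and denominators.\n"
             "Distractor: 2/5"},
])
# DPO data: preference pairs mined from the model's own inconsistencies.
dpo_data = Dataset.from_list([
    {"prompt": "Question: 1/2 + 1/3 = ?\n"
               "Misconception: adds numerators and denominators.\n"
               "Distractor:",
     "chosen": " 2/5",     # reflects the target misconception
     "rejected": " 5/6"},  # the correct answer: useless as a distractor
])

for r in range(3):  # number of alternation rounds is an assumption
    # SFT phase: re-anchor the model on gold error/distractor pairs.
    SFTTrainer(
        model=model,
        args=SFTConfig(output_dir=f"sft-round{r}", num_train_epochs=1),
        train_dataset=sft_data,
        processing_class=tokenizer,
    ).train()
    # DPO phase: push judge-approved generations above judge-rejected ones.
    DPOTrainer(
        model=model,
        args=DPOConfig(output_dir=f"dpo-round{r}", num_train_epochs=1),
        train_dataset=dpo_data,
        processing_class=tokenizer,
    ).train()
```

The abstract credits the alternation itself with stabilizing training; a plausible reading is that the interleaved SFT pass keeps the policy anchored to fluent, well-formed distractors while the DPO pass separates on-error generations from off-error ones.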