🤖 AI Summary
This work addresses reasoning-intensive regression (RiR), a challenging task requiring precise numerical attribute inference (e.g., Likert-scale scores, domain-specific retrieval scores) from text under data scarcity and computational constraints. The authors propose MENTAT, a parameter-efficient framework that integrates batched reflective prompt optimization with neural ensembling over a frozen large language model (LLM), eliminating full-parameter fine-tuning. Instead, MENTAT leverages prompt engineering and lightweight Transformer encoders to achieve accurate, scalable regression. Evaluated on three real-world RiR tasks, MENTAT achieves up to 65% improvement over strong baselines, substantially outperforming existing few-shot regression methods. The contributions include: (1) a simple, tuning-free paradigm for numerical semantic reasoning in low-resource settings; (2) an initial benchmark and evaluation framework for RiR; and (3) empirical evidence that prompt-based ensembling is a viable alternative to parameter adaptation for regression over frozen LLMs.
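The combination summarized above (scoring text with a frozen LLM under several optimized prompts, then fusing the scores with a small learned head) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the mock scorer, the prompt names, and the linear head (standing in for the paper's lightweight Transformer encoders) are all assumptions.

```python
# Hypothetical sketch of ensembling over a frozen LLM for regression.
# frozen_llm_score, PROMPTS, and the linear head are illustrative
# stand-ins, not the MENTAT implementation.

def frozen_llm_score(prompt: str, text: str) -> float:
    # Stand-in for a frozen LLM that returns a numeric score for `text`
    # under `prompt`; here, a cheap deterministic mock on character codes.
    s = sum(ord(c) for c in prompt + text)
    return (s % 97) / 96.0

# Prompt variants, e.g. produced by a reflective prompt-optimization loop.
PROMPTS = ["rubric-v1", "rubric-v2", "rubric-v3"]

def features(text: str) -> list[float]:
    # One score per prompt variant; these form the ensemble's input.
    return [frozen_llm_score(p, text) for p in PROMPTS]

def train_ensemble(texts, targets, lr=0.1, epochs=200):
    # Lightweight linear head fit by SGD on squared error; the paper
    # uses small neural encoders, but the idea (train only a cheap
    # head over frozen-LLM outputs) is the same.
    w, b = [0.0] * len(PROMPTS), 0.0
    for _ in range(epochs):
        for x, y in zip(map(features, texts), targets):
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, text):
    x = features(text)
    return sum(wi * xi for wi, xi in zip(w, x)) + b
```

The point of the sketch is that no LLM parameters are touched: only the prompts and the tiny head change, which keeps the approach cheap under the data and compute limits RiR assumes.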
📝 Abstract
AI researchers and practitioners increasingly apply large language models (LLMs) to what we call reasoning-intensive regression (RiR), i.e., deducing subtle numerical properties from text. Unlike standard language regression tasks, e.g., sentiment or similarity, RiR often appears instead in ad-hoc problems like rubric-based scoring or domain-specific retrieval, where much deeper analysis of text is required while only limited task-specific training data and computation are available. We cast three realistic problems as RiR tasks to establish an initial benchmark, and use it to test our hypothesis that both prompting frozen LLMs and fine-tuning Transformer encoders via gradient descent will often struggle in RiR. We then propose MENTAT, a simple and lightweight method that combines batch-reflective prompt optimization with neural ensemble learning. MENTAT achieves up to 65% improvement over both baselines, though substantial room remains for future advances in RiR.