Generative Models, Humans, Predictive Models: Who Is Worse at High-Stakes Decision Making?

📅 2024-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the suitability of mainstream large language models (LLMs) for high-stakes judicial recidivism prediction, a domain demanding high accuracy, robustness, and fairness. Method: The authors conduct a comparative assessment against human judgments and domain-specific predictive models, employing consistency analysis, adversarial prompt engineering, irrelevant-information perturbation (e.g., extraneous photographs), and bias stress testing. Contribution/Results: The empirical analysis shows that the evaluated LLMs underperform domain-specialized models on accuracy, robustness, and fairness, and are significantly susceptible to irrelevant inputs. Critically, some widely adopted techniques intended to improve accuracy or mitigate bias exacerbate decision distortion rather than alleviate it. These findings challenge the viability of deploying LLMs as direct substitutes for human experts or purpose-built models in high-stakes decision making, providing empirical evidence for AI governance and responsible deployment in sensitive domains.

📝 Abstract
Despite strong advisories against it, large generative models (LMs) are already being used for decision-making tasks that were previously done by predictive models or humans. We put popular LMs to the test in a high-stakes decision-making task: recidivism prediction. Studying three closed-access and open-source LMs, we analyze the LMs not only in terms of accuracy, but also in terms of agreement with (imperfect, noisy, and sometimes biased) human predictions or existing predictive models. We conduct experiments that assess how providing different types of information, including distractor information such as photos, can influence LM decisions. We also stress test techniques designed to either increase accuracy or mitigate bias in LMs, and find that some have unintended consequences on LM decisions. Our results provide additional quantitative evidence for the wisdom that current LMs are not the right tools for these types of tasks.
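The agreement and perturbation analyses the abstract describes can be sketched with two simple metrics: how often LM predictions agree with human judgments, and how often an irrelevant distractor (such as a photo) flips an LM's decision. The data and helper names below are hypothetical illustrations under these assumptions, not the authors' code.

```python
def agreement_rate(preds_a, preds_b):
    """Fraction of cases where two sets of binary predictions agree."""
    assert len(preds_a) == len(preds_b)
    return sum(a == b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def flip_rate(before, after):
    """Fraction of decisions that change after a perturbation."""
    assert len(before) == len(after)
    return sum(a != b for a, b in zip(before, after)) / len(before)

# Synthetic example: 1 = predicted to recidivate, 0 = not.
human_preds = [1, 0, 1, 1, 0, 0, 1, 0]
lm_preds    = [1, 0, 0, 1, 0, 1, 1, 0]  # LM given the plain case facts
lm_distract = [1, 1, 0, 1, 1, 1, 0, 0]  # same LM after adding an irrelevant photo

print(agreement_rate(lm_preds, human_preds))  # 0.75
print(flip_rate(lm_preds, lm_distract))       # 0.375
```

A nonzero flip rate under a perturbation that carries no predictive information is itself evidence of non-robustness, independent of raw accuracy.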
Problem

Research questions and friction points this paper is trying to address.

Generative models in high-stakes decisions
Comparison with human and predictive models
Impact of information types on decisions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tested LMs in recidivism prediction tasks
Analyzed LMs against human and model predictions
Assessed impact of diverse information on LM decisions