Trade-Offs in Deploying Legal AI: Insights from a Public Opinion Study to Guide AI Risk Management

πŸ“… 2026-02-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study addresses the critical gap in current deployments of generative AI in the legal domain, which systematically neglect public risk perceptions and rely predominantly on expert-driven regulatory frameworks that marginalize the voices of affected populations. Drawing on a representative sample of 488 German citizens, the research integrates survey data, statistical modeling, and thematic analysis to incorporate public risk–benefit trade-offs into AI risk governance for the first time. It identifies core concerns centered on transparency, fairness, and accountability, uncovers key predictors shaping risk acceptance, and distills policy-relevant trade-off themes. These findings provide an empirical foundation for developing AI governance mechanisms that are not only compliant with legal standards but also ethically sound and socially legitimate.

πŸ“ Abstract
Generative AI tools are increasingly used for legal tasks, including legal research, drafting documents, and even legal decision-making. As in other domains, the use of GenAI in the legal domain comes with various risks and benefits that need to be properly managed to ensure implementation in a way that serves public values and protects human rights. While the EU mandates risk assessment and audits before market introduction for some use cases (e.g., use by judges for the administration of justice), other use cases do not fall under the AI Act's high-risk classifications (e.g., use by citizens for legal consultation or drafting documents). Further, current risk management practices prioritize expert judgment in identifying and prioritizing risk factors, without a corresponding legal requirement to consult affected communities. Given the societal importance of the legal sector and the potentially transformative impact of GenAI in this sector, the acceptability and legitimacy of GenAI solutions also depend on public perceptions and on a better understanding of the risks and benefits citizens associate with the use of AI in the legal sector. In response, this paper presents data from a representative sample of German citizens (n=488) outlining citizens' perspectives on the use of GenAI for two legal tasks: legal consultation and legal mediation. Concretely, we i) systematically map risk and benefit factors for both legal tasks, ii) describe predictors that influence risk acceptance of the use of GenAI for those tasks, and iii) highlight emerging trade-off themes that citizens engage in when weighing risk acceptability. Our results provide an empirical overview of citizens' concerns regarding risk management of GenAI for the legal domain, foregrounding critical themes that complement current risk assessment procedures.
Problem


Legal AI
Generative AI
Public Opinion
Risk Management
AI Governance
Innovation


public opinion
legal AI
risk management
generative AI
human-centered AI
πŸ”Ž Similar Papers
No similar papers found.
Kimon Kieslich
AI, Media & Democracy Lab, University of Amsterdam, The Netherlands; UKUDLA, University of Hohenheim, Germany; Amsterdam School of Communication Research, University of Amsterdam, The Netherlands; Institute for Information Law, University of Amsterdam, The Netherlands
Sophie Morosoli
AI, Media & Democracy Lab, University of Amsterdam, The Netherlands; Amsterdam School of Communication Research, University of Amsterdam, The Netherlands
Nicholas Diakopoulos
Professor, Northwestern University
Computational Journalism, Algorithmic Accountability, Human Computer Interaction, AI Ethics, Social
Natali Helberger
Institute for Information Law
Data analytics, AI, personalised communication, platforms, law & regulation