🤖 AI Summary
This study addresses the lack of effective AI support for fact-checking in non-litigation legal practice, where attorneys remain cautious about adopting generative AI in high-stakes contexts due to concerns over accuracy, confidentiality, and liability. Through semi-structured interviews with 18 practicing lawyers, the research systematically uncovers current fact-checking practices and the key barriers to GenAI adoption. Grounded in the core requirements of trustworthiness and accountability, the work proposes design principles for human-AI collaborative systems tailored to legal fact-checking. It introduces an auditable collaboration framework that jointly balances efficiency, professional judgment, and legal ethics, offering both theoretical foundations and practical guidance for developing reliable, ethically compliant legal AI tools.
📝 Abstract
Fact verification is a critical yet underexplored component of non-litigation legal practice. While existing research has examined automation in legal workflows and human-AI collaboration in high-stakes domains, little is known about how GenAI can support fact verification, a task that demands prudent judgment and strict accountability. To address this gap, we conducted semi-structured interviews with 18 lawyers to understand their current verification practices, attitudes toward GenAI adoption, and expectations for future systems. We found that while lawyers use GenAI for low-risk tasks such as drafting and language optimization, concerns over accuracy, confidentiality, and liability currently limit its adoption for fact verification. These concerns translate into core design requirements for AI systems that are trustworthy and accountable. Based on these requirements, we contribute design insights for human-AI collaboration in legal fact verification, emphasizing the development of auditable systems that balance efficiency with professional judgment and uphold ethical and legal accountability in high-stakes practice.