DALDALL: Data Augmentation for Lexical and Semantic Diverse in Legal Domain by leveraging LLM-Persona

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of data scarcity in legal information retrieval, where existing data augmentation techniques struggle to produce high-quality, domain-appropriate queries. To overcome this limitation, the authors propose a novel large language model–based data augmentation framework that incorporates professional role–specific prompts—such as those emulating lawyers, judges, and prosecutors—into the generation process. This approach, which uniquely integrates domain-expert personas into prompting strategies, significantly enhances both lexical diversity and semantic fidelity of the synthesized queries. Experimental results on the CLERC and COLIEE benchmarks demonstrate that the generated queries achieve lower Self-BLEU scores, indicating higher diversity, and when used to fine-tune dense retrievers, yield superior recall performance compared to current state-of-the-art baselines.
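The summary reports lower Self-BLEU as evidence of higher diversity. As background, a minimal sketch of how Self-BLEU is typically computed: each generated query is scored with BLEU against all the other queries as references, and the scores are averaged, so repetitive corpora score high and diverse corpora score low. This uses a simplified smoothed n-gram BLEU written from scratch for illustration, not the paper's evaluation code or a library implementation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=2):
    """Simplified BLEU: geometric mean of add-one-smoothed clipped
    n-gram precisions, times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        # Clip each candidate n-gram count by its max count in any reference.
        max_ref = Counter()
        for ref in references:
            for gram, c in Counter(ngrams(ref, n)).items():
                max_ref[gram] = max(max_ref[gram], c)
        clipped = sum(min(c, max_ref[gram]) for gram, c in cand_counts.items())
        total = sum(cand_counts.values())
        precisions.append((clipped + 1) / (total + 1))  # add-one smoothing
    log_p = sum(math.log(p) for p in precisions) / max_n
    ref_len = min(len(r) for r in references)
    bp = 1.0 if len(candidate) >= ref_len else math.exp(1 - ref_len / len(candidate))
    return bp * math.exp(log_p)

def self_bleu(corpus, max_n=2):
    """Average BLEU of each sentence against the rest of the corpus.
    Lower Self-BLEU means a more lexically diverse corpus."""
    scores = [bleu(sent, corpus[:i] + corpus[i + 1:], max_n)
              for i, sent in enumerate(corpus)]
    return sum(scores) / len(scores)
```

With this metric, a corpus of identical queries scores near 1.0, while queries with no shared n-grams score close to the smoothing floor, which is why diverse persona-generated queries yield lower Self-BLEU.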

📝 Abstract
Data scarcity remains a persistent challenge in low-resource domains. While existing data augmentation methods leverage the generative capabilities of large language models (LLMs) to produce large volumes of synthetic data, these approaches often prioritize quantity over quality and lack domain-specific strategies. In this work, we introduce DALDALL, a persona-based data augmentation framework tailored for legal information retrieval (IR). Our method employs domain-specific professional personas--such as attorneys, prosecutors, and judges--to generate synthetic queries that exhibit substantially greater lexical and semantic diversity than vanilla prompting approaches. Experiments on the CLERC and COLIEE benchmarks demonstrate that persona-based augmentation achieves improvements in lexical diversity as measured by Self-BLEU scores, while preserving semantic fidelity to the original queries. Furthermore, dense retrievers fine-tuned on persona-augmented data consistently achieve competitive or superior recall performance compared to those trained on original data or generic augmentations. These findings establish persona-based prompting as an effective strategy for generating high-quality training data in specialized, low-resource domains.
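The abstract describes conditioning query generation on professional personas such as attorneys, prosecutors, and judges. A minimal sketch of what such persona-conditioned prompting can look like, assuming an OpenAI-style chat-message format; the persona descriptions and the template here are illustrative assumptions, not the paper's actual prompts:

```python
# Hypothetical persona descriptions; the paper's real prompt wording
# is not reproduced here.
PERSONAS = {
    "attorney": "an attorney preparing arguments for a client's case",
    "prosecutor": "a prosecutor building a case against a defendant",
    "judge": "a judge researching precedent before writing an opinion",
}

def build_persona_prompt(persona: str, document: str) -> list:
    """Build chat messages asking an LLM to write a search query for
    `document` in the voice of the given legal professional."""
    role_desc = PERSONAS[persona]
    return [
        {"role": "system",
         "content": (f"You are {role_desc}. "
                     "Phrase queries in that professional's vocabulary.")},
        {"role": "user",
         "content": ("Write one search query this professional would issue "
                     "to retrieve the following case document:\n\n"
                     f"{document}")},
    ]

# Generating one query per persona for each source document would yield
# a synthetic training set with persona-driven lexical variation.
```

Iterating `build_persona_prompt` over all personas and documents, then sending each message list to a chat-completion endpoint, would produce the kind of persona-diverse synthetic queries the abstract describes.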
Problem

Research questions and friction points this paper is trying to address.

data augmentation
legal domain
lexical diversity
semantic fidelity
low-resource domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

persona-based prompting
legal information retrieval
data augmentation
lexical diversity
large language models
Janghyeok Choi
Department of Industrial Engineering, Seoul National University, Seoul, South Korea
Jaewon Lee
Korea University
Sungzoon Cho
Department of Industrial Engineering, Seoul National University, Seoul, South Korea