🤖 AI Summary
Prior research on patent claim generation relies heavily on USPTO data, limiting jurisdictional adaptability (particularly for European patent law) and failing to capture the diversity of drafting norms across legal systems. Method: We introduce EPD, the first high-quality dataset of granted European patents, spanning multiple technical domains and including a challenging subset that reflects real-world complexity. We propose a novel preprocessing strategy that integrates structured metadata with legal-context-aware text processing, and fine-tune large language models (LLMs) on EPD. Contribution/Results: EPD-finetuned models significantly outperform both USPTO-based baselines and GPT-4o in claim quality and cross-domain generalization. However, performance degrades markedly on the challenging subset, revealing a critical bottleneck in LLMs' ability to generate complex, legally compliant claims. This pinpoints a key technical gap and provides a clear direction for future advances in legal-domain LLMs.
📝 Abstract
Drafting patent claims is time-intensive, costly, and requires professional skill. Researchers have therefore investigated large language models (LLMs) to assist inventors in writing claims. However, existing work has largely relied on datasets from the United States Patent and Trademark Office (USPTO). To broaden research across jurisdictions, drafting conventions, and legal standards, we introduce EPD, a European patent dataset. EPD provides rich textual data and structured metadata to support multiple patent-related tasks, including claim generation. The dataset enriches the field in three critical aspects: (1) Jurisdictional diversity: patents from different offices vary in legal and drafting conventions. EPD fills a critical gap by providing a benchmark for European patents, enabling more comprehensive evaluation. (2) Quality improvement: EPD offers high-quality granted patents with finalized, legally approved texts, whereas prior datasets consist of unexamined or provisional patent applications. Experiments show that LLMs fine-tuned on EPD significantly outperform those trained on previous datasets, and even GPT-4o, in claim quality and cross-domain generalization. (3) Real-world simulation: we propose a difficult subset of EPD to better reflect the real-world challenges of claim generation. Results reveal that all tested LLMs perform substantially worse on these challenging samples, highlighting the need for future research.