PatentGPT: A Large Language Model for Patent Drafting Using Knowledge-based Fine-tuning Method

📅 2024-08-26
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) lack the domain-specific knowledge needed for innovative concept generation and patent drafting. To address this, we propose Knowledge Fine-Tuning (KFT), a novel framework integrating knowledge graph pre-training, domain-specific supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF), yielding the first LLM specialized for intellectual property (IP) generation. Leveraging a curated patent corpus, structured prompt engineering, and knowledge graph embeddings, our model enables end-to-end generation of legally compliant patent documents, including claims and detailed descriptions, from technical ideas. Evaluated on a dedicated patent benchmark, it achieves approximately fourfold improvements over state-of-the-art models in technical accuracy, legal compliance, and patentability, demonstrating substantial gains in both output quality and domain-specific fidelity.

📝 Abstract
As humanity stands on the brink of a new era of technological innovation, the ability to rapidly transform creative ideas into protected intellectual property (IP) is more crucial than ever. However, the conventional processes for patent drafting are fraught with challenges, demanding a nuanced understanding of advanced field knowledge and technical concepts. Existing large language models (LLMs), while powerful, often fall short in this IP creation domain due to their lack of the specialized knowledge and context-awareness necessary for generating technically accurate patent documents. To bridge this critical gap, we propose a groundbreaking framework for Knowledge Fine-Tuning (KFT) of LLMs, designed to endow AI with the ability to autonomously mine, understand, and apply domain-specific knowledge. Our model, PatentGPT, leverages a unique combination of knowledge graph-based pre-training, domain-specific supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF). Through extensive evaluation, PatentGPT has demonstrated outstanding performance, scoring up to approximately 400% higher on patent-related benchmark tests compared to state-of-the-art models. By enhancing the model's capability to not only assist but also augment human creativity and innovation, the KFT method sets a new standard for AI-driven intellectual property generation, paving the way for more efficient and effective invention processes.
Problem

Research questions and friction points this paper is trying to address.

LLMs lack the specialized knowledge needed for innovative concept generation
Bridging the gap in autonomous, domain-specific knowledge application
Enhancing patent and concept generation with AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge fine-tuning (KFT) framework for LLMs
Integrates knowledge-injection pre-training and supervised fine-tuning
Uses reinforcement learning from human feedback (RLHF)
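The three contributions above form a staged training pipeline: knowledge-injection pre-training, then SFT, then RLHF. A minimal sketch of how such stages compose is below; all names (`inject_knowledge`, `supervised_finetune`, `rlhf_align`, `ModelState`) are illustrative placeholders, not the paper's actual implementation or API.

```python
from dataclasses import dataclass, field

@dataclass
class ModelState:
    """Toy stand-in for LLM weights; records which training stages have run."""
    stages: list = field(default_factory=list)

def inject_knowledge(model: ModelState, knowledge_graph: dict) -> ModelState:
    # Stage 1: knowledge graph-based pre-training (knowledge injection).
    model.stages.append("kg_pretrain")
    return model

def supervised_finetune(model: ModelState, patent_corpus: list) -> ModelState:
    # Stage 2: domain-specific SFT on a curated patent corpus.
    model.stages.append("sft")
    return model

def rlhf_align(model: ModelState, preference_data: list) -> ModelState:
    # Stage 3: alignment via reinforcement learning from human feedback.
    model.stages.append("rlhf")
    return model

def run_kft(model: ModelState, kg: dict, corpus: list, prefs: list) -> ModelState:
    """Compose the three KFT stages in the order the paper describes."""
    return rlhf_align(supervised_finetune(inject_knowledge(model, kg), corpus), prefs)

trained = run_kft(ModelState(), kg={}, corpus=[], prefs=[])
print(trained.stages)  # ['kg_pretrain', 'sft', 'rlhf']
```

The point of the sketch is only the ordering: each stage consumes the model produced by the previous one, so knowledge injection happens before task-specific SFT, which in turn precedes preference alignment.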
Runtao Ren
City University of Hong Kong
Generative AI
Jian Ma
Department of Information Systems, City University of Hong Kong, Kowloon Tong, Hong Kong