Large Language Models for Patent Classification: Strengths, Trade-offs, and the Long Tail Effect

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Patent classification faces significant challenges due to the deep hierarchical structure of the Cooperative Patent Classification (CPC) system, multi-label assignments, and extreme class imbalance, which leads to particularly poor performance on rare technical categories. This study systematically evaluates BERT-based encoders (e.g., SciBERT, PatentSBERTa) and open-weight large language models (LLMs) under various strategies, including zero-shot, few-shot, retrieval-augmented generation (RAG), and parameter-efficient fine-tuning (PEFT). The work reveals a complementary relationship between the two model families: encoders achieve higher overall accuracy and are up to three orders of magnitude more computationally efficient at inference, whereas LLMs demonstrate superior performance on rare CPC subclasses, particularly at higher hierarchical levels. These findings provide empirical support for hybrid classification systems that balance computational efficiency with robust coverage of long-tail categories.
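The summary's contrast between "higher overall accuracy" and poor rare-class coverage comes down to how aggregate metrics are averaged. A minimal sketch below (with made-up subclass codes and counts, not figures from the paper) shows how micro-averaged F1 can look strong while macro-averaged F1 exposes the long tail:

```python
# Illustrative only: why aggregate (micro) metrics can hide poor
# performance on rare CPC subclasses. The subclass codes and the
# tp/fp/fn counts are invented for this sketch.

def f1(tp, fp, fn):
    """F1 score from true-positive / false-positive / false-negative counts."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def micro_macro_f1(per_class_counts):
    """per_class_counts: dict mapping class -> (tp, fp, fn) tuples."""
    tps = sum(c[0] for c in per_class_counts.values())
    fps = sum(c[1] for c in per_class_counts.values())
    fns = sum(c[2] for c in per_class_counts.values())
    micro = f1(tps, fps, fns)                     # pools all counts: frequent classes dominate
    macro = sum(f1(*c) for c in per_class_counts.values()) / len(per_class_counts)
    return micro, macro                           # macro weights every class equally

# One frequent subclass classified well, two rare ones classified poorly.
counts = {
    "H01L (frequent)": (900, 50, 50),
    "B82Y (rare)":     (1, 4, 9),
    "G21K (rare)":     (2, 3, 8),
}
micro, macro = micro_macro_f1(counts)
print(f"micro-F1 = {micro:.3f}")  # dominated by the frequent class
print(f"macro-F1 = {macro:.3f}")  # pulled down by the rare classes
```

Under these toy counts micro-F1 stays high while macro-F1 roughly halves, which is the pattern the study attributes to encoder-based models on imbalanced CPC data.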

📝 Abstract
Patent classification into CPC codes underpins large-scale analyses of technological change but remains challenging due to its hierarchical, multi-label, and highly imbalanced structure. While supervised encoder-based models became the de facto standard for large-scale patent classification before the advent of generative AI, recent advances in large language models (LLMs) raise questions about whether they can provide complementary capabilities, particularly for rare or weakly represented technological categories. In this work, we perform a systematic comparison of encoder-based classifiers (BERT, SciBERT, and PatentSBERTa) and open-weight LLMs on a highly imbalanced benchmark dataset (USPTO 70k). We evaluate LLMs under zero-shot, few-shot, and retrieval-augmented prompting, and further assess parameter-efficient fine-tuning of the best-performing model. Our results show that encoder-based models achieve higher aggregate performance, driven by strong results on frequent CPC subclasses, but struggle on rare ones. In contrast, LLMs achieve relatively higher performance on infrequent subclasses, often associated with early-stage, cross-domain, or weakly institutionalised technologies, particularly at higher hierarchical levels. These findings indicate that encoder-based and LLM-based approaches play complementary roles in patent classification. We additionally quantify inference time and energy consumption, showing that encoder-based models are up to three orders of magnitude more efficient than LLMs. Overall, our results inform responsible patentometrics and technology mapping, and motivate hybrid classification approaches that combine encoder efficiency with the long-tail coverage of LLMs under computational and environmental constraints.
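The abstract evaluates LLMs under zero-shot, few-shot, and retrieval-augmented prompting. A minimal sketch of how such prompts can be assembled is shown below; the template, subclass codes, and example texts are illustrative assumptions, since the paper's exact prompt format is not given here. With an empty demonstration list the same function yields a zero-shot prompt; in a retrieval-augmented setting the demonstrations would be retrieved neighbours of the query abstract.

```python
# Hedged sketch of a few-shot prompt for CPC subclass prediction.
# All codes and texts below are illustrative, not from the paper.

def build_prompt(abstract, examples, subclasses):
    """Assemble a multi-label CPC classification prompt.

    abstract:   the patent abstract to classify.
    examples:   list of (abstract, cpc_codes) demonstration pairs;
                pass [] for zero-shot prompting.
    subclasses: candidate CPC subclass codes offered to the model.
    """
    lines = [
        "Assign one or more CPC subclass codes to the patent abstract.",
        "Candidate subclasses: " + ", ".join(subclasses),
        "",
    ]
    for text, codes in examples:          # few-shot demonstrations
        lines += [f"Abstract: {text}", f"CPC codes: {', '.join(codes)}", ""]
    lines += [f"Abstract: {abstract}", "CPC codes:"]
    return "\n".join(lines)

demo = [("A semiconductor device with a stacked gate structure.", ["H01L"])]
prompt = build_prompt(
    "A carbon-nanotube field-effect transistor with a wrap-around gate.",
    demo,
    ["H01L", "B82Y", "G06F"],
)
print(prompt)
```

The prompt ends with an open "CPC codes:" slot for the model to complete, which keeps the expected output format identical across the zero-shot, few-shot, and retrieval-augmented settings.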
Problem

Research questions and friction points this paper is trying to address.

patent classification
long tail effect
class imbalance
hierarchical multi-label classification
CPC codes
Innovation

Methods, ideas, or system contributions that make the work stand out.

large language models
patent classification
long tail effect
encoder-based models
retrieval-augmented prompting
Lorenzo Emer
Institute of Economics and L'EMbeDS, Scuola Superiore Sant'Anna, Piazza Martiri della Libertà, 33, Pisa, 56127, Italy; Department of Computer Science, University of Pisa, Largo B. Pontecorvo 3, Pisa, 56126, Italy
Marco Lippi
University of Florence
Artificial Intelligence; Machine Learning; Natural Language Processing; Argument Mining; AI & Law
Andrea Mina
Institute of Economics and L'EMbeDS, Scuola Superiore Sant'Anna, Piazza Martiri della Libertà, 33, Pisa, 56127, Italy; Centre for Business Research, University of Cambridge, 11–12 Trumpington Street, Cambridge, CB2 1QA, United Kingdom
Andrea Vandin
Institute of Economics and L'EMbeDS, Scuola Superiore Sant'Anna, Piazza Martiri della Libertà, 33, Pisa, 56127, Italy; DTU Compute, Technical University of Denmark, Anker Engelunds Vej 101, Kongens Lyngby, 2800, Denmark