🤖 AI Summary
Patent classification faces significant challenges due to the deep hierarchical structure of the Cooperative Patent Classification (CPC) system, multi-label assignments, and extreme class imbalance, which leads to particularly poor performance on rare technical categories. This study systematically evaluates BERT-based encoders (e.g., SciBERT, PatentSBERTa) and open-source large language models (LLMs) under various strategies, including zero-shot, few-shot, retrieval-augmented generation (RAG), and parameter-efficient fine-tuning (PEFT). The work reveals a complementary relationship between the two model families: encoders achieve higher overall accuracy and are three orders of magnitude more computationally efficient at inference, whereas LLMs perform better on rare CPC subclasses, particularly at higher levels of the hierarchy. These findings provide empirical support for hybrid classification systems that balance computational efficiency with robust coverage of long-tail categories.
📝 Abstract
Patent classification into CPC codes underpins large-scale analyses of technological change but remains challenging due to its hierarchical, multi-label, and highly imbalanced structure. While supervised encoder-based models became the de facto standard for large-scale patent classification before the advent of generative AI, recent advances in large language models (LLMs) raise the question of whether they can provide complementary capabilities, particularly for rare or weakly represented technological categories. In this work, we perform a systematic comparison of encoder-based classifiers (BERT, SciBERT, and PatentSBERTa) and open-weight LLMs on a highly imbalanced benchmark dataset (USPTO-70k). We evaluate LLMs under zero-shot, few-shot, and retrieval-augmented prompting, and further assess parameter-efficient fine-tuning of the best-performing model. Our results show that encoder-based models achieve higher aggregate performance, driven by strong results on frequent CPC subclasses, but struggle on rare ones. In contrast, LLMs achieve relatively higher performance on infrequent subclasses, often associated with early-stage, cross-domain, or weakly institutionalised technologies, particularly at higher hierarchical levels. These findings indicate that encoder-based and LLM-based approaches play complementary roles in patent classification. We additionally quantify inference time and energy consumption, showing that encoder-based models are up to three orders of magnitude more efficient than LLMs. Overall, our results inform responsible patentometrics and technology mapping, and motivate hybrid classification approaches that combine encoder efficiency with the long-tail coverage of LLMs under computational and environmental constraints.
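To make the prompting strategies named above concrete, here is a minimal sketch of how zero-shot and few-shot prompts for CPC subclass prediction might be constructed. The subclass inventory, prompt wording, and function names are illustrative assumptions, not the paper's actual setup, and the model call itself is omitted.

```python
# Hypothetical sketch: zero-shot vs. few-shot prompt construction for
# CPC subclass prediction with an LLM. The candidate subclasses shown
# are a tiny illustrative subset of the CPC scheme; a real system would
# cover the full label inventory and handle multi-label outputs.

CPC_SUBCLASSES = {
    "G06N": "Computing arrangements based on specific computational models",
    "H01M": "Means for the direct conversion of chemical energy into electrical energy",
    "A61K": "Preparations for medical, dental or toilet purposes",
}

def zero_shot_prompt(abstract: str) -> str:
    """Zero-shot: task instruction plus the label inventory, no examples."""
    labels = "\n".join(f"- {code}: {desc}" for code, desc in CPC_SUBCLASSES.items())
    return (
        "Assign one or more CPC subclasses to the patent abstract below.\n"
        f"Candidate subclasses:\n{labels}\n\n"
        f"Abstract: {abstract}\n"
        "Answer with the subclass codes only."
    )

def few_shot_prompt(abstract: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend labelled (abstract, codes) demonstrations."""
    demos = "\n\n".join(f"Abstract: {a}\nCodes: {c}" for a, c in examples)
    return demos + "\n\n" + zero_shot_prompt(abstract)
```

Retrieval-augmented prompting would extend the few-shot variant by selecting the demonstrations dynamically, e.g. the nearest labelled patents to the query abstract in an embedding space, rather than using a fixed set.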