HCRE: LLM-based Hierarchical Classification for Cross-Document Relation Extraction with a Prediction-then-Verification Strategy

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges in cross-document relation extraction, where small language models lack sufficient semantic understanding and large language models suffer performance degradation due to an excessive number of relation categories. To overcome these limitations, the authors propose HCRE, a novel approach that integrates large language models with a hierarchical relation tree for the first time. HCRE employs a layer-wise classification strategy to efficiently navigate complex relation structures and introduces a post-prediction verification mechanism to mitigate error propagation across hierarchy levels. Extensive experiments on multiple benchmark datasets demonstrate that HCRE significantly outperforms existing methods, confirming its effectiveness in enhancing both accuracy and robustness in cross-document relation extraction.
📝 Abstract
Cross-document relation extraction (RE) aims to identify relations between head and tail entities located in different documents. Existing approaches typically adopt the "Small Language Model (SLM) + Classifier" paradigm. However, the limited language understanding ability of SLMs hinders further performance improvement. In this paper, we conduct a preliminary study exploring the performance of Large Language Models (LLMs) on cross-document RE. Despite their extensive parameters, our findings indicate that LLMs do not consistently surpass existing SLMs. Further analysis suggests that this underperformance is largely attributable to the challenge posed by the large number of predefined relations. To overcome this issue, we propose an LLM-based Hierarchical Classification model for cross-document RE (HCRE), which consists of two core components: 1) an LLM for relation prediction and 2) a hierarchical relation tree derived from the predefined relation set. This tree enables the LLM to perform hierarchical classification, inferring the target relation level by level. Since the number of child nodes at any level is much smaller than the entire predefined relation set, the hierarchical relation tree significantly reduces the number of relation options the LLM must consider during inference. However, hierarchical classification introduces the risk of error propagation across levels. To mitigate this, we propose a prediction-then-verification inference strategy that improves prediction reliability through multi-view verification at each level. Extensive experiments show that HCRE outperforms existing baselines, validating its effectiveness.
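The abstract's core mechanism, descending a hierarchical relation tree level by level and verifying each prediction before committing to it, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tree layout, the `choose` scoring function (standing in for an LLM prompt restricted to the current node's children), and the `verify` callback (standing in for the multi-view verification step) are all illustrative assumptions.

```python
# Hypothetical relation hierarchy: inner dicts are intermediate levels,
# lists are leaf relations from the predefined relation set.
RELATION_TREE = {
    "personal": {"family": ["spouse", "child"], "social": ["colleague"]},
    "organizational": {"employment": ["employer", "founder"]},
}


def hierarchical_classify(tree, choose, verify):
    """Infer a relation level by level (prediction-then-verification).

    choose(option) -> score   : stands in for the LLM ranking the
                                current node's children.
    verify(option) -> bool    : stands in for multi-view verification;
                                a rejected option triggers fallback to
                                the next-best candidate, which is how
                                error propagation is mitigated here.
    Returns the path of choices from the root to a leaf relation.
    """
    node = tree
    path = []
    while True:
        options = list(node)  # children only, not the full relation set
        ranked = sorted(options, key=choose, reverse=True)
        # Accept the highest-ranked option that passes verification;
        # if all fail, keep the top prediction as a last resort.
        picked = next((o for o in ranked if verify(o)), ranked[0])
        path.append(picked)
        if isinstance(node, dict):
            node = node[picked]  # descend one level
        else:
            return path  # reached a leaf relation


# Usage: scores mimic LLM confidence; verification rejects "social",
# so the classifier falls back to "family" at that level.
scores = {"personal": 0.9, "organizational": 0.2, "family": 0.3,
          "social": 0.7, "spouse": 0.1, "child": 0.4, "colleague": 0.5}
path = hierarchical_classify(
    RELATION_TREE,
    choose=lambda o: scores.get(o, 0.0),
    verify=lambda o: o != "social",
)
print(path)  # ['personal', 'family', 'child']
```

Note that at each level the model chooses among only a handful of children, which is the paper's stated motivation: the option set the LLM sees never grows with the size of the full relation inventory.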
Problem

Research questions and friction points this paper is trying to address.

Cross-document Relation Extraction
Large Language Models
Hierarchical Classification
Predefined Relations
Error Propagation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Hierarchical Classification
Cross-document Relation Extraction
Prediction-then-Verification
Relation Tree