🤖 AI Summary
This study addresses the limitations of large language models (LLMs) in supervised relation extraction on inputs with high relation density and complex graph structure. The authors systematically evaluate four prominent LLMs against a lightweight graph-based parser on six standard benchmarks, covering in-context learning, fine-tuning, and graph neural network paradigms. Experiments show that as the number of relations per sentence grows, the graph-based parser consistently and significantly outperforms the LLMs, with superior efficiency and reliability, particularly on linguistically intricate graph structures. These findings challenge the assumption that LLMs universally dominate natural language processing tasks and underscore the continued value of structured graph-based approaches for modeling complex relational patterns.
📝 Abstract
Relation extraction is a fundamental step in building knowledge graphs, among other applications. Large language models (LLMs) have been adopted as a promising tool for relation extraction, both in supervised and in-context learning settings. However, in this work we show that their performance still lags behind much smaller architectures when the linguistic graph underlying a text is highly complex. To demonstrate this, we evaluate four LLMs against a graph-based parser on six relation extraction datasets with sentence graphs of varying sizes and complexities. Our results show that the graph-based parser increasingly outperforms the LLMs as the number of relations in the input documents increases. This makes the much lighter graph-based parser a superior choice in the presence of complex linguistic graphs.