🤖 AI Summary
To address three core challenges in scientific literature information extraction—modeling long documents, understanding multimodal content, and standardizing fine-grained cross-paper information (especially under dynamically evolving data schemas)—this paper proposes SciEx, a modular, decoupled framework. SciEx explicitly separates PDF parsing, multimodal retrieval, LLM-driven extraction, and cross-document aggregation, enabling plug-and-play integration of diverse prompting strategies, foundation models, and inference mechanisms for rapid adaptation. Evaluated across three domain-specific datasets, SciEx achieves high accuracy and consistency in fine-grained information extraction. The study systematically identifies key strengths and bottlenecks of current LLM-based pipelines, offering an extensible and maintainable technical pathway for constructing scientific knowledge graphs that evolve with shifting data patterns and scholarly conventions.
📝 Abstract
Large language models (LLMs) are increasingly touted as powerful tools for automating scientific information extraction. However, existing methods and tools often struggle with the realities of scientific literature: long-context documents, multimodal content, and the need to reconcile varied, inconsistent fine-grained information across multiple publications into standardized formats. These challenges are further compounded when the desired data schema or extraction ontology changes rapidly, making existing systems difficult to re-architect or fine-tune. We present SciEx, a modular and composable framework that decouples key components: PDF parsing, multimodal retrieval, extraction, and aggregation. This design streamlines on-demand data extraction while enabling extensibility and the flexible integration of new models, prompting strategies, and reasoning mechanisms. We evaluate SciEx on datasets spanning three scientific topics, measuring its ability to extract fine-grained information accurately and consistently. Our findings provide practical insights into both the strengths and limitations of current LLM-based pipelines.
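To make the decoupled design concrete, here is a minimal sketch of what "plug-and-play" stages could look like. This is not SciEx's actual code or API; every name below (`Pipeline`, `parse`, `retrieve`, `extract`, `aggregate`, and the toy stand-ins) is a hypothetical illustration of the four-stage architecture the abstract describes, with each stage injected as a swappable callable.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Pipeline:
    """Hypothetical four-stage extraction pipeline. Each stage is an
    injected callable, so parsers, retrievers, extractor models, and
    aggregators can be replaced independently of one another."""
    parse: Callable[[str], str]                 # PDF path -> document text
    retrieve: Callable[[str, str], list[str]]   # (text, query) -> relevant passages
    extract: Callable[[list[str], dict], dict]  # (passages, schema) -> one record
    aggregate: Callable[[list[dict]], dict]     # per-paper records -> merged view

    def run(self, pdf_paths: list[str], query: str, schema: dict) -> dict:
        records = []
        for path in pdf_paths:
            text = self.parse(path)                         # stage 1: parsing
            passages = self.retrieve(text, query)           # stage 2: retrieval
            records.append(self.extract(passages, schema))  # stage 3: extraction
        return self.aggregate(records)                      # stage 4: aggregation

# Toy stand-ins for each stage, only to show the stages composing end to end;
# a real deployment would plug in a PDF parser, an embedding retriever, and
# an LLM-backed extractor here.
toy = Pipeline(
    parse=lambda path: f"{path} reports a batch size of 32.",
    retrieve=lambda text, q: [s for s in text.split(".") if q in s],
    extract=lambda passages, schema: {
        key: (passages[0].strip() if passages else None) for key in schema
    },
    aggregate=lambda records: {"n_papers": len(records), "records": records},
)
```

Because a new schema, prompt, or model only changes the callable for one stage, the surrounding pipeline code stays untouched, which is the maintainability property the abstract attributes to the decoupled design.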