🤖 AI Summary
Information extraction (IE) outputs often mismatch downstream database schemas, hindering direct integration. Method: This paper introduces TEXT2DB—a novel task requiring models to dynamically perform data infilling, row population, and column addition based on user instructions, document collections, and target database schemas. To address it, we propose OPAL, an agent framework operating via an Observe-Plan-Analyze closed loop that orchestrates database interaction, code generation, IE model invocation, and pre-execution feedback analysis for end-to-end instruction understanding, schema alignment, and structured data population. Contribution/Results: Experiments demonstrate that OPAL accurately executes complex joint IE-database tasks across diverse database schemas, significantly improving information-to-database deployment efficiency. The study further identifies critical challenges—including large-scale schema adaptation and model hallucination—highlighting open research directions for robust database-grounded IE.
📝 Abstract
The task of information extraction (IE) is to extract structured knowledge from text. However, it is often not straightforward to utilize IE output due to the mismatch between the IE ontology and the downstream application needs. We propose a new formulation of IE, TEXT2DB, that emphasizes the integration of IE output and the target database (or knowledge base). Given a user instruction, a document set, and a database, our task requires the model to update the database with values from the document set to satisfy the user instruction. This task requires understanding user instructions for what to extract and adapting to the given DB/KB schema for how to extract on the fly. To evaluate this new task, we introduce a new benchmark featuring common demands such as data infilling, row population, and column addition. In addition, we propose an LLM agent framework OPAL (Observe-Plan-Analyze LLM) which includes an Observer component that interacts with the database, a Planner component that generates a code-based plan with calls to IE models, and an Analyzer component that provides feedback on code quality before execution. Experiments show that OPAL can successfully adapt to diverse database schemas by generating different code plans and calling the required IE models. We also highlight difficult cases such as dealing with large databases with complex dependencies and extraction hallucination, which we believe deserve further investigation. Source code: https://github.com/yzjiao/Text2DB
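The Observe-Plan-Analyze loop described in the abstract can be sketched as a minimal toy pipeline. Everything below is an illustrative assumption (the function names, the SQLite backend, and the hard-coded "plan" standing in for LLM-generated code with IE model calls), not the authors' actual implementation:

```python
import sqlite3

def observe(conn):
    """Observer: inspect the database schema so the plan can adapt to it."""
    cur = conn.execute("SELECT name, sql FROM sqlite_master WHERE type='table'")
    return {name: ddl for name, ddl in cur.fetchall()}

def plan(schema, instruction):
    """Planner: produce a code-based plan (here, parameterized SQL plus rows).
    In OPAL this step is LLM-generated and would invoke IE models over the
    document set; the returned rows here are toy placeholders."""
    return ("INSERT INTO papers (title, venue) VALUES (?, ?)",
            [("Example Paper", "Example Venue")])

def analyze(plan_sql, schema):
    """Analyzer: pre-execution feedback, e.g. check the target table exists."""
    target_table = plan_sql.split("INTO ")[1].split(" ")[0]
    return target_table in schema

def run(conn, instruction):
    schema = observe(conn)                 # Observe
    sql, rows = plan(schema, instruction)  # Plan
    if analyze(sql, schema):               # Analyze before executing
        conn.executemany(sql, rows)
        conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (title TEXT, venue TEXT)")
run(conn, "Populate the papers table from the documents.")
print(conn.execute("SELECT * FROM papers").fetchall())
```

The key design point the abstract emphasizes is the pre-execution Analyzer: plans that fail validation never touch the database, which matters when the plan is generated code rather than trusted SQL.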