🤖 AI Summary
This work addresses the limitations of existing data discovery systems, which struggle to handle natural language queries involving multi-table joins efficiently, rely on resource-intensive offline preprocessing, and fall short of cell-level retrieval precision. To overcome these challenges, the authors propose a lightweight, training-free, entity-aware mechanism that uses large language models (LLMs) to parse queries, extract column names and value mentions, and align them with table content through compact embedding-based header matching and direct table scanning. This approach enables fine-grained entity alignment and supports cell-level retrieval in both single-table and joined-table scenarios, substantially reducing computational overhead and LLM invocation costs. Evaluated on a newly constructed multi-table data discovery benchmark, the proposed system outperforms existing methods in accuracy while significantly improving efficiency.
📝 Abstract
Tabular data constitute a dominant form of information in modern data lakes and repositories, yet discovering the relevant tables to answer user questions remains challenging. Existing data discovery systems assume that each question can be answered by a single table and often rely on resource-intensive offline preprocessing, such as model training or large-scale content indexing. In practice, however, many questions require information spread across multiple tables -- either independently or through joins -- and users often seek specific cell values rather than entire tables. In this paper, we present Octopus, a lightweight, entity-aware, and training-free system for multi-table data discovery and cell-level value retrieval. Instead of embedding entire questions, Octopus identifies fine-grained entities (column mentions and value mentions) from natural-language queries using an LLM parser. It then matches these entities to table headers through a compact embedding index and scans table contents directly for value occurrences, eliminating the need for heavy content indexing or costly offline stages. The resulting fine-grained alignment not only improves table retrieval accuracy but also facilitates efficient downstream NL2SQL execution by reducing token usage and redundant LLM calls. To evaluate Octopus, we introduce a new benchmark covering both table- and cell-level discovery under multi-table settings, including five datasets for independent discovery and two for join-based discovery. Experimental results show that Octopus consistently outperforms existing systems while achieving substantially lower computational and token costs. Code is available at https://github.com/wenzhilics/octopus.
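The pipeline the abstract describes — an LLM parser extracting column and value mentions, header matching via a compact index, and direct scanning of table contents for cell-level hits — can be illustrated with a minimal sketch. This is not the authors' implementation: the LLM parse is stubbed out with a fixed result, and the embedding index is replaced with a simple bag-of-words cosine similarity over header names. All names (`match_headers`, `scan_values`, the example tables) are hypothetical.

```python
# Hypothetical sketch of Octopus-style entity-aware matching (not the
# authors' code). A fixed dict stands in for the LLM parser's output,
# and bag-of-words cosine similarity stands in for the embedding index.
from collections import Counter
import math

def cosine(a, b):
    """Toy similarity over whitespace tokens, in place of embeddings."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    num = sum(ca[t] * cb[t] for t in ca)
    den = (math.sqrt(sum(v * v for v in ca.values()))
           * math.sqrt(sum(v * v for v in cb.values())))
    return num / den if den else 0.0

def match_headers(column_mentions, tables, threshold=0.5):
    """Align each parsed column mention with its best-matching header."""
    hits = {}
    for mention in column_mentions:
        best = max(
            ((tname, h, cosine(mention, h))
             for tname, t in tables.items() for h in t["headers"]),
            key=lambda x: x[2],
        )
        if best[2] >= threshold:
            hits[mention] = (best[0], best[1])
    return hits

def scan_values(value_mentions, tables):
    """Scan table contents directly for value occurrences (cell level),
    avoiding any offline content index."""
    found = {}
    for v in value_mentions:
        for tname, t in tables.items():
            for row in t["rows"]:
                for col, cell in zip(t["headers"], row):
                    if str(cell).lower() == v.lower():
                        found.setdefault(v, []).append((tname, col))
    return found

# A parse an LLM might return for "What is the population of Berlin?"
parsed = {"columns": ["population"], "values": ["Berlin"]}
tables = {
    "cities": {
        "headers": ["city name", "population", "country"],
        "rows": [["Berlin", 3_600_000, "Germany"],
                 ["Paris", 2_100_000, "France"]],
    }
}

print(match_headers(parsed["columns"], tables))
print(scan_values(parsed["values"], tables))
```

The fine-grained alignment produced here (mention → table/header, value → table/cell location) is what lets the system narrow downstream NL2SQL prompts to the relevant tables and columns instead of embedding whole questions.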