Retrieve-and-Verify: A Table Context Selection Framework for Accurate Column Annotations

📅 2025-08-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Incomplete table metadata, such as column semantic types and attributes, severely limits the performance of Column Type Annotation (CTA) and Column Property Annotation (CPA), especially on wide tables, where numerous irrelevant or misleading columns dilute the useful context. To address this, we propose a retrieve-and-verify collaborative framework for context selection. First, we employ unsupervised retrieval with semantic role embeddings to identify highly relevant and diverse candidate columns. Second, we introduce a novel column-context verification mechanism that casts context quality assessment as a binary classification task, supported by an efficient top-down inference strategy. Our method significantly improves CTA/CPA accuracy on wide tables and consistently outperforms state-of-the-art approaches across six benchmark datasets, with particularly pronounced gains on wide-table instances.
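To make the verification idea concrete, here is a minimal toy illustration (not the paper's model) of casting column-context verification as binary classification: a (target, context) pair is encoded into features and a linear score is thresholded. The feature choices, weights, and function names below are illustrative assumptions; the actual REVEAL+ verifier is a learned model over language-model encodings.

```python
from math import exp

def pair_features(target_tokens, context_tokens):
    """Toy features for a (target, context) pair: token overlap and context size."""
    overlap = len(set(target_tokens) & set(context_tokens))
    return [overlap, len(context_tokens)]

def verify_context(target_tokens, context_tokens, w=(1.5, -0.2), b=-1.0):
    """Return 1 if the context is judged helpful for annotating the target, else 0.

    Weights w and bias b are hand-set here purely for illustration; in a
    real system they would be learned from labeled context-quality pairs.
    """
    f = pair_features(target_tokens, context_tokens)
    score = sum(wi * fi for wi, fi in zip(w, f)) + b
    prob = 1.0 / (1.0 + exp(-score))  # logistic squashing to a probability
    return int(prob >= 0.5)
```

A related context column such as `country` next to a `city` target would pass, while an overlap-free context of unrelated columns would be rejected.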

📝 Abstract
Tables are a prevalent format for structured data, yet their metadata, such as semantic types and column relationships, is often incomplete or ambiguous. Column annotation tasks, including Column Type Annotation (CTA) and Column Property Annotation (CPA), address this by leveraging table context and are critical for data management. Existing methods typically serialize all columns of a table into pretrained language models to incorporate context, but this coarse-grained approach often degrades performance on wide tables with many irrelevant or misleading columns. To address this, we propose a novel retrieve-and-verify context selection framework for accurate column annotation, introducing two methods: REVEAL and REVEAL+. In REVEAL, we design an efficient unsupervised retrieval technique that selects compact, informative column contexts by balancing semantic relevance and diversity, and we develop context-aware encoding techniques with role embeddings and target-context pair training to effectively differentiate target and context columns. To further improve performance, REVEAL+ adds a verification model that refines the selected context by directly estimating its quality for a specific annotation task; we formulate this column context verification problem as a classification task and train the verification model accordingly. Moreover, REVEAL+ employs a top-down verification inference technique that reduces the search space for high-quality context subsets from exponential to quadratic, ensuring efficiency. Extensive experiments on six benchmark datasets demonstrate that our methods consistently outperform state-of-the-art baselines.
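The "relevance plus diversity" retrieval described above can be sketched as a greedy, MMR-style selection over column embeddings: each candidate is scored by its similarity to the target column minus a redundancy penalty against columns already chosen. This is a hypothetical stand-in under assumed toy embeddings; REVEAL's actual retrieval and embedding details may differ.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def select_context(target_emb, column_embs, k, lam=0.7):
    """Greedily pick k context columns: high relevance to the target,
    penalized by similarity to columns already selected (lam trades off
    relevance vs. diversity)."""
    selected = []
    remaining = list(range(len(column_embs)))
    while remaining and len(selected) < k:
        best, best_score = None, float("-inf")
        for i in remaining:
            rel = cosine(target_emb, column_embs[i])
            red = max((cosine(column_embs[i], column_embs[j])
                       for j in selected), default=0.0)
            score = lam * rel - (1 - lam) * red
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a strong diversity weight, a near-duplicate of an already-selected column loses to a less similar but non-redundant one, which is the behavior the abstract's "compact, informative" context calls for.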
Problem

Research questions and friction points this paper is trying to address.

Selecting relevant table context for accurate column annotations
Addressing performance degradation in wide tables with irrelevant columns
Verifying and refining context quality for specific annotation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised retrieval for context selection
Verification model refines context quality
Top-down inference reduces search space
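The top-down inference point can be illustrated with a simple greedy pruning loop: start from the full candidate context and repeatedly try dropping one column, keeping a removal whenever the verifier scores the smaller subset higher. This costs O(n^2) verifier calls instead of checking all 2^n subsets. The loop below is a hedged sketch (with a stand-in scoring function and distinct column names assumed); REVEAL+'s actual inference strategy may differ.

```python
def top_down_verify(columns, verify):
    """Greedy top-down pruning: verify(subset) -> quality score, higher is
    better. Assumes column names are distinct."""
    current = list(columns)
    best_score = verify(tuple(current))
    improved = True
    while improved and len(current) > 1:
        improved = False
        for col in list(current):
            trial = tuple(c for c in current if c != col)
            score = verify(trial)
            if score > best_score:  # dropping this column helps
                current = list(trial)
                best_score = score
                improved = True
                break  # restart the scan from the pruned set
    return current, best_score
```

For example, with a toy verifier that rewards useful columns and penalizes a noisy one, the loop prunes the noisy column and stops once no single removal improves the score.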
Zhihao Ding
The Hong Kong Polytechnic University
Yongkang Sun
The Hong Kong Polytechnic University
Jieming Shi
The Hong Kong Polytechnic University
Data Management · Data Mining · Big Data Analytics