ModelTables: A Corpus of Tables about Models

📅 2025-12-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the semantic retrieval challenge for AI model performance and configuration tables in Model Lakes. To this end, the authors introduce ModelTables, a large-scale structured knowledge benchmark comprising over 60K models and 90K tables curated from Hugging Face, GitHub, and academic papers, with explicit modeling of semantic relationships between tables (e.g., cross-table citation, model inheritance, and shared training data). They also define and release a protocol for constructing ground-truth table-relevance annotations from these signals, and evaluate a union-based semantic retrieval paradigm against conventional keyword and metadata hybrid approaches. Experimental results show that table-level dense embedding retrieval achieves a P@1 of 66.5%, significantly outperforming union-based (54.8%) and metadata-hybrid baselines (54.1%), thereby revealing substantial room for improvement in structured model knowledge discovery.

📝 Abstract
We present ModelTables, a benchmark of tables in Model Lakes that captures the structured semantics of performance and configuration tables often overlooked by text-only retrieval. The corpus is built from Hugging Face model cards, GitHub READMEs, and referenced papers, linking each table to its surrounding model and publication context. Compared with open data lake tables, model tables are smaller yet exhibit denser inter-table relationships, reflecting tightly coupled model and benchmark evolution. The current release covers over 60K models and 90K tables. To evaluate model and table relatedness, we construct a multi-source ground truth using three complementary signals: (1) paper citation links, (2) explicit model card links and inheritance, and (3) shared training datasets. We present one extensive empirical use case for the benchmark: table search. We compare canonical Data Lake search operators (unionable, joinable, keyword) and Information Retrieval baselines (dense, sparse, hybrid retrieval) on this benchmark. Union-based semantic table retrieval attains 54.8% P@1 overall (54.6% on citation, 31.3% on inheritance, 30.6% on shared dataset signals); table-based dense retrieval reaches 66.5% P@1, and metadata hybrid retrieval achieves 54.1%. This evaluation indicates clear room for developing better table search methods. By releasing ModelTables and its creation protocol, we provide the first large-scale benchmark of structured data describing AI models. Our use case of table discovery in Model Lakes provides intuition and evidence for developing more accurate semantic retrieval, structured comparison, and principled organization of structured model knowledge. Source code, data, and other artifacts have been made available at https://github.com/RJMillerLab/ModelTables.
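The P@1 numbers quoted in the abstract measure whether the top-ranked table for each query is relevant. A minimal sketch of that metric is below; the query and table identifiers are invented for illustration and are not drawn from the ModelTables release.

```python
# Minimal sketch of P@1 (precision at rank 1) for table retrieval.
# Data below is a toy example, not from the ModelTables benchmark.

def precision_at_1(ranked_results, relevant):
    """Fraction of queries whose top-ranked table is relevant.

    ranked_results: dict query_id -> list of table_ids, best first
    relevant:       dict query_id -> set of relevant table_ids
    """
    if not ranked_results:
        return 0.0
    hits = 0
    for qid, ranking in ranked_results.items():
        if ranking and ranking[0] in relevant.get(qid, set()):
            hits += 1
    return hits / len(ranked_results)

# Toy run: 2 of 3 queries place a relevant table at rank 1.
results = {"q1": ["t3", "t1"], "q2": ["t7"], "q3": ["t2", "t9"]}
truth = {"q1": {"t3"}, "q2": {"t1"}, "q3": {"t2"}}
```

In the paper's setting, the same metric is simply averaged over queries drawn from each ground-truth signal (citation, inheritance, shared dataset) to produce the per-signal breakdown.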
Problem

Research questions and friction points this paper is trying to address.

Creating a benchmark for structured model performance and configuration tables
Evaluating table search methods using multi-source ground truth signals
Providing a large-scale dataset for AI model structured data analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Corpus built from model cards, READMEs, and papers
Multi-source ground truth using citation, inheritance, dataset signals
Evaluation of union-based, dense, and hybrid retrieval methods
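The multi-source ground truth listed above can be pictured as a union of per-signal relevance sets. The sketch below assumes each signal is already extracted as a mapping from a model to its related models; the model identifiers and signal contents are hypothetical.

```python
# Hedged sketch: merging citation, inheritance, and shared-dataset
# signals into one ground-truth relevance mapping. All ids are invented.

def build_ground_truth(citation, inheritance, shared_dataset):
    """Union per-model relevance sets from the three signal dicts."""
    truth = {}
    for signal in (citation, inheritance, shared_dataset):
        for model_id, related in signal.items():
            truth.setdefault(model_id, set()).update(related)
    return truth

# Toy signals: m1 cites m2 and inherits from m3; m2 inherits from m1
# and shares a training dataset with m4.
citation = {"m1": {"m2"}}
inheritance = {"m1": {"m3"}, "m2": {"m1"}}
shared_dataset = {"m2": {"m4"}}
```

Keeping the signals separate until this final union also makes the per-signal evaluation reported in the abstract (citation vs. inheritance vs. shared dataset) straightforward.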