🤖 AI Summary
This paper addresses semantic retrieval of AI model performance and configuration tables in Model Lakes. To this end, we introduce ModelTables, the first large-scale benchmark of structured model knowledge, comprising over 60K models and 90K tables curated from Hugging Face model cards, GitHub READMEs, and referenced academic papers, with explicit modeling of semantic relationships (e.g., cross-table citation, model inheritance, and shared training data). We define and release the first methodology for constructing ground-truth table-relevance annotations from these signals. We further evaluate a union-based semantic retrieval paradigm against conventional keyword- and metadata-based hybrid approaches. Experimental results show that table-level dense embedding retrieval achieves a P@1 of 66.5%, substantially outperforming union-based search (54.8%) and metadata hybrid retrieval (54.1%), revealing considerable room for improvement in structured model knowledge discovery.
📝 Abstract
We present ModelTables, a benchmark of tables in Model Lakes that captures the structured semantics of performance and configuration tables often overlooked by text-only retrieval. The corpus is built from Hugging Face model cards, GitHub READMEs, and referenced papers, linking each table to its surrounding model and publication context. Compared with open data lake tables, model tables are smaller yet exhibit denser inter-table relationships, reflecting tightly coupled model and benchmark evolution. The current release covers over 60K models and 90K tables. To evaluate model and table relatedness, we construct a multi-source ground truth using three complementary signals: (1) paper citation links, (2) explicit model card links and inheritance, and (3) shared training datasets. We present one extensive empirical use case for the benchmark: table search. We compare canonical data lake search operators (unionable, joinable, keyword) and information retrieval baselines (dense, sparse, hybrid retrieval) on this benchmark. Union-based semantic table retrieval attains 54.8% P@1 overall (54.6% on citation, 31.3% on inheritance, and 30.6% on shared dataset signals); table-based dense retrieval reaches 66.5% P@1, and metadata hybrid retrieval achieves 54.1%. This evaluation indicates clear room for developing better table search methods. By releasing ModelTables and its creation protocol, we provide the first large-scale benchmark of structured data describing AI models. Our use case of table discovery in Model Lakes provides intuition and evidence for developing more accurate semantic retrieval, structured comparison, and principled organization of structured model knowledge. Source code, data, and other artifacts have been made available at https://github.com/RJMillerLab/ModelTables.
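The headline numbers above are all P@1 scores. As a minimal illustration (not the authors' evaluation code), the sketch below shows how P@1 is typically computed: for each query table, check whether the top-ranked retrieved table appears in that query's ground-truth relevant set (here derived from citation, inheritance, or shared-dataset signals); the function name and toy table ids are hypothetical.

```python
# Illustrative sketch of P@1 computation (hypothetical helper, not from the paper).

def precision_at_1(ranked_results, ground_truth):
    """ranked_results: query id -> ranked list of retrieved table ids.
    ground_truth:   query id -> set of relevant table ids.
    Returns the fraction of queries whose top-1 result is relevant."""
    hits = 0
    for query_id, results in ranked_results.items():
        # A query counts as a hit only if its first-ranked table is relevant.
        if results and results[0] in ground_truth.get(query_id, set()):
            hits += 1
    return hits / len(ranked_results)

# Toy example with made-up table ids:
ranked = {"q1": ["t3", "t7"], "q2": ["t9", "t2"]}
truth = {"q1": {"t3"}, "q2": {"t2"}}
print(precision_at_1(ranked, truth))  # → 0.5 (q1 is a hit, q2 is not)
```

Per-signal scores (e.g., 54.6% on citation) follow the same recipe, restricting the ground-truth sets to pairs supported by that single signal.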