Magneto: Combining Small and Large Language Models for Schema Matching

📅 2024-12-11
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
Schema matching across heterogeneous data sources faces a dual bottleneck: small language models (SLMs) depend on scarce labeled training data, while large language models (LLMs) incur high computational costs and are constrained by context-window limits. Method: We propose a two-phase SLM–LLM collaborative pipeline in which SLMs efficiently retrieve candidate matches that an LLM then reranks using carefully designed prompts; the SLMs are fine-tuned with a generative self-supervised strategy that uses LLMs to synthesize syntactically diverse training data. Contribution/Results: This work introduces the first SLM–LLM collaboration paradigm for schema matching; designs a novel generative self-supervised fine-tuning strategy for SLMs that eliminates dependence on manual annotations; and establishes BioSchema, the first realistic, biomedical-domain-specific schema matching benchmark. Evaluated across multiple domains, the method achieves state-of-the-art accuracy while reducing inference cost by 42%, significantly outperforming both pure-SLM and pure-LLM baselines.

📝 Abstract
Recent advances in language models opened new opportunities to address complex schema matching tasks. Schema matching approaches have been proposed that demonstrate the usefulness of language models, but they have also uncovered important limitations: Small language models (SLMs) require training data (which can be both expensive and challenging to obtain), and large language models (LLMs) often incur high computational costs and must deal with constraints imposed by context windows. We present Magneto, a cost-effective and accurate solution for schema matching that combines the advantages of SLMs and LLMs to address their limitations. By structuring the schema matching pipeline in two phases, retrieval and reranking, Magneto can use computationally efficient SLM-based strategies to derive candidate matches which can then be reranked by LLMs, thus making it possible to reduce runtime without compromising matching accuracy. We propose a self-supervised approach to fine-tune SLMs which uses LLMs to generate syntactically diverse training data, and prompting strategies that are effective for reranking. We also introduce a new benchmark, developed in collaboration with domain experts, which includes real biomedical datasets and presents new challenges to schema matching methods. Through a detailed experimental evaluation, using both our new and existing benchmarks, we show that Magneto is scalable and attains high accuracy for datasets from different domains.
Problem

Research questions and friction points this paper is trying to address.

Combining SLMs and LLMs for efficient schema matching
Reducing computational costs without losing accuracy
Generating synthetic training data for self-supervised SLM fine-tuning
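To make the third point concrete, here is a minimal, self-contained sketch of producing syntactically diverse variants of a column name to serve as self-supervised positive pairs. Magneto uses an LLM to generate such variants; the rule-based transforms below (and the `variants` helper name) are stand-in assumptions so the example runs without a model call, not the paper's implementation.

```python
# Sketch: derive syntactic variants of a column name to use as
# positive training pairs for self-supervised SLM fine-tuning.
# In Magneto an LLM generates these; rule-based transforms stand in here.

def variants(column: str) -> list:
    base = column.lower().replace(" ", "_")
    tokens = base.split("_")
    out = {
        base,                                             # snake_case
        "".join(tokens),                                  # squashed
        " ".join(tokens).title(),                         # Title Case words
        "".join(t[:1].upper() + t[1:] for t in tokens),   # CamelCase
        "_".join(t[:4] for t in tokens),                  # crude abbreviation
    }
    out.discard(column)  # keep only variants that differ from the input
    return sorted(out)

# Each (original, variant) pair can be treated as a positive example.
pairs = [("patient_birth_date", v) for v in variants("patient_birth_date")]
```

A real pipeline would feed such pairs into a contrastive fine-tuning objective for the SLM encoder.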
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines SLMs and LLMs for schema matching
Uses self-supervised SLM fine-tuning with LLM-generated data
Two-phase pipeline: SLM retrieval and LLM reranking
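The two-phase idea above can be sketched in a few lines. This is an illustrative toy, not Magneto's implementation: trigram overlap stands in for SLM embedding similarity, and `llm_rerank` is a placeholder for an actual LLM prompt; all function names here are assumptions for the sketch.

```python
# Toy retrieve-then-rerank pipeline in the spirit of Magneto's two phases.

def trigrams(s: str) -> set:
    s = f"  {s.lower()}  "
    return {s[i:i + 3] for i in range(len(s) - 2)}

def slm_score(a: str, b: str) -> float:
    """Stand-in for SLM embedding similarity (Jaccard over trigrams)."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

def retrieve_candidates(source_col, target_cols, k=3):
    """Phase 1: cheap SLM-based retrieval of top-k candidate matches."""
    ranked = sorted(target_cols, key=lambda t: slm_score(source_col, t), reverse=True)
    return ranked[:k]

def llm_rerank(source_col, candidates):
    """Phase 2: placeholder; a real system would prompt an LLM with the
    source column and the candidate list and parse its ranking."""
    return candidates  # identity rerank in this toy sketch

source = "patient_birth_date"
targets = ["date_of_birth", "tumor_size", "birth_dt", "country"]
candidates = retrieve_candidates(source, targets, k=2)
best = llm_rerank(source, candidates)[0]
```

Only the small retrieved candidate set reaches the LLM, which is how the design keeps prompts short and inference cheap.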
Yurong Liu, New York University
Eduardo H. M. Pena, New York University
Aécio S. R. Santos, New York University
Eden Wu, New York University
Juliana Freire, New York University
data management · visualization · provenance · reproducibility · big data