XAutoLM: Efficient Fine-Tuning of Language Models via Meta-Learning and AutoML

📅 2025-07-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing automated frameworks struggle to jointly optimize model selection and hyperparameter optimization (HPO) for language model (LM) fine-tuning, incurring high computational overhead and low efficiency. This paper introduces the first deep integration of meta-learning and AutoML for resource-efficient LM fine-tuning: it extracts task-level and system-level meta-features from stored fine-tuning experience to bias the search toward promising configurations and prune invalid ones, unifying end-to-end optimization across both discriminative and generative LMs. Evaluated on six benchmark tasks, the method surpasses zero-shot optimizers' peak F1 on five of the six, cuts mean evaluation time by up to 4.5×, reduces error ratios by up to 7×, and uncovers up to 50% more high-performance, low-cost fine-tuning pipelines above the zero-shot Pareto front.
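The experience-reuse idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the experience store, the meta-feature set, and all function names are hypothetical. The sketch extracts a task/system meta-feature vector, ranks stored runs by similarity to the new task, prunes configurations that previously failed on similar tasks (e.g. out-of-memory dead ends), and warm-starts the search from configurations that succeeded.

```python
import math
import random

# Hypothetical experience store of past fine-tuning runs.
# Each record: (meta_features, config, peak_f1, succeeded)
EXPERIENCE_STORE = [
    ({"n_examples": 10_000, "avg_len": 48, "n_labels": 2, "gpu_mem_gb": 16},
     {"model": "roberta-base", "lr": 2e-5, "batch_size": 32}, 0.91, True),
    ({"n_examples": 8_000, "avg_len": 52, "n_labels": 2, "gpu_mem_gb": 16},
     {"model": "gpt2-large", "lr": 5e-4, "batch_size": 64}, 0.0, False),  # OOM dead end
]

def meta_features(task):
    """Task- and system-level meta-features as a flat dict (illustrative subset)."""
    return {"n_examples": task["n_examples"], "avg_len": task["avg_len"],
            "n_labels": task["n_labels"], "gpu_mem_gb": task["gpu_mem_gb"]}

def similarity(a, b):
    """Negative Euclidean distance over log-scaled features (higher = more similar)."""
    return -math.sqrt(sum((math.log1p(a[k]) - math.log1p(b[k])) ** 2 for k in a))

def biased_sample(task, search_space, k=1):
    """Warm-start sampling: rank stored configs by task similarity, drop known failures."""
    feats = meta_features(task)
    ranked = sorted(EXPERIENCE_STORE, key=lambda r: similarity(feats, r[0]), reverse=True)
    failures = [r[1] for r in ranked if not r[3]]
    # Prefer configurations that succeeded on similar tasks; otherwise sample randomly.
    candidates = [r[1] for r in ranked if r[3]] or [random.choice(search_space)]
    candidates = [c for c in candidates if c not in failures]
    return candidates[:k]

new_task = {"n_examples": 9_500, "avg_len": 50, "n_labels": 2, "gpu_mem_gb": 16}
space = [{"model": "roberta-base", "lr": 2e-5, "batch_size": 32}]
print(biased_sample(new_task, space))
```

In this toy run the sampler proposes the `roberta-base` configuration that succeeded on the most similar stored task and discards the failed `gpt2-large` run, mirroring the paper's "bias toward fruitful configurations, away from costly dead ends" at a very small scale.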

📝 Abstract
Experts in machine learning leverage domain knowledge to navigate decisions in model selection, hyperparameter optimisation, and resource allocation. This is particularly critical for fine-tuning language models (LMs), where repeated trials incur substantial computational overhead and environmental impact. However, no existing automated framework simultaneously tackles the entire model selection and HPO task for resource-efficient LM fine-tuning. We introduce XAutoLM, a meta-learning-augmented AutoML framework that reuses past experiences to optimise discriminative and generative LM fine-tuning pipelines efficiently. XAutoLM learns from stored successes and failures by extracting task- and system-level meta-features to bias its sampling toward fruitful configurations and away from costly dead ends. On four text classification and two question-answering benchmarks, XAutoLM surpasses zero-shot optimisers' peak F1 on five of six tasks, cuts mean evaluation time by up to 4.5x, reduces error ratios by up to sevenfold, and uncovers up to 50% more pipelines above the zero-shot Pareto front. In contrast, simpler memory-based baselines suffer negative transfer. We release XAutoLM and our experience store to catalyse resource-efficient, Green AI fine-tuning in the NLP community.
Problem

Research questions and friction points this paper is trying to address.

Automates model selection and hyperparameter optimization for efficient LM fine-tuning
Reduces computational overhead and environmental impact of repeated trials
Enhances performance and speed in text classification and question-answering tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learning-augmented AutoML for LM fine-tuning
Reuses past experiences to optimize configurations
Extracts meta-features to avoid costly dead ends
Ernesto L. Estevanell-Valladares
University of Alicante, University of Havana
Suilan Estevez-Velarde
University of Havana
Yoan Gutiérrez
University of Alicante
Andrés Montoyo
University of Alicante
Ruslan Mitkov
Lancaster University
Natural Language Processing · Computational Linguistics · Deep Learning