🤖 AI Summary
In fine-tuning large language models (LLMs), manually or heuristically determined task mixture ratios often fail to simultaneously ensure representativeness and diversity across tasks.
Method: We propose TASKPGM, the first framework to incorporate inter-task predictive-distribution mutual information into mixture optimization. It models task relationships via a Markov random field, quantifies task similarity using Jensen–Shannon divergence and pointwise mutual information, and formulates a simplex-constrained energy function for continuous, interpretable, and provably convergent mixture-ratio optimization, supported by weak submodularity guarantees and budget scalability.
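To make the similarity construction concrete, here is a minimal sketch (not the paper's code) of building a task-similarity matrix from the predictive distributions of single-task finetuned models via Jensen–Shannon divergence; the shared probe-set setup and the `task_predictions` structure are illustrative assumptions.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (in nats) between two categorical distributions."""
    p = np.clip(p, eps, None)
    p = p / p.sum()
    q = np.clip(q, eps, None)
    q = q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(q * np.log(q / m))

def task_similarity_matrix(task_predictions):
    """task_predictions: one (num_probe_examples, vocab_size) array per task, holding each
    single-task finetuned model's predictive distributions on a shared probe set (assumed setup).
    Returns a (num_tasks, num_tasks) similarity matrix with entries in [0, 1]."""
    num_tasks = len(task_predictions)
    S = np.zeros((num_tasks, num_tasks))
    for i in range(num_tasks):
        for j in range(num_tasks):
            # Average JS divergence over probe examples, then map to a similarity;
            # JS divergence in nats is bounded above by log 2.
            d = np.mean([js_divergence(pi, pj)
                         for pi, pj in zip(task_predictions[i], task_predictions[j])])
            S[i, j] = 1.0 - d / np.log(2)
    return S
```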
Results: Experiments on Llama 2 and Mistral demonstrate significant and consistent improvements on multi-task benchmarks (e.g., MMLU, BIG-Bench), with gains robust across model architectures. TASKPGM further uncovers interpretable task influence pathways and principled mixture composition patterns.
📝 Abstract
The performance of finetuned large language models (LLMs) hinges critically on the composition of the training mixture. However, selecting an optimal blend of task datasets remains a largely manual, heuristic-driven process, with practitioners often relying on uniform or size-based sampling strategies. We introduce TASKPGM, a principled and scalable framework for mixture optimization that selects continuous task proportions by minimizing an energy function over a Markov Random Field (MRF). Task relationships are modeled using behavioral divergences such as Jensen–Shannon Divergence and Pointwise Mutual Information, computed from the predictive distributions of single-task finetuned models. Our method yields a closed-form solution under simplex constraints and provably balances representativeness and diversity among tasks. We provide theoretical guarantees, including weak submodularity for budgeted variants, and demonstrate consistent empirical improvements on Llama 2 and Mistral across evaluation suites such as MMLU and BIG-Bench. Beyond performance, TASKPGM offers interpretable insights into task influence and mixture composition, making it a powerful tool for efficient and robust LLM finetuning.
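As a rough illustration of the simplex-constrained optimization described above, the sketch below minimizes an assumed quadratic energy, a representativeness term minus a diversity penalty over the task-similarity matrix, with projected gradient descent; the paper's actual energy function and its closed-form solution are not reproduced here, and `lam`, `steps`, and `lr` are hypothetical knobs.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex {p >= 0, sum(p) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def optimize_mixture(S, lam=1.0, steps=500, lr=0.05):
    """S: (num_tasks, num_tasks) symmetric task-similarity matrix.
    Assumed energy: E(p) = -rep^T p + lam * p^T S p, where rep_i is task i's average
    similarity to all tasks (representativeness) and the quadratic term penalizes
    redundant, highly similar tasks. Returns mixture weights p on the simplex."""
    rep = S.mean(axis=1)
    p = np.full(S.shape[0], 1.0 / S.shape[0])  # start from the uniform mixture
    for _ in range(steps):
        grad = -rep + 2.0 * lam * S @ p
        p = project_to_simplex(p - lr * grad)
    return p

# Example usage with the similarity matrix from the previous sketch:
# p = optimize_mixture(task_similarity_matrix(task_predictions))
```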