Multi-task Code LLMs: Data Mix or Model Merge?

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of efficiently developing compact, multi-task code large language models that jointly perform code generation and code summarization under resource constraints. The authors systematically compare two strategies, data mixing and model merging, through multi-task fine-tuning of Qwen Coder and DeepSeek Coder at both the 2B and 7B scales. Their findings reveal that model merging (e.g., weight interpolation) outperforms single-task fine-tuning for 7B models, achieving a HumanEval Pass@1 of 92.7% versus 90.9%, whereas data mixing is more effective for 2B models. Additionally, they propose a weight analysis method to uncover how different tasks influence model parameters. The merged models retain full summarization capability while preserving 96% of their code generation performance.
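The weight-interpolation merging mentioned above can be sketched in a few lines: take two fine-tuned checkpoints of the same architecture and average each parameter tensor. A minimal sketch; the checkpoint names, the plain-dict representation, and the default 50/50 interpolation weight are illustrative assumptions, not the paper's exact configuration:

```python
def interpolate_weights(gen_state, sum_state, alpha=0.5):
    """Merge two fine-tuned checkpoints of the same architecture by
    linearly interpolating each shared parameter tensor:
        merged = alpha * gen + (1 - alpha) * sum
    Here states are plain name -> tensor (or float) mappings; alpha=0.5
    is an assumed default, not the paper's chosen coefficient.
    """
    assert gen_state.keys() == sum_state.keys(), "checkpoints must share architecture"
    return {
        name: alpha * gen_state[name] + (1.0 - alpha) * sum_state[name]
        for name in gen_state
    }
```

With real models the same loop would run over `state_dict()` tensors; the arithmetic is identical.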

📝 Abstract
Recent research advocates deploying smaller, specialized code LLMs in agentic frameworks alongside frontier models, sparking interest in efficient strategies for multi-task learning that balance performance, resource constraints, and costs. We compare two approaches for creating small, multi-task code LLMs: data mixing versus model merging. We conduct extensive experiments across two model families (Qwen Coder and DeepSeek Coder) at two scales (2B and 7B parameters), fine-tuning them for code generation and code summarization tasks. Our evaluation on HumanEval, MBPP, and CodeXGlue benchmarks reveals that model merging achieves the best overall performance at the larger scale across model families, retaining 96% of specialized model performance on code generation tasks while maintaining summarization capabilities. Notably, merged models can even surpass individually fine-tuned models, with our best configuration of the Qwen Coder 2.5 7B model achieving 92.7% Pass@1 on HumanEval compared to 90.9% for its task-specific fine-tuned equivalent. At the smaller scale, we instead find data mixing to be the preferred strategy. We further introduce a weight analysis technique to understand how different tasks affect model parameters and the implications for merging strategies. The results suggest that careful merging and mixing strategies can effectively combine task-specific capabilities without significant performance degradation, making them ideal for resource-constrained deployment scenarios.
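The HumanEval Pass@1 figures quoted above are conventionally computed with the unbiased pass@k estimator of Chen et al. (2021). Assuming the paper follows that convention (it is not spelled out here), the estimator is a one-liner:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).
    n = samples generated per problem, c = samples that pass all tests.
    Returns the estimated probability that at least one of k drawn
    samples passes; averaged over problems, this gives e.g. Pass@1."""
    if n - c < k:
        # Not enough failing samples to fill a draw of size k: certain pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For Pass@1 with one sample per problem this reduces to the pass rate itself.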
Problem

Research questions and friction points this paper is trying to address.

multi-task learning
code LLMs
model merging
data mixing
resource-constrained deployment
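Of the strategies listed above, data mixing is the simpler to illustrate: combine the two tasks' training examples at some ratio and shuffle before fine-tuning. A minimal sketch; the sampling-based recipe, the `mix_ratio` knob, and the fixed seed are illustrative assumptions, since the paper's actual mixing recipe is not given here:

```python
import random

def mix_datasets(gen_examples, sum_examples, mix_ratio=0.5, seed=0):
    """Build a multi-task training set by drawing (with replacement) a
    mix_ratio fraction from the code-generation pool and the remainder
    from the summarization pool, then shuffling.
    mix_ratio is an assumed knob, not the paper's reported value."""
    rng = random.Random(seed)
    n = len(gen_examples) + len(sum_examples)
    n_gen = round(n * mix_ratio)
    mixed = rng.choices(gen_examples, k=n_gen) + rng.choices(sum_examples, k=n - n_gen)
    rng.shuffle(mixed)
    return mixed
```

Shuffling matters: interleaving the tasks avoids the catastrophic forgetting that sequential single-task passes can cause.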
Innovation

Methods, ideas, or system contributions that make the work stand out.

model merging
data mixing
multi-task code LLMs
weight analysis
code generation
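The weight analysis contribution listed above can be illustrated generically: compare a fine-tuned checkpoint against its base model and rank tensors by how far each task moved them. A sketch over flat lists of floats; the paper's actual analysis method may differ, and the relative-L2 metric here is an assumption:

```python
import math

def per_layer_drift(base_state, tuned_state):
    """For each parameter tensor (here a flat list of floats), compute the
    L2 distance between base and fine-tuned weights, normalized by the base
    norm, and return tensors sorted from most- to least-changed. This is one
    simple way to see which layers a task influenced most."""
    drift = {}
    for name, base in base_state.items():
        tuned = tuned_state[name]
        diff = math.sqrt(sum((t - b) ** 2 for t, b in zip(tuned, base)))
        norm = math.sqrt(sum(b * b for b in base)) or 1.0
        drift[name] = diff / norm
    return sorted(drift.items(), key=lambda kv: kv[1], reverse=True)
```

Tensors where the two tasks' drifts concentrate in different layers are natural candidates for safe interpolation, which is the kind of question such an analysis can inform.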
👥 Authors

Mingzhi Zhu, Rensselaer Polytechnic Institute
Boris Sobolev, Cisco
Rahul Krishna, IBM Research
Raju Pavuluri, IBM Research
Stacy Patterson, Associate Professor, Rensselaer Polytechnic Institute (Distributed Systems, Machine Learning)
Michele Merler, IBM TJ Watson Research Center (Multimedia, Computer Vision, Machine Learning)