How Does Code Pretraining Affect Language Model Task Performance?

📅 2024-09-06
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study investigates the causal impact of code–natural language mixed pretraining on large language model performance. We systematically vary the code proportion—under both additive and competitive data mixing regimes—while maintaining a uniform Transformer architecture for pretraining, and evaluate models across diverse benchmarks including BigBench, semantic parsing, syntactic transformation, and commonsense reasoning. Our work establishes, for the first time, a causal relationship between code pretraining ratio and downstream task performance. We find that higher code proportions significantly enhance structured reasoning capabilities (e.g., semantic parsing and mathematical reasoning) but degrade sensitivity to linguistic structure (syntax and morphology) and impair commonsense reasoning. These results reveal a task-selective gain mechanism induced by code pretraining, wherein structural inductive biases from code benefit formal reasoning at the cost of natural language understanding. The findings provide both theoretical grounding and empirical evidence for principled, capability-aware pretraining data composition.

📝 Abstract
Large language models are increasingly trained on corpora containing both natural language and non-linguistic data like source code. Aside from aiding programming-related tasks, anecdotal evidence suggests that including code in pretraining corpora may improve performance on other, unrelated tasks, yet to date no work has been able to establish a causal connection by controlling the balance between language and code data. Here we do just this. We pretrain language models on datasets which interleave natural language and code in two different settings: competitive, in which the total volume of data seen during pretraining is held constant; and additive, in which the volume of language data is held constant. We study how the pretraining mixture affects performance on (a) a diverse collection of tasks included in the BigBench benchmark, and (b) compositionality, measured by generalization accuracy on semantic parsing and syntactic transformations. We find that pretraining on higher proportions of code improves performance on compositional tasks involving structured output (like semantic parsing), and mathematics. Conversely, increasing the code mixture can harm performance on other tasks, including tasks that require sensitivity to linguistic structure such as syntax or morphology, and tasks measuring real-world knowledge.
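The two mixing regimes can be summarized as a pair of token-budget rules: in the competitive setting the total pretraining volume is fixed, so code displaces language; in the additive setting the language volume is fixed, so code is stacked on top. A minimal sketch (the token counts are illustrative assumptions, not the paper's actual budgets):

```python
# Hedged sketch of the two data-mixing regimes. Function names and
# token budgets are illustrative, not taken from the paper.

def competitive_mix(total_tokens: int, code_fraction: float) -> dict:
    """Competitive: total volume fixed; code replaces language tokens."""
    code = int(total_tokens * code_fraction)
    return {"code": code, "language": total_tokens - code}

def additive_mix(language_tokens: int, code_fraction: float) -> dict:
    """Additive: language volume fixed; code is added on top so that
    code makes up `code_fraction` of the enlarged corpus."""
    # language = (1 - f) * total  =>  total = language / (1 - f)
    total = language_tokens / (1.0 - code_fraction)
    return {"code": int(total - language_tokens), "language": language_tokens}

print(competitive_mix(100_000, 0.25))  # {'code': 25000, 'language': 75000}
print(additive_mix(100_000, 0.25))    # {'code': 33333, 'language': 100000}
```

The contrast matters for interpretation: in the competitive regime any gain from code comes at the cost of language tokens seen, while in the additive regime the model simply sees more total data.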
Problem

Research questions and friction points this paper is trying to address.

Effect of code pretraining on language model capabilities
Impact of the code mixture ratio on downstream task performance
Establishing a causal link between code proportion in pretraining data and task outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pretrains models on controlled mixtures of natural language and code.
Varies the code ratio under additive and competitive regimes to isolate its effect.
Identifies task-selective gains: code helps compositional and mathematical tasks but hurts linguistic and knowledge tasks.