🤖 AI Summary
Existing code large language models (LLMs) treat source code as plain text, neglecting semantic graph structures such as data-flow graphs; prior approaches that do incorporate structural information require modifications to the Transformer architecture, compromising scalability and compatibility with pretrained LLMs.
Method: We propose a non-intrusive fine-tuning framework that, during training, encodes data-flow graphs with a graph neural network (GNN) and injects this structural knowledge into the LLM as an auxiliary cross-modal alignment task, without altering the LLM's architecture or adding inference overhead.
Contribution/Results: Our method achieves the first zero-coupling alignment between structural semantics and pretrained LLMs: graph data is required only at training time, and may come from a corpus unrelated to the downstream finetuning data. It consistently improves performance across five code-related tasks and seven baseline models (350M–14B parameters), notably enhancing the code comprehension of strong foundation models including LLaMA3 and Qwen2.5-Coder.
📝 Abstract
Programming languages possess rich semantic information, such as data flow, that is represented by graphs and is not available from the surface form of source code. Recent code language models have scaled to billions of parameters, but model source code solely as text tokens while ignoring all other structural information. Conversely, models that do encode the structural information of code require modifications to the Transformer architecture, limiting their scale and compatibility with pretrained LLMs. In this work, we take the best of both worlds with GALLa: Graph Aligned Large Language Models. GALLa utilizes graph neural networks and cross-modal alignment techniques to inject the structural information of code into LLMs as an auxiliary task during finetuning. This framework is both model-agnostic and task-agnostic, as it can be applied to any code LLM for any downstream code task, and requires the structural graph data only at training time, from a corpus unrelated to the finetuning data, while incurring no cost at inference time over the baseline LLM. Experiments on five code tasks with seven different baseline LLMs ranging in size from 350M to 14B parameters validate the effectiveness of GALLa, demonstrating consistent improvement over the baseline, even for powerful models such as LLaMA3 and Qwen2.5-Coder.
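To make the cross-modal alignment idea concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation) of the general pattern it describes: a GNN encodes a data-flow graph, and a small linear adapter projects the node embeddings into the LLM's embedding space so they can serve as soft prompts for an auxiliary training objective. All dimensions, the aggregation scheme, and the adapter design below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data-flow graph: 4 variables (nodes); an edge i -> j means the
# value of variable i flows into variable j. (Illustrative data.)
num_nodes, gnn_dim, llm_dim = 4, 8, 16
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
node_feats = rng.standard_normal((num_nodes, gnn_dim))


def gnn_layer(h, adj, w):
    """One mean-aggregation message-passing step (GraphSAGE-style)."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-9   # avoid division by zero
    msgs = (adj @ h) / deg                        # average neighbor features
    return np.maximum((h + msgs) @ w, 0.0)        # combine + ReLU


w_gnn = rng.standard_normal((gnn_dim, gnn_dim)) * 0.1
w_proj = rng.standard_normal((gnn_dim, llm_dim)) * 0.1  # adapter: GNN -> LLM space

h = gnn_layer(node_feats, adj, w_gnn)
soft_prompts = h @ w_proj  # shape (num_nodes, llm_dim)

# During finetuning, such projected embeddings would be prepended to the
# LLM's input embeddings for an auxiliary graph-conditioned objective;
# at inference time the GNN and adapter are simply dropped, so the
# deployed model is the unmodified LLM.
print(soft_prompts.shape)
```

Because the graph branch exists only as an auxiliary training signal, discarding it after training is what keeps inference cost identical to the baseline LLM.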