EyeMulator: Improving Code Language Models by Mimicking Human Visual Attention

📅 2025-08-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitation of Code Large Language Models (CodeLLMs) that rely solely on self-attention mechanisms while neglecting human developers’ visual cognitive patterns during code reading. We propose the first explicit modeling of human visual attention in CodeLLM fine-tuning by incorporating gaze-derived visual attention weights into the loss function. Specifically, we augment the standard cross-entropy loss with empirically measured eye-tracking weights during fine-tuning of pretrained code models on code translation, completion, and summarization tasks—using gaze data as supervisory signals to guide the model toward human-attended code regions. Experiments demonstrate significant improvements over strong baselines across multiple benchmarks; ablation studies confirm the effectiveness and robustness of the visual attention guidance mechanism. Our core contribution is the pioneering integration of human visual attention into the CodeLLM training paradigm, thereby enhancing semantic understanding and improving human–model cognitive alignment.

📝 Abstract
Code language models (so-called CodeLLMs) are now commonplace in software development. As a general rule, CodeLLMs are trained by dividing training examples into input tokens and then learning the importance of those tokens in a process called machine attention. Machine attention is based solely on the salience of input tokens to output tokens in the training examples. Human software developers are different: humans intuitively know that some tokens are more salient than others. While intuition itself is ineffable and a subject of philosophy, clues about salience are present in human visual attention, since people tend to look at more salient words more often. In this paper, we present EyeMulator, a technique for training CodeLLMs to mimic human visual attention while training for various software development tasks. We add special weights for each token in each input example to the loss function used during LLM fine-tuning. We draw these weights from observations of human visual attention in a previously collected, publicly available dataset of eye-tracking experiments on software engineering tasks. These new weights induce changes in the attention of the subject LLM during training, resulting in a model that does not need eye-tracking data during inference. Our evaluation shows that EyeMulator outperforms strong LLM baselines on several tasks, such as code translation, completion, and summarization. An ablation study further demonstrates that the improvement is due to the subject models learning to mimic human attention.
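The core mechanism in the abstract, per-token weights added to the fine-tuning loss, can be sketched as a weighted cross-entropy. This is an illustrative NumPy formulation, not the paper's implementation: the function name `gaze_weighted_cross_entropy` and the choice to normalize by the weight sum are assumptions.

```python
import numpy as np

def gaze_weighted_cross_entropy(logits, targets, gaze_weights):
    """Cross-entropy where each token's loss term is scaled by an
    empirically measured visual-attention weight.

    logits:       (T, V) unnormalized next-token scores
    targets:      (T,)   ground-truth token ids
    gaze_weights: (T,)   gaze-derived weights, one per token
    """
    # Numerically stable log-softmax over the vocabulary axis.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Standard per-token negative log-likelihood.
    nll = -log_probs[np.arange(len(targets)), targets]
    # Weighted average: human-attended tokens contribute more.
    return float((gaze_weights * nll).sum() / gaze_weights.sum())
```

With uniform weights this reduces to ordinary mean cross-entropy; raising the weight on a token the model already predicts well lowers the total loss, which is how the gradient signal is steered toward human-attended regions.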
Problem

Research questions and friction points this paper is trying to address.

Improving CodeLLMs by mimicking human visual attention
Enhancing token importance learning from eye-tracking data
Boosting performance on code translation and summarization tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mimics human visual attention weights
Adds eye-tracking weights to loss
Improves code models without inference data
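The second bullet, adding eye-tracking weights to the loss, presupposes a mapping from raw gaze measurements to per-token weights. The paper only says the weights are drawn from a public eye-tracking dataset, so the scheme below (normalizing fixation durations to mean 1 with a small floor for unfixated tokens) is a hypothetical sketch; `tokens_to_gaze_weights` and the `floor` parameter are inventions for illustration.

```python
import numpy as np

def tokens_to_gaze_weights(fixation_ms, floor=0.1):
    """Map per-token fixation durations (ms) to loss weights.

    Hypothetical scheme: normalize durations to mean 1 so the
    weighted loss stays on the same scale as the unweighted one,
    and clamp to a floor so never-fixated tokens still contribute.
    """
    d = np.asarray(fixation_ms, dtype=float)
    mean = d.mean()
    w = d / mean if mean > 0 else np.ones_like(d)
    return np.maximum(w, floor)
```

These weights would then be passed, token for token, into the weighted loss during fine-tuning; at inference time they are no longer needed, matching the abstract's claim that the trained model requires no eye-tracking data.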