LEANCODE: Understanding Models Better for Code Simplification of Pre-trained Large Language Models

πŸ“… 2025-05-20
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the prohibitive computational overhead in training and inference of large code language models caused by increasing input length, this paper proposes a context-aware dynamic code simplification method. Our approach jointly models token importance via encoder self-attention (centered on the CLS token) and encoder-decoder cross-attention, enabling fine-grained identification and pruning of redundant tokens. Unlike prior global average attention-based pruning paradigms, we introduce the first semantic-context-driven importance scoring mechanism, specifically tailored for code search and code summarization tasks. Experiments demonstrate substantial improvements: on code search, our method outperforms DietCode and SlimCode by 60% and 16%, respectively; on code summarization, it achieves gains of 29% and 27%. Crucially, these improvements are attained with significantly reduced computational costβ€”while preserving or even enhancing downstream task performance.
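The CLS-centered scoring described above can be sketched as follows. This is a minimal illustration, not the authors' released code: the function name, array shapes, and `keep_ratio` parameter are assumptions, and in LeanCode the attention weights would come from a fine-tuned encoder rather than be supplied directly.

```python
import numpy as np

def cls_importance(attn, keep_ratio=0.7):
    """Score tokens by the attention the [CLS] token pays them.

    attn: (num_heads, seq_len, seq_len) self-attention weights from an
    encoder layer; position 0 is assumed to hold [CLS].
    Returns sorted indices of the tokens to keep ([CLS] is always kept).
    """
    scores = attn[:, 0, :].mean(axis=0)            # average the CLS row over heads
    k = max(1, int(round(keep_ratio * scores.size)))
    keep = np.argsort(scores)[::-1][:k]            # top-k tokens by score
    return np.union1d(keep, [0])                   # never prune [CLS] itself
```

Low-scoring tokens would then be removed from the code sequence before training or inference, which is where the reported cost savings come from.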

πŸ“ Abstract
Large Language Models for code often entail significant computational complexity, which grows significantly with the length of the input code sequence. We propose LeanCode for code simplification to reduce training and prediction time, leveraging code contexts in utilizing attention scores to represent the tokens' importance. We advocate for the selective removal of tokens based on the average context-aware attention scores rather than average scores across all inputs. LeanCode uses the attention scores of `CLS' tokens within the encoder for classification tasks, such as code search. It also employs the encoder-decoder attention scores to determine token significance for sequence-to-sequence tasks like code summarization. Our evaluation shows LeanCode's superiority over the state-of-the-art methods DietCode and SlimCode, with improvements of 60% and 16% for code search, and 29% and 27% for code summarization, respectively.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational complexity in Large Language Models for code
Simplifying code to decrease training and prediction time
Improving code search and summarization via attention score optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses attention scores for token importance
Employs CLS tokens for classification tasks
Leverages encoder-decoder attention for summarization
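For sequence-to-sequence tasks, the encoder-decoder attention variant of the scoring could look like the sketch below. Again this is an illustrative assumption, not the paper's implementation: the function name, shapes, and the choice to average over all decoding steps are hypothetical.

```python
import numpy as np

def cross_attn_importance(cross_attn, keep_ratio=0.7):
    """Score source tokens for seq2seq tasks via encoder-decoder attention.

    cross_attn: (num_heads, tgt_len, src_len) decoder-to-encoder attention
    weights. A source token's importance is the attention it receives,
    averaged over heads and all decoding steps.
    Returns sorted indices of the source tokens to keep.
    """
    scores = cross_attn.mean(axis=(0, 1))          # (src_len,)
    k = max(1, int(round(keep_ratio * scores.size)))
    return np.sort(np.argsort(scores)[::-1][:k])   # top-k, in source order
```

The contrast with prior pruning paradigms is that the score is conditioned on the task's context (the [CLS] query for search, the decoder's queries for summarization) rather than on a global average over all attention positions.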
πŸ”Ž Similar Papers
No similar papers found.
Yan Wang
Central University of Finance and Economics
Ling Ding
Central University of Finance and Economics
Tien N Nguyen
University of Texas at Dallas
Shaohua Wang
Central University of Finance and Economics
Yanan Zheng
Tsinghua University
Natural Language Processing · Deep Learning