🤖 AI Summary
This study investigates how instruction tuning enhances large language models’ precise control over generated text length in English and Italian. Using Cumulative Weighted Attribution, a metric derived from Direct Logit Attribution, the authors quantify the contributions of individual attention heads and MLP modules across layers to length control. Results reveal that instruction tuning strengthens length regulation through deep-layer component specialization, with pronounced cross-lingual divergence: later-layer attention heads dominate in English, whereas final-layer MLPs exhibit compensatory enhancement in Italian. This work provides the first component-level mechanistic account of cross-lingual differences in length control, uncovering distinct neural pathways underlying language-specific controllability, and offers interpretable, mechanism-level insights for developing language-adaptive controllable text generation systems.
📝 Abstract
Adhering to explicit length constraints, such as generating text with a precise word count, remains a significant challenge for Large Language Models (LLMs). This study investigates the differences between foundation models and their instruction-tuned (IT) counterparts on length-controlled text generation in English and Italian. We analyze both performance and internal component contributions using Cumulative Weighted Attribution, a metric derived from Direct Logit Attribution. Our findings reveal that instruction tuning substantially improves length control, primarily by specializing components in deeper model layers. Specifically, attention heads in later layers of IT models show increasingly positive contributions, particularly in English. In Italian, while attention contributions are more attenuated, final-layer MLPs exhibit a stronger positive role, suggesting a compensatory mechanism. These results indicate that instruction tuning reconfigures later layers for task adherence, with component-level strategies potentially adapting to linguistic context.
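The abstract's exact definition of Cumulative Weighted Attribution is not given here, but the underlying idea of Direct Logit Attribution can be sketched: because a transformer's final logits are a linear function of the residual stream, each component's output can be projected onto the unembedding direction of a target token to obtain its direct contribution. The sketch below is a minimal illustration, assuming per-component residual-stream outputs are available as arrays; the layer weighting in `cumulative_weighted_attribution` is a hypothetical choice, not the paper's definition.

```python
import numpy as np

def direct_logit_attribution(component_outputs, unembed_direction):
    """Direct Logit Attribution: project each component's residual-stream
    output onto the unembedding direction of a target token.

    component_outputs: (n_components, d_model) array, one row per
        attention head or MLP output written into the residual stream.
    unembed_direction: (d_model,) column of the unembedding matrix
        for the token of interest.
    Returns per-component logit contributions, shape (n_components,).
    """
    return component_outputs @ unembed_direction

def cumulative_weighted_attribution(per_layer_contribs, weights=None):
    """Hypothetical sketch of a cumulative weighted score: weight each
    layer's DLA contribution and accumulate across depth, so the final
    entry summarizes a component type's total weighted contribution.

    per_layer_contribs: (n_layers,) DLA scores for one component type.
    weights: optional (n_layers,) weights; by default deeper layers are
        weighted more heavily (an assumption for illustration only).
    """
    n = len(per_layer_contribs)
    if weights is None:
        weights = np.arange(1, n + 1) / n  # linear depth weighting (assumed)
    return np.cumsum(weights * per_layer_contribs)
```

For example, with identity component outputs the DLA scores simply recover the unembedding direction, and a flat contribution profile of 3.0 across three layers accumulates to [1.0, 3.0, 6.0] under the assumed linear depth weights.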