🤖 AI Summary
This work addresses the challenge that existing large language models struggle to accurately model both local and global dependencies within hybrid attention mechanisms, while static head selection often leads to entangled attention head behaviors. To overcome these limitations, the authors propose BOSCH, a training-free, black-box binary optimization method that formulates short-context attention head selection as a large neighborhood search problem. The problem is decomposed into three subtasks: layer importance probing, adaptive sliding-window attention (SWA) ratio allocation, and grouped head-level optimization. BOSCH is the first method to enable dynamic, order-independent, SWA-ratio-adaptive head selection. Evaluated across four mainstream models ranging from 1.7B to 30B parameters, it significantly outperforms existing layer-wise and static head-level approaches, and under continued pretraining it recovers the original long-context performance faster and ultimately surpasses it.
📝 Abstract
Post-training hybridization of large language models (LLMs) often replaces quadratic self-attention with sliding-window attention (SWA) to reduce KV cache usage and improve latency. Existing hybridization schemes are typically defined either at the layer level (e.g., interleaving) or at the head level via static rankings from local to global. Layer-level schemes ignore that local and global dependencies are routed through heads within the same layer, while static head-level rankings suffer from entanglement: a head's local/global behavior can change after hybridization. We propose BOSCH, Black-box Binary Optimization for Short-context Head Selection, a training-free method that formulates the problem as a Large Neighborhood Search and decomposes it into three subproblems: (i) layer-importance detection via small-budget black-box probes, (ii) adaptive per-layer SWA-ratio assignment based on these sensitivities, and (iii) grouped head-level optimization within ratio buckets. Extensive experiments on 4 LLMs ranging from 1.7B to 30B parameters, across 4 SWA ratios, show that BOSCH consistently outperforms layer-level heuristics and 6 strong static head-level methods, with larger gains at higher SWA ratios. Under continual pretraining, BOSCH recovers original long-context performance faster and to a higher level. Analysis of the selected heads reveals substantial turnover across different SWA ratios, underscoring the importance of performing head-level selection for each target ratio rather than relying on fixed locality rankings.
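To make the three-stage decomposition concrete, here is a minimal, hedged sketch of how such a pipeline could be organized. The score function, sensitivity heuristic, budget allocation, and greedy selection below are all illustrative stand-ins (the paper's actual probes, benchmarks, and optimizer are not specified here); `mask[l][h] == 1` marks head `h` of layer `l` as converted to SWA.

```python
# Illustrative sketch of a three-stage black-box head-selection pipeline:
# (i) layer-importance probing, (ii) per-layer SWA-ratio allocation,
# (iii) grouped head-level greedy selection. All functions are toy
# assumptions, not the paper's implementation.

N_LAYERS, HEADS_PER_LAYER = 4, 8

def score_fn(mask):
    """Toy stand-in for the black-box short-context benchmark score.
    Pretends earlier layers are more sensitive to losing full attention."""
    penalty = sum((N_LAYERS - l) * sum(layer) for l, layer in enumerate(mask))
    return 100.0 - 0.5 * penalty

def probe_layer_importance():
    """Stage (i): flip each whole layer to SWA, record the score drop."""
    base = score_fn([[0] * HEADS_PER_LAYER for _ in range(N_LAYERS)])
    sens = []
    for l in range(N_LAYERS):
        mask = [[0] * HEADS_PER_LAYER for _ in range(N_LAYERS)]
        mask[l] = [1] * HEADS_PER_LAYER
        sens.append(base - score_fn(mask))
    return sens

def allocate_ratios(sens, global_ratio):
    """Stage (ii): give less-sensitive layers a larger share of the budget."""
    inv = [1.0 / (s + 1e-6) for s in sens]
    total = sum(inv)
    budget = global_ratio * N_LAYERS * HEADS_PER_LAYER
    return [min(HEADS_PER_LAYER, round(budget * w / total)) for w in inv]

def greedy_head_selection(counts):
    """Stage (iii): within each layer, greedily convert the heads whose
    conversion hurts the black-box score the least."""
    mask = [[0] * HEADS_PER_LAYER for _ in range(N_LAYERS)]
    for l, k in enumerate(counts):
        for _ in range(k):
            best_h, best_s = None, float("-inf")
            for h in range(HEADS_PER_LAYER):
                if mask[l][h]:
                    continue
                mask[l][h] = 1          # trial flip
                s = score_fn(mask)
                mask[l][h] = 0          # undo
                if s > best_s:
                    best_h, best_s = h, s
            mask[l][best_h] = 1
    return mask

sens = probe_layer_importance()
counts = allocate_ratios(sens, global_ratio=0.5)
mask = greedy_head_selection(counts)
```

Under this toy score, early layers show larger sensitivities, so the allocator pushes more SWA heads into later layers, matching the intuition that head selection should adapt to each target SWA ratio rather than follow one fixed ranking.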