🤖 AI Summary
This study investigates whether protein language models (PLMs) exhibit a "curse of depth", a phenomenon observed in large language models where deeper layers contribute minimally to predictions, indicating depth inefficiency. Employing a unified probing and perturbation framework, the authors systematically analyze layer-wise contribution dynamics across six prominent PLMs trained under diverse paradigms, including autoregressive, masked language modeling, and diffusion objectives. Extending depth-efficiency analysis for the first time to non-autoregressive and multimodal PLMs, the work reveals a consistent depth-dependent pattern: deeper layers primarily refine the output distribution, and this effect intensifies with increasing model depth. These findings indicate that the curse of depth is widespread in PLMs and motivate the design of more depth-efficient architectures and training methods.
📝 Abstract
Protein language models (PLMs) have become widely adopted as general-purpose models, demonstrating strong performance in protein engineering and de novo design. Like large language models (LLMs), they are typically trained as deep transformers with next-token or masked-token prediction objectives on massive sequence corpora and are scaled by increasing model depth. Recent work on autoregressive LLMs has identified the Curse of Depth: later layers contribute little to the final output predictions. These findings naturally raise the question of whether a similar depth inefficiency also appears in PLMs, where many widely used models are not autoregressive and some are multimodal, accepting both protein sequence and structure as input. In this work, we present a depth analysis of six popular PLMs across model families and scales, spanning three training objectives (autoregressive, masked, and diffusion), and quantify how layer contributions evolve with depth using a unified set of probing- and perturbation-based measurements. Across all models, we observe consistent depth-dependent patterns that extend prior findings on LLMs: later layers depend less on earlier computations and mainly refine the final output distribution, and these effects are increasingly pronounced in deeper models. Taken together, our results suggest that PLMs exhibit a form of depth inefficiency, motivating future work on more depth-efficient architectures and training methods.
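To make the measurement concrete, below is a minimal sketch of one perturbation-based probe in the spirit of the framework described above; it is not the paper's implementation, and the specific checkpoint (`facebook/esm2_t6_8M_UR50D`), example sequence, and layer-skipping mechanism are illustrative assumptions. The idea: bypass one transformer block at a time and measure how far the model's output distribution moves, quantified as the KL divergence from the unperturbed predictions.

```python
# Sketch of a layer-skip perturbation probe for a masked-language PLM.
# Assumes HuggingFace transformers with a small public ESM-2 checkpoint;
# the paper's actual models and metrics may differ.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, EsmForMaskedLM

MODEL = "facebook/esm2_t6_8M_UR50D"  # small ESM-2 checkpoint (assumption)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = EsmForMaskedLM.from_pretrained(MODEL).eval()

# An arbitrary protein sequence; any sequence works for this probe.
inputs = tokenizer("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", return_tensors="pt")

@torch.no_grad()
def output_log_probs():
    # Per-position log-probabilities over the amino-acid vocabulary.
    return F.log_softmax(model(**inputs).logits, dim=-1)

baseline = output_log_probs()

for i, layer in enumerate(model.esm.encoder.layer):
    original_forward = layer.forward
    # Perturbation: bypass this block entirely (identity map on the
    # residual stream), then recompute the output distribution.
    layer.forward = lambda hidden_states, *args, **kwargs: (hidden_states,)
    perturbed = output_log_probs()
    layer.forward = original_forward

    # Mean per-position KL(baseline || perturbed): small values mean
    # removing this layer barely changes the model's predictions.
    kl = (baseline.exp() * (baseline - perturbed)).sum(-1).mean()
    print(f"layer {i:2d}: KL = {kl.item():.4f}")
```

Under the depth-dependent pattern reported above, one would expect the KL values to shrink for later layers, i.e., skipping a deep block perturbs the output distribution far less than skipping an early one.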