From Words to Amino Acids: Does the Curse of Depth Persist?

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether protein language models (PLMs) exhibit a “curse of depth”—a phenomenon observed in large language models where deeper layers contribute minimally to predictions, indicating depth inefficiency. Employing a unified probing and perturbation framework, the authors systematically analyze layer-wise contribution dynamics across six prominent PLMs trained under diverse paradigms, including autoregressive, masked language modeling, and diffusion objectives. Extending depth-efficiency analysis for the first time to non-autoregressive and multimodal PLMs, the work reveals a consistent depth-dependence pattern: deeper layers primarily refine the output distribution, and this effect intensifies with increasing model depth. These findings confirm the widespread presence of the curse of depth in PLMs, offering insights for designing more depth-efficient architectures.

📝 Abstract
Protein language models (PLMs) have become widely adopted as general-purpose models, demonstrating strong performance in protein engineering and de novo design. Like large language models (LLMs), they are typically trained as deep transformers with next-token or masked-token prediction objectives on massive sequence corpora and are scaled by increasing model depth. Recent work on autoregressive LLMs has identified the Curse of Depth: later layers contribute little to the final output predictions. These findings naturally raise the question of whether a similar depth inefficiency also appears in PLMs, where many widely used models are not autoregressive, and some are multimodal, accepting both protein sequence and structure as input. In this work, we present a depth analysis of six popular PLMs across model families and scales, spanning three training objectives, namely autoregressive, masked, and diffusion, and quantify how layer contributions evolve with depth using a unified set of probing- and perturbation-based measurements. Across all models, we observe consistent depth-dependent patterns that extend prior findings on LLMs: later layers depend less on earlier computations and mainly refine the final output distribution, and these effects are increasingly pronounced in deeper models. Taken together, our results suggest that PLMs exhibit a form of depth inefficiency, motivating future work on more depth-efficient architectures and training methods.
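The paper's actual measurement code is not shown on this page. As a rough illustration of the perturbation idea described in the abstract (ablating one layer at a time and checking how much the final output distribution shifts), here is a minimal toy sketch in NumPy. All names, dimensions, and the tanh residual "layers" are hypothetical stand-ins for real transformer blocks, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, DEPTH, VOCAB = 16, 8, 20  # hypothetical toy sizes

# Toy residual "layers" x -> x + f_i(x), standing in for transformer blocks.
weights = [rng.normal(scale=0.3, size=(DIM, DIM)) for _ in range(DEPTH)]
readout = rng.normal(size=(DIM, VOCAB))  # final projection to token logits


def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()


def forward(x, skip=None):
    """Run the residual stack; optionally ablate one layer (identity pass)."""
    for i, W in enumerate(weights):
        if i != skip:
            x = x + np.tanh(x @ W)  # residual update
    return softmax(x @ readout)


def kl(p, q):
    """KL divergence between two output distributions."""
    return float(np.sum(p * np.log(p / q)))


x0 = rng.normal(size=DIM)
full = forward(x0)
# Per-layer contribution: divergence of the output when that layer is skipped.
impact = [kl(full, forward(x0, skip=i)) for i in range(DEPTH)]
for i, d in enumerate(impact):
    print(f"layer {i}: KL when skipped = {d:.4f}")
```

In a curse-of-depth setting, the impact scores for the later layers of a trained model would be markedly smaller than those of early layers, i.e., skipping them barely moves the output distribution.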
Problem

Research questions and friction points this paper is trying to address.

protein language models
curse of depth
depth inefficiency
transformer depth
layer contribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curse of Depth
Protein Language Models
Depth Efficiency
Layer Contribution Analysis
Transformer Architecture
👥 Authors
Aleena Siji, Technical University of Munich
Amir Mohammad Karimi Mamaghan, KTH Royal Institute of Technology
Ferdinand Kapl, Technical University of Munich
Tobias Höppe, Helmholtz AI | TUM (Self-supervised learning, Representation learning, Generative modeling)
Emmanouil Angelis, PhD student, Helmholtz AI/TUM (Machine Learning, Causality)
Andrea Dittadi, Helmholtz AI | Technical University of Munich (generative models, representation learning, machine learning, deep learning)
Maurice Brenner, Institute of Computational Biology, Helmholtz Munich
Michael Heinzinger, Institute of Computational Biology, Helmholtz Munich
Karl Henrik Johansson, KTH Royal Institute of Technology
Kaitlin Maile, Google, Paradigms of Intelligence Team
Johannes von Oswald, Research Scientist, Google (Deep Learning)
Stefan Bauer, Helmholtz | TUM | CIFAR (causal inference, representation learning, healthcare)