Controlling Repetition in Protein Language Models

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Protein language models often generate pathological repetitions during sequence synthesis, compromising structural foldability and functional viability. This work presents the first systematic quantification of this issue and introduces Utility-Controlled Contrastive Steering (UCCS), an inference-time method that decouples repetition control from structural utility by injecting steering vectors, eliminating the need for model retraining. Evaluated on ESM-3 and ProtGPT2 across CATH, UniRef50, and SCOP benchmarks, UCCS significantly reduces repetition rates compared to conventional decoding-penalty baselines while preserving high-confidence AlphaFold predictions. These results demonstrate UCCS’s effectiveness in improving both the quality and structural reliability of generated protein sequences.
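
The repetition metrics behind this "systematic quantification" are not spelled out on this page. As a minimal sketch of what motif-level and homopolymer repetition scores could look like (the function names and the k-mer definition are illustrative assumptions, not the paper's exact metrics):

```python
# Minimal sketch of repetition metrics in the spirit of the paper's
# motif-level and homopolymer quantification. The exact definitions are
# not given on this page; these functions are illustrative assumptions.
from collections import Counter


def max_homopolymer_run(seq: str) -> int:
    """Length of the longest run of a single residue (e.g. 'IAAAAK' -> 4)."""
    best = run = 0
    prev = None
    for aa in seq:
        run = run + 1 if aa == prev else 1
        prev = aa
        best = max(best, run)
    return best


def motif_repetition_rate(seq: str, k: int = 3) -> float:
    """Fraction of k-mers that occur more than once in the sequence."""
    if len(seq) < k:
        return 0.0
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    counts = Counter(kmers)
    return sum(c for c in counts.values() if c > 1) / len(kmers)


print(max_homopolymer_run("MKTAYIAAAAKLQ"))  # 4
print(motif_repetition_rate("ABCABCABC"))    # 1.0 (fully repetitive)
```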

📝 Abstract
Protein language models (PLMs) have enabled advances in structure prediction and de novo protein design, yet they frequently collapse into pathological repetition during generation. Unlike in text, where repetition merely reduces readability, in proteins it undermines structural confidence and functional viability. To characterize this problem, we present the first systematic study of repetition in PLMs. We first propose quantitative metrics to characterize motif-level and homopolymer repetition and then demonstrate the negative impact of repetition on folding reliability. To address this challenge, we propose UCCS (Utility-Controlled Contrastive Steering), which steers protein generation with a constrained dataset. Instead of naively contrasting high- vs. low-repetition sequences, we construct contrastive sets that maximize differences in repetition while tightly controlling for structural utility. This disentanglement yields steering vectors that specifically target repetition without degrading foldability. Injected at inference, these vectors consistently reduce repetition without retraining or heuristic decoding. Experiments with ESM-3 and ProtGPT2 on CATH, UniRef50, and SCOP show that our method outperforms decoding penalties and other baselines, substantially lowering repetition while preserving AlphaFold confidence scores. Our results establish repetition control as a central challenge for PLMs and highlight dataset-guided steering as a principled approach for reliable protein generation.
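
UCCS itself is described only at this high level, so the following is a rough sketch of the dataset-guided steering idea, not the authors' implementation. It assumes a HuggingFace-style causal LM whose per-layer hidden states can be read and perturbed; model, tokenizer, layer_idx, alpha, and the contrastive sets high_rep/low_rep are hypothetical placeholders.

```python
# Rough sketch of dataset-guided contrastive steering; NOT the authors' UCCS
# implementation. Assumes a HuggingFace-style causal LM exposing hidden states.
import torch


@torch.no_grad()
def mean_hidden(model, tokenizer, seqs, layer_idx):
    """Mean-pooled hidden state at one layer, averaged over a set of sequences."""
    pooled = []
    for s in seqs:
        ids = tokenizer(s, return_tensors="pt").input_ids
        out = model(ids, output_hidden_states=True)
        pooled.append(out.hidden_states[layer_idx].mean(dim=1).squeeze(0))
    return torch.stack(pooled).mean(dim=0)


def steering_vector(model, tokenizer, high_rep, low_rep, layer_idx):
    """Difference of means between the two contrastive sets.

    Per the abstract, high_rep and low_rep should differ in repetition while
    being matched on structural utility (e.g. comparable fold confidence), so
    the vector isolates the repetition direction rather than foldability.
    """
    h_hi = mean_hidden(model, tokenizer, high_rep, layer_idx)
    h_lo = mean_hidden(model, tokenizer, low_rep, layer_idx)
    return h_lo - h_hi  # points from repetitive toward non-repetitive states


def add_steering_hook(layer_module, vec, alpha=1.0):
    """Inject the vector into one layer's output at inference (no retraining)."""
    def hook(_module, _inputs, output):
        if isinstance(output, tuple):
            return (output[0] + alpha * vec, *output[1:])
        return output + alpha * vec
    return layer_module.register_forward_hook(hook)
```

Registering the hook on a mid-layer transformer block, with alpha tuned on held-out sequences, matches the abstract's "injected at inference" description; removing the hook restores the unmodified base model.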
Problem

Research questions and friction points this paper is trying to address.

protein language models
repetition
structural confidence
functional viability
protein generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

protein language models
repetition control
contrastive steering
UCCS
structure-aware generation