🤖 AI Summary
This work addresses the underappreciated biosecurity risk that fine-tuned protein language models may inadvertently generate toxic sequences. To mitigate this without retraining, the study adapts Logit Difference Amplification (LDA) as an inference-time mechanism that suppresses toxic generation by amplifying the logit difference between a baseline model and a toxicity-finetuned model. Evaluated across four taxonomic groups, LDA significantly reduces predicted toxicity, as measured by ToxDL2, while preserving the biological plausibility and structural foldability of generated sequences, as confirmed by Fréchet ESM Distance and pLDDT metrics. The approach outperforms existing activation-based steering strategies, demonstrating its effectiveness as a safety control applied at inference time.
📝 Abstract
Protein language models (PLMs) are becoming practical tools for de novo protein design, yet their dual-use potential raises safety concerns. We show that domain adaptation to specific taxonomic groups can elicit toxic protein generation, even when toxicity is not the training objective. To address this, we adapt Logit Difference Amplification (LDA) as an inference-time control mechanism for PLMs. LDA modifies token probabilities by amplifying the logit difference between a baseline model and a toxicity-finetuned model, requiring no retraining. Across four taxonomic groups, LDA consistently reduces the predicted toxicity rate (measured via ToxDL2) below the taxon-finetuned baseline while preserving biological plausibility. We evaluate quality using Fréchet ESM Distance and predicted foldability (pLDDT), finding that LDA maintains distributional similarity to natural proteins and structural viability, unlike activation-based steering methods, which tend to degrade sequence properties. Our results demonstrate that LDA provides a practical safety knob for protein generators, mitigating elicited toxicity while retaining generative quality.
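The core mechanism can be illustrated with a small sketch. The exact sign convention and amplification formula below are assumptions for illustration (the abstract only states that the logit difference between the baseline and the toxicity-finetuned model is amplified); here the steered logits are taken to be `base + alpha * (base - toxic)`, which pushes the sampling distribution away from tokens the toxicity-finetuned model favors, with `alpha = 0` recovering the baseline model. All names and the toy logit values are hypothetical.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def lda_logits(base, toxic, alpha=1.0):
    # Assumed LDA rule: amplify the (base - toxic) logit difference.
    # alpha = 0 recovers the base model; larger alpha pushes harder
    # away from the toxicity-finetuned model's preferences.
    return [b + alpha * (b - t) for b, t in zip(base, toxic)]

# Toy 4-token vocabulary; token 1 is the one the
# toxicity-finetuned model assigns a much higher logit.
base_logits = [2.0, 1.0, 0.5, 0.1]
toxic_logits = [0.5, 3.0, 0.5, 0.1]

p_base = softmax(base_logits)
p_lda = softmax(lda_logits(base_logits, toxic_logits, alpha=1.0))
# Under LDA, the probability of the toxicity-favored token (index 1)
# drops relative to the baseline distribution.
```

In practice the same transformation would be applied per decoding step to the full amino-acid vocabulary logits of the two PLMs, with `alpha` acting as the "safety knob" the abstract describes.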