ProtTeX-CC: Activating In-Context Learning in Protein LLM via Two-Stage Instruction Compression

📅 2025-08-16
📈 Citations: 0 (influential: 0)
🤖 AI Summary
ProtTeX encodes protein sequences and structures as concatenated discrete tokens, leading to doubled input length, misalignment between modalities, and inability to support in-context learning (ICL) due to context window limitations. To address this, we propose a two-stage residue-level compression framework: first, a sequence-structure joint embedding compression preserving residue-level alignment; second, a learnable self-compression module that reduces each sample to ~16 tokens—achieving a 93.68% prompt compression rate. Our method requires no modification to the backbone model and is the first to enable ICL for proteins without altering the ProtTeX architecture. Integrated with parameter-efficient fine-tuning (PEFT) and end-to-end training, it improves performance by 2% on domain-specific benchmarks and up to 11% on cross-domain datasets, significantly enhancing generalization.

📝 Abstract
Recent advances in protein large language models, such as ProtTeX, represent both side-chain amino acids and backbone structure as discrete token sequences of residue length. While this design enables unified modeling of multimodal protein information, it suffers from two major limitations: (1) The concatenation of sequence and structure tokens approximately doubles the protein length and breaks the intrinsic residue-level alignment between modalities. (2) Constrained by the training corpus and limited context window, ProtTeX is typically trained on single-protein inputs, rendering it incompatible with in-context learning (ICL) and thus limiting its generalization capability. To address these issues, we propose ProtTeX-CC, a lightweight two-stage compression framework designed to enhance ProtTeX under few-shot settings. We first design a joint embedding compression mechanism that fuses sequence and structure representations at the residue level, effectively reducing the protein input length by half without sacrificing performance. Then we propose a self-compression module that aggregates each full demonstration into the latent space of the last few linguistic tokens, reducing the average demonstration length from 751 tokens to less than 16 tokens. Compared to the original ProtTeX, our self-compression approach achieves a compression ratio of approximately 93.68% in the total prompt length under the 16-shot setting. Without modifying the backbone model, ProtTeX-CC introduces only a small number of additional parameters through PEFT-based tuning in the joint embedding compression stage and a single trainable projection layer in the self-compression stage. Extensive experiments on protein function prediction show that ProtTeX-CC improves performance on the in-domain benchmark by 2%, and generalizes well to the out-of-domain dataset with a performance gain of 11%.
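The two-stage pipeline described above can be sketched numerically. This is a minimal illustration, not the paper's implementation: the embedding dimension, protein length, the concatenate-then-project fusion operator, and the random weights are all assumptions; the abstract only specifies residue-level fusion of the two modalities, the ~16-token latent budget per demonstration, and a single trainable projection layer in the second stage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper does not publish these exact values.
n_res, d = 300, 64   # residues per protein, embedding dimension
k = 16               # latent tokens kept per demonstration

# Stage 1: joint embedding compression.
# ProtTeX concatenates sequence and structure tokens (2 * n_res positions);
# fusing the two embeddings residue-by-residue restores length n_res while
# keeping the residue-level alignment between modalities.
seq_emb = rng.standard_normal((n_res, d))
struct_emb = rng.standard_normal((n_res, d))
W_fuse = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)  # learned in practice
fused = np.concatenate([seq_emb, struct_emb], axis=-1) @ W_fuse
assert fused.shape == (n_res, d)  # half the length of the concatenated input

# Stage 2: self-compression.
# A full demonstration (protein + answer text, ~751 tokens on average per the
# abstract) is aggregated into the hidden states of its last k linguistic
# tokens, which are then mapped by the single trainable projection layer.
demo_hidden = rng.standard_normal((751, d))
W_proj = rng.standard_normal((d, d)) / np.sqrt(d)
compressed_demo = demo_hidden[-k:] @ W_proj
assert compressed_demo.shape == (k, d)  # 751 tokens -> 16 latent tokens
```

Under a 16-shot prompt, each ~751-token demonstration collapses to k latent vectors, which is where the reported ~93.68% reduction in total prompt length comes from.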
Problem

Research questions and friction points this paper is trying to address.

Resolves misalignment in multimodal protein token sequences
Enables in-context learning for protein language models
Reduces input length without performance loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint embedding compression for residue-level fusion
Self-compression module reduces demonstration length
PEFT-based tuning with minimal additional parameters
👥 Authors
Chuanliu Fan, School of Computer Science and Technology, Soochow University, Suzhou, China
Zicheng Ma, Peking University (biophysics, bioinformatics, deep learning)
Jun Gao, Zhejiang University, Hangzhou, China
Nan Yu, School of Computer Science and Technology, Soochow University, Suzhou, China
Jun Zhang, Changping Laboratory, Beijing, China
Ziqiang Cao, Soochow University (natural language processing)
Yi Qin Gao, Peking University (chemistry, biophysics)
Guohong Fu, School of Computer Science and Technology, Soochow University, Suzhou, China; Institute of Artificial Intelligence, Soochow University, Suzhou, China