🤖 AI Summary
ProtTeX encodes protein sequences and structures as concatenated discrete tokens, which doubles the input length, breaks the residue-level alignment between modalities, and rules out in-context learning (ICL) within the model's context window. To address this, we propose a two-stage residue-level compression framework: first, a sequence-structure joint embedding compression that halves the input length while preserving residue-level alignment; second, a learnable self-compression module that reduces each demonstration to roughly 16 tokens, achieving a 93.68% prompt compression rate under the 16-shot setting. Our method requires no modification to the backbone model and is the first to enable ICL for proteins without altering the ProtTeX architecture. Combined with parameter-efficient fine-tuning (PEFT) and end-to-end training, it improves performance by 2% on the in-domain benchmark and by 11% on the out-of-domain dataset, substantially enhancing generalization.
📝 Abstract
Recent advances in protein large language models, such as ProtTeX, represent both side-chain amino acids and backbone structure as discrete token sequences of residue length. While this design enables unified modeling of multimodal protein information, it suffers from two major limitations: (1) the concatenation of sequence and structure tokens approximately doubles the protein length and breaks the intrinsic residue-level alignment between modalities; (2) constrained by the training corpus and limited context window, ProtTeX is typically trained on single-protein inputs, rendering it incompatible with in-context learning (ICL) and thus limiting its generalization capability. To address these issues, we propose ProtTeX-CC, a lightweight two-stage compression framework designed to enhance ProtTeX under few-shot settings. We first design a joint embedding compression mechanism that fuses sequence and structure representations at the residue level, effectively reducing the protein input length by half without sacrificing performance. We then propose a self-compression module that aggregates each full demonstration into the latent space of the last few linguistic tokens, reducing the average demonstration length from 751 tokens to fewer than 16 tokens. Compared to the original ProtTeX, our self-compression approach achieves a compression ratio of approximately 93.68% of the total prompt length under the 16-shot setting. Without modifying the backbone model, ProtTeX-CC introduces only a small number of additional parameters through PEFT-based tuning in the joint embedding compression stage and a single trainable projection layer in the self-compression stage. Extensive experiments on protein function prediction show that ProtTeX-CC improves performance on the in-domain benchmark by 2%, and generalizes well to the out-of-domain dataset with a performance gain of 11%.
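The first stage, joint embedding compression, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the fusion map `W`, the embedding dimension, and the residue count are all assumptions, and a random linear map stands in for the learned fusion. The point it demonstrates is purely dimensional: fusing the two aligned modalities per residue yields `n_res` input positions instead of the `2 * n_res` positions that token concatenation would produce.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_res = 64, 100  # illustrative sizes, not from the paper

# Per-residue embeddings from the two modalities, aligned by residue index.
seq_emb = rng.normal(size=(n_res, d_model))
struct_emb = rng.normal(size=(n_res, d_model))

# Hypothetical fusion: a single learned linear map over the concatenated
# pair of embeddings (here a random matrix stands in for trained weights).
W = rng.normal(size=(2 * d_model, d_model)) / np.sqrt(2 * d_model)
fused = np.concatenate([seq_emb, struct_emb], axis=-1) @ W

# Token concatenation would occupy 2 * n_res positions; fusion keeps n_res,
# halving the protein input length while keeping one slot per residue.
print(fused.shape)  # (100, 64)
```

The key property is that fusion happens at matching residue positions, so the residue-level alignment that concatenation destroys is preserved by construction.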
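The second stage, self-compression, can likewise be sketched at the shape level. Again this is a hedged toy under stated assumptions: the hidden size, the choice of `k = 16` slots, and the random matrices are illustrative, and a random projection stands in for the single trainable projection layer the abstract describes. It shows only the bookkeeping: each full demonstration is summarized by the final hidden states of its last few tokens, which are projected and reused as a short stand-in for the whole demonstration in the few-shot prompt.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, k = 64, 16  # k = compressed tokens kept per demonstration (assumed)

# Final-layer hidden states of one full demonstration (average length 751
# tokens per the abstract), as produced by the frozen backbone.
demo_hidden = rng.normal(size=(751, d_model))

# The demonstration is aggregated into the latent space of its last k
# linguistic tokens, then passed through one trainable projection layer
# (modeled here by a random matrix P).
P = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
compressed = demo_hidden[-k:] @ P  # (16, d_model)

# Demonstration footprint in a 16-shot prompt: 16 * k compressed slots
# versus 16 * 751 raw tokens.
raw_len, comp_len = 16 * 751, 16 * k
print(comp_len, raw_len)
```

Note that 16 slots out of 751 tokens is a ~98% reduction for the demonstrations alone; the abstract's 93.68% figure is the compression of the *total* prompt, which still includes the uncompressed query protein.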