🤖 AI Summary
This study addresses the challenge of effectively integrating multi-granular representations of biological sequences, such as amino acid–level and Pfam domain–level features, to improve both model performance and interpretability. Combining the protein language model ESM (which operates at the amino acid level) with the domain-level model BiGCARP, the authors systematically evaluate the semantic properties of the two models' embeddings using probing tasks and representation analysis tools. Their analysis shows that deeper-layer embeddings more faithfully capture a model's learned knowledge and, for the first time, demonstrates that representations at different granularities encode complementary biological information. Leveraging this insight, the authors propose a cross-granularity fusion strategy that yields measurable performance gains across multiple intermediate-level biological prediction tasks.
📝 Abstract
Recent advances in general-purpose foundation models have stimulated the development of large biological sequence models. While natural language has symbolic granularity (characters, words, sentences), biological sequences exhibit hierarchical granularity whose levels (nucleotides, amino acids, protein domains, genes) each encode biologically functional information. In this paper, we investigate the integration of cross-granularity knowledge across models through a case study of BiGCARP, a Pfam domain-level model for biosynthetic gene clusters, and ESM, an amino acid-level protein language model. Using representation analysis tools and a set of probing tasks, we first explain why a straightforward cross-model embedding initialization fails to improve downstream performance in BiGCARP, and show that deeper-layer embeddings capture a more contextual and faithful representation of the model's learned knowledge. We further demonstrate that representations at different granularities encode complementary biological knowledge, and that combining them yields measurable performance gains on intermediate-level prediction tasks. Our findings highlight cross-granularity integration as a promising strategy for improving both the performance and interpretability of biological foundation models.
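The abstract does not specify how representations at different granularities are combined. A minimal sketch of one common fusion strategy, concatenating per-sequence pooled embeddings from each granularity before a downstream predictor, is shown below; the array shapes and the random placeholder embeddings are purely illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for precomputed, mean-pooled embeddings:
# amino acid-level embeddings (e.g. from an ESM-style model) and
# Pfam domain-level embeddings (e.g. from a BiGCARP-style model)
# for the same 10 sequences. Dimensions are illustrative only.
aa_emb = rng.normal(size=(10, 1280))      # amino acid granularity
domain_emb = rng.normal(size=(10, 128))   # domain granularity

# Cross-granularity fusion by simple concatenation: each sequence is
# now represented by both granularities side by side, ready to feed
# into a downstream probe or classifier.
fused = np.concatenate([aa_emb, domain_emb], axis=1)

print(fused.shape)  # (10, 1408)
```

Concatenation is only one option; weighted sums or learned gating over the two embedding spaces are natural alternatives when the dimensionalities differ greatly.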