🤖 AI Summary
Viral variant detection in wastewater metagenomes faces challenges including high sequencing noise, low viral coverage, fragmented reads, and the absence of reference genomes or ground-truth labels. To address these, we propose the first reference-free, unsupervised deep learning framework that integrates k-mer tokenization, a vector-quantized variational autoencoder (VQ-VAE), masked reconstruction pretraining, and contrastive learning, enabling discrete representation learning and variant clustering of viral sequences. Our method requires neither sequence alignment nor manual annotation, automatically uncovering both conserved and variable genomic patterns. Evaluated on approximately 100,000 SARS-CoV-2 sequencing reads, it achieves 99.52% token-level accuracy; after contrastive fine-tuning, the silhouette coefficient improves by up to 42% (from 0.31 to 0.44) over the pre-fine-tuning embeddings, substantially outperforming conventional approaches. This work establishes an interpretable, scalable paradigm for dynamic monitoring of environmental viromes.
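The first stages of the pipeline, k-mer tokenization of reads and nearest-code lookup against a learned VQ-VAE codebook, can be sketched as follows. This is a minimal illustration under our own assumptions: the function names, the choice of k, and the toy codebook are ours, not taken from the paper.

```python
import numpy as np

def kmer_tokenize(seq, k=6):
    """Slide a length-k window over the read and map each k-mer to an
    integer token in a 4**k vocabulary over A/C/G/T (k=6 is our assumption)."""
    base = {"A": 0, "C": 1, "G": 2, "T": 3}
    tokens = []
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if any(b not in base for b in kmer):  # skip ambiguous bases, e.g. N
            continue
        idx = 0
        for b in kmer:
            idx = idx * 4 + base[b]  # base-4 encoding of the k-mer
        tokens.append(idx)
    return tokens

def quantize(z, codebook):
    """Vector quantization: replace each encoder output vector with the
    index of its nearest codebook entry (squared Euclidean distance)."""
    # z: (n, d) encoder outputs; codebook: (K, d) discrete codes
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, K)
    return d2.argmin(axis=1)
```

Codebook utilization, as reported in the abstract (101 of 512 codes active), would then simply be the fraction of distinct indices returned by `quantize` over the whole dataset.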
📝 Abstract
Wastewater-based genomic surveillance has emerged as a powerful tool for population-level viral monitoring, offering community-wide insight into circulating viral variants. However, this approach faces significant computational challenges stemming from high sequencing noise, low viral coverage, fragmented reads, and the absence of labeled variant annotations. Traditional reference-based variant calling pipelines struggle with novel mutations and require extensive computational resources. We present a comprehensive framework for unsupervised viral variant detection using a Vector-Quantized Variational Autoencoder (VQ-VAE) that learns a discrete codebook of genomic patterns from k-mer tokenized sequences, requiring neither reference genomes nor variant labels. Our approach extends the base VQ-VAE architecture with masked reconstruction pretraining for robustness to missing data and with contrastive learning for more discriminative embeddings. Evaluated on SARS-CoV-2 wastewater sequencing data comprising approximately 100,000 reads, our VQ-VAE achieves 99.52% mean token-level accuracy and a 56.33% exact sequence match rate while maintaining 19.73% codebook utilization (101 of 512 codes active), demonstrating efficient discrete representation learning. Contrastive fine-tuning yields substantial clustering gains that scale with projection dimensionality: 64-dimensional embeddings improve the Silhouette score by 35% (0.31 to 0.42), while 128-dimensional embeddings improve it by 42% (0.31 to 0.44), demonstrating the impact of embedding dimensionality on variant discrimination. Our reference-free framework provides a scalable, interpretable approach to genomic surveillance with direct applications to public health monitoring.
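The contrastive fine-tuning stage can be illustrated with a small NumPy sketch of a projection head and the NT-Xent objective widely used in contrastive learning. The abstract does not specify the exact loss or head architecture, so the formulation, function names, and temperature value below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project(z, W):
    """Hypothetical linear projection head mapping VQ-VAE embeddings to a
    lower-dimensional contrastive space (e.g. 64 or 128 dims), L2-normalized."""
    p = z @ W
    return p / np.linalg.norm(p, axis=1, keepdims=True)

def nt_xent(p1, p2, tau=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy): each read and
    its augmented view are a positive pair; all other reads in the batch
    act as negatives."""
    n = p1.shape[0]
    p = np.concatenate([p1, p2], axis=0)          # (2n, d) stacked views
    sim = p @ p.T / tau                           # cosine similarities / tau
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

Minimizing this loss pulls the two views of a read together and pushes different reads apart in the projected space, which is what drives the Silhouette-score gains reported above; the score itself can be computed with a standard clustering toolkit such as scikit-learn's `silhouette_score`.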