🤖 AI Summary
This work addresses the trace reconstruction problem in DNA data storage: recovering the original DNA sequence from multiple noisy reads that are independently corrupted by deletions, insertions, and substitutions. Existing methods struggle to model technology-specific error patterns, so we introduce, for the first time, pretrained language models to this task. We propose TReconLM, an autoregressive model trained via next-token prediction, pretrained on large-scale synthetically corrupted sequences and fine-tuned on real sequencing data to implicitly capture contextual dependencies and error distributions. Experiments show that TReconLM outperforms state-of-the-art methods across diverse noise regimes, recovering a substantially higher fraction of sequences without error. Our approach bridges advances in natural language processing with molecular information storage, offering a new route to high-fidelity DNA data readout.
📝 Abstract
The general trace reconstruction problem seeks to recover an original sequence from its noisy copies independently corrupted by deletions, insertions, and substitutions. This problem arises in applications such as DNA data storage, a promising storage medium due to its high information density and longevity. However, errors introduced during DNA synthesis, storage, and sequencing require correction through algorithms and codes, with trace reconstruction often used as part of the data retrieval process. In this work, we propose TReconLM, which leverages language models trained on next-token prediction for trace reconstruction. We pretrain language models on synthetic data and fine-tune on real-world data to adapt to technology-specific error patterns. TReconLM outperforms state-of-the-art trace reconstruction algorithms, including prior deep learning approaches, recovering a substantially higher fraction of sequences without error.
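To make the problem setup concrete, the corruption process described above can be simulated as an insertion-deletion-substitution (IDS) channel applied independently to each copy of the sequence. The sketch below is a minimal illustration, assuming uniform random insertions and substitutions over the DNA alphabet with independent per-position error probabilities; the function names and parameters are hypothetical, and the actual error model used for pretraining in the paper may differ.

```python
import random

def ids_channel(seq, p_del=0.05, p_ins=0.05, p_sub=0.05,
                alphabet="ACGT", rng=None):
    """Corrupt a sequence with independent deletions, insertions,
    and substitutions (a toy IDS channel)."""
    rng = rng or random.Random()
    out = []
    for base in seq:
        # Insertion: with probability p_ins, emit a random base first.
        if rng.random() < p_ins:
            out.append(rng.choice(alphabet))
        r = rng.random()
        if r < p_del:
            continue  # deletion: drop this base
        elif r < p_del + p_sub:
            # Substitution: replace with a different base.
            out.append(rng.choice([b for b in alphabet if b != base]))
        else:
            out.append(base)  # base transmitted correctly
    return "".join(out)

def make_traces(seq, n_traces=10, **channel_kwargs):
    """Generate a cluster of independently corrupted reads (traces)
    of the same original sequence."""
    return [ids_channel(seq, **channel_kwargs) for _ in range(n_traces)]

rng = random.Random(0)
original = "".join(rng.choice("ACGT") for _ in range(60))
traces = make_traces(original, n_traces=5, rng=rng)
```

A trace reconstruction algorithm takes a cluster like `traces` as input and must output an estimate of `original`; because deletions and insertions shift positions, the traces generally have different lengths and cannot simply be aligned column by column, which is what makes the problem hard.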