🤖 AI Summary
This work addresses the performance gap between decoder-only and encoder-only large language models on time-dependent partial differential equation (PDE) simulation tasks, focusing on cross-modal modeling in purely autoregressive architectures. To overcome the inherent unidirectionality of decoder-only models, the authors propose two novel sequence modeling strategies, Parallel Flipping and Sequence Doubling, that explicitly capture bidirectional spatiotemporal dependencies without introducing encoder components. These strategies are integrated with cross-modal adaptation techniques for robust multimodal representation learning and evaluated systematically across multiple canonical PDE benchmarks. Experimental results show that the proposed approach significantly improves the long-term prediction accuracy and stability of decoder-only models, narrowing the performance gap to their encoder-only counterparts and establishing a more flexible, computationally efficient, and scalable decoder-only paradigm for scientific machine learning.
📝 Abstract
Large language models have achieved great success on natural language tasks in recent years, and they have also shown great promise when adapted to new modalities, e.g., for scientific machine learning tasks. Even though decoder-only models are more popular within NLP and scale exceedingly well at generating natural language, most proposed approaches for cross-modal adaptation focus on encoder-only models, raising the question of how model architecture affects these approaches. In this paper, we therefore perform a series of ablation studies to answer this question, systematically comparing encoder-only and decoder-only models on cross-modal adaptation for time-dependent simulation tasks based on partial differential equations (PDEs). We find that decoder-only models are far worse than encoder-only models when existing approaches are applied unmodified. In contrast to several other domains, scaling decoder-only models also does not help. To harness the potential of decoder-only models in this context, we introduce two novel approaches, Parallel Flipping and Sequence Doubling, that attempt to mimic bidirectionality in autoregressive models. Both of our methods improve the overall performance of decoder-only models for all tasks and all cross-modal adaptation methods, closing the gap to encoder-only model performance. We hope that our findings broaden the spectrum of models used for cross-modal adaptation tasks to further scientific ML.