🤖 AI Summary
Chemical large language models (LLMs) suffer from round-trip inconsistency in bidirectional tasks—e.g., reaction prediction and retrosynthesis—where models generate molecular descriptions but fail to reconstruct the original structure from those descriptions, indicating reliance on unidirectional mappings. To address this, we propose Round-Trip Reinforcement Learning (RTRL), a framework that explicitly optimizes for round-trip consistency. It alternates between forward (text → molecule) and backward (molecule → text) self-iterative training, leveraging massive unlabelled chemical data without human annotations. Integrating reinforcement learning with self-supervised signals, our method constructs reward signals from bidirectional generation quality. Experiments demonstrate significant improvements over strong baselines across multiple benchmarks, with round-trip consistency gains of up to 23.6%. This advances chemical foundation models toward reliable, reversible, bidirectional intelligence.
📝 Abstract
Large Language Models (LLMs) are emerging as versatile foundation models for computational chemistry, handling bidirectional tasks like reaction prediction and retrosynthesis. However, these models often lack round-trip consistency. For instance, a state-of-the-art chemical LLM may successfully caption a molecule, yet be unable to accurately reconstruct the original structure from its own generated text. This inconsistency suggests that models are learning unidirectional memorization rather than flexible mastery. Indeed, recent work has demonstrated a strong correlation between a model's round-trip consistency and its performance on the primary tasks. This strong correlation reframes consistency into a direct target for model improvement. We therefore introduce Round-Trip Reinforcement Learning (RTRL), a novel framework that trains a model to improve its consistency by using the success of a round-trip transformation as a reward signal. We further propose an iterative variant where forward and reverse mappings alternately train each other in a self-improvement loop, a process that is highly data-efficient and notably effective with the massive amount of unlabelled data common in chemistry. Experiments demonstrate that RTRL significantly **boosts performance and consistency** over strong baselines across supervised, self-supervised, and synthetic data regimes. This work shows that round-trip consistency is not just a desirable property but a trainable objective, offering a new path toward more robust and reliable foundation models.
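The core idea—scoring a model by how well a forward-then-backward pass reconstructs the original input—can be illustrated with a minimal sketch. The helper name `round_trip_reward`, the toy dictionary "models", and the use of string similarity as the reward are illustrative assumptions, not the paper's actual implementation (a real setup would pair two chemical LLMs and a chemistry-aware similarity metric):

```python
from difflib import SequenceMatcher

def round_trip_reward(original, forward, backward):
    """Reward = similarity between the original input and its
    round-trip reconstruction (forward pass, then backward pass).
    A perfect reconstruction scores 1.0."""
    reconstruction = backward(forward(original))
    return SequenceMatcher(None, original, reconstruction).ratio()

# Toy stand-ins for the forward (molecule -> caption) and backward
# (caption -> molecule) mappings; RTRL would use trained LLMs here.
caption_model = {"CCO": "ethanol"}.get
parse_model = {"ethanol": "CCO"}.get

reward = round_trip_reward("CCO", caption_model, parse_model)
print(reward)  # a consistent round trip yields 1.0
```

In an RL loop, this scalar would serve as the reward for the sampled forward generation, requiring no labels beyond the unlabelled input itself.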