🤖 AI Summary
This work addresses the disconnect between existing biological reasoning datasets and cutting-edge research topics, as well as the lack of effective methods for automatically constructing high-quality, verifiable question-answer pairs from scientific literature—limitations that hinder models' reasoning capabilities in biology. To bridge this gap, the authors propose BioAlchemy, a pipeline that systematically transforms biomedical research articles into reasoning-oriented training data aligned with contemporary biological themes. Combining natural language processing, information extraction, and topic-alignment techniques, the pipeline yields BioAlchemy-345K, a dataset of 345K verifiable question-answer pairs. Leveraging this resource, the authors train BioAlchemist-8B via reinforcement learning. Experimental results show that the model improves over its base model by 9.12% on established biological reasoning benchmarks, substantially enhancing its capacity for scientific inference.
📝 Abstract
Despite the large corpus of biology training text, the impact of reasoning models on biological research generally lags behind their impact on math and coding. In this work, we show that biology questions in current large-scale reasoning datasets do not align well with the topic distribution of modern biological research, and that this topic imbalance may negatively affect performance. In addition, we find that methods for extracting challenging, verifiable research problems from biology research text are a critical yet underdeveloped ingredient in applying reinforcement learning to biology research tasks. We introduce BioAlchemy, a pipeline for sourcing a diverse set of verifiable question-and-answer pairs from a scientific corpus of biology research text. We curate BioAlchemy-345K, a training dataset containing over 345K scientific reasoning problems in biology. We then demonstrate that aligning our dataset with the topic distribution of modern biological research improves reasoning performance when combined with reinforcement learning. Finally, we present BioAlchemist-8B, which improves over its base reasoning model by 9.12% on biology benchmarks. These results demonstrate the efficacy of our approach for developing stronger scientific reasoning capabilities in biology. The BioAlchemist-8B model is available at: https://huggingface.co/BioAlchemy.