Unleashing Scientific Reasoning for Bio-experimental Protocol Generation via Structured Component-based Reward Mechanism

📅 2025-10-17
🤖 AI Summary
Current large language models (LLMs) suffer from step omissions, logical inconsistencies, and semantic inaccuracies when generating biological experimental protocols, severely limiting their scientific utility. To address this, we propose the "Sketch-and-Fill" generation paradigm, which decouples structural analysis, component construction, and natural language realization of protocols. We further design a fine-grained, structure-aware reward mechanism—grounded in atomic protocol units—to enable verifiable optimization of step alignment, sequential consistency, and semantic fidelity. Leveraging our newly curated large-scale biological protocol dataset, SciRecipe, we develop a staged Knowledge-to-Action training framework. Our model, Thoth, achieves state-of-the-art performance across multiple benchmark dimensions, significantly outperforming leading open-source and proprietary LLMs—particularly in logical step ordering and operational feasibility.

📝 Abstract
The foundation of reproducible science lies in protocols that are precise, logically ordered, and executable. The autonomous generation of these protocols through natural language queries could greatly improve the efficiency of the reproduction process. However, current leading large language models (LLMs) often generate incomplete or inconsistent protocols, limiting their utility. To address this limitation, we first introduce SciRecipe, a large-scale dataset of over 12K structured protocols spanning 27 biological subfields and encompassing both comprehension and problem-solving tasks. To further improve protocol generation, we propose the "Sketch-and-Fill" paradigm, which separates analysis, structuring, and expression to ensure each step is explicit and verifiable. Complementing this, the structured component-based reward mechanism evaluates step granularity, action order, and semantic fidelity, aligning model optimization with experimental reliability. Building on these components, we develop Thoth, trained through a staged Knowledge-to-Action process that progresses from knowledge acquisition to operational reasoning and ultimately to robust, executable protocol generation. Across multiple benchmarks, Thoth consistently surpasses both proprietary and open-source LLMs, achieving significant improvements in step alignment, logical sequencing, and semantic accuracy. Our approach paves the way for reliable scientific assistants that bridge knowledge with experimental execution. All data, code, and models will be released publicly.
Problem

Research questions and friction points this paper is trying to address.

Generating precise biological protocols from natural language queries
Addressing incomplete protocol generation by current language models
Ensuring experimental reliability through structured reasoning mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured component-based reward mechanism for evaluation
Sketch-and-Fill paradigm separates analysis and expression
Staged Knowledge-to-Action process for protocol generation
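The structured component-based reward could, in spirit, be sketched as below. This is a purely illustrative toy, not the paper's implementation: the function name, the equal weighting, and the representation of a protocol as an ordered list of atomic step labels are all assumptions. It scores the three axes the abstract names: step alignment (coverage of reference steps), sequential consistency (a Kendall-tau-style pairwise ordering score), and a crude precision proxy standing in for semantic fidelity.

```python
# Hypothetical sketch of a structured, component-based protocol reward
# (illustrative only; names, weights, and step representation are assumptions).

def component_reward(generated, reference):
    """Combine step coverage, ordering consistency, and a simple
    semantic-fidelity proxy into one scalar reward in [0, 1]."""
    ref_set = set(reference)
    gen_set = set(generated)

    # 1) Step alignment: fraction of reference steps that appear at all.
    coverage = len(ref_set & gen_set) / len(ref_set) if ref_set else 0.0

    # 2) Sequential consistency: fraction of shared step pairs that keep
    #    their relative order (a Kendall-tau-style concordance score).
    shared = [s for s in reference if s in gen_set]
    pos = {s: i for i, s in enumerate(generated)}
    pairs = [(a, b) for i, a in enumerate(shared) for b in shared[i + 1:]]
    if pairs:
        order_score = sum(1 for a, b in pairs if pos[a] < pos[b]) / len(pairs)
    else:
        order_score = 1.0

    # 3) Semantic-fidelity proxy: penalize extra, unsupported steps.
    precision = len(ref_set & gen_set) / len(gen_set) if gen_set else 0.0

    # Equal weighting is an arbitrary illustrative choice.
    return (coverage + order_score + precision) / 3.0
```

A real implementation would presumably replace the exact-match precision proxy with an embedding-based or learned semantic score over atomic protocol units, and tune the component weights against experimental reliability.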
Haoran Sun
Shanghai Artificial Intelligence Laboratory, Fudan University
Yankai Jiang
Shanghai Artificial Intelligence Laboratory
Zhenyu Tang
Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University
Yaning Pan
Shanghai Artificial Intelligence Laboratory, Fudan University
Shuang Gu
Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University
Zekai Lin
Fudan University
Lilong Wang
Shanghai Artificial Intelligence Laboratory
Wenjie Lou
Shanghai Artificial Intelligence Laboratory
Lei Liu
Fudan University
Lei Bai
Shanghai AI Laboratory
Foundation Model · Science Intelligence · Multi-Agent System · Autonomous Discovery
Xiaosong Wang
Shanghai AI Laboratory
Medical Image Analysis · Computer Vision · Vision and Language