Aviary: training language agents on challenging scientific tasks

📅 2024-12-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Language agents exhibit limited multi-step reasoning and high computational costs when tackling complex scientific tasks such as molecular cloning, literature-based question answering, and protein stability optimization. Method: This paper introduces Aviary, an extensible, scalable gymnasium for training multi-step reasoning language agents on scientific tasks. It formalizes agents as policies solving language decision processes (LDPs), i.e., language-grounded partially observable Markov decision processes, and provides a unified agent framework supporting internal reasoning, planning, tool invocation, and temperature-scaled sampling. Aviary pairs open-source, non-frontier large language models with online training and inference-time compute scaling. Results: Experiments show that Aviary-trained agents match or exceed both frontier-LLM agents and human experts on multiple biological tasks, at up to 100x lower inference cost. This supports the feasibility of high-performance scientific automation using lightweight models.

πŸ“ Abstract
Solving complex real-world tasks requires cycles of actions and observations. This is particularly true in science, where tasks require many cycles of analysis, tool use, and experimentation. Language agents are promising for automating intellectual tasks in science because they can interact with tools via natural language or code. Yet their flexibility creates conceptual and practical challenges for software implementations, since agents may comprise non-standard components such as internal reasoning, planning, tool usage, as well as the inherent stochasticity of temperature-sampled language models. Here, we introduce Aviary, an extensible gymnasium for language agents. We formalize agents as policies solving language-grounded partially observable Markov decision processes, which we term language decision processes. We then implement five environments, including three challenging scientific environments: (1) manipulating DNA constructs for molecular cloning, (2) answering research questions by accessing scientific literature, and (3) engineering protein stability. These environments were selected for their focus on multi-step reasoning and their relevance to contemporary biology research. Finally, with online training and scaling inference-time compute, we show that language agents backed by open-source, non-frontier LLMs can match and exceed both frontier LLM agents and human experts on multiple tasks at up to 100x lower inference cost.
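The abstract formalizes agents as policies solving language-grounded POMDPs, termed language decision processes. As a rough illustration only, the sketch below shows this framing as a gym-style observation/action loop; `ToyLiteratureEnv`, `policy`, and `rollout` are invented names for this example, not Aviary's actual API, and the environment is a deliberately trivial stand-in for tasks like literature question answering.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a language decision process (LDP): the agent is a
# policy mapping an observation string to a tool call; the environment
# returns a new observation, a reward, and a done flag.

@dataclass
class ToyLiteratureEnv:
    """Toy environment: the agent must 'search' before it can 'answer'."""
    question: str = "What stabilizes this protein?"
    searched: bool = False

    def reset(self) -> str:
        self.searched = False
        return f"Question: {self.question}"

    def step(self, action: str) -> tuple[str, float, bool]:
        if action == "search":
            self.searched = True
            return "Found: disulfide bonds increase stability.", 0.0, False
        if action == "answer":
            # Reward only if the agent consulted the literature first.
            return "Episode finished.", 1.0 if self.searched else 0.0, True
        return "Unknown tool.", 0.0, False

def policy(observation: str) -> str:
    """Hypothetical fixed policy: search first, then answer."""
    return "search" if observation.startswith("Question") else "answer"

def rollout(env: ToyLiteratureEnv, policy: Callable[[str], str]) -> float:
    """Run one episode and return the total reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total
```

In this framing, training an agent amounts to improving the policy so that rollouts earn higher reward, e.g. `rollout(ToyLiteratureEnv(), policy)` yields `1.0` because the fixed policy searches before answering.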
Problem

Research questions and friction points this paper is trying to address.

- Aviary Platform
- Complex Scientific Queries
- Biological Challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Aviary training framework
- Scientific problem-solving environments
- Cost-effective model optimization
👥 Authors
Siddharth Narayanan
FutureHouse Inc., San Francisco, CA
James D. Braza
FutureHouse Inc., San Francisco, CA
Ryan-Rhys Griffiths
University of Cambridge
Manu Ponnapati
FutureHouse Inc., San Francisco, CA
Albert Bou
FutureHouse Inc., San Francisco, CA
Jon M. Laurent
FutureHouse Inc., San Francisco, CA
Ori Kabeli
FutureHouse Inc., San Francisco, CA
Geemi P. Wellawatte
FutureHouse Inc., San Francisco, CA
Sam Cox
FutureHouse
Samuel G. Rodriques
Director and CEO, FutureHouse Inc.; Group Leader, Francis Crick Institute
Andrew D. White
FutureHouse, University of Rochester