🤖 AI Summary
Problem: Language agents show weak multi-step reasoning and incur high computational costs on complex scientific tasks, such as molecular cloning, literature-based question answering, and protein stability optimization.
Method: This paper introduces Aviary, an extensible gymnasium for training multi-step reasoning language agents on scientific tasks. It formally defines the language decision process (LDP), a language-grounded partially observable Markov decision process, and provides a unified agent framework that accommodates internal reasoning, planning, tool invocation, and the stochasticity of temperature-sampled language models. Agents backed by open-source, non-frontier LLMs are improved through online training and inference-time compute scaling.
Results: Experiments demonstrate that on multiple biology tasks, Aviary agents match or exceed frontier LLM agents and human experts, at up to 100x (two orders of magnitude) lower inference cost. This supports the feasibility of high-performance scientific automation using lightweight, open-source models.
📄 Abstract
Solving complex real-world tasks requires cycles of actions and observations. This is particularly true in science, where tasks require many cycles of analysis, tool use, and experimentation. Language agents are promising for automating intellectual tasks in science because they can interact with tools via natural language or code. Yet their flexibility creates conceptual and practical challenges for software implementations, since agents may comprise non-standard components such as internal reasoning, planning, tool usage, as well as the inherent stochasticity of temperature-sampled language models. Here, we introduce Aviary, an extensible gymnasium for language agents. We formalize agents as policies solving language-grounded partially observable Markov decision processes, which we term language decision processes. We then implement five environments, including three challenging scientific environments: (1) manipulating DNA constructs for molecular cloning, (2) answering research questions by accessing scientific literature, and (3) engineering protein stability. These environments were selected for their focus on multi-step reasoning and their relevance to contemporary biology research. Finally, with online training and scaling inference-time compute, we show that language agents backed by open-source, non-frontier LLMs can match and exceed both frontier LLM agents and human experts on multiple tasks at up to 100x lower inference cost.
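The language decision process described above, a POMDP whose observations and actions are natural-language messages, can be illustrated with a small gym-style sketch. This is a minimal toy illustration of the idea, not Aviary's actual API: the names `LanguageEnv`, `CountdownEnv`, `Message`, and `rollout` are hypothetical, and the real environments (cloning, literature QA, protein stability) are far richer.

```python
# Hypothetical sketch of a language decision process (LDP):
# a POMDP where observations and actions are messages.
# All class/function names here are illustrative, not Aviary's API.
from dataclasses import dataclass


@dataclass
class Message:
    role: str
    content: str


class LanguageEnv:
    """Base class: observations in, a message action out, scalar reward back."""

    def reset(self) -> list[Message]:
        raise NotImplementedError

    def step(self, action: Message) -> tuple[list[Message], float, bool]:
        """Return (observations, reward, done)."""
        raise NotImplementedError


class CountdownEnv(LanguageEnv):
    """Toy task: the agent must say 'done' within three turns."""

    def reset(self) -> list[Message]:
        self.turns = 0
        return [Message("system", "Say 'done' to finish.")]

    def step(self, action: Message) -> tuple[list[Message], float, bool]:
        self.turns += 1
        if action.content.strip().lower() == "done":
            return [Message("env", "Task complete.")], 1.0, True
        if self.turns >= 3:
            return [Message("env", "Out of turns.")], 0.0, True
        return [Message("env", "Not yet.")], 0.0, False


def rollout(env: LanguageEnv, policy) -> tuple[float, list[Message]]:
    """Run one episode; the policy maps the message history to an action."""
    history = list(env.reset())
    total, done = 0.0, False
    while not done:
        action = policy(history)
        obs, reward, done = env.step(action)
        history.append(action)
        history.extend(obs)
        total += reward
    return total, history
```

Under this framing, an "agent" is just the `policy` callable, so a temperature-sampled LLM, a tool-calling chain, or a trained open-weights model can all be dropped into the same rollout loop and compared on reward.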