🤖 AI Summary
Existing plant phenotyping tools demand specialized programming and computational expertise, limiting their accessibility and maintainability. Method: We propose the first conversational multi-agent AI system tailored for plant phenotyping, centered on a large language model (LLM) that orchestrates specialized agents (computer vision, automated machine learning (AutoML), and visualization modules) to support end-to-end natural-language-driven tasks: image analysis, phenotypic trait extraction, result visualization, and model training. Contribution/Results: The architecture substantially lowers the barrier to entry for domain scientists without computational backgrounds, democratizing and automating phenotypic analysis. Evaluated across diverse real-world multi-crop scenarios, the system achieves high task completion rates, natural human–AI interaction, and highly interpretable outputs, establishing a scalable, deployable paradigm for agricultural AI.
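The orchestration pattern described above can be sketched in miniature: an LLM acts as a planner that routes a natural-language request to the appropriate specialized agent. This is an illustrative assumption of the general multi-agent pattern, not PhenoAssistant's actual API; all names (`vision_agent`, `automl_agent`, `orchestrate`, the keyword routing) are hypothetical stand-ins for the LLM's tool-selection step.

```python
# Minimal sketch of an LLM-orchestrated multi-agent pipeline.
# NOTE: all identifiers here are hypothetical; the real system's
# planner is an LLM, not the keyword matcher used as a stand-in below.
from typing import Callable, Dict

def vision_agent(request: str) -> str:
    """Stand-in for the computer-vision module (e.g. trait extraction from images)."""
    return f"vision: extracted traits for '{request}'"

def automl_agent(request: str) -> str:
    """Stand-in for the AutoML module (e.g. training a trait-prediction model)."""
    return f"automl: trained model for '{request}'"

def viz_agent(request: str) -> str:
    """Stand-in for the visualization module (e.g. plotting extracted traits)."""
    return f"viz: plotted results for '{request}'"

# Registry mapping task keywords to agents; the LLM planner would do this
# selection from the natural-language request itself.
AGENTS: Dict[str, Callable[[str], str]] = {
    "analy": vision_agent,   # "analyze" / "analyse"
    "train": automl_agent,
    "plot": viz_agent,
}

def orchestrate(request: str) -> str:
    """Route a user request to the first matching agent."""
    for keyword, agent in AGENTS.items():
        if keyword in request.lower():
            return agent(request)
    return "no suitable agent found"

print(orchestrate("Analyse these rosette images"))
print(orchestrate("Train a model on the extracted traits"))
```

The design point is the separation of concerns: each agent stays simple and testable, while a single planner (here a keyword match, in the described system an LLM) decides which tool to invoke for each conversational turn.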
📝 Abstract
Plant phenotyping increasingly relies on (semi-)automated image-based analysis workflows to improve its accuracy and scalability. However, many existing solutions remain overly complex, are difficult to reimplement and maintain, and pose high barriers for users without substantial computational expertise. To address these challenges, we introduce PhenoAssistant: a pioneering AI-driven system that streamlines plant phenotyping via intuitive natural language interaction. PhenoAssistant leverages a large language model to orchestrate a curated toolkit supporting tasks including automated phenotype extraction, data visualisation and automated model training. We validate PhenoAssistant through several representative case studies and a set of evaluation tasks. By significantly lowering technical hurdles, PhenoAssistant underscores the promise of AI-driven methodologies for democratising AI adoption in plant biology.