🤖 AI Summary
Problem: Current evaluation benchmarks for large language models (LLMs) have not kept pace with how researchers actually use these models in astronomy. Method: We propose a user-behavior-driven evaluation approach, grounded in 368 real-world astronomy queries submitted over four weeks to a retrieval-augmented generation (RAG) literature bot deployed on Slack, together with 11 in-depth user interviews. Through inductive coding of these queries and interviews, we identify astronomers' usage patterns and core evaluation criteria for LLM-based literature interaction systems, including accuracy, traceability, and domain adaptability. Contribution/Results: We synthesize these findings into concrete recommendations for benchmark construction and apply them to build AstroBench, an extensible, astronomy-specific sample benchmark, along with science-oriented evaluation principles. Together, the benchmark and principles aim to improve both the validity of domain-specific LLM assessment and the practical usability of LLMs in scientific workflows, and offer a methodological template for developing evaluation benchmarks for other specialized domains.
📝 Abstract
There is growing interest in leveraging LLMs to aid in astronomy and other scientific research, but benchmarks for LLM evaluation in general have not kept pace with the increasingly diverse ways that real people evaluate and use these models. In this study, we seek to improve evaluation procedures by building an understanding of how users evaluate LLMs. We focus on a particular use case: an LLM-powered retrieval-augmented generation bot for engaging with astronomical literature, which we deployed via Slack. Our inductive coding of 368 queries to the bot over four weeks and our follow-up interviews with 11 astronomers reveal how humans evaluated this system, including the types of questions asked and the criteria for judging responses. We synthesize our findings into concrete recommendations for building better benchmarks, which we then employ in constructing a sample benchmark for evaluating LLMs for astronomy. Overall, our work offers ways to improve LLM evaluation and ultimately usability, particularly for use in scientific research.
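To make the system under study concrete, the following is a minimal sketch of what an LLM-powered retrieval-augmented generation loop over astronomical literature might look like. The TF-IDF retriever, toy abstract corpus, and prompt format are illustrative assumptions for this sketch, not the deployed Slack bot's actual implementation, and the final LLM call is left as a prompt-assembly step.

```python
# Minimal retrieval-augmented generation (RAG) sketch for astronomy literature Q&A.
# Hypothetical example: TF-IDF retrieval over a toy corpus of paper abstracts,
# followed by prompt assembly for an LLM. The deployed bot's actual retriever,
# corpus, and model are not specified here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus: (identifier, abstract) pairs standing in for a literature index.
PAPERS = [
    ("2019ApJ...875L...1E",
     "First M87 Event Horizon Telescope results: the shadow of the supermassive black hole."),
    ("2016PhRvL.116f1102A",
     "Observation of gravitational waves from a binary black hole merger."),
    ("1998AJ....116.1009R",
     "Observational evidence from supernovae for an accelerating universe and a cosmological constant."),
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform([abstract for _, abstract in PAPERS])

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k abstracts most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [PAPERS[i] for i in top]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; the LLM call itself is omitted here."""
    context = "\n".join(f"[{pid}] {text}" for pid, text in retrieve(query))
    return (
        "Answer the question using only the excerpts below, and cite the "
        f"bracketed identifiers you rely on.\n\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    print(build_prompt("What did the first gravitational wave detection observe?"))
```

The grounding-and-cite instruction in the prompt reflects the traceability concern raised by interviewed astronomers: responses are expected to point back to specific papers rather than unattributed model knowledge.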