🤖 AI Summary
Current evaluation methods for large language models (LLMs) in open-ended generation tasks suffer from low reliability, poor domain adaptability, and configuration-induced biases. To address these limitations, this work proposes a flexible and extensible end-to-end evaluation framework that integrates multiple model APIs, prompt engineering, and parameter tuning. The framework further introduces an interactive evaluation strategy guidance mechanism and an automated meta-evaluation approach based on perturbed data. These innovations substantially enhance the transparency and credibility of LLM assessments. The effectiveness of the framework is validated through empirical evaluation on clinical note generation tasks. The accompanying code and tools have been open-sourced, offering a practical and reproducible solution for domain-specific LLM evaluation.
📝 Abstract
Robust and comprehensive evaluation of large language models (LLMs) is essential for identifying effective LLM system configurations and mitigating risks associated with deploying LLMs in sensitive domains. However, traditional statistical metrics are poorly suited to open-ended generation tasks, leading to growing reliance on LLM-based evaluation methods. These methods, while often more flexible, introduce additional complexity: they depend on carefully chosen models, prompts, parameters, and evaluation strategies, making the evaluation process prone to misconfiguration and bias. In this work, we present EvalSense, a flexible, extensible framework for constructing domain-specific evaluation suites for LLMs. EvalSense provides out-of-the-box support for a broad range of model providers and evaluation strategies, and assists users in selecting and deploying suitable evaluation methods for their specific use cases. This is achieved through two unique components: (1) an interactive guide aiding users in evaluation method selection and (2) automated meta-evaluation tools that assess the reliability of different evaluation approaches using perturbed data. We demonstrate the effectiveness of EvalSense in a case study involving the generation of clinical notes from unstructured doctor-patient dialogues, using a popular open dataset. All code, documentation, and assets associated with EvalSense are open-source and publicly available at https://github.com/nhsengland/evalsense.
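The perturbation-based meta-evaluation idea from the abstract can be illustrated with a minimal sketch. This is not EvalSense's actual API; the function names, the toy unigram-overlap metric, and the word-dropping perturbation below are all hypothetical stand-ins. The principle: degrade reference texts in a controlled way, then check how reliably a candidate evaluator scores intact text above its degraded version.

```python
import random


def overlap_score(candidate: str, reference: str) -> float:
    """Toy unigram-overlap metric standing in for any evaluator under test."""
    cand, ref = set(candidate.lower().split()), set(reference.lower().split())
    return len(cand & ref) / len(ref) if ref else 0.0


def perturb(text: str, drop_rate: float = 0.5, seed: int = 0) -> str:
    """Degrade a text by randomly dropping words (one simple perturbation type)."""
    rng = random.Random(seed)
    words = text.split()
    kept = [w for w in words if rng.random() > drop_rate]
    return " ".join(kept) if kept else words[0]


def meta_evaluate(evaluator, references: list[str]) -> float:
    """Fraction of references for which the evaluator ranks the intact text
    above its perturbed version -- a crude reliability/sensitivity check."""
    hits = 0
    for ref in references:
        degraded = perturb(ref)
        if evaluator(ref, ref) > evaluator(degraded, ref):
            hits += 1
    return hits / len(references)


# Hypothetical clinical-note snippets used as references.
notes = [
    "patient reports mild chest pain radiating to the left arm",
    "follow up in two weeks to review blood pressure medication",
]
print(meta_evaluate(overlap_score, notes))
```

A real framework would sweep many perturbation types and severities and compare several evaluators (statistical and LLM-based) on the same perturbed data, but the ranking-under-degradation test above captures the core meta-evaluation signal.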