🤖 AI Summary
Evaluating the quality of metamorphic relations (MRs) automatically generated by large language models (LLMs) remains challenging due to the lack of systematic, scalable, and reliable assessment frameworks across diverse systems under test (SUTs).
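For readers new to metamorphic testing, here is a minimal illustration (not drawn from the paper): an MR relates a transformation of a program's input to an expected relation between the corresponding outputs, which lets a test check the SUT against itself when no exact expected output is available. The sine identity sin(π − x) = sin(x) is a classic example:

```python
import math

def mr_sine_symmetry(x: float) -> bool:
    """Metamorphic relation for sin: sin(pi - x) should equal sin(x).

    Rather than checking sin(x) against a known oracle value, the MR
    compares two outputs of the SUT against each other, sidestepping
    the test-oracle problem.
    """
    source_output = math.sin(x)               # source test case
    follow_up_output = math.sin(math.pi - x)  # follow-up test case
    return math.isclose(source_output, follow_up_output, abs_tol=1e-9)

# The relation should hold for any input, so it can be checked at scale.
assert all(mr_sine_symmetry(x / 10) for x in range(-100, 100))
```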
Method: We propose an enhanced MR evaluation framework applicable to heterogeneous SUTs, integrating a customized GPT-based evaluator, human expert review, and a multi-dimensional MR quality metric suite. Crucially, we conduct the first systematic comparative analysis of consistency and complementarity between GPT-based and human evaluations.
Contributions/Results: (1) We construct a benchmark comprising nine distinct SUTs, ranging from simple programs to complex AI/ML-integrated systems; (2) We empirically demonstrate that GPT-4 significantly outperforms GPT-3.5 in MR accuracy and usefulness, with robust performance across SUT types; (3) We establish a human–AI collaborative evaluation paradigm, delivering a reproducible, extensible quality-assurance methodology for AI-driven software testing (a sketch of such an evaluator follows).
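The paper's evaluator code is not reproduced here, so the following is only a minimal sketch of how a GPT-based MR evaluator could be wired up with the OpenAI chat API. The model name, rubric wording, and scoring dimensions (correctness, usefulness) are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of a GPT-based MR evaluator using the openai
# Python client; the rubric and score dimensions are illustrative,
# not the paper's actual implementation.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "You are evaluating a metamorphic relation (MR) for a system under test. "
    "Score each dimension from 1 (poor) to 5 (excellent) and reply with JSON "
    'of the form {"correctness": n, "usefulness": n, "rationale": "..."}.'
)

def evaluate_mr(sut_description: str, mr_text: str, model: str = "gpt-4") -> dict:
    """Ask the model to score one candidate MR against a fixed rubric."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # near-deterministic scoring for reproducibility
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"SUT: {sut_description}\nMR: {mr_text}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Example: score the sine MR from the earlier sketch.
print(evaluate_mr("math.sin over floats", "sin(pi - x) == sin(x) for all x"))
```

Scores from such calls could then be aggregated per MR and compared against human ratings to study the consistency and complementarity between automated and human evaluation that the summary highlights.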
📝 Abstract
Context: This paper provides an in-depth examination of the generation and evaluation of Metamorphic Relations (MRs) using GPT models developed by OpenAI, with a particular focus on the capabilities of GPT-4 in software testing environments.
Objective: The aim is to examine the quality of MRs produced by GPT-3.5 and GPT-4 for a specific System Under Test (SUT) adopted from an earlier study, and to introduce and apply an improved set of evaluation criteria to a diverse range of SUTs.
Method: The initial phase evaluates MRs generated by GPT-3.5 and GPT-4 using criteria from a prior study, followed by the application of an enhanced evaluation framework to MRs created by GPT-4 for nine diverse SUTs, ranging from simple programs to complex systems incorporating AI/ML components. A custom-built GPT evaluator, alongside human evaluators, assessed the MRs, enabling a direct comparison between automated and human evaluation methods.
Results: The study finds that GPT-4 outperforms GPT-3.5 in generating accurate and useful MRs. Under the enhanced evaluation criteria, GPT-4 demonstrates a significant ability to produce high-quality MRs across a wide range of SUTs, including complex systems incorporating AI/ML components.
Conclusions: GPT-4 exhibits advanced capabilities in generating MRs suitable for various applications. The research underscores the growing potential of AI in software testing, particularly in the generation and evaluation of MRs, and points towards the complementarity of human and AI skills in this domain.