Integrating Artificial Intelligence with Human Expertise: An In-depth Analysis of ChatGPT's Capabilities in Generating Metamorphic Relations

📅 2025-03-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Evaluating the quality of metamorphic relations (MRs) automatically generated by large language models (LLMs) remains challenging due to the lack of systematic, scalable, and reliable assessment frameworks across diverse systems under test (SUTs). Method: We propose an enhanced MR evaluation framework applicable to heterogeneous SUTs, integrating a customized GPT-based evaluator, human expert review, and a multi-dimensional MR quality metric suite. Crucially, we conduct the first systematic comparative analysis of consistency and complementarity between GPT-based and human evaluations. Contributions/Results: (1) We construct a benchmark comprising nine distinct SUT categories—from simple programs to complex AI/ML-integrated systems; (2) We empirically demonstrate that GPT-4 significantly outperforms GPT-3.5 in MR accuracy and practicality, with robust performance across SUT types; (3) We establish a human–AI collaborative evaluation paradigm, delivering a reproducible, extensible quality assurance methodology for AI-driven software testing.

📝 Abstract
Context: This paper provides an in-depth examination of the generation and evaluation of Metamorphic Relations (MRs) using GPT models developed by OpenAI, with a particular focus on the capabilities of GPT-4 in software testing environments. Objective: The aim is to examine the quality of MRs produced by GPT-3.5 and GPT-4 for a specific System Under Test (SUT) adopted from an earlier study, and to introduce and apply an improved set of evaluation criteria for a diverse range of SUTs. Method: The initial phase evaluates MRs generated by GPT-3.5 and GPT-4 using criteria from a prior study, followed by an application of an enhanced evaluation framework on MRs created by GPT-4 for a diverse range of nine SUTs, varying from simple programs to complex systems incorporating AI/ML components. A custom-built GPT evaluator, alongside human evaluators, assessed the MRs, enabling a direct comparison between automated and human evaluation methods. Results: The study finds that GPT-4 outperforms GPT-3.5 in generating accurate and useful MRs. With the advanced evaluation criteria, GPT-4 demonstrates a significant ability to produce high-quality MRs across a wide range of SUTs, including complex systems incorporating AI/ML components. Conclusions: GPT-4 exhibits advanced capabilities in generating MRs suitable for various applications. The research underscores the growing potential of AI in software testing, particularly in the generation and evaluation of MRs, and points towards the complementarity of human and AI skills in this domain.
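To make the core concept concrete: a metamorphic relation (MR) links a source input to a follow-up input and constrains how the SUT's outputs may differ. The sketch below is illustrative only (not taken from the paper) and uses a classic MR for a sorting routine: permuting the input must not change the sorted output. `sut_sort` stands in for an arbitrary system under test.

```python
# Illustrative sketch: checking a permutation MR against a stand-in SUT.
import random

def sut_sort(xs):
    """System under test (stand-in): a simple sort."""
    return sorted(xs)

def check_permutation_mr(xs, trials=10):
    """MR: for each randomly permuted follow-up input, the SUT's
    output must equal its output for the original source input."""
    expected = sut_sort(xs)
    for _ in range(trials):
        follow_up = xs[:]
        random.shuffle(follow_up)  # derive a follow-up input
        if sut_sort(follow_up) != expected:
            return False  # MR violated: likely fault in the SUT
    return True

print(check_permutation_mr([3, 1, 4, 1, 5, 9, 2, 6]))  # True
```

MRs like this sidestep the test-oracle problem: no expected output is needed, only a relation between runs, which is why generating diverse, correct MRs (the task delegated to GPT-3.5/GPT-4 here) is valuable.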
Problem

Research questions and friction points this paper is trying to address.

Evaluating GPT-4's ability to generate Metamorphic Relations for software testing.
Comparing MR quality between GPT-3.5 and GPT-4 across diverse systems.
Assessing human–AI collaboration in generating and evaluating MRs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPT-4 generates high-quality Metamorphic Relations
Combines AI evaluation with human expertise
Advanced criteria for diverse software testing
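The combination of a GPT-based evaluator with human review could be realized as per-dimension score aggregation. The sketch below is a hypothetical illustration, not the paper's actual scoring scheme; the dimension names and weighting are assumptions.

```python
# Hypothetical sketch (not the paper's method): combining per-dimension
# MR quality scores from a GPT-based evaluator and a human reviewer.

def aggregate_mr_scores(gpt_scores, human_scores, weight_human=0.5):
    """Weighted average per quality dimension (dimension names illustrative)."""
    assert gpt_scores.keys() == human_scores.keys()
    return {
        dim: round((1 - weight_human) * gpt_scores[dim]
                   + weight_human * human_scores[dim], 2)
        for dim in gpt_scores
    }

gpt = {"accuracy": 4.0, "practicality": 3.5, "generality": 4.5}
human = {"accuracy": 4.5, "practicality": 4.0, "generality": 3.5}
print(aggregate_mr_scores(gpt, human))
# {'accuracy': 4.25, 'practicality': 3.75, 'generality': 4.0}
```

Disagreement between the two score sources on a given dimension is itself informative, since the paper's comparative analysis concerns exactly the consistency and complementarity of GPT-based and human evaluations.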