🤖 AI Summary
This work proposes the first agent-based, end-to-end debugging framework for machine learning pipelines, addressing the limitations of existing testing methods, which often fail both to uncover critical defects in complex models and to provide actionable remediation guidance. The framework integrates Deepchecks for automated detection of potential issues and leverages an AI agent to generate comprehensive diagnostic reports that include severity rankings, interpretable explanations, and concrete repair suggestions. By revealing latent failure modes overlooked by conventional approaches, the method enables non-expert users to understand and act upon identified problems. Furthermore, it integrates seamlessly into continuous development workflows, significantly enhancing the reliability and maintainability of ML systems.
📝 Abstract
In recent years, machine learning (ML)-based software systems have been increasingly deployed in critical applications, yet systematic testing of their behavior remains challenging due to complex model architectures, large input spaces, and evolving deployment environments. Existing testing approaches often rely on generating test cases from given requirements, which frequently fail to reveal critical bugs in modern ML models because of their complexity. Most importantly, although such approaches can detect the presence of specific failures in ML software, they rarely provide any guidance on how to fix those errors. To tackle this, in this paper, we present DeepFix, a tool for automated testing of the entire ML pipeline using an agentic AI framework. Our testing approach first leverages Deepchecks to test the ML software for potential bugs, and then uses an agentic AI-based approach to generate a detailed bug report. This report includes a ranking of the detected bugs by severity, along with explanations that can be easily interpreted by non-experts in data science and, most importantly, possible ways to fix these bugs. Additionally, DeepFix supports several types of ML software systems and can be integrated easily into any ML workflow, enabling continuous testing throughout the development lifecycle. We discuss our already validated cases as well as some planned validations designed to demonstrate how the agentic testing process can reveal hidden failure modes that remain undetected by conventional testing methods. A 5-minute screencast demonstrating the tool's core functionality is available at https://youtu.be/WfwZmFcQgBQ.