🤖 AI Summary
A significant gap exists between requirement specification and test validation in robotic systems; conventional manual approaches are error-prone and ill-suited to the dynamic evolution of requirements, design, and implementation. To address this, we propose a knowledge-enhanced Behavior-Driven Development (BDD) methodology for automated acceptance testing. Our approach integrates domain-specific language (DSL)-based modeling, model transformation, and knowledge graph fusion to construct a composable and inferable BDD semantic model, enabling end-to-end translation from natural-language requirements to executable tests. We further integrate the Isaac Sim simulation platform to support variant-aware, multi-agent testing under diverse environmental configurations. Evaluated on a pick-and-place sorting task, our method automatically generates valid test cases, precisely detects behavioral deviations and failure modes, and significantly improves the systematicity, reproducibility, and reliability of acceptance testing.
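The end-to-end translation from natural-language requirements to executable tests described above can be illustrated with a minimal, hypothetical sketch: a Gherkin-style acceptance criterion for the pick-and-place sorting task, parsed into step tuples that a test runner could bind to executable actions. The scenario text and parser below are illustrative assumptions, not the paper's actual DSL.

```python
# Hypothetical sketch: a Gherkin-style acceptance criterion for a
# pick-and-place sorting task, parsed into (keyword, text) step pairs.
SCENARIO = """
Scenario: Sort a cube into the correct bin
  Given the robot arm is at its home position
  And a red cube is on the table
  When the robot picks the cube
  And places it in the red bin
  Then the red bin contains the cube
"""

def parse_steps(scenario: str):
    """Extract (keyword, text) pairs; 'And' inherits the previous keyword."""
    steps, current = [], None
    for line in scenario.strip().splitlines()[1:]:  # skip the Scenario: title
        keyword, _, text = line.strip().partition(" ")
        if keyword != "And":
            current = keyword
        steps.append((current, text))
    return steps

steps = parse_steps(SCENARIO)
print(steps[0])  # ('Given', 'the robot arm is at its home position')
```

In a BDD framework, each parsed step would be matched to a step definition that drives the robot (or its simulation) and asserts on the resulting world state.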
📝 Abstract
The specification and validation of robotics applications require bridging the gap between formulating requirements and systematic testing. This often involves manual and error-prone tasks that become more complex as requirements, design, and implementation evolve. To address this challenge systematically, we propose extending behaviour-driven development (BDD) to define and verify acceptance criteria for robotic systems. In this context, we use domain-specific modelling and represent composable BDD models as knowledge graphs for robust querying and manipulation, facilitating the generation of executable testing models. A domain-specific language helps to efficiently specify robotic acceptance criteria. We explore the potential for automated generation and execution of acceptance tests through a software architecture that integrates a BDD framework, Isaac Sim, and model transformations, focusing on acceptance criteria for pick-and-place applications. We tested this architecture with an existing pick-and-place implementation and evaluated the execution results, which show how this application behaves and fails differently when tested against variations of the agent and environment. This research advances the rigorous and automated evaluation of robotic systems, contributing to their reliability and trustworthiness.
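To make the knowledge-graph idea concrete, the sketch below stores BDD model elements as subject-predicate-object triples in a minimal in-memory graph and queries them to enumerate test variants over environment configurations. All identifiers (`scn:`, `bdd:`, `env:`) are illustrative assumptions; the paper's actual representation and query machinery may differ.

```python
# Hypothetical sketch: a BDD scenario as subject-predicate-object triples
# (a minimal in-memory knowledge graph), queried to compose test variants.
TRIPLES = {
    ("scn:sort_cube", "bdd:hasStep", "step:pick"),
    ("scn:sort_cube", "bdd:hasStep", "step:place"),
    ("step:pick", "bdd:keyword", "When"),
    ("step:place", "bdd:keyword", "When"),
    ("scn:sort_cube", "bdd:variantAxis", "env:clear_table"),
    ("scn:sort_cube", "bdd:variantAxis", "env:cluttered_table"),
}

def query(graph, s=None, p=None, o=None):
    """Pattern-match triples; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in graph
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Generate one concrete test per environment variant of the scenario.
variants = [o for _, _, o in query(TRIPLES, s="scn:sort_cube", p="bdd:variantAxis")]
tests = [f"test_sort_cube[{v}]" for v in sorted(variants)]
print(tests)  # ['test_sort_cube[env:clear_table]', 'test_sort_cube[env:cluttered_table]']
```

The same pattern-matching query interface supports both composing scenarios from shared step definitions and fanning a single scenario out across agent and environment variations before execution in simulation.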