🤖 AI Summary
This study addresses the growing deployment gap in autonomous driving research, where 3D object detection methods prioritize leaderboard performance over code quality and production readiness. We conduct the first systematic empirical analysis of 178 open-source repositories from the KITTI and NuScenes leaderboards, employing static analysis tools (Pylint, Bandit, and Radon) to quantitatively assess code defects, security vulnerabilities, maintainability, and CI/CD adoption. Our findings reveal that only 7.3% of projects meet basic production standards, and that 80% of security issues are concentrated in just five vulnerability categories. Notably, CI/CD integration is significantly associated with better code maintainability. Building on these insights, we propose targeted development guidelines to bridge the gap between academic research and real-world engineering deployment.
📝 Abstract
Autonomous vehicle (AV) perception models are typically evaluated solely on benchmark performance metrics, with limited attention to code quality, production readiness, and long-term maintainability. This creates a significant gap between research excellence and real-world deployment in safety-critical systems subject to international safety standards. To address this gap, we present the first large-scale empirical study of software quality in AV perception repositories, systematically analyzing 178 unique models from the KITTI and NuScenes 3D Object Detection leaderboards. Using static analysis tools (Pylint, Bandit, and Radon), we evaluated code errors, security vulnerabilities, maintainability, and development practices. Our findings reveal that only 7.3% of the studied repositories meet basic production-readiness criteria, defined as having zero critical errors and no high-severity security vulnerabilities. Security issues are highly concentrated, with the top five issues responsible for almost 80% of occurrences, which prompted us to develop a set of actionable guidelines to prevent them. Additionally, the adoption of Continuous Integration/Continuous Deployment (CI/CD) pipelines correlated with better code maintainability. Our findings highlight that leaderboard performance does not reflect production readiness and that targeted interventions could substantially improve the quality and safety of AV perception code.
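The two headline measurements above can be sketched in code. The snippet below is an illustrative reconstruction, not the paper's actual pipeline: it applies the stated production-readiness criterion (zero critical errors, no high-severity vulnerabilities) to already-parsed tool output, and computes how concentrated security findings are among the top-k issue categories. The dictionary keys follow the JSON formats that Pylint (`"type"` per message) and Bandit (`"issue_severity"` per result) emit, but all example data is hypothetical.

```python
from collections import Counter


def is_production_ready(pylint_messages, bandit_results):
    """Apply the paper's stated criterion: no Pylint errors/fatals and
    no HIGH-severity Bandit findings.

    pylint_messages: parsed `pylint --output-format=json` messages.
    bandit_results:  the "results" list from `bandit -f json` output.
    """
    critical = [m for m in pylint_messages if m["type"] in ("error", "fatal")]
    high_sev = [r for r in bandit_results if r["issue_severity"] == "HIGH"]
    return not critical and not high_sev


def top_k_share(issue_ids, k=5):
    """Fraction of all occurrences accounted for by the k most common
    issue categories (the paper reports ~80% for the top five)."""
    counts = Counter(issue_ids)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return sum(n for _, n in counts.most_common(k)) / total


# Hypothetical example: one repo with a fatal Pylint message fails the
# readiness check; issue IDs here are illustrative Bandit test IDs.
ready = is_production_ready([{"type": "fatal"}], [])
share = top_k_share(["B301", "B301", "B603", "B607", "B101"], k=2)
```

In practice one would generate the inputs by running the tools per repository (e.g. `pylint src/ --output-format=json` and `bandit -r src/ -f json`) and feeding the parsed JSON into these checks.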