🤖 AI Summary
Research Software Engineers (RSEs) face distinct challenges in ensuring code quality and maintainability, yet peer code review practices remain underexplored and inadequately supported for this community.
Method: Addressing this gap, we designed and deployed a customized survey (N=61) grounded in a comparative analytical framework to systematically examine RSEs’ current review practices, key barriers, and improvement opportunities—focusing on motivations, process adaptability, tooling support, and cross-disciplinary collaboration.
Contribution/Results: We identify three critical enablers of effective review adoption: lightweight process integration, domain-aware tooling, and RSE-specific training. Building on these insights, we propose a structured, ecology-oriented code review optimization framework tailored to research software. Empirical findings confirm that well-supported peer review significantly enhances the sustainability of research software, providing evidence-based guidance for RSE practice and policy development.
📝 Abstract
Background: Research software is crucial for enabling research discoveries and supporting data analysis, simulation, and interpretation across domains. However, evolving requirements, complex inputs, and legacy dependencies hinder the software quality and maintainability. While peer code review can improve software quality, its adoption by research software engineers (RSEs) remains unexplored. Aims: This study explores RSE perspectives on peer code review, focusing on their practices, challenges, and potential improvements. Building on prior work, it aims to uncover how RSEs insights differ from those of other research software developers and identify factors that can enhance code review adoption in this domain. Method: We surveyed RSEs to gather insights into their perspectives on peer code review. The survey design aligned with previous research to enable comparative analysis while including additional questions tailored to RSEs. Results: We received 61 valid responses from the survey. The findings align with prior research while uncovering unique insights about the challenges and practices of RSEs compared to broader developer groups. Conclusions: Peer code review is vital in improving research software's quality, maintainability, and reliability. Despite the unique challenges RSEs face, addressing these through structured processes, improved tools, and targeted training can enhance peer review adoption and effectiveness in research software development.