🤖 AI Summary
This paper addresses the practical challenge of implementing privacy protection in software development—particularly in settings where developers lack cybersecurity training or sufficient privacy awareness—by systematically reviewing the state of “Privacy as Code” (PaC) research. Through a rapid literature review and thematic analysis, the authors identify two critical bottlenecks: (1) weak capabilities in automated detection of privacy properties at the source-code level and in privacy-preserving code generation; and (2) a widespread absence of rigorous performance evaluation and empirical validation of developer usability. The authors characterize this maturity bottleneck in PaC research and propose three evolutionary pathways: (i) conducting empirical studies in real-world development contexts; (ii) establishing standardized privacy benchmark datasets; and (iii) integrating generative AI techniques. These findings yield three actionable research agendas, providing theoretical foundations and methodological guidance to advance PaC from conceptual frameworks toward scalable, engineering-ready practice.
📝 Abstract
Privacy and security are central to the design of information systems endowed with sound data protection and cyber resilience capabilities. Still, developers often struggle to incorporate these properties into software projects, as they either lack proper cybersecurity training or do not consider them a priority. Prior work has sought to support privacy and security engineering activities through threat modeling methods for scrutinizing flaws in system architectures. Moreover, several techniques for the automatic identification of vulnerabilities and the generation of secure code implementations have been proposed in the current literature. In contrast, such as-code approaches remain under-investigated in the privacy domain, with little work elaborating on (i) the automatic detection of privacy properties in source code or (ii) the generation of privacy-friendly code. In this work, we seek to characterize the current research landscape of Privacy as Code (PaC) methods and tools by conducting a rapid literature review. Our results suggest that PaC research is in its infancy, especially regarding the performance evaluation and usability assessment of the existing approaches. Based on these findings, we outline and discuss prospective research directions concerning empirical studies with software practitioners, the curation of benchmark datasets, and the role of generative AI technologies.