AI-based Programming Assistants for Privacy-related Code Generation: The Developers' Experience

📅 2025-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies critical trust bottlenecks hindering AI programming assistants’ effectiveness in generating privacy-compliant code—namely, developers’ misinterpretation of privacy requirements, insufficient knowledge of data protection regulations (e.g., GDPR, CCPA), and the unverifiability of model outputs. Drawing on a mixed-methods investigation involving 51 developers worldwide—including surveys, in-depth interviews, and empirical thematic analysis—it provides the first systematic characterization of the practice gap in AI-assisted privacy-aware coding. The contributions are twofold: (1) seven empirically grounded, actionable usage guidelines for developers, addressing prompt engineering, human-in-the-loop review, and context augmentation; and (2) four high-priority research directions—privacy-aware modeling, regulation-grounded explainable reasoning, verifiable code generation, and cross-jurisdictional compliance adaptation. Collectively, this work establishes an evidence-based foundation and operational framework to enhance the trustworthiness and practical utility of AI programming assistants in data protection contexts.
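Two of the summarized guidelines, context augmentation and human-in-the-loop review, can be sketched in code. The snippet below is a minimal illustration, not the paper's actual guidelines: the regulation excerpt, function names, and risk-pattern checklist are all hypothetical assumptions chosen for the example.

```python
# Illustrative sketch (hypothetical, not from the paper):
# (1) context augmentation: embed explicit regulatory context in the prompt
#     sent to an AI programming assistant;
# (2) human-in-the-loop review: surface risky patterns in generated code
#     that a developer must resolve before accepting it.

GDPR_CONTEXT = (
    "Context: The code must comply with GDPR Art. 5 (data minimisation) "
    "and Art. 17 (right to erasure)."
)

def build_privacy_prompt(task: str, regulation_context: str = GDPR_CONTEXT) -> str:
    """Augment a coding task with explicit regulatory context."""
    return (
        f"{regulation_context}\n"
        f"Task: {task}\n"
        "Only collect fields strictly needed for the task."
    )

# Example checklist mapping code patterns to review findings (assumed set).
RISK_PATTERNS = {
    "logging.info(user": "personal data may leak into logs",
    "plaintext_password": "credentials stored without hashing",
    "SELECT *": "over-collection conflicts with data minimisation",
}

def review_generated_code(code: str) -> list[str]:
    """Return findings a human reviewer must resolve; empty list means none found."""
    return [reason for pattern, reason in RISK_PATTERNS.items() if pattern in code]

prompt = build_privacy_prompt("Store a newsletter signup email address")
findings = review_generated_code('cur.execute("SELECT * FROM users")')
```

A simple string-matching checklist like this is only a reviewer's aid; the study's point is that a developer, not the assistant, remains accountable for verifying that generated code actually satisfies the privacy requirement.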

📝 Abstract
With the popularising of generative AI, the existence of AI-based programming assistants for developers is no surprise. Developers increasingly use them for their work, including generating code to fulfil the data protection requirements (privacy) of the apps they build. We wanted to know if the reality is the same as expectations of AI-based programming assistants when trying to fulfil software privacy requirements, and the challenges developers face when using AI-based programming assistants and how these can be improved. To this end, we conducted a survey with 51 developers worldwide. We found that AI-based programming assistants need to be improved in order for developers to better trust them with generating code that ensures privacy. In this paper, we provide some practical recommendations for developers to consider following when using AI-based programming assistants for privacy-related code development, and some key further research directions.
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI-based programming assistants for privacy code generation.
Identifying challenges developers face with AI assistants in privacy tasks.
Providing recommendations to improve trust in AI-generated privacy code.
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-based assistants for privacy code generation
Surveyed 51 developers on AI assistant challenges
Proposed improvements for trustworthy privacy coding
Kashumi Madampe
Research Fellow at Monash University
SE4AI, AI4SE, developer productivity, product management, privacy
John Grundy
HumaniSE Lab, Department of Software Systems and Cybersecurity, Monash University, Melbourne, Australia
Nalin Arachchilage
School of Computing Technologies, RMIT University, Melbourne, Australia