Cluster-guided LLM-Based Anonymization of Software Analytics Data: Studying Privacy-Utility Trade-offs in JIT Defect Prediction

📅 2025-12-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing anonymization methods for Just-In-Time (JIT) defect prediction fail to preserve both privacy and predictive utility because they neglect contextual dependencies among software metrics, risking privacy leakage from software analytics data. Method: a cluster-guided, large language model (LLM)-based anonymization framework: commits are first clustered by their software features; an LLM then adaptively generates alpha-beta ratios and churn-mixture distribution parameters conditioned on the cluster structure, enabling context-aware, high-fidelity sanitization. Contribution/Results: this work introduces the first LLM-driven, cluster-adaptive anonymization paradigm, integrating feature-based clustering, LLM-based contextual reasoning, statistical distribution modeling, and the Increased Privacy Ratio (IPR) privacy evaluation framework. Evaluated on six open-source projects, the method achieves ≥80% IPR (Privacy Level 2), improving privacy by 18-25% over four state-of-the-art graph-anonymization baselines while maintaining near-equivalent F1 scores.

📝 Abstract
The increasing use of machine learning (ML) for Just-In-Time (JIT) defect prediction raises concerns about privacy leakage from software analytics data. Existing anonymization methods, such as tabular transformations and graph perturbations, often overlook contextual dependencies among software metrics, leading to suboptimal privacy-utility tradeoffs. Leveraging the contextual reasoning of Large Language Models (LLMs), we propose a cluster-guided anonymization technique that preserves contextual and statistical relationships within JIT datasets. Our method groups commits into feature-based clusters and employs an LLM to generate context-aware parameter configurations for each commit cluster, defining alpha-beta ratios and churn mixture distributions used for anonymization. Our evaluation on six projects (Cassandra, Flink, Groovy, Ignite, OpenStack, and Qt) shows that our LLM-based approach achieves privacy level 2 (IPR >= 80 percent), improving privacy by 18 to 25 percent over four state-of-the-art graph-based anonymization baselines while maintaining comparable F1 scores. Our results demonstrate that LLMs can act as adaptive anonymization engines when provided with cluster-specific statistical information about similar data points, enabling context-sensitive and privacy-preserving software analytics without compromising predictive accuracy.
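The pipeline described in the abstract, cluster commits, summarize each cluster statistically, let an LLM propose per-cluster anonymization parameters (alpha-beta ratios and a churn mixture), then sanitize each commit, can be sketched as below. This is a minimal illustration, not the paper's implementation: the quantile bucketing, the `llm_propose_params` heuristic standing in for the actual LLM call, and the specific mixing formula are all assumptions for the sketch.

```python
import random
import statistics

def cluster_commits(commits, n_clusters=3):
    # Stand-in for the paper's feature-based clustering: quantile
    # bucketing on total churn (assumed, for illustration only).
    ranked = sorted(commits, key=lambda c: c["churn"])
    size = max(1, len(ranked) // n_clusters)
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

def llm_propose_params(cluster_stats):
    # Hypothetical stand-in for the LLM call: given per-cluster
    # statistics, return an alpha-beta mixing ratio and a noise scale.
    # A fixed heuristic replaces the model's context-aware reasoning.
    spread = cluster_stats["stdev"] / (cluster_stats["mean"] + 1e-9)
    alpha = max(0.5, 1.0 - spread)  # keep more raw signal in tight clusters
    return {"alpha": alpha, "beta": 1.0 - alpha,
            "noise_scale": 0.1 * cluster_stats["stdev"]}

def anonymize(commits, rng=None):
    # Mix each commit's churn toward its cluster mean and add Gaussian
    # noise scaled by the (assumed) LLM-proposed parameters.
    rng = rng or random.Random(0)
    out = []
    for cluster in cluster_commits(commits):
        churns = [c["churn"] for c in cluster]
        stats = {"mean": statistics.mean(churns),
                 "stdev": statistics.pstdev(churns)}
        p = llm_propose_params(stats)
        for c in cluster:
            noisy = (p["alpha"] * c["churn"]
                     + p["beta"] * stats["mean"]
                     + rng.gauss(0, p["noise_scale"]))
            out.append({**c, "churn": max(0.0, noisy)})
    return out
```

Keeping the mix cluster-local is the key idea: values are perturbed toward statistics of *similar* commits, so cross-metric relationships within a cluster survive better than under global noise.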
Problem

Research questions and friction points this paper is trying to address.

Privacy leakage from software analytics data used for JIT defect prediction
Existing anonymization methods overlook contextual dependencies among software metrics
Suboptimal privacy-utility trade-offs in current tabular and graph-based approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-guided anonymization with feature-based clustering
Context-aware parameter generation for commit clusters
Balancing privacy and utility in JIT defect prediction
Maaz Khan
SFG Lab, Lahore University of Management Sciences, Lahore, Pakistan
Gul Sher Khan
SFG Lab, Lahore University of Management Sciences, Lahore, Pakistan
Ahsan Raza
Kyung Hee University
Haptics and Virtual Reality
Pir Sami Ullah
National University of Computer and Emerging Sciences (FAST), Islamabad, Pakistan
Abdul Ali Bangash
Assistant Professor at LUMS
SE4AI, mining software repositories, mixed methods