🤖 AI Summary
Existing anonymization methods for Just-In-Time (JIT) defect prediction neglect contextual dependencies among software metrics, so they cannot preserve privacy and predictive utility simultaneously, leaving software analytics data exposed to privacy leakage.
Method: We propose a clustering-guided, large language model (LLM)-based anonymization framework: commits are first grouped into semantically coherent feature clusters; an LLM then adaptively generates alpha-beta ratios and churn-mixture distribution parameters conditioned on each cluster's structure, enabling context-aware, high-fidelity sanitization.
Contribution/Results: This work introduces the first LLM-driven, cluster-adaptive anonymization paradigm, integrating semantic feature clustering, LLM-based contextual reasoning, statistical distribution modeling, and the IPR (Increased Privacy Ratio) privacy evaluation framework. Evaluated on six open-source projects, our method achieves ≥80% IPR (Privacy Level 2), improving privacy by 18–25% over four state-of-the-art graph-anonymization baselines, while maintaining near-equivalent F1 scores.
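The IPR evaluation can be illustrated with a toy query-based check. This sketch assumes the query-breach formulation used in prior privacy-preserving defect-prediction work (a breach occurs when a query against the sanitized data pins down the same single sensitive value that the original data would reveal); the row encoding, the `ipr` helper, and the example queries are invented for illustration and are not the paper's implementation:

```python
# Toy Increased Privacy Ratio (IPR) check -- an assumed query-breach
# formulation, not the paper's exact evaluation code.
def ipr(original, sanitized, queries):
    """original/sanitized: lists of (feature_dict, sensitive_value) rows.
    queries: attacker queries, each a dict of feature constraints."""
    def sensitive_values(rows, query):
        # All sensitive values consistent with the query on this dataset.
        return {s for f, s in rows
                if all(f.get(k) == v for k, v in query.items())}

    breaches = 0
    for q in queries:
        true_vals = sensitive_values(original, q)
        leaked = sensitive_values(sanitized, q)
        # Breach: the sanitized data narrows the query down to exactly the
        # same single sensitive value as the original data does.
        if len(true_vals) == 1 and leaked == true_vals:
            breaches += 1
    return 100.0 * (1 - breaches / len(queries))

# Hypothetical example: after sanitization, the "high churn" query no longer
# uniquely identifies the buggy commit, so no query breaches and IPR = 100.
original = [({"churn": "high"}, "buggy"), ({"churn": "low"}, "clean")]
sanitized = [({"churn": "high"}, "buggy"), ({"churn": "high"}, "clean")]
queries = [{"churn": "high"}, {"churn": "low"}]
print(ipr(original, sanitized, queries))  # → 100.0
```

Under this formulation, IPR ≥ 80% means at most 20% of attacker queries remain uniquely identifying, which is the threshold the paper reports as Privacy Level 2.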
📝 Abstract
The increasing use of machine learning (ML) for Just-In-Time (JIT) defect prediction raises concerns about privacy leakage from software analytics data. Existing anonymization methods, such as tabular transformations and graph perturbations, often overlook contextual dependencies among software metrics, leading to suboptimal privacy-utility tradeoffs. Leveraging the contextual reasoning of Large Language Models (LLMs), we propose a cluster-guided anonymization technique that preserves contextual and statistical relationships within JIT datasets. Our method groups commits into feature-based clusters and employs an LLM to generate context-aware parameter configurations for each commit cluster, defining alpha-beta ratios and churn mixture distributions used for anonymization. Our evaluation on six projects (Cassandra, Flink, Groovy, Ignite, OpenStack, and Qt) shows that our LLM-based approach achieves privacy level 2 (IPR >= 80 percent), improving privacy by 18 to 25 percent over four state-of-the-art graph-based anonymization baselines while maintaining comparable F1 scores. Our results demonstrate that LLMs can act as adaptive anonymization engines when provided with cluster-specific statistical information about similar data points, enabling context-sensitive and privacy-preserving software analytics without compromising predictive accuracy.