🤖 AI Summary
This study addresses the dual challenges of user privacy leakage and degraded recommendation performance in personalized advertising. We propose a privacy-preserving recommendation framework integrating federated learning with differential privacy. Methodologically, it features: (1) a distributed feature extraction mechanism that minimizes raw data transmission; (2) a dynamic privacy budget allocation strategy that adaptively adjusts noise injection intensity based on client-specific data quality and contribution; and (3) robust model aggregation combined with secure multi-party computation, augmented by a lightweight anomaly detection module to enhance resilience against malicious clients. Experimental results demonstrate that, under strict ε-differential privacy guarantees (ε ≤ 4), the framework improves recommendation accuracy by 12.6% over baselines while reducing communication overhead by 37%. It thus establishes a new paradigm for advertising recommendation in privacy-sensitive settings, balancing security, effectiveness, and practicality.
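The dynamic privacy budget allocation and noise injection described in point (2) can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual algorithm: `allocate_budgets` splits a total budget in proportion to a hypothetical per-client quality score, and each client then clips its update and adds Laplace noise calibrated to its share (a higher budget means less noise).

```python
import numpy as np

def allocate_budgets(total_eps, quality_scores):
    """Split a total privacy budget across clients in proportion to a
    (hypothetical) data-quality/contribution score per client."""
    q = np.asarray(quality_scores, dtype=float)
    return total_eps * q / q.sum()

def clip_and_noise(update, eps, clip=1.0):
    """Clip the client update to bound its L1 sensitivity, then add
    Laplace noise with scale = sensitivity / eps (standard Laplace
    mechanism); a larger eps yields a smaller noise scale."""
    norm = np.linalg.norm(update, ord=1)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    scale = clip / eps
    return clipped + np.random.laplace(0.0, scale, size=clipped.shape)
```

Under this scheme, a client with richer or higher-quality data receives a larger slice of the ε ≤ 4 budget and therefore perturbs its update less, which is one plausible way the framework could trade noise intensity against per-client contribution.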
📝 Abstract
To mitigate privacy leakage and degraded recommendation performance in personalized advertising, this paper proposes a framework that integrates federated learning and differential privacy. The system combines distributed feature extraction, dynamic privacy budget allocation, and robust model aggregation to balance model accuracy, communication overhead, and privacy protection. Secure multi-party computation and anomaly detection mechanisms further enhance system resilience against malicious attacks. Experimental results demonstrate that the framework achieves dual optimization of recommendation accuracy and system efficiency while ensuring privacy, providing both a practical solution and a theoretical foundation for applying privacy protection technologies in advertising recommendation.
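The robust aggregation and anomaly detection mentioned above could take many forms; one minimal sketch, assuming a norm-based outlier filter followed by a coordinate-wise trimmed mean (both standard robust-aggregation techniques, not necessarily the paper's exact choice):

```python
import numpy as np

def filter_anomalies(updates, z_thresh=3.0):
    """Drop clients whose update norm deviates strongly from the
    median norm, using the median absolute deviation (MAD) as a
    robust spread estimate. A crude stand-in for the paper's
    anomaly detection module."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) + 1e-12
    keep = np.abs(norms - med) / mad <= z_thresh
    return [u for u, k in zip(updates, keep) if k]

def trimmed_mean_aggregate(updates, trim_frac=0.2):
    """Coordinate-wise trimmed mean: sort each coordinate across
    clients and discard the top and bottom trim fraction before
    averaging, limiting the pull of any single malicious client."""
    stacked = np.stack(updates)
    k = int(trim_frac * len(updates))
    ordered = np.sort(stacked, axis=0)
    if k > 0:
        ordered = ordered[k:-k]
    return ordered.mean(axis=0)
```

Either defense alone is weak (a single huge update skews a plain mean; a cleverly scaled one can slip past a norm filter), which is presumably why the framework layers anomaly detection on top of robust aggregation rather than relying on one mechanism.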