🤖 AI Summary
This paper addresses the security and privacy challenge of membership inference, that is, determining whether a given sample was part of a model's training set, with the goal of making machine learning models auditable. The authors propose aMINT (Active Membership Inference Test), a method that embeds membership-detection capability directly into the training process via multi-task learning: the primary task (e.g., classification) is jointly optimized with an auxiliary task that identifies training samples from intermediate-layer activation maps. Crucially, aMINT requires no post-hoc processing or additional queries; it leverages the model's internal representations to achieve robust, high-accuracy inference. The approach achieves over 80% accuracy in detecting training data across 5 public benchmarks and diverse architectures ranging from MobileNet to Vision Transformers, significantly outperforming prior state-of-the-art methods. The key contribution is unifying auditability with model training, enabling inherently auditable models without compromising primary-task performance.
📝 Abstract
Active Membership Inference Test (aMINT) is a method designed to detect whether given data were used during the training of machine learning models. In Active MINT, we propose a novel multi-task learning process that simultaneously trains two models: the original, or Audited Model, and a secondary model, referred to as the MINT Model, responsible for identifying the data used to train the Audited Model. This multi-task learning approach is designed to incorporate the auditability of the model as an optimization objective during the training of neural networks. The proposed approach feeds intermediate activation maps as inputs to the MINT layers, which are trained to enhance the detection of training data. We present results for a wide range of neural networks, from lighter architectures such as MobileNet to more complex ones such as Vision Transformers, evaluated on 5 public benchmarks. Our proposed Active MINT achieves over 80% accuracy in detecting whether given data were used for training, significantly outperforming previous approaches in the literature. Our aMINT and related methodological developments contribute to increasing transparency in AI models, facilitating stronger safeguards in AI deployments for proper security, privacy, and copyright protection.
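The core mechanism described above, jointly optimizing a primary classification loss with an auxiliary membership loss computed on intermediate activations, can be sketched in a toy form. This is a minimal NumPy illustration of the general idea, not the paper's implementation: the network sizes, the single linear MINT head, the loss weight `lam`, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (sizes and data are illustrative assumptions, not from the paper).
D, H, C, n = 8, 64, 3, 32
X_mem = rng.normal(size=(n, D))            # member samples (used for training)
y_mem = rng.integers(0, C, size=n)
X_non = rng.normal(size=(n, D))            # non-member (held-out) samples

# Audited Model: one hidden layer; MINT head: linear probe on the hidden activations.
W1 = rng.normal(scale=0.1, size=(D, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, C)); b2 = np.zeros(C)
w_mint = np.zeros(H); b_mint = 0.0

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)       # intermediate activation map
    return h, h @ W2 + b2                  # activations, class logits

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lam, lr = 0.5, 0.05                        # MINT loss weight, learning rate
for _ in range(400):
    # Primary task: cross-entropy on member data.
    h, logits = forward(X_mem)
    p = softmax(logits)
    g_logits = p.copy(); g_logits[np.arange(n), y_mem] -= 1.0
    g_logits /= n
    dW2 = h.T @ g_logits; db2 = g_logits.sum(0)
    dh_cls = g_logits @ W2.T

    # Auxiliary MINT task: member vs non-member from activations.
    h_non, _ = forward(X_non)
    h_all = np.vstack([h, h_non])
    m = np.concatenate([np.ones(n), np.zeros(n)])   # membership labels
    s = 1.0 / (1.0 + np.exp(-(h_all @ w_mint + b_mint)))
    g_s = (s - m) / (2 * n)
    dw_mint = h_all.T @ g_s; db_mint = g_s.sum()
    dh_mint = np.outer(g_s, w_mint)

    # Joint backprop: the membership objective also shapes the shared layer.
    dh = dh_cls + lam * dh_mint[:n]
    dW1 = X_mem.T @ (dh * (h > 0)) + lam * X_non.T @ (dh_mint[n:] * (h_non > 0))
    db1 = (dh * (h > 0)).sum(0) + lam * (dh_mint[n:] * (h_non > 0)).sum(0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    w_mint -= lr * lam * dw_mint; b_mint -= lr * lam * db_mint

# Membership-detection accuracy of the trained MINT head.
h_m, _ = forward(X_mem); h_n, _ = forward(X_non)
scores = np.concatenate([h_m @ w_mint + b_mint, h_n @ w_mint + b_mint])
labels = np.concatenate([np.ones(n), np.zeros(n)])
mint_acc = ((scores > 0).astype(float) == labels).mean()
print(f"MINT head membership accuracy: {mint_acc:.2f}")
```

Because the membership gradient flows into the shared weights `W1`, the audited model's internal representations are actively shaped to make training data detectable; the weight `lam` trades off this auditability objective against primary-task performance.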