🤖 AI Summary
This paper addresses the copyright-protection challenge arising from the unauthorized use of image datasets in AI model training. To this end, we propose a benign, imperceptible, and verifiable mechanism for data ownership verification. Our method selectively perturbs hard samples in the embedding space to implicitly embed an imperceptible yet detectable watermark, enabling lightweight verification through analysis of a suspect model's outputs. The key innovation lies in breaking the traditional trade-off between watermark security and model performance: our approach achieves highly robust ownership authentication without compromising data integrity or downstream task accuracy. Extensive experiments across four mainstream image benchmark datasets and multiple model architectures demonstrate verification accuracy exceeding 95%, classification accuracy degradation below 0.3%, and complete perceptual invisibility of the embedded watermark.
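A minimal sketch of the selection-and-embedding step is given below. It is illustrative rather than the paper's implementation: it assumes a margin-based hardness score and a key-seeded, bounded pixel-space perturbation as a simple stand-in for the embedding-space perturbation described above; the names `select_hard_samples`, `embed_watermark`, and the budget `eps` are hypothetical.

```python
import numpy as np
import torch
import torch.nn.functional as F

def select_hard_samples(model, images, labels, k):
    """Return indices of the k hardest samples, ranked by softmax margin
    (true-class confidence minus runner-up confidence). A margin-based
    hardness score is an assumption; the paper's criterion may differ."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(images), dim=1)
    idx = torch.arange(len(labels))
    true_conf = probs[idx, labels]
    masked = probs.clone()
    masked[idx, labels] = -1.0          # exclude true class from runner-up
    margin = true_conf - masked.max(dim=1).values
    return torch.argsort(margin)[:k]    # smallest margin = hardest

def embed_watermark(images, hard_idx, key, eps=2 / 255):
    """Add a key-seeded, eps-bounded perturbation to the selected hard
    samples. A pixel-space sign perturbation is a simple stand-in for the
    embedding-space perturbation the paper describes."""
    rng = np.random.default_rng(key)    # owner's secret key seeds the pattern
    marked = images.clone()
    for i in hard_idx:
        delta = torch.from_numpy(
            rng.standard_normal(images[i].shape).astype(np.float32))
        marked[i] = (images[i] + eps * delta.sign()).clamp(0.0, 1.0)
    return marked
```

Only the perturbed copies of the hard samples are released with the dataset; the owner keeps the key and the clean originals for later verification.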
📝 Abstract
Image-based AI models are increasingly deployed across a wide range of domains, including healthcare, security, and consumer applications. However, many image datasets carry sensitive or proprietary content, raising critical concerns about unauthorized data usage. Data owners therefore need reliable mechanisms to verify whether their proprietary data has been misused to train third-party models. Existing solutions, such as backdoor watermarking and membership inference, face inherent trade-offs between verification effectiveness and preservation of data integrity. In this work, we propose HoneyImage, a novel method for dataset ownership verification in image recognition models. HoneyImage selectively modifies a small number of hard samples to embed imperceptible yet verifiable traces, enabling reliable ownership verification while maintaining dataset integrity. Extensive experiments across four benchmark datasets and multiple model architectures show that HoneyImage consistently achieves strong verification accuracy with minimal impact on downstream performance while remaining imperceptible. The proposed HoneyImage method could provide data owners with a practical mechanism to protect ownership of valuable image datasets, encouraging safe sharing and unlocking the full transformative potential of data-driven AI.
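To make the verification step concrete, the sketch below tests whether a suspect model is systematically more confident on the watermarked hard samples than on their clean counterparts, which only a model trained on the protected dataset should be. The one-sided paired t-test and the function `verify_ownership` are assumptions for illustration, not HoneyImage's published procedure.

```python
import torch
import torch.nn.functional as F
from scipy import stats

def verify_ownership(suspect_model, marked_images, clean_images, labels,
                     alpha=0.01):
    """Black-box ownership check over paired (marked, clean) hard samples.
    A model trained on the protected dataset should show a significant
    confidence gap in favor of the marked variants; an independent model
    should not. A one-sided paired t-test is assumed here."""
    suspect_model.eval()
    with torch.no_grad():
        probs_marked = F.softmax(suspect_model(marked_images), dim=1)
        probs_clean = F.softmax(suspect_model(clean_images), dim=1)
    idx = torch.arange(len(labels))
    conf_marked = probs_marked[idx, labels].numpy()
    conf_clean = probs_clean[idx, labels].numpy()
    _, p_value = stats.ttest_rel(conf_marked, conf_clean,
                                 alternative="greater")
    return p_value < alpha, p_value    # (dataset was used?, significance)
```

Note that this check requires only query access to the suspect model's outputs, consistent with the lightweight, output-analysis verification both sections describe.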