HoneyImage: Verifiable, Harmless, and Stealthy Dataset Ownership Verification for Image Models

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the copyright protection challenge arising from unauthorized use of image datasets in AI model training. To this end, we propose a verifiable, benign, and imperceptible data ownership verification mechanism. Our method selectively perturbs the embedding space of hard samples to implicitly embed an imperceptible yet detectable watermark, enabling lightweight verification via model output analysis. The key innovation lies in breaking the traditional trade-off between watermark security and model performance: our approach achieves highly robust ownership authentication without compromising data integrity or downstream task accuracy. Extensive experiments across four mainstream image benchmark datasets and multiple model architectures demonstrate verification accuracy exceeding 95%, classification accuracy degradation below 0.3%, and complete perceptual invisibility of the embedded watermark.
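The summary above describes two ingredients: picking "hard" samples and embedding an imperceptible, key-derived perturbation into them. The paper's exact selection criterion and perturbation scheme are not detailed here, so the following is a minimal sketch under assumed choices: hardness measured by the margin between the top-2 predicted class probabilities, and the trace being a small bounded noise pattern seeded by the owner's secret key. All function names and parameters (`select_hard_samples`, `embed_trace`, `epsilon`, `seed`) are illustrative, not from the paper.

```python
import numpy as np

def select_hard_samples(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k hardest samples, where hardness is an
    assumed proxy: the margin between the top-2 class probabilities
    (a small margin means the model is uncertain, i.e. the sample is hard)."""
    sorted_probs = np.sort(probs, axis=1)
    margins = sorted_probs[:, -1] - sorted_probs[:, -2]
    return np.argsort(margins)[:k]

def embed_trace(images: np.ndarray, idx: np.ndarray,
                epsilon: float = 2.0 / 255, seed: int = 0) -> np.ndarray:
    """Add a small key-derived perturbation (bounded by epsilon) to the
    selected samples; `seed` stands in for the owner's secret key."""
    rng = np.random.default_rng(seed)
    marked = images.copy()
    pattern = rng.uniform(-epsilon, epsilon, size=images.shape[1:])
    marked[idx] = np.clip(marked[idx] + pattern, 0.0, 1.0)
    return marked
```

With `epsilon = 2/255`, the per-pixel change is below typical quantization noise, which is one common way such traces stay perceptually invisible; whether this matches the paper's actual perturbation budget is an assumption.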

📝 Abstract
Image-based AI models are increasingly deployed across a wide range of domains, including healthcare, security, and consumer applications. However, many image datasets carry sensitive or proprietary content, raising critical concerns about unauthorized data usage. Data owners therefore need reliable mechanisms to verify whether their proprietary data has been misused to train third-party models. Existing solutions, such as backdoor watermarking and membership inference, face inherent trade-offs between verification effectiveness and preservation of data integrity. In this work, we propose HoneyImage, a novel method for dataset ownership verification in image recognition models. HoneyImage selectively modifies a small number of hard samples to embed imperceptible yet verifiable traces, enabling reliable ownership verification while maintaining dataset integrity. Extensive experiments across four benchmark datasets and multiple model architectures show that HoneyImage consistently achieves strong verification accuracy with minimal impact on downstream performance while remaining imperceptible. The proposed HoneyImage method could provide data owners with a practical mechanism to protect ownership over valuable image datasets, encouraging safe sharing and unlocking the full transformative potential of data-driven AI.
Problem

Research questions and friction points this paper is trying to address.

Verify unauthorized usage of proprietary image datasets
Balance verification effectiveness and data integrity
Embed imperceptible traces for ownership verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modifies hard samples for verifiable traces
Ensures dataset integrity with minimal impact
Achieves strong verification accuracy stealthily
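The "lightweight verification via model output analysis" mentioned above can be framed as a hypothesis test: query the suspect model on the marked probe samples and ask whether it agrees with the trace-induced behavior far more often than chance. The sketch below assumes a one-sided binomial test with an assumed chance rate `p0` (e.g. 1/num_classes); the paper's actual statistic and threshold may differ, and all names here are hypothetical.

```python
from math import comb

def binomial_p_value(hits: int, n: int, p0: float = 0.1) -> float:
    """One-sided p-value: probability of observing >= hits matches out of
    n probes if the suspect model behaves like a chance baseline Bin(n, p0)."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(hits, n + 1))

def verify_ownership(target_preds, expected_labels,
                     p0: float = 0.1, alpha: float = 0.01) -> bool:
    """Claim the dataset was used if agreement with the expected (trace-induced)
    labels is statistically significant at level alpha."""
    hits = sum(int(p == y) for p, y in zip(target_preds, expected_labels))
    return binomial_p_value(hits, len(expected_labels), p0) < alpha
```

A model trained without the marked data should match the expected labels only at roughly the chance rate, so the test stays below significance; a model trained on the marked data matches far more often and triggers verification.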
Zhihao Zhu
University of Science and Technology of China
Machine Learning Privacy · Recommender System · Graph Neural Network
Jiale Han
The Hong Kong University of Science and Technology
Natural Language Processing
Yi Yang
Department of Information Systems, Business Statistics and Operations Management (ISOM), Hong Kong University of Science and Technology (HKUST)