Catch-Only-One: Non-Transferable Examples for Model-Specific Authorization

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of data authorization: ensuring that data remains effective only for authorized models and ineffective for unauthorized ones, without controlling the model training process. To this end, we propose Non-Transferable Examples (NEs). Methodologically, NEs perform a training-free, data-agnostic recoding of inputs within model-specific low-sensitivity subspaces, combining subspace steganography with spectral-difference binding. Theoretically, we bound the utility loss for the authorized model and, via the Hoffman-Wielandt inequality, link the performance degradation of unauthorized models to the spectral divergence between models, so that utility is preserved under authorization while inference capability is suppressed elsewhere. Experiments demonstrate near-lossless performance on mainstream vision and multimodal models under authorization, whereas unauthorized models suffer severe performance collapse even when they attempt reconstruction. To our knowledge, this is the first approach to achieve robust, input-level, training-free, and data-agnostic control over data usage rights.
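For reference, the Hoffman-Wielandt inequality cited above states that the eigenvalues of two normal matrices can be matched so that their total squared deviation is controlled by the Frobenius distance between the matrices. How the paper instantiates the two matrices (presumably as model-specific sensitivity or spectral quantities) goes beyond what this listing shows.

```latex
% Hoffman-Wielandt: for normal A, B \in \mathbb{C}^{n \times n} there exists
% a permutation \pi of \{1, \dots, n\} matching eigenvalues such that
\sum_{i=1}^{n} \left| \lambda_i(A) - \lambda_{\pi(i)}(B) \right|^2
    \;\le\; \left\| A - B \right\|_F^2 .
```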

📝 Abstract
Recent AI regulations call for data that remain useful for innovation while resistant to misuse, balancing utility with protection at the model level. Existing approaches either perturb data to make it unlearnable or retrain models to suppress transfer, but neither governs inference by unknown models, and both typically require control over training. We propose non-transferable examples (NEs), a training-free and data-agnostic input-side usage-control mechanism. We recode inputs within a model-specific low-sensitivity subspace, preserving outputs for the authorized model while reducing performance on unauthorized models through subspace misalignment. We establish formal bounds that guarantee utility for the authorized model and quantify deviation for unauthorized ones, with the Hoffman-Wielandt inequality linking degradation to spectral differences. Empirically, NEs retain performance on diverse vision backbones and state-of-the-art vision-language models under common preprocessing, whereas non-target models collapse even with reconstruction attempts. These results establish NEs as a practical means to preserve intended data utility while preventing unauthorized exploitation. Our project is available at https://trusted-system-lab.github.io/model-specificity
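To make the recoding idea concrete, here is a minimal, hypothetical sketch: it perturbs an image only along the approximate null space of the authorized model's input-output Jacobian, so that model's logits are unchanged to first order while the input itself moves. The function name, the Jacobian-based subspace estimate, and the random in-subspace direction are illustrative assumptions; the paper's actual construction (with steganographic encoding and robustness to common preprocessing) is more involved.

```python
import torch

def recode(model, x, alpha=0.1, seed=0):
    """Hypothetical sketch: move x along low-sensitivity directions of `model`.

    Directions orthogonal to the row space of the input-output Jacobian leave
    the authorized model's outputs unchanged to first order, while models with
    misaligned subspaces see a genuine perturbation.
    """
    model.eval()
    # Jacobian of the logits w.r.t. the input: shape (1, K, 1, C, H, W).
    J = torch.autograd.functional.jacobian(model, x)
    J = J.reshape(J.shape[1], -1)                    # flatten to (K, D)
    # Rows of Vh form an orthonormal basis of the high-sensitivity row space.
    _, _, Vh = torch.linalg.svd(J, full_matrices=False)
    # Random direction with its high-sensitivity component projected out.
    g = torch.Generator().manual_seed(seed)
    v = torch.randn(x.numel(), generator=g)
    v = v - Vh.T @ (Vh @ v)                          # keep only the null-space part
    v = alpha * v / v.norm()
    return x + v.reshape_as(x)
```

Computing the full Jacobian is expensive for large models; a practical variant would estimate the low-sensitivity subspace from a handful of Jacobian-vector products instead.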
Problem

Research questions and friction points this paper is trying to address.

Develops model-specific authorization to prevent inference by unauthorized models
Creates non-transferable examples that preserve utility only for the authorized model
Keeps data useful for its intended purpose while staying resistant to misuse by unauthorized systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-transferable examples (NEs) for model-specific authorization
Recode inputs within a model-specific low-sensitivity subspace (see the sketch after the abstract)
Preserve outputs for the authorized model only, degrading unauthorized models (a non-transferability check is sketched after this list)
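A hypothetical end-to-end check of that last point, assuming the `recode` sketch above and two pretrained torchvision classifiers standing in for the authorized and unauthorized parties (the model choices and the random input are illustrative only):

```python
import torch
from torchvision.models import resnet18, vgg16

authorized = resnet18(weights="IMAGENET1K_V1").eval()
unauthorized = vgg16(weights="IMAGENET1K_V1").eval()

x = torch.rand(1, 3, 224, 224)            # stand-in for a real image
x_ne = recode(authorized, x, alpha=8.0)   # perturb along low-sensitivity directions

with torch.no_grad():
    kept = authorized(x).argmax() == authorized(x_ne).argmax()
    flipped = unauthorized(x).argmax() != unauthorized(x_ne).argmax()
print(f"authorized prediction preserved: {kept.item()}")
print(f"unauthorized prediction changed: {flipped.item()}")
```

Note that a random null-space direction of the authorized model is not guaranteed to flip the unauthorized model's prediction; the paper's method actively engineers subspace misalignment, which this sketch does not attempt.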