TokenTrace: Multi-Concept Attribution through Watermarked Token Recovery

📅 2026-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of fine-grained attribution in generative AI models that synthesize images combining multiple concepts (such as objects and artistic styles), where existing methods struggle to trace each constituent concept independently. To overcome the limitations of conventional watermarking techniques, the authors propose TokenTrace, a novel framework that embeds queryable secret signatures in the semantic domain by jointly perturbing the text prompt embeddings and the initial latent noise. TokenTrace introduces a decoupled verification module driven by textual queries, enabling precise and independent tracing of individual concepts within a single generated image. This approach achieves state-of-the-art performance on both single- and multi-concept attribution tasks while preserving high visual fidelity and robustness against common image transformations.

📝 Abstract
Generative AI models pose a significant challenge to intellectual property (IP), as they can replicate unique artistic styles and concepts without attribution. While watermarking offers a potential solution, existing methods often fail in complex scenarios where multiple concepts (e.g., an object and an artistic style) are composed within a single image. These methods struggle to disentangle and attribute each concept individually. In this work, we introduce TokenTrace, a novel proactive watermarking framework for robust, multi-concept attribution. Our method embeds secret signatures into the semantic domain by simultaneously perturbing the text prompt embedding and the initial latent noise that guide the diffusion model's generation process. For retrieval, we propose a query-based TokenTrace module that takes the generated image and a textual query specifying which concepts need to be retrieved (e.g., a specific object or style) as inputs. This query-based mechanism allows the module to disentangle and independently verify the presence of multiple concepts from a single generated image. Extensive experiments show that our method achieves state-of-the-art performance on both single-concept (object and style) and multi-concept attribution tasks, significantly outperforming existing baselines while maintaining high visual quality and robustness to common transformations.
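The embedding and retrieval steps described in the abstract, perturbing the prompt embedding and the initial latent noise with secret signatures and then verifying each concept independently, can be sketched in NumPy. This is a minimal toy under stated assumptions, not the paper's implementation: the signature construction, the perturbation strength `eps`, and the detection threshold `tau` are all illustrative choices, and `eps` is exaggerated here so the toy matched-filter detector separates cleanly.

```python
import numpy as np

def make_signature(dim, seed):
    # Unit-norm pseudo-random signature derived from a secret seed.
    g = np.random.default_rng(seed)
    s = g.standard_normal(dim)
    return s / np.linalg.norm(s)

def embed_watermarks(prompt_emb, latent, sig_prompt, sig_latent, eps=10.0):
    # Jointly perturb the text-prompt embedding and the initial latent
    # noise; each channel can carry a different concept's signature.
    # eps is deliberately large so the toy detector is unambiguous.
    return prompt_emb + eps * sig_prompt, latent + eps * sig_latent

def verify(vec, sig, tau=5.0):
    # Matched-filter test: the projection onto a unit-norm signature is
    # roughly N(0, 1) for clean vectors and shifted by eps when the
    # signature was embedded.
    score = float(vec @ sig)
    return score > tau, score

# Two concepts (e.g. an object and a style), one secret signature each.
dim = 512
prompt_emb = np.random.default_rng(42).standard_normal(dim)
latent = np.random.default_rng(43).standard_normal(dim)
sig_object = make_signature(dim, seed=1)
sig_style = make_signature(dim, seed=2)

wm_prompt, wm_latent = embed_watermarks(prompt_emb, latent,
                                        sig_object, sig_style)

ok_object, _ = verify(wm_prompt, sig_object)  # object concept: present
ok_style, _ = verify(wm_latent, sig_style)    # style concept: present
ok_wrong, _ = verify(wm_prompt, make_signature(dim, seed=99))  # absent
```

Because each signature is checked with its own projection, the two concepts are verified independently from the same generated output, which mirrors the query-based disentanglement the abstract describes; the actual TokenTrace module operates on the generated image rather than on the perturbed vectors directly.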
Problem

Research questions and friction points this paper is trying to address.

multi-concept attribution
intellectual property
generative AI
watermarking
concept disentanglement
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-concept attribution
proactive watermarking
diffusion models
semantic watermarking
query-based retrieval