🤖 AI Summary
This work addresses a critical issue in multimodal generation: shared attention mechanisms often lead to unintended verbatim copying of reference image content rather than pure style transfer. The study reveals, for the first time, that this phenomenon is closely tied to the dominance of high-frequency components in Rotary Position Embedding (RoPE), a dominance that biases attention toward spatial alignment and causes undesirable content replication. To mitigate this, the authors propose a frequency-selective modulation strategy that decomposes RoPE into its spectral components and suppresses the high-frequency ones, thereby steering attention toward semantic similarity instead of positional correspondence. Implemented within a Diffusion Transformer (DiT) framework, this approach effectively disentangles style from content, enables control over style-transfer intensity, and significantly enhances both the semantic coherence and the visual fidelity of generated outputs.
📝 Abstract
Positional encodings are essential to transformer-based generative models, yet their behavior in multimodal and attention-sharing settings is not fully understood. In this work, we present a principled analysis of Rotary Position Embedding (RoPE), showing that RoPE naturally decomposes into frequency components with distinct positional sensitivities. We demonstrate that this frequency structure explains why shared-attention mechanisms, where a target image is generated while attending to tokens from a reference image, can lead to reference copying, in which the model reproduces content from the reference instead of extracting only its stylistic cues. Our analysis reveals that the high-frequency components of RoPE dominate the attention computation, forcing queries to attend mainly to spatially aligned reference tokens and thereby inducing this unintended copying behavior. Building on these insights, we introduce a method for selectively modulating RoPE frequency bands so that attention reflects semantic similarity rather than strict positional alignment. Applied to modern transformer-based diffusion architectures, where all tokens share attention, this modulation restores stable and meaningful shared attention. As a result, it enables effective control over the degree of style transfer versus content copying, yielding a proper style-aligned generation process in which stylistic attributes are transferred without duplicating reference content.
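To make the frequency-band picture concrete, here is a minimal NumPy sketch of standard RoPE with a per-band gain mask. It is an illustration of the general idea, not the paper's actual implementation: the function names, the `freq_gain` parameter, and the choice to zero out the highest-frequency pairs are all assumptions for this example. In RoPE, channel pair `i` rotates at angle `pos * base^(-2i/d)`, so the first pairs carry the highest frequencies; shrinking their rotation angles damps exactly the positional sensitivity that the abstract identifies as the cause of reference copying.

```python
import numpy as np

def rope_freqs(head_dim, base=10000.0):
    # One rotation frequency per 2-D channel pair.
    # i = 0 gives theta = 1 (highest frequency); theta decays as i grows.
    i = np.arange(head_dim // 2)
    return base ** (-2.0 * i / head_dim)

def apply_rope(x, pos, freq_gain=None, base=10000.0):
    """Rotate channel pairs of x (shape: [seq, head_dim]) by pos * theta_i.

    freq_gain: optional per-pair multiplier; values < 1 on the
    high-frequency pairs damp their positional sensitivity (a sketch of
    frequency-selective modulation, not the paper's exact schedule).
    """
    theta = rope_freqs(x.shape[-1], base)        # (head_dim // 2,)
    if freq_gain is not None:
        theta = theta * freq_gain                # shrink selected bands
    ang = pos[:, None] * theta[None, :]          # (seq, head_dim // 2)
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical mask: suppress the top quarter of frequency bands, i.e.
# the first channel pairs, which rotate fastest with position.
head_dim = 64
gain = np.ones(head_dim // 2)
gain[: head_dim // 4] = 0.0
```

With `gain` set this way, the suppressed channel pairs receive a zero rotation angle at every position, so for those channels the query-key dot product no longer depends on relative position at all; attention on them is driven purely by feature (semantic) similarity, which is the behavior the abstract argues is needed to transfer style without copying content.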