🤖 AI Summary
This work addresses the longstanding reliance on textual intermediaries and the absence of direct visual conditional modeling in neural-signal-driven visual generation. We propose the first end-to-end unified framework for EEG-to-image generation, editing, and style transfer. Methodologically, we design a LoRA-based plug-and-play neural signal injection module and integrate it with causal attention within a diffusion Transformer architecture, enabling multimodal conditional modeling. We introduce EEG-Style—the first dedicated EEG-based style transfer dataset—and establish the CVPR40/Loongx cross-benchmark evaluation suite. Experiments demonstrate substantial improvements over state-of-the-art methods in generation fidelity, editing consistency, and style transfer quality. The framework achieves low computational overhead and supports multimodal extensibility, thereby bridging a critical gap in direct neural-signal-driven visual content generation.
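The summary above describes a LoRA-based, plug-and-play neural signal injection module attached to a frozen diffusion Transformer. Below is a minimal sketch of what such an adapter could look like; the module names, dimensions, and the toy EEG encoder are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of a plug-and-play LoRA injection adapter.
# All names/shapes below (LoRAInjection, EEGConditionEncoder, d_model=768, ...)
# are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn


class LoRAInjection(nn.Module):
    """Low-rank adapter added on top of a frozen linear projection.

    Only the rank-r factors (A, B) are trained, so the adapter can be
    plugged in or removed without touching base model parameters.
    """

    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # keep the diffusion Transformer frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)               # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


class EEGConditionEncoder(nn.Module):
    """Maps pooled EEG features to a sequence of condition tokens (assumed design)."""

    def __init__(self, eeg_channels: int = 128, d_model: int = 768, n_tokens: int = 8):
        super().__init__()
        self.proj = nn.Linear(eeg_channels, d_model * n_tokens)
        self.n_tokens, self.d_model = n_tokens, d_model

    def forward(self, eeg: torch.Tensor) -> torch.Tensor:
        # eeg: (batch, eeg_channels) -> (batch, n_tokens, d_model)
        return self.proj(eeg).view(eeg.size(0), self.n_tokens, self.d_model)


if __name__ == "__main__":
    frozen_proj = nn.Linear(768, 768)            # stand-in for a frozen DiT projection
    adapted_proj = LoRAInjection(frozen_proj, rank=16)
    encoder = EEGConditionEncoder()

    eeg = torch.randn(2, 128)                    # dummy pooled EEG features
    cond_tokens = encoder(eeg)                   # (2, 8, 768) condition tokens
    out = adapted_proj(cond_tokens)              # injected through the LoRA path
    print(out.shape)                             # torch.Size([2, 8, 768])
```

Because the low-rank path is the only trainable part, a separate adapter can in principle be trained per modality and swapped in at inference, which is how the "pluggable" multimodal conditioning would be realized under these assumptions.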
📝 Abstract
Generating or editing images directly from neural signals has immense potential at the intersection of neuroscience, vision, and brain-computer interaction. In this paper, we present Uni-Neur2Img, a unified framework for neural-signal-driven image generation and editing. The framework introduces a parameter-efficient LoRA-based neural signal injection module that processes each conditioning signal independently as a pluggable component, enabling flexible multimodal conditioning without altering base model parameters. Additionally, we employ a causal attention mechanism to accommodate the long-sequence modeling demands of conditional generation tasks. Existing neural-driven generation research predominantly relies on textual modalities as conditions or intermediate representations, leaving visual modalities largely unexplored as direct conditioning signals. To bridge this gap, we introduce EEG-Style, a dedicated dataset for EEG-driven style transfer. We conduct comprehensive evaluations across public benchmarks and self-collected neural signal datasets: (1) EEG-driven image generation on the public CVPR40 dataset; (2) neural-signal-guided image editing on the public Loongx dataset for semantic-aware local modifications; and (3) EEG-driven style transfer on our self-collected EEG-Style dataset. Extensive experimental results demonstrate significant improvements in generation fidelity, editing consistency, and style transfer quality, while maintaining low computational overhead and strong scalability to additional modalities. Uni-Neur2Img thus offers a unified, efficient, and extensible solution for bridging neural signals and visual content generation.
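The abstract mentions a causal attention mechanism for long conditional sequences. One plausible reading, sketched below, is a causal mask over a concatenated [condition tokens | image tokens] sequence, so that image tokens attend to all preceding condition tokens while condition tokens never attend to image tokens. Sequence lengths, dimensions, and the concatenation order are assumptions, and the single-head attention is a simplification of a real DiT block.

```python
# Hedged sketch: causal attention over a concatenated condition/image sequence.
# Shapes and ordering are illustrative assumptions, not the paper's design.
import torch
import torch.nn.functional as F

batch, n_cond, n_img, d = 2, 8, 256, 768
cond_tokens = torch.randn(batch, n_cond, d)   # EEG-derived condition tokens
img_tokens = torch.randn(batch, n_img, d)     # noisy image latent tokens

# Place conditions first: under a causal mask, every image token sees all
# condition tokens (and earlier image tokens), but not vice versa.
seq = torch.cat([cond_tokens, img_tokens], dim=1)   # (B, n_cond + n_img, d)

# Single-head attention for brevity; a real block would use learned
# multi-head q/k/v projections inside the diffusion Transformer.
q = k = v = seq
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 264, 768])
```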