Pattern Analogies: Learning to Perform Programmatic Image Edits by Analogy

📅 2024-12-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address disorganized inferred program structures and poor semantic controllability in procedural pattern editing, this paper introduces an analogy-driven, structure-preserving editing paradigm: given only a single pair of simple example patterns (a pattern analogy), the system automatically infers and executes the demonstrated generative edit. Key contributions include: (1) SplitWeave, a lightweight domain-specific language that explicitly models pattern structure; (2) a synthetic analogy sampling framework for efficiently generating diverse training data; and (3) TriFuser, a latent diffusion model (LDM) that jointly aligns program semantics with image representations. Experiments demonstrate high-fidelity editing of real artist-created patterns and strong generalization to unseen pattern styles, significantly outperforming both program-inversion and end-to-end image-translation baselines.

📝 Abstract
Pattern images are everywhere in the digital and physical worlds, and tools to edit them are valuable. But editing pattern images is tricky: desired edits are often programmatic -- structure-aware edits that alter the underlying program which generates the pattern. One could attempt to infer this underlying program, but current methods for doing so struggle with complex images and produce unorganized programs that make editing tedious. In this work, we introduce a novel approach to perform programmatic edits on pattern images. By using a pattern analogy -- a pair of simple patterns to demonstrate the intended edit -- and a learning-based generative model to execute these edits, our method allows users to intuitively edit patterns. To enable this paradigm, we introduce SplitWeave, a domain-specific language that, combined with a framework for sampling synthetic pattern analogies, enables the creation of a large, high-quality synthetic training dataset. We also present TriFuser, a Latent Diffusion Model (LDM) designed to overcome critical issues that arise when naively deploying LDMs to this task. Extensive experiments on real-world, artist-sourced patterns reveal that our method faithfully performs the demonstrated edit while also generalizing to related pattern styles beyond its training distribution.
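To make the interface concrete, here is a deliberately minimal toy sketch of the pattern-analogy idea from the abstract: an analogy pair (A, A') demonstrates an edit, which is then applied to a new target pattern B. All names and the lookup-table "edit model" below are illustrative assumptions for exposition only; the paper's actual system (TriFuser) is a latent diffusion model operating on pattern images, not a value-mapping function.

```python
# Toy pattern-analogy sketch (hypothetical; NOT the paper's method).
# Patterns are small integer grids; the demonstrated edit A -> A' is
# modeled as a per-value lookup table, then applied to target B.

def infer_edit(a, a_prime):
    """Infer the edit demonstrated by the analogy pair (A, A')."""
    mapping = {}
    for row, row_p in zip(a, a_prime):
        for v, v_p in zip(row, row_p):
            mapping[v] = v_p
    return mapping

def apply_edit(mapping, b):
    """Apply the inferred edit to a new target pattern B."""
    return [[mapping.get(v, v) for v in row] for row in b]

# Analogy: invert the two tile values of a checkerboard.
A       = [[0, 1], [1, 0]]
A_prime = [[1, 0], [0, 1]]
B       = [[0, 0, 1], [1, 1, 0]]

B_prime = apply_edit(infer_edit(A, A_prime), B)
print(B_prime)  # -> [[1, 1, 0], [0, 0, 1]]
```

The point of the sketch is the interaction model, not the mechanism: the user never touches the underlying program, only supplies a simple before/after pair, and the system transfers the demonstrated edit to a more complex pattern.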
Problem

Research questions and friction points this paper is trying to address.

Editing pattern images requires programmatic, structure-aware edits
Current methods struggle with complex images and produce unorganized programs
Need intuitive editing via pattern analogies and learning-based generative models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning-based generative model for pattern edits
SplitWeave DSL for synthetic training data
TriFuser LDM for robust pattern editing