🤖 AI Summary
Traditional image synthesis prioritizes photorealism and semantic plausibility, yet users on creative platforms emphasize artistic expression, playfulness, and social appeal, motivating a shift toward expressive synthesis, which values stylistic diversity and flexible layout logic to model real-world sticker editing behavior. Method: We introduce StickerNet, a two-stage framework (classification followed by regression) that jointly predicts sticker transparency, position, scale, and mask, trained on the first large-scale dataset derived from real user editing logs. Crucially, the approach abandons pixel-level realism in favor of modeling human creative intent. Contribution/Results: Extensive user studies and quantitative evaluations demonstrate that StickerNet significantly outperforms existing baselines in style compatibility, layout plausibility, and social acceptability, closely aligning with authentic creative practices. This validates the effectiveness of learning expressive priors directly from behavioral data.
📄 Abstract
As a widely used operation in image editing workflows, image composition has traditionally been studied with a focus on achieving visual realism and semantic plausibility. However, in practical editing scenarios on modern content creation platforms, many compositions are not intended to preserve realism. Instead, users of online platforms, often motivated by community recognition, aim to create content that is more artistic, playful, or socially engaging. Taking inspiration from this observation, we define the expressive composition task, a new formulation of image composition that embraces stylistic diversity and looser placement logic, reflecting how users edit images on real-world creative platforms. To address this underexplored problem, we present StickerNet, a two-stage framework that first determines the composition type, then predicts placement parameters such as opacity, mask, location, and scale accordingly. Unlike prior work that constructs datasets by simulating object placements on real images, we build our dataset directly from 1.8 million editing actions collected on an anonymous online visual creation and editing platform, each reflecting a community-validated placement decision. This grounding in authentic editing behavior ensures strong alignment between the task definition and the training supervision. User studies and quantitative evaluations show that StickerNet outperforms common baselines and closely matches human placement behavior, demonstrating the effectiveness of learning from real-world editing patterns despite the inherent ambiguity of the task. This work introduces a new direction in visual understanding that emphasizes expressiveness and user intent over realism.
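The "classify composition type first, then regress placement parameters" pipeline described above can be sketched in plain Python. This is a minimal illustrative sketch, not the paper's actual architecture: the function names, the toy threshold classifier, and the per-type regression branches are all hypothetical stand-ins for learned models.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    comp_type: str   # predicted composition type (stage 1)
    opacity: float   # 0.0 (fully transparent) .. 1.0 (fully opaque)
    x: float         # normalized sticker center x in [0, 1]
    y: float         # normalized sticker center y in [0, 1]
    scale: float     # sticker size relative to canvas width

def classify_composition(features: list[float]) -> str:
    """Stage 1: choose a composition type from canvas/sticker features.
    A real system would use a learned classifier; this stub thresholds
    a mean score purely for illustration."""
    score = sum(features) / len(features)
    return "overlay" if score >= 0.5 else "blend"

def regress_placement(comp_type: str, features: list[float]) -> Placement:
    """Stage 2: predict placement parameters conditioned on the type.
    Each branch stands in for a type-specific regression head."""
    mean = sum(features) / len(features)
    if comp_type == "overlay":
        # Opaque sticker, position driven by the feature score.
        return Placement(comp_type, opacity=1.0, x=mean, y=mean, scale=0.3)
    # Semi-transparent, centered composition for the "blend" type.
    return Placement(comp_type, opacity=0.6, x=0.5, y=0.5, scale=0.5)

def predict(features: list[float]) -> Placement:
    """Full two-stage inference: type decision, then conditional regression."""
    comp_type = classify_composition(features)
    return regress_placement(comp_type, features)
```

The design point the sketch mirrors is that the stage-2 regressor is conditioned on the stage-1 decision, so different composition types can follow different placement logics rather than forcing one regression head to average over them.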