🤖 AI Summary
Existing text-to-vector sketch generation methods lack structured, part-level semantic information, making it difficult to produce outputs that are controllable, interpretable, and locally editable. To address this limitation, this work proposes a part-by-part generation framework powered by a multimodal language model agent. By combining supervised fine-tuning with multi-turn process-reward reinforcement learning, and leveraging visual feedback alongside structured part annotations, the approach enables fine-grained semantic control over sketch generation. To support this research, the work introduces ControlSketch-Part, the first part-level vector sketch dataset, along with a general-purpose automated annotation pipeline. The method is the first to incorporate part-level semantic structure into vector sketch synthesis, significantly improving controllability, interpretability, and local editability while maintaining global semantic alignment.
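To make the part-by-part loop concrete, here is a minimal, hypothetical sketch of generating one semantic part per turn, re-rendering the partial sketch as visual feedback between turns. All names (`plan_parts`, `propose_part`, `render`) and the stub behaviors are illustrative assumptions, not the paper's actual interface:

```python
# Minimal sketch of a part-by-part generation loop with visual feedback.
# Every function below is a hypothetical stand-in for an agent call.

from dataclasses import dataclass, field

@dataclass
class Sketch:
    paths: list[str] = field(default_factory=list)  # SVG path strings

def plan_parts(prompt: str) -> list[str]:
    """Hypothetical: ask the agent to decompose the prompt into named parts."""
    return ["body", "head", "legs"]  # e.g. for "a cat"

def render(sketch: Sketch) -> bytes:
    """Hypothetical rasterizer; the visual feedback shown to the agent."""
    return "\n".join(sketch.paths).encode()

def propose_part(prompt: str, part: str, feedback: bytes) -> list[str]:
    """Hypothetical: agent emits vector paths for one part, conditioned on
    a rendering of everything drawn so far."""
    return [f"M 0 0 L 10 10  <!-- {part} -->"]

def generate(prompt: str) -> Sketch:
    sketch = Sketch()
    for part in plan_parts(prompt):       # one turn per semantic part
        feedback = render(sketch)         # visual state before this turn
        sketch.paths += propose_part(prompt, part, feedback)
    return sketch

print(generate("a cat").paths)
```

In a multi-turn process-reward setup, a reward could in principle be attached to each turn of this loop (e.g., scoring the rendering after each part) rather than only to the finished sketch.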
📝 Abstract
We develop a method for producing vector sketches one part at a time. To do this, we train a multimodal language model agent using a novel multi-turn process-reward reinforcement learning scheme that follows supervised fine-tuning. Our approach is enabled by a new dataset we call ControlSketch-Part, containing rich part-level annotations for sketches, obtained using a novel, generic automatic annotation pipeline that segments vector sketches into semantic parts and assigns paths to parts through a structured multi-stage labeling process. Our results indicate that incorporating structured part-level data and providing the agent with visual feedback throughout the process enables interpretable, controllable, and locally editable text-to-vector sketch generation.
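The abstract does not detail the pipeline's stages, but one plausible path-to-part assignment step, sketched below, is to rasterize each vector path and assign it to the semantic part whose mask it overlaps most. Everything here, including the assumed per-part masks, `rasterize_path`, and the overlap heuristic, is an illustrative assumption rather than the paper's actual procedure:

```python
# A minimal, assumed sketch of path-to-part assignment by mask overlap.

import numpy as np

def rasterize_path(path_id: int, shape=(64, 64)) -> np.ndarray:
    """Hypothetical per-path raster; returns a binary occupancy mask."""
    mask = np.zeros(shape, dtype=bool)
    mask[path_id % shape[0], :] = True  # stand-in for real rasterization
    return mask

def assign_paths_to_parts(
    num_paths: int, part_masks: dict[str, np.ndarray]
) -> dict[int, str]:
    """Assign each vector path to the part whose mask it overlaps most."""
    assignment = {}
    for pid in range(num_paths):
        pmask = rasterize_path(pid)
        overlap = {name: np.logical_and(pmask, m).sum()
                   for name, m in part_masks.items()}
        assignment[pid] = max(overlap, key=overlap.get)
    return assignment

# Toy part masks standing in for a segmenter's output.
parts = {"head": np.zeros((64, 64), dtype=bool),
         "body": np.ones((64, 64), dtype=bool)}
print(assign_paths_to_parts(3, parts))  # every toy path lands on "body"
```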