IMPASTO: Integrating Model-Based Planning with Learned Dynamics Models for Robotic Oil Painting Reproduction

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing robotic methods for reproducing oil paintings are hindered by the absence of human demonstrations and of high-fidelity simulators, which makes force-sensitive control and accurate stroke-effect prediction difficult with deformable tools such as soft-bristle brushes. This work proposes a framework that integrates low-level force control, a learned pixel-wise dynamics model, and high-level closed-loop planning. It uses self-play to learn an image-to-image dynamics model and combines parameterized stroke actions with receding-horizon model predictive control, optimizing robot-arm trajectories and contact forces from visual observations alone. The method is the first to unify force-sensitive control, learnable dynamics, and multi-step planning on a seven-degree-of-freedom robotic system; it significantly outperforms existing baselines on both single- and multi-stroke oil-painting reproduction and approaches the quality of human artists.
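The self-play data collection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the stroke parameterization (position, force, intensity) and the toy "paint a disk, wider under heavier force" effect are assumptions standing in for the real robot, camera, and pigments.

```python
import numpy as np

def collect_self_play(n_strokes=200, size=32, seed=0):
    """Self-play data collection sketch: paint random parameterized
    strokes on a simulated canvas and record (canvas_before, action,
    canvas_after) triples for training an image-to-image dynamics
    model. The disk-painting effect is a stand-in for real strokes."""
    rng = np.random.default_rng(seed)
    canvas = np.zeros((size, size))
    yy, xx = np.mgrid[0:size, 0:size]
    data = []
    for _ in range(n_strokes):
        # Illustrative action: (x, y, force, intensity), all in [0, 1).
        a = rng.uniform(0.0, 1.0, 4)
        x, y, force, intensity = a[0] * size, a[1] * size, a[2], a[3]
        before = canvas.copy()
        radius = 1.0 + 4.0 * force  # heavier press -> wider footprint
        mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
        # Blend pigment toward full coverage inside the footprint.
        canvas[mask] = (1 - intensity) * canvas[mask] + intensity
        data.append((before, a, canvas.copy()))
    return data
```

A supervised model trained on such triples learns to predict `canvas_after` from `(canvas_before, action)`, which is the interface the planner needs.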
📝 Abstract
Robotic reproduction of oil paintings using soft brushes and pigments requires force-sensitive control of deformable tools, prediction of brushstroke effects, and multi-step stroke planning, often without human step-by-step demonstrations or faithful simulators. Given only a sequence of target oil painting images, can a robot infer and execute the stroke trajectories, forces, and colors needed to reproduce them? We present IMPASTO, a robotic oil-painting system that integrates learned pixel dynamics models with model-based planning. The dynamics models predict canvas updates from image observations and parameterized stroke actions; a receding-horizon model predictive control optimizer then plans trajectories and forces, while a force-sensitive controller executes strokes on a 7-DoF robot arm. IMPASTO integrates low-level force control, learned dynamics models, and high-level closed-loop planning, learns solely from robot self-play, and approximates human artists' single-stroke datasets and multi-stroke artworks, outperforming baselines in reproduction accuracy. Project website: https://impasto-robopainting.github.io/
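The predict-plan-execute loop in the abstract can be sketched as random-shooting model predictive control. Everything below is an illustrative assumption rather than the paper's method: `dynamics` is a toy disk-painting stand-in for the learned image-to-image model, and the action tuple `(x, y, force, intensity)` is a hypothetical stroke parameterization.

```python
import numpy as np

def dynamics(canvas, action):
    # Stand-in for the learned dynamics model: predicts the next canvas
    # from the current canvas and a stroke (x, y, force, intensity).
    h, w = canvas.shape
    x, y, force, intensity = action
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - x) ** 2 + (yy - y) ** 2 <= (1.0 + 4.0 * force) ** 2
    out = canvas.copy()
    out[mask] = (1 - intensity) * out[mask] + intensity
    return out

def plan_stroke(canvas, target, n_samples=64, rng=None):
    # One receding-horizon step via random shooting: sample candidate
    # strokes, roll each through the dynamics model, and keep the one
    # minimizing pixel-wise error against the target image.
    rng = np.random.default_rng(rng)
    h, w = canvas.shape
    best, best_cost = None, np.inf
    for _ in range(n_samples):
        a = (rng.uniform(0, w), rng.uniform(0, h),
             rng.uniform(0, 1), rng.uniform(0, 1))
        cost = np.mean((dynamics(canvas, a) - target) ** 2)
        if cost < best_cost:
            best, best_cost = a, cost
    return best

def paint(target, n_strokes=5, rng=0):
    # Closed loop: plan, execute (here: simulate), observe, replan.
    rng = np.random.default_rng(rng)
    canvas = np.zeros_like(target)
    for _ in range(n_strokes):
        a = plan_stroke(canvas, target, rng=rng)
        nxt = dynamics(canvas, a)
        # Accept only strokes the model predicts will reduce error.
        if np.mean((nxt - target) ** 2) < np.mean((canvas - target) ** 2):
            canvas = nxt
    return canvas
```

On the real system, the simulated `dynamics` call in `paint` would be replaced by executing the stroke with the force-sensitive controller and re-observing the canvas with the camera, which is what makes the loop closed.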
Problem

Research questions and friction points this paper is trying to address.

robotic painting
oil painting reproduction
brushstroke planning
learned dynamics
force-sensitive control
Innovation

Methods, ideas, or system contributions that make the work stand out.

learned dynamics models
model predictive control
force-sensitive control
robotic painting
self-play learning