🤖 AI Summary
This work addresses the challenge of real-time, emotionally coherent, and dynamically consistent visual generation by robotic swarms driven by musical input. Method: We propose a cross-modal robotic painting control framework featuring a novel “music–emotion–painting” ternary mapping model and an art-oriented heterogeneous swarm coverage control paradigm. This paradigm integrates emotion-aware audio analysis, distributed motion planning, and heterogeneous coverage trajectory generation to coordinate multi-robot color deposition. Contribution/Results: A custom LED light-painting hardware system is developed and validated in both simulation and physical experiments. The framework supports diverse musical inputs, producing stylistically unified, spatiotemporally coherent, and emotionally interpretable collective artworks—demonstrating the first end-to-end pipeline for music-driven, emotion-grounded, swarm-based visual art creation.
📝 Abstract
This paper proposes a novel control framework for robotic swarms capable of turning a musical input into a painting. The approach connects the two artistic domains, music and painting, by leveraging their respective connections to fundamental emotions. The robotic units of the swarm are coordinated through a heterogeneous coverage policy that governs the motion of the robots, which continuously release traces of color in the environment. The results of extensive simulations, performed starting from different musical inputs and with different color equipment, are reported. Finally, the proposed framework has been implemented on real robots equipped with LED lights and capable of light-painting.
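To give a concrete feel for the kind of "music–emotion–painting" mapping the abstract describes, one illustrative possibility is a function from a valence–arousal emotion estimate (a common representation in affective audio analysis) to an LED color. This sketch is purely hypothetical and is not the paper's actual model; the hue/saturation choices are assumptions made for illustration.

```python
import colorsys

def emotion_to_rgb(valence: float, arousal: float) -> tuple[int, int, int]:
    """Illustrative mapping from a valence-arousal point to an RGB color.

    Assumed convention (not from the paper): valence in [-1, 1] selects
    hue from cool blue (negative) toward warm red (positive); arousal in
    [0, 1] drives saturation and brightness of the emitted LED color.
    """
    # Hue sweeps from 0.66 (blue) at valence = -1 to 0.0 (red) at valence = +1.
    hue = 0.66 * (1.0 - valence) / 2.0
    sat = 0.4 + 0.6 * arousal  # calmer music -> paler colors
    val = 0.6 + 0.4 * arousal  # higher arousal -> brighter LEDs
    r, g, b = colorsys.hsv_to_rgb(hue, sat, val)
    return (round(255 * r), round(255 * g), round(255 * b))
```

A swarm unit could then query such a function at each time step with the current emotion estimate to decide which color trace to deposit; the real framework presumably uses a richer learned or hand-designed mapping.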