🤖 AI Summary
This work uncovers a novel optical side-channel threat in remotely monitored 3D printing: an attacker can non-intrusively steal intellectual property by reconstructing G-code instructions solely from nozzle motion trajectories extracted from surveillance camera video. To address this threat, we propose an end-to-end framework that integrates computer-vision-based trajectory tracking, kinematic motion mapping, G-code semantic parsing, and geometrically robust functional equivalence verification. We introduce the first rotation- and translation-invariant functional-equivalence checker for reverse-engineered G-code, enabling formal validation of functional correctness. Experimental evaluation demonstrates an average instruction reconstruction accuracy of 90.87%, a 30.20% reduction in syntactic redundancy, and successful physical printing of functionally equivalent replicas.
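The trajectory-tracking and motion-mapping steps above can be sketched as follows. This is a minimal illustration under assumed conventions (a fixed camera frame rate and a feed rate estimated from inter-frame displacement); the function name and thresholds are hypothetical, not the paper's actual pipeline:

```python
import math

def trajectory_to_gcode(points, fps=30.0):
    """Map a sequence of tracked nozzle positions (x, y in mm, one per
    video frame) to G1 move instructions, estimating the feed rate (F,
    in mm/min) from the displacement between consecutive frames."""
    gcode = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)  # mm moved in one frame
        if dist < 1e-6:
            continue                         # skip stationary frames
        feed = dist * fps * 60.0             # mm/frame -> mm/min
        gcode.append(f"G1 X{x1:.3f} Y{y1:.3f} F{feed:.0f}")
    return gcode

# A nozzle moving 1 mm per frame at 30 fps implies an 1800 mm/min feed.
print(trajectory_to_gcode([(0, 0), (1, 0), (2, 0), (2, 1)]))
```

In a real attack the per-frame pixel positions would first have to be calibrated to millimeters and de-noised; this sketch only shows the kinematic mapping from positions to linear-move (G1) instructions.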
📝 Abstract
The 3D printing industry is growing rapidly and is increasingly adopted across sectors including manufacturing, healthcare, and defense. However, the operational setup often involves hazardous environments, necessitating remote monitoring through cameras and other sensors, which opens the door to cyber-based attacks. In this paper, we show that an adversary with access to video recordings of the 3D printing process can reverse engineer the underlying 3D print instructions. Our model tracks the printer nozzle's movements during the printing process and maps the resulting trajectory into G-code instructions. Further, it identifies the correct parameters, such as feed rate and extrusion rate, enabling successful intellectual property theft. To validate this, we design an equivalence checker that quantitatively compares two sets of 3D print instructions, evaluating how similar the objects they produce are in shape, external appearance, and internal structure. Unlike simple distance-based metrics such as normalized mean squared error, our equivalence checker is both rotationally and translationally invariant, accounting for shifts in the base position of the reverse-engineered instructions caused by different camera positions. Our model achieves an average accuracy of 90.87 percent and generates 30.20 percent fewer instructions than existing methods, which often produce faulty or inaccurate prints. Finally, we demonstrate a fully functional counterfeit object generated by reverse engineering 3D print instructions from video.
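A rotation- and translation-invariant comparison of the kind described can be sketched with a Kabsch-style rigid alignment: center both toolpath point clouds, recover the optimal rotation via SVD, and measure the residual error. The function name and the RMS metric are assumptions for illustration; the paper's actual checker may use a different formulation:

```python
import numpy as np

def invariant_trajectory_error(P, Q):
    """Compare two toolpath point clouds (N x 3) up to rotation and
    translation: center each cloud, align them with the Kabsch
    algorithm, and return the RMS residual after optimal rigid
    alignment. A small residual means the two instruction sets trace
    geometrically equivalent paths."""
    P = np.asarray(P, float)
    Q = np.asarray(Q, float)
    Pc = P - P.mean(axis=0)           # remove translation
    Qc = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)  # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return float(np.sqrt(np.mean(np.sum((Qc - Pc @ R.T) ** 2, axis=1))))

# A square toolpath and a rotated, shifted copy of it should match.
square = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
t = np.pi / 3
Rz = np.array([[np.cos(t), -np.sin(t), 0],
               [np.sin(t),  np.cos(t), 0],
               [0, 0, 1]])
moved = square @ Rz.T + np.array([5.0, -2.0, 1.0])
err = invariant_trajectory_error(square, moved)
print(err)  # near zero: the two paths are rigidly equivalent
```

By construction this metric is unchanged when one trajectory is shifted or rotated, which is exactly the property a plain normalized mean squared error lacks when the camera's viewpoint changes the apparent base position of the print.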