🤖 AI Summary
This work addresses the challenge of directly inferring non-intuitive, dense chemical states from 3D visual data in optical 3D printing. To overcome modeling difficulties arising from the strong coupling between optical propagation and material physics, we propose the first voxel-wise photochemical conversion prediction method for optical 3D printing, termed Coupled Physics-Gated Adaptation (C-PGA). C-PGA employs geometry and process parameters as query signals and dynamically modulates dual-stream 3D-CNN features (raw projections and diffusion-diffraction-corrected projections) via Feature-wise Linear Modulation (FiLM), explicitly embedding the physical constraints governing light transport and mass transport. Evaluated on the largest publicly available optical 3D printing dataset, our method achieves high-fidelity voxel-level prediction of the conversion distribution. It enables, for the first time, virtual chemical characterisation without post-hoc measurements and precise spatial control of chemical states within complex 3D structures.
📝 Abstract
We present a framework that pioneers the prediction of photochemical conversion in complex three-dimensionally printed objects, introducing a challenging new computer vision task: predicting dense, non-visual volumetric physical properties from 3D visual data. This approach leverages the largest-ever optically printed 3D specimen dataset, comprising a large family of parametrically designed complex minimal surface structures that have undergone terminal chemical characterisation. Conventional vision models are ill-equipped for this task, as they lack an inductive bias for the coupled, non-linear interactions of optical physics (diffraction, absorption) and material physics (diffusion, convection) that govern the final chemical state. To address this, we propose Coupled Physics-Gated Adaptation (C-PGA), a novel multimodal fusion architecture. Unlike standard concatenation, C-PGA explicitly models physical coupling by using sparse geometrical and process parameters (e.g., surface transport, print layer height) as a query to dynamically gate and adapt the dense visual features via Feature-wise Linear Modulation (FiLM). This mechanism spatially modulates dual 3D visual streams, extracted by parallel 3D-CNNs that process raw projection stacks and their diffusion-diffraction-corrected counterparts, allowing the model to recalibrate its visual perception based on the physical context. This approach offers a breakthrough in virtual chemical characterisation, eliminating the need for traditional post-print measurements and enabling precise control over the chemical conversion state.
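The FiLM-based gating described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the linear conditioning network, channel counts, and fusion-by-concatenation are assumptions made purely to show how sparse process parameters produce per-channel affine coefficients that modulate two volumetric feature streams.

```python
import numpy as np

rng = np.random.default_rng(0)

def film_modulate(features, gamma, beta):
    """FiLM: per-channel affine modulation of a 3D feature volume.
    features: (C, D, H, W); gamma, beta: (C,)."""
    return gamma[:, None, None, None] * features + beta[:, None, None, None]

def gate_dual_streams(raw_feat, corrected_feat, params, W, b):
    """Map sparse process/geometry parameters to FiLM coefficients for
    both streams (hypothetical linear conditioning net), modulate each
    stream, and fuse by channel-wise concatenation."""
    coeffs = params @ W + b                  # (4C,): gamma/beta per stream
    g1, b1, g2, b2 = np.split(coeffs, 4)
    return np.concatenate([
        film_modulate(raw_feat, 1.0 + g1, b1),        # identity-centred gain
        film_modulate(corrected_feat, 1.0 + g2, b2),
    ], axis=0)                               # (2C, D, H, W)

# Toy shapes: 8 channels over a 4^3 voxel grid, 3 scalar parameters.
C, D = 8, 4
raw = rng.standard_normal((C, D, D, D))
corr = rng.standard_normal((C, D, D, D))
params = np.array([0.05, 0.5, 0.1])  # e.g. layer height, dose, diffusivity
W = rng.standard_normal((3, 4 * C)) * 0.1
b = np.zeros(4 * C)
out = gate_dual_streams(raw, corr, params, W, b)
print(out.shape)  # (16, 4, 4, 4)
```

With zero conditioning weights the modulation reduces to the identity, so the fused output is just the concatenated streams; the parameters only perturb perception away from that baseline, which is the "gating" intuition behind C-PGA.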