AI Summary
This work addresses the challenge of repairing severely corrupted or incomplete voxel models guided by multi-view images, proposing VIAFormer, a novel framework that fuses calibrated multi-view images with voxel data to achieve high-fidelity geometric reconstruction. The method introduces an image indexing mechanism with explicit 3D spatial localization to enable precise 2D-3D alignment, formulates a rectified-flow optimization objective to learn direct repair trajectories, and employs a hybrid-stream Transformer for effective cross-modal feature fusion. Extensive experiments demonstrate that VIAFormer achieves state-of-the-art performance both on synthetically degraded voxels and on voxels produced by real-world vision foundation models, significantly improving geometric completeness and detail fidelity. Its successful integration into practical 3D content creation pipelines further underscores its effectiveness and real-world applicability.
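The "explicit 3D spatial localization" of image tokens can be illustrated with standard pinhole-camera back-projection: each 2D patch center is lifted to a 3D world-space point given calibrated camera parameters, so every image token carries a 3D position alongside its appearance features. This is a minimal, hypothetical sketch; the paper's actual indexing scheme, depth source, and encoding are not specified here.

```python
import numpy as np

def image_token_index(K, R, t, patch_centers_px, depth):
    """Back-project 2D patch centers into world space, giving each image
    token an explicit 3D position.
    K: 3x3 camera intrinsics; R, t: world-to-camera rotation/translation;
    patch_centers_px: (N, 2) pixel coordinates; depth: (N,) depth per patch."""
    # Homogeneous pixel coordinates (u, v, 1)
    uv1 = np.concatenate(
        [patch_centers_px, np.ones((len(patch_centers_px), 1))], axis=1)
    rays_cam = (np.linalg.inv(K) @ uv1.T).T   # viewing rays in camera frame
    pts_cam = rays_cam * depth[:, None]       # scale rays by per-patch depth
    pts_world = (R.T @ (pts_cam - t).T).T     # invert the world-to-camera transform
    return pts_world

# Usage: identity camera at the origin, one patch at the principal point,
# depth 2 -> the token is grounded at (0, 0, 2) in world space.
pts = image_token_index(np.eye(3), np.eye(3), np.zeros(3),
                        np.array([[0.0, 0.0]]), np.array([2.0]))
```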
Abstract
We propose VIAFormer, a Voxel-Image Alignment Transformer designed for Multi-view Conditioned Voxel Refinement--the task of repairing incomplete, noisy voxels using calibrated multi-view images as guidance. Its effectiveness stems from a synergistic design: an Image Index that provides explicit 3D spatial grounding for 2D image tokens, a Correctional Flow objective that learns a direct voxel-refinement trajectory, and a Hybrid Stream Transformer that enables robust cross-modal fusion. Experiments show that VIAFormer establishes a new state of the art in correcting both severe synthetic corruptions and the realistic artifacts found in voxel shapes produced by powerful Vision Foundation Models. Beyond benchmarking, we demonstrate VIAFormer as a practical and reliable bridge in real-world 3D creation pipelines, paving the way for voxel-based methods to thrive in the era of large models and big data.
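A rectified-flow objective of the kind the Correctional Flow name suggests can be sketched as follows: sample a point on the straight-line path between the corrupted voxel grid (t=0) and the clean target (t=1), and regress the model's predicted velocity onto the constant displacement between them. This is an illustrative sketch of the generic rectified-flow loss only; `rectified_flow_loss` and the stand-in `model` are hypothetical names, and the paper's actual parameterization, conditioning, and loss weighting are not specified here.

```python
import numpy as np

def rectified_flow_loss(model, x_corrupt, x_clean, rng):
    """Generic rectified-flow objective: interpolate between corrupted and
    clean voxels, then match the predicted velocity to the straight-line
    displacement x_clean - x_corrupt."""
    # One time value per batch element, broadcast over spatial dims
    t = rng.uniform(size=(x_corrupt.shape[0],) + (1,) * (x_corrupt.ndim - 1))
    x_t = (1.0 - t) * x_corrupt + t * x_clean  # point on the linear path
    v_target = x_clean - x_corrupt             # constant "straight" velocity
    v_pred = model(x_t, t)                     # model predicts velocity at (x_t, t)
    return np.mean((v_pred - v_target) ** 2)   # mean-squared-error regression

# Usage with a trivial stand-in model that always predicts zero velocity:
rng = np.random.default_rng(0)
x0 = rng.normal(size=(2, 4, 4, 4))  # corrupted voxel grids (batch of 2)
x1 = rng.normal(size=(2, 4, 4, 4))  # clean voxel grids
loss = rectified_flow_loss(lambda x, t: np.zeros_like(x), x0, x1, rng)
```

Because the target velocity is constant along the path, sampling at inference can take large, nearly straight integration steps, which is what makes such "direct trajectory" objectives attractive for refinement tasks.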