PhysX: Physics-Grounded 3D Asset Generation

📅 2025-07-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing 3D generation methods focus primarily on geometry and texture while neglecting physical property modeling, which limits their applicability in simulation and embodied AI. To address this, the authors introduce PhysXNet, the first large-scale 3D dataset systematically annotated across five physical dimensions: absolute scale, material, affordance, kinematics, and function description, built with a scalable human-in-the-loop annotation pipeline based on vision-language models. They further propose PhysXGen, an end-to-end physics-grounded generative framework whose dual-branch architecture injects physical knowledge into a pretrained 3D representation space, enabling joint generation of image-conditioned 3D assets with physically interpretable attributes. Experiments show that PhysXGen outperforms state-of-the-art baselines in both physical attribute prediction and geometric quality while generalizing well across domains. All code, data, and models will be publicly released.


πŸ“ Abstract
3D modeling is moving from virtual to physical. Existing 3D generation primarily emphasizes geometries and textures while neglecting physical-grounded modeling. Consequently, despite the rapid development of 3D generative models, the synthesized 3D assets often overlook rich and important physical properties, hampering their real-world application in physical domains like simulation and embodied AI. As an initial attempt to address this challenge, we propose PhysX, an end-to-end paradigm for physical-grounded 3D asset generation. 1) To bridge the critical gap in physics-annotated 3D datasets, we present PhysXNet - the first physics-grounded 3D dataset systematically annotated across five foundational dimensions: absolute scale, material, affordance, kinematics, and function description. In particular, we devise a scalable human-in-the-loop annotation pipeline based on vision-language models, which enables efficient creation of physics-first assets from raw 3D assets. 2) Furthermore, we propose PhysXGen, a feed-forward framework for physics-grounded image-to-3D asset generation, injecting physical knowledge into the pre-trained 3D structural space. Specifically, PhysXGen employs a dual-branch architecture to explicitly model the latent correlations between 3D structures and physical properties, thereby producing 3D assets with plausible physical predictions while preserving the native geometry quality. Extensive experiments validate the superior performance and promising generalization capability of our framework. All the code, data, and models will be released to facilitate future research in generative physical AI.
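The abstract does not give implementation details, but the dual-branch idea can be sketched: a shared latent feeds two heads, one decoding geometry and one decoding physical attributes, with the physics head also conditioned on the geometry code so that structure-physics correlations can be exploited. The NumPy sketch below is purely illustrative; all layer sizes and the name `DualBranchSketch` are assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    return x @ w + b

class DualBranchSketch:
    """Toy dual-branch decoder: one shared latent, two output heads.

    Hypothetical sizes: 64-d shared latent, 32-d geometry code,
    5-d physics vector (one slot per annotated physical dimension).
    """
    def __init__(self, d_latent=64, d_geo=32, d_phys=5):
        self.w_shared = rng.normal(0, 0.1, (d_latent, d_latent))
        self.b_shared = np.zeros(d_latent)
        self.w_geo = rng.normal(0, 0.1, (d_latent, d_geo))
        self.b_geo = np.zeros(d_geo)
        # The physics head also sees the geometry code, loosely modeling
        # the structure <-> physics correlation described in the abstract.
        self.w_phys = rng.normal(0, 0.1, (d_latent + d_geo, d_phys))
        self.b_phys = np.zeros(d_phys)

    def forward(self, z):
        h = np.tanh(linear(z, self.w_shared, self.b_shared))
        geo = linear(h, self.w_geo, self.b_geo)
        phys = linear(np.concatenate([h, geo], axis=-1),
                      self.w_phys, self.b_phys)
        return geo, phys

model = DualBranchSketch()
z = rng.normal(size=(2, 64))   # batch of 2 image-conditioned latents
geo, phys = model.forward(z)
print(geo.shape, phys.shape)   # (2, 32) (2, 5)
```

In the real framework the heads would be trained decoders over a pretrained 3D structural space; here they are random linear maps that only demonstrate the data flow.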
Problem

Research questions and friction points this paper is trying to address.

Generates 3D assets with physical properties for real-world applications
Addresses lack of physics-annotated 3D datasets with PhysXNet
Proposes PhysXGen for physics-grounded image-to-3D generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

PhysXNet: first physics-annotated 3D dataset
PhysXGen: dual-branch physics-grounded 3D generation
Human-in-the-loop annotation pipeline for physics-first assets
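To make the five annotation dimensions concrete, here is a hypothetical record schema for a PhysXNet-style asset annotation. The field names, types, and example values are assumptions for illustration only, not the dataset's actual format.

```python
from dataclasses import dataclass

@dataclass
class PhysicsAnnotation:
    """One PhysXNet-style annotation record (hypothetical schema)."""
    absolute_scale_m: float    # real-world extent in metres
    material: str              # e.g. "oak wood", "stainless steel"
    affordance: list[str]      # e.g. ["graspable", "sittable"]
    kinematics: dict           # e.g. {"joint": "revolute", "axis": [0, 0, 1]}
    function_description: str  # free-text description of the part's role

# Example record for an illustrative chair-leg part.
chair_leg = PhysicsAnnotation(
    absolute_scale_m=0.45,
    material="oak wood",
    affordance=["supports load"],
    kinematics={"joint": "fixed"},
    function_description="Supports the seat and transfers load to the floor.",
)
print(chair_leg.material)
```

A human-in-the-loop pipeline of the kind described would have a vision-language model propose values for fields like these, with annotators verifying or correcting them.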