PDT: Point Distribution Transformation with Diffusion Models

📅 2025-07-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fundamental challenge that unstructured point clouds lack explicit geometric and semantic structure. We propose the first end-to-end diffusion model framework that directly maps raw, unordered point clouds to semantically rich, structured point distributions. Methodologically, we design a novel denoising network architecture that jointly incorporates geometric priors and semantic constraints, augmented by a distribution alignment strategy to co-model shape structure and semantic attributes. To our knowledge, this is the first approach capable of generating multiple structured outputs—including surface-aligned keypoints, interior sparse joints, and continuous feature lines—within a unified framework. Extensive evaluation across diverse 3D understanding and reconstruction tasks demonstrates superior effectiveness, generalization capability, and cross-structural consistency, significantly outperforming existing optimization-based and supervised learning methods.

📝 Abstract
Point-based representations have consistently played a vital role in geometric data structures. Most point cloud learning and processing methods leverage the unordered and unconstrained nature of point sets to represent the underlying geometry of 3D shapes. However, how to extract meaningful structural information from unstructured point cloud distributions and transform them into semantically meaningful point distributions remains an under-explored problem. We present PDT, a novel framework for point distribution transformation with diffusion models. Given a set of input points, PDT learns to transform the point set from its original geometric distribution into a target distribution that is semantically meaningful. Our method utilizes diffusion models with a novel architecture and learning strategy, which effectively correlate the source and target distributions through a denoising process. Through extensive experiments, we show that our method successfully transforms input point clouds into various forms of structured outputs, ranging from surface-aligned keypoints and inner sparse joints to continuous feature lines. The results showcase our framework's ability to capture both geometric and semantic features, offering a powerful tool for 3D geometry processing tasks where structured point distributions are desired. Code will be available at this link: https://github.com/shanemankiw/PDT.
Problem

Research questions and friction points this paper is trying to address.

Transform unstructured point clouds into semantically meaningful distributions
Learn a mapping from geometric to semantically structured point distributions
Generate structured outputs such as keypoints and feature lines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion models for point distribution transformation
Novel architecture that correlates source and target distributions
Transforms point clouds into structured, semantically rich outputs
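The summary above describes a diffusion model whose denoising process carries a point set from its source geometric distribution toward a structured target distribution. The paper's code is not yet released, so the sketch below is only a minimal illustration of the generic conditional-diffusion machinery such a method builds on: a standard DDPM forward noising of the target points and one reverse denoising step. All names, the linear noise schedule, and the placeholder noise prediction are assumptions, not the authors' implementation; in PDT the prediction would come from a denoising network conditioned on the source point cloud.

```python
import numpy as np

def make_schedule(T=100, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and cumulative alpha products (standard DDPM)."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def forward_noise(x0, t, alpha_bars, rng):
    """Sample x_t ~ q(x_t | x_0): a noised version of the target points."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def reverse_step(xt, t, eps_pred, betas, alphas, alpha_bars, rng):
    """One DDPM reverse step given a noise prediction. In a method like PDT,
    eps_pred would be produced by a network conditioned on the source
    point cloud's geometry; here it is supplied directly as a placeholder."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

rng = np.random.default_rng(0)
betas, alphas, alpha_bars = make_schedule()
target_points = rng.standard_normal((128, 3))  # stand-in structured output
xt, eps = forward_noise(target_points, t=50, alpha_bars=alpha_bars, rng=rng)
x_prev = reverse_step(xt, 50, eps, betas, alphas, alpha_bars, rng)
print(xt.shape, x_prev.shape)
```

Running the reverse step from t = T down to 0 with a trained conditional predictor is what would realize the source-to-target distribution transformation the paper describes.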
Jionghao Wang
Texas A&M University, USA

Cheng Lin
The University of Hong Kong, China

Yuan Liu
The Hong Kong University of Science and Technology, China

Rui Xu
The University of Hong Kong, China

Zhiyang Dou
The University of Hong Kong, China

Xiao-Xiao Long
Associate Professor at Nanjing University; AnySyn3D
3D Vision, Generative AI, Spatial Intelligence, Embodied AI

Hao-Xiang Guo
Skywork AI
AIGC, Computer Graphics, 3D Computer Vision, Geometry Processing

Taku Komura
The University of Hong Kong
Character Animation, Computer Graphics, Robotics

Wenping Wang
Texas A&M University
Computer Graphics, Geometric Computing

Xin Li
Texas A&M University, USA