Memorization in 3D Shape Generation: An Empirical Study

📅 2025-12-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study presents the first systematic investigation of memorization in 3D generative models during novel shape synthesis, motivated by privacy-preserving training and improved generation diversity. It introduces a quantitative evaluation framework for 3D shape memorization, including a latent vector-set (Vecset)-based memorization metric, and conducts controlled ablation studies. Key findings: (i) memorization depends non-monotonically on the diffusion guidance scale, peaking at a moderate value; (ii) increasing Vecset length together with lightweight rotation augmentation suppresses memorization by over 40% while keeping FID and Chamfer Distance stable; and (iii) data modality, conditioning granularity, and architectural design each exert quantifiable, systematic influences on memorization. The work provides an empirical foundation and practical guidelines for improving both privacy compliance and generalization in 3D generative modeling.

📝 Abstract
Generative models are increasingly used in 3D vision to synthesize novel shapes, yet it remains unclear whether their generation relies on memorizing training shapes. Understanding their memorization could help prevent training data leakage and improve the diversity of generated results. In this paper, we design an evaluation framework to quantify memorization in 3D generative models and study the influence of different data and modeling designs on memorization. We first apply our framework to quantify memorization in existing methods. Next, through controlled experiments with a latent vector-set (Vecset) diffusion model, we find that, on the data side, memorization depends on data modality, and increases with data diversity and finer-grained conditioning; on the modeling side, it peaks at a moderate guidance scale and can be mitigated by longer Vecsets and simple rotation augmentation. Together, our framework and analysis provide an empirical understanding of memorization in 3D generative models and suggest simple yet effective strategies to reduce it without degrading generation quality. Our code is available at https://github.com/zlab-princeton/3d_mem.
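The abstract does not spell out the framework's exact Vecset-based metric, but a common generic way to quantify memorization, consistent with the Chamfer Distance reported in the summary, is to measure the distance from each generated shape to its nearest training shape; a near-zero score flags a likely copy. The function names below are illustrative, not the paper's API:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3)."""
    # Pairwise Euclidean distances between all points of a and b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Average nearest-neighbor distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def memorization_score(generated, training_set):
    """Distance from a generated shape to its closest training shape.

    A score near zero suggests the sample is a near-copy of a
    training shape; larger scores indicate a more novel shape.
    """
    return min(chamfer_distance(generated, t) for t in training_set)
```

In practice one would aggregate these scores over many generated samples (e.g., the fraction below a threshold) to obtain a dataset-level memorization rate.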
Problem

Research questions and friction points this paper is trying to address.

Quantify memorization in 3D generative models
Study data and modeling effects on memorization
Propose strategies to reduce memorization while preserving quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framework quantifies memorization in 3D generative models
Latent vector-set diffusion model tests data and modeling factors
Longer Vecsets and rotation augmentation reduce memorization effectively
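The rotation augmentation named above can be sketched minimally. The paper's exact augmentation recipe is not given here; this generic random rotation of a point cloud about the vertical axis is an assumed form of such a "simple rotation augmentation":

```python
import numpy as np

def random_z_rotation(points, rng):
    """Rotate a point cloud (N,3) by a random angle about the z (up) axis.

    Illustrative augmentation: each training shape is seen in a random
    orientation, discouraging the model from memorizing exact poses.
    """
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T
```

Because the rotation is rigid, it preserves pairwise distances and the z-coordinates of all points, so shape geometry is unchanged while orientation varies across epochs.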
Shu Pu, Huazhong University of Science and Technology
Boya Zeng, Princeton University
Kaichen Zhou, Harvard University
Mengyu Wang, Harvard University
Zhuang Liu, Princeton University

Topics: 3D Vision, Geometry and Graphics, 3D Representation