FreeMesh: Boosting Mesh Generation with Coordinates Merging

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing autoregressive mesh generation methods lack a training-free, interpretable metric for evaluating mesh tokenizers, which hinders the optimization of compression efficiency. To address this, the authors propose Per-Token Mesh Entropy (PTME), the first theoretical framework that quantifies tokenizer performance without any training. They further introduce a plug-and-play coordinate-merging technique that restructures tokenization by reordering sequences and losslessly fusing high-frequency coordinate patterns. The approach combines information-theoretic entropy analysis with statistical modeling of coordinate distributions, ensuring compatibility with mainstream mesh tokenizers, including MeshXL, MeshAnything V2, and Edgerunner. Experiments demonstrate significant compression-ratio improvements across multiple tokenizers, while PTME exhibits a strong correlation with actual generation quality. This work establishes a novel, interpretable, and optimization-friendly paradigm for mesh serialization.
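The entropy idea behind PTME can be illustrated with plain empirical per-token Shannon entropy over a serialized token sequence. This is only a sketch: the paper's exact PTME formulation (and any normalization it applies) is not reproduced here, and the toy token values are made up.

```python
import math
from collections import Counter

def per_token_entropy(tokens):
    """Empirical Shannon entropy (bits per token) of a token sequence.

    Generic illustration only; the paper's exact PTME definition and
    normalization are assumptions not captured here.
    """
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical toy sequence of quantized coordinate tokens:
seq = [3, 7, 3, 3, 9, 7, 3, 1]
print(per_token_entropy(seq))  # → 1.75
```

A lower value means each token carries less information on average, i.e. the tokenizer has left more redundancy that a merging scheme could exploit.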

📝 Abstract
The next-coordinate prediction paradigm has emerged as the de facto standard in current auto-regressive mesh generation methods. Despite their effectiveness, there is no efficient measurement for the various tokenizers that serialize meshes into sequences. In this paper, we introduce a new metric Per-Token-Mesh-Entropy (PTME) to evaluate the existing mesh tokenizers theoretically without any training. Building upon PTME, we propose a plug-and-play tokenization technique called coordinate merging. It further improves the compression ratios of existing tokenizers by rearranging and merging the most frequent patterns of coordinates. Through experiments on various tokenization methods like MeshXL, MeshAnything V2, and Edgerunner, we further validate the performance of our method. We hope that the proposed PTME and coordinate merging can enhance the existing mesh tokenizers and guide the further development of native mesh generation.
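The "merging the most frequent patterns of coordinates" step described in the abstract resembles a BPE-style merge: the most frequent adjacent token pair is fused into one new token, losslessly, since the fused token can always be expanded back. The sketch below is an assumption; the paper's rearrangement rule and exact merge criterion are not reproduced.

```python
from collections import Counter

def merge_most_frequent_pair(tokens):
    """One BPE-style merge step over a coordinate-token sequence.

    Sketch of frequency-driven, lossless pair fusion; the fused token is
    represented as a tuple so it remains decodable back to its parts.
    """
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens, None
    (a, b), _ = pairs.most_common(1)[0]  # most frequent adjacent pair
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
            out.append((a, b))  # fused token, expandable back to (a, b)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out, (a, b)

# Hypothetical toy sequence: the pair (1, 2) occurs twice and gets merged.
merged_seq, rule = merge_most_frequent_pair([1, 2, 1, 2, 3])
print(merged_seq)  # → [(1, 2), (1, 2), 3]
```

Repeating this step with a growing merge table shortens the sequence, which is exactly the compression-ratio improvement the paper reports for tokenizers like MeshXL and MeshAnything V2.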
Problem

Research questions and friction points this paper is trying to address.

Evaluating mesh tokenizers without training using PTME metric
Improving compression ratios via coordinate merging technique
Validating method performance across existing tokenization approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Per-Token-Mesh-Entropy (PTME) metric
Proposes plug-and-play coordinate merging technique
Improves compression ratios of mesh tokenizers
Jian Liu
Hong Kong University of Science and Technology
Haohan Weng
South China University of Technology
Generative Models, Computer Vision
Biwen Lei
Tencent
Computer Vision, Deep Learning
Xianghui Yang
Tencent Hunyuan
Zibo Zhao
Hunyuan, Tencent; ShanghaiTech
Zhuo Chen
Tencent Hunyuan
Song Guo
Chair Professor of CSE, HKUST
Large Language Model, Edge AI, Machine Learning Systems
Tao Han
Hong Kong University of Science and Technology
Chunchao Guo
Tencent Hunyuan