🤖 AI Summary
Existing autoregressive mesh generation methods lack a training-free, interpretable metric for evaluating mesh tokenizers, which hinders efforts to optimize compression efficiency. To address this, we propose Per-Token Mesh Entropy (PTME), the first theoretical framework that quantifies tokenizer performance without any training. We further introduce a plug-and-play coordinate-merging technique that restructures tokenization by reordering sequences and losslessly fusing high-frequency coordinate patterns. Our approach combines information-theoretic entropy analysis with statistical modeling of coordinate distributions, and is compatible with mainstream mesh tokenizers, including MeshXL, MeshAnything V2, and EdgeRunner. Experiments demonstrate significant compression-ratio improvements across multiple tokenizers, and PTME correlates strongly with actual generation quality. This work establishes a novel, interpretable, and optimization-friendly paradigm for mesh serialization.
📝 Abstract
The next-coordinate prediction paradigm has emerged as the de facto standard in current auto-regressive mesh generation methods. Despite their effectiveness, there is no efficient way to measure the various tokenizers that serialize meshes into sequences. In this paper, we introduce a new metric, Per-Token Mesh Entropy (PTME), to evaluate existing mesh tokenizers theoretically, without any training. Building upon PTME, we propose a plug-and-play tokenization technique called coordinate merging. It further improves the compression ratios of existing tokenizers by rearranging and merging the most frequent coordinate patterns. Through experiments on tokenization methods such as MeshXL, MeshAnything V2, and EdgeRunner, we validate the performance of our method. We hope that the proposed PTME and coordinate merging can enhance existing mesh tokenizers and guide the further development of native mesh generation.
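To make the two ideas concrete, here is a minimal sketch of (a) a Shannon entropy-per-token measurement over a quantized coordinate sequence and (b) a single BPE-style merge of the most frequent adjacent coordinate pair. This is an illustrative stand-in only: the abstract does not give PTME's exact formula or the paper's merging procedure, so `per_token_entropy` and `merge_most_frequent_pair` are hypothetical helpers, not the authors' implementation.

```python
from collections import Counter
import math

def per_token_entropy(tokens):
    """Shannon entropy (bits) per token of a sequence.
    Illustrative proxy for PTME; the paper's actual definition may differ."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def merge_most_frequent_pair(tokens):
    """One BPE-style merge: fuse the most frequent adjacent pair into one token."""
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return list(tokens)
    best, _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best:
            merged.append(best)  # the fused pair becomes a single new token
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

# Toy quantized coordinate stream with a recurring (3, 7) pattern.
seq = [3, 7, 1, 3, 7, 2, 3, 7, 3, 7]
short = merge_most_frequent_pair(seq)
print(len(seq), "->", len(short))  # the sequence shortens after one merge
```

Repeating such merges on the highest-frequency patterns is what drives the compression-ratio gains the abstract describes; the entropy-per-token view then gives a training-free signal for comparing tokenizers.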