🤖 AI Summary
Existing methods for generating manifold meshes typically rely on indirect representations—such as level sets or template deformations—making it difficult to directly produce high-quality, topologically unconstrained polygonal meshes with structural integrity. This paper introduces the first end-to-end differentiable framework that explicitly models half-edge structure via vertex-level continuous connectivity embeddings, enabling direct generation of discrete manifold-conforming meshes in a continuous latent space. Key contributions include: (1) the first continuous neighborhood relation learning mechanism; (2) mesh distribution fitting via stochastic optimization; and (3) topology-agnostic generation and repair capabilities. Evaluated on large-scale datasets, our method significantly improves mesh element quality, geometric fidelity, and topological diversity. It establishes the first truly end-to-end differentiable approach for manifold mesh generation and repair, bridging a critical gap between implicit representation learning and explicit, valid mesh synthesis.
📝 Abstract
Meshes are ubiquitous in visual computing and simulation, yet most existing machine learning techniques represent meshes only indirectly, e.g. as the level set of a scalar field or deformation of a template, or as a disordered triangle soup lacking local structure. This work presents a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network. Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh. In particular, our vertex embeddings generate cyclic neighbor relationships in a halfedge mesh representation, which gives a guarantee of edge-manifoldness and the ability to represent general polygonal meshes. This representation is well-suited to machine learning and stochastic optimization, without restriction on connectivity or topology. We first explore the basic properties of this representation, then use it to fit distributions of meshes from large datasets. The resulting models generate diverse meshes with tessellation structure learned from the dataset population, with concise details and high-quality mesh elements. In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.