DenseMarks: Learning Canonical Embeddings for Human Head Images via Point Tracks

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenging problem of dense semantic correspondence in head images. The authors propose DenseMarks, a pixel-level embedding learning method based on Vision Transformers that maps each pixel to an interpretable 3D canonical space. The key contributions are: (1) strong supervision from point trajectories, enabling pose-invariant semantic matching across the full head, including hair; (2) multi-task learning (face landmark detection and segmentation) combined with spatial continuity constraints, enforced via latent cube feature regularization, to improve the geometric consistency of the embeddings; and (3) a contrastive loss that enhances discriminability. DenseMarks achieves state-of-the-art performance on geometry-aware point matching and monocular head tracking with 3D Morphable Models, demonstrating significantly improved robustness to large pose variations.
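As a rough illustration of the contrastive objective described above (not the paper's implementation), matched points from a point tracker can be pulled together in embedding space with an InfoNCE-style loss; the function name and temperature value below are assumptions:

```python
import numpy as np

def contrastive_match_loss(emb_a, emb_b, temperature=0.07):
    """InfoNCE-style loss over matched pixel embeddings.

    emb_a, emb_b: (N, 3) arrays of canonical-cube embeddings for N
    tracked points in two frames; row i of emb_a matches row i of emb_b.
    """
    # L2-normalize so the dot product is cosine similarity
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature               # (N, N) similarity matrix
    # Softmax cross-entropy with the diagonal (true matches) as targets
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Correctly matched pairs yield a lower loss than mismatched ones, which is what drives embeddings of the same semantic point toward each other across poses.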

📝 Abstract
We propose DenseMarks, a new learned representation for human heads, enabling high-quality dense correspondences of human head images. For a 2D image of a human head, a Vision Transformer network predicts a 3D embedding for each pixel, which corresponds to a location in a 3D canonical unit cube. In order to train our network, we collect a dataset of pairwise point matches, estimated by a state-of-the-art point tracker over a collection of diverse in-the-wild talking-head videos, and guide the mapping via a contrastive loss, encouraging matched points to have close embeddings. We further employ multi-task learning with face landmark and segmentation constraints, as well as imposing spatial continuity of embeddings through latent cube features, which results in an interpretable and queryable canonical space. The representation can be used for finding common semantic parts, face/head tracking, and stereo reconstruction. Due to the strong supervision, our method is robust to pose variations and covers the entire head, including hair. Additionally, the canonical space bottleneck ensures the obtained representations are consistent across diverse poses and individuals. We demonstrate state-of-the-art results in geometry-aware point matching and monocular head tracking with 3D Morphable Models. The code and the model checkpoint will be made available to the public.
Problem

Research questions and friction points this paper is trying to address.

Learning canonical 3D embeddings for human head images
Establishing dense correspondences across diverse head poses
Creating interpretable canonical space for head tracking and reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learned 3D embeddings for head image pixels
Training with contrastive loss on point tracks
Multi-task learning with spatial continuity constraints
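Since each pixel maps to a point in the canonical unit cube, dense correspondence between two images reduces to nearest-neighbor search in embedding space. A minimal brute-force sketch, assuming per-pixel (H, W, 3) embedding maps (the function name is hypothetical):

```python
import numpy as np

def dense_correspondences(emb_src, emb_tgt):
    """For each source pixel, find the target pixel whose canonical-cube
    embedding is nearest (brute-force nearest neighbor).

    emb_src: (Hs, Ws, 3), emb_tgt: (Ht, Wt, 3) pixel embeddings.
    Returns an (Hs, Ws, 2) array of (row, col) matches in the target image.
    """
    Hs, Ws, _ = emb_src.shape
    Ht, Wt, _ = emb_tgt.shape
    src = emb_src.reshape(-1, 3)
    tgt = emb_tgt.reshape(-1, 3)
    # Squared distance between every source and every target embedding
    d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)                 # flat index of nearest target pixel
    rows, cols = np.divmod(idx, Wt)
    return np.stack([rows, cols], axis=-1).reshape(Hs, Ws, 2)
```

A real system would use an approximate nearest-neighbor index for speed, but the principle is the same: the shared canonical space is what makes matches comparable across poses and identities.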