NeuralFur: Animal Fur Reconstruction From Multi-View Images

📅 2026-01-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods struggle to reconstruct realistic animal fur from multi-view images because of fine details, self-occlusions, and view-dependent appearance, and largely because no general-purpose fur prior dataset exists. This work proposes the first high-fidelity 3D animal fur modeling approach that, starting from a bald geometry, represents fur as hair strands and incorporates a vision-language model (VLM) to inject body-part-specific priors on hair length, orientation, and relation to gravity. Through joint geometric and photometric optimization, the method simulates physically plausible fur growth. By integrating a VLM into multi-view fur reconstruction, it generalizes across species and fur types while achieving superior realism compared to existing approaches.
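
Although the paper's exact loss is not given here, the gravity prior can be pictured as a simple alignment term on strand segments. Below is a minimal PyTorch sketch, an assumption rather than the authors' implementation: strand polylines are assumed stored root-to-tip, gravity is assumed to point along -z, and the per-strand `weights` stand in for hypothetical VLM-derived body-part priors (e.g., stronger where fur hangs down, weaker where it stands up).

```python
import torch
import torch.nn.functional as F

def gravity_loss(strand_pts: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Illustrative gravity-alignment term (not the paper's exact loss).

    strand_pts: (S, N, 3) strand polylines, ordered root-to-tip.
    weights:    (S,) per-strand prior strength, hypothetically derived
                from VLM body-part answers.
    """
    # Assumed world frame: gravity along -z (not specified by the paper).
    g = torch.tensor([0.0, 0.0, -1.0], device=strand_pts.device)
    # Unit direction of each strand segment.
    seg = F.normalize(strand_pts[:, 1:] - strand_pts[:, :-1], dim=-1)  # (S, N-1, 3)
    cos = (seg * g).sum(dim=-1)  # cosine between each segment and gravity
    # Penalize segments pointing against gravity, scaled per strand.
    return (weights[:, None] * (1.0 - cos)).mean()
```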

📝 Abstract
Reconstructing realistic animal fur geometry from images is a challenging task due to the fine-scale details, self-occlusion, and view-dependent appearance of fur. In contrast to human hairstyle reconstruction, there are also no datasets that can be leveraged to learn a fur prior for different animals. In this work, we present the first multi-view-based method for high-fidelity 3D fur modeling of animals using a strand-based representation, leveraging the general knowledge of a vision-language model. Given multi-view RGB images, we first reconstruct a coarse surface geometry using traditional multi-view stereo techniques. We then use a vision-language model (VLM) system to retrieve information about the realistic length structure of the fur for each part of the body. We use this knowledge to construct the animal's furless geometry and grow strands atop it. The fur reconstruction is supervised with both geometric and photometric losses computed from multi-view images. To mitigate orientation ambiguities stemming from the Gabor filters that are applied to the input images, we additionally utilize the VLM to guide the strands' growth direction and their relation to the gravity vector, which we incorporate as a loss. With this new scheme of using a VLM to guide 3D reconstruction from multi-view inputs, we show generalization across a variety of animals with different fur types. For additional results and code, please refer to https://neuralfur.is.tue.mpg.de.
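
For context on the orientation ambiguity the abstract mentions: strand-based reconstruction pipelines commonly compute a 2D orientation map by convolving each image with a bank of oriented Gabor kernels and keeping, per pixel, the angle of the strongest response. That angle is only defined modulo π, so the growth direction remains ambiguous, which is what the VLM guidance and gravity loss resolve. The sketch below is a generic illustration of this standard step with illustrative kernel parameters, not the authors' code.

```python
import cv2
import numpy as np

def gabor_orientation_map(gray: np.ndarray, n_orients: int = 32) -> np.ndarray:
    """Return the per-pixel dominant orientation in [0, pi).

    gray: single-channel image as a 2D array.
    Note: the result is ambiguous modulo pi, i.e., a strand pointing
    "up" and one pointing "down" produce the same orientation.
    """
    thetas = np.linspace(0.0, np.pi, n_orients, endpoint=False)
    responses = []
    for theta in thetas:
        # Kernel parameters are illustrative, not the paper's values.
        kern = cv2.getGaborKernel(ksize=(17, 17), sigma=2.0, theta=theta,
                                  lambd=4.0, gamma=0.5, psi=0.0)
        responses.append(np.abs(
            cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)))
    responses = np.stack(responses, axis=0)      # (n_orients, H, W)
    return thetas[np.argmax(responses, axis=0)]  # (H, W), ambiguous mod pi
```
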
Problem

Research questions and friction points this paper is trying to address.

animal fur reconstruction
multi-view images
3D fur modeling
strand-based representation
fur prior
Innovation

Methods, ideas, or system contributions that make the work stand out.

NeuralFur
vision language model
strand-based fur reconstruction
multi-view 3D reconstruction
gravity-aware fur modeling