CloseUpAvatar: High-Fidelity Animatable Full-Body Avatars with Mixture of Multi-Scale Textures

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of high-fidelity rendering of full-body virtual avatars under complex camera motion—particularly at close range—this paper proposes a novel two-layer learnable texture representation. Built upon texture-plane-based human modeling, our method introduces multi-scale texture mapping and a distance-aware differentiable fusion mechanism that adaptively selects high- or low-frequency detail representations based on camera distance. This enables real-time rendering while significantly improving near-field detail fidelity and cross-view generalization. Key contributions include: (1) the first integration of a two-layer texture structure into animatable human representations; and (2) a dynamically weighted multi-scale blending strategy ensuring geometric consistency across varying viewing distances. Evaluated on the ActorsHQ dataset for novel-view synthesis, our approach achieves state-of-the-art performance both qualitatively and quantitatively—outperforming existing methods—while sustaining high frame rates.
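The distance-aware fusion described above can be illustrated with a minimal sketch. The function names, the linear blending rule, and the `d_near`/`d_far` thresholds are assumptions for illustration; the paper's actual learned, differentiable fusion mechanism may weight the two texture layers differently.

```python
import numpy as np

def blend_weight(d, d_near=0.5, d_far=3.0):
    # Hypothetical weight: 1.0 when the camera is at or inside d_near
    # (full high-frequency detail), fading linearly to 0.0 at d_far.
    return float(np.clip((d_far - d) / (d_far - d_near), 0.0, 1.0))

def fuse_textures(tex_high, tex_low, cam_dist, d_near=0.5, d_far=3.0):
    # Linear fusion of the two texture layers based on camera distance.
    # A linear blend is an illustrative stand-in for the paper's
    # learned distance-aware fusion.
    w = blend_weight(cam_dist, d_near, d_far)
    return w * tex_high + (1.0 - w) * tex_low
```

At `cam_dist <= d_near` the high-frequency texture dominates entirely; beyond `d_far` only the low-frequency layer contributes, matching the behavior the summary describes.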

📝 Abstract
We present CloseUpAvatar, a novel approach for articulated human avatar representation that handles more general camera motions while preserving rendering quality for close-up views. CloseUpAvatar represents an avatar as a set of textured planes with two sets of learnable textures for low- and high-frequency detail. The method automatically switches to high-frequency textures only for cameras positioned close to the avatar's surface and gradually reduces their impact as the camera moves farther away. This parametrization enables CloseUpAvatar to adjust rendering quality based on camera distance, ensuring realistic rendering across a wider range of camera orientations than previous approaches. We provide experiments on the ActorsHQ dataset with high-resolution input images. CloseUpAvatar demonstrates both qualitative and quantitative improvements over existing methods when rendering from a wide range of novel camera positions, while maintaining high FPS by limiting the number of required primitives.
Problem

Research questions and friction points this paper is trying to address.

Representing articulated human avatars with multi-scale textures for close-up views
Automatically adjusting rendering quality based on camera distance
Ensuring realistic rendering across wide camera orientations with high performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of multi-scale textures for detail
Automatic texture switching based on camera distance
Limited primitives for high FPS rendering
D. Svitov
Università degli Studi di Genova
Pietro Morerio
Researcher @ IIT
Computer Vision · Pattern Recognition · Machine Learning · Deep Learning · Artificial Intelligence
L. Agapito
Department of Computer Science, University College London
A. D. Bue
Istituto Italiano di Tecnologia (IIT)