Any4D: Unified Feed-Forward Metric 4D Reconstruction

📅 2025-12-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing methods are limited to two-view scene flow estimation or sparse point tracking, failing to achieve N-frame, pixel-wise dense 4D reconstruction; monocular 4D approaches further lack the capability to fuse multi-modal sensor data (e.g., RGB-D, IMU, radar). To address these limitations, we propose a metric-scale, dense, feed-forward 4D scene reconstruction framework designed for multi-modal inputs. Our method introduces a modular 4D representation: view-dependent factors (e.g., depth, intrinsics) are modeled in the camera coordinate system, while view-invariant factors (e.g., extrinsics, scene flow) are unified within the world coordinate system, enabling cross-modal and scale-consistent modeling. We further design a multi-view Transformer architecture supporting joint local-global encoding and end-to-end feed-forward inference. Experiments demonstrate that our approach achieves 2-3× higher accuracy and 15× greater computational efficiency compared to state-of-the-art methods.

๐Ÿ“ Abstract
We present Any4D, a scalable multi-view transformer for metric-scale, dense, feed-forward 4D reconstruction. Any4D directly generates per-pixel motion and geometry predictions for N frames, in contrast to prior work that typically focuses on either 2-view dense scene flow or sparse 3D point tracking. Moreover, unlike other recent methods for 4D reconstruction from monocular RGB videos, Any4D can process additional modalities and sensors, such as RGB-D frames, IMU-based egomotion, and Radar Doppler measurements, when available. One of the key innovations that allows for such a flexible framework is a modular representation of a 4D scene; specifically, per-view 4D predictions are encoded using a variety of egocentric factors (depthmaps and camera intrinsics) represented in local camera coordinates, and allocentric factors (camera extrinsics and scene flow) represented in global world coordinates. We achieve superior performance across diverse setups, both in terms of accuracy (2-3X lower error) and compute efficiency (15X faster), opening avenues for multiple downstream applications.
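To make the modular representation concrete, here is a minimal sketch of the coordinate-frame split the abstract describes: egocentric factors (a depth map and camera intrinsics) live in local camera coordinates, while allocentric factors (camera extrinsics and per-pixel scene flow) live in global world coordinates. All function and variable names are illustrative assumptions, not from the Any4D codebase; this is the standard pinhole-camera geometry, not the paper's implementation.

```python
import numpy as np

def unproject(depth, K):
    """Lift a depth map (H, W) to camera-frame points (H, W, 3) using intrinsics K."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))          # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T                         # back-project to unit-depth rays
    return rays * depth[..., None]                          # scale rays by metric depth

def to_world(points_cam, pose):
    """Map camera-frame points to world coordinates via a 4x4 camera-to-world pose."""
    R, t = pose[:3, :3], pose[:3, 3]
    return points_cam @ R.T + t

# Toy example: 2x2 depth map, identity intrinsics and pose.
depth = np.full((2, 2), 2.0)          # every pixel at 2 m (egocentric, camera frame)
K = np.eye(3)                         # intrinsics (egocentric, camera frame)
pose = np.eye(4)                      # extrinsics (allocentric, world frame)
scene_flow = np.zeros((2, 2, 3))      # per-pixel motion (allocentric, world frame)

pts_world_t0 = to_world(unproject(depth, K), pose)
pts_world_t1 = pts_world_t0 + scene_flow    # 4D = per-pixel geometry + motion
```

Because scene flow is expressed in the world frame, predictions from different views and sensors compose without per-view rescaling, which is the point of the egocentric/allocentric split.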
Problem

Research questions and friction points this paper is trying to address.

Unified feed-forward 4D reconstruction from multi-view inputs
Handling multiple modalities like RGB-D, IMU, and Radar data
Achieving high accuracy and efficiency for downstream applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-view transformer for dense 4D reconstruction
Modular 4D scene representation with egocentric and allocentric factors
Supports multi-modal inputs like RGB-D, IMU, and Radar