Ev4DGS: Novel-view Rendering of Non-Rigid Objects from Monocular Event Streams

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing event-camera methods for novel-view synthesis of non-rigid objects require auxiliary sparse RGB inputs, limiting practical applicability. This work presents the first monocular event-only approach for dynamic non-rigid novel-view synthesis, without any RGB supervision. Our method introduces an explicit, differentiable representation based on deformable 3D Gaussians, jointly optimizing deformation and radiance fields directly from event streams. To guide learning, we design an event-driven binary mask generation mechanism to encode deformation priors, and incorporate a 2D event observation loss to enforce joint reconstruction of geometry and appearance. Evaluated on both synthetic and real-world datasets, our approach significantly outperforms prior baselines, producing high-fidelity, temporally consistent novel views. All code, models, and datasets will be publicly released.
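The 2D event observation loss mentioned above can be illustrated with the standard event-generation model: an event camera fires an event at a pixel when the log-intensity change exceeds a contrast threshold. A minimal sketch of such a loss is below; the function name, the mean-squared-error form, and the `contrast_threshold` value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def event_observation_loss(render_t0, render_t1, event_frame,
                           contrast_threshold=0.2, eps=1e-6):
    """Compare the predicted log-intensity change between two rendered
    greyscale frames against an accumulated event frame (signed event
    counts per pixel). Sketch of the standard event-generation model;
    the paper's actual loss may differ in form and weighting."""
    # Predicted change in log intensity between the two renderings.
    pred_change = np.log(render_t1 + eps) - np.log(render_t0 + eps)
    # Each event accounts for one contrast-threshold step in log intensity.
    observed_change = contrast_threshold * event_frame
    return np.mean((pred_change - observed_change) ** 2)
```

The loss vanishes when the rendered frames reproduce exactly the intensity changes that the observed events imply, which is what couples the 3D Gaussian model to the raw event stream.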

📝 Abstract
Event cameras offer various advantages for novel view rendering compared to synchronously operating RGB cameras, and efficient event-based techniques supporting rigid scenes have been recently demonstrated in the literature. In the case of non-rigid objects, however, existing approaches additionally require sparse RGB inputs, which can be a substantial practical limitation; it remains unknown if similar models could be learned from event streams only. This paper sheds light on this challenging open question and introduces Ev4DGS, i.e., the first approach for novel view rendering of non-rigidly deforming objects in the explicit observation space (i.e., as RGB or greyscale images) from monocular event streams. Our method regresses a deformable 3D Gaussian Splatting representation through 1) a loss relating the outputs of the estimated model with the 2D event observation space, and 2) a coarse 3D deformation model trained from binary masks generated from events. We perform experimental comparisons on existing synthetic and newly recorded real datasets with non-rigid objects. The results demonstrate the validity of Ev4DGS and its superior performance compared to multiple naive baselines that can be applied in our setting. We will release our models and the datasets used in the evaluation for research purposes; see the project webpage: https://4dqv.mpi-inf.mpg.de/Ev4DGS/.
Problem

Research questions and friction points this paper is trying to address.

Rendering non-rigid objects from monocular event streams without RGB inputs
Learning 3D deformation models solely from event-based observations
Creating novel-view RGB images from event data for dynamic scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Regresses a deformable 3D Gaussian Splatting representation
Uses a loss relating model outputs to 2D event observations
Trains a coarse 3D deformation model from event-derived binary masks
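The event-derived binary masks in the last point can be sketched as accumulating events over a short time window and thresholding the per-pixel count, so that regions of the deforming object (which trigger many events) are marked active. The helper name and the simple count threshold are assumptions for illustration; the paper's mask-generation mechanism may be more elaborate.

```python
import numpy as np

def events_to_binary_mask(events, height, width, count_threshold=2):
    """Accumulate events (dict of pixel-coordinate arrays "x" and "y"
    within one time window) into a per-pixel count image, then threshold
    it to obtain a binary mask of moving (deforming) regions. A
    simplified sketch of event-driven mask generation."""
    counts = np.zeros((height, width), dtype=np.int32)
    # np.add.at performs unbuffered accumulation, so repeated
    # events at the same pixel are all counted.
    np.add.at(counts, (events["y"], events["x"]), 1)
    return counts >= count_threshold
```

Such masks provide a coarse silhouette of the non-rigid motion at each time step, which is enough to supervise a coarse 3D deformation model without any RGB frames.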