L4P: Low-Level 4D Vision Perception Unified

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current low-level 4D vision perception methods predominantly adopt task-specific architectures, lacking generality. This paper introduces the first unified feed-forward model for low-level 4D perception, built upon a Vision Transformer (ViT) backbone to learn shared spatiotemporal feature representations. Lightweight task-specific heads enable fully parallel inference across both dense (e.g., depth estimation, optical flow) and sparse (e.g., 2D/3D object tracking) tasks. Crucially, the model performs end-to-end joint prediction within a single forward pass—eliminating the need for task switching or parameter reloading. Evaluated on multiple benchmarks, it achieves state-of-the-art performance, matching or surpassing dedicated single-task models while maintaining inference latency comparable to individual task-specific methods. To our knowledge, this is the first framework that unifies generality, efficiency, and high performance in low-level 4D perception.
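The shared-backbone-plus-lightweight-heads design the summary describes can be sketched in plain Python. This is an illustrative sketch only, not the paper's implementation: the class names, the toy "features," and the toy head outputs are all made up to show the pattern of encoding once and running every task head on the same features in a single pass.

```python
class SharedBackbone:
    """Stand-in for the ViT video encoder: maps frames to shared features."""
    def encode(self, video_frames):
        # Toy "feature": mean pixel value per frame (placeholder only).
        return [sum(frame) / len(frame) for frame in video_frames]

class DepthHead:
    """Stand-in for a lightweight dense head (e.g., depth estimation)."""
    def predict(self, features):
        return [x * 2.0 for x in features]  # toy dense output

class TrackHead:
    """Stand-in for a lightweight sparse head (e.g., 2D tracking)."""
    def predict(self, features):
        return [(i, x) for i, x in enumerate(features)]  # toy track points

class UnifiedModel:
    """Encode the video once, then run all task heads on the same shared
    features in one forward pass -- no task switching or parameter reloading."""
    def __init__(self, backbone, heads):
        self.backbone = backbone
        self.heads = heads

    def forward(self, video_frames):
        features = self.backbone.encode(video_frames)  # shared computation
        return {name: head.predict(features)           # per-task heads
                for name, head in self.heads.items()}

model = UnifiedModel(SharedBackbone(),
                     {"depth": DepthHead(), "track2d": TrackHead()})
outputs = model.forward([[0.1, 0.3], [0.5, 0.7]])
print(sorted(outputs))  # prints ['depth', 'track2d']
```

Because each head reads only the shared features, adding a new task means training one small head rather than a whole new model, which is the efficiency argument the summary makes.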

📝 Abstract
The spatio-temporal relationship between the pixels of a video carries critical information for low-level 4D perception. A single model that reasons about it should be able to solve several such tasks well. Yet, most state-of-the-art methods rely on architectures specialized for the task at hand. We present L4P (pronounced "LAP"), a feedforward, general-purpose architecture that solves low-level 4D perception tasks in a unified framework. L4P combines a ViT-based backbone with per-task heads that are lightweight and therefore do not require extensive training. Despite its general and feedforward formulation, our method matches or surpasses the performance of existing specialized methods on both dense tasks, such as depth or optical flow estimation, and sparse tasks, such as 2D/3D tracking. Moreover, it solves all those tasks at once in a time comparable to that of individual single-task methods.
Problem

Research questions and friction points this paper is trying to address.

State-of-the-art methods rely on task-specific architectures, lacking generality
No single model exploits the shared spatio-temporal structure of video across tasks
Running a separate specialized model per task is inefficient at inference time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified feedforward 4D architecture
ViT-based backbone with lightweight heads
Solves all tasks at once with latency comparable to single-task methods