PvNeXt: Rethinking Network Design and Temporal Motion for Point Cloud Video Recognition

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 4D representation learning methods for point cloud video recognition incur significant computational redundancy from iterative dense inter-frame queries. To address this, the paper proposes an efficient single-query framework built on two complementary modules: the Motion Imitator learns motion priors via frame-level virtual motion modeling, while the Single-Step Motion Encoder performs a lightweight, one-shot cross-frame query that jointly encodes motion and geometry, eliminating the recurrent query paradigm entirely. On standard benchmarks (NTU, PKU, SYSU), the method matches or improves accuracy (+0.3%–0.8%) while substantially reducing computational cost, averaging 42% lower FLOPs and latency, demonstrating a principled trade-off between efficiency and discriminative performance.

📝 Abstract
Point cloud video perception has become an essential task in 3D vision. Current 4D representation learning techniques typically engage in iterative processing coupled with dense query operations. Although effective in capturing temporal features, this approach leads to substantial computational redundancy. In this work, we propose a framework, named PvNeXt, for effective yet efficient point cloud video recognition via a personalized one-shot query operation. Specifically, PvNeXt consists of two key modules: the Motion Imitator and the Single-Step Motion Encoder. The former, the Motion Imitator, is designed to capture the temporal dynamics inherent in point cloud sequences, generating a virtual motion frame corresponding to each frame. The Single-Step Motion Encoder then performs a one-step query operation, associating the point cloud of each frame with its corresponding virtual motion frame, thereby extracting motion cues from the sequence and capturing temporal dynamics across it. Through the integration of these two modules, PvNeXt enables personalized one-shot queries for each frame, effectively eliminating the need for frame-specific looping and intensive query processes. Extensive experiments on multiple benchmarks demonstrate the effectiveness of our method.
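The efficiency argument can be made concrete with a toy numpy sketch. This is not the paper's architecture: `knn_query`, `motion_imitator`, and the feature functions below are hypothetical stand-ins (the real Motion Imitator is learned, and the encoder operates on deep features, not raw xyz). The sketch only contrasts the two query patterns: the iterative paradigm runs a dense pairwise distance query per adjacent frame pair, while the one-shot paradigm first synthesizes a virtual motion frame per real frame and then associates each point with its virtual counterpart in a single step, with no per-frame neighbor search.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, C = 8, 128, 3                # frames, points per frame, xyz
video = rng.normal(size=(T, N, C))  # toy point cloud video

def knn_query(src, dst, k=4):
    """Dense query: for each point in src, gather its k nearest neighbours in dst."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)   # (N, N) distance matrix
    idx = np.argsort(d, axis=1)[:, :k]
    return dst[idx]                                                   # (N, k, C)

def iterative_features(video, k=4):
    """Iterative paradigm: one dense inter-frame query per adjacent frame pair."""
    feats = []
    for t in range(len(video) - 1):
        neigh = knn_query(video[t], video[t + 1], k)                  # O(N^2) per pair
        feats.append((neigh - video[t][:, None, :]).mean(axis=1))     # crude motion cue
    return np.stack(feats)                                            # (T-1, N, C)

def motion_imitator(video):
    """Hypothetical stand-in for the learned Motion Imitator: a per-frame
    displacement (here, simple frame differencing) yields virtual motion frames."""
    disp = np.diff(video, axis=0, prepend=video[:1])                  # (T, N, C), zero for frame 0
    return video + disp                                               # virtual motion frames

def one_shot_features(video):
    """One-shot paradigm: each point is paired with its own virtual counterpart,
    so motion cues come from a single vectorised association, no kNN loop."""
    virtual = motion_imitator(video)
    return virtual - video                                            # (T, N, C)

print(iterative_features(video).shape)  # (7, 128, 3)
print(one_shot_features(video).shape)   # (8, 128, 3)
```

The point of the contrast: `iterative_features` builds a T-1 sequence of N×N distance matrices, while `one_shot_features` replaces them with one element-wise association against precomputed virtual frames, which is where the FLOPs saving in the summary comes from.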
Problem

Research questions and friction points this paper is trying to address.

Reduces computational redundancy in point cloud video recognition
Introduces one-shot query for efficient temporal feature extraction
Captures motion dynamics without frame-specific looping
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personalized one-shot query operation
Motion Imitator captures temporal dynamics
Single-Step Motion Encoder extracts motion cues