Efficient Event Camera Volume System

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of integrating event cameras into standard robotic systems, where their sparse asynchronous output and conventional time-binning approaches often introduce motion artifacts. The authors propose modeling the event stream as a continuous-time sequence of Dirac impulses, enabling artifact-free compression directly at event timestamps. They introduce an adaptive framework that dynamically selects among DCT, DTFT, or DWT transforms based on real-time event density, coupled with a domain-specific coefficient pruning strategy. Evaluated on MVSEC, the method achieves an EventSAM segmentation mIoU of 0.87—substantially outperforming voxel grids (0.44)—with only 1.5 ms latency using DCT, a 2.7× throughput improvement, and state-of-the-art reconstruction fidelity. The approach demonstrates strong generalization and is suitable for real-time deployment in ROS2.
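The core idea of evaluating transforms directly at event timestamps can be sketched in a few lines. Treating the stream as a Dirac impulse train s(t) = Σ_k δ(t − t_k), a transform coefficient reduces to the basis function summed at the event times, with no binning step. The function name, normalization, and cosine basis below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def dct_coeffs_at_events(timestamps, window, n_coeffs):
    """Evaluate cosine (DCT-style) basis functions directly at event timestamps.

    For a Dirac impulse train s(t) = sum_k delta(t - t_k), each coefficient
    is the basis function sampled at the normalized event times and summed --
    no temporal binning, hence no binning artifacts.
    """
    coeffs = []
    for n in range(n_coeffs):
        c = sum(math.cos(math.pi * n * (t / window)) for t in timestamps)
        coeffs.append(c)
    return coeffs

# Example: three events inside a 10 ms window
events_ms = [1.2, 4.7, 8.3]
c = dct_coeffs_at_events(events_ms, window=10.0, n_coeffs=4)
```

Note that coefficient 0 simply counts the events (cos(0) = 1 for every timestamp), which matches the intuition that the DC term captures overall event density.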

📝 Abstract
Event cameras promise low latency and high dynamic range, yet their sparse output challenges integration into standard robotic pipelines. We introduce EECVS (Efficient Event Camera Volume System), a novel framework that models event streams as continuous-time Dirac impulse trains, enabling artifact-free compression through direct transform evaluation at event timestamps. Our key innovation combines density-driven adaptive selection among DCT, DTFT, and DWT transforms with transform-specific coefficient pruning strategies tailored to each domain's sparsity characteristics. The framework eliminates temporal binning artifacts while automatically adapting compression strategies based on real-time event density analysis. On EHPT-XC and MVSEC datasets, our framework achieves superior reconstruction fidelity, with DTFT delivering the lowest earth mover's distance. In downstream segmentation tasks, EECVS demonstrates robust generalization. Notably, our approach demonstrates exceptional cross-dataset generalization: when evaluated with EventSAM segmentation, EECVS achieves mean IoU 0.87 on MVSEC versus 0.44 for voxel grids at 24 channels, while remaining competitive on EHPT-XC. Our ROS2 implementation provides real-time deployment, with DCT processing achieving 1.5 ms latency and 2.7× higher throughput than alternative transforms, establishing the first adaptive event compression framework that maintains both computational efficiency and superior generalization across diverse robotic scenarios.
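The density-driven selection among DCT, DTFT, and DWT can be pictured as a simple dispatch on events per unit time. The thresholds and the density-to-transform mapping below are hypothetical placeholders; the abstract states that selection is driven by real-time event density but does not publish the exact rule:

```python
def select_transform(event_count, window_ms, low_thresh=50.0, high_thresh=500.0):
    """Pick a transform from event density (events per millisecond).

    Thresholds and the mapping are illustrative assumptions, not the
    paper's published policy. The comments record plausible rationales.
    """
    density = event_count / window_ms
    if density < low_thresh:
        return "DTFT"   # sparse regime: prioritize reconstruction fidelity
    elif density < high_thresh:
        return "DWT"    # moderate regime: multi-resolution representation
    return "DCT"        # dense regime: fastest path (1.5 ms latency reported)
```

A dispatcher like this keeps the per-window decision O(1), which matters for the real-time ROS2 deployment the paper targets.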
Problem

Research questions and friction points this paper is trying to address.

event camera
sparse output
temporal binning artifacts
adaptive compression
robotic pipelines
Innovation

Methods, ideas, or system contributions that make the work stand out.

event camera
adaptive transform selection
Dirac impulse train
coefficient pruning
temporal binning-free
Juan Camilo Soto
Purdue University, USA
Ian Noronha
Purdue University, USA
Saru Bharti
Purdue University, USA
Upinder Kaur
Assistant Professor, Purdue University
Robotics, Cyber-Physical Systems, Multi-Modal Perception, Artificial Intelligence