AI Summary
Existing synthetic event datasets predominantly rely on dense RGB videos and therefore suffer from limited viewpoint diversity, geometric inconsistency, and high hardware costs. To address these limitations, GS2E introduces the first large-scale, high-fidelity synthetic event dataset generated from sparse multi-view real-world RGB images. It first reconstructs static scenes with 3D Gaussian Splatting, then synthesizes temporally dense, geometrically consistent event streams by combining adaptive trajectory interpolation with physically grounded contrast-threshold modeling, ensuring robustness to illumination variations and motion dynamics. This approach departs from conventional video-based synthesis paradigms, enabling more realistic and scalable event generation. Evaluated on event-driven 3D reconstruction tasks, GS2E significantly improves model generalization across unseen scenes and viewpoints. It also establishes a new benchmark for event-based vision research, enabling systematic evaluation of geometry-aware, lighting-robust, and motion-adaptive event processing methods.
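The "adaptive trajectory interpolation" step densifies a sparse set of camera poses into a smooth, temporally dense trajectory before rendering. The summary does not specify GS2E's exact scheme, so the sketch below illustrates only the common baseline it adapts: linear interpolation for camera positions and spherical linear interpolation (slerp) for rotations. Function names and the fixed upsampling `factor` are our assumptions, not GS2E's API.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    dot = np.dot(q0, q1)
    if dot < 0.0:          # flip one quaternion to take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:       # nearly parallel: linear interpolation is stable
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def densify_trajectory(positions, quaternions, factor):
    """Upsample a sparse camera trajectory: lerp for translation,
    slerp for rotation, inserting `factor` poses per input segment."""
    out_p, out_q = [], []
    for i in range(len(positions) - 1):
        for k in range(factor):
            t = k / factor
            out_p.append((1 - t) * positions[i] + t * positions[i + 1])
            out_q.append(slerp(quaternions[i], quaternions[i + 1], t))
    out_p.append(positions[-1])
    out_q.append(quaternions[-1])
    return np.array(out_p), np.array(out_q)
```

An "adaptive" variant would vary the per-segment density, e.g. inserting more poses where inter-frame motion (and hence event rate) is higher; the fixed `factor` here is the simplest case.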
Abstract
We introduce GS2E (Gaussian Splatting to Event), a large-scale synthetic event dataset for high-fidelity event vision tasks, generated from real-world sparse multi-view RGB images. Existing event datasets are typically synthesized from dense RGB videos, which lack viewpoint diversity and geometric consistency, or depend on expensive, difficult-to-scale hardware setups. GS2E overcomes these limitations by first reconstructing photorealistic static scenes with 3D Gaussian Splatting and then applying a novel, physically informed event simulation pipeline that integrates adaptive trajectory interpolation with physically consistent event contrast-threshold modeling. This yields temporally dense and geometrically consistent event streams under diverse motion and lighting conditions, while maintaining strong alignment with the underlying scene structure. Experimental results on event-based 3D reconstruction demonstrate GS2E's superior generalization and its practical value as a benchmark for advancing event vision research.
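The contrast-threshold model referenced in the abstract is the standard event-camera abstraction: a pixel fires an event when its log intensity drifts from a per-pixel reference by more than a threshold C. The sketch below is a minimal, simplified illustration of that model, not GS2E's actual simulator (GS2E additionally models the threshold physically and varies it per condition); the function name and the one-event-per-crossing simplification are ours.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2, eps=1e-6):
    """Convert a sequence of grayscale frames into events using the
    standard contrast-threshold model: an event (x, y, t, polarity)
    fires when the per-pixel log-intensity change exceeds `threshold`.
    frames: (N, H, W) float array; timestamps: length-N sequence in seconds.
    """
    log_ref = np.log(frames[0] + eps)   # per-pixel reference log intensity
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_cur = np.log(frame + eps)
        diff = log_cur - log_ref
        # fire where |diff| crosses the threshold; multiple crossings
        # per pixel are collapsed into a single event in this sketch
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((x, y, t, polarity))
            log_ref[y, x] = log_cur[y, x]  # reset reference after firing
    return events
```

Working in log intensity is what makes the model robust to global illumination changes: scaling the scene brightness shifts all log values equally and cancels in the difference, which is one reason the paper's physically grounded threshold modeling can stay consistent across lighting conditions.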