🤖 AI Summary
In AGI research, real-world modeling remains hindered by modality fragmentation and dimensional isolation. This paper introduces the first unified multimodal generative-analytic framework structured around dimensional evolution—progressing systematically from 2D appearance to video dynamics, 3D geometry, and finally 4D spatiotemporal fusion—thereby bridging longstanding silos across image, video, 3D, and 4D generation. Leveraging integrated advances in diffusion models, autoregressive modeling, NeRF, spatiotemporal Transformers, and multimodal alignment, we construct a structured knowledge graph encompassing over 100 works, alongside cross-dimensional evaluation benchmarks and curated data resource guidelines. Key contributions include: (i) the first formal dimensional progression taxonomy enabling systematic inter-modal association; (ii) a scalable theoretical framework for world modeling; and (iii) four concrete research directions toward AGI-enabled realistic world simulation.
📝 Abstract
Understanding and replicating the real world is a critical challenge in Artificial General Intelligence (AGI) research. To achieve this, many existing approaches, such as world models, aim to capture the fundamental principles governing the physical world, enabling more accurate simulations and meaningful interactions. However, current methods often treat different modalities, including 2D (images), videos, 3D, and 4D representations, as independent domains, overlooking their interdependencies. Additionally, these methods typically focus on isolated dimensions of reality without systematically integrating their connections. In this survey, we present a unified review of multimodal generative models that investigates the progression of data dimensionality in real-world simulation. Specifically, the survey starts from 2D generation (appearance), then moves to video (appearance + dynamics) and 3D generation (appearance + geometry), and finally culminates in 4D generation, which integrates all of these dimensions. To the best of our knowledge, this is the first attempt to systematically unify the study of 2D, video, 3D, and 4D generation within a single framework. To guide future research, we provide a comprehensive review of datasets, evaluation metrics, and future directions, offering insights for newcomers. This survey serves as a bridge to advance the study of multimodal generative models and real-world simulation within a unified framework.