Pulp Motion: Framing-aware multimodal camera and human motion generation

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video generation methods model human motion and camera trajectories separately, overlooking their strong compositional coupling. This paper proposes the first text-driven joint generation framework, using screen composition (defined by projected skeletal keypoints) as a cross-modal consistency anchor to co-model human motion and camera trajectories in a shared latent space. The authors design a composition-aware joint autoencoder that maps the heterogeneous human and camera latents into a common framing latent space via a lightweight linear transform, and introduce an auxiliary sampling strategy that improves generation consistency. The method outperforms baselines on both DiT- and MAR-based architectures, achieving substantial improvements in text alignment and visual coherence. It is the first to enable semantic co-generation of cinematic camera motion and performer action, establishing a new benchmark for this task.

📝 Abstract
Treating human motion and camera trajectory generation separately overlooks a core principle of cinematography: the tight interplay between actor performance and camera work in the screen space. In this paper, we are the first to cast this task as a text-conditioned joint generation, aiming to maintain consistent on-screen framing while producing two heterogeneous, yet intrinsically linked, modalities: human motion and camera trajectories. We propose a simple, model-agnostic framework that enforces multimodal coherence via an auxiliary modality: the on-screen framing induced by projecting human joints onto the camera. This on-screen framing provides a natural and effective bridge between modalities, promoting consistency and leading to a more precise joint distribution. We first design a joint autoencoder that learns a shared latent space, together with a lightweight linear transform from the human and camera latents to a framing latent. We then introduce auxiliary sampling, which exploits this linear transform to steer generation toward a coherent framing modality. To support this task, we also introduce the PulpMotion dataset, a human-motion and camera-trajectory dataset with rich captions and high-quality human motions. Extensive experiments across DiT- and MAR-based architectures show the generality and effectiveness of our method in generating on-frame coherent human-camera motions, while also achieving gains on textual alignment for both modalities. Our qualitative results yield more cinematographically meaningful framings, setting the new state of the art for this task. Code, models and data are available on our project page: https://www.lix.polytechnique.fr/vista/projects/2025_pulpmotion_courant/
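The framing modality described in the abstract is the 2D pattern produced by projecting the performer's 3D joints onto the camera's image plane. A minimal sketch of that projection step, using a standard pinhole camera model (the function name, latent sizes, and toy values below are illustrative assumptions, not the authors' code):

```python
import numpy as np

def project_joints(joints_3d, K, R, t):
    """Project 3D skeletal joints (J, 3) onto the image plane of a
    pinhole camera with intrinsics K (3, 3), rotation R (3, 3) and
    translation t (3,). Returns (J, 2) on-screen keypoints, i.e. the
    framing signal used as the auxiliary modality."""
    cam = joints_3d @ R.T + t        # world -> camera coordinates
    pix = cam @ K.T                  # apply camera intrinsics
    return pix[:, :2] / pix[:, 2:3]  # perspective divide

# Toy setup: identity rotation, camera 5 m in front of the subject,
# focal length 1000 px, principal point at (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
joints = np.array([[0.0, 0.0, 0.0],   # pelvis at the world origin
                   [0.0, 0.8, 0.0]])  # a joint 0.8 m along world y
uv = project_joints(joints, K, R, t)
# pelvis lands at the principal point (640, 360)
```

Because this projection is differentiable in both the pose and the camera parameters, it can couple the two modalities: any change to either the motion or the trajectory moves the on-screen keypoints.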
Problem

Research questions and friction points this paper is trying to address.

Jointly generating human motion and camera trajectories from text descriptions
Maintaining consistent on-screen framing between actor performance and camera work
Creating multimodal coherence between heterogeneous but intrinsically linked modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint generation of human motion and camera trajectories
Model-agnostic framework using on-screen framing as bridge
Auxiliary sampling steers generation toward coherent framing modality
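The bridge described above rests on a lightweight linear transform from the human and camera latents to a framing latent. A minimal sketch of that mapping under assumed latent dimensions (the sizes, random weights, and function name are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d_human, d_cam, d_frame = 16, 8, 12  # illustrative latent sizes

# In the paper this linear map is learned jointly with the autoencoder;
# here it is a random matrix purely to show the shape of the computation.
W = rng.standard_normal((d_frame, d_human + d_cam)) / np.sqrt(d_human + d_cam)

def framing_latent(z_human, z_cam):
    """Map the concatenated human and camera latents to a framing latent
    with a single linear transform."""
    return W @ np.concatenate([z_human, z_cam])

z_f = framing_latent(rng.standard_normal(d_human), rng.standard_normal(d_cam))
```

Keeping this map linear is what makes auxiliary sampling cheap: the framing latent can be recomputed (and generation steered toward a coherent framing) at every sampling step without an extra network pass.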