Keyframer: Empowering Animation Design using Large Language Models

📅 2024-02-08
🏛️ arXiv.org
📈 Citations: 9
Influential: 0
🤖 AI Summary
Creating 2D animations is labor-intensive, requiring precise coordination of multiple visual elements over time, which poses a significant barrier for designers at all experience levels. To address this, the authors introduce Keyframer, a tool for generating and interactively editing SVG-based animations through natural language. Methodologically, the paper characterizes a "decomposed" prompting strategy, in which users continually adapt their goals in response to generated output, and contributes a semantic taxonomy of how users describe motion, moving beyond the limits of one-shot prompting. Keyframer combines the natural language understanding and code generation capabilities of large language models (LLMs) with SVG input, direct editing of generated animation code, and a mechanism for requesting design variants. A user study (n=13) shows that Keyframer supports both exploration and refinement, enabling users across experience levels to produce keyframe animations effectively.

📝 Abstract
Large language models (LLMs) have the potential to impact a wide range of creative domains, but the application of LLMs to animation is underexplored and presents novel challenges such as how users might effectively describe motion in natural language. In this paper, we present Keyframer, a design tool for animating static images (SVGs) with natural language. Informed by interviews with professional animation designers and engineers, Keyframer supports exploration and refinement of animations through the combination of prompting and direct editing of generated output. The system also enables users to request design variants, supporting comparison and ideation. Through a user study with 13 participants, we contribute a characterization of user prompting strategies, including a taxonomy of semantic prompt types for describing motion and a 'decomposed' prompting style where users continually adapt their goals in response to generated output. We share how direct editing along with prompting enables iteration beyond one-shot prompting interfaces common in generative tools today. Through this work, we propose how LLMs might empower a range of audiences to engage with animation creation.
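The abstract describes a pipeline in which an LLM emits animation code for elements of a static SVG, which users can then edit directly. As a rough illustration only (the element ids, keyframe values, and helper function below are invented for this sketch, not the paper's actual prompts or outputs), the kind of artifact such a tool produces is a CSS `@keyframes` rule targeting an SVG element by id:

```python
# Illustrative sketch: the kind of CSS-animated SVG a natural-language
# animation tool might generate from a prompt like "make the sun pulse".
# All ids, values, and the helper function are invented for this example.

def animate_svg(svg_body: str, element_id: str, css_keyframes: str,
                duration_s: float = 2.0) -> str:
    """Wrap static SVG markup in a <style> block animating one element."""
    style = (
        "<style>"
        f"#{element_id} {{ animation: pulse {duration_s}s ease-in-out infinite; }}"
        f"{css_keyframes}"
        "</style>"
    )
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">'
        f"{style}{svg_body}</svg>"
    )

# A static circle (the "sun") plus a scale-pulsing keyframe rule.
keyframes = ("@keyframes pulse { 0% { transform: scale(1); } "
             "50% { transform: scale(1.2); } 100% { transform: scale(1); } }")
svg = animate_svg('<circle id="sun" cx="50" cy="50" r="20" fill="gold"/>',
                  "sun", keyframes)
print(svg)
```

Because the generated artifact is plain SVG plus CSS, a user can refine it either by editing the keyframe values directly or by issuing a follow-up prompt, which is the combined iteration loop the paper studies.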
Problem

Research questions and friction points this paper is trying to address.

Simplifying 2D animation creation for varied expertise levels
Generating animation code via natural language prompts
Supporting iterative animation refinement with direct editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates animation code from natural language prompts
Enables inline preview and direct editing of animations
Supports iterative refinement via combined editing and prompts