BlenderGym: Benchmarking Foundational Model Systems for Graphics Editing

📅 2025-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automated 3D graphics editing remains challenging, and realistic, scenario-based benchmarks for evaluating vision-language models (VLMs) on it are lacking. Method: This paper introduces BlenderGym—the first system-level benchmark for evaluating VLMs on 3D editing—built on code-based 3D reconstruction tasks in Blender. It features an inference-scaling scheme, a generation–verification architecture, and a unified evaluation framework supporting both closed- and open-source VLMs. Contributions/Results: (1) It establishes the first evaluation standard for 3D editing systems that require human-level perception and manipulation capabilities; (2) it demonstrates that the verifier guiding generation can itself be improved through inference scaling, and that inference compute is not uniformly effective—it can be optimized by strategically distributing it between generation and verification; (3) empirical results show that state-of-the-art VLMs significantly underperform human Blender users even on tasks those users find relatively easy, validating the benchmark’s effectiveness for assessing and advancing VLM system capabilities.

📝 Abstract
3D graphics editing is crucial in applications like movie production and game design, yet it remains a time-consuming process that demands highly specialized domain expertise. Automating this process is challenging because graphical editing requires performing a variety of tasks, each requiring distinct skill sets. Recently, vision-language models (VLMs) have emerged as a powerful framework for automating the editing process, but their development and evaluation are bottlenecked by the lack of a comprehensive benchmark that requires human-level perception and presents real-world editing complexity. In this work, we present BlenderGym, the first comprehensive VLM system benchmark for 3D graphics editing. BlenderGym evaluates VLM systems through code-based 3D reconstruction tasks. We evaluate closed- and open-source VLM systems and observe that even the state-of-the-art VLM system struggles with tasks relatively easy for human Blender users. Enabled by BlenderGym, we study how inference scaling techniques impact VLMs' performance on graphics editing tasks. Notably, our findings reveal that the verifier used to guide the scaling of generation can itself be improved through inference scaling, complementing recent insights on inference scaling of LLM generation in coding and math tasks. We further show that inference compute is not uniformly effective and can be optimized by strategically distributing it between generation and verification.
Problem

Research questions and friction points this paper is trying to address.

Automated 3D graphics editing lacks realistic benchmarks for VLMs
Evaluating VLM performance on editing tasks that demand human-level perception
Optimizing the allocation of inference compute between generation and verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

BlenderGym: the first comprehensive VLM system benchmark for 3D graphics editing
Code-based 3D reconstruction tasks evaluate VLMs' perception and editing skills
Inference scaling improves both generation and the verifier, and compute can be strategically split between them
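The generation–verification scaling idea above can be sketched in a few lines. The following is a minimal illustrative sketch, not BlenderGym's actual pipeline: `generate_candidate` and `verify` are hypothetical stand-ins for a VLM proposing a Blender edit and a verifier scoring it, and the point is only to show how a fixed inference budget can be split between sampling more candidates (generation) and scoring each candidate more times (verification).

```python
import random

random.seed(0)  # deterministic demo

def generate_candidate(prompt: str) -> str:
    """Stand-in for a VLM proposing one candidate edit script."""
    return f"edit_{random.randint(0, 999)}"

def verify(candidate: str) -> float:
    """Stand-in for a noisy verifier score in [0, 1]."""
    return random.random()

def best_of_n(prompt: str, n_gen: int, n_verify: int) -> str:
    """Best-of-N selection under a compute budget of n_gen * n_verify calls:
    sample n_gen candidates, score each one n_verify times (averaging to
    reduce verifier noise), and return the highest-scoring candidate."""
    candidates = [generate_candidate(prompt) for _ in range(n_gen)]

    def mean_score(c: str) -> float:
        return sum(verify(c) for _ in range(n_verify)) / n_verify

    return max(candidates, key=mean_score)

# Same total budget (8 calls), two different splits:
choice_wide = best_of_n("move the chair", n_gen=8, n_verify=1)
choice_deep = best_of_n("move the chair", n_gen=2, n_verify=4)
print(choice_wide, choice_deep)
```

The paper's observation corresponds to the fact that neither extreme split is uniformly best: scaling verification (larger `n_verify`) can pay off as much as scaling generation, so the ratio itself is worth tuning.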