Benchmarking Affordance Generalization with BusyBox

📅 2026-02-05
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work investigates the affordance generalization capabilities of vision–language–action (VLA) models when they encounter novel objects whose physical properties are familiar from training. To this end, we introduce BusyBox, a physical evaluation benchmark built from six interchangeable modules, which enables systematic, semi-automated assessment of model performance by generating visually diverse yet affordance-consistent object variants through module rotation and substitution. BusyBox provides, for the first time, a reproducible, low-cost, and easily constructible real-world testbed, accompanied by open-sourced 3D-printing schematics, an electronics bill of materials, and a dual-arm robot demonstration dataset. Experiments show that state-of-the-art open-source VLA models such as π₀.₅ and GR00T-N1.6 generalize poorly on this benchmark, validating its effectiveness and necessity.

📝 Abstract
Vision-Language-Action (VLA) models have been attracting the attention of researchers and practitioners thanks to their promise of generalization. Although single-task policies still offer competitive performance, VLAs are increasingly able to handle commands and environments unseen in their training set. While generalization in vision and language space is undoubtedly important for robust versatile behaviors, a key meta-skill VLAs need to possess is affordance generalization -- the ability to manipulate new objects with familiar physical features. In this work, we present BusyBox, a physical benchmark for systematic semi-automatic evaluation of VLAs' affordance generalization. BusyBox consists of 6 modules with switches, sliders, wires, buttons, a display, and a dial. The modules can be swapped and rotated to create a multitude of BusyBox variations with different visual appearances but the same set of affordances. We empirically demonstrate that generalization across BusyBox variants is highly challenging even for strong open-weights VLAs such as $\pi_{0.5}$ and GR00T-N1.6. To encourage the research community to evaluate their own VLAs on BusyBox and to propose new affordance generalization experiments, we have designed BusyBox to be easy to build in most robotics labs. We release the full set of CAD files for 3D-printing its parts as well as a bill of materials for (optionally) assembling its electronics. We also publish a dataset of language-annotated demonstrations that we collected using the common bimanual Mobile Aloha robot on the canonical BusyBox configuration. All of the released materials are available at https://microsoft.github.io/BusyBox.
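The abstract's claim that swapping and rotating the six modules yields "a multitude" of variants can be made concrete with a back-of-the-envelope count. The sketch below is our own illustration, not the authors' released tooling: the module names are taken from the abstract, while the assumption of four rotation states per module (90-degree steps on a square module) is ours.

```python
import random
from math import factorial

# Hypothetical sketch (not from the BusyBox release): counting and sampling
# variants generated by module substitution and rotation. Module names come
# from the abstract; 4 rotation states per module is an assumption.
MODULES = ["switches", "sliders", "wires", "buttons", "display", "dial"]
ROTATIONS_PER_MODULE = 4  # assumed: square modules, 90-degree rotations

def count_variants(n_modules: int, rotations: int) -> int:
    """Upper bound on layouts: every ordering of the modules in the
    frame combined with every per-module rotation."""
    return factorial(n_modules) * rotations ** n_modules

def sample_variant(seed: int = 0):
    """Draw one concrete variant: an ordering of the modules plus a
    rotation (in degrees) for each slot."""
    rng = random.Random(seed)
    order = list(MODULES)
    rng.shuffle(order)
    return [(m, rng.randrange(ROTATIONS_PER_MODULE) * 90) for m in order]

print(count_variants(len(MODULES), ROTATIONS_PER_MODULE))  # 2949120
```

Under these assumptions, six modules admit 6! orderings times 4^6 rotation combinations, i.e. nearly three million layouts sharing one affordance set, which is what makes semi-automated evaluation across variants tractable.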
Problem

Research questions and friction points this paper is trying to address.

affordance generalization
Vision-Language-Action models
object manipulation
generalization benchmark
physical interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

affordance generalization
Vision-Language-Action models
physical benchmark
BusyBox
zero-shot object manipulation
Authors
Dean Fortier (Microsoft Research)
Timothy Adamson (Genie)
T. Hellebrekers (Microsoft Research)
Teresa LaScala (Microsoft Research)
Kofi Ennin (Mississippi State University)
Michael Murray (University of Washington)
Andrey Kolobov (Microsoft Research)
Galen Mullins (Microsoft Research)