🤖 AI Summary
This work addresses the Builder Action Prediction (BAP) task in *Minecraft*, an architect–builder collaborative setting in which models must interpret natural-language construction instructions and predict the corresponding action sequences in a multimodal game context. To overcome two key bottlenecks, scarce training data and inconsistent evaluation protocols, the paper proposes BAP v2, an upgraded version of the task comprising: (1) an enhanced evaluation benchmark with a cleaned test set and fairer, more fine-grained metrics, and (2) synthetic training data generated by novel Minecraft dialogue and target-structure simulators that emulate the task, yielding controllable, scalable, high-quality action sequences. Experiments show that the synthetic data yields more performant and robust neural models even with relatively simple training methods, and the authors argue it could also underpin training of more data-hungry transformer models and the fine-tuning of increasingly large LLMs.
📝 Abstract
Interactive agents capable of understanding and executing instructions in the physical world have long been a central goal in AI research. The Minecraft Collaborative Building Task (MCBT) provides one such setting to work towards this goal (Narayan-Chen, Jayannavar, and Hockenmaier 2019). It is a two-player game in which an Architect (A) instructs a Builder (B) to construct a target structure in a simulated Blocks World environment. We focus on the challenging Builder Action Prediction (BAP) subtask: predicting correct action sequences in a given multimodal game context with limited training data (Jayannavar, Narayan-Chen, and Hockenmaier 2020). We take a closer look at evaluation and data for the BAP task, uncover key challenges, and make significant improvements on both fronts to propose BAP v2, an upgraded version of the task that will allow future work to make more efficient and meaningful progress. It comprises: (1) an enhanced evaluation benchmark that includes a cleaner test set and fairer, more insightful metrics, and (2) additional synthetic training data generated from novel Minecraft dialogue and target-structure simulators emulating the MCBT. We show that this synthetic data can be used to train more performant and robust neural models even with relatively simple training methods. Looking ahead, such data could also be crucial for training more sophisticated, data-hungry deep transformer models and for training or fine-tuning increasingly large LLMs. Although modeling is not the primary focus of this work, we also illustrate the impact of our data and training methodologies on a simple LLM- and transformer-based model, validating the robustness of our approach and setting the stage for more advanced architectures and LLMs going forward.
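To make the evaluation setting concrete, here is a minimal, illustrative sketch (not the paper's actual BAP v2 metrics) of how predicted builder actions might be scored against a reference sequence with a micro F1 over action multisets. The action tuple format `(type, x, y, z, color)` and the function name are assumptions for illustration only:

```python
# Illustrative only: a toy micro-F1 over builder actions, where each action
# is a hashable tuple such as ("place", x, y, z, color) or ("remove", x, y, z, color).
# This is NOT the metric defined by the BAP v2 benchmark.
from collections import Counter

def action_f1(predicted, reference):
    """Micro F1 between predicted and reference action multisets."""
    pred, ref = Counter(predicted), Counter(reference)
    tp = sum((pred & ref).values())  # actions present in both multisets
    if tp == 0:
        return 0.0
    precision = tp / sum(pred.values())
    recall = tp / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

gold = [("place", 1, 0, 2, "red"), ("place", 1, 1, 2, "red")]
pred = [("place", 1, 0, 2, "red"), ("remove", 0, 0, 0, "blue")]
print(action_f1(pred, gold))  # one of two predictions matches: F1 = 0.5
```

Order-insensitive multiset matching like this is deliberately forgiving; real metrics for the task must also weigh action ordering and the net effect on the built structure, which is part of what makes "fairer, more insightful metrics" nontrivial.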