🤖 AI Summary
Large language models (LLMs) suffer from limited performance in embodied decision-making within virtual open-world environments due to insufficient domain-specific knowledge. Method: This paper proposes VistaWise, a low-cost, high-efficiency agent construction framework featuring (i) a vision–language cross-modal knowledge graph, (ii) a lightweight finetuned object detector, (iii) retrieval-based pooling to extract task-related information from the graph, and (iv) a desktop-level skill library for direct mouse-and-keyboard control of the Minecraft client. Crucially, the framework cuts the required domain-specific training data from millions of samples to a few hundred. Contribution/Results: The approach achieves state-of-the-art performance across diverse open-world tasks. It significantly lowers development overhead while markedly enhancing environmental perception and grounded decision-making, demonstrating both scalability and practicality for embodied AI applications.
📝 Abstract
Large language models (LLMs) have shown significant promise in embodied decision-making tasks within virtual open-world environments. Nonetheless, their performance is hindered by the absence of domain-specific knowledge. Methods that finetune on large-scale domain-specific data entail prohibitive development costs. This paper introduces VistaWise, a cost-effective agent framework that integrates cross-modal domain knowledge and finetunes a dedicated object detection model for visual analysis. It reduces the requirement for domain-specific training data from millions of samples to a few hundred. VistaWise integrates visual information and textual dependencies into a cross-modal knowledge graph (KG), enabling a comprehensive and accurate understanding of multimodal environments. We also equip the agent with a retrieval-based pooling strategy to extract task-related information from the KG, and a desktop-level skill library to support direct operation of the Minecraft desktop client via mouse and keyboard inputs. Experimental results demonstrate that VistaWise achieves state-of-the-art performance across various open-world tasks, highlighting its effectiveness in reducing development costs while enhancing agent performance.
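The abstract describes retrieving task-related entries from the cross-modal knowledge graph and pooling them for the agent. A minimal sketch of that idea, assuming a toy triple store and simple bag-of-words similarity (the triples, scoring function, and helper names here are illustrative, not the paper's implementation):

```python
# Illustrative sketch: retrieval-based pooling over a tiny knowledge graph.
# Entities are scored against a task query with bag-of-words cosine
# similarity; the top-k triples are pooled into a prompt-ready context block.
import math
from collections import Counter

# Hypothetical KG triples about Minecraft crafting (head, relation, tail).
KG_TRIPLES = [
    ("wooden_pickaxe", "crafted_from", "planks"),
    ("wooden_pickaxe", "crafted_from", "stick"),
    ("stick", "crafted_from", "planks"),
    ("planks", "crafted_from", "log"),
    ("furnace", "crafted_from", "cobblestone"),
]

def _vector(text: str) -> Counter:
    """Tokenize into a bag-of-words count vector."""
    return Counter(text.lower().replace("_", " ").split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_and_pool(query: str, triples, k: int = 3) -> str:
    """Score each triple against the task query and pool the top-k."""
    qv = _vector(query)
    scored = sorted(
        triples,
        key=lambda t: _cosine(qv, _vector(" ".join(t))),
        reverse=True,
    )
    # "Pooling": flatten the retrieved triples into a single context string
    # that can be prepended to the LLM's prompt.
    return "\n".join(f"{h} --{r}--> {t}" for h, r, t in scored[:k])

context = retrieve_and_pool("craft a wooden pickaxe", KG_TRIPLES, k=2)
print(context)
```

In a real agent the bag-of-words scorer would be replaced by learned vision–language embeddings, but the control flow (retrieve relevant subgraph, pool it into the prompt) stays the same.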