🤖 AI Summary
This work addresses the challenge of parameter-free continual learning for multimodal agents in open environments, where inefficient tool usage and rigid scheduling often hinder performance. To this end, the authors propose XSkill, a dual-stream framework that introduces experiences (action-level guidance) and skills (task-level planning) as complementary forms of knowledge for multimodal agents. By leveraging vision-anchored trajectory summarization and a cross-trajectory critique mechanism, XSkill enables vision-driven knowledge extraction, retrieval, and feedback, thereby establishing a continual learning loop. Extensive experiments across five cross-domain benchmarks demonstrate that XSkill consistently outperforms existing baselines when integrated with four distinct backbone models, validating the complementary benefits of the dual knowledge streams in enhancing zero-shot generalization and reasoning behavior.
📝 Abstract
Multimodal agents can now tackle complex reasoning tasks with diverse tools, yet they still suffer from inefficient tool use and inflexible orchestration in open-ended settings. A central challenge is enabling such agents to continually improve without parameter updates by learning from past trajectories. We identify two complementary forms of reusable knowledge essential for this goal: experiences, which provide concise action-level guidance for tool selection and decision making, and skills, which provide structured task-level guidance for planning and tool use. To this end, we propose XSkill, a dual-stream framework for continual learning from experiences and skills in multimodal agents. XSkill grounds both knowledge extraction and retrieval in visual observations. During accumulation, XSkill distills and consolidates experiences and skills from multi-path rollouts via visually grounded summarization and cross-rollout critique. During inference, it retrieves this knowledge, adapts it to the current visual context, and feeds usage history back into accumulation, forming a continual learning loop. Evaluated on five benchmarks across diverse domains with four backbone models, XSkill consistently and substantially outperforms both tool-only and learning-based baselines. Further analysis reveals that the two knowledge streams play complementary roles in shaping agents' reasoning behaviors and exhibit superior zero-shot generalization.