🤖 AI Summary
Existing video large language models (V-LLMs) operate under a static tool library assumption, rendering them ill-suited for real-world scenarios involving continuously evolving tools and streaming inputs—leading to poor generalization and catastrophic forgetting. To address this, we propose a Dynamic Learnable Tool Codebook: a dedicated memory module enabling incremental injection of novel tools while preserving stable representations of historical ones. We further introduce an instruction-similarity-driven dynamic tool retrieval mechanism and a continual learning optimization strategy. Additionally, we construct VideoToolBench—the first video-oriented benchmark for tool usage evaluation. Extensive experiments on multiple V-LLM benchmarks and VideoToolBench demonstrate significant improvements in tool selection accuracy and continual adaptability. Our approach achieves, for the first time, efficient and robust tool utilization by open-source V-LLMs under continuous, streaming tool updates.
📝 Abstract
The success of Large Language Models (LLMs) has significantly propelled research on video understanding. To harvest the benefits of well-trained expert models (i.e., tools), video LLMs prioritize the exploration of tool usage capabilities. Existing methods either prompt closed-source LLMs or employ the instruction tuning paradigm for tool-use fine-tuning. These methods, however, assume an established repository of fixed tools and struggle to generalize to real-world environments where tool data is perpetually evolving and streaming in. To this end, we propose to enhance open-source video LLMs with COntinuaL Tool usage (termed COLT), which automatically acquires tool-use ability in a successive tool stream without suffering 'catastrophic forgetting' of previously learned tools. Specifically, our COLT incorporates a learnable tool codebook as a tool-specific memory system. Relevant tools are then dynamically selected based on the similarity between the user instruction and the tool features within the codebook. To unleash the tool usage potential of video LLMs, we collect a video-centric tool-use instruction tuning dataset, VideoToolBench. Extensive experiments on both previous video LLM benchmarks and the tool-use-specific VideoToolBench dataset demonstrate the state-of-the-art performance of our proposed COLT.
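The codebook-based retrieval described above can be illustrated with a minimal sketch: tool features live in an appendable memory, new tools are injected without modifying existing entries, and retrieval ranks tools by cosine similarity to the instruction embedding. The class and method names, the embedding dimension, and the plain-NumPy representation are all illustrative assumptions, not the paper's actual implementation (where the codebook entries are learnable parameters trained end-to-end).

```python
import numpy as np

class ToolCodebook:
    """Hypothetical sketch of a tool-specific memory for COLT-style retrieval.

    Each tool is stored as one feature vector; incremental injection appends
    a row without touching previously stored tools, which is the property
    that avoids overwriting (forgetting) past tool representations.
    """

    def __init__(self, dim: int):
        self.dim = dim
        self.names: list[str] = []
        self.features = np.empty((0, dim))

    def add_tool(self, name: str, feature: np.ndarray) -> None:
        # Incremental injection: append a new entry; old rows are untouched.
        self.names.append(name)
        self.features = np.vstack([self.features, feature[None, :]])

    def retrieve(self, instruction_emb: np.ndarray, top_k: int = 2) -> list[str]:
        # Rank tools by cosine similarity between the instruction embedding
        # and every tool feature in the codebook; return the top-k names.
        q = instruction_emb / np.linalg.norm(instruction_emb)
        t = self.features / np.linalg.norm(self.features, axis=1, keepdims=True)
        sims = t @ q
        order = np.argsort(-sims)[:top_k]
        return [self.names[i] for i in order]
```

For example, after injecting a "captioner" and a "detector" tool, an instruction embedding close to the captioner's feature would retrieve "captioner" first; in the full method the selected tools' outputs are then fed to the video LLM.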