AI Summary
This work addresses a critical limitation of current vision-language models (VLMs) in robotic navigation: their inability to reason about their own physical capabilities and interactions with the environment, particularly in cluttered scenes where proactive obstacle removal is necessary to establish navigable paths. To overcome this, we propose CoINS, a novel framework that integrates skill-awareness and counterfactual causal reasoning into VLMs, enabling them to evaluate how removing specific objects affects navigation connectivity and to decide whether and how to interact accordingly. By combining a reinforcement learning-acquired skill library with metric-level environmental mapping, CoINS significantly outperforms baseline methods, achieving a 17% absolute improvement in overall success rate and over 80% performance gain in complex, long-horizon scenarios, as demonstrated in both Isaac Sim simulations and real-world experiments.
Abstract
Recent Vision-Language Models (VLMs) have demonstrated significant potential in robotic planning. However, they typically function as semantic reasoners, lacking an intrinsic understanding of the specific robot's physical capabilities. This limitation is particularly critical in interactive navigation, where robots must actively modify cluttered environments to create traversable paths. Existing VLM-based navigators are predominantly confined to passive obstacle avoidance, failing to reason about when and how to interact with objects to clear blocked paths. To bridge this gap, we propose Counterfactual Interactive Navigation via Skill-aware VLM (CoINS), a hierarchical framework that integrates skill-aware reasoning and robust low-level execution. Specifically, we fine-tune a VLM, named InterNav-VLM, which incorporates skill affordance and concrete constraint parameters into the input context and grounds them into a metric-scale environmental representation. By internalizing the logic of counterfactual reasoning through fine-tuning on the proposed InterNav dataset, the model learns to implicitly evaluate the causal effects of object removal on navigation connectivity, thereby determining interaction necessity and target selection. To execute the generated high-level plans, we develop a comprehensive skill library through reinforcement learning, specifically introducing traversability-oriented strategies to manipulate diverse objects for path clearance. A systematic benchmark in Isaac Sim is proposed to evaluate both the reasoning and execution aspects of interactive navigation. Extensive simulations and real-world experiments demonstrate that CoINS significantly outperforms representative baselines, achieving a 17% higher overall success rate and over 80% improvement in complex long-horizon scenarios compared to the best-performing baseline.
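To make the core counterfactual query concrete ("would removing object X connect the robot to its goal?"), the sketch below poses it explicitly on a 2D occupancy grid. This is an illustrative toy, not the paper's InterNav-VLM pipeline, which learns this reasoning implicitly through fine-tuning; the grid encoding, function names, and BFS connectivity check are all assumptions made for the example.

```python
from collections import deque

def reachable(grid, start, goal):
    """BFS over free cells: 0 = free, 1 = occupied. 4-connected grid."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def counterfactual_targets(grid, movable, start, goal):
    """Return movable obstacles whose hypothetical removal
    establishes start-goal connectivity (empty list if the path
    is already clear, i.e. no interaction is necessary)."""
    if reachable(grid, start, goal):
        return []
    targets = []
    for (r, c) in movable:
        grid[r][c] = 0            # counterfactual: remove the object
        if reachable(grid, start, goal):
            targets.append((r, c))
        grid[r][c] = 1            # restore the real map
    return targets

# A wall blocks column 1; only the middle cell is movable.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 1, 0]]
print(counterfactual_targets(grid, [(1, 1)], (0, 0), (0, 2)))  # [(1, 1)]
```

In CoINS terms, a non-empty result corresponds to deciding that interaction is necessary and which object to target; the actual removal would then be delegated to a learned manipulation skill rather than a map edit.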