AI Summary
This work addresses the lack of service reliability and QoS guarantees under adversarial attacks in service-oriented vision-language navigation (VLN) systems. We propose AdvOF, the first adversarial object fusion framework tailored to service computing scenarios. AdvOF generates physically realizable 3D adversarial objects via 2D/3D spatial alignment, multi-view weighted co-optimization, and dual regularization over both VLM perceptual features and physical attributes. This enables precise perturbation of the VLM perception module while preserving original task performance with negligible degradation (<1.2% success-rate loss). Evaluated across multiple VLN benchmarks, AdvOF reduces navigation success rates by an average of 42.7%, providing the first empirical evidence of service-level security vulnerabilities in VLN. Our framework establishes theoretical foundations and practical methodologies for designing robust, service-composable VLN systems.
Abstract
We present Adversarial Object Fusion (AdvOF), a novel attack framework that targets vision-and-language navigation (VLN) agents in service-oriented environments by generating adversarial 3D objects. While foundation models such as Large Language Models (LLMs) and Vision Language Models (VLMs) have enhanced service-oriented navigation systems through improved perception and decision-making, their integration introduces vulnerabilities into mission-critical service workflows. Existing adversarial attacks do not address service computing contexts, where reliability and quality-of-service (QoS) are paramount. We use AdvOF to investigate the impact of adversarial environments on the VLM-based perception module of VLN agents. Specifically, AdvOF first aggregates and aligns the victim object's position in both 2D and 3D space to define and render the adversarial object. It then optimizes the adversarial object collaboratively, regularizing it against the victim object in both physical properties and VLM perception. By assigning importance weights to different views, the optimization proceeds stably across views through iterative fusion of local updates. Extensive evaluations demonstrate that AdvOF effectively degrades agent performance under adversarial conditions while causing minimal interference with normal navigation tasks. This work advances the understanding of service security in VLM-powered navigation systems and provides computational foundations for robust service composition in physical-world deployments.
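The view-weighted co-optimization with dual regularization described above can be sketched numerically. The sketch below is a toy stand-in, not the paper's implementation: each view is modeled as a linear "render-and-perceive" matrix `V`, so `V @ theta` plays the role of the VLM feature of the adversarial object rendered in that view, and the physical attributes are identified with the parameter vector itself. The attack term pushes the weighted multi-view perception away from the victim's feature, while the regularizer keeps the object physically close to the victim; all names and weights are illustrative assumptions.

```python
import numpy as np

def attack_step(theta, views, weights, victim_feat, victim_phys,
                lam_phys=0.5, lr=0.01):
    """One view-weighted update of the adversarial object parameters.

    Illustrative stand-ins only: V @ theta approximates the VLM feature
    of the object rendered in one view; lam_phys enforces the physical
    regularizer toward the victim object's attributes.
    """
    grad = np.zeros_like(theta)
    for V, w in zip(views, weights):
        gap = V @ theta - victim_feat            # perceptual gap in this view
        grad += w * (-2.0) * (V.T @ gap)         # ascend the gap (attack term)
    grad += 2.0 * lam_phys * (theta - victim_phys)  # stay physically close
    return theta - lr * grad

rng = np.random.default_rng(0)
d = 8
victim_phys = rng.normal(size=d)                 # victim's physical attributes
victim_feat = rng.normal(size=d)                 # victim's VLM feature
views = [np.eye(d) + 0.1 * rng.normal(size=(d, d)) for _ in range(4)]
weights = np.array([0.4, 0.3, 0.2, 0.1])         # e.g. frontal views weighted higher

def perceptual_gap(theta):
    """Importance-weighted perceptual distance across all views."""
    return sum(w * np.linalg.norm(V @ theta - victim_feat) ** 2
               for V, w in zip(views, weights))

theta = victim_phys.copy()                       # start from the victim object
gap_before = perceptual_gap(theta)
for _ in range(50):                              # iterative fused local updates
    theta = attack_step(theta, views, weights, victim_feat, victim_phys)
```

After the loop, the weighted perceptual gap has grown (the attack succeeds across views) while `lam_phys` has kept `theta` anchored near `victim_phys`, mirroring the stated goal of perturbing VLM perception without visibly altering the object.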