🤖 AI Summary
Current AI assistants in augmented reality (AR) are predominantly reactive: they cannot proactively detect errors, correct misconceptions in a timely manner, or encourage users during procedural tasks such as cooking. This work introduces the first proactive, multimodal AI agent designed specifically for AR environments. The method bridges the gap from passive response to active collaboration by proposing an interpretable intervention-triggering mechanism grounded in Structural Similarity (SSIM) and action-alignment signals. It integrates inter-frame SSIM evaluation, multimodal action-alignment modeling, real-time AR video stream understanding, and vision-language model (VLM)-driven dialogue decision-making. Evaluated on the HoloAssist benchmark, the approach significantly improves the accuracy of proactive intervention timing, reducing false interventions by 37% and increasing the task completion success rate by 22%.
📝 Abstract
Multimodal AI Agents are AI models capable of interactively and cooperatively assisting human users with day-to-day tasks. Augmented Reality (AR) head-worn devices can uniquely improve the user experience of solving procedural day-to-day tasks by providing egocentric multimodal (audio and video) observational capabilities to AI Agents. Such AR capabilities let AI Agents see and listen to the actions users take, mirroring the multimodal perception of human users. Existing AI Agents, whether Large Language Models (LLMs) or Multimodal Vision-Language Models (VLMs), are reactive in nature: they cannot take an action without reading or listening to a human user's prompt. Proactivity, on the other hand, lets an AI Agent help the human user detect and correct mistakes in observed tasks, encourage users when they perform tasks correctly, or simply engage in conversation with the user, akin to a human teaching or assisting a user. Our proposed YET to Intervene (YETI) multimodal agent focuses on the research question of identifying circumstances that may require the agent to intervene proactively. This allows the agent to understand when it can intervene in a conversation with human users to help them correct mistakes on tasks, like cooking, using AR. Our YETI Agent learns scene-understanding signals based on interpretable notions of Structural Similarity (SSIM) between consecutive video frames. We also define an alignment signal with which the AI Agent can learn to identify whether the video frames corresponding to the user's actions on the task are consistent with the expected actions. These signals are used by our AI Agent to determine when it should proactively intervene. We compare our results on the instances of proactive intervention in the HoloAssist multimodal benchmark, in which an expert agent guides a user through procedural tasks.
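To make the SSIM-based triggering idea concrete, here is a minimal sketch of flagging candidate intervention moments from consecutive-frame similarity. This is not the paper's implementation: the single-window SSIM formula, the `threshold` value, and the `frame_change_signal` helper are all illustrative assumptions (the standard SSIM is computed over local windows, and YETI additionally combines SSIM with the action-alignment signal before deciding to intervene).

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    # Simplified single-window SSIM over the whole frame (illustrative;
    # the standard formulation averages SSIM over local sliding windows).
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

def frame_change_signal(frames, threshold=0.9):
    # A low SSIM between consecutive frames indicates the scene changed,
    # i.e. a candidate moment for the agent to consider intervening.
    # The threshold is a hypothetical hyperparameter, not from the paper.
    flags = []
    for i in range(1, len(frames)):
        s = float(global_ssim(frames[i - 1], frames[i]))
        flags.append((i, s, bool(s < threshold)))
    return flags
```

In practice one would run this over grayscale egocentric frames and pass the flagged timestamps, together with the action-alignment signal, to the VLM-driven dialogue policy.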