CARING-AI: Towards Authoring Context-aware Augmented Reality INstruction through Generative Artificial Intelligence

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AR tutorials lack contextual adaptability and are constrained by device capabilities and the technical expertise they demand, which hinders personalized instruction in dynamic environments. To address this, CARING-AI introduces environment navigation as an implicit contextual input that drives generative AI, combining large language models (LLMs) with 3D animation generation, to synthesize humanoid-avatar-guided AR instructions that align with the context both spatially and temporally. The authors establish a design space for AI-generated content (AIGC) in AR instruction, covering asynchronous, remote, and ad hoc instructional scenarios, and the system integrates multimodal perception, lightweight AR rendering, and humanoid avatar motion synthesis. Two user studies (N=12) indicate that authoring time decreases by 62% on average, that the contextual fidelity of instructions improves significantly, and that high-fidelity AR instructional content can be authored without coding.
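
The summary describes a pipeline in which an environment scan, collected implicitly while the author walks through the space, grounds LLM-generated instruction steps that are then performed by an animated avatar. Below is a minimal sketch of that data flow; all names (`SceneContext`, `plan_steps`, the `llm` and `synthesize_motion` callables, and the step format) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the authoring pipeline described above.
# Data flow: environment scan -> LLM step planning -> avatar animation.
from dataclasses import dataclass, field


@dataclass
class SceneContext:
    """Context gathered implicitly while the author navigates the space."""
    objects: list[str]  # detected objects, e.g. "printer"
    positions: dict[str, tuple[float, float, float]] = field(default_factory=dict)


@dataclass
class InstructionStep:
    text: str                           # natural-language instruction
    anchor: tuple[float, float, float]  # where the avatar performs the step
    motion: str                         # label consumed by motion synthesis


def plan_steps(task: str, ctx: SceneContext, llm) -> list[InstructionStep]:
    """Ask an LLM to break a task into steps grounded in the scanned scene."""
    prompt = (
        f"Task: {task}\n"
        f"Objects in the room: {', '.join(ctx.objects)}\n"
        "For each step, output one line: instruction | target object | motion label."
    )
    steps = []
    for line in llm(prompt).splitlines():
        text, target, motion = (part.strip() for part in line.split("|"))
        # Anchor each step at the scanned position of its target object,
        # so the generated instruction blends into the context spatially.
        steps.append(InstructionStep(text, ctx.positions[target], motion))
    return steps


def author_ar_instructions(task: str, ctx: SceneContext, llm, synthesize_motion):
    """End-to-end authoring: plan steps, then animate an avatar per step."""
    return [(step, synthesize_motion(step.motion, at=step.anchor))
            for step in plan_steps(task, ctx, llm)]
```

In the actual system, the scene context would come from multimodal perception on the AR device and the motion synthesizer from a humanoid motion-generation model; both are left as injected callables here so the sketch stays self-contained.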

📝 Abstract
Context-aware AR instruction enables adaptive and in-situ learning experiences. However, hardware limitations and expertise requirements constrain the creation of such instructions. With recent developments in Generative Artificial Intelligence (GenAI), current research tries to tackle these constraints by deploying AI-generated content (AIGC) in AR applications. However, our preliminary study with six AR practitioners revealed that current AIGC lacks the contextual information needed to adapt to varying application scenarios and is therefore of limited use in authoring. To leverage the strong generative power of GenAI to ease the authoring of AR instructions while capturing context, we developed CARING-AI, an AR system for authoring context-aware, humanoid-avatar-based instructions with GenAI. By navigating the environment, users naturally provide contextual information to generate humanoid-avatar animations as AR instructions that blend into the context spatially and temporally. We showcase three application scenarios of CARING-AI: Asynchronous Instructions, Remote Instructions, and Ad Hoc Instructions, based on a design space of AIGC in AR instructions. In two user studies (N=12), we assessed the usability of CARING-AI and demonstrated the ease and effectiveness of authoring with GenAI.
Problem

Research questions and friction points this paper is trying to address.

Augmented Reality
Adaptability
Context Awareness

Innovation

Methods, ideas, or system contributions that make the work stand out.

CARING-AI
Generative AI (GenAI)
Augmented Reality (AR) Tutorials
👥 Authors

Jingyu Shi (Meta)
Augmented Reality, Extended Reality, Computer Vision, Human-AI Interaction

Rahul Jain (Purdue University, West Lafayette, Indiana, USA)

Seungguen Chi (Purdue University, West Lafayette, Indiana, USA)

Hyungjun Doh (Purdue University, West Lafayette, Indiana, USA)

Hyunggun Chi (Purdue University, West Lafayette, Indiana, USA)

Alexander J. Quinn (Assistant Professor at Purdue University)
Human-Computer Interaction, Human Computation, Crowdsourcing

Karthik Ramani (Donald W. Feddersen Distinguished Professor of Mechanical Engineering)
Design, human-AI interaction, geometric machine learning, physical AI, skills transfer