LLMER: Crafting Interactive Extended Reality Worlds with JSON Data Generated by Large Language Models

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address inaccurate context extraction, script-generation-induced runtime crashes and latency, and high computational overhead in XR environments, this paper proposes a lightweight, JSON-driven framework leveraging large language models (LLMs). Instead of generating executable code scripts, our approach employs multi-stage context refinement and strict JSON Schema constraints to directly parse natural-language user inputs into structured scene and animation control commands, executed in real time by Unity or Unreal Engine. Our key contribution is the first end-to-end “LLM → JSON → XR” paradigm, eliminating runtime compilation and syntactic errors. Experiments demonstrate over 80% reduction in token consumption and approximately 60% decrease in task completion time compared to state-of-the-art methods. A user study further confirms significant improvements in system stability and interactive experience.
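The "LLM → JSON → XR" flow described above can be illustrated with a minimal sketch. Here the command fields (`action`, `object`, `params`) and the allowed-action list are hypothetical placeholders, not LLMER's actual schema: the point is that the LLM's raw text is validated against a fixed structure before the engine executes it, so a malformed response is rejected instead of crashing the runtime.

```python
import json

# Hypothetical command schema: required fields and allowed actions.
# (Illustrative only -- not LLMER's actual JSON Schema.)
REQUIRED_FIELDS = {"action", "object", "params"}
ALLOWED_ACTIONS = {"create", "move", "rotate", "animate", "delete"}

def parse_command(llm_output: str):
    """Validate an LLM response as a structured scene command.

    Returns the command dict if valid, else None -- a rejected
    command is simply ignored rather than raising at runtime.
    """
    try:
        cmd = json.loads(llm_output)
    except json.JSONDecodeError:
        return None
    if not isinstance(cmd, dict) or not REQUIRED_FIELDS <= cmd.keys():
        return None
    if cmd["action"] not in ALLOWED_ACTIONS:
        return None
    return cmd

# A well-formed command passes validation...
ok = parse_command('{"action": "create", "object": "cube", '
                   '"params": {"position": [0, 1, 0], "scale": 0.5}}')
# ...while a Python-script-style response is safely rejected (returns None).
bad = parse_command('obj = scene.spawn("cube")')
```

Compared with executing LLM-generated scripts, this validate-then-dispatch pattern confines failures to a single rejection path, which is what eliminates the runtime-compilation and syntax-error failure modes the summary mentions.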

📝 Abstract
The integration of Large Language Models (LLMs) like GPT-4 with Extended Reality (XR) technologies offers the potential to build truly immersive XR environments that interact with human users through natural language, e.g., generating and animating 3D scenes from audio inputs. However, the complexity of XR environments makes it difficult to accurately extract relevant contextual data and scene/object parameters from an overwhelming volume of XR artifacts. This leads not only to increased costs under pay-per-use models, but also to elevated levels of generation errors. Moreover, existing approaches focusing on coding script generation are often prone to generation errors, resulting in flawed or invalid scripts, application crashes, and ultimately a degraded user experience. To overcome these challenges, we introduce LLMER, a novel framework that creates interactive XR worlds using JSON data generated by LLMs. Unlike prior approaches focusing on coding script generation, LLMER translates natural language inputs into JSON data, significantly reducing the likelihood of application crashes and processing latency. It employs a multi-stage strategy to supply only the essential contextual information adapted to the user's request and features multiple modules designed for various XR tasks. Our preliminary user study reveals the effectiveness of the proposed system, with over 80% reduction in consumed tokens and around 60% reduction in task completion time compared to state-of-the-art approaches. The analysis of users' feedback also illuminates a series of directions for further optimization.
Problem

Research questions and friction points this paper is trying to address.

Enhance XR environment interaction
Reduce generation errors
Optimize JSON data processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMER uses JSON from LLMs
Reduces crashes and latency
Multi-stage strategy for XR
Jiangong Chen
Department of Electrical Engineering, Pennsylvania State University
Xiaoyi Wu
The Pennsylvania State University
Multi-Armed Bandit · Video Streaming · LLM
Tian Lan
Department of Electrical and Computer Engineering, George Washington University
Bin Li
Department of Electrical Engineering, Pennsylvania State University