🤖 AI Summary
This study addresses the challenges of information extraction and weak structuring in ethnographic Video-Based Design (VBD) by developing a large language model (LLM)-driven approach to automated mind map generation. Using customized prompt engineering, the method guides LLMs to identify core concepts from video transcripts and construct knowledge graphs, validated through human–AI comparative evaluation and mixed-method (qualitative and quantitative) analysis. Results indicate that LLM-generated mind maps approach expert-level coverage of central concepts but show clear deficiencies in hierarchical organization, contextual grounding, and cross-concept coherence. Key adoption barriers include low user trust, limited customizability, and poor integration into existing design workflows. The study delineates both the potential and the practical limits of LLMs for knowledge modeling in video-based ethnography and proposes actionable pathways, namely prompt optimization and human–AI co-enhancement, tailored to design practice. It thus introduces a novel paradigm for AI-augmented ethnographic video analysis.
📝 Abstract
Extracting concepts and understanding relationships from videos is essential in Video-Based Design (VBD), where videos serve as a primary medium for exploration but require significant effort to manage their meta-information. Mind maps, with their ability to visually organize complex data, offer a promising approach for structuring and analyzing video content. Recent advancements in Large Language Models (LLMs) provide new opportunities for meta-information processing and visual understanding in VBD, yet their application remains underexplored. This study recruited 28 VBD practitioners to investigate the use of prompt-tuned LLMs for generating mind maps from ethnographic videos. Comparing LLM-generated mind maps with those created by professional designers, we evaluated ratings, design effectiveness, and user experience across two contexts. Findings reveal that LLMs effectively capture central concepts but struggle with hierarchical organization and contextual grounding. We discuss trust, customization, and workflow integration as key factors to guide future research on LLM-supported information mapping in VBD.