🤖 AI Summary
To address inefficient cross-modal heterogeneous resource retrieval (e.g., videos, code repositories, documentation, web pages) and heavy manual effort in knowledge integration within online learning, this paper proposes a learning-scenario-oriented, heterogeneous-resource-aware multi-agent RAG framework. The framework employs task-driven agent specialization, cross-source semantic alignment, and dynamic knowledge fusion to enable coordinated semantic retrieval and generative synthesis across YouTube, GitHub, documentation portals, and search engines. Differing from conventional monolithic RAG systems, it introduces a novel multi-agent collaborative architecture supporting modality-adaptive retrieval and context-aware result aggregation. A user study (N=32) demonstrates that the framework reduces information acquisition time by 42% on average and improves knowledge integration accuracy by 31%, validating its high usability and practical effectiveness.
📝 Abstract
Efficient online learning requires seamless access to diverse resources such as videos, code repositories, documentation, and general web content. This poster paper introduces early-stage work on a Multi-Agent Retrieval-Augmented Generation (RAG) System designed to enhance learning efficiency by integrating these heterogeneous resources. Using specialized agents tailored for specific resource types (e.g., YouTube tutorials, GitHub repositories, documentation websites, and search engines), the system automates the retrieval and synthesis of relevant information. By streamlining the process of finding and combining knowledge, this approach reduces manual effort and enhances the learning experience. A preliminary user study confirmed the system's strong usability and moderate-high utility, demonstrating its potential to improve the efficiency of knowledge acquisition.