🤖 AI Summary
Higher education institutions face fragmented LLM resources, inequitable access, and inadequate governance. Method: We propose the first full-stack collaborative LLM service platform tailored for higher education, built on a microservice architecture featuring a unified LLM gateway with OAuth 2.0 authentication, differential-privacy enhancement, a modular RAG engine, multi-backend adapters (e.g., OpenAI, Anthropic, Ollama), and real-time usage monitoring. The platform enables seamless integration of commercial, cloud-hosted, and on-premises open-source models, supporting fine-grained access control, native RAG, third-party API budgeting, and dual Web/API interfaces. Contribution/Results: Deployed at a large public university, it serves dozens of departments and labs, enabling zero- to low-budget teaching and research. User engagement increased 3.2×, and third-party API expenditures decreased by 67%, bridging gaps in educational equity, multi-stakeholder governance, and cost control.
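The summary describes a unified gateway that routes requests to multiple LLM backends while enforcing per-group budgets for paid third-party APIs. The paper does not publish an implementation, so the following is only a minimal Python sketch of that adapter-plus-gateway pattern; all class and method names (`BackendAdapter`, `Gateway.complete`, the stub adapters) are illustrative assumptions, and the stub adapters return canned responses rather than calling real APIs.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Completion:
    backend: str    # which adapter produced the response
    text: str       # model output
    cost_usd: float # estimated spend for this request


class BackendAdapter(ABC):
    """Common interface the gateway expects from every model backend."""
    name: str

    @abstractmethod
    def complete(self, prompt: str) -> Completion: ...


class OpenAIAdapter(BackendAdapter):
    name = "openai"

    def complete(self, prompt: str) -> Completion:
        # A real adapter would call the commercial API here.
        return Completion(self.name, f"[openai] {prompt}", cost_usd=0.002)


class OllamaAdapter(BackendAdapter):
    name = "ollama"

    def complete(self, prompt: str) -> Completion:
        # On-premises open-source model: no per-request API cost.
        return Completion(self.name, f"[ollama] {prompt}", cost_usd=0.0)


class Gateway:
    """Routes requests to a named backend and tracks per-group spend."""

    def __init__(self, budgets: dict[str, float]):
        self.adapters: dict[str, BackendAdapter] = {}
        self.budgets = dict(budgets)  # group -> remaining USD

    def register(self, adapter: BackendAdapter) -> None:
        self.adapters[adapter.name] = adapter

    def complete(self, group: str, backend: str, prompt: str) -> Completion:
        result = self.adapters[backend].complete(prompt)
        remaining = self.budgets.get(group, 0.0)
        # Paid backends are refused once the group's budget is exhausted;
        # zero-cost local backends always pass.
        if result.cost_usd > 0.0 and result.cost_usd > remaining:
            raise PermissionError(f"budget exhausted for group {group!r}")
        self.budgets[group] = remaining - result.cost_usd
        return result
```

In this sketch, swapping a commercial backend for a local one is a one-line change at the call site, which mirrors the zero- to low-budget usage pattern the summary reports.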
📝 Abstract
We present AI-VERDE, a unified LLM-as-a-platform service designed to facilitate seamless integration of commercial, cloud-hosted, and on-premises open LLMs in academic settings. AI-VERDE streamlines access management for instructional and research groups by providing robust access control, privacy-preserving mechanisms, native Retrieval-Augmented Generation (RAG) support, budget management for third-party LLM services, and both a conversational web interface and API access. In a pilot deployment at a large public university, AI-VERDE saw significant engagement across diverse educational and research groups, enabling activities that would otherwise require substantial budgets for commercial LLM services offering only limited user and team management. To the best of our knowledge, AI-VERDE is the first platform to address both teaching and research needs for LLMs within a higher education institutional framework.