🤖 AI Summary
This work addresses the resource management challenges in space-air-ground integrated networks (SAGIN) arising from heterogeneous infrastructure, dynamic topology, and stringent QoS requirements. To this end, the authors propose an intelligent agent framework powered by large language models (LLMs) and embedded within a MAPE-K (Monitor-Analyze-Plan-Execute-Knowledge) adaptive control plane. The framework orchestrates three types of agents (semantic-aware, intent-driven, and adaptive learning agents) that collaborate to bridge the semantic gap between high-level operational intent and low-level network execution. A novel hierarchical agent-reinforcement learning mechanism is introduced, wherein LLM-based agents dynamically construct reward functions based on semantically enriched network states. Evaluated in a UAV-assisted AIGC service orchestration scenario, the proposed approach significantly outperforms existing baselines, achieving a 14% reduction in energy consumption and the lowest average service latency.
📝 Abstract
Space-air-ground integrated networks (SAGIN) promise ubiquitous 6G connectivity but face significant resource management challenges due to heterogeneous infrastructure, dynamic topologies, and stringent quality-of-service (QoS) requirements. Conventional model-driven approaches struggle with scalability and adaptability in such complex environments. This paper presents an agentic artificial intelligence (AI) framework for autonomous SAGIN resource management by embedding large language model (LLM)-based agents into a Monitor-Analyze-Plan-Execute-Knowledge (MAPE-K) control plane. The framework incorporates three specialized agents, namely semantic resource perceivers, intent-driven orchestrators, and adaptive learners, that collaborate through natural language reasoning to bridge the gap between operator intents and network execution. A key innovation is the hierarchical agent-reinforcement learning (RL) collaboration mechanism, wherein LLM-based orchestrators dynamically shape reward functions for RL agents based on semantic network conditions. Validation through UAV-assisted AIGC service orchestration in energy-constrained scenarios demonstrates that LLM-driven reward shaping achieves a 14% energy reduction and the lowest average service latency among all compared methods. This agentic paradigm offers a scalable pathway toward adaptive, AI-native 6G networks capable of autonomously interpreting intents and adapting to dynamic environments.
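The hierarchical LLM/RL collaboration described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: all names (`NetworkState`, `llm_shape_weights`, `shaped_reward`) are hypothetical, and a rule-based stub stands in for the LLM orchestrator, which in the paper would interpret semantic network conditions and emit reward-shaping decisions for the RL agents.

```python
from dataclasses import dataclass


@dataclass
class NetworkState:
    """A toy semantic summary of SAGIN conditions (illustrative fields only)."""
    energy_fraction: float  # remaining UAV battery, in [0, 1]
    latency_ms: float       # currently observed service latency


def llm_shape_weights(state: NetworkState) -> dict:
    """Stand-in for the LLM orchestrator: map a semantic reading of the
    network state to reward weights. A real system would prompt an LLM;
    here simple rules mimic that decision."""
    if state.energy_fraction < 0.3:
        # Energy-constrained regime: steer the RL agent toward energy saving.
        return {"energy": 0.7, "latency": 0.3}
    # Otherwise prioritize service latency.
    return {"energy": 0.3, "latency": 0.7}


def shaped_reward(state: NetworkState, energy_cost: float, latency_cost: float) -> float:
    """Reward seen by the RL agent: a weighted negative cost, with weights
    dynamically chosen by the (stubbed) LLM orchestrator."""
    w = llm_shape_weights(state)
    return -(w["energy"] * energy_cost + w["latency"] * latency_cost)
```

Under this sketch, the same action is penalized differently depending on the semantic context the orchestrator perceives: when battery is low, an energy-expensive action yields a lower reward than it would in an energy-rich state, nudging the RL policy toward the operator's current intent.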