AI Summary
Intent-based networking (IBN) faces challenges in translating natural-language user intents into structured network configurations because of semantic heterogeneity across diverse applications, which degrades translation accuracy and limits scalability. Method: This paper proposes Intent-RAG, a context-aware AI translation framework that integrates machine reasoning (MR) with retrieval-augmented generation (RAG). It employs explicit reasoning chains to verify semantic consistency and mitigate large language model (LLM) hallucinations, and it incorporates a domain-adapted retrieval module that supports both generic and customized intent expressions. Contribution/Results: Experiments show that Intent-RAG outperforms baseline LLMs and vanilla RAG on multi-domain intent-parsing tasks in accuracy, generalization, and cross-domain scalability, moving IBN toward semantic-level network autonomy and a "say-and-get" user experience.
Abstract
Intent-based networking (IBN) is a promising approach to automating network operation and management. IBN aims to offer human-tailored network interaction: the network communicates in a way that aligns with its users' language, rather than requiring users to master the technical language of the network and its devices. Today, many different applications interact with the network, each with its own specialized needs and domain language. Creating semantic languages (i.e., ontology-based languages) and associating one with each application to facilitate intent translation demands scarce technical expertise and is neither practical nor scalable. To tackle this problem, we propose a context-aware AI framework that combines machine reasoning (MR), retrieval-augmented generation (RAG), and generative AI to interpret intents from different applications and generate structured network intents. The proposed framework supports both generalized and domain-specific intent expression and overcomes drawbacks of large language models (LLMs) and the vanilla-RAG framework. Experimental results show that our proposed Intent-RAG framework outperforms both a standalone LLM and the vanilla-RAG framework in intent translation.
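To make the retrieve-generate-verify pipeline concrete, the following is a minimal, hypothetical sketch of how such a framework might be wired together. The toy keyword-overlap retriever, the `stub_llm` placeholder, and all function names are illustrative assumptions, not the paper's actual implementation; a real system would use embedding-based retrieval and a genuine LLM, with richer reasoning checks.

```python
# Hypothetical sketch: retrieve domain context, generate a structured
# intent with an LLM, then verify the result before accepting it.
# All names and the keyword retriever are illustrative assumptions.

def retrieve(query, corpus, k=1):
    """Rank domain snippets by keyword overlap with the user intent
    (a stand-in for the paper's domain-adapted retrieval module)."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def translate_intent(user_intent, corpus, llm):
    """Ground the LLM on retrieved context, then run a consistency
    check (a stand-in for the machine-reasoning verification step)."""
    context = retrieve(user_intent, corpus)
    output = llm(user_intent, context)
    # Reject unstructured output to mitigate hallucinated responses.
    if not isinstance(output, dict) or "action" not in output:
        raise ValueError("semantic consistency check failed")
    return output

def stub_llm(intent, context):
    """Placeholder for the generative model."""
    return {"action": "allocate_bandwidth",
            "value": "100Mbps",
            "grounded_on": context}

corpus = ["allocate bandwidth for video streaming",
          "block traffic from suspicious hosts"]
result = translate_intent("I need more bandwidth for video", corpus, stub_llm)
print(result["action"])  # allocate_bandwidth
```

The verification step is the key departure from vanilla RAG: instead of trusting the generator's output, the pipeline checks it against an expected structure before handing it to the network.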