🤖 AI Summary
In active distribution networks (ADNs), heterogeneous domain-specific models (DSMs) face significant challenges in unified orchestration and collaborative execution under multi-scenario, multi-objective operational conditions.
Method: This paper proposes ADN-Agent, an intelligent collaborative architecture centered on a large language model (LLM) that performs user intent recognition, multi-step task decomposition, and adaptive routing; introduces a standardized communication protocol to unify interfaces of heterogeneous DSMs; and develops a lightweight small language model (SLM) fine-tuning pipeline tailored for language-intensive subtasks.
Contribution/Results: Experiments demonstrate substantial improvements over existing LLM-based paradigms in scheduling efficiency, task completion rate, and cross-model collaboration stability. The architecture establishes a scalable, interpretable, and multi-model collaborative framework for intelligent operation and control of ADNs, enabling robust integration of specialized models while preserving domain fidelity and operational transparency.
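The summary describes an LLM orchestrator that decomposes user intent into subtasks and routes each to a heterogeneous DSM behind a unified interface. The paper's actual protocol and routing logic are not given here, so the following is only a minimal sketch under assumed names (`DSMRequest`, `DSMResponse`, `Orchestrator`, and the two mock capabilities are all hypothetical), with a trivial keyword matcher standing in for LLM-based decomposition:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical unified envelope: every heterogeneous DSM (forecaster, solver,
# fault locator, ...) is wrapped to accept and return the same message types.
@dataclass
class DSMRequest:
    task: str
    payload: dict = field(default_factory=dict)

@dataclass
class DSMResponse:
    ok: bool
    result: dict

class Orchestrator:
    """Toy orchestrator: decomposes an intent into subtasks and routes each
    subtask to the DSM registered for that capability."""

    def __init__(self) -> None:
        self.registry: Dict[str, Callable[[DSMRequest], DSMResponse]] = {}

    def register(self, capability: str,
                 dsm: Callable[[DSMRequest], DSMResponse]) -> None:
        self.registry[capability] = dsm

    def decompose(self, intent: str) -> List[str]:
        # Stand-in for LLM multi-step task decomposition: keyword rules,
        # purely for illustration.
        plan = []
        if "forecast" in intent:
            plan.append("load_forecast")
        if "dispatch" in intent:
            plan.append("optimal_dispatch")
        return plan

    def run(self, intent: str) -> List[DSMResponse]:
        # Adaptive routing reduces here to a registry lookup per subtask.
        return [self.registry[cap](DSMRequest(task=cap))
                for cap in self.decompose(intent)]

# Two mock DSMs exposed through the unified interface.
def forecaster(req: DSMRequest) -> DSMResponse:
    return DSMResponse(ok=True, result={"peak_mw": 42.0})

def dispatcher(req: DSMRequest) -> DSMResponse:
    return DSMResponse(ok=True, result={"schedule": ["unit_a", "unit_b"]})

orch = Orchestrator()
orch.register("load_forecast", forecaster)
orch.register("optimal_dispatch", dispatcher)
responses = orch.run("forecast tomorrow's load and compute the dispatch")
print([r.result for r in responses])
```

The point of the sketch is the shape of the design: DSMs stay independent, and only the thin request/response envelope and the capability registry need to be shared across models.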
📝 Abstract
With the integration of massive distributed energy resources and the widespread participation of novel market entities, the operation of active distribution networks (ADNs) is progressively evolving into a complex multi-scenario, multi-objective problem. Although expert engineers have developed numerous domain-specific models (DSMs) to address distinct technical problems, mastering, integrating, and orchestrating these heterogeneous DSMs still entails considerable overhead for ADN operators. Therefore, an intelligent approach is urgently required to unify these DSMs and enable efficient coordination. To address this challenge, this paper proposes the ADN-Agent architecture, which leverages a general large language model (LLM) to coordinate multiple DSMs, enabling adaptive intent recognition, task decomposition, and DSM invocation. Within the ADN-Agent, we design a novel communication mechanism that provides a unified and flexible interface for diverse heterogeneous DSMs. Finally, for language-intensive subtasks, we propose an automated training pipeline for fine-tuning small language models, thereby effectively enhancing the overall problem-solving capability of the system. Comprehensive comparisons and ablation experiments validate the efficacy of the proposed method and demonstrate that the ADN-Agent architecture outperforms existing LLM application paradigms.
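The abstract mentions an automated pipeline for fine-tuning small language models on language-intensive subtasks, without detailing its stages. As an illustration only, one common first stage of such a pipeline is converting logged, validated interactions into instruction-tuning records; the field names (`query`, `answer`, `accepted`) and the JSONL format below are assumptions, not the paper's specification:

```python
import json

def build_sft_records(logs):
    """Turn logged (query, answer) pairs from a language-intensive subtask
    into instruction-tuning records, keeping only validated outputs."""
    records = []
    for entry in logs:
        if not entry.get("accepted"):  # drop outputs the operator rejected
            continue
        records.append({
            "instruction": entry["query"],
            "output": entry["answer"],
        })
    return records

# Hypothetical interaction log for a report-drafting subtask.
logs = [
    {"query": "Summarize today's feeder alarms.",
     "answer": "Three overload alarms were recorded on feeder F2.",
     "accepted": True},
    {"query": "Draft a maintenance notice.",
     "answer": "(low-quality draft)",
     "accepted": False},
]

records = build_sft_records(logs)
# Serialize to JSONL, a format widely used by SFT tooling.
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```

Filtering on validation before serialization is the step that makes such a pipeline "automated" without letting low-quality generations leak into the SLM's training set.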