One Request, Multiple Experts: LLM Orchestrates Domain Specific Models via Adaptive Task Routing

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
In active distribution networks (ADNs), heterogeneous domain-specific models (DSMs) face significant challenges in unified orchestration and collaborative execution under multi-scenario, multi-objective operational conditions. Method: This paper proposes ADN-Agent, an intelligent collaborative architecture centered on a large language model (LLM) that performs user intent recognition, multi-step task decomposition, and adaptive routing; introduces a standardized communication protocol to unify interfaces of heterogeneous DSMs; and develops a lightweight small language model (SLM) fine-tuning pipeline tailored for language-intensive subtasks. Contribution/Results: Experiments demonstrate substantial improvements over existing LLM-based paradigms in scheduling efficiency, task completion rate, and cross-model collaboration stability. The architecture establishes a scalable, interpretable, and multi-model collaborative framework for intelligent operation and control of ADNs, enabling robust integration of specialized models while preserving domain fidelity and operational transparency.
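The orchestration loop described above — intent recognition, task decomposition, and adaptive routing to domain-specific models — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the DSM names are invented, and a keyword matcher stands in for the LLM's decomposition and routing decisions.

```python
# Hypothetical sketch of adaptive task routing in an ADN-Agent-style system.
# An orchestrator splits a request into subtasks and dispatches each to a
# registered domain-specific model (DSM). The keyword matcher below is a toy
# stand-in for the LLM's routing step.

DSM_REGISTRY = {
    "voltage_control": lambda task: f"voltage plan for: {task}",
    "load_forecast": lambda task: f"forecast for: {task}",
}

def route(subtask: str) -> str:
    # Stand-in for LLM routing: pick the DSM whose keyword appears in the task.
    name = "voltage_control" if "voltage" in subtask else "load_forecast"
    return DSM_REGISTRY[name](subtask)

def orchestrate(request: str) -> list[str]:
    # Stand-in for LLM task decomposition: split the request on ';'.
    subtasks = [s.strip() for s in request.split(";")]
    return [route(s) for s in subtasks]

print(orchestrate("regulate voltage at bus 12; forecast feeder load"))
```

In the real architecture both the decomposition and the routing choice would be produced by the LLM rather than string matching; the registry pattern only illustrates how heterogeneous DSMs can sit behind one dispatch point.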

📝 Abstract
With the integration of massive distributed energy resources and the widespread participation of novel market entities, the operation of active distribution networks (ADNs) is progressively evolving into a complex multi-scenario, multi-objective problem. Although expert engineers have developed numerous domain-specific models (DSMs) to address distinct technical problems, mastering, integrating, and orchestrating these heterogeneous DSMs still entails considerable overhead for ADN operators. Therefore, an intelligent approach is urgently required to unify these DSMs and enable efficient coordination. To address this challenge, this paper proposes the ADN-Agent architecture, which leverages a general large language model (LLM) to coordinate multiple DSMs, enabling adaptive intent recognition, task decomposition, and DSM invocation. Within the ADN-Agent, we design a novel communication mechanism that provides a unified and flexible interface for diverse heterogeneous DSMs. Finally, for language-intensive subtasks, we propose an automated training pipeline for fine-tuning small language models, thereby effectively enhancing the overall problem-solving capability of the system. Comprehensive comparisons and ablation experiments validate the efficacy of the proposed method and demonstrate that the ADN-Agent architecture outperforms existing LLM application paradigms.
Problem

Research questions and friction points this paper is trying to address.

Orchestrating heterogeneous domain models for active distribution networks
Reducing operational overhead through adaptive task routing
Integrating multiple expert systems via unified communication interface
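The third point above — a unified communication interface for heterogeneous expert systems — can be illustrated with a shared message envelope. This is a hypothetical schema, assuming a simple request/response contract; the field names (`task_type`, `payload`, `trace`) are assumptions, not the paper's protocol.

```python
from dataclasses import dataclass, field

# Hypothetical message envelope for a unified DSM interface: every model,
# regardless of its internal implementation, is called through this schema.

@dataclass
class DSMRequest:
    task_type: str                                 # e.g. "voltage_control"
    payload: dict                                  # task-specific inputs
    constraints: dict = field(default_factory=dict)

@dataclass
class DSMResponse:
    status: str                                    # "ok" or "error"
    result: dict
    trace: list = field(default_factory=list)      # steps, for interpretability

def call_dsm(handler, req: DSMRequest) -> DSMResponse:
    # Adapter: wraps any callable DSM behind the shared schema, so the
    # orchestrator never handles model-specific exceptions or formats.
    try:
        return DSMResponse(status="ok", result=handler(req.payload),
                           trace=[req.task_type])
    except Exception as exc:
        return DSMResponse(status="error", result={"message": str(exc)})
```

Wrapping each DSM in an adapter like `call_dsm` is one common way to get the "unified and flexible interface" the abstract mentions: the orchestrator only ever sees `DSMRequest`/`DSMResponse`, so adding a new expert model does not change the routing logic.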
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM coordinates multiple domain-specific models
Unified communication interface for heterogeneous models
Automated training pipeline fine-tunes small language models
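The last innovation — an automated pipeline that prepares fine-tuning data for small language models — can be outlined in three stages: synthesize task examples, format them as instruction pairs, and hand them to a trainer. The sketch below covers only the data-preparation stages; the function names are illustrative assumptions and no real training is performed.

```python
# Hypothetical outline of an automated SLM fine-tuning data pipeline.
# Stage names and formats are assumptions, not the paper's implementation.

def generate_examples(task: str, n: int) -> list[dict]:
    # Stand-in for LLM-driven data synthesis for a language-intensive subtask.
    return [{"instruction": f"{task} case {i}", "output": f"answer {i}"}
            for i in range(n)]

def to_chat_format(examples: list[dict]) -> list[dict]:
    # Convert raw pairs into a chat-style format commonly used for
    # supervised fine-tuning of small language models.
    return [
        {"messages": [
            {"role": "user", "content": ex["instruction"]},
            {"role": "assistant", "content": ex["output"]},
        ]}
        for ex in examples
    ]

def build_dataset(task: str, n: int = 3) -> list[dict]:
    return to_chat_format(generate_examples(task, n))

print(len(build_dataset("summarize grid alarms")))  # → 3
```

In a full pipeline the output of `build_dataset` would be passed to a fine-tuning framework; keeping synthesis and formatting as separate pure functions makes each stage easy to swap or audit.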
Xu Yang
State Key Laboratory of Power Systems, Department of Electrical Engineering, Tsinghua University, Beijing 100084, China
Chenhui Lin
State Key Laboratory of Power Systems, Department of Electrical Engineering, Tsinghua University, Beijing 100084, China
Haotian Liu
State Key Laboratory of Power Systems, Department of Electrical Engineering, Tsinghua University, Beijing 100084, China
Qi Wang
Hong Kong Polytechnic University, Hong Kong, China
Yue Yang
State Key Laboratory of High-Efficiency and High-Quality Conversion for Electric Power, Hefei University of Technology, Hefei 230009, China
Wenchuan Wu