Lightweight Adaptation for LLM-based Technical Service Agent: Latent Logic Augmentation and Robust Noise Reduction

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that large language models (LLMs) struggle to effectively learn implicit decision-making logic from noisy and semantically diverse human demonstrations in complex technical service scenarios, compounded by the high computational cost of conventional training approaches. To this end, the authors propose a lightweight adaptation framework that captures implicit reasoning through planning-aware trajectory modeling and enhanced decision inference. They construct a dual-filtered, multi-ground-truth dataset to mitigate response ambiguity and introduce a hybrid reward mechanism combining an LLM-based discriminator with a lightweight reranker. The proposed method significantly improves agent stability, generalization, and training efficiency, achieving alignment performance on par with standard LLM discriminators in real-world cloud service tasks while substantially reducing training costs.
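The dual-filtering idea mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a candidate response is kept as an additional ground truth only if it passes both a cheap similarity filter against the existing references and a stricter validity check (which in the paper would be LLM-based; here a placeholder rule stands in). All function names and thresholds are illustrative assumptions.

```python
# Minimal sketch of a dual-filtered Multiple Ground Truths set.
# Names and thresholds are illustrative assumptions, not the paper's code.

def similarity_filter(candidate: str, references: list[str],
                      threshold: float = 0.5) -> bool:
    """Cheap first pass: Jaccard token overlap against any reference."""
    cand_tokens = set(candidate.lower().split())
    for ref in references:
        ref_tokens = set(ref.lower().split())
        if not cand_tokens or not ref_tokens:
            continue
        overlap = len(cand_tokens & ref_tokens) / len(cand_tokens | ref_tokens)
        if overlap >= threshold:
            return True
    return False

def validity_filter(candidate: str) -> bool:
    """Second pass: stand-in for an LLM-based validity judgment."""
    return len(candidate.split()) >= 3  # placeholder rule

def dual_filter(candidates: list[str], references: list[str]) -> list[str]:
    """Keep only candidates that pass BOTH filters."""
    return [c for c in candidates
            if similarity_filter(c, references) and validity_filter(c)]
```

The point of requiring both filters is that each catches a different failure mode: the similarity filter rejects off-topic responses, while the validity check rejects on-topic but degenerate ones.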

📝 Abstract
Adapting Large Language Models to complex technical service domains is constrained by the absence of explicit cognitive chains in human demonstrations and the inherent ambiguity arising from the diversity of valid responses. These limitations severely hinder agents from internalizing latent decision dynamics and generalizing effectively. Moreover, practical adaptation is often impeded by the prohibitive resource and time costs associated with standard training paradigms. To overcome these challenges and ensure computational efficiency, we propose a lightweight adaptation framework comprising three key contributions. (1) Latent Logic Augmentation: We introduce Planning-Aware Trajectory Modeling and Decision Reasoning Augmentation to bridge the gap between surface-level supervision and latent decision logic. These approaches strengthen the stability of Supervised Fine-Tuning alignment. (2) Robust Noise Reduction: We construct a Multiple Ground Truths dataset through a dual-filtering method that reduces noise by validating diverse responses, thereby capturing their semantic diversity. (3) Lightweight Adaptation: We design a Hybrid Reward mechanism that fuses an LLM-based judge with a lightweight relevance-based Reranker to distill high-fidelity reward signals while reducing the computational cost compared to standard LLM-as-a-Judge reinforcement learning. Empirical evaluations on real-world Cloud service tasks, conducted across semantically diverse settings, demonstrate that our framework achieves stability and performance gains through Latent Logic Augmentation and Robust Noise Reduction. Concurrently, our Hybrid Reward mechanism achieves alignment comparable to standard LLM-as-a-Judge methods with reduced training time, underscoring its practical value for deploying technical service agents.
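One way to read the Hybrid Reward mechanism described in the abstract is as a cost-aware fusion: a lightweight reranker scores every rollout cheaply, and the expensive LLM judge is consulted only when the reranker is uncertain. The sketch below illustrates that reading under stated assumptions; the scoring functions, thresholds, and fusion weight are all hypothetical stand-ins, not the authors' implementation.

```python
# Hypothetical hybrid-reward sketch: cheap reranker everywhere,
# expensive LLM judge only on ambiguous cases. All components are
# illustrative stand-ins for the models described in the paper.

def reranker_score(response: str, query: str) -> float:
    """Stand-in for a lightweight relevance model: query-term coverage in [0, 1]."""
    q_tokens = set(query.lower().split())
    r_tokens = set(response.lower().split())
    return len(q_tokens & r_tokens) / len(q_tokens) if q_tokens else 0.0

def llm_judge_score(response: str, query: str) -> float:
    """Stand-in for an LLM-as-a-Judge call (the expensive path)."""
    return 1.0 if query.lower().split()[0] in response.lower() else 0.0

def hybrid_reward(response: str, query: str,
                  low: float = 0.2, high: float = 0.8,
                  alpha: float = 0.5) -> float:
    """Use confident reranker scores directly; fuse ambiguous ones
    with a judge score, skipping judge calls on clear-cut cases."""
    r = reranker_score(response, query)
    if r <= low or r >= high:  # reranker is confident: no judge call
        return r
    j = llm_judge_score(response, query)
    return alpha * r + (1 - alpha) * j
```

The design choice worth noting is the confidence band: reward fidelity is preserved where it matters (ambiguous rollouts) while the judge's per-call cost is avoided on the bulk of clearly good or clearly bad samples.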
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Technical Service Agents
Latent Decision Logic
Response Ambiguity
Computational Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent Logic Augmentation
Robust Noise Reduction
Lightweight Adaptation
Hybrid Reward
Multiple Ground Truths
Authors

Yi Yu, Graduate School of Advanced Science and Engineering at Hiroshima University (Multimodal learning, Generative modeling, Multimedia, AI Music)
Junzhuo Ma, School of Mathematics and Sciences, Fudan University, Shanghai, China
Chenghuang Shen, Shanghai Center for Mathematical Sciences, Fudan University, Shanghai, China
Xingyan Liu, Alibaba Group, Hangzhou, China
Jing Gu, School of Mathematics and Sciences, Fudan University, Shanghai, China
Hangyi Sun, School of Mathematics and Sciences, Fudan University, Shanghai, China
Guangquan Hu, Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
Jianfeng Liu, Alibaba Group, Hangzhou, China
Weiting Liu, Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
Mingyue Pu, Alibaba Group, Hangzhou, China
Yu Wang, Alibaba (Computer science, Mathematics)
Zhengdong Xiao, Alibaba Group, Hangzhou, China
Rui Xie, Alibaba Group, Hangzhou, China
Longjiu Luo, Alibaba Group, Hangzhou, China
Qianrong Wang, Alibaba Group, Hangzhou, China
Gurong Cui, Alibaba Group, Hangzhou, China
Honglin Qiao, Alibaba Group, Hangzhou, China
Wenlian Lu, Professor of Mathematics, Fudan University (Neural Networks, Complex Networks, Dynamical Systems)