🤖 AI Summary
This work addresses the challenge that large language models (LLMs) struggle to learn implicit decision-making logic from noisy and semantically diverse human demonstrations in complex technical service scenarios, a difficulty compounded by the high computational cost of conventional training approaches. To this end, the authors propose a lightweight adaptation framework that captures implicit reasoning through planning-aware trajectory modeling and enhanced decision inference. They construct a dual-filtered, multi-ground-truth dataset to mitigate response ambiguity and introduce a hybrid reward mechanism combining an LLM-based judge with a lightweight reranker. The proposed method improves agent stability, generalization, and training efficiency, achieving alignment performance on par with standard LLM-as-a-Judge methods on real-world cloud service tasks while substantially reducing training costs.
📝 Abstract
Adapting Large Language Models to complex technical service domains is constrained by the absence of explicit cognitive chains in human demonstrations and by the inherent ambiguity arising from the diversity of valid responses. These limitations severely hinder agents from internalizing latent decision dynamics and generalizing effectively. Moreover, practical adaptation is often impeded by the prohibitive resource and time costs of standard training paradigms. To overcome these challenges while ensuring computational efficiency, we propose a lightweight adaptation framework with three key contributions. (1) Latent Logic Augmentation: We introduce Planning-Aware Trajectory Modeling and Decision Reasoning Augmentation to bridge the gap between surface-level supervision and latent decision logic, strengthening the stability of Supervised Fine-Tuning alignment. (2) Robust Noise Reduction: We construct a Multiple Ground Truths dataset through a dual-filtering method that reduces noise by validating diverse responses, thereby capturing their semantic diversity. (3) Lightweight Adaptation: We design a Hybrid Reward mechanism that fuses an LLM-based judge with a lightweight relevance-based Reranker to distill high-fidelity reward signals at a lower computational cost than standard LLM-as-a-Judge reinforcement learning. Empirical evaluations on real-world cloud service tasks, conducted across semantically diverse settings, demonstrate that our framework achieves stability and performance gains through Latent Logic Augmentation and Robust Noise Reduction. Concurrently, our Hybrid Reward mechanism matches the alignment of standard LLM-as-a-Judge methods with reduced training time, underscoring its practical value for deploying technical service agents.
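
To make the dual-filtering idea in contribution (2) concrete, here is a minimal sketch; the abstract does not specify the two filters, so this assumes one cheap relevance filter plus one stronger validity check (e.g., a judge model), and all names (`build_mgt_dataset`, `relevance`, `is_valid`) are hypothetical.

```python
from typing import Callable, Dict, List

def build_mgt_dataset(
    samples: Dict[str, List[str]],           # query -> candidate responses
    relevance: Callable[[str, str], float],  # filter 1: cheap relevance scorer in [0, 1]
    is_valid: Callable[[str, str], bool],    # filter 2: stronger validity check (e.g., a judge model)
    min_relevance: float = 0.5,
) -> Dict[str, List[str]]:
    """Dual-filter candidates into a Multiple Ground Truths (MGT) dataset.

    A candidate survives only if it clears BOTH filters, so each query keeps
    a set of validated yet diverse reference answers instead of a single one.
    """
    mgt: Dict[str, List[str]] = {}
    for query, candidates in samples.items():
        kept = [
            resp for resp in candidates
            if relevance(query, resp) >= min_relevance  # filter 1: drop off-topic noise
            and is_valid(query, resp)                   # filter 2: drop invalid answers
        ]
        if kept:  # only keep queries that retain at least one ground truth
            mgt[query] = kept
    return mgt
```

Under this reading, a candidate becomes a ground truth only if it survives both passes, which is what lets the dataset retain multiple valid answers per query while still filtering out noise.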
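
Contribution (3) likewise suggests a natural fusion pattern, though the exact rule is not given here; the sketch below assumes the lightweight reranker scores every rollout and the expensive LLM judge is queried only inside the reranker's uncertain band, with a weighted blend on those hard cases. `hybrid_reward` and both scorer callables are hypothetical names.

```python
from typing import Callable

def hybrid_reward(
    prompt: str,
    response: str,
    reranker_score: Callable[[str, str], float],   # cheap relevance model, score in [0, 1]
    llm_judge_score: Callable[[str, str], float],  # expensive LLM-as-a-Judge, score in [0, 1]
    low: float = 0.3,
    high: float = 0.8,
    alpha: float = 0.5,
) -> float:
    """Fuse a lightweight reranker with an LLM judge into one reward signal.

    The reranker scores every sample; the judge is invoked only when the
    reranker is uncertain (score strictly between `low` and `high`), which
    cuts expensive judge calls while keeping high-fidelity rewards on hard cases.
    """
    r = reranker_score(prompt, response)
    if r <= low or r >= high:
        return r  # reranker is confident: use its score directly, no judge call
    j = llm_judge_score(prompt, response)
    return alpha * r + (1.0 - alpha) * j  # blend on ambiguous cases

# Example with stub scorers (for illustration only):
reward = hybrid_reward(
    "How do I restart the service?",
    "Run `systemctl restart myservice` on the host.",
    reranker_score=lambda p, r: 0.6,   # pretend the reranker is unsure
    llm_judge_score=lambda p, r: 0.9,  # pretend the judge approves
)
print(reward)  # 0.75
```

Gating the judge on reranker uncertainty is one plausible way to realize the cost reduction the abstract claims: most rollouts never trigger an LLM call, while ambiguous ones still receive a high-fidelity verdict.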