ThinkFL: Self-Refining Failure Localization for Microservice Systems via Reinforcement Fine-Tuning

📅 2025-04-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Microservice systems suffer from frequent, subtle failures due to their numerous components and intricate interdependencies; conventional small-scale models exhibit poor generalization, while existing large language model (LLM)-based approaches are constrained by rigid inference pipelines and prohibitively high computational overhead, compromising both accuracy and efficiency. Method: We propose a lightweight LLM-based fault localization framework fine-tuned via Group Relative Policy Optimization (GRPO), introducing the first multi-stage GRPO training paradigm that integrates a recursive reasoning executor with a multi-factor dynamic scoring mechanism, endowing the model with autonomous path exploration and self-correction capabilities. Contribution/Results: Evaluated on real-world microservice datasets, our method achieves state-of-the-art fault localization accuracy while reducing end-to-end latency from minutes to seconds, significantly enhancing practical deployability in industrial settings.
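
The GRPO fine-tuning the summary refers to scores a group of sampled completions relative to each other rather than against a learned value baseline. A minimal sketch of that group-relative advantage computation (function names and the example rewards are illustrative, not from the paper):

```python
# Sketch of the group-relative advantage used in GRPO
# (Group Relative Policy Optimization): each sampled completion's
# reward is normalized against the mean and std of its own group.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each reward in a sampled group to zero mean, unit scale."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Rewards for four sampled localization traces of the same failure case
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Because advantages sum to zero within a group, completions are pushed apart only by how they compare to sibling samples, which is what removes the need for a separate critic model.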

📝 Abstract
As modern microservice systems grow increasingly popular and complex (often consisting of hundreds or even thousands of fine-grained, interdependent components), they are becoming more susceptible to frequent and subtle failures. Ensuring system reliability therefore hinges on accurate and efficient failure localization. Traditional failure localization approaches based on small models lack the flexibility to adapt to diverse failure scenarios, while recent LLM-based methods suffer from two major limitations: they often rely on rigid invocation workflows that constrain the model's ability to dynamically explore optimal localization paths, and they require resource-intensive inference, making them cost-prohibitive for real-world deployment. To address these challenges, we explore the use of reinforcement fine-tuning to equip lightweight LLMs with reasoning and self-refinement capabilities, significantly improving the cost-effectiveness and adaptability of LLM-based failure localization. We begin with an empirical study to identify three key capabilities essential for accurate localization. Building on these insights, we propose a progressive multi-stage GRPO fine-tuning framework, which integrates a multi-factor failure localization grader and a recursion-of-thought actor module. The resulting model, ThinkFL, not only outperforms existing state-of-the-art LLMs and baseline methods in localization accuracy but also reduces end-to-end localization latency from minutes to seconds, demonstrating strong potential for real-world applications.
Problem

Research questions and friction points this paper is trying to address.

Adapt lightweight LLMs for dynamic failure localization in microservices
Reduce resource-intensive inference costs in LLM-based localization methods
Improve accuracy and speed of failure localization in complex systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement fine-tuning for lightweight LLMs
Progressive multi-stage GRPO fine-tuning framework
Multi-factor failure localization grader integration
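
A multi-factor grader like the one listed above can be pictured as a reward function that blends several scoring signals into one scalar. The factors, weights, and names below are assumptions for illustration only; the paper's actual grader may differ:

```python
# Hypothetical multi-factor failure localization grader: combines
# root-cause correctness, reasoning-trace validity, and exploration
# efficiency into a single scalar reward for GRPO. All weights are
# illustrative assumptions, not values from the paper.
def grade_localization(predicted, ground_truth, trace_valid, steps, max_steps=10):
    # Primary signal: did the model localize the correct root cause?
    correctness = 1.0 if predicted == ground_truth else 0.0
    # Secondary signal: reward well-formed recursive reasoning traces
    format_bonus = 0.2 if trace_valid else 0.0
    # Mild bonus for shorter exploration paths (faster localization)
    efficiency = max(0.0, 1.0 - steps / max_steps) * 0.3
    return correctness + format_bonus + efficiency

reward = grade_localization("checkout-svc", "checkout-svc",
                            trace_valid=True, steps=2)
```

Combining signals this way lets the fine-tuning stage reward not just the final answer but also how the model explored the call graph to reach it.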