🤖 AI Summary
To address the shortcomings of large language models (LLMs) on multi-hop question answering, which stem from factual hallucination and limited reasoning capability, this paper proposes Mujica, a multi-hop joint agent architecture, and MyGO, a novel reinforcement learning method. Mujica explicitly models dependencies between subproblems by decomposing a question into a directed acyclic graph (DAG) of subquestions and coordinates its retrieval and reasoning modules accordingly. MyGO replaces conventional policy gradient updates with maximum likelihood estimation over progressively sampled near-optimal trajectories, enabling stable training without reference models or gradient rescaling. Evaluated on multiple multi-hop QA benchmarks, the approach significantly improves the performance of mainstream LLMs while reducing training cost and computational overhead. Moreover, it generalizes well across model backbones and downstream tasks, making it broadly applicable to diverse LLM families.
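The DAG-based decomposition described above implies that subquestions must be answered in an order that respects their dependencies. The sketch below is illustrative only (the function name, dependency format, and string ids are assumptions, not the paper's interface); it shows a standard topological sort of the kind a DAG planner would need before dispatching subquestions to a worker.

```python
from collections import defaultdict, deque

def topological_order(deps):
    """Return subquestion ids in an order that respects dependencies.

    deps maps each subquestion id to the list of ids it depends on.
    This is a generic Kahn's-algorithm sketch, not the paper's code.
    """
    indegree = {q: len(d) for q, d in deps.items()}
    dependents = defaultdict(list)
    for q, d in deps.items():
        for parent in d:
            dependents[parent].append(q)
    queue = deque(q for q, n in indegree.items() if n == 0)
    order = []
    while queue:
        q = queue.popleft()
        order.append(q)
        for nxt in dependents[q]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(deps):
        raise ValueError("dependency graph contains a cycle")
    return order

# Example: q3 needs the answers to q1 and q2; q2 needs q1.
deps = {"q1": [], "q2": ["q1"], "q3": ["q1", "q2"]}
print(topological_order(deps))  # ['q1', 'q2', 'q3']
```

A planner producing such a graph can resolve independent subquestions in parallel and feed earlier answers into later ones.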
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable versatility; however, owing to their lack of factual knowledge, their application to Question Answering (QA) tasks remains hindered by hallucination. While Retrieval-Augmented Generation mitigates this issue by integrating external knowledge, existing approaches rely heavily on in-context learning, whose performance is constrained by the fundamental reasoning capabilities of LLMs. In this paper, we propose Mujica, a Multi-hop Joint Intelligence for Complex Question Answering, comprising a planner that decomposes questions into a directed acyclic graph of subquestions and a worker that resolves questions via retrieval and reasoning. Additionally, we introduce MyGO (Minimalist policy Gradient Optimization), a novel reinforcement learning method that replaces traditional policy gradient updates with Maximum Likelihood Estimation (MLE) by sampling trajectories from an asymptotically optimal policy. MyGO eliminates the need for gradient rescaling and reference models, ensuring stable and efficient training. Empirical results across multiple datasets demonstrate the effectiveness of Mujica-MyGO in enhancing multi-hop QA performance for various LLMs, offering a scalable and resource-efficient solution for complex QA tasks.
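The abstract's core training idea, fitting the policy by MLE on trajectories drawn from a (near-)optimal sampling distribution rather than by reweighted policy gradients, can be caricatured in a few lines. The sketch below is a toy tabular stand-in, not the paper's method: the function name, the keep-top-fraction selection rule, and the count-based "policy" are all assumptions for illustration; MyGO itself operates on LLM token log-likelihoods.

```python
def mle_on_best_trajectories(trajectories, rewards, keep_frac=0.5):
    """Toy sketch of MLE-style training on high-reward samples.

    Keep the top `keep_frac` of trajectories by reward (a crude proxy
    for sampling from a near-optimal policy), then fit a tabular
    action distribution by maximum likelihood, i.e. normalized counts.
    No reference model and no gradient rescaling are involved.
    """
    ranked = sorted(zip(rewards, trajectories), key=lambda x: -x[0])
    n_keep = max(1, int(len(ranked) * keep_frac))
    kept = [traj for _, traj in ranked[:n_keep]]
    counts = {}
    for traj in kept:
        for action in traj:
            counts[action] = counts.get(action, 0) + 1
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}  # MLE estimate

# Two sampled trajectories; only the high-reward one is kept.
policy = mle_on_best_trajectories(
    trajectories=[["retrieve", "answer"], ["answer"]],
    rewards=[1.0, 0.0],
)
print(policy)  # {'retrieve': 0.5, 'answer': 0.5}
```

The point of the caricature is the training signal: selected samples are simply treated as supervised targets, which is why no reference model or gradient-scale correction is needed.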