PokeeResearch: Effective Deep Research via Reinforcement Learning from AI Feedback and Robust Reasoning Scaffold

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing tool-augmented LLM-based research agents suffer from shallow retrieval, weak alignment metrics, and brittle tool invocation. This paper introduces PokeeResearch-7B, a 7B-parameter deep research agent trained with annotation-free Reinforcement Learning from AI Feedback (RLAIF). It integrates a chain-of-thought-driven multi-call reasoning scaffold, a self-verification mechanism, and an adaptive recovery strategy for tool-call failures, substantially improving factual accuracy, citation faithfulness, and instruction adherence. Crucially, the authors establish an end-to-end LLM-based reward-modeling and self-optimization loop, eliminating the need for human annotations. Evaluated on ten mainstream deep research benchmarks, PokeeResearch-7B achieves state-of-the-art performance among models of comparable scale. Both the model weights and inference code are publicly released.

📝 Abstract
Tool-augmented large language models (LLMs) are emerging as deep research agents, systems that decompose complex queries, retrieve external evidence, and synthesize grounded responses. Yet current agents remain limited by shallow retrieval, weak alignment metrics, and brittle tool-use behavior. We introduce PokeeResearch-7B, a 7B-parameter deep research agent built under a unified reinforcement learning framework for robustness, alignment, and scalability. PokeeResearch-7B is trained by an annotation-free Reinforcement Learning from AI Feedback (RLAIF) framework to optimize policies using LLM-based reward signals that capture factual accuracy, citation faithfulness, and instruction adherence. A chain-of-thought-driven multi-call reasoning scaffold further enhances robustness through self-verification and adaptive recovery from tool failures. Across 10 popular deep research benchmarks, PokeeResearch-7B achieves state-of-the-art performance among 7B-scale deep research agents. This highlights that careful reinforcement learning and reasoning design can produce efficient, resilient, and research-grade AI agents. The model and inference code are open-sourced under the MIT license at https://github.com/Pokee-AI/PokeeResearchOSS.
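The "adaptive recovery from tool failures" mentioned in the abstract can be sketched as a retry wrapper around each tool invocation: transient errors are retried with backoff, and a final failure is surfaced to the reasoning loop instead of crashing the rollout. The function name, retry policy, and error format below are assumptions for illustration, not the paper's implementation.

```python
import time

def call_tool_with_recovery(tool, args, max_retries=3, backoff=1.0):
    """Sketch of adaptive recovery for brittle tool calls: retry with
    exponential backoff, then return a structured error the agent's
    self-verification step can react to (e.g. by rephrasing the query
    or switching tools)."""
    for attempt in range(max_retries):
        try:
            return tool(**args)
        except Exception as exc:
            if attempt == max_retries - 1:
                # Surface the failure to the reasoning scaffold rather
                # than raising, so the episode can continue gracefully.
                return {"error": str(exc)}
            time.sleep(backoff * (2 ** attempt))
```

Returning an error payload instead of raising keeps the multi-call reasoning loop in control: the model sees the failure as an observation and can decide its next action.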
Problem

Research questions and friction points this paper is trying to address.

Enhancing deep research agents via reinforcement learning
Improving robustness through multi-call reasoning scaffolds
Optimizing tool-augmented LLMs for research-grade performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning from AI feedback for alignment
Chain-of-thought multi-call reasoning for robustness
Unified framework optimizing accuracy and tool recovery