🤖 AI Summary
Existing single-agent chain-of-thought approaches struggle with multi-stage collaborative reasoning in business workflows due to complex cross-domain prompt engineering, while multi-agent systems incur excessive token overhead and suffer from weak semantic coherence across stages.
Method: We propose Cochain, a lightweight cooperative prompting framework introducing the novel “chain-style collaboration” paradigm. It tightly integrates knowledge graph–based structural modeling with a dynamic, retrievable prompt tree to enable stage-aware collaborative prompt generation.
Contribution/Results: Cochain significantly reduces token consumption while preserving cross-stage semantic consistency. It outperforms state-of-the-art prompt engineering techniques and multi-agent methods across diverse multi-task benchmarks. Expert evaluations demonstrate that Cochain—deployed on lightweight LLMs—exceeds GPT-4’s performance on targeted business reasoning tasks, validating its efficiency, scalability, and practical applicability.
📝 Abstract
Large Language Models (LLMs) have demonstrated impressive performance in executing complex reasoning tasks. Chain-of-thought effectively enhances reasoning capabilities by unlocking the potential of large models, while multi-agent systems provide more comprehensive solutions by integrating the collective intelligence of multiple agents. However, both approaches face significant limitations. A single agent with chain-of-thought faces collaboration challenges due to the inherent complexity of designing cross-domain prompts. Meanwhile, multi-agent systems consume substantial tokens and inevitably dilute the primary problem, which is particularly problematic in business workflow tasks. To address these challenges, we propose Cochain, a collaborative prompting framework that effectively solves the business workflow collaboration problem by combining knowledge and prompts at a reduced cost. Specifically, we construct an integrated knowledge graph that incorporates knowledge from multiple stages. Furthermore, by maintaining and retrieving a prompt tree, we can obtain prompt information relevant to other stages of the business workflow. We perform extensive evaluations of Cochain across multiple datasets, demonstrating that Cochain outperforms all baselines in both prompt engineering and multi-agent LLM settings. Additionally, expert evaluation results indicate that a small model combined with Cochain outperforms GPT-4.
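The abstract describes maintaining and retrieving a prompt tree to obtain prompt information relevant to other workflow stages. As a rough illustrative sketch only (the paper does not specify its data structures here, so the `PromptNode` fields and the stage-matching retrieval rule below are our assumptions, not Cochain's actual implementation), stage-aware retrieval over such a tree might look like:

```python
# Illustrative sketch: a toy prompt tree with stage-tagged nodes and a
# depth-first retrieval of prompts for a target workflow stage.
# The node schema and matching rule are assumptions for illustration.
from dataclasses import dataclass, field


@dataclass
class PromptNode:
    stage: str                 # workflow stage this prompt belongs to
    prompt: str                # the prompt text itself
    children: list = field(default_factory=list)


def retrieve_stage_prompts(root: PromptNode, target_stage: str) -> list:
    """Walk the tree depth-first, collecting prompts tagged with target_stage."""
    found, stack = [], [root]
    while stack:
        node = stack.pop()
        if node.stage == target_stage:
            found.append(node.prompt)
        stack.extend(node.children)
    return found


# Toy business workflow: design -> (implementation, review)
tree = PromptNode("design", "Outline the module interfaces.", [
    PromptNode("implementation", "Write code following the agreed interfaces."),
    PromptNode("review", "Check the code against the design constraints."),
])

print(retrieve_stage_prompts(tree, "review"))
```

In this toy version, an agent working at one stage can pull in prompts tagged for a sibling stage, which gestures at how cross-stage prompt information could be shared without re-running a full multi-agent exchange.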