🤖 AI Summary
Large language models (LLMs) face dual challenges in long-context reasoning: retrieval-based methods shrink the input but often miss critical evidence, while extending the context window strains selectivity and dilutes attention. Meanwhile, multi-agent pipelines (e.g., Chain of Agents, CoA) rely on free-form summarization for inter-agent communication, leading to decay of critical details and accumulation of errors. To address these issues, we propose COSMIR (Chain Orchestrated Structured Memory for Iterative Reasoning), a chain-style framework that replaces free-form summaries with a structured shared memory and a fixed worker micro-cycle. A Planner decomposes the query into checkable sub-questions, Worker agents extract evidence from input chunks and iteratively infer and refine against the shared memory, and a Manager synthesizes the final answer directly from that memory. This design significantly enhances long-range information aggregation, reasoning fidelity, and process auditability. On long-context question answering from the HELMET benchmark, COSMIR substantially reduces information propagation loss compared to the CoA baseline and achieves marked improvements in reasoning accuracy.
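The structured shared memory can be pictured as an append-only log of typed entries. Below is a minimal Python sketch; the fields (sub-question, chunk id, stage, content) are illustrative assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass

# Hypothetical memory record: every worker update is a typed, attributable
# entry rather than a free-form summary, so the reasoning trace is auditable.
@dataclass(frozen=True)
class MemoryEntry:
    sub_question: str  # which decomposed sub-problem this entry serves
    chunk_id: int      # which input chunk the worker was reading
    stage: str         # "extract", "infer", or "refine"
    content: str       # the evidence span or intermediate inference

log: list[MemoryEntry] = []
log.append(MemoryEntry("Where is Acme based?", 3, "extract",
                       "Acme is based in Zurich."))
# Auditing: replay exactly the extraction steps taken on one chunk.
trail = [e for e in log if e.stage == "extract" and e.chunk_id == 3]
```

Because each entry carries its provenance, a failure in the final answer can be traced back to the chunk and stage where it entered memory, which is what free-form inter-agent messages cannot offer.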
📝 Abstract
Reasoning over very long inputs remains difficult for large language models (LLMs). Common workarounds either shrink the input via retrieval (risking missed evidence), enlarge the context window (straining selectivity), or stage multiple agents to read in pieces. In staged pipelines (e.g., Chain of Agents, CoA), free-form summaries passed between agents can discard crucial details and amplify early mistakes. We introduce COSMIR (Chain Orchestrated Structured Memory for Iterative Reasoning), a chain-style framework that replaces ad hoc messages with a structured memory. A Planner agent first turns a user query into concrete, checkable sub-questions. Worker agents process chunks via a fixed micro-cycle (Extract, Infer, Refine), writing all updates to the shared memory. A Manager agent then synthesizes the final answer directly from the memory. This preserves step-wise read-then-reason benefits while changing both the communication medium (structured memory) and the worker procedure (fixed micro-cycle), yielding higher faithfulness, better long-range aggregation, and auditability. On long-context QA from the HELMET suite, COSMIR reduces propagation-stage information loss and improves accuracy over a CoA baseline.
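The Planner → Workers → Manager chain described above can be sketched end to end. In this minimal runnable version the LLM calls are replaced by simple token-overlap heuristics, and all names, thresholds, and the routing rule are illustrative assumptions rather than the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class SharedMemory:
    sub_questions: list[str]
    evidence: dict[str, list[str]] = field(default_factory=dict)
    inferences: dict[str, str] = field(default_factory=dict)

def plan(query: str) -> list[str]:
    # Stub Planner: split a conjunctive query into checkable sub-questions.
    return [part.strip() + "?" for part in query.rstrip("?").split(" and ")]

def overlap(sentence_tokens: set[str], sq: str) -> int:
    # Token overlap between a sentence and a sub-question (stand-in for an LLM).
    return len(sentence_tokens & set(sq.rstrip("?").lower().split()))

def worker(chunk: str, memory: SharedMemory) -> None:
    # Fixed micro-cycle over one chunk: Extract evidence, Infer an answer,
    # Refine the running inference in shared memory.
    for sent in filter(None, (s.strip() for s in chunk.split("."))):
        toks = set(sent.lower().split())
        best = max(memory.sub_questions, key=lambda sq: overlap(toks, sq))
        if overlap(toks, best) >= 2:  # crude relevance threshold (assumption)
            memory.evidence.setdefault(best, []).append(sent)  # Extract
            memory.inferences[best] = sent                     # Infer / Refine

def manager(memory: SharedMemory) -> str:
    # Synthesize directly from the structured memory, not free-form summaries.
    return "; ".join(memory.inferences.get(sq, "unknown")
                     for sq in memory.sub_questions)

memory = SharedMemory(sub_questions=plan("Who founded Acme and where is Acme based"))
for chunk in ["Acme was founded by Ada Lovelace.", "Acme is based in Zurich."]:
    worker(chunk, memory)
answer = manager(memory)
```

Even in this toy form, the key property holds: evidence found in chunk 1 survives unchanged into the final synthesis because it sits in structured memory, whereas a free-form summary chain could paraphrase it away.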