COSMIR: Chain Orchestrated Structured Memory for Iterative Reasoning over Long Context

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face dual challenges in long-context reasoning: retrieval-based methods risk missing critical evidence, while extending the context window dilutes attention; meanwhile, multi-agent pipelines (e.g., Chain of Agents, CoA) rely on free-form summarization for inter-agent communication, leading to loss of critical details and error accumulation. To address these issues, the paper proposes COSMIR (Chain Orchestrated Structured Memory for Iterative Reasoning), a framework that replaces free-form summarization with a planning–execution–integration micro-loop over a structured shared memory. COSMIR coordinates reasoning through sub-question decomposition, chunked evidence extraction, iterative inference, and memory synthesis, which improves long-range information aggregation, reasoning fidelity, and process auditability. On long-context question answering from the HELMET benchmark, COSMIR substantially reduces information propagation loss compared to the CoA baseline and achieves marked improvements in reasoning accuracy.

📝 Abstract
Reasoning over very long inputs remains difficult for large language models (LLMs). Common workarounds either shrink the input via retrieval (risking missed evidence), enlarge the context window (straining selectivity), or stage multiple agents to read in pieces. In staged pipelines (e.g., Chain of Agents, CoA), free-form summaries passed between agents can discard crucial details and amplify early mistakes. We introduce COSMIR (Chain Orchestrated Structured Memory for Iterative Reasoning), a chain-style framework that replaces ad hoc messages with a structured memory. A Planner agent first turns a user query into concrete, checkable sub-questions. Worker agents process chunks via a fixed micro-cycle — Extract, Infer, Refine — writing all updates to the shared memory. A Manager agent then synthesizes the final answer directly from the memory. This preserves step-wise read-then-reason benefits while changing both the communication medium (structured memory) and the worker procedure (fixed micro-cycle), yielding higher faithfulness, better long-range aggregation, and auditability. On long-context QA from the HELMET suite, COSMIR reduces propagation-stage information loss and improves accuracy over a CoA baseline.
Problem

Research questions and friction points this paper is trying to address.

Addresses reasoning difficulties over long inputs for LLMs
Reduces information loss in multi-agent staged pipelines
Improves accuracy and faithfulness in long-context QA tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured memory replaces ad hoc messages
Planner agent breaks query into sub-questions
Worker agents follow fixed micro-cycle procedure
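The pipeline described above can be sketched in minimal form. This is a hypothetical illustration, not the paper's implementation: the Planner, Worker, and Manager here use trivial string heuristics in place of LLM calls, so only the control flow (plan → per-chunk Extract/Infer/Refine writing to shared memory → synthesize from memory) is shown. All function and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SharedMemory:
    # Structured memory shared by all agents (illustrative schema).
    sub_questions: list = field(default_factory=list)
    evidence: dict = field(default_factory=dict)    # sub-question -> list of supporting facts
    inferences: dict = field(default_factory=dict)  # sub-question -> current best answer

def planner(query):
    # Stand-in for the Planner agent: split the query into checkable sub-questions.
    return [q.strip() + "?" for q in query.rstrip("?").split(" and ")]

def worker(chunk, memory):
    # Fixed micro-cycle over one chunk: Extract -> Infer -> Refine.
    for sq in memory.sub_questions:
        key = sq.rstrip("?").split()[-1].lower()
        # Extract: pull sentences mentioning the sub-question's key term.
        hits = [s.strip() for s in chunk.split(".") if key in s.lower()]
        if hits:
            memory.evidence.setdefault(sq, []).extend(hits)
            # Infer/Refine: update the note to the latest supported claim.
            memory.inferences[sq] = hits[-1]

def manager(memory):
    # Synthesize the final answer directly from memory, not from free-form summaries.
    return {sq: memory.inferences.get(sq, "no evidence") for sq in memory.sub_questions}

memory = SharedMemory(sub_questions=planner("Who built the bridge and when was it opened"))
chunks = ["The bridge was designed in 1901.",
          "Roebling built the bridge. It opened in 1903."]
for chunk in chunks:
    worker(chunk, memory)
print(manager(memory))
```

Because every Worker writes into the same typed memory instead of handing a summary to the next agent, later chunks can refine earlier inferences and the Manager can audit which evidence supports each sub-question.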