🤖 AI Summary
To address the inefficiency, information loss, and high computational cost that large language models (LLMs) face in long-text query tasks such as enterprise document analysis and financial report understanding, this paper proposes OkraLong, a novel workflow orchestration framework. OkraLong introduces a fine-grained, dynamic workflow coordination mechanism built from three tightly integrated modules: an analyzer, an organizer, and an executor, overcoming the limitations of conventional static or coarse-grained adaptive approaches. It synergistically combines task-state modeling, dynamic retrieval scheduling, context-aware execution, and lightweight RAG optimization. Evaluated across multiple long-text question-answering benchmarks, OkraLong achieves significant improvements in answer accuracy while substantially reducing inference overhead, enabling joint optimization of precision and efficiency.
📝 Abstract
Large Language Models (LLMs) face challenges in efficiently processing long-text queries, as seen in applications such as enterprise document analysis and financial report comprehension. Conventional solutions employ long-context processing or Retrieval-Augmented Generation (RAG), but they suffer from prohibitive input costs or incomplete information. Recent advances adopt context compression and dynamic retrieval loops, yet still sacrifice critical details or incur iterative overhead. To address these limitations, we propose OkraLong, a novel framework that flexibly optimizes the entire processing workflow. Unlike prior static or coarse-grained adaptive strategies, OkraLong performs fine-grained orchestration through three synergistic components: the analyzer, the organizer, and the executor. The analyzer characterizes the task state, which guides the organizer in dynamically scheduling the workflow; the executor then carries out the scheduled steps and generates the final answer. Experimental results demonstrate that OkraLong not only enhances answer accuracy but also achieves cost-effectiveness across a variety of datasets.
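The analyzer–organizer–executor loop described above can be sketched as follows. This is a minimal illustrative mock-up, not OkraLong's actual implementation: the state fields, the word-count heuristic, and the stubbed retriever and generator are all assumptions made for demonstration.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TaskState:
    # Hypothetical task state; OkraLong's real state representation may differ.
    query: str
    complexity: str                      # e.g. "simple" or "multi_hop"
    evidence: List[str] = field(default_factory=list)

def analyzer(query: str) -> TaskState:
    """Characterize the task state (toy heuristic: long queries are multi-hop)."""
    complexity = "multi_hop" if len(query.split()) > 12 else "simple"
    return TaskState(query=query, complexity=complexity)

def retrieve(state: TaskState) -> TaskState:
    """Stand-in for a real retrieval call against a document index."""
    state.evidence.append(f"passage for: {state.query}")
    return state

def organizer(state: TaskState) -> List[Callable[[TaskState], TaskState]]:
    """Dynamically schedule workflow steps based on the analyzed task state."""
    steps = [retrieve]
    if state.complexity == "multi_hop":
        steps.append(retrieve)  # schedule an extra retrieval round
    return steps

def executor(state: TaskState) -> str:
    """Run generation over the gathered evidence (stubbed here)."""
    return f"answer({state.query}) using {len(state.evidence)} passages"

def run(query: str) -> str:
    state = analyzer(query)
    for step in organizer(state):
        state = step(state)
    return executor(state)

print(run("What drove Q3 revenue growth across the two subsidiaries in the annual report?"))
```

The key point the sketch illustrates is the fine-grained coupling: the organizer's schedule is derived from the analyzer's state rather than fixed in advance, so simpler queries incur fewer retrieval (and hence token) costs.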