🤖 AI Summary
Current LLM serving systems process user prompts monolithically, overlooking intra-query semantic parallelism (i.e., the mutual independence of subtasks), which inflates latency and forecloses optimization opportunities. To address this, we propose the first benchmark explicitly designed for intra-query parallelism, comprising over 37,000 real-world prompts, and introduce a structured annotation schema grounded in task templates, shared context, and iteration inputs. We also release the first standardized evaluation platform for assessing structure-aware execution. Methodologically, we combine LLM-assisted prompting with rule-based multilingual validation to extract parallelizable subtasks, then quantify latency reduction, structural consistency, and semantic fidelity via serial-versus-parallel evaluation. Experiments show that over 75% of prompts admit successful parallel decomposition, yielding up to 5× speedups on translation, comprehension, and comparative-analysis tasks with negligible quality degradation.
📝 Abstract
LLM serving systems typically treat user prompts as monolithic inputs, optimizing inference through decoding tricks or inter-query batching. However, many real-world prompts contain latent semantic parallelism: decomposable structures whose subtasks can be executed independently to reduce latency while preserving meaning. We introduce PARALLELPROMPT, the first benchmark for measuring intra-query parallelism in natural user prompts. Our dataset comprises over 37,000 real-world prompts from public LLM chat logs, each annotated with a structured schema capturing task templates, shared context, and iteration inputs. These schemas are extracted using LLM-assisted prompting with rule-based multilingual validation. To evaluate the benefits of decomposition, we provide an execution suite that benchmarks serial vs. parallel strategies, measuring latency, structural adherence, and semantic fidelity. Our results show that intra-query parallelism can be successfully parsed in over 75% of prompts in our curated dataset, unlocking up to 5× speedups on tasks like translation, comprehension, and comparative analysis, with minimal quality degradation. By releasing this benchmark, curation pipeline, and evaluation suite, we provide the first standardized testbed for studying structure-aware execution in LLM serving pipelines.
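The schema and serial-vs-parallel comparison described above can be sketched in a few lines. The snippet below is an illustrative assumption, not the released evaluation suite: the `task_template` / `shared_context` / `iteration_inputs` field names mirror the paper's description but are hypothetical, and `call_llm` is a stub that sleeps to stand in for per-call latency.

```python
import asyncio
import time

# Hypothetical decomposition schema (names are illustrative assumptions):
# a shared context, a per-item task template, and independent iteration inputs.
SCHEMA = {
    "task_template": "Translate '{item}' into French.",
    "shared_context": "You are a professional translator.",
    "iteration_inputs": ["hello", "goodbye", "thank you"],
}

async def call_llm(prompt: str) -> str:
    """Stand-in for an LLM API call; sleeps to model per-call latency."""
    await asyncio.sleep(0.1)
    return f"response({prompt})"

def instantiate(schema: dict) -> list[str]:
    # Expand the template over the iteration inputs, prefixing shared context.
    return [
        schema["shared_context"] + " " + schema["task_template"].format(item=item)
        for item in schema["iteration_inputs"]
    ]

async def run_serial(schema: dict) -> list[str]:
    # Baseline: execute each instantiated subtask one after another.
    results = []
    for prompt in instantiate(schema):
        results.append(await call_llm(prompt))
    return results

async def run_parallel(schema: dict) -> list[str]:
    # Parallel strategy: issue all subtasks concurrently, so wall-clock
    # latency is roughly one call instead of N.
    return list(await asyncio.gather(*(call_llm(p) for p in instantiate(schema))))

start = time.perf_counter()
serial = asyncio.run(run_serial(SCHEMA))
t_serial = time.perf_counter() - start

start = time.perf_counter()
parallel = asyncio.run(run_parallel(SCHEMA))
t_parallel = time.perf_counter() - start

# Semantic fidelity check: both strategies return the same outputs.
assert serial == parallel
print(f"serial {t_serial:.2f}s vs parallel {t_parallel:.2f}s")
```

With three stubbed 0.1 s calls, the serial path takes roughly 0.3 s while the parallel path takes roughly 0.1 s, mirroring how speedup scales with the number of independent subtasks.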