🤖 AI Summary
Irregular (ragged) data, common in NLP, scientific measurement, and autonomous AI agents, lacks native support in existing workflow systems, hindering efficient, deterministic parallel processing. Method: This paper introduces Operon, an incremental workflow engine explicitly designed for ragged structures. Its core innovation is a formal model of named dimensions with explicit dependency relations, enabling static partial-shape verification and dynamic task scheduling while guaranteeing deterministic, confluent parallel execution. The system is implemented in Rust and integrates a domain-specific language (DSL), a multi-queue runtime, persistent state recovery, and static shape checking. Contribution/Results: Experiments demonstrate a 14.94× reduction in baseline overhead compared to an existing workflow engine, while end-to-end output rates remain near-linear as workloads scale. These advances significantly improve the efficiency of large-scale ML data-generation pipelines over irregular data.
📝 Abstract
Modern data processing workflows frequently encounter ragged data: collections with variable-length elements that arise naturally in domains like natural language processing, scientific measurements, and autonomous AI agents. Existing workflow engines lack native support for tracking the shapes and dependencies inherent to ragged data, forcing users to manage complex indexing and dependency bookkeeping manually. We present Operon, a Rust-based workflow engine that addresses these challenges through a novel formalism of named dimensions with explicit dependency relations. Operon provides a domain-specific language in which users declare pipelines with dimension annotations that are statically verified for correctness, while the runtime system dynamically schedules tasks as data shapes are incrementally discovered during execution. We formalize the mathematical foundation for reasoning about partial shapes and prove that Operon's incremental construction algorithm guarantees deterministic and confluent execution in parallel settings. The system's explicit modeling of partially-known states enables robust persistence and recovery mechanisms, while its per-task multi-queue architecture achieves efficient parallelism across heterogeneous task types. Empirical evaluation demonstrates that Operon reduces baseline overhead by 14.94× relative to an existing workflow engine while maintaining near-linear end-to-end output rates as workloads scale, making it particularly suitable for large-scale data generation pipelines in machine learning applications.
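To make the core idea concrete, the following is a minimal, hypothetical sketch (in Rust, matching the paper's implementation language, but not Operon's actual API) of what "partial shapes over named dimensions" could look like: a ragged dimension's per-element extents start unknown and are discovered incrementally at runtime, and downstream work over that dimension becomes schedulable only once every extent is known. All names here (`PartialShape`, `declare`, `discover`, `is_complete`) are illustrative assumptions.

```rust
use std::collections::HashMap;

// Hypothetical illustration, NOT Operon's real API: a partial shape maps
// each named dimension to a vector of per-element extents, where `None`
// marks an extent that upstream tasks have not yet produced.
#[derive(Debug, Default)]
struct PartialShape {
    dims: HashMap<String, Vec<Option<usize>>>,
}

impl PartialShape {
    // Declare a ragged dimension with `slots` entries, all initially unknown.
    fn declare(&mut self, dim: &str, slots: usize) {
        self.dims.insert(dim.to_string(), vec![None; slots]);
    }

    // Record an extent discovered during execution (e.g. a document's
    // sentence count becomes known once its split task finishes).
    fn discover(&mut self, dim: &str, index: usize, extent: usize) {
        if let Some(v) = self.dims.get_mut(dim) {
            v[index] = Some(extent);
        }
    }

    // A dimension is complete once every extent is known; only then can a
    // reduction over that dimension be scheduled.
    fn is_complete(&self, dim: &str) -> bool {
        self.dims
            .get(dim)
            .map_or(false, |v| v.iter().all(Option::is_some))
    }

    // Total element count along a dimension; `None` while incomplete.
    fn total(&self, dim: &str) -> Option<usize> {
        self.dims.get(dim)?.iter().copied().sum()
    }
}

fn main() {
    let mut shape = PartialShape::default();
    shape.declare("sentence", 3); // 3 documents, sentence counts unknown
    shape.discover("sentence", 0, 4);
    shape.discover("sentence", 2, 2);
    assert!(!shape.is_complete("sentence")); // document 1 still pending
    shape.discover("sentence", 1, 7);
    assert!(shape.is_complete("sentence"));
    assert_eq!(shape.total("sentence"), Some(13));
    println!("total sentences: {:?}", shape.total("sentence"));
}
```

In this sketch, the gap between a declared dimension and its fully discovered extents is exactly the "partially-known state" the abstract refers to: making it an explicit value is what allows scheduling decisions, persistence, and recovery to be reasoned about deterministically.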