Optimization of Activity Batching Policies in Business Processes

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the multi-objective trade-off among waiting time, processing effort, and cost in activity batching for business processes. The authors propose an intervention-guided Pareto optimization framework that embeds three meta-heuristics (hill-climbing, simulated annealing, and reinforcement learning) to iteratively update the Pareto front while balancing solution diversity and convergence. A simulation-driven evaluation of intervention effects enables iterative refinement of batching policies. Experimental results show that the approach outperforms the same meta-heuristics without heuristic guidance in convergence toward the Pareto-optimal set, spread of the obtained solutions, and cycle time reduction. The framework provides an interpretable and scalable approach to multi-objective optimization of batching decisions.

📝 Abstract
In business processes, activity batching refers to packing multiple activity instances for joint execution. Batching allows managers to trade off cost and processing effort against waiting time. Larger and less frequent batches may lower costs by reducing processing effort and amortizing fixed costs, but they create longer waiting times. In contrast, smaller and more frequent batches reduce waiting times but increase fixed costs and processing effort. A batching policy defines how activity instances are grouped into batches and when each batch is activated. This paper addresses the problem of discovering batching policies that strike optimal trade-offs between waiting time, processing effort, and cost. The paper proposes a Pareto optimization approach that starts from a given set (possibly empty) of activity batching policies and generates alternative policies for each batched activity via intervention heuristics. Each heuristic identifies an opportunity to improve an activity's batching policy with respect to a metric (waiting time, processing time, cost, or resource utilization) and an associated adjustment to the activity's batching policy (the intervention). The impact of each intervention is evaluated via simulation. The intervention heuristics are embedded in an optimization meta-heuristic that triggers interventions to iteratively update the Pareto front of the interventions identified so far. The paper considers three meta-heuristics: hill-climbing, simulated annealing, and reinforcement learning. An experimental evaluation compares the proposed approach based on intervention heuristics against the same (non-heuristic guided) meta-heuristics baseline regarding convergence, diversity, and cycle time gain of Pareto-optimal policies.
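The abstract describes a meta-heuristic that iteratively updates a Pareto front over policies scored on waiting time, processing effort, and cost. The core operation such a loop needs is a dominance check and a front update. The sketch below is illustrative only (the objective tuple, function names, and minimization convention are assumptions, not the paper's implementation):

```python
from typing import List, Tuple

# Illustrative: each candidate batching policy is scored on three
# objectives to be minimized: (waiting time, processing effort, cost).
Objectives = Tuple[float, float, float]

def dominates(a: Objectives, b: Objectives) -> bool:
    """True if a is no worse than b on every objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_pareto_front(front: List[Objectives], candidate: Objectives) -> List[Objectives]:
    """Insert a candidate's scores into the front, discarding dominated points."""
    if any(dominates(p, candidate) for p in front):
        return front  # candidate is dominated; front unchanged
    # Keep only points the candidate does not dominate, then add it.
    return [p for p in front if not dominates(candidate, p)] + [candidate]
```

In the paper's setting, each candidate would come from applying an intervention heuristic to a batching policy, with its objective values estimated by simulation before the front update.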
Problem

Research questions and friction points this paper is trying to address.

Optimizing trade-offs between waiting time, processing effort, and cost in activity batching
Discovering optimal batching policies via Pareto optimization and intervention heuristics
Comparing meta-heuristics for policy convergence, diversity, and cycle time improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pareto optimization for batching policy discovery
Intervention heuristics to improve batching metrics
Meta-heuristics (hill-climbing, simulated annealing, and reinforcement learning) to drive the search
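Of the three meta-heuristics, simulated annealing has the simplest acceptance rule: improving moves are always taken, while worsening moves are accepted with a probability that decays with temperature. The toy loop below scalarizes the evaluation into a single cost for brevity, whereas the paper evaluates interventions via simulation against a Pareto front; `propose`, `cost`, and all parameters are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def anneal(initial, propose, cost, steps=500, t0=1.0, cooling=0.99, seed=0):
    """Minimal simulated-annealing loop. 'propose' plays the role of an
    intervention generating a neighbouring batching policy; 'cost' plays
    the role of a simulation-based evaluation (scalarized here)."""
    rng = random.Random(seed)
    current = best = initial
    t = t0
    for _ in range(steps):
        candidate = propose(current, rng)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with prob exp(-delta/t).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
        t *= cooling  # geometric cooling schedule
    return best
```

For example, minimizing `(x - 3) ** 2` with `propose = lambda x, rng: x + rng.choice([-1, 1])` converges to 3. Hill-climbing is the special case that rejects all worsening moves; the reinforcement-learning variant would instead learn which intervention to trigger from the observed effect on the front.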