🤖 AI Summary
This paper studies online combinatorial optimization with structured, dependent inputs modeled via Markov Random Fields (MRFs). Traditional algorithms rely on independence assumptions about the input distribution, limiting their applicability in strongly correlated settings. We introduce MRFs into online combinatorial optimization for the first time to explicitly capture graph-structured dependencies among input variables, and we propose a unified algorithmic framework achieving a tight Θ(Δ)-competitive ratio, where Δ is the maximum weighted degree of the MRF graph. For covering-type minimization problems, the framework employs a reduction to the p-sample model; for maximization problems, including matching and XOS combinatorial auctions, it extends the "balanced pricing" mechanism to MRF-dependent inputs. In both regimes it attains O(Δ)-competitiveness, matching the lower bound, and thereby provides the first smooth transition from independent to strongly dependent inputs with provably optimal competitive guarantees.
📝 Abstract
Most existing work in online stochastic combinatorial optimization assumes that inputs are drawn from independent distributions -- a strong assumption that often fails in practice. At the other extreme, arbitrary correlations are equivalent to worst-case inputs via Yao's minimax principle, often making good algorithms impossible. This motivates the study of intermediate models that capture mild correlations while still permitting non-trivial algorithms.
In this paper, we study online combinatorial optimization under Markov Random Fields (MRFs), a well-established graphical model for structured dependencies. MRFs parameterize correlation strength via the maximum weighted degree $Δ$, smoothly interpolating between independence ($Δ = 0$) and full correlation ($Δ \to \infty$). While naïve approaches yield $e^{O(Δ)}$-competitive algorithms and there is an $Ω(Δ)$ hardness lower bound, we ask: when can we design tight $Θ(Δ)$-competitive algorithms?
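For concreteness, one standard pairwise parameterization of an MRF and its maximum weighted degree looks as follows (this is generic notation for illustration; the paper's exact potentials and weighting may differ):

```latex
P(x_1,\dots,x_n) \;\propto\; \exp\!\Big(\sum_{i\in V}\phi_i(x_i) \;+\; \sum_{(i,j)\in E}\psi_{ij}(x_i,x_j)\Big),
\qquad
\Delta \;=\; \max_{i\in V}\; \sum_{j:\,(i,j)\in E} \|\psi_{ij}\|_\infty .
```

When all pairwise potentials $\psi_{ij}$ vanish ($\Delta = 0$), the distribution factorizes and the $x_i$ are independent; as the incident edge weights grow, $\Delta \to \infty$ and the variables can become arbitrarily correlated.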
We present general techniques achieving $O(Δ)$-competitive algorithms for both minimization and maximization problems under MRF-distributed inputs. For minimization problems with coverage constraints (e.g., Facility Location and Steiner Tree), we reduce to the well-studied $p$-sample model. For maximization problems (e.g., matchings and combinatorial auctions with XOS buyers), we extend the "balanced prices" framework for online allocation problems to MRFs.
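To make the interpolation between independence and full correlation concrete, here is a minimal, hypothetical sketch (not code from the paper) of a pairwise binary MRF with Ising-style potentials and a Gibbs sampler. With no edges ($Δ = 0$) the variables are i.i.d. fair coins; with heavy edge weights on a cycle, neighboring variables become strongly correlated. All names and parameters here are illustrative assumptions.

```python
import math
import random

def max_weighted_degree(n, edges, weights):
    """Δ: max over vertices of the total absolute edge weight incident to it."""
    deg = [0.0] * n
    for (i, j), w in zip(edges, weights):
        deg[i] += abs(w)
        deg[j] += abs(w)
    return max(deg) if n > 0 else 0.0

def gibbs_sample(n, edges, weights, sweeps=100, seed=0):
    """Gibbs sampler for P(x) ∝ exp(Σ_{(i,j)} w_ij x_i x_j), x_i ∈ {-1, +1}."""
    rng = random.Random(seed)
    nbrs = [[] for _ in range(n)]
    for (i, j), w in zip(edges, weights):
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    x = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            local = sum(w * x[j] for j, w in nbrs[i])
            # Conditional of a single spin given the rest: sigmoid of twice the local field.
            p_plus = 1.0 / (1.0 + math.exp(-2.0 * local))
            x[i] = 1 if rng.random() < p_plus else -1
    return x

# Δ = 0: no edges, so samples are independent fair coins.
indep = gibbs_sample(200, [], [], seed=1)
# Large Δ: a strongly coupled cycle makes neighboring variables agree almost everywhere.
cycle = [(i, (i + 1) % 200) for i in range(200)]
corr = gibbs_sample(200, cycle, [3.0] * 200, seed=1)
```

The competitive ratios in the paper are stated in terms of exactly this kind of parameter: algorithms pay a factor growing with $Δ$, and recover the independent-input guarantees when $Δ = 0$.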