Combinatorial Selection with Costly Information

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies sequential stochastic combinatorial optimization with costly information acquisition: it jointly optimizes solution quality and observation cost, modeling each information source as an acyclic Markov decision process (MDP) and imposing a matroid feasibility constraint on the final selection. Whereas prior work on such bandit superprocesses solves only fairly restrictive special cases, the authors propose a novel cost-amortization bound coupled with a local approximation framework, which lets approximate solutions for arbitrary component MDPs be composed without loss into a global approximation. The approach goes beyond the structural restrictions of classical models such as Pandora's Box and gives a unified treatment of a broad class of variants, including combinatorial Pandora's Box with Optional or Partial Inspection and a newly introduced Weighing Scale problem, yielding constant-factor approximation algorithms for both reward maximization and cost minimization.

📝 Abstract
We consider a class of optimization problems over stochastic variables where the algorithm can learn information about the value of any variable through a series of costly steps; we model this information acquisition process as a Markov Decision Process (MDP). The algorithm's goal is to minimize the cost of its solution plus the cost of information acquisition, or alternately, maximize the value of its solution minus the cost of information acquisition. Such bandit superprocesses have been studied previously but solutions are known only for fairly restrictive special cases. We develop a framework for approximate optimization of bandit superprocesses that applies to arbitrary processes with a matroid (and in some cases, more general) feasibility constraint. Our framework establishes a bound on the optimal cost through a novel cost amortization; it then couples this bound with a notion of local approximation that allows approximate solutions for each component MDP in the superprocess to be composed without loss into a global approximation. We use this framework to obtain approximately optimal solutions for several variants of bandit superprocesses for both maximization and minimization. We obtain new approximations for combinatorial versions of the previously studied Pandora's Box with Optional Inspection and Pandora's Box with Partial Inspection; as well as approximation algorithms for a new problem that we call the Weighing Scale problem.
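The classical single-selection Pandora's Box baseline that the abstract generalizes can be sketched as follows. This is a minimal illustration of Weitzman's reservation-value index policy for finite-support value distributions, not the paper's algorithm; the function names and the bisection tolerance are ours.

```python
import random

def reservation_value(values, probs, cost, lo=0.0, hi=1e6, tol=1e-9):
    """Solve E[(V - sigma)^+] = cost for sigma by bisection.

    `values`/`probs` give a finite-support distribution of the box's value,
    `cost` is the price of opening (inspecting) the box.
    """
    def surplus(sigma):
        return sum(p * max(v - sigma, 0.0) for v, p in zip(values, probs))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if surplus(mid) > cost:   # surplus is decreasing in sigma
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def pandora_policy(boxes, rng):
    """Open boxes in decreasing reservation value; claim the best one seen.

    `boxes` is a list of (values, probs, cost) triples. Returns the realized
    objective: value of the claimed box minus total inspection cost.
    """
    order = sorted(range(len(boxes)),
                   key=lambda i: -reservation_value(*boxes[i]))
    best = 0.0    # value of the best box opened so far
    spent = 0.0   # total inspection cost paid
    for i in order:
        values, probs, cost = boxes[i]
        sigma = reservation_value(values, probs, cost)
        if best >= sigma:  # Weitzman's stopping rule: no remaining box is worth opening
            break
        spent += cost
        v = rng.choices(values, probs)[0]  # observe the box's realized value
        best = max(best, v)
    return best - spent
```

For a box worth 10 with probability 1/2 (else 0) and opening cost 1, the reservation value solves 0.5 · (10 − σ) = 1, i.e. σ = 8. The paper's setting replaces each single "open" step with an arbitrary acyclic MDP and the single-choice rule with a matroid constraint.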
Problem

Research questions and friction points this paper is trying to address.

Optimizing stochastic variables with costly information acquisition
Approximate solutions for bandit superprocesses with matroid constraints
New approximations for combinatorial Pandora's Box variants
Innovation

Methods, ideas, or system contributions that make the work stand out.

Approximate optimization for acyclic MDPs with matroid constraints
Cost amortization to bound optimal solution cost
Local approximation for composing global solutions
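For context on the matroid feasibility constraint named above: when all values are already known, a max-weight independent set of any matroid is found exactly by the greedy algorithm, and it is this structure that the paper's composition framework exploits in the stochastic, costly-information setting. A minimal sketch against a generic independence oracle (the uniform-matroid oracle in the usage note is just an example):

```python
def greedy_max_weight(elements, weight, is_independent):
    # Generic matroid greedy: scan elements in decreasing weight and keep
    # any element whose addition preserves independence. For a matroid
    # independence oracle this returns a maximum-weight basis.
    chosen = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(chosen + [e]):
            chosen.append(e)
    return chosen
```

For instance, with the rank-2 uniform matroid `is_independent = lambda s: len(s) <= 2` and identity weights, `greedy_max_weight([3, 1, 4, 1, 5], lambda e: e, ...)` keeps the two largest elements, 5 and 4.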