AI Summary
Active Feature Acquisition (AFA) dynamically selects the most informative unobserved features for each test instance in order to improve predictive performance. Existing approaches either rely on hard-to-train reinforcement learning or use myopic greedy criteria based on conditional mutual information. This paper proposes a supervised latent-variable modeling framework: it performs probabilistic inference over many plausible realizations of the unobserved features via stochastic encodings in a latent space, jointly capturing their complex dependencies with the target label. This enables non-greedy feature selection. By avoiding reinforcement learning altogether, the method sidesteps training instability while overcoming the local-optimality limitation inherent in greedy strategies. Extensive experiments on multiple synthetic and real-world datasets show that the approach consistently and significantly outperforms a diverse set of state-of-the-art baselines.
Abstract
Active Feature Acquisition is an instance-wise, sequential decision-making problem. The aim is to dynamically select which feature to measure next based on the current observations, independently for each test instance. Common approaches either use Reinforcement Learning, which suffers from training difficulties, or greedily maximize the conditional mutual information between the label and the unobserved features, which yields myopic acquisitions. To address these shortcomings, we introduce a latent variable model, trained in a supervised manner. Acquisitions are made by reasoning about the features across many possible unobserved realizations in a stochastic latent space. Extensive evaluation on a large range of synthetic and real datasets demonstrates that our approach reliably outperforms a diverse set of baselines.
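To make the contrast concrete, the greedy baseline described above can be illustrated on a toy problem: binary label, features that are independently noisy copies of the label, and a greedy rule that acquires whichever unobserved feature minimizes the expected posterior entropy of the label (equivalently, maximizes the conditional mutual information with the label given the current observations). This is a minimal sketch of the myopic baseline the paper argues against, not the paper's latent-variable method; all function names and the toy generative model are hypothetical.

```python
import numpy as np

# Toy model (assumption, not from the paper): p(y=1)=0.5, and each binary
# feature x_j is y flipped with probability noise[j], independently given y.

def label_posterior(obs, noise):
    """p(y=1 | observed features). obs maps feature index -> observed bit."""
    logit = 0.0
    for j, x in obs.items():
        p1 = (1 - noise[j]) if x == 1 else noise[j]  # p(x_j | y=1)
        p0 = 1 - p1                                  # p(x_j | y=0), by symmetry
        logit += np.log(p1) - np.log(p0)
    return 1.0 / (1.0 + np.exp(-logit))

def entropy(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def greedy_next_feature(obs, noise):
    """Myopic CMI acquisition: pick the unobserved feature whose observation
    minimizes the expected entropy of p(y | obs, x_j)."""
    p1 = label_posterior(obs, noise)
    best_j, best_h = None, np.inf
    for j in range(len(noise)):
        if j in obs:
            continue
        # Marginal p(x_j = 1 | obs), averaging over the label posterior.
        px1 = p1 * (1 - noise[j]) + (1 - p1) * noise[j]
        h = sum(pxj * entropy(label_posterior({**obs, j: xj}, noise))
                for xj, pxj in ((1, px1), (0, 1 - px1)))
        if h < best_h:
            best_j, best_h = j, h
    return best_j
```

With noise levels `[0.4, 0.1, 0.3]`, the greedy rule first acquires feature 1 (the least noisy one), then feature 2, one step at a time. Its myopia shows up when features are informative only in combination: scoring each feature in isolation can never detect that, which is the gap the paper's non-greedy latent-variable approach targets.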