Actively Inferring Optimal Measurement Sequences

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the dual challenges of high measurement cost and data-protection sensitivity in high-dimensional physical-quantity estimation, this paper proposes an active sequential inference framework built on the latent space of a variational autoencoder (VAE). Methodologically, it introduces a partially observed VAE encoder that maps incomplete measurements directly to the latent representation of the complete data; a generative virtual-measurement feedback mechanism that evaluates candidate measurements online, closing the loop on the measurement policy; and a dynamic measurement-selection strategy combining a convolutional Hadamard pattern measurement basis with Bayesian sequential decision-making. Evaluated on Fashion MNIST, the framework selects useful patterns within about 10 measurements and achieves high-fidelity reconstruction, outperforming per-sample stochastic variational inference in both reconstruction accuracy and computational efficiency at very low sampling rates while supporting data minimisation.

📝 Abstract
Measurement of a physical quantity such as light intensity is an integral part of many reconstruction and decision scenarios but can be costly in terms of acquisition time, invasion of or damage to the environment, and storage. Data minimisation and compliance with data protection laws are also important considerations. Where a range of measurements can be made, some may be more informative and more compliant with the overall measurement objective than others. We develop an active sequential inference algorithm that uses the low-dimensional representational latent space of a variational autoencoder (VAE) to choose which measurement to make next. Our aim is to recover high-dimensional data by making as few measurements as possible. We adapt the VAE encoder to map partial data measurements onto the latent space of the complete data. The algorithm draws samples from this latent space and uses the VAE decoder to generate data conditional on the partial measurements. Estimated measurements are made on the generated data and fed back through the partial VAE encoder to the latent space, where they can be evaluated prior to making a real measurement. Starting from no measurements and a normal prior on the latent space, we consider alternative strategies for choosing the next measurement and for updating the predictive posterior, which serves as the prior for the next step. The algorithm is illustrated using the Fashion MNIST dataset and a novel convolutional Hadamard pattern measurement basis. We see that useful patterns are chosen within 10 steps, leading to the convergence of the guiding generative images. Compared with using stochastic variational inference to infer the parameters of the posterior distribution for each generated data point individually, the partial VAE framework can efficiently process batches of generated data and obtains superior results with minimal measurements.
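The measure-generate-select loop described in the abstract can be sketched in miniature. This is a toy numpy illustration, not the paper's implementation: the trained VAE is replaced by a fixed linear decoder, the partial encoder by a least-squares solve, and the information-gain criterion by the variance of virtual measurements over generated samples. All names (`decode`, `partial_encode`, dimensions, sample counts) are hypothetical stand-ins.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n a power of two),
    # playing the role of the paper's Hadamard pattern measurement basis.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)

# Toy stand-in for the VAE decoder: a fixed linear map from a
# low-dimensional latent z to a high-dimensional "image" x.
d_latent, d_data = 4, 16
decoder_W = rng.standard_normal((d_data, d_latent))

def decode(z):
    return z @ decoder_W.T

def partial_encode(measurements, patterns):
    # Least-squares stand-in for the partial VAE encoder: map the
    # measurements made so far to an estimate of the full latent code.
    if not patterns:
        return np.zeros(d_latent)          # normal prior, centred at zero
    A = np.stack(patterns) @ decoder_W     # (num_measurements, d_latent)
    z, *_ = np.linalg.lstsq(A, np.array(measurements), rcond=None)
    return z

H = hadamard(d_data)                       # candidate measurement patterns
x_true = decode(rng.standard_normal(d_latent))   # hidden scene to recover

patterns, measurements = [], []
for step in range(6):
    # 1. Encode the partial data, then sample latents around the estimate
    #    and decode them into a batch of candidate "generative images".
    z_hat = partial_encode(measurements, patterns)
    z_samples = z_hat + 0.5 * rng.standard_normal((32, d_latent))
    x_gen = decode(z_samples)
    # 2. Score each unused pattern by the spread of its virtual
    #    measurements on the generated batch (a crude info-gain proxy).
    used = {tuple(p) for p in patterns}
    scores = [(np.var(x_gen @ h), h) for h in H if tuple(h) not in used]
    _, best = max(scores, key=lambda s: s[0])
    # 3. Make the real measurement with the chosen pattern.
    patterns.append(best)
    measurements.append(float(x_true @ best))

x_rec = decode(partial_encode(measurements, patterns))
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

Because the toy measurements are noiseless and the latent space is only 4-dimensional, a handful of informative patterns already pins down the latent code exactly; the paper's contribution is making the analogous loop work with a learned, nonlinear VAE on real image data.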
Problem

Research questions and friction points this paper is trying to address.

Minimizes data acquisition costs
Optimizes measurement sequence selection
Recovers high dimensional data efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

active sequential inference algorithm
variational autoencoder latent space
minimal measurement data recovery