🤖 AI Summary
This paper studies the lossy quantum-classical source coding problem with quantum side information (QC-QSI): compressing the classical data obtained by measuring a quantum source so that a decoder, assisted by quantum side information, can reconstruct it under a block-error constraint. Methodologically, it introduces a posterior-channel distortion model, replacing the single-letter distortion observable with a single-letter backward (posterior) channel to capture reconstruction error. It derives single-letter achievable inner bounds on the asymptotic performance limits, in terms of quantum and classical mutual information quantities of the given posterior channel, for both the QC-QSI setting and its classical counterpart, lossy classical source coding with classical side information (C-CSI). Furthermore, it establishes a connection between rate-channel and rate-distortion theory, showing that a rate-channel compression protocol attains the optimal rate-distortion function for a specific distortion measure and level. These results offer theoretical groundwork relevant to quantum sensing and compression learning.
📝 Abstract
In this work, we address the lossy quantum-classical source coding with quantum side information (QC-QSI) problem. The task is to compress the classical information about a quantum source, obtained by performing a measurement, while incurring a bounded reconstruction error. Here, the decoder is allowed to use the side information to recover the classical data obtained from measurements on the source states. We introduce a new formulation based on a backward (posterior) channel, replacing the single-letter distortion observable with a single-letter posterior channel to capture reconstruction error. Unlike the rate-distortion framework, this formulation imposes a block error constraint. An analogous formulation is developed for the lossy classical source coding with classical side information (C-CSI) problem. We derive an inner bound on the asymptotic performance limit in terms of single-letter quantum and classical mutual information quantities of the given posterior channel for the QC-QSI and C-CSI cases, respectively. Furthermore, we establish a connection between rate-distortion and rate-channel theory, showing that a rate-channel compression protocol attains the optimal rate-distortion function for a specific distortion measure and level.
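For readers comparing the two frameworks, the standard classical rate-distortion function with decoder side information (the Wyner–Ziv function) can be written as below. This is well-known background, not the paper's bound: the paper's rate-channel formulation instead constrains the reconstruction through a single-letter posterior channel with a block error requirement, and its exact single-letter expressions are given in the paper itself.

```latex
% Wyner--Ziv rate--distortion function (classical background sketch):
% U is an auxiliary variable with U -- X -- Y a Markov chain,
% g is the decoder's reproduction map using U and side information Y.
R_{\mathrm{WZ}}(D)
  \;=\;
  \min_{\substack{p(u \mid x),\; g:\,\mathcal{U}\times\mathcal{Y}\to\hat{\mathcal{X}} \\
                  \mathbb{E}\,[\,d(X,\, g(U,Y))\,] \,\le\, D}}
  I(X; U \mid Y)
```

Under the paper's posterior-channel distortion measure, the claimed result is that a rate-channel protocol achieves the optimal value of the analogous rate-distortion function at the corresponding distortion level.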