🤖 AI Summary
This paper investigates whether source-channel separation remains optimal for lossy compression and transmission under perceptual quality constraints. Considering both block-level (strong-sense) and per-symbol-average (weak-sense) perceptual fidelity measures, the authors use information-theoretic analysis, rate-distortion theory, and random coding constructions to establish the first rigorous characterization: under strong-sense perceptual constraints, separation is optimal only when the encoder and decoder share common randomness, whereas under weak-sense constraints, separation is optimal regardless of common randomness, extending the classical separation theorem beyond its traditional domain. The paper derives a necessary and sufficient condition for separability in joint distortion-perception coding, showing how the structural properties of perceptual measures govern communication architecture design. These results provide theoretical grounding for semantic communication systems, bridging information theory with human-centric quality assessment.
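As a point of reference, the separability condition can be phrased through the rate-distortion-perception function. The sketch below uses the standard Blau-Michaeli-style formulation from the rate-distortion-perception literature; the distortion measure $d$, perception divergence $\phi$, and the exact form of the condition are illustrative assumptions and may differ in detail from the paper's definitions.

```latex
% A sketch of the separation condition, assuming a Blau–Michaeli-style
% rate-distortion-perception (RDP) function; the paper's exact
% definitions may differ.
%
% RDP function: the least rate needed to meet distortion level D and
% perception level P simultaneously,
\[
  R(D,P) \;=\; \min_{\substack{p_{\hat{X}\mid X}\,:\ \mathbb{E}[d(X,\hat{X})] \le D,\\[2pt] \phi(p_X,\, p_{\hat{X}}) \le P}} I(X;\hat{X}),
\]
% where d is a per-letter distortion measure and \phi is a divergence
% between the source and reconstruction distributions (the perception
% index). A separation-based scheme then delivers distortion D and
% perception P over a channel of capacity C whenever
\[
  R(D,P) \;\le\; C .
\]
% Per the summary above, this matching condition characterizes optimal
% performance under weak-sense perception constraints, and under
% strong-sense constraints only when common randomness is available.
```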
📝 Abstract
It is well known that separation between lossy source coding and channel coding is asymptotically optimal under classical additive distortion measures. Recently, coding under a new class of quality measures, often referred to as perception or realism, has attracted significant attention due to its close connection to neural generative models and semantic communications. In this work, we revisit source-channel separation under joint distortion-perception constraints. We show that when the perception quality is measured at the block level, i.e., in the strong sense, the optimality of separation still holds as long as common randomness is shared between the encoder and the decoder; however, separation is no longer optimal when such common randomness is unavailable. In contrast, when the perception quality is measured as a per-symbol average, i.e., in the weak sense, the optimality of separation holds regardless of the availability of common randomness.
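To make the strong/weak distinction concrete, the following is a common formalization from the rate-distortion-perception literature, not taken verbatim from the paper; the blocklength $n$, divergence $\phi$, and threshold $P$ are illustrative notation.

```latex
% Strong-sense (block-level) perception constraint: the divergence is
% evaluated between the n-letter joint distributions of the source
% block and the reconstruction block,
\[
  \phi\big(p_{X^n},\, p_{\hat{X}^n}\big) \;\le\; P .
\]
% Weak-sense (per-symbol average) perception constraint: the divergence
% is averaged over the single-letter marginal distributions,
\[
  \frac{1}{n}\sum_{i=1}^{n} \phi\big(p_{X_i},\, p_{\hat{X}_i}\big) \;\le\; P .
\]
% For typical choices of \phi (e.g., f-divergences), marginalization
% cannot increase the divergence, so the strong-sense constraint implies
% the weak-sense one. Enforcing distributional closeness over the whole
% block is what makes common randomness essential for separation to
% remain optimal in the strong-sense setting.
```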