🤖 AI Summary
This work develops lower bounds on the capacity of the Gaussian channel under pointwise additive input constraints, where cost constraints are imposed on every admissible input vector (not merely in expectation) through a generalized cost condition that also accommodates sliding-window costs and, in particular, correlation constraints. The method of types, originally a discrete-alphabet tool, is extended to continuous alphabets and combined with the saddle-point method of integration, large deviations theory, and the entropy power inequality to derive two unified classes of capacity lower bounds, each resting on an exact evaluation of the volume exponent of the constraint set. The resulting bounds are analytically tractable and computationally feasible, cover continuous inputs as well as (for the second class) discrete input alphabets, and extend naturally to a range of practically relevant constraint scenarios. Numerical examples indicate that the proposed bounds improve on existing bounds in the literature.
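For intuition, here is a sketch in our own notation, not taken from the paper: in the single-constraint case ($k = 1$, $m = 1$), the volume exponent of the constraint set admits a standard Chernoff/saddle-point form, which the entropy power inequality then converts into a capacity lower bound of the first (EPI-based) type.

```latex
% Volume exponent of the single-constraint set
% A_n = { x in R^n : (1/n) \sum_i \varphi(x_i) \le \Gamma }
% (standard saddle-point form; notation is ours, not the paper's):
\[
  v(\Gamma) \;=\; \min_{s \ge 0}\Bigl[\, s\,\Gamma
    + \ln \int_{-\infty}^{\infty} e^{-s\,\varphi(x)}\,\mathrm{d}x \Bigr],
  \qquad
  \operatorname{Vol}(A_n) = e^{n v(\Gamma) + o(n)}.
\]
% A uniform input over A_n fed into the EPI then gives, for noise
% variance \sigma^2, a lower bound of the EPI-based type:
\[
  C \;\ge\; \frac{1}{2}\,
  \ln\!\Bigl(1 + \frac{e^{2 v(\Gamma)}}{2\pi e\,\sigma^2}\Bigr).
\]
```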
📝 Abstract
We present a family of relatively simple and unified lower bounds on the capacity of the Gaussian channel under a set of pointwise additive input constraints. Specifically, the admissible channel input vectors $x = (x_1, \ldots, x_n)$ must satisfy $k$ additive cost constraints of the form $\sum_{i=1}^n \varphi_j(x_i) \le n \Gamma_j$, $j = 1, 2, \ldots, k$, which are enforced pointwise for every $x$, rather than merely in expectation. More generally, we also consider cost functions that depend on a sliding window of fixed length $m$, namely, $\sum_{i=m}^n \varphi_j(x_i, x_{i-1}, \ldots, x_{i-m+1}) \le n \Gamma_j$, $j = 1, 2, \ldots, k$, a formulation that naturally accommodates correlation constraints as well as a broad range of other constraints of practical relevance. We propose two classes of lower bounds, derived by two methodologies that both rely on the exact evaluation of the volume exponent associated with the set of input vectors satisfying the given constraints. This evaluation exploits extensions of the method of types to continuous alphabets, the saddle-point method of integration, and basic tools from large deviations theory. The first class of bounds is obtained via the entropy power inequality (EPI), and therefore applies exclusively to continuous-valued inputs. The second class, by contrast, is more general, and it applies to discrete input alphabets as well. It is based on a direct manipulation of mutual information, and it yields stronger and tighter bounds, though at the cost of greater technical complexity. Numerical examples illustrating both types of bounds are provided, and several extensions and refinements are also discussed.
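Below is a minimal numerical sketch of the EPI-based recipe, assuming a single illustrative cost $\varphi(x) = x^4$ with budget $\Gamma = 1$ and unit noise variance; the cost function, constants, and names are ours, not the paper's. It evaluates the volume exponent via its saddle-point form and then the EPI-style lower bound.

```python
# Minimal numerical sketch (assumptions: a single pointwise cost
# phi(x) = x**4, budget Gamma = 1, unit noise variance; the cost,
# constants, and names are illustrative, not taken from the paper).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

PHI = lambda x: x**4   # illustrative cost function phi
GAMMA = 1.0            # per-symbol cost budget Gamma
SIGMA2 = 1.0           # Gaussian noise variance sigma^2

def log_partition(s):
    """ln Z(s) = ln of the integral of exp(-s*phi(x)) over the real line."""
    val, _ = quad(lambda x: np.exp(-s * PHI(x)), -np.inf, np.inf)
    return np.log(val)

def volume_exponent(gamma):
    """Saddle-point form: v(gamma) = min over s > 0 of [s*gamma + ln Z(s)]."""
    res = minimize_scalar(lambda s: s * gamma + log_partition(s),
                          bounds=(1e-6, 50.0), method="bounded")
    return res.fun

v = volume_exponent(GAMMA)
# EPI-based bound: C >= (1/2) * ln(1 + e^{2v} / (2*pi*e*sigma^2)) nats.
c_lower = 0.5 * np.log1p(np.exp(2.0 * v) / (2.0 * np.pi * np.e * SIGMA2))
print(f"volume exponent v(Gamma) = {v:.4f} nats")
print(f"EPI lower bound  C >= {c_lower:.4f} nats/channel use")
```

For $k$ constraints, the same template would minimize $\sum_j s_j \Gamma_j$ plus the corresponding log-partition term over a vector $(s_1, \ldots, s_k) \ge 0$ rather than a scalar $s$.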