🤖 AI Summary
This paper studies the soft guessing problem under a logarithmic loss distortion measure when a tolerable probability of error is allowed, and connects it to variable-length lossy source coding under logarithmic loss. The authors identify an optimal guessing strategy and derive single-shot upper and lower bounds on the minimal guessing moments, together with an asymptotic expansion for i.i.d. sources, expressed in terms of the Rényi entropy, the smooth Rényi entropy, and their conditional versions. The results are established both with and without side information at the guesser, and the demonstrated correspondence between error-tolerant soft guessing and variable-length lossy compression under logarithmic loss places the two problems within a single information-theoretic framework.
📝 Abstract
This paper considers the problem of soft guessing under a logarithmic loss distortion measure while allowing errors. We find an optimal guessing strategy and derive single-shot upper and lower bounds for the minimal guessing moments, as well as an asymptotic expansion for i.i.d. sources. These results are extended to the case where side information is available to the guesser. Furthermore, a connection between soft guessing allowing errors and variable-length lossy source coding under logarithmic loss is demonstrated. The Rényi entropy, the smooth Rényi entropy, and their conditional versions play an important role.
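For readers less familiar with the quantities mentioned above, the following is a minimal sketch of the standard definitions involved, assuming a finite source alphabet with distribution P_X; the precise formulations of the error-tolerant guessing moment and of the smooth Rényi entropy used in the paper may differ in their details.

```latex
% Illustrative definitions only; the paper's exact formulation may differ.
% Logarithmic loss distortion between a source symbol x and a reconstruction
% distribution Q over the source alphabet:
\[
  d(x, Q) \;=\; \log \frac{1}{Q(x)} .
\]
% In soft guessing, the guesser emits a list of distributions Q_1, Q_2, ...;
% under a distortion level D, a (hypothetical) guessing function is
\[
  G(x) \;=\; \min\{\, i : d(x, Q_i) \le D \,\},
\]
% and the quantity of interest is the rho-th guessing moment E[G(X)^\rho],
% here possibly excluding an error event of probability at most \epsilon.
% Renyi entropy of order \alpha (\alpha > 0, \alpha \neq 1), which governs the
% exponential growth of guessing moments in the classical guessing problem:
\[
  H_{\alpha}(X) \;=\; \frac{1}{1-\alpha} \log \sum_{x} P_X(x)^{\alpha} .
\]
% The smooth Renyi entropy is obtained, roughly, by optimizing this quantity
% over (sub)distributions within statistical distance \epsilon of P_X.
```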