🤖 AI Summary
Large language models (LLMs) frequently suffer from hallucinations and unreliable outputs.

**Method:** This paper proposes the first information-theoretic re-ranking framework for LLM generation, modeling multi-candidate generation as redundant message transmission over parallel, dependent noisy channels and thereby establishing a formal analogy between generative re-ranking and noisy-channel coding. Under realistic constraints, including imperfect rerankers and channel statistics with inter-output dependencies, the authors derive sufficient conditions for asymptotically zero decoding error and distill universal re-ranking principles.

**Contribution/Results:** Integrating Mallows and Zipf-Mandelbrot ranking models with statistical channel analysis, the framework is validated on DeepSeek-Coder 7B and TowerInstruct 13B across code generation and medical translation tasks, yielding significant improvements in output correctness. Empirical results confirm both the theoretical soundness and the robustness of the approach under practical deployment conditions.
📝 Abstract
To ensure large language models (LLMs) are used safely, one must reduce their propensity to hallucinate or to generate unacceptable answers. A simple and commonly used strategy is to first let the LLM generate multiple hypotheses and then employ a reranker to choose the best one. In this paper, we draw a parallel between this strategy and the use of redundancy to decrease the error rate in noisy communication channels. We conceptualize the generator as a sender transmitting multiple descriptions of a message through parallel noisy channels. The receiver decodes the message by ranking the (potentially corrupted) descriptions and selecting the one found to be most reliable. We provide conditions under which this protocol is asymptotically error-free (i.e., yields an acceptable answer almost surely) even in scenarios where the reranker is imperfect (governed by Mallows or Zipf-Mandelbrot models) and the channel distributions are statistically dependent. We use our framework to obtain reranking laws which we validate empirically on two real-world tasks using LLMs: text-to-code generation with DeepSeek-Coder 7B and machine translation of medical data with TowerInstruct 13B.
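The generate-then-rerank protocol can be illustrated with a toy Monte Carlo sketch. This is not the paper's actual model: here channels are independent, each candidate is "acceptable" with a fixed probability, and the imperfect Mallows-style reranker is approximated by a geometric rank-decay over the true quality ordering. All function names and parameter values are illustrative.

```python
import random

def generate_candidates(n, p_ok=0.4, rng=random):
    # Each of n parallel "channels" yields an acceptable answer (True)
    # with probability p_ok, independently (a simplifying assumption).
    return [rng.random() < p_ok for _ in range(n)]

def mallows_style_rerank(candidates, phi=0.5, rng=random):
    # True ranking: acceptable candidates first.
    order = sorted(range(len(candidates)), key=lambda i: not candidates[i])
    # Imperfect reranker: select true rank j with probability
    # proportional to phi**j, a geometric decay loosely mimicking
    # a Mallows-model reranker (phi < 1 => better than random).
    weights = [phi ** j for j in range(len(order))]
    pick = rng.choices(order, weights=weights, k=1)[0]
    return candidates[pick]

def error_rate(n, trials=20000, seed=0):
    # Fraction of trials where the selected candidate is unacceptable.
    rng = random.Random(seed)
    errors = sum(
        not mallows_style_rerank(generate_candidates(n, rng=rng), rng=rng)
        for _ in range(trials)
    )
    return errors / trials
```

In this simplified independent-channel setting, the error rate falls roughly geometrically as the number of hypotheses grows (here from about 0.6 at n=1 toward a few percent at n=16), echoing the asymptotically error-free regime the paper characterizes under much weaker assumptions.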