Oblivious Deletion Codes

📅 2025-06-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work studies the construction of deletion-correcting codes in the "oblivious model," where deletions are adversarial but independent of the encoder's randomness, a middle ground between stochastic and fully adversarial errors that is particularly relevant to physical storage media (e.g., DNA) with hard-to-model noise. The paper gives both explicit and randomized constructions. The key technical idea is to shrink the hash by reducing modulo a prime chosen at random from a small set and embedding a short encoding of that prime in the hash, combined with a conversion from list-decodable codes to oblivious codes. In this model the constructions beat the existential redundancy bound for adversarial deletion codes: the explicit code corrects $t$ oblivious deletions with redundancy $\sim 2t\log n$, matching the adversarial existential bound; the construction from list decoding corrects 2 oblivious deletions with redundancy $\sim 3\log n$; and the randomized construction corrects $t$ oblivious deletions with redundancy $\sim (t+1)\log n$, approaching the information-theoretic limit. These results advance both the theoretical understanding and the practical design of codes for deletion channels.

📝 Abstract
We construct deletion error-correcting codes in the oblivious model, where errors are adversarial but oblivious to the encoder's randomness. Oblivious errors bridge the gap between the adversarial and random error models, and are motivated by applications like DNA storage, where the noise is caused by hard-to-model physical phenomena, but not by an adversary.
(1) (Explicit oblivious) We construct $t$-oblivious-deletion codes with redundancy $\sim 2t\log n$, matching the existential bound for adversarial deletions.
(2) (List decoding implies explicit oblivious) We show that explicit list-decodable codes yield explicit oblivious deletion codes with essentially the same parameters. By a work of Guruswami and Håstad (IEEE TIT, 2021), this gives 2-oblivious-deletion codes with redundancy $\sim 3\log n$, beating the existential redundancy for 2 adversarial deletions.
(3) (Randomized oblivious) We give a randomized construction of oblivious codes that, with probability at least $1-2^{-n}$, produces a code correcting $t$ oblivious deletions with redundancy $\sim(t+1)\log n$, beating the existential adversarial redundancy of $\sim 2t\log n$.
(4) (Randomized adversarial) Studying the oblivious model can inform better constructions of adversarial codes. The same technique produces, with probability at least $1-2^{-n}$, a code correcting $t$ adversarial deletions with redundancy $\sim (2t+1)\log n$, nearly matching the existential redundancy of $\sim 2t\log n$.
The common idea behind these results is to reduce the hash size by modding by a prime chosen (randomly) from a small subset, and including a small encoding of the prime in the hash.
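The hashing idea in the abstract's last sentence can be sketched as follows. This is a minimal hypothetical illustration, not the paper's actual construction: the function names, the prime set, and the bit-string interface are assumptions made for demonstration. The point it shows is that the hash consists of a short prime index plus a small residue, so its total size stays small when the primes are small.

```python
import random

def prime_hash(message_bits, primes, rng=random):
    """Sketch: hash a message modulo a prime chosen at random from a
    small set, and record which prime was used alongside the residue.
    (Hypothetical illustration of the paper's hash-shrinking idea.)
    """
    x = int("".join(map(str, message_bits)), 2)  # message as an integer
    i = rng.randrange(len(primes))               # random choice of prime
    p = primes[i]
    # The hash is (prime index, x mod p): the index costs only
    # log2(len(primes)) bits and the residue costs log2(p) bits,
    # so the hash is short whenever the primes are small.
    return i, x % p

def check(message_bits, primes, h):
    """Verify a candidate message against a hash (index, residue)."""
    i, r = h
    x = int("".join(map(str, message_bits)), 2)
    return x % primes[i] == r
```

Because the adversary is oblivious to the encoder's randomness, it cannot target the particular prime that was drawn; this is the intuition for why a random small prime suffices in the oblivious model where a fixed hash would not in the adversarial one.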
Problem

Research questions and friction points this paper is trying to address.

Constructs deletion codes for oblivious adversarial errors
Bridges gap between adversarial and random error models
Improves redundancy for DNA storage-like noise scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructs explicit oblivious deletion codes
Uses list decoding for oblivious codes
Randomized construction reduces redundancy to $\sim(t+1)\log n$