Game of Coding: Coding Theory in the Presence of Rational Adversaries, Motivated by Decentralized Machine Learning

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a “game of coding” framework that incorporates rational adversaries into coding theory, departing from the classical assumption that honest nodes form a majority—a premise often violated in decentralized machine learning, where participants are strategic and incentive-driven. Combining repetition coding, information-theoretic analysis, and mechanism design, the framework yields an incentive-compatible game-theoretic model. It guarantees a non-zero probability of data recovery even when adversarial nodes are in the majority, and it is resistant to Sybil attacks: the analysis shows that the equilibrium strategy is unchanged as the number of adversarial nodes grows, supporting the framework's robustness and scalability under adversarial conditions.

📝 Abstract
Coding theory plays a crucial role in enabling reliable communication, storage, and computation. Classical approaches assume a worst-case adversarial model and ensure error correction and data recovery only when the number of honest nodes exceeds the number of adversarial ones by some margin. However, in some emerging decentralized applications, particularly in decentralized machine learning (DeML), participating nodes are rewarded for accepted contributions. This incentive structure naturally gives rise to rational adversaries who act strategically rather than behaving in purely malicious ways. In this paper, we first motivate the need for coding in the presence of rational adversaries, particularly in the context of outsourced computation in decentralized systems. We contrast this need with existing approaches and highlight their limitations. We then introduce the game of coding, a novel game-theoretic framework that extends coding theory to trust-minimized settings where honest nodes are not in the majority. Focusing on repetition coding, we highlight two key features of this framework: (1) the ability to achieve a non-zero probability of data recovery even when adversarial nodes are in the majority, and (2) Sybil resistance, i.e., the equilibrium remains unchanged even as the number of adversarial nodes increases. Finally, we explore scenarios in which the adversary's strategy is unknown and outline several open problems for future research.
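To make the contrast in the abstract concrete, here is a minimal sketch of the classical repetition-coding baseline with majority-vote decoding, which succeeds only under an honest majority. This is an illustrative toy, not the paper's game-theoretic mechanism; the function names are hypothetical.

```python
from collections import Counter

def encode(value, n):
    """Repetition code: replicate the same value across n nodes."""
    return [value] * n

def decode_majority(reports):
    """Classical worst-case decoding: take a majority vote over node reports."""
    value, _count = Counter(reports).most_common(1)[0]
    return value

# Honest majority (3 honest vs. 2 adversarial): recovery succeeds.
reports = encode(42, 3) + [99, 99]
assert decode_majority(reports) == 42

# Adversarial majority (2 honest vs. 3 colluding): classical decoding fails,
# returning the adversarial value -- the regime the game of coding targets.
reports = encode(42, 2) + [99, 99, 99]
assert decode_majority(reports) == 99
```

The second case is exactly the setting the paper addresses: once adversaries outnumber honest nodes, worst-case decoding gives no guarantee, whereas a rational-adversary model can still admit a non-zero recovery probability.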
Problem

Research questions and friction points this paper is trying to address.

rational adversaries
decentralized machine learning
coding theory
Sybil resistance
trust-minimized systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Game of Coding
Rational Adversaries
Decentralized Machine Learning
Sybil Resistance
Repetition Coding