PLATONT: Learning a Platonic Representation for Unified Network Tomography

📅 2025-11-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Network tomography suffers from fragmented task modeling and reliance on unimodal signals, which limits generalizability and interpretability. To address this, we propose a unified latent-state modeling framework grounded in the Platonic representation hypothesis: link performance, network topology, and traffic load are treated as distinct multimodal projections of a shared underlying latent state. Our approach jointly learns a compact, structured, and semantically interpretable shared latent space through multimodal alignment and contrastive learning, enabling simultaneous inference across all three tasks. Experiments on synthetic and real-world network datasets show that our method outperforms state-of-the-art approaches in link-level performance estimation, topology inference, and traffic forecasting, with higher accuracy, robustness to noise and missing data, and stronger cross-task generalization.

📝 Abstract
Network tomography aims to infer hidden network states, such as link performance, traffic load, and topology, from external observations. Most existing methods solve these problems separately and depend on narrow task-specific signals, which limits generalization and interpretability. We present PLATONT, a unified framework that models different network indicators (e.g., delay, loss, bandwidth) as projections of a shared latent network state. Guided by the Platonic Representation Hypothesis, PLATONT learns this latent state through multimodal alignment and contrastive learning. By training multiple tomography tasks within a shared latent space, it builds compact and structured representations that improve cross-task generalization. Experiments on synthetic and real-world datasets show that PLATONT consistently outperforms existing methods in link estimation, topology inference, and traffic prediction, achieving higher accuracy and stronger robustness under varying network conditions.
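The paper's architecture is not detailed here, but the core hypothesis can be sketched with a toy linear model: each observable modality (delay, loss, traffic load) is a different projection of one shared latent state, so combining modalities pins down the latent where any single modality is underdetermined. All names (`W_delay`, `W_loss`, `W_load`) and the linear-projection assumption are illustrative, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
d_latent, n_links = 16, 10

# hypothetical shared latent state of the network (one snapshot)
z = rng.normal(size=(d_latent,))

# each modality observes the same z through its own projection
W_delay = rng.normal(size=(n_links, d_latent))  # latent -> per-link delay
W_loss = rng.normal(size=(n_links, d_latent))   # latent -> per-link loss
W_load = rng.normal(size=(n_links, d_latent))   # latent -> per-link load

delay = W_delay @ z
loss = W_loss @ z
load = W_load @ z

# one modality alone: 10 equations, 16 unknowns -> latent not recoverable
z_hat_single, *_ = np.linalg.lstsq(W_delay, delay, rcond=None)

# all modalities stacked: 30 equations, 16 unknowns -> latent recovered
W = np.vstack([W_delay, W_loss, W_load])
obs = np.concatenate([delay, loss, load])
z_hat_joint, *_ = np.linalg.lstsq(W, obs, rcond=None)
```

In this toy setting the joint least-squares solve recovers `z` exactly while the single-modality solve returns only a minimum-norm estimate, mirroring the paper's argument that jointly modeling multiple indicators improves inference of the hidden state.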
Problem

Research questions and friction points this paper is trying to address.

Unified framework infers multiple hidden network states
Learns shared latent representations through multimodal alignment
Improves cross-task generalization for network tomography tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns shared latent network state via multimodal alignment
Uses contrastive learning for compact structured representations
Trains multiple tomography tasks in unified latent space
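The contrastive-alignment idea above can be sketched with a symmetric InfoNCE loss: embeddings of two modality views of the same network state (matching rows) are pulled together, mismatched pairs pushed apart. This is a generic NumPy sketch of InfoNCE, not PLATONT's actual objective; the modality names and temperature value are assumptions.

```python
import numpy as np

def info_nce_direction(z_a, z_b, temperature=0.1):
    """Cross-entropy of matching each row of z_a to its counterpart in z_b."""
    logits = (z_a @ z_b.T) / temperature                 # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # positives on diagonal

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss over both alignment directions."""
    return 0.5 * (info_nce_direction(z_a, z_b, temperature)
                  + info_nce_direction(z_b, z_a, temperature))

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
# toy outputs of two modality encoders; row i of each view is assumed
# to come from the same underlying network state
z_delay = normalize(rng.normal(size=(8, 16)))
z_loss = normalize(z_delay + 0.05 * rng.normal(size=(8, 16)))

aligned = info_nce(z_delay, z_loss)          # correctly paired views
mismatched = info_nce(z_delay, z_loss[::-1]) # shuffled pairing
```

Minimizing such a loss during training drives the modality encoders toward a shared latent space, which is what lets one representation serve link estimation, topology inference, and traffic prediction at once.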