Learning Network Sheaves for AI-native Semantic Communication

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address severe semantic noise and weak task relevance when exchanging compressed latent representations among heterogeneous AI agents, this paper proposes a joint learning framework for AI-native semantic communication. Methodologically, it integrates semantic denoising with rate-distortion optimization, introducing an orthogonally constrained mapping layer to jointly learn communication topology and cross-agent alignment mappings. It further incorporates non-convex sparse dictionary learning with closed-form iterative updates to construct a shared global semantic space, simultaneously learning orthogonal alignment matrices and sparse latent representations. The key contribution is the first end-to-end co-optimization of semantic compression, denoising, and alignment—effectively mitigating semantic heterogeneity. Experiments on real-world image data demonstrate a favorable balance among multi-agent semantic clustering, enhanced interpretability, and high downstream task accuracy (exceeding 92% in classification).
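The paper itself does not spell out its update equations in this summary, but the orthogonally constrained alignment step it describes is commonly solved in closed form via the orthogonal Procrustes problem. The sketch below, with illustrative variable names (`X` for an agent's latent representations, `Y` for their target in the shared space), shows that generic closed-form update, not the authors' exact algorithm:

```python
import numpy as np

def orthogonal_alignment(X, Y):
    """Closed-form orthogonal Procrustes update.

    Finds the orthogonal map O (O @ O.T = I) minimizing ||O @ X - Y||_F,
    a standard surrogate for an orthogonally constrained alignment step.
    """
    # SVD of the cross-covariance gives the optimal rotation/reflection.
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt
```

When `Y` is an exact orthogonal transform of `X` and `X` has full row rank, this update recovers that transform, which is why such steps admit closed-form solutions inside an otherwise nonconvex iteration.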

📝 Abstract
Recent advances in AI call for a paradigm shift from bit-centric communication to goal- and semantics-oriented architectures, paving the way for AI-native 6G networks. In this context, we address a key open challenge: enabling heterogeneous AI agents to exchange compressed latent-space representations while mitigating semantic noise and preserving task-relevant meaning. We cast this challenge as learning both the communication topology and the alignment maps that govern information exchange among agents, yielding a learned network sheaf equipped with orthogonal maps. This learning process is further supported by a semantic denoising and compression module that constructs a shared global semantic space and derives sparse, structured representations of each agent's latent space. This corresponds to a nonconvex dictionary learning problem solved iteratively with closed-form updates. Experiments with multiple AI agents pre-trained on real image data show that the semantic denoising and compression facilitates AI agent alignment and the extraction of semantic clusters, while preserving high accuracy in downstream tasks. The resulting communication network provides new insights into semantic heterogeneity across agents, highlighting the interpretability of our methodology.
Problem

Research questions and friction points this paper is trying to address.

Enabling heterogeneous AI agents to exchange compressed latent-space representations
Mitigating semantic noise and preserving task-relevant meaning in communication
Learning communication topology and alignment maps for AI-native networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning network sheaves with orthogonal maps
Semantic denoising and compression module
Nonconvex dictionary learning with closed-form updates
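The summary mentions nonconvex dictionary learning solved iteratively with closed-form updates but gives no equations. A common scheme of that kind is proximal gradient descent (ISTA) for the sparse-coding subproblem, where each iteration is a closed-form soft-thresholding step. The sketch below is that generic scheme under an assumed l1-regularized objective, not the paper's specific formulation (`D` is a dictionary, `x` an agent's latent vector, `lam` the sparsity weight; all names are illustrative):

```python
import numpy as np

def soft_threshold(v, tau):
    # Closed-form proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_sparse_code(D, x, lam=0.1, n_iter=100):
    """Sparse coding via ISTA: minimize 0.5*||x - D z||^2 + lam*||z||_1.

    Each iteration is a gradient step on the quadratic term followed by
    the closed-form soft-thresholding update.
    """
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z + D.T @ (x - D @ z) / L, lam / L)
    return z
```

With an orthonormal dictionary the iteration collapses to a single soft-thresholding of the analysis coefficients, which illustrates why such updates stay cheap even inside a nonconvex outer loop that also learns the dictionary and alignment maps.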
Enrico Grimaldi
Department of Computer, Control and Management Engineering, Sapienza University of Rome, Italy
Mario Edoardo Pandolfo
Ph.D. Student, Sapienza University of Rome
Linguistics, Complex Systems, Topological Signal Processing, 6G
Gabriele D'Acunto
National Inter-University Consortium for Telecommunications (CNIT), Parma, Italy
Sergio Barbarossa
Sapienza University of Rome
signal processing, graph signal processing, mobile edge computing, 5G, 6G
P. Lorenzo
National Inter-University Consortium for Telecommunications (CNIT), Parma, Italy