SC-GIR: Goal-oriented Semantic Communication via Invariant Representation Learning

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
In task-oriented semantic communication, joint transmitter-receiver training induces redundant transmission, while reliance on labeled data limits task generalization. Method: This paper proposes an unsupervised semantic communication framework for machine-to-machine communication. It eliminates joint training and label supervision, instead introducing self-supervised covariance contrastive learning to extract compact, task-agnostic, invariant semantic representations directly from raw images; these representations are then lossily compressed to transmit only task-relevant semantic content. Contribution/Results: To our knowledge, this is the first work to incorporate covariance structure modeling into semantic encoding, markedly enhancing representation discriminability and robustness. Experiments across multiple image datasets show an average performance gain of nearly 10% over baseline methods. Moreover, post-compression classification accuracy remains consistently above 85% across varying signal-to-noise ratios, demonstrating both high efficiency and strong generalization capability.

📝 Abstract
Goal-oriented semantic communication (SC) aims to revolutionize communication systems by transmitting only task-essential information. However, current approaches face challenges such as joint training at transceivers, leading to redundant data exchange and reliance on labeled datasets, which limits their task-agnostic utility. To address these challenges, we propose a novel framework called Goal-oriented Invariant Representation-based SC (SC-GIR) for image transmission. Our framework leverages self-supervised learning to extract an invariant representation that encapsulates crucial information from the source data, independent of the specific downstream task. This compressed representation facilitates efficient communication while retaining key features for successful downstream task execution. Focusing on machine-to-machine tasks, we utilize covariance-based contrastive learning techniques to obtain a latent representation that is both meaningful and semantically dense. To evaluate the effectiveness of the proposed scheme on downstream tasks, we apply it to various image datasets for lossy compression. The compressed representations are then used in a goal-oriented AI task. Extensive experiments on several datasets demonstrate that SC-GIR outperforms baseline schemes by nearly 10%, and achieves over 85% classification accuracy for compressed data under different SNR conditions. These results underscore the effectiveness of the proposed framework in learning compact and informative latent representations.
Problem

Research questions and friction points this paper is trying to address.

How to extract task-essential invariant representations from source data without label supervision
How to eliminate the dependence on joint transceiver training and labeled datasets
How to enable efficient goal-oriented communication that generalizes across downstream tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised invariant representation learning
Covariance-based contrastive learning techniques
Task-agnostic semantic compression transmission
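The paper does not include code, but the core idea of covariance-based self-supervised learning can be illustrated with a minimal sketch. The snippet below is a hypothetical VICReg-style decomposition (not the authors' implementation): an invariance term pulls embeddings of two augmented views of the same image together, while a covariance term penalizes off-diagonal entries of the embedding covariance matrix, decorrelating dimensions so the latent code stays compact and informative. All function names are illustrative.

```python
import numpy as np

def covariance_loss(z):
    """Penalize off-diagonal covariance of a batch of embeddings.

    z: (batch, dim) array. Driving this loss to zero decorrelates
    embedding dimensions, discouraging redundant latent features.
    """
    n, d = z.shape
    z = z - z.mean(axis=0)                  # center each dimension
    cov = (z.T @ z) / (n - 1)               # empirical covariance
    off_diag = cov - np.diag(np.diag(cov))  # keep only off-diagonal terms
    return float((off_diag ** 2).sum() / d)

def invariance_loss(z1, z2):
    """Mean squared distance between embeddings of two augmented views.

    Minimizing this makes the representation invariant to the
    augmentations, so only task-relevant content survives.
    """
    return float(((z1 - z2) ** 2).mean())

# Decorrelated, zero-mean embeddings incur zero covariance penalty:
z_decorr = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
# Perfectly correlated dimensions are penalized:
z_corr = np.array([[1.0, 1.0], [-1.0, -1.0]])
```

In a full training loop these terms would be weighted and summed (typically alongside a variance term that prevents collapse), and the encoder trained on raw images with standard augmentations, with no labels and no joint transmitter-receiver optimization.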