Remote Training in Task-Oriented Communication: Supervised or Self-Supervised with Fine-Tuning?

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high pretraining overhead, the reliance on prior task labels, and the impracticality of local retraining in task-oriented communication (TOC) under dynamic wireless connectivity, this paper proposes a two-stage training paradigm. First, task-agnostic, label-free self-supervised pretraining based on mutual information maximization learns generalizable representations before any connection is established. Second, the transmitter and receiver are jointly fine-tuned end-to-end in a task-specific, label-aware manner. The framework integrates information-theoretic analysis, differentiable communication modeling, and SGD-based optimization to enable joint source-channel learning. Experiments show that the method reduces training communication overhead to roughly 50% of a fully supervised baseline, substantially improves remote deployment efficiency, and achieves millisecond-level task adaptation. To the best of the authors' knowledge, this is the first work in TOC to realize efficient, label-free pretraining, eliminating the need for prior task annotations while maintaining strong generalization and adaptability.
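
The summary above does not specify the exact estimator or architecture, so the following is only a minimal PyTorch sketch of what stage one could look like: a transmitter encoder is pretrained without labels by maximizing an InfoNCE-style lower bound on the mutual information between its transmitted code and the code received over a differentiable AWGN channel. The module names, dimensions, SNR, and the choice of InfoNCE itself are illustrative assumptions rather than the paper's reported design.

```python
# Hedged sketch of stage one: task-agnostic, label-free pretraining of the
# transmitter by maximizing a lower bound on the mutual information between
# the transmitted code and its channel-corrupted version. All names and
# hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Transmitter(nn.Module):
    """Maps a raw input to a low-dimensional channel symbol vector."""
    def __init__(self, in_dim=784, code_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, code_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        # Normalize to satisfy an average transmit-power constraint.
        return z / (z.norm(dim=-1, keepdim=True) + 1e-8)

def awgn(z, snr_db=10.0):
    """Differentiable AWGN channel so gradients flow back to the transmitter."""
    noise_power = 10 ** (-snr_db / 10)
    return z + noise_power ** 0.5 * torch.randn_like(z)

def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE loss: minimizing it maximizes a lower bound on I(anchor; positive)."""
    logits = anchor @ positive.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)         # positives sit on the diagonal

tx = Transmitter()
opt = torch.optim.SGD(tx.parameters(), lr=1e-2)

def pretrain_step(x):
    """One label-free pretraining step on a batch x of raw inputs."""
    z = tx(x)              # clean transmitted code
    y = awgn(z)            # received, noise-corrupted code
    loss = info_nce(y, z)  # keep received codes identifiable with their sources
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```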

📝 Abstract
Task-oriented communication focuses on extracting and transmitting only the information relevant to specific tasks, effectively minimizing communication overhead. Most existing methods prioritize reducing this overhead during inference, often assuming feasible local training or minimal training communication resources. However, in real-world wireless systems with dynamic connection topologies, training models locally for each new connection is impractical, and task-specific information is often unavailable before establishing connections. Therefore, minimizing training overhead and enabling label-free, task-agnostic pre-training before connection establishment are essential for effective task-oriented communication. In this paper, we tackle these challenges by employing a mutual information maximization approach grounded in self-supervised learning and information-theoretic analysis. We propose an efficient strategy that pre-trains the transmitter in a task-agnostic and label-free manner, followed by joint fine-tuning of both the transmitter and receiver in a task-specific, label-aware manner. Simulation results show that our proposed method reduces training communication overhead to about half that of fully supervised methods using the SGD optimizer, demonstrating significant improvements in training efficiency.
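
Continuing the stage-one sketch above (and reusing its hypothetical `tx` encoder and `awgn` channel), here is a similarly hedged sketch of the stage-two idea described in the abstract: once a connection is established and labels are available, a task-specific receiver is attached and both ends are fine-tuned jointly with SGD through the differentiable channel. The classification task, the receiver architecture, and the learning rates are assumptions chosen for illustration, not the paper's reported setup.

```python
# Hedged sketch of stage two: task-specific, label-aware joint fine-tuning of
# the pretrained transmitter and a newly attached receiver.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Receiver(nn.Module):
    """Decodes the noisy received code into task predictions (here: class logits)."""
    def __init__(self, code_dim=16, num_classes=10):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(code_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, y):
        return self.head(y)

rx = Receiver()
# A smaller learning rate for the pretrained transmitter is a common
# (assumed) fine-tuning choice; the receiver trains from scratch.
opt = torch.optim.SGD(
    [{"params": tx.parameters(), "lr": 1e-3},
     {"params": rx.parameters()}],
    lr=1e-2,
)

def finetune_step(x, labels):
    """One task-specific fine-tuning step across the differentiable channel."""
    y = awgn(tx(x))                         # transmit through the noisy channel
    loss = F.cross_entropy(rx(y), labels)   # supervised task loss at the receiver
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```
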
Problem

Research questions and friction points this paper is trying to address.

Minimizing training overhead in task-oriented communication
Enabling label-free pre-training before connection establishment
Improving efficiency with self-supervised and fine-tuning strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised learning
Mutual information maximization (a standard lower bound on this objective is sketched after this list)
Task-agnostic pre-training
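
To make the mutual-information item above concrete: objectives of this kind usually maximize a tractable variational lower bound, because the mutual information between the transmitted code Z and the received signal Y cannot be computed directly. The InfoNCE bound below is the standard choice for such self-supervised objectives; whether the paper uses this exact estimator is an assumption here, not something stated in the abstract.

```latex
% InfoNCE lower bound on the mutual information between the transmitted code Z
% and the received signal Y, estimated over a batch of N paired samples.
% f(z, y) is a learnable similarity (critic) function, e.g. a scaled dot product.
\[
  I(Z; Y) \;\ge\; \mathbb{E}\!\left[ \frac{1}{N} \sum_{i=1}^{N}
    \log \frac{\exp\!\big(f(z_i, y_i)\big)}
              {\frac{1}{N} \sum_{j=1}^{N} \exp\!\big(f(z_i, y_j)\big)} \right]
  \;=\; \log N - \mathcal{L}_{\mathrm{InfoNCE}}
\]
% Maximizing the bound (equivalently, minimizing the InfoNCE loss) trains the
% transmitter without any task labels, which is what enables the
% task-agnostic pre-training listed above.
```
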
Authors

Hongru Li
HKUST
Wireless Communication, Semantic Communication

Hang Zhao
Dept. of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong

Hengtao He
Dept. of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong

Shenghui Song
The Hong Kong University of Science and Technology
Information Theory, Distributed Intelligence, ML for Communication, Integrated Sensing and Communication

Jun Zhang
Dept. of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong

Khaled B. Letaief
Member of US National Academy of Engineering and New Bright Professor of Engineering, HKUST
Wireless Communications