🤖 AI Summary
This work addresses the challenges of energy-efficiency optimization and high computational overhead in multi-user semantic communication by proposing a joint source and reconfigurable intelligent surface (RIS)-assisted channel encoding (JSRE) framework. JSRE jointly optimizes user scheduling, RIS phase shifts, and semantic compression ratios, integrating deep neural networks for semantic feature extraction and leveraging the RIS to enhance channel orthogonality. To reduce computational cost, a truncated deep reinforcement learning (T-DRL) algorithm is devised that combines a DNN-based semantic similarity estimator with a semantic model caching mechanism and a Transformer-based dynamic action-space generator, significantly decreasing the frequency of model retraining. Experimental results demonstrate that JSRE substantially outperforms baseline methods in system energy efficiency, while T-DRL markedly improves learning efficiency.
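To make the caching idea concrete, below is a minimal Python sketch of a semantic model cache keyed by the user-scheduling decision. All names (`SemanticModelCache`, `fine_tune`, the order-insensitive key) are hypothetical illustrations; the paper does not specify this interface.

```python
from typing import Callable, Dict, Tuple

# Hypothetical stand-in for a fine-tuned semantic encoder/decoder pair;
# in the paper this would be a DNN, here it is just an opaque object.
SemanticModel = object

class SemanticModelCache:
    """Caches fine-tuned semantic models keyed by the user-scheduling
    decision, so revisiting a decision reuses the model instead of
    triggering another round of retraining."""

    def __init__(self, fine_tune: Callable[[Tuple[int, ...]], SemanticModel]):
        self._fine_tune = fine_tune          # the expensive training routine
        self._cache: Dict[Tuple[int, ...], SemanticModel] = {}
        self.hits = 0
        self.misses = 0

    def get(self, schedule: Tuple[int, ...]) -> SemanticModel:
        # Assumption for illustration: scheduling decisions are
        # order-insensitive sets of user indices.
        key = tuple(sorted(schedule))
        if key in self._cache:
            self.hits += 1                   # reuse: no retraining needed
        else:
            self.misses += 1
            self._cache[key] = self._fine_tune(key)  # train once, store for reuse
        return self._cache[key]

# Toy usage: "fine-tuning" is mocked as building a label string.
cache = SemanticModelCache(fine_tune=lambda users: f"model_for_{users}")
cache.get((0, 2))   # miss -> "trains" a model
cache.get((2, 0))   # hit  -> same user set, model reused
print(cache.hits, cache.misses)  # 1 1
```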
📝 Abstract
In this paper, we explore a joint source and reconfigurable intelligent surface (RIS)-assisted channel encoding (JSRE) framework for multi-user semantic communications, where a deep neural network (DNN) extracts semantic features for all users and the RIS provides channel orthogonality, enabling a unified semantic encoding-decoding design. We aim to maximize the overall energy efficiency of semantic communications across all users by jointly optimizing the user scheduling, the RIS's phase shifts, and the semantic compression ratio. Although this joint optimization problem can be addressed using conventional deep reinforcement learning (DRL) methods, evaluating semantic similarity typically relies on extensive interactions with the real environment, which can incur heavy computational overhead during training. To address this challenge, we propose a truncated DRL (T-DRL) framework, in which a DNN-based semantic similarity estimator is developed to rapidly estimate the similarity score. Moreover, the user scheduling strategy is tightly coupled with the semantic model configuration. To exploit this relationship, we further propose a semantic model caching mechanism that stores and reuses fine-tuned semantic models corresponding to different scheduling decisions. A Transformer-based actor network is employed within the DRL framework to dynamically generate the action space conditioned on the current caching state. This avoids redundant retraining and further accelerates the convergence of the learning process. Numerical results demonstrate that the proposed JSRE framework significantly improves the system energy efficiency compared with the baseline methods. By training fewer semantic models, the proposed T-DRL framework substantially enhances the learning efficiency.
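As a rough illustration of how a DNN-based surrogate can replace costly real-environment similarity evaluations during T-DRL training, here is a minimal PyTorch sketch. The architecture (a small MLP) and the input features (a compression ratio plus a flattened channel-state descriptor) are assumptions for the example; the paper does not pin down the exact estimator layout.

```python
import torch
import torch.nn as nn

class SimilarityEstimator(nn.Module):
    """Sketch of a semantic similarity estimator: maps (compression ratio,
    channel-state features) to a similarity score in [0, 1]."""

    def __init__(self, channel_dim: int = 16, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + channel_dim, hidden),  # [compression ratio | channel state]
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),                        # bound the score to [0, 1]
        )

    def forward(self, ratio: torch.Tensor, channel: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([ratio, channel], dim=-1)).squeeze(-1)

# During T-DRL training, the estimator would stand in for expensive
# real-environment similarity evaluations; it would be fit offline on
# logged (state, similarity) pairs before being queried inside the loop.
est = SimilarityEstimator()
ratio = torch.rand(8, 1)       # batch of candidate compression ratios
channel = torch.randn(8, 16)   # batch of channel-state features
scores = est(ratio, channel)   # fast surrogate similarity scores
print(scores.shape)            # torch.Size([8])
```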