DRL-driven Online Optimization for Joint Traffic Reshaping and Channel Reconfiguration in RIS-assisted Semantic NOMA Communications

📅 2026-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the computational overhead and energy-efficiency challenges of reconfigurable intelligent surface (RIS)-assisted, semantic-aware non-orthogonal multiple access (NOMA) networks under time-varying conditions, where multiple coupled control variables complicate system design. To tackle this, the paper proposes a delay-tolerant semantic extraction mechanism that enhances NOMA decoding flexibility through traffic shaping, and jointly optimizes the RIS passive beamforming, the semantic extraction policy, and the user decoding order. An online adaptive framework based on deep reinforcement learning (DRL) is developed to enable joint decision-making over channel reconfiguration, traffic scheduling, and resource allocation. Experimental results demonstrate that the proposed approach significantly improves long-term energy efficiency, substantially reduces runtime, and achieves superior learning performance compared to state-of-the-art methods.

📝 Abstract
This paper explores a reconfigurable intelligent surface (RIS)-assisted and semantic-aware wireless network, where multiple semantic users (SUs) transmit semantic information to an access point (AP) via non-orthogonal multiple access (NOMA). The RIS reconfigures channel conditions, while semantic extraction reshapes traffic demands, providing enhanced control flexibility for NOMA transmissions. To enable efficient long-term resource allocation, we propose a deferrable semantic extraction scheme that can distribute semantic extraction tasks across multiple time slots. We formulate a long-term energy efficiency maximization problem by jointly optimizing the RIS's passive beamforming, the SUs' semantic extraction, and the NOMA decoding order. This problem involves multiple coupled control variables, which can incur significant computational overhead in time-varying network environments. To support low-complexity online optimization, a deep reinforcement learning (DRL)-driven online optimization framework is developed. Specifically, the DRL module adaptively selects and optimizes the most suitable option among traffic reshaping, channel reconfiguration, and NOMA decoding order assignment based on the dynamic network status. Numerical results demonstrate that the deferrable semantic extraction scheme significantly improves long-term energy efficiency, while the DRL-driven online optimization framework effectively reduces running time and maintains superior learning performance compared to state-of-the-art methods.
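The abstract's core control idea is a DRL agent that, at each time slot, picks one of three options (traffic reshaping, channel reconfiguration, or decoding order assignment) based on the network state. The toy sketch below illustrates only that option-selection loop with a tabular epsilon-greedy Q-learning agent; the state encoding, reward stand-in, and all names are illustrative assumptions, not the paper's actual DRL architecture or reward model.

```python
import random

# The three control options named in the abstract. Everything below is an
# illustrative sketch: the paper's actual agent, state space, and
# energy-efficiency reward are not specified here.
ACTIONS = ["traffic_reshaping", "channel_reconfiguration", "decoding_order"]

class OptionSelector:
    """Toy epsilon-greedy Q-learning agent over a small discrete state space."""

    def __init__(self, n_states=4, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = [[0.0] * len(ACTIONS) for _ in range(n_states)]
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def select(self, state):
        # Explore with probability epsilon, otherwise exploit the Q-table.
        if random.random() < self.epsilon:
            return random.randrange(len(ACTIONS))
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        td_target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

def toy_reward(state, action):
    # Stand-in for the energy-efficiency feedback; purely illustrative.
    return 1.0 if action == state % len(ACTIONS) else 0.0

if __name__ == "__main__":
    random.seed(0)
    agent = OptionSelector()
    state = 0
    for _ in range(500):
        a = agent.select(state)
        agent.update(state, a, toy_reward(state, a), (state + 1) % 4)
        state = (state + 1) % 4
    agent.epsilon = 0.0  # act greedily after training
    print(ACTIONS[agent.select(0)])
```

In the paper's framework the selected option is then optimized in its own sub-problem; here the agent only demonstrates the adaptive selection step driven by observed rewards.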
Problem

Research questions and friction points this paper is trying to address.

RIS
Semantic Communications
NOMA
Energy Efficiency
Online Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep Reinforcement Learning
Reconfigurable Intelligent Surface
Semantic Communication
NOMA
Online Optimization