🤖 AI Summary
This paper addresses the Goal-oriented Multiple Access (GoMA) problem in multi-agent wireless channel sharing, filling a gap in distributed multi-node coordinated scheduling. We propose a distributed theoretical framework for goal-oriented multiple access, a first step towards its complete characterization, breaking the traditional strict separation between communication and application layers by directly coupling channel-access decisions to end-application performance objectives. Leveraging game-theoretic modeling, we show that GoMA is inherently non-convex and may admit multiple Nash equilibria, and we characterize each node's best response to the others' strategies. We design a provably convergent distributed optimization algorithm and integrate bandit learning to enable adaptive policy evolution with no prior knowledge and only limited feedback. Experiments show that our approach outperforms centralized baselines by up to 100% while also reducing energy consumption.
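The bandit-learning idea mentioned above can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a toy ε-greedy bandit in which a node treats candidate transmit probabilities as arms and learns only from per-attempt reward feedback, with no prior knowledge of the other node's behavior. The environment (other node transmitting with probability 0.2, success value `V=1.0`, energy cost `c=0.3` per attempt) and all parameter values are illustrative assumptions.

```python
import random

# Illustrative sketch (not the paper's method): a node learns a channel-access
# policy via an epsilon-greedy bandit, using only limited per-round feedback.

def epsilon_greedy(reward_fn, arms, steps=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(arms)
    values = [0.0] * len(arms)  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(arms))                       # explore
        else:
            a = max(range(len(arms)), key=values.__getitem__)  # exploit
        r = reward_fn(arms[a], rng)   # bandit feedback: reward of chosen arm only
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]               # incremental mean
    return arms[max(range(len(arms)), key=values.__getitem__)]

# Toy collision-channel environment (all values assumed for illustration):
# arm = transmit probability; the other node transmits w.p. 0.2; a packet is
# delivered only without collision; each attempt costs energy c.
def reward(p, rng, p_other=0.2, V=1.0, c=0.3):
    tx = rng.random() < p
    other_tx = rng.random() < p_other
    return V * (tx and not other_tx) - c * tx

best = epsilon_greedy(reward, arms=[0.0, 0.25, 0.5, 0.75, 1.0])
```

Here the expected reward of arm `p` is `(V * 0.8 - c) * p = 0.5 * p`, so the learner should converge to always transmitting; in a real multi-node setting the other nodes adapt too, which is what makes the game-theoretic analysis necessary.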
📝 Abstract
The Goal-oriented Communication (GoC) paradigm breaks the separation between communication and the content of the data, tailoring communication decisions to the specific needs of the receiver and targeting application performance. While recent studies show impressive encoding performance in point-to-point scenarios, the multi-node distributed scenario remains almost unexplored. Moreover, the few studies that do investigate it consider a centralized collision-free approach, in which a central scheduler decides the transmission order of the nodes. In this work, we address the Goal-oriented Multiple Access (GoMA) problem, in which multiple intelligent agents must coordinate to share a wireless channel and avoid mutual interference. We propose a theoretical framework for the analysis and optimization of distributed GoMA, serving as a first step towards its complete characterization. We prove that the problem is non-convex and may admit multiple Nash Equilibrium (NE) solutions. We characterize each node's best response to the others' strategies and propose an optimization approach that provably reaches one such NE, outperforming centralized approaches by up to 100% while also reducing energy consumption. We also design a distributed learning algorithm that operates with limited feedback and no prior knowledge.
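The best-response and equilibrium-multiplicity claims can be made concrete with a minimal sketch. This is a toy two-node collision channel, not the paper's model: each node picks a transmit probability, a packet is delivered only when the other node is silent, and the payoff is an assumed goal value `V=1.0` per delivered packet minus an assumed energy cost `c=0.3` per attempt. Because the utility is linear in a node's own probability, the best response is a boundary point, and asynchronous best-response dynamics reach a fixed point.

```python
# Illustrative sketch (toy model, not the paper's): best-response dynamics
# in a two-node collision channel with transmit probabilities p_i in [0, 1].

def utility(p_i, p_j, V=1.0, c=0.3):
    """Expected payoff of node i: success value minus energy cost (assumed)."""
    return V * p_i * (1.0 - p_j) - c * p_i

def best_response(p_j, V=1.0, c=0.3):
    """Utility is linear in p_i, so the best response sits at a boundary."""
    return 1.0 if V * (1.0 - p_j) > c else 0.0

def br_dynamics(p=(0.5, 0.5), rounds=10):
    """Asynchronous best-response updates until (here, quickly) stationary."""
    p = list(p)
    for _ in range(rounds):
        p[0] = best_response(p[1])
        p[1] = best_response(p[0])
    return tuple(p)

print(br_dynamics())            # -> (1.0, 0.0): node 0 captures the channel
print(br_dynamics((0.1, 0.9)))  # -> (0.0, 1.0): a different NE
```

Both fixed points are Nash equilibria of this toy game, and which one the dynamics reach depends on the starting point and update order; this mirrors, in miniature, the non-convexity and NE multiplicity established for GoMA.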