🤖 AI Summary
This paper addresses minimizing inference error in remote real-time signal prediction under bidirectional stochastic delays with memory, a setting where the conventional assumption of a monotonic Age-of-Information (AoI)–inference-error relationship fails.
Method: The paper models a possibly non-monotonic AoI–inference-error relationship and proposes a goal-oriented communication scheduling framework that jointly optimizes packet length and transmission timing (and hence AoI), explicitly accounting for forward-sensing and feedback delays to minimize the time-averaged inference error. The approach combines stochastic optimization, AoI theory, and real-time predictive modeling.
Results: Simulations under highly variable delay conditions demonstrate that the proposed strategy reduces average inference error by over 40% compared to baseline methods, significantly enhancing both accuracy and robustness of remote real-time inference.
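As a toy illustration of the joint decision described above (not the paper's algorithm), the two coupled choices — packet length and a waiting time before each transmission — can be explored by simulating a delay process with memory and grid-searching both decisions. Every function, constant, and parameter below is an assumption made for illustration only.

```python
import random

# Toy model (all assumptions, not the paper's): inference error is
# non-monotonic in AoI and decreases with packet length, while longer
# packets inflate the transmission delay.
def inference_error(aoi, pkt_len):
    return 1.0 / pkt_len + 0.05 * (aoi - 2.0) ** 2  # minimum near AoI = 2

def avg_error(pkt_len, wait, n=20000, seed=0):
    """Simulated time-average inference error for a fixed packet length
    and a fixed waiting time inserted before each transmission."""
    rng = random.Random(seed)
    total_err = total_time = 0.0
    delay = 1.0
    for _ in range(n):
        # Delay process with memory: an AR-style mix of the previous delay
        # and a fresh exponential sample (correlated, non-IID delays).
        delay = 0.8 * delay + 0.2 * rng.expovariate(1.0)
        fwd = delay * (0.5 + 0.1 * pkt_len)   # forward delay grows with packet length
        cycle = wait + fwd + delay            # waiting + forward + feedback delay
        # AoI at the receiver ramps up over the cycle; use a coarse
        # midpoint approximation of the integrated error.
        mid_aoi = fwd + cycle / 2.0
        total_err += inference_error(mid_aoi, pkt_len) * cycle
        total_time += cycle
    return total_err / total_time

# Grid search over the two coupled decisions: packet length and waiting time.
best = min((avg_error(L, w), L, w)
           for L in (1, 2, 4, 8)
           for w in (0.0, 0.5, 1.0, 2.0))
```

Because the error model is non-monotonic in AoI, the minimizer is generally not the zero-wait, shortest-packet policy that pure AoI minimization would suggest.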
📝 Abstract
We design a goal-oriented communication strategy for remote inference, where an intelligent model (e.g., a pre-trained neural network) at the receiver side predicts the real-time value of a target signal based on data packets transmitted from a remote location. The inference error depends on both the Age of Information (AoI) and the length of the data packets. Previous formulations of this problem either assumed IID transmission delays with immediate feedback or focused only on monotonic relations where inference performance degrades as the input data ages. In contrast, we consider a possibly non-monotonic relationship between the inference error and AoI. We show how to minimize the expected time-average inference error under two-way delay, where the delay process can have memory. Simulation results highlight the significant benefits of adopting such a goal-oriented communication strategy for remote inference, especially under highly variable delay scenarios.
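The non-monotonic relationship between inference error and AoI that the abstract highlights can arise, for instance, when the target signal reacts to the source with an intrinsic lag: then the freshest sample is not the best predictor input. The following self-contained sketch uses an assumed AR(1) source (an illustration, not the paper's model) and shows the prediction MSE as a function of AoI.

```python
import random

random.seed(1)
n, lag = 50000, 3
# AR(1) source signal x; the target y tracks x with an intrinsic lag of 3 steps.
x = [0.0] * n
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + random.gauss(0, 0.1)
y = [x[i] + random.gauss(0, 0.05) for i in range(n)]  # y at time i + lag is x(i) + noise

def mse_at_aoi(aoi):
    # Predict y[i] (needed at physical time i + lag) from the source sample
    # that is `aoi` steps old at that time, i.e. x[i + lag - aoi].
    errs = [(y[i] - x[i + lag - aoi]) ** 2 for i in range(7, n - 7)]
    return sum(errs) / len(errs)

curve = {a: mse_at_aoi(a) for a in range(8)}  # MSE as a function of AoI
best_aoi = min(curve, key=curve.get)          # minimized at a strictly positive AoI
```

Here the error curve dips at an AoI equal to the lag and rises on both sides, so a scheduler that simply minimizes AoI would overshoot the error-optimal operating point.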