🤖 AI Summary
This work addresses the critical challenge of ensuring long-term fairness in inference performance among users with heterogeneous and time-varying learning tasks in AI-enabled radio access networks (AI-RANs) under shared edge resources. To this end, the authors propose an online-within-online fair multi-task learning framework (OWO-FMTL), which maintains a shared model updated in an outer loop while employing a lightweight primal-dual inner loop to dynamically adjust user priorities. The approach introduces a tunable generalized α-fairness metric to flexibly balance efficiency and fairness, and provides theoretical guarantees that performance disparities across users asymptotically vanish. Experimental results demonstrate that OWO-FMTL significantly outperforms existing baselines on both convex optimization and deep learning tasks, while maintaining low computational overhead suitable for edge deployment.
📝 Abstract
AI-enabled Radio Access Networks (AI-RANs) are expected to serve heterogeneous users with time-varying learning tasks over shared edge resources. Ensuring equitable inference performance across these users requires adaptive and fair learning mechanisms. This paper introduces an online-within-online fair multi-task learning (OWO-FMTL) framework that ensures long-term equity across users. The method combines two learning loops: an outer loop that updates the shared model across rounds, and an inner loop that rebalances user priorities within each round via a lightweight primal-dual update. Equity is quantified via generalized α-fairness, allowing a trade-off between efficiency and fairness. The framework guarantees diminishing performance disparity over time and operates with low computational overhead suitable for edge deployment. Experiments on convex and deep learning tasks confirm that OWO-FMTL outperforms existing multi-task learning baselines under dynamic scenarios.
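To make the two-loop structure concrete, below is a minimal illustrative sketch. The generalized α-fairness utility is the standard one (logarithmic at α = 1, proportional/max-min trade-off otherwise); the function names and the specific multiplicative primal-dual weight update are hypothetical, since the paper's exact update rule is not reproduced here.

```python
import numpy as np

def alpha_fair_utility(x, alpha):
    """Generalized α-fairness utility: log(x) when α = 1,
    otherwise x^(1-α) / (1-α). Larger α weights worse-off users more."""
    x = np.asarray(x, dtype=float)
    if alpha == 1.0:
        return np.log(x)
    return x ** (1.0 - alpha) / (1.0 - alpha)

def primal_dual_weights(losses, weights, step=0.5):
    """Hypothetical inner-loop step: multiplicatively raise the priority
    of users with higher loss, then renormalize to the simplex."""
    w = weights * np.exp(step * np.asarray(losses, dtype=float))
    return w / w.sum()

# Toy online-within-online demo (illustrative, not the paper's algorithm):
# two users with different targets share one scalar model parameter theta.
targets = np.array([0.0, 4.0])
theta = 2.0
w = np.array([0.5, 0.5])
for _ in range(50):
    losses = (theta - targets) ** 2            # per-user inference loss
    w = primal_dual_weights(losses, w)         # inner loop: rebalance priorities
    grad = np.sum(w * 2.0 * (theta - targets)) # gradient of weighted loss
    theta -= 0.1 * grad                        # outer loop: shared-model update
```

Because the inner loop shifts priority toward the currently worse-off user before each outer-loop step, the shared model is pulled toward an equitable operating point rather than the plain average, which is the intuition behind the diminishing-disparity guarantee.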