🤖 AI Summary
In multi-task learning with graph-dependent data, existing generalization bounds are suboptimal, typically $O(1/\sqrt{n})$, because classical concentration inequalities fail to capture the dependence structure. Method: This paper introduces the first sharper Bennett and Talagrand inequalities tailored to multi-graph-dependent random variables, overcoming the precision limitations of conventional concentration tools, and develops a novel analytical framework based on local Rademacher complexity. Contribution/Results: The proposed framework yields a tighter risk upper bound of $O(\log n/n)$, significantly sharpening generalization analysis. Applied to canonical graph-dependent multi-task settings such as Macro-AUC optimization, the results provide stronger theoretical guarantees than prior approaches and are corroborated by empirical performance.
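For context, a minimal sketch of the classical Bennett inequality for independent variables, which the paper's multi-graph-dependent version generalizes (this is the textbook statement, not the paper's exact result): for independent zero-mean $X_1, \dots, X_n$ with $X_i \le a$ almost surely,

$$\Pr\Big(\sum_{i=1}^{n} X_i \ge t\Big) \le \exp\Big(-\frac{v}{a^2}\, h\Big(\frac{a t}{v}\Big)\Big), \qquad v = \sum_{i=1}^{n}\mathrm{Var}(X_i), \qquad h(u) = (1+u)\log(1+u) - u.$$

The variance term $v$ is what makes Bennett-type bounds, when combined with localization arguments, capable of fast $O(\log n/n)$ rates, in contrast to bounded-difference-style inequalities that stop at $O(1/\sqrt{n})$.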
📝 Abstract
In multi-task learning (MTL) where each task involves graph-dependent data, existing theoretical analyses yield a sub-optimal risk bound of $O(\frac{1}{\sqrt{n}})$, where $n$ is the number of training samples. This is attributed to the lack of a foundational sharper concentration inequality for multi-graph-dependent random variables. To fill this gap, this paper proposes a new Bennett inequality for this setting, enabling the derivation of a sharper risk bound of $O(\frac{\log n}{n})$. Specifically, building on the proposed Bennett inequality, we derive a corresponding Talagrand inequality for the empirical process and further develop an analytical framework of local Rademacher complexity to enhance theoretical generalization analyses in MTL with multi-graph-dependent data. Finally, we apply these theoretical advancements to applications such as Macro-AUC optimization, demonstrating the superiority of our theoretical results over previous work, which is also corroborated by experimental results.
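To illustrate the shape of the improvement, here is a schematic of the standard local-Rademacher-complexity bound template (in the spirit of Bartlett, Bousquet, and Mendelson; not the paper's exact theorem, and the constants $K$, $c_1$, $c_2$ are placeholders): with probability at least $1-\delta$, for every predictor $f$ in the class,

$$R(f) \;\le\; \frac{K}{K-1}\,\hat{R}_n(f) \;+\; c_1\, r^{*} \;+\; \frac{c_2 \log(1/\delta)}{n},$$

where $K > 1$ and $r^{*}$ is the fixed point of a sub-root upper bound on the local Rademacher complexity. When $r^{*} = O(\frac{\log n}{n})$, the template recovers the fast rate claimed above, whereas global Rademacher complexity analyses only give the $O(\frac{1}{\sqrt{n}})$ rate.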