🤖 AI Summary
Existing theoretical analyses of Transformers are limited to modeling tree-structured dependencies (i.e., single-parent relationships), whereas real-world sequences often arise from directed acyclic graphs (DAGs) with multi-parent structures.
Method: We propose the first provably DAG-recoverable theoretical framework for Transformers. We introduce kernel-guided mutual information (KG-MI), a novel information-theoretic measure built upon f-divergence, and design a training objective where each attention head learns an independent marginal transition kernel to capture distinct parent-child dependencies.
Contribution/Results: We establish the first global convergence guarantee for single-layer multi-head Transformers in polynomial time. Crucially, when instantiated with the KL divergence, the learned attention scores provably recover the true DAG's adjacency matrix exactly. Empirical evaluation confirms strong alignment between theoretical predictions and actual structural recovery accuracy.
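The paper's exact recovery procedure is not spelled out in this summary; as a minimal illustrative sketch (the function name, the thresholding rule, and the toy attention matrices are all our assumptions, not the paper's method), one way attention scores can encode a multi-parent DAG is for each head to place its mass on one parent per node, so that thresholding each head and taking the union across heads yields an adjacency estimate:

```python
import numpy as np

def recover_adjacency(head_attn: np.ndarray, tau: float = 0.5) -> np.ndarray:
    """head_attn: (K, n, n) per-head attention scores, row i = child node i,
    column j = candidate parent j, rows summing to 1.
    Returns a binary (n, n) adjacency estimate with entry (i, j) = 1
    iff some head places at least tau of node i's attention on node j."""
    per_head = head_attn >= tau               # each head flags its dominant parent
    est = per_head.any(axis=0).astype(int)    # union over the K heads
    np.fill_diagonal(est, 0)                  # root nodes attend to themselves;
    return est                                # self-loops are not DAG edges

# Toy example: n = 3 nodes, K = 2 heads. Node 2 has two parents (0 and 1),
# and each head captures one of them.
A1 = np.array([[1.0, 0.0, 0.0],    # node 0: root, mass on itself
               [0.9, 0.1, 0.0],    # node 1: parent 0
               [0.1, 0.8, 0.1]])   # node 2: parent 1 (this head)
A2 = np.array([[1.0, 0.0, 0.0],
               [0.9, 0.1, 0.0],
               [0.9, 0.05, 0.05]]) # node 2: parent 0 (this head)

adj = recover_adjacency(np.stack([A1, A2]), tau=0.5)
# adj rows: node 0 has no parents; node 1 has parent 0; node 2 has parents 0 and 1
```

The union over heads is what lets K heads jointly represent up to K parents per node, which a single attention distribution (one row summing to 1) cannot do cleanly on its own.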
📄 Abstract
Uncovering hidden graph structures underlying real-world data is a critical challenge with broad applications across scientific domains. Recently, transformer-based models leveraging the attention mechanism have demonstrated strong empirical success in capturing complex dependencies within graphs. However, the theoretical understanding of their training dynamics has been limited to tree-like graphs, where each node depends on a single parent. Extending provable guarantees to more general directed acyclic graphs (DAGs) -- which involve multiple parents per node -- remains challenging, primarily due to the difficulty of designing training objectives that enable different attention heads to learn distinct parent relationships separately.
In this work, we address this problem by introducing a novel information-theoretic metric: the kernel-guided mutual information (KG-MI), based on the $f$-divergence. Our objective combines KG-MI with a multi-head attention framework, where each head is associated with a distinct marginal transition kernel to model diverse parent-child dependencies effectively. We prove that, given sequences generated by a $K$-parent DAG, training a single-layer, multi-head transformer via gradient ascent converges to the global optimum in polynomial time. Furthermore, we characterize the attention score patterns at convergence. In addition, when specializing the $f$-divergence to the KL divergence, the learned attention scores accurately reflect the ground-truth adjacency matrix, thereby provably recovering the underlying graph structure. Experimental results validate our theoretical findings.
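For readers unfamiliar with the $f$-divergence machinery that KG-MI builds on, the standard definitions are as follows (notation here is generic and not necessarily the paper's):

```latex
% f-divergence between distributions P and Q, for convex f with f(1) = 0:
D_f(P \,\|\, Q) \;=\; \mathbb{E}_{Q}\!\left[ f\!\left( \frac{dP}{dQ} \right) \right].

% Specializing to f(t) = t \log t recovers the KL divergence:
D_f(P \,\|\, Q)\big|_{f(t) = t \log t}
  \;=\; \mathrm{KL}(P \,\|\, Q)
  \;=\; \mathbb{E}_{P}\!\left[ \log \frac{dP}{dQ} \right],
```

which is the instantiation under which the abstract's exact adjacency-matrix recovery result is stated.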