Graph-Attention-Based Causal Discovery With Trust Region-Navigated Clipping Policy Optimization

📅 2021-10-19
🏛️ IEEE Transactions on Cybernetics
📈 Citations: 7
Influential: 0
🤖 AI Summary
Existing reinforcement learning (RL)-based causal discovery methods—such as REINFORCE and PPO—suffer from poor convergence, training instability, and difficulty modeling latent variables or undirected edges. To address these challenges, this paper proposes an end-to-end DAG learning framework grounded in graph attention mechanisms. Its core contributions are: (1) a Structure-agnostic Dynamic Graph Attention Network (SDGAT) encoder that learns variable structural representations without requiring prior neighborhood information; and (2) a trust-region-guided clipping strategy to stabilize policy updates during training. Extensive experiments on synthetic and benchmark datasets demonstrate that the proposed method significantly outperforms existing RL-based causal discovery approaches, achieving state-of-the-art performance in both structural recovery accuracy and training robustness.
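The trust-region-guided clipping idea can be illustrated with a short sketch. The paper's exact update rule is not reproduced on this page, so the snippet below is only an assumption about the mechanism: a PPO-style clipped surrogate whose clipping is activated by a trust-region (KL-divergence) check rather than applied unconditionally. The function name `surrogate_objective` and the thresholds are illustrative, not from the paper.

```python
import numpy as np

def surrogate_objective(ratios, advantages, kl, eps=0.2, kl_thresh=0.01):
    """Hedged sketch of a trust-region-navigated clipping objective.

    Unlike vanilla PPO, which always clips the probability ratio,
    this sketch applies clipping only when the estimated KL divergence
    between old and new policies exceeds a trust-region threshold
    (kl_thresh) -- an assumption for illustration, not the paper's
    exact rule.
    """
    if kl > kl_thresh:
        # Outside the trust region: fall back to PPO-style clipping.
        clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps)
        return np.minimum(ratios * advantages, clipped * advantages).mean()
    # Inside the trust region: unclipped surrogate, as in TRPO's objective.
    return (ratios * advantages).mean()
```

For example, with ratios `[0.5, 1.5]` and unit advantages, the objective is left unclipped when `kl` is small but penalizes over-large ratio steps once `kl` crosses the threshold.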

📝 Abstract
In many domains of empirical sciences, discovering the causal structure within variables remains an indispensable task. Recently, to tackle unoriented edges or latent assumptions violation suffered by conventional methods, researchers formulated a reinforcement learning (RL) procedure for causal discovery and equipped a REINFORCE algorithm to search for the best rewarded directed acyclic graph. The two keys to the overall performance of the procedure are the robustness of RL methods and the efficient encoding of variables. However, on the one hand, REINFORCE is prone to local convergence and unstable performance during training. Neither trust region policy optimization, being computationally expensive, nor proximal policy optimization (PPO), suffering from aggregate constraint deviation, is a decent alternative for combinatory optimization problems with considerable individual subactions. We propose a trust region-navigated clipping policy optimization method for causal discovery that guarantees both better search efficiency and steadiness in policy optimization, in comparison with REINFORCE, PPO, and our prioritized sampling-guided REINFORCE implementation. On the other hand, to boost the efficient encoding of variables, we propose a refined graph attention encoder called SDGAT that can grasp more feature information without a priori neighborhood information. With these improvements, the proposed method outperforms the former RL method in both synthetic and benchmark datasets in terms of output results and optimization robustness.
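The abstract describes SDGAT as a graph attention encoder that works without a priori neighborhood information, i.e., every variable can attend to every other variable. A minimal sketch of such a structure-agnostic, dynamic attention layer is given below; the shapes, parameter names, and the GATv2-style scoring (nonlinearity before the attention dot product) are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def dynamic_graph_attention(H, W, a, leaky=0.2):
    """Minimal sketch of a structure-agnostic dynamic attention layer.

    No adjacency matrix is supplied: every node (variable) attends to
    every other node, matching the "no a priori neighborhood
    information" property attributed to SDGAT. Details are assumed.

    H : (n, d)   node (variable) features
    W : (d, dp)  shared linear projection
    a : (dp,)    attention vector
    """
    Z = H @ W                               # project features
    # Dynamic attention: apply LeakyReLU BEFORE the dot product with
    # `a`, so the score depends jointly on both endpoints.
    pair = Z[:, None, :] + Z[None, :, :]    # (n, n, dp) pairwise sums
    e = np.where(pair > 0, pair, leaky * pair) @ a  # (n, n) raw scores
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)       # softmax over all nodes
    return alpha @ Z                        # attention-weighted aggregation
```

Each output row is a convex combination of all projected node features, so the layer can recover structural representations even when no neighborhood prior is available.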
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Optimization
Stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

SDGAT
Trust Region Optimization
Causal Inference
Shixuan Liu
National University of Defense Technology
Knowledge Reasoning · Domain Generalization · Causal Inference · Data Engineering
Yanghe Feng
College of Systems Engineering, National University of Defense Technology, Changsha, 410073, P.R. China
Keyu Wu
Institute for Infocomm Research, A*STAR, Singapore
deep learning · reinforcement learning · transfer learning · autonomous navigation
Guangquan Cheng
College of Systems Engineering, National University of Defense Technology, Changsha, 410073, P.R. China
Jincai Huang
College of Systems Engineering, National University of Defense Technology, Changsha, 410073, P.R. China
Zhong Liu
College of Systems Engineering, National University of Defense Technology, Changsha, 410073, P.R. China