Non-negative DAG Learning from Time-Series Data

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the problem of learning directed acyclic graphs (DAGs) that encode instantaneous causal relationships in multivariate time series, under a linear structural vector autoregressive model (SVARM) with both instantaneous and time-lagged effects. The standard acyclicity constraint makes DAG learning non-convex, a major bottleneck for continuous-optimization approaches. By assuming **non-negative edge weights**, the authors replace the high-order matrix acyclicity condition with a tractable **convex constraint**, casting DAG recovery from time-series data as a convex program whose solution is guaranteed to be globally optimal. The resulting problem is solved with the method of multipliers (augmented Lagrangian), and experiments on synthetic data show the approach outperforms state-of-the-art alternatives in DAG recovery accuracy.

📝 Abstract
This work aims to learn the directed acyclic graph (DAG) that captures the instantaneous dependencies underlying a multivariate time series. The observed data follow a linear structural vector autoregressive model (SVARM) with both instantaneous and time-lagged dependencies, where the instantaneous structure is modeled by a DAG to reflect potential causal relationships. While recent continuous relaxation approaches impose acyclicity through smooth constraint functions involving powers of the adjacency matrix, they lead to non-convex optimization problems that are challenging to solve. In contrast, we assume that the underlying DAG has only non-negative edge weights, and leverage this additional structure to impose acyclicity via a convex constraint. This enables us to cast the problem of non-negative DAG recovery from multivariate time-series data as a convex optimization problem in abstract form, which we solve using the method of multipliers. Crucially, the convex formulation guarantees global optimality of the solution. Finally, we assess the performance of the proposed method on synthetic time-series data, where it outperforms existing alternatives.
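The generative model in the abstract can be made concrete with a toy simulation. The sketch below (all matrix choices are illustrative assumptions, not the paper's setup) draws samples from a linear SVARM with one lag, x_t = A x_t + B x_{t-1} + e_t, where A holds the non-negative instantaneous DAG weights and B the lagged effects; solving for x_t gives the reduced form x_t = (I - A)^{-1}(B x_{t-1} + e_t):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 500

# A[j, i] is the instantaneous effect of variable i on variable j.
# A strictly lower-triangular A is automatically acyclic, and its
# entries are kept non-negative, matching the paper's assumption.
A = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.3, 0.7, 0.0]])
B = 0.2 * np.eye(d)               # toy lagged self-dependence

# Reduced form: x_t = M (B x_{t-1} + e_t) with M = (I - A)^{-1}.
M = np.linalg.inv(np.eye(d) - A)
X = np.zeros((T, d))
for t in range(1, T):
    e = rng.standard_normal(d)    # exogenous noise
    X[t] = M @ (B @ X[t - 1] + e)
```

The inverse exists whenever the instantaneous graph is a DAG, since I - A is then triangular (up to a permutation) with unit diagonal.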
Problem

Research questions and friction points this paper is trying to address.

Learn a DAG capturing instantaneous dependencies from multivariate time-series data
Impose acyclicity via a convex constraint by assuming non-negative edge weights
Cast DAG recovery as a convex optimization problem with guaranteed global optimality
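The non-negativity assumption is what makes acyclicity tractable: for a general signed weight matrix, traces of matrix powers can cancel and hide cycles, but for elementwise non-negative W, tr(W^k) sums the non-negative weights of all closed walks of length k, so it is strictly positive exactly when a length-k cycle exists. A minimal illustrative check built on this fact (not the paper's actual constraint function):

```python
import numpy as np

def has_cycle_nonneg(W, tol=1e-12):
    """Cycle test for an elementwise NON-NEGATIVE weight matrix W.

    tr(W^k) adds up the weights of closed walks of length k, so with
    no sign cancellation it is positive iff a cycle of length k
    exists. Any cycle has length at most d, so checking k = 1..d
    suffices.
    """
    d = W.shape[0]
    P = W.copy()
    for _ in range(d):
        if np.trace(P) > tol:
            return True
        P = P @ W
    return False

# Edge 0 -> 1 only: acyclic.
dag = np.array([[0.0, 1.0],
                [0.0, 0.0]])
# Edges 0 -> 1 and 1 -> 0: a 2-cycle.
cyc = np.array([[0.0, 1.0],
                [1.0, 0.0]])
```

Because each tr(W^k) is a polynomial in the entries of W with non-negative coefficients, non-negativity is also what lets such conditions be folded into a convex feasible set, which is the structural insight the paper exploits.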
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-negativity of edge weights turns the acyclicity condition into a convex constraint
Convex formulation guarantees a globally optimal solution
Method of multipliers solves the resulting time-series DAG learning problem
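The method of multipliers (augmented Lagrangian) alternates a primal minimization with a dual update on the constraint multiplier. The sketch below runs that loop on a toy equality-constrained convex problem, min ||x - c||^2 subject to 1^T x = 0, purely to show the solver pattern; the paper's objective and acyclicity constraint are different:

```python
import numpy as np

c = np.array([3.0, -1.0, 2.0])
n = c.size
ones = np.ones(n)
lam, rho = 0.0, 1.0   # multiplier and penalty parameter
x = np.zeros(n)

for _ in range(50):
    # Primal step: minimize the augmented Lagrangian in x,
    #   ||x - c||^2 + lam * (1^T x) + (rho / 2) * (1^T x)^2,
    # which here has the closed-form solution of a linear system.
    H = 2.0 * np.eye(n) + rho * np.outer(ones, ones)
    x = np.linalg.solve(H, 2.0 * c - lam * ones)
    # Dual step: shift the multiplier by the constraint residual.
    lam += rho * (ones @ x)

# The iterates converge to the projection of c onto {x : 1^T x = 0},
# i.e. x = c - mean(c).
```

For a convex problem like this, the multiplier iteration converges to the exact constrained optimum without driving rho to infinity, which is the practical appeal over a pure penalty method.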