🤖 AI Summary
Addressing the challenge of jointly achieving flexibility, expressiveness, and interpretability in sparse time-series modeling, this paper proposes GMAN, a framework that extends Graph Neural Additive Networks (GNANs) to sets of sparse time series. GMAN explicitly encodes each temporal trajectory as a directed graph and uses feature grouping and graph grouping to encode priors, enabling fine-grained interpretability at the feature, node, and graph levels while supporting a controllable trade-off between expressiveness and interpretability. Technically, it integrates sparse time-series embedding, hierarchical feature grouping, and interpretability-aware regularization. Empirically, GMAN significantly outperforms strong non-interpretable baselines on real-world tasks, including mortality prediction from sparse blood-test sequences and fake-news detection, while generating high-quality, domain-aligned, and actionable explanations.
📝 Abstract
We introduce GMAN, a flexible, interpretable, and expressive framework that extends Graph Neural Additive Networks (GNANs) to learn from sets of sparse time-series data. GMAN represents each time-dependent trajectory as a directed graph and applies an enriched, more expressive GNAN to each graph. It allows users to control the interpretability-expressivity trade-off by grouping features and graphs to encode priors, and it provides feature, node, and graph-level interpretability. On real-world datasets, including mortality prediction from blood tests and fake-news detection, GMAN outperforms strong non-interpretable black-box baselines while delivering actionable, domain-aligned explanations.
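The abstract's core representational step, turning a sparse, irregularly sampled trajectory into a directed graph, can be sketched as follows. This is an illustrative assumption about one plausible encoding (observations as nodes, temporal successor edges weighted by elapsed time), not the paper's exact construction; the function name and attribute keys are hypothetical.

```python
def trajectory_to_digraph(observations):
    """Encode a sparse trajectory as a directed graph (illustrative sketch).

    observations: list of (timestamp, feature_name, value) tuples,
    possibly unsorted and irregularly sampled.
    Returns (nodes, edges): nodes is a dict {index: attributes};
    edges is a list of (src, dst, time_gap) directed edges following
    temporal order, so the graph preserves the trajectory's ordering
    and sampling gaps.
    """
    obs = sorted(observations, key=lambda o: o[0])
    nodes = {i: {"t": t, "feature": f, "value": v}
             for i, (t, f, v) in enumerate(obs)}
    # Directed edge from each observation to its temporal successor,
    # weighted by the elapsed time between them.
    edges = [(i, i + 1, obs[i + 1][0] - obs[i][0])
             for i in range(len(obs) - 1)]
    return nodes, edges


# Example: a toy blood-test trajectory with uneven sampling.
nodes, edges = trajectory_to_digraph([
    (2.0, "wbc", 6.4),
    (0.0, "hb", 13.1),
    (5.0, "hb", 12.7),
])
```

In a GNAN-style model, per-feature shape functions would then be applied additively over such node attributes, which is what makes feature- and node-level attributions directly readable.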