🤖 AI Summary
This study addresses the challenges of missing time-varying microscopic weights and dual sparsity at both the network and segment levels in road networks. To this end, the authors propose a distribution estimation framework that integrates sparsity-aware embedding, spatiotemporal correlation modeling, and a learnable Gaussian Mixture Model (GMM). By jointly leveraging sparse observed weights, segment-level attributes, and long-range spatiotemporal dependencies, the method enables closed-form modeling of the microscopic weight distribution for each road segment at any given time slot, effectively capturing complex traffic patterns such as heavy tails and multimodality. Experimental results on two real-world datasets demonstrate that the proposed approach significantly outperforms state-of-the-art methods, achieving higher accuracy in microscopic weight distribution completion.
📝 Abstract
Microscopic road-network weights represent fine-grained, time-varying traffic conditions obtained from individual vehicles. An example is travel speeds associated with road segments as vehicles traverse them. These weights support tasks including traffic microsimulation and vehicle routing with reliability guarantees. We study the problem of time-varying microscopic weight completion. During a time slot, the available weights typically cover only some road segments. Weight completion recovers distributions for the weights of every road segment at the current time slot. This problem involves two challenges: (i) contending with two layers of sparsity, where weights are missing at both the network layer (many road segments lack weights) and the segment layer (a segment may have insufficient weights to enable accurate distribution estimation); and (ii) achieving a weight distribution representation that is closed-form and can capture complex conditions flexibly, including heavy tails and multiple clusters.
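A Gaussian mixture is one standard way to obtain such a closed-form yet flexible representation (a sketch of the generic form, not necessarily the paper's exact parameterization): for a road segment s and time slot t with K components,

```latex
p_{s,t}(w) \;=\; \sum_{k=1}^{K} \pi_{k}\,\mathcal{N}\!\left(w;\, \mu_{k},\, \sigma_{k}^{2}\right),
\qquad \pi_{k} \ge 0,\quad \sum_{k=1}^{K} \pi_{k} = 1,
```

where a single component suffices for unimodal conditions, while several components with distinct means capture multiple clusters (e.g., congested vs. free-flow traffic) and heavier tails than a single Gaussian allows.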
To address these challenges, we propose DiSGMM, which combines sparsity-aware embeddings with spatiotemporal modeling to leverage sparse known weights together with learned segment properties and long-range correlations for distribution estimation. DiSGMM represents distributions of microscopic weights as learnable Gaussian mixture models, providing closed-form distributions capable of capturing complex conditions flexibly. Experiments on two real-world datasets show that DiSGMM outperforms state-of-the-art methods.
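To make "closed-form and flexible" concrete, here is a minimal sketch (not the authors' implementation) of evaluating such a mixture density for one road segment. In DiSGMM the mixture parameters would be predicted by the learned model per segment and time slot; the component count and all numeric values below are illustrative assumptions, depicting a bimodal speed profile with a congested mode and a free-flow mode.

```python
import numpy as np

def gmm_pdf(w, pi, mu, sigma):
    """Closed-form mixture density p(w) = sum_k pi_k * N(w; mu_k, sigma_k^2)."""
    w = np.atleast_1d(w).astype(float)
    # Per-component Gaussian densities, shape (len(w), K).
    comp = np.exp(-0.5 * ((w[:, None] - mu) / sigma) ** 2) / (
        sigma * np.sqrt(2.0 * np.pi)
    )
    return comp @ pi  # weighted sum over components

# Hypothetical 2-component mixture for one segment (speeds in km/h):
# a congested mode near 30 and a free-flow mode near 60.
pi = np.array([0.4, 0.6])      # mixture weights, sum to 1
mu = np.array([30.0, 60.0])    # component means
sigma = np.array([4.0, 6.0])   # component standard deviations

grid = np.linspace(0.0, 100.0, 2001)
density = gmm_pdf(grid, pi, mu, sigma)

dx = grid[1] - grid[0]
mass = density.sum() * dx            # numerically ~1.0: a valid density
mean = (grid * density).sum() * dx   # mixture mean = sum_k pi_k * mu_k = 48.0
```

Because the density is available in closed form, downstream tasks such as reliability-aware routing can query tail probabilities or moments directly, without sampling.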