🤖 AI Summary
To address the dual challenges of detecting short-duration, stealthy abnormal stops in intercity bus trajectories with sparse GPS sampling and scarce labeled data, this paper proposes a few-shot semi-supervised learning framework. The method first employs Sparsity-Aware Segmentation (SAS) to precisely identify stop segments under low-sampling-rate conditions. Second, Locally Temporal-Indicator Guided Adjustment (LTIGA) enhances the discriminability of spatiotemporal features by leveraging interpretable trajectory indicators. Third, a synergistic optimization mechanism integrates label propagation, graph convolutional networks (GCNs), and self-training to enable weakly supervised knowledge transfer over a trajectory-based graph structure. Evaluated on real-world intercity bus data, the model achieves an AUC of 0.854 and an AP of 0.866 using only ten labeled samples, substantially outperforming existing approaches. The code and dataset are publicly available.
📝 Abstract
Abnormal stop detection (ASD) in intercity coach transportation is critical for ensuring passenger safety, operational reliability, and regulatory compliance. However, two key challenges hinder ASD effectiveness: sparse GPS trajectories, which obscure short or unauthorized stops, and limited labeled data, which restricts supervised learning. Existing methods often assume dense sampling or regular movement patterns, limiting their applicability. To address data sparsity, we propose a Sparsity-Aware Segmentation (SAS) method that adaptively defines segment boundaries based on local spatial-temporal density. Building upon these segments, we introduce three domain-specific indicators to capture abnormal stop behaviors. To further mitigate the impact of sparsity, we develop Locally Temporal-Indicator Guided Adjustment (LTIGA), which smooths these indicators via local similarity graphs. To overcome label scarcity, we construct a spatial-temporal graph where each segment is a node with LTIGA-refined features. We apply label propagation to expand weak supervision across the graph, followed by a GCN to learn relational patterns. A final self-training module incorporates high-confidence pseudo-labels to iteratively improve predictions. Experiments on real-world coach data show an AUC of 0.854 and an AP of 0.866 using only 10 labeled instances, outperforming prior methods. The code and dataset are publicly available at https://github.com/pangjunbiao/Abnormal-Stop-Detection-SSL.git.
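To make the label-expansion step concrete, below is a minimal sketch of graph-based label propagation over segment features, in the spirit of the abstract's "expand weak supervision across the graph" step. This is not the paper's implementation: the synthetic 2-D features, the Gaussian affinity graph, the kernel bandwidth, and the label counts are all illustrative assumptions standing in for the LTIGA-refined segment indicators and the spatial-temporal graph.

```python
import numpy as np

# Hypothetical 2-D features per trajectory segment (stand-ins for
# LTIGA-refined stop indicators); two well-separated synthetic clusters.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.3, size=(20, 2))    # normal stop segments
abnormal = rng.normal(2.0, 0.3, size=(20, 2))  # abnormal stop segments
X = np.vstack([normal, abnormal])

# Weak supervision: -1 = unlabeled; only four seeds are labeled,
# mimicking the few-labeled-sample setting described in the abstract.
y = np.full(len(X), -1)
y[0], y[1] = 0, 0      # two labeled normal segments
y[20], y[21] = 1, 1    # two labeled abnormal segments

# Row-normalized Gaussian affinity matrix over segment features
# (an illustrative choice of similarity graph; bandwidth is assumed).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)
np.fill_diagonal(W, 0.0)
P = W / W.sum(axis=1, keepdims=True)

# Iterative label propagation: spread label mass along graph edges,
# then clamp the labeled nodes back to their known labels.
F = np.zeros((len(X), 2))
F[y >= 0, y[y >= 0]] = 1.0
clamped = F.copy()
for _ in range(50):
    F = P @ F
    F[y >= 0] = clamped[y >= 0]

pred = F.argmax(axis=1)  # pseudo-labels for all segments
print(pred)
```

In the paper's pipeline these propagated pseudo-labels would then feed a GCN and a self-training loop; this sketch only covers the propagation step.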