🤖 AI Summary
To address challenges in electric vehicle battery capacity estimation—including scarce labeled data under privacy constraints, sparse and noisy on-site charging records, and poor cross-domain generalization—this paper proposes the first self-supervised pre-training framework tailored for fragmented, de-identified charging snippets. The method jointly leverages masked input reconstruction with snippet-similarity-weighted attention and contrastive learning to enable fine-grained pattern mining and high-level inter-snippet relational modeling, thereby extracting robust representations from low-information-density data. Evaluated under domain-shift scenarios across vehicle manufacturers and aging stages, the model reduces test error by 31.9% relative to the best baseline, significantly improving estimation accuracy and generalization robustness in privacy-preserving settings.
📝 Abstract
Accurate battery capacity estimation is key to alleviating consumer concerns about the battery performance and reliability of electric vehicles (EVs). However, practical data limitations imposed by stringent privacy regulations, together with labeled data shortages, hamper the development of generalizable capacity estimation models that remain robust to real-world data distribution shifts. While self-supervised learning can leverage unlabeled data, existing techniques are not specifically designed to learn effectively from challenging field data -- let alone from privacy-friendly data, which are often less feature-rich and noisier. In this work, we propose a first-of-its-kind capacity estimation model based on self-supervised pre-training, developed on a large-scale dataset of privacy-friendly charging data snippets from real-world EV operations. Our pre-training framework, snippet similarity-weighted masked input reconstruction, is designed to learn rich, generalizable representations even from less feature-rich and fragmented privacy-friendly data. Our key innovation lies in harnessing contrastive learning to first capture high-level similarities among fragmented snippets that otherwise lack meaningful context. Through snippet-wise contrastive learning followed by similarity-weighted masked reconstruction, we learn rich representations of both granular charging patterns within individual snippets and high-level associative relationships across different snippets. Bolstered by this rich representation learning, our model consistently outperforms state-of-the-art baselines, achieving 31.9% lower test error than the best-performing benchmark, even under challenging domain-shifted settings affected by both manufacturer- and age-induced distribution shifts.
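To make the core idea of the abstract concrete, the following is a minimal NumPy sketch of a similarity-weighted masked reconstruction loss: snippet embeddings (assumed to come from a contrastively trained encoder) determine cosine-similarity weights to an anchor snippet, and those weights scale each snippet's masked-reconstruction error. All names, shapes, and the weighting scheme here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def similarity_weighted_masked_mse(snippets, recons, mask, embeds, anchor_idx=0):
    """Toy similarity-weighted masked reconstruction loss (illustrative only).

    snippets : (N, T) original charging snippets
    recons   : (N, T) model reconstructions of the masked inputs
    mask     : (N, T) boolean array, True where the input was masked out
    embeds   : (N, D) snippet embeddings, e.g. from a contrastive encoder

    Each snippet's masked-position MSE is weighted by its (clipped) cosine
    similarity to the anchor snippet's embedding, so reconstruction focuses
    on snippets that the contrastive stage deemed related.
    """
    weights = np.array(
        [max(cosine_sim(embeds[anchor_idx], e), 0.0) for e in embeds]
    )
    weights = weights / weights.sum()  # normalize to a convex combination
    per_snippet = np.array([
        np.mean((snippets[i][mask[i]] - recons[i][mask[i]]) ** 2)
        if mask[i].any() else 0.0
        for i in range(len(snippets))
    ])
    return float(np.sum(weights * per_snippet))
```

In an actual pipeline the weights would come from learned representations rather than a fixed anchor, but the sketch shows how inter-snippet similarity can modulate a per-snippet masked reconstruction objective.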