Foundation CAN LM: A Pretrained Language Model For Automotive CAN Data

📅 2026-01-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited generalizability of existing task-specific models in automotive CAN data analysis by introducing, for the first time, the foundation model paradigm to this domain. Treating CAN signals as a language, the study proposes a unified encoding and tokenization strategy tailored for mixed discrete-continuous time-series data and leverages large-scale unlabeled data for self-supervised pretraining. Through a unified multi-task fine-tuning framework, a single pretrained model demonstrates strong adaptability across multiple heterogeneous auto insurance prediction tasks, effectively validating the feasibility and advantages of the foundation model approach for CAN data analysis.
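The listing does not reproduce the paper's tokenization details, but a minimal sketch of one plausible scheme for mixed discrete-continuous CAN signals might look like the following: continuous signals (e.g., speed) are quantile-binned into a fixed number of tokens, discrete signals (e.g., gear position) receive their own vocabulary entries, and all signals share one token space. The class name, bin count, special tokens, and the quantile-binning choice are illustrative assumptions, not the paper's published method.

```python
import numpy as np

class CANTokenizer:
    """Maps decoded CAN signals onto a single shared token vocabulary (sketch)."""

    def __init__(self, n_bins=256):
        self.n_bins = n_bins       # quantization resolution for continuous signals (assumed)
        self.bin_edges = {}        # signal name -> interior quantile edges
        self.discrete_vocab = {}   # signal name -> {raw value: local id}
        self.offsets = {}          # signal name -> offset into the shared vocab
        self.vocab_size = 2        # ids 0 and 1 reserved for <pad> and <unk> (assumed)

    def fit(self, trips, continuous, discrete):
        """Learn bins and vocabularies from unlabeled decoded trips.

        trips: list of dicts mapping signal name -> 1-D np.ndarray per trip.
        """
        for name in continuous:
            values = np.concatenate([t[name] for t in trips])
            qs = np.linspace(0.0, 1.0, self.n_bins + 1)[1:-1]
            self.bin_edges[name] = np.quantile(values, qs)
            self.offsets[name] = self.vocab_size
            self.vocab_size += self.n_bins
        for name in discrete:
            values = np.unique(np.concatenate([t[name] for t in trips]))
            self.discrete_vocab[name] = {v: i for i, v in enumerate(values)}
            self.offsets[name] = self.vocab_size
            self.vocab_size += len(values)

    def encode(self, trip):
        """Tokenize one trip, interleaving signals timestep by timestep."""
        columns = []
        for name, edges in self.bin_edges.items():
            columns.append(self.offsets[name] + np.digitize(trip[name], edges))
        for name, vocab in self.discrete_vocab.items():
            ids = [self.offsets[name] + vocab[v] if v in vocab else 1  # 1 = <unk>
                   for v in trip[name]]
            columns.append(np.array(ids))
        # (T, n_signals) -> flat, timestep-major token stream for the LM
        return np.stack(columns, axis=1).reshape(-1)
```

Under this kind of scheme, the resulting token stream can be fed to a standard language-model pretraining objective, which is consistent with the "CAN signals as a language" framing above.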

📝 Abstract
The Controller Area Network (CAN) bus provides a rich source of vehicular signals increasingly leveraged for applications in automotive and auto insurance domains, including collision detection, predictive maintenance, and driver risk modeling. Despite this potential, existing pipelines largely train isolated task-specific models on raw CAN data, with only limited efforts exploring decoded signals. Such fragmentation prevents shared representation learning and limits cross-task generalization. By contrast, natural language processing (NLP) and computer vision (CV) have been transformed by the foundation model paradigm: large-scale pretraining followed by task-specific adaptation. In this work, we introduce the foundation CAN model that demonstrates multi-objective downstream generalization using a single pretrained backbone. Our approach treats CAN data as a language: we pretrain on large-scale, unlabeled decoded CAN signals and fine-tune across heterogeneous auto insurance tasks. To enable this, we propose a unified tokenization scheme for mixed discrete-continuous signals and address challenges of temporal complexity and trip-specific variability. Our results show that one pretrained CAN model can adapt effectively to diverse predictive tasks, validating that the foundation modeling paradigm, proven in NLP and CV, also holds for CAN data. This establishes a new direction for generalizable representation learning in automotive AI.
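As a rough illustration of the "single pretrained backbone, heterogeneous downstream tasks" setup the abstract describes, here is a hedged PyTorch sketch: one shared encoder with one lightweight head per insurance task. The backbone architecture, mean-pooling, vocabulary size, and task names are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiTaskCANModel(nn.Module):
    """One pretrained backbone shared across heterogeneous task heads (sketch)."""

    def __init__(self, backbone: nn.Module, d_model: int, task_dims: dict):
        super().__init__()
        self.backbone = backbone                  # pretrained CAN LM encoder
        self.heads = nn.ModuleDict({
            task: nn.Linear(d_model, out_dim)     # one lightweight head per task
            for task, out_dim in task_dims.items()
        })

    def forward(self, tokens, task):
        h = self.backbone(tokens)                 # (batch, seq, d_model)
        pooled = h.mean(dim=1)                    # simple mean-pooled trip embedding
        return self.heads[task](pooled)

# Hypothetical usage: alternating batches across tasks lets the shared
# backbone receive gradients from every downstream objective.
d_model = 256
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
embed = nn.Embedding(1024, d_model)               # vocab size is an assumption
model = MultiTaskCANModel(nn.Sequential(embed, encoder), d_model,
                          task_dims={"collision_risk": 2, "maintenance": 5})
logits = model(torch.randint(0, 1024, (8, 128)), task="collision_risk")
print(logits.shape)  # torch.Size([8, 2])
```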
Problem

Research questions and friction points this paper is trying to address.

CAN data
foundation model
representation learning
cross-task generalization
automotive AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

foundation model
CAN bus
pretrained language model
tokenization
representation learning
Akiharu Esashi
HPCC Lab, University of North Texas, Denton, Texas, USA
Pawissanutt Lertpongrujikorn
HPCC Lab, University of North Texas, Denton, Texas, USA
Justin Makino
Connected Analytic Services, Plano, Texas, USA
Yuibi Fujimoto
Toyota Insurance Management Solutions, Plano, Texas, USA
Mohsen Amini Salehi
Associate Professor of Computer Science and Engineering, University of North Texas