🤖 AI Summary
This study addresses the lack of empirical guidance for selecting a model export format during the development of ML-enabled systems. It systematically evaluates five formats (ONNX, TensorFlow's SavedModel, PyTorch's TorchScript, Pickle, and Joblib) with respect to integration efficiency, cross-platform compatibility, and maintenance cost. Using an embedded multi-case empirical design spanning two ML-enabled systems and three distinct technology stacks, the authors combine a preliminary questionnaire survey (n=17), structured on-site observations, and qualitative thematic analysis. The findings indicate that ONNX achieves the best overall balance of cross-framework portability and integration efficiency; SavedModel excels in end-to-end deep learning pipelines, particularly through preprocessing encapsulation; whereas Pickle and Joblib exhibit well-known security risks and tight environment coupling, incurring the highest integration costs. This work provides an engineering-oriented, empirically grounded basis for selecting model serialization formats in production AI deployment.
📝 Abstract
Machine learning (ML) models are often integrated into ML-enabled systems to provide software functionality that would otherwise be impossible. This integration requires the selection of an appropriate ML model export format, for which many options are available. These formats are crucial for ensuring seamless integration, and choosing a suboptimal one can negatively impact system development. However, little evidence is available to guide practitioners during export format selection. We therefore evaluated various model export formats regarding their impact on the development of ML-enabled systems from an integration perspective. Based on the results of a preliminary questionnaire survey (n=17), we designed an extensive embedded case study with two ML-enabled systems, each in three versions with different technologies. We then analyzed the effect of five popular export formats, namely ONNX, Pickle, TensorFlow's SavedModel, PyTorch's TorchScript, and Joblib. In total, we studied 30 units of analysis (2 systems x 3 tech stacks x 5 formats) and collected data via structured field notes. The holistic qualitative analysis of the results indicated that ONNX offered the most efficient integration and portability across most cases. SavedModel and TorchScript were very convenient to use in Python-based systems, but otherwise required workarounds (TorchScript more than SavedModel). SavedModel also allowed the easy incorporation of preprocessing logic into a single file, which made it scalable for complex deep learning use cases. Pickle and Joblib were the most challenging to integrate, even in Python-based systems. Regarding technical support, all model export formats had thorough technical documentation and strong community support across platforms such as Stack Overflow and Reddit. Practitioners can use our findings to inform the selection of ML export formats suited to their context.
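The security concern around Pickle (and Joblib, which builds on it) stems from how deserialization works: loading a pickle can invoke arbitrary callables chosen by whoever produced the file. The minimal sketch below illustrates the mechanism with a deliberately benign payload; the `Payload` class is a hypothetical example, not code from the study.

```python
import pickle

# Why Pickle-based formats are a security risk: unpickling can invoke
# arbitrary callables. An object's __reduce__ tells pickle which function
# to call (and with which arguments) when the blob is loaded.
class Payload:
    def __reduce__(self):
        # Benign stand-in: an attacker could instead return something like
        # (os.system, ("<malicious shell command>",)).
        return (str.upper, ("arbitrary code ran at load time",))

blob = pickle.dumps(Payload())
# loads() calls str.upper(...) instead of reconstructing a Payload object.
result = pickle.loads(blob)
print(result)  # → ARBITRARY CODE RAN AT LOAD TIME
```

This is why loading a pickled model file is only safe when the file comes from a fully trusted source, whereas formats such as ONNX or SavedModel store a data-oriented graph rather than executable construction instructions.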