Deploying Atmospheric and Oceanic AI Models on Chinese Hardware and Framework: Migration Strategies, Performance Optimization and Analysis

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
State-of-the-art atmospheric and oceanic AI models (e.g., FourCastNet, AI-GOMS) heavily rely on GPU hardware and are incompatible with domestic AI chips and frameworks. Method: This work proposes a hardware-software co-optimization framework tailored for domestic AI accelerators (e.g., Ascend), enabling lossless precision migration from PyTorch to MindSpore via integrated model architecture adaptation, memory optimization, distributed parallelism strategies, and instruction-level acceleration. Results: On domestic platforms, training speed improves by 23%, inference throughput increases by 1.8×, and energy efficiency rises by 41%, while meteorological prediction accuracy—measured by ACC and RMSE—matches the original GPU-based implementation. This study establishes, for the first time, a high-fidelity, high-performance, and low-dependency deployment pathway for climate AI on domestic infrastructure, providing critical technical support for autonomous, controllable AI computing in China’s meteorological and oceanographic domains.

📝 Abstract
With the growing role of artificial intelligence in climate and weather research, efficient model training and inference are in high demand. Current models like FourCastNet and AI-GOMS depend heavily on GPUs, limiting hardware independence, especially for Chinese domestic hardware and frameworks. To address this issue, we present a framework that migrates large-scale atmospheric and oceanic models from PyTorch to MindSpore, optimizes them for Chinese chips, and evaluates their performance against GPUs. The framework focuses on software-hardware adaptation, memory optimization, and parallelism. Model performance is evaluated across multiple metrics, including training speed, inference speed, model accuracy, and energy efficiency, with comparisons against GPU-based implementations. Experimental results demonstrate that the migration and optimization process preserves the models' original accuracy while significantly reducing system dependencies and improving operational efficiency, establishing Chinese chips as a viable alternative for scientific computing. This work provides valuable insights and practical guidance for leveraging Chinese domestic chips and frameworks in atmospheric and oceanic AI model development, offering a pathway toward greater technological independence.
Problem

Research questions and friction points this paper is trying to address.

Migrating atmospheric AI models to Chinese hardware frameworks
Optimizing performance for domestic chips while maintaining accuracy
Reducing dependency on foreign GPUs for scientific computing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Migrating AI models from PyTorch to MindSpore
Optimizing models for Chinese domestic chips
Evaluating performance across speed and accuracy metrics
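One concrete step in a PyTorch-to-MindSpore migration of the kind described above is mapping layer and operator APIs between the two frameworks, since some classes are renamed (e.g., PyTorch's `Linear` corresponds to MindSpore's `Dense`). The sketch below is illustrative only and is not the paper's tooling: the name pairs reflect the public APIs of both frameworks, while the `convert_layer_name` helper is a hypothetical example of how such a mapping table might be applied.

```python
# Illustrative PyTorch -> MindSpore layer-name mapping table.
# The API names on both sides are real; the helper is a sketch,
# not the migration framework from the paper.

TORCH_TO_MINDSPORE = {
    "torch.nn.Conv2d": "mindspore.nn.Conv2d",
    "torch.nn.LayerNorm": "mindspore.nn.LayerNorm",
    "torch.nn.GELU": "mindspore.nn.GELU",
    "torch.nn.Linear": "mindspore.nn.Dense",   # Linear is renamed Dense
    "torch.nn.Dropout": "mindspore.nn.Dropout",
}

def convert_layer_name(torch_name: str) -> str:
    """Return the MindSpore class path for a PyTorch layer class path."""
    try:
        return TORCH_TO_MINDSPORE[torch_name]
    except KeyError:
        raise ValueError(f"no known MindSpore equivalent for {torch_name}")

print(convert_layer_name("torch.nn.Linear"))  # mindspore.nn.Dense
```

A real migration also has to handle semantic differences beyond renaming (e.g., default padding modes and parameter-initialization conventions differ between the frameworks), which is why the paper pairs such adaptation with accuracy checks against the GPU baseline.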
Yuze Sun
Tsinghua University, doc
Deep Learning · AI for Earth
Wentao Luo
Huawei Technologies Co., Ltd
Yanfei Xiang
Department of Earth System Science, Ministry of Education Key Laboratory for Earth System Modelling, Institute for Global Change Studies, Tsinghua University, Beijing, China
Jiancheng Pan
INSAIT, Sofia University "St. Kliment Ohridski"; R.A., THU; M.S., ZJUT
Multimodal Learning · Foundation Models · MLLMs · Data-centric AI · AI4Earth
Jiahao Li
Department of Earth System Science, Ministry of Education Key Laboratory for Earth System Modelling, Institute for Global Change Studies, Tsinghua University, Beijing, China
Quan Zhang
Department of Earth System Science, Ministry of Education Key Laboratory for Earth System Modelling, Institute for Global Change Studies, Tsinghua University, Beijing, China
Xiaomeng Huang
Tsinghua University
Earth System Model · HPC · Big Data · AI