OSUM-Pangu: An Open-Source Multidimension Speech Understanding Foundation Model Built upon OpenPangu on Ascend NPUs

πŸ“… 2026-03-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the scarcity of open-source speech foundation models compatible with non-CUDA hardware platforms, such as Ascend NPUs, which has hindered the development of domestically viable multimodal AI systems. We present the first open-source, multidimensional speech understanding foundation model that supports end-to-end training and inference entirely on a non-CUDA software-hardware stack. Built upon the OpenPangu-7B language backbone and integrated with an audio encoder, our model employs a staged alignment strategy to jointly optimize speech perception and user intent recognition. Evaluated on Ascend NPU hardware, it achieves task accuracy comparable to state-of-the-art GPU-based models while preserving strong natural language interaction capabilities, thereby establishing a reproducible baseline for indigenous multimodal intelligence.

πŸ“ Abstract
Recent advances in Speech Large Language Models have significantly enhanced multi-dimensional speech understanding. However, most high-performance frameworks are optimized for GPU-centric ecosystems and proprietary backbones, creating a significant gap for deployment on non-CUDA computing infrastructures. In this paper, we present OSUM-Pangu, a fully open-source speech understanding foundation model developed on a completely non-CUDA software and hardware stack. By integrating an audio encoder with the openPangu-7B LLM backbone, we implement the entire training and inference pipeline on the Ascend NPU platform. To facilitate efficient task alignment under non-CUDA resource constraints, we adopt a practical training process that sequentially bridges speech perception and user intent recognition. Experimental results demonstrate that OSUM-Pangu achieves task accuracy comparable to mainstream GPU-based models while maintaining robust natural language interaction capabilities. Our work provides a reproducible, non-CUDA baseline for the open-source speech community, promoting the independent evolution of multimodal intelligence.
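The abstract describes coupling an audio encoder to the LLM backbone. In models of this family, that coupling is typically a learned projection (adapter) that maps per-frame acoustic features into the LLM's embedding space, after which speech frames and prompt tokens are consumed as one sequence. The NumPy sketch below illustrates only that general pattern; all dimensions, names, and the linear-adapter choice are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative dimensions only -- not taken from the paper.
AUDIO_DIM, LLM_DIM = 512, 1024

rng = np.random.default_rng(0)

def adapter(audio_feats: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Project per-frame audio features into the LLM embedding space."""
    return audio_feats @ W + b

# Stand-in encoder output: 50 frames of 512-dim acoustic features.
audio_feats = rng.standard_normal((50, AUDIO_DIM))
W = rng.standard_normal((AUDIO_DIM, LLM_DIM)) * 0.01  # adapter weights
b = np.zeros(LLM_DIM)

speech_embeds = adapter(audio_feats, W, b)           # shape (50, 1024)
text_embeds = rng.standard_normal((8, LLM_DIM))      # e.g. an 8-token prompt

# The LLM backbone would consume the concatenated sequence:
# prompt-token embeddings followed by projected speech frames.
llm_input = np.concatenate([text_embeds, speech_embeds], axis=0)
print(llm_input.shape)  # (58, 1024)
```

In practice the adapter weights are trained during the staged alignment the abstract mentions, while encoder and backbone may be frozen or fine-tuned per stage; those specifics are not given here.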
Problem

Research questions and friction points this paper is trying to address.

Speech Understanding
Non-CUDA Infrastructure
Foundation Model
Open-Source
Ascend NPU
Innovation

Methods, ideas, or system contributions that make the work stand out.

non-CUDA
Ascend NPU
speech foundation model
open-source
multimodal intelligence
Yujie Liao
Audio, Speech and Language Processing Group (ASLP@NPU), Northwestern Polytechnical University, Xi’an, China
Xuelong Geng
School of Computer Science, Northwestern Polytechnical University
ASR, LLM, speech
Hongfei Xue
Northwestern Polytechnical University
Speech recognition, self-supervised learning
Shuiyuan Wang
Audio, Speech and Language Processing Group (ASLP@NPU), Northwestern Polytechnical University, Xi’an, China
Lei Xie
Northwestern Polytechnical University
Speech processing, speech recognition, speech synthesis, multimedia, artificial intelligence