Pianist Transformer: Towards Expressive Piano Performance Rendering via Scalable Self-Supervised Pre-Training

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing expressive music performance rendering methods are constrained by small annotated datasets, which hinders model scalability and generalization. To address this, we propose the first large-scale self-supervised learning framework for expressive piano performance rendering. Our approach introduces a unified MIDI representation with expressive feature encoding, coupled with an efficient asymmetric Transformer architecture that enables long-context modeling and low-latency inference. We establish a billion-token-scale pre-training paradigm, enabling stable training of a 135-million-parameter model on a 10-billion-token dataset. Experiments demonstrate human-level performance: the model achieves state-of-the-art objective metrics—including significantly reduced note-duration and velocity prediction errors—and attains a Mean Opinion Score (MOS) of 4.21 in subjective listening evaluations. This work overcomes key bottlenecks of supervised learning in data efficiency and model scalability, setting a new benchmark for expressive piano performance rendering.

📝 Abstract
Existing methods for expressive music performance rendering rely on supervised learning over small labeled datasets, which limits scaling of both data volume and model size, despite the availability of vast unlabeled music, as in vision and language. To address this gap, we introduce Pianist Transformer, with four key contributions: 1) a unified Musical Instrument Digital Interface (MIDI) data representation for learning the shared principles of musical structure and expression without explicit annotation; 2) an efficient asymmetric architecture, enabling longer contexts and faster inference without sacrificing rendering quality; 3) a self-supervised pre-training pipeline with 10B tokens and a 135M-parameter model, unlocking data and model scaling advantages for expressive performance rendering; 4) a state-of-the-art performance model, which achieves strong objective metrics and human-level subjective ratings. Overall, Pianist Transformer establishes a scalable path toward human-like performance synthesis in the music domain.
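The abstract's "asymmetric architecture" pairs a deep encoder (which reads the score once) with a shallow decoder (which runs at every generated token), trading encoder depth for decoder speed. The paper's exact layer counts are not given here, so the configuration below is a purely illustrative sketch of why such a split lowers autoregressive inference cost:

```python
# Hypothetical layer split for an asymmetric encoder-decoder Transformer.
# All numbers are illustrative assumptions, not the paper's configuration.
from dataclasses import dataclass

@dataclass
class AsymConfig:
    d_model: int = 1024
    n_encoder_layers: int = 20   # deep encoder: runs once over the input score
    n_decoder_layers: int = 4    # shallow decoder: runs per generated token

def per_token_decoder_cost(cfg: AsymConfig) -> int:
    # Rough per-token cost of one decoder pass, counting only the dominant
    # matmul parameters: attention (~4*d^2) + feed-forward (~8*d^2) per layer.
    return cfg.n_decoder_layers * 12 * cfg.d_model ** 2

asym = AsymConfig()                                        # 20 enc / 4 dec
sym = AsymConfig(n_encoder_layers=12, n_decoder_layers=12)  # balanced baseline
speedup = per_token_decoder_cost(sym) / per_token_decoder_cost(asym)
print(f"per-token decode cost vs symmetric baseline: {speedup:.1f}x cheaper")
```

With the same total layer budget (24), the generation loop touches only 4 decoder layers instead of 12, so each sampled token is roughly 3x cheaper while the one-time encoder pass absorbs the depth.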
Problem

Research questions and friction points this paper is trying to address.

Develops a self-supervised method for expressive piano performance rendering
Addresses limitations of small labeled datasets by scaling data and model size
Unifies MIDI representation to learn musical structure and expression without annotation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified MIDI representation for unlabeled music learning
Asymmetric architecture for long context and fast inference
Self-supervised pre-training with 10B tokens and 135M parameters
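The unified MIDI representation in the bullets above can be pictured as a single event vocabulary covering both plain scores and expressive performances. The paper's actual tokenization is not reproduced here; the following is a minimal REMI-style sketch under that assumption, where velocity and timing tokens carry the expressive signal:

```python
# Minimal sketch of a unified MIDI event tokenization (REMI-style events
# assumed; the paper's real event types and bin sizes may differ).
def tokenize(notes):
    """notes: list of (onset_ticks, pitch, duration_ticks, velocity)."""
    tokens = []
    prev_onset = 0
    for onset, pitch, dur, vel in sorted(notes):
        shift = onset - prev_onset
        if shift > 0:
            tokens.append(f"TimeShift_{shift}")   # inter-onset gap in ticks
        tokens.append(f"Pitch_{pitch}")           # MIDI note number 0-127
        tokens.append(f"Duration_{dur}")
        # Velocity tokens encode expression; a deadpan score can use a
        # constant velocity, so one vocabulary serves both scores and
        # performances without extra annotation.
        tokens.append(f"Velocity_{vel // 8}")     # bin 0-127 into 16 levels
        prev_onset = onset
    return tokens

seq = tokenize([(0, 60, 240, 80), (240, 64, 240, 96)])
print(seq)
```

Because scores and performances share one token space, a model pre-trained to continue such sequences can learn structure and expression jointly from unlabeled MIDI.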