🤖 AI Summary
This work addresses privacy leakage risks in releasing sensitive trajectory data for offline reinforcement learning (RL). We propose the first differentially private (DP) synthetic dataset generation framework for offline RL data publishing. Our method uses a diffusion model to synthesize individual transitions and a diffusion Transformer to synthesize trajectories, and introduces a curiosity-driven two-stage pretraining–fine-tuning paradigm: pretraining on public data to capture environment dynamics, followed by DP-SGD fine-tuning on sensitive trajectories to provide ε-DP guarantees. Extensive experiments on five real-world sensitive offline RL benchmarks demonstrate that our approach significantly outperforms existing baselines under strict ε-DP constraints, achieving superior trade-offs among data fidelity, trajectory diversity, and downstream policy learning performance. The framework provides a verifiable, practical, and privacy-preserving paradigm for secure offline RL data sharing.
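The private phase of the two-stage paradigm relies on DP-SGD: each example's gradient is clipped to a fixed norm, the clipped gradients are averaged, and calibrated Gaussian noise is added before the update. The sketch below is a minimal NumPy illustration of one such update step under assumed hyperparameters (`clip_norm`, `noise_multiplier`), not PrivORL's actual training loop.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD update (Abadi et al. style), sketched in NumPy.

    Each per-example gradient is clipped to L2 norm `clip_norm`, the
    clipped gradients are summed, Gaussian noise with std
    `noise_multiplier * clip_norm` is added, and the result is averaged
    over the batch before the usual gradient step.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clip threshold.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    batch = len(clipped)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / batch
    return params - lr * noisy_mean
```

With `noise_multiplier = 0` the step reduces to plain SGD on clipped gradients, which makes the clipping behavior easy to check in isolation; the privacy accounting that maps `noise_multiplier` and the number of steps to a concrete (ε, δ) budget is handled separately by a moments/RDP accountant.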
📝 Abstract
Offline reinforcement learning (RL) has recently become a popular RL paradigm. In offline RL, data providers share pre-collected datasets -- either as individual transitions or as sequences of transitions forming trajectories -- to enable the training of RL models (also called agents) without direct interaction with the environment. Compared with traditional online RL, offline RL avoids costly environment interaction and has proven effective in critical areas such as navigation tasks. Meanwhile, concerns about privacy leakage from offline RL datasets have emerged.
To safeguard private information in offline RL datasets, we propose the first differentially private (DP) offline dataset synthesis method, PrivORL, which leverages a diffusion model and a diffusion Transformer to synthesize transitions and trajectories, respectively, under DP. The synthetic dataset can then be securely released for downstream analysis and research. PrivORL adopts the popular approach of pre-training a synthesizer on public datasets and then fine-tuning it on sensitive datasets using DP Stochastic Gradient Descent (DP-SGD). Additionally, PrivORL introduces curiosity-driven pre-training, which uses feedback from a curiosity module to diversify the synthetic dataset, and can thus generate diverse synthetic transitions and trajectories that closely resemble the sensitive dataset. Extensive experiments on five sensitive offline RL datasets show that our method achieves better utility and fidelity than baselines in both DP transition and DP trajectory synthesis. The replication package is available at the GitHub repository.
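The abstract does not specify how the curiosity module is built; a common choice for such a module is an ICM-style forward dynamics model whose prediction error serves as a novelty signal: transitions the model already predicts well score low, steering generation toward under-covered regions of the data. The sketch below illustrates that idea with a hypothetical linear forward model; PrivORL's actual module may differ.

```python
import numpy as np

class CuriosityModule:
    """ICM-style curiosity sketch: a linear forward model s' ~= W @ [s; a].

    The squared prediction error is used as an intrinsic 'novelty' reward.
    Repeatedly seen transitions become predictable and earn low reward,
    which (as feedback to the synthesizer) encourages diverse samples.
    This is an illustrative assumption, not PrivORL's exact design.
    """
    def __init__(self, state_dim, action_dim, lr=0.01):
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def reward(self, s, a, s_next):
        # Novelty = half squared error of the forward prediction.
        x = np.concatenate([s, a])
        err = s_next - self.W @ x
        return 0.5 * float(err @ err)

    def update(self, s, a, s_next):
        # One gradient step on the squared prediction error.
        x = np.concatenate([s, a])
        err = s_next - self.W @ x
        self.W += self.lr * np.outer(err, x)
```

As the module is trained on a transition, its reward for that transition decays toward zero, so the feedback signal naturally concentrates on parts of the state-action space the synthesizer has not yet covered.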