🤖 AI Summary
This work addresses the challenge of debugging OpenMP programs, whose highly nondeterministic execution complicates error reproduction. Existing record-and-replay techniques suffer from limited scalability due to frequent inter-thread synchronization. To overcome this, the authors propose a lightweight recording mechanism based on distributed clocks (DC) and distributed epochs (DE), which substantially reduces synchronization frequency and enables efficient, scalable deterministic replay. The approach is integrated into the ReOMP framework and seamlessly interoperates with ReMPI, supporting deterministic replay of MPI+OpenMP hybrid applications. Experimental results on representative HPC benchmarks show that the proposed method achieves 2–5× higher recording efficiency compared to conventional per-memory-access synchronization schemes, while incurring only negligible overhead—unaffected by MPI scale—when combined with MPI replay.
📝 Abstract
After all these years and all the other shared-memory programming frameworks that have appeared, OpenMP is still the most popular one. However, its high degree of non-deterministic execution makes debugging and testing more challenging. The ability to record and deterministically replay a program's execution is key to addressing this challenge, yet scalably replaying OpenMP programs remains an unresolved problem. In this paper, we propose two novel techniques that use Distributed Clock (DC) and Distributed Epoch (DE) recording schemes to eliminate excessive thread synchronization in OpenMP record and replay. Our evaluation on representative HPC applications with ReOMP, which we used to realize DC and DE recording, shows that our approach is 2-5x more efficient than traditional approaches that synchronize on every shared-memory access. Furthermore, we demonstrate that our approach can easily be combined with MPI-level replay tools to replay non-trivial MPI+OpenMP applications. We achieve this by integrating ReOMP into ReMPI, an existing scalable MPI record-and-replay tool, with only a small, MPI-scale-independent runtime overhead.
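To make the contrast in the abstract concrete, the sketch below illustrates (in plain C/OpenMP, not the actual ReOMP API) the difference between a recorder that globally orders every shared-memory access and a distributed-clock-style recorder that lets each thread advance its own logical clock and would only emit ordering records at synchronization points. All names (`global_seq`, `thread_clock`) and the fixed thread limit are illustrative assumptions, not details from the paper.

```c
/* Hypothetical sketch, not the paper's implementation:
 * per-access ordered recording vs. per-thread (DC-style) clocks. */
#include <omp.h>
#include <stdio.h>

#define N 1000
#define MAX_THREADS 64   /* assumption: at most 64 OpenMP threads */

static long global_seq = 0;                 /* global order counter (per-access scheme) */
static long thread_clock[MAX_THREADS];      /* one logical clock per thread (DC-style) */

int main(void) {
    long sum1 = 0, sum2 = 0;

    /* Per-access scheme: every shared update is globally ordered,
     * so recording serializes the threads on a critical section. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        #pragma omp critical
        {
            sum1 += i;
            global_seq++;                   /* one global record per access */
        }
    }

    /* DC-style sketch: each thread only bumps its own clock, lock-free;
     * a recorder would log (tid, clock) once at a synchronization point
     * such as the implicit barrier / reduction below. */
    #pragma omp parallel reduction(+:sum2)
    {
        int tid = omp_get_thread_num();
        #pragma omp for
        for (int i = 0; i < N; i++) {
            sum2 += i;
            thread_clock[tid]++;            /* local progress, no inter-thread sync */
        }
    }

    printf("sum1 = %ld, sum2 = %ld, global_seq = %ld\n", sum1, sum2, global_seq);
    return 0;
}
```

The point of the sketch is only the synchronization pattern: the first loop pays one global synchronization per shared access, while the second records purely thread-local information and defers ordering to existing synchronization points, which is the kind of reduction in synchronization frequency the DC/DE schemes are described as exploiting.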