🤖 AI Summary
Existing doubly oblivious RAM (O₂RAM) schemes struggle to simultaneously achieve high performance, good data locality, and rigorous double obliviousness. To address this, we propose H₂O₂RAM, a high-performance hierarchical doubly oblivious RAM. It is the first O₂RAM to adopt a hierarchical architecture, which offers inherently better data locality and parallelization, and it introduces several new efficient oblivious components, including oblivious sorting/shuffling. Crucially, H₂O₂RAM resolves the tension, present in the latest hierarchical design FutORAMa, between the relaxed assumption of sublinear-sized client-side private memory and the doubly oblivious requirement, ensuring that memory accesses both inside and outside the TEE are indistinguishable. Experimental evaluation demonstrates that, compared to state-of-the-art schemes, H₂O₂RAM reduces execution time by up to ~10³× and memory usage by 5–44×. It also supports privacy-preserving applications such as secure multi-party computation and federated learning.
📝 Abstract
The combination of Oblivious RAM (ORAM) with Trusted Execution Environments (TEE) has found numerous real-world applications due to their complementary nature. TEEs alleviate the performance bottlenecks of ORAM, such as network bandwidth and round-trip latency, and ORAM provides general-purpose protection for TEE applications against attacks exploiting memory access patterns. The defining property of this combination, which sets it apart from traditional ORAM designs, is its ability to ensure that memory accesses, both inside and outside of TEEs, are made oblivious, thus termed doubly oblivious RAM (O$_2$RAM). Efforts to develop O$_2$RAM with enhanced performance are ongoing. In this work, we propose H$_2$O$_2$RAM, a high-performance doubly oblivious RAM construction. The distinguishing feature of our approach, compared to the existing tree-based doubly oblivious designs, is its first adoption of the hierarchical framework that enjoys inherently better data locality and parallelization. While the latest hierarchical solution, FutORAMa, achieves concrete efficiency in the classic client-server model by leveraging a relaxed assumption of sublinear-sized client-side private memory, adapting it to our scenario poses challenges due to the conflict between this relaxed assumption and our doubly oblivious requirement. To this end, we introduce several new efficient oblivious components to build a high-performance hierarchical O$_2$RAM (H$_2$O$_2$RAM). We implement our design and evaluate it on various scenarios. The results indicate that H$_2$O$_2$RAM reduces execution time by up to $\sim 10^3$ times and saves memory usage by $5\sim44$ times compared to state-of-the-art solutions.
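To make the obliviousness requirement concrete (this is an illustrative sketch of the general principle, not the paper's construction), the simplest oblivious read is a linear scan: every slot is touched in the same order regardless of which index is requested, so an observer of the memory access pattern learns nothing about the target. Hierarchical ORAMs like H$_2$O$_2$RAM exist precisely to beat this trivial scheme's O(n) cost per access while preserving the same access-pattern independence.

```python
def oblivious_read(arr, target_idx):
    """Read arr[target_idx] while touching every slot in a fixed order.

    The sequence of memory locations accessed is identical for every
    target_idx, so the access pattern reveals nothing about the query.
    (A real implementation would also need constant-time selection at
    the hardware level; this sketch only shows the access pattern.)
    """
    result = 0
    for i in range(len(arr)):
        # Branch-free select: incorporate arr[i] only when i matches,
        # but read arr[i] unconditionally on every iteration.
        match = int(i == target_idx)
        result = match * arr[i] + (1 - match) * result
    return result
```

Doubly oblivious designs demand this property not only for the data array held in untrusted memory but also for the TEE's own internal bookkeeping structures, which is the constraint that conflicts with FutORAMa's private-memory assumption.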