SecDTD: Dynamic Token Drop for Secure Transformer Inference

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Secure Transformer inference suffers from high computational and communication overhead, making it challenging to simultaneously achieve strong privacy guarantees and practical efficiency. This work proposes a dynamic token dropping method tailored for secure inference, which shifts the dropping operation to an early stage of the pipeline. Token importance is assessed via a lightweight Max-Centric Normalization (MCN) scoring mechanism, and an efficient oblivious median selection protocol, OMSel, is designed to enable secure, low-overhead dynamic pruning. Evaluated across eight GLUE datasets, the approach achieves a 4.47× end-to-end speedup with no loss in accuracy, while OMSel accelerates median selection by 16.9× compared to existing secure sorting protocols.
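The summary above describes the pipeline's plaintext logic: score each token with Max-Centric Normalization, then drop the tokens whose score falls below the median. The exact MCN formula is not given here, so the sketch below assumes a simple max-scaled normalization; `mcn_scores` and `drop_below_median` are hypothetical names for illustration, and the secure (secret-shared) execution that SecDTD actually performs is not modeled.

```python
from statistics import median

def mcn_scores(raw_scores):
    # Assumed form of Max-Centric Normalization: scale each raw token
    # importance score by the per-sequence maximum (the paper's exact
    # MCN definition is not given in this summary).
    m = max(raw_scores)
    return [s / m for s in raw_scores]

def drop_below_median(tokens, raw_scores):
    # Plaintext analogue of the dynamic token drop: keep only the tokens
    # whose normalized score reaches the median, roughly halving the
    # sequence before the expensive Softmax stages.
    scores = mcn_scores(raw_scores)
    threshold = median(scores)
    return [t for t, s in zip(tokens, scores) if s >= threshold]

tokens = ["the", "cat", "sat", "on", "mat"]
print(drop_below_median(tokens, [0.9, 2.5, 1.1, 0.2, 3.0]))
```

In the secure setting, the median threshold is found by the OMSel protocol over secret-shared scores rather than by a plaintext `median` call.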

📝 Abstract
The rapid adoption of Transformer-based AI has been driven by accessible models such as ChatGPT, which provide API-based services for developers and businesses. However, as these online inference services increasingly handle sensitive inputs, privacy has emerged as a significant concern. Secure inference frameworks have been proposed to address this, but their high computational and communication overhead often limits practical deployment. In plaintext settings, token drop is an effective technique for reducing inference cost; however, our analysis reveals that directly applying such methods to ciphertext scenarios is suboptimal due to the distinct cost distributions of secure computation. We propose SecDTD, a dynamic token drop scheme tailored for secure Transformer inference. SecDTD advances token drop by shifting the dropping to earlier inference stages, effectively reducing the cost of key components such as Softmax. To support this, we introduce two core techniques. Max-Centric Normalization (MCN): a novel, Softmax-independent scoring method that enables early token drop with minimal overhead and improved normalization, supporting more aggressive dropping without accuracy loss. OMSel: a faster oblivious median selection protocol that securely identifies the median of the importance scores to support token drop. Compared to existing sorting-based methods, OMSel achieves a 16.9× speedup while maintaining security, obliviousness, and randomness. We evaluate SecDTD through 48 experiments across eight GLUE datasets under various network settings using the BOLT and BumbleBee frameworks. SecDTD achieves a 4.47× end-to-end inference acceleration without degradation in accuracy.
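The abstract contrasts OMSel with sorting-based secure median selection. The plaintext intuition behind that gap can be sketched as follows: finding only the k-th smallest score (expected O(n), e.g. by quickselect) is strictly cheaper than fully sorting all scores (O(n log n)). The code below is that plaintext intuition only; OMSel's actual oblivious, secret-shared protocol is not modeled, and `median_score` is a hypothetical helper name.

```python
import random

def quickselect(vals, k):
    # Plaintext k-th smallest selection in expected O(n) time: partition
    # around a random pivot and recurse into only the side containing
    # rank k, instead of sorting everything.
    vals = list(vals)
    while True:
        pivot = random.choice(vals)
        lo = [v for v in vals if v < pivot]
        eq = [v for v in vals if v == pivot]
        if k < len(lo):
            vals = lo
        elif k < len(lo) + len(eq):
            return pivot
        else:
            k -= len(lo) + len(eq)
            vals = [v for v in vals if v > pivot]

def median_score(scores):
    # Lower-median of the importance scores, used as the drop threshold.
    return quickselect(scores, (len(scores) - 1) // 2)

print(median_score([3, 1, 4, 1, 5, 9, 2]))
```

A secure-computation analogue must additionally hide which elements are compared and which branch is taken, which is the obliviousness property OMSel provides.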
Problem

Research questions and friction points this paper is trying to address.

secure inference
privacy-preserving
Transformer
token drop
computational overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Secure Inference
Dynamic Token Drop
Max-Centric Normalization
Oblivious Median Selection
Transformer Optimization
Yifei Cai
Iowa State University
Zhuoran Li
University of Arizona
Yizhou Feng
Old Dominion University
Qiao Zhang
Shandong University
Privacy in Machine Learning
Hongyi Wu
IEEE Fellow, Professor and Department Head, ECE, The University of Arizona
Intelligent and Secure Computing and Communication Systems
Danella Zhao
University of Arizona
Chunsheng Xin
Iowa State University