AI Summary
This study addresses the lack of formal modeling of the Automatic Deleveraging (ADL) mechanism in perpetual contract exchanges by formulating ADL as an online learning problem over a PnL write-down domain. In this framework, the venue dynamically selects, at each round, a solvency budget and a set of profitable accounts whose unrealized gains are partially written down, thereby restoring system-wide solvency. We propose a theoretically grounded approach with provable robustness, integrating online learning algorithms, sequential decision-making models, and counterfactual evaluation, calibrated and validated against real-world market stress events. Applied to the October 2025 Hyperliquid stress event, our algorithm reduces excess liquidations from $51.7 million to $3 million, merely 2.6% of the theoretical regret upper bound.
Abstract
Autodeleveraging (ADL) is a last-resort loss-socialization mechanism used by perpetual futures venues when liquidation and insurance buffers are insufficient to restore solvency. Despite the scale of perpetual futures markets, ADL has received limited formal treatment as a sequential control problem. This paper provides a concise formalization of ADL as online learning on a PnL-haircut domain: at each round, the venue selects a solvency budget and a set of profitable trader accounts, and those accounts are liquidated to cover shortfalls up to the solvency budget, with the aim of recovering exchange-wide solvency. In this model, ADL haircuts apply to positive PnL (unrealized gains), not to posted collateral principal. Using our online learning model, we provide robustness results and theoretical upper bounds on how poorly a mechanism can perform at recovering solvency. We apply our model to the October 10, 2025 Hyperliquid stress episode. The regret incurred by Hyperliquid's production ADL queue is about 50% of an upper bound on regret calibrated to this event, while our optimized algorithm achieves about 2.6% of the same bound. In dollar terms, the production ADL mechanism over-liquidates trader profits by up to $51.7M. We also counterfactually evaluate algorithms inspired by our online learning framework and find that the best of them reduces overshoot to $3M. Our results yield simple, implementable mechanisms for improving ADL in live perpetuals exchanges.
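The per-round mechanism described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's algorithm: the greedy largest-gain-first ordering, the function name, and all variable names are assumptions introduced here. It shows only the core invariants the abstract states: haircuts touch positive PnL, never posted collateral, and total write-downs are capped by the chosen solvency budget.

```python
def adl_round(unrealized_pnl, shortfall, budget):
    """One hypothetical ADL round: write down unrealized gains of
    profitable accounts to cover a solvency shortfall, up to `budget`.

    unrealized_pnl: dict account -> unrealized PnL (may be negative)
    shortfall:      solvency deficit the venue must cover this round
    budget:         cap on total PnL written down this round
    Returns a dict account -> haircut amount (positive PnL only).
    """
    target = min(shortfall, budget)  # never write down more than the budget
    haircuts = {}
    # Illustrative ordering: haircut the most profitable accounts first.
    # Only gains are touched; posted collateral principal is never cut.
    for acct, pnl in sorted(unrealized_pnl.items(), key=lambda kv: -kv[1]):
        if target <= 0 or pnl <= 0:
            break
        cut = min(pnl, target)
        haircuts[acct] = cut
        target -= cut
    return haircuts

# Example: a $120 shortfall against a $100 budget caps total haircuts at $100.
cuts = adl_round({"a": 80.0, "b": 50.0, "c": -30.0},
                 shortfall=120.0, budget=100.0)
# -> {"a": 80.0, "b": 20.0}; account "c" (negative PnL) is untouched.
```

The paper's contribution is precisely that the choice of budget and account set at each round is an online learning problem; a fixed greedy rule like the one above is just one candidate policy to be evaluated against the regret bound.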