Tempora: Characterising the Time-Contingent Utility of Online Test-Time Adaptation

📅 2026-02-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical gap in existing test-time adaptation (TTA) evaluation protocols: they neglect the latency constraints inherent in real-world deployment and thus fail to capture the trade-off between accuracy and response time. To bridge this gap, the authors propose Tempora, a framework that introduces the concept of time-contingent utility and establishes a time-sensitive evaluation protocol. They systematically benchmark seven representative TTA methods across 240 temporal scenarios on ImageNet-C, defining discrete, continuous, and amortised utility metrics to cover diverse deployment contexts, including asynchronous streaming, interactive systems, and budget-constrained settings. The analysis reveals substantial discrepancies between conventional rankings and performance under temporal pressure: for instance, ETA, the top-performing method in the conventional setting, falls short in 41.2% of time-constrained evaluations, and no single method dominates across all corruption types and latency conditions.

📝 Abstract
Test-time adaptation (TTA) offers a compelling remedy for machine learning (ML) models that degrade under domain shifts, improving generalisation on-the-fly with only unlabelled samples. This flexibility suits real deployments, yet conventional evaluations unrealistically assume unbounded processing time, overlooking the accuracy-latency trade-off. As ML increasingly underpins latency-sensitive and user-facing use-cases, temporal pressure constrains the viability of adaptable inference; predictions arriving too late to act on are futile. We introduce Tempora, a framework for evaluating TTA under this pressure. It consists of temporal scenarios that model deployment constraints, evaluation protocols that operationalise measurement, and time-contingent utility metrics that quantify the accuracy-latency trade-off. We instantiate the framework with three such metrics: (1) discrete utility for asynchronous streams with hard deadlines, (2) continuous utility for interactive settings where value decays with latency, and (3) amortised utility for budget-constrained deployments. Applying Tempora to seven TTA methods on ImageNet-C across 240 temporal evaluations reveals rank instability: conventional rankings do not predict rankings under temporal pressure; ETA, a state-of-the-art method in the conventional setting, falls short in 41.2% of evaluations. The highest-utility method varies with corruption type and temporal pressure, with no clear winner. By enabling systematic evaluation across diverse temporal constraints for the first time, Tempora reveals when and why rankings invert, offering practitioners a lens for method selection and researchers a target for deployable adaptation.
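The abstract names three utility families but does not give their formulas here. As a rough illustration only, the sketch below shows one plausible way such time-contingent utilities could be formalised; the function names, the exponential decay form, and all parameters (`deadline`, `decay`, `budget`) are assumptions for exposition, not the paper's actual definitions.

```python
import math

def discrete_utility(correct: bool, latency: float, deadline: float) -> float:
    """Asynchronous streams with hard deadlines (assumed 0/1 form):
    a correct prediction counts only if it arrives before the deadline."""
    return float(correct) if latency <= deadline else 0.0

def continuous_utility(correct: bool, latency: float, decay: float = 1.0) -> float:
    """Interactive settings where value decays with latency
    (assumed exponential decay with an illustrative rate parameter)."""
    return float(correct) * math.exp(-decay * latency)

def amortised_utility(num_correct: int, total_latency: float, budget: float) -> float:
    """Budget-constrained deployments (assumed form): correct predictions
    per unit of time budget, with zero utility once the budget is blown."""
    if total_latency > budget:
        return 0.0
    return num_correct / budget
```

Under these assumed forms, the ranking inversions the paper reports become easy to picture: a method with higher raw accuracy but higher latency can score lower than a faster, less accurate one once the deadline, decay rate, or budget tightens.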
Problem

Research questions and friction points this paper is trying to address.

test-time adaptation
accuracy-latency trade-off
temporal constraints
domain shift
online adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-Time Adaptation
Time-Contingent Utility
Accuracy-Latency Trade-off
Temporal Evaluation Framework
Domain Shift