Unsupervised Layer-Wise Dynamic Test Time Adaptation for LLMs

📅 2026-02-10
📈 Citations: 0
Influential Citations: 0
🤖 AI Summary
This work addresses the challenges of unsupervised, sample-level test-time adaptation in large language models, where fixed learning rates often lead to overfitting, distributional shift, and degradation in generation quality. To mitigate these issues, the authors propose a hierarchical dynamic test-time adaptation framework that introduces, for the first time, a lightweight hypernetwork to dynamically predict per-layer LoRA learning rate scaling factors at each optimization step. This hypernetwork leverages both prompt embeddings and Transformer layer representations to enable fine-grained, structure-aware adaptation control. Extensive experiments across multiple large language models and datasets demonstrate that the proposed method significantly enhances adaptation stability and performance while effectively alleviating quality degradation.
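To make the mechanism concrete, the sketch below shows one plausible shape for such a hypernetwork: it consumes a pooled prompt embedding, pooled per-layer hidden states, and the adaptation step index, and emits one positive learning-rate multiplier per transformer layer's LoRA parameters. This is a minimal illustration under assumed names, pooling choices, and dimensions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): a small hypernetwork mapping
# (prompt embedding, per-layer representation, adaptation step) to one positive
# LoRA learning-rate multiplier per transformer layer.
import torch
import torch.nn as nn


class LayerwiseLRHypernet(nn.Module):
    def __init__(self, hidden_dim: int, num_layers: int, max_steps: int):
        super().__init__()
        self.num_layers = num_layers
        # Embed the discrete adaptation step so scaling can vary across steps.
        self.step_emb = nn.Embedding(max_steps, hidden_dim)
        # Small MLP: [prompt feature; layer feature; step feature] -> scalar.
        self.mlp = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, prompt_feat: torch.Tensor, layer_feats: torch.Tensor, step: int):
        # prompt_feat: (hidden_dim,) pooled embedding of the prompt
        # layer_feats: (num_layers, hidden_dim) pooled hidden state of each layer
        step_feat = self.step_emb(torch.tensor(step)).expand(self.num_layers, -1)
        prompt_rep = prompt_feat.expand(self.num_layers, -1)
        x = torch.cat([prompt_rep, layer_feats, step_feat], dim=-1)
        # Softplus keeps every per-layer multiplier positive.
        return nn.functional.softplus(self.mlp(x)).squeeze(-1)  # (num_layers,)
```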

📝 Abstract
Test-time adaptation (TTA) for large language models (LLMs) updates model parameters at inference time using signals available at deployment. This paper focuses on a common yet under-explored regime: unsupervised, sample-specific TTA, where the model adapts independently for each prompt using only the prompt itself, without gold answers or external supervision. Although appealing, naive unsupervised TTA with a fixed, handcrafted learning rate can be unstable: updates may overfit to prompt-specific statistics, drift from the desired answer distribution, and ultimately degrade generation quality. This failure mode is not surprising: unlike standard training, which averages updates over large datasets and long optimization horizons, TTA here must adapt to a single prompt within only a few gradient steps. Therefore, we propose layer-wise dynamic test-time adaptation, a framework that explicitly modulates TTA strength as a function of the prompt representation, the LLM structure, and the adaptation step. In our setting, TTA updates only LoRA parameters, and a lightweight hypernetwork predicts per-layer, per-step learning-rate multipliers, enabling fine-grained control. Experiments across various datasets and LLMs consistently show that our method substantially strengthens TTA by learning effective scaling patterns over adaptation steps and transformer layer projections, improving stability while delivering better performance.
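The adaptation regime described in the abstract can be sketched as a short per-prompt loop. The code below is a minimal stand-in, assuming a Hugging Face-style causal LM interface (returning `.loss` and `.hidden_states`), mean-pooled hidden states as features, and a plain scaled-gradient update in place of whatever optimizer the paper actually uses; `lora_params_by_layer` and the call signature are hypothetical, and `hypernet` could be, e.g., the `LayerwiseLRHypernet` sketched earlier.

```python
# Minimal sketch (under the assumptions stated above): per-prompt, unsupervised TTA
# that updates only LoRA parameters, scaling each layer's step by the hypernetwork's
# per-layer, per-step multiplier.
import torch


def adapt_to_prompt(model, hypernet, lora_params_by_layer, prompt_ids,
                    base_lr=1e-4, num_steps=3):
    # model: assumed Hugging Face-style causal LM; prompt_ids: (1, seq_len) token ids
    # lora_params_by_layer: list over transformer layers of that layer's LoRA tensors
    for step in range(num_steps):
        out = model(input_ids=prompt_ids, labels=prompt_ids,
                    output_hidden_states=True)
        # Unsupervised signal: next-token loss on the prompt tokens themselves.
        loss = out.loss

        # Mean-pool hidden states as cheap prompt / per-layer features.
        pooled = [h.detach().mean(dim=(0, 1)) for h in out.hidden_states]
        prompt_feat, layer_feats = pooled[0], torch.stack(pooled[1:])
        scales = hypernet(prompt_feat, layer_feats, step)  # (num_layers,)

        model.zero_grad()
        loss.backward()

        with torch.no_grad():
            for layer_idx, params in enumerate(lora_params_by_layer):
                lr = base_lr * scales[layer_idx].item()
                for p in params:
                    if p.grad is not None:
                        # Per-layer, per-step scaled gradient step on LoRA weights only.
                        p.add_(p.grad, alpha=-lr)
```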
Problem

Research questions and friction points this paper is trying to address.

test-time adaptation
unsupervised adaptation
large language models
prompt-specific adaptation
adaptation stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

test-time adaptation
unsupervised adaptation
layer-wise learning rate
LoRA
hypernetwork
🔎 Similar Papers
No similar papers found.
Longhuan Xu
Southeast University-Monash University Joint Graduate School
Cunjian Chen
Monash University
Generative AI, Computer Vision, Deep Learning
Feng Yin
Southeast University