Task-Adaptive Parameter-Efficient Fine-Tuning for Weather Foundation Models

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing parameter-efficient fine-tuning (PEFT) methods struggle to accommodate the variable heterogeneity, multi-resolution nature, and spatiotemporal coverage discrepancies inherent in weather downstream tasks, leading to suboptimal performance on Weather Foundation Models (WFMs). To address this, the authors propose WeatherPEFT, a task-adaptive PEFT framework. Its core innovations are: (1) a task-aware dynamic prompting mechanism that injects the encoder's embedding weights into the input tokens via input-dependent prompts; and (2) a Fisher-information-guided stochastic parameter selection strategy that identifies and updates only the most task-critical parameters while preserving training stability. WeatherPEFT maintains pre-trained knowledge while substantially enhancing downstream adaptation. Experiments on three representative weather forecasting tasks demonstrate performance on par with full fine-tuning with over 90% fewer trainable parameters, achieving both high accuracy and low computational overhead.

📝 Abstract
While recent advances in machine learning have equipped Weather Foundation Models (WFMs) with substantial generalization capabilities across diverse downstream tasks, the escalating computational requirements of their expanding scale increasingly hinder practical deployment. Current Parameter-Efficient Fine-Tuning (PEFT) methods, designed for vision or language tasks, fail to address the unique challenges of weather downstream tasks, such as variable heterogeneity, resolution diversity, and spatiotemporal coverage variations, leading to suboptimal performance when applied to WFMs. To bridge this gap, we introduce WeatherPEFT, a novel PEFT framework for WFMs incorporating two synergistic innovations. First, during the forward pass, Task-Adaptive Dynamic Prompting (TADP) dynamically injects the encoder's embedding weights into the input tokens of the pre-trained backbone via internal and external pattern extraction, enabling context-aware feature recalibration for specific downstream tasks. Second, during backpropagation, Stochastic Fisher-Guided Adaptive Selection (SFAS) not only leverages Fisher information to identify and update the most task-critical parameters, thereby preserving invariant pre-trained knowledge, but also introduces randomness to stabilize the selection. We demonstrate the effectiveness and efficiency of WeatherPEFT on three downstream tasks, where existing PEFT methods show significant gaps versus Full-Tuning; WeatherPEFT achieves performance parity with Full-Tuning using far fewer trainable parameters. The code of this work will be released.
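The TADP mechanism described in the abstract can be pictured as follows. This is a minimal sketch, not the paper's implementation: it assumes the "external pattern" is a mean-pooled summary of the input tokens, the "internal pattern" comes from rows of the frozen encoder's embedding weight matrix, and the fused prompts are prepended to the token sequence. The function name `inject_prompts` and all shapes are illustrative.

```python
import numpy as np

def inject_prompts(tokens, embed_w, n_prompts=4):
    """Illustrative sketch of task-adaptive dynamic prompting.

    tokens:  (n, d) input token sequence for one sample
    embed_w: (v, d) frozen encoder embedding weights (internal patterns)

    An input-dependent external pattern (mean-pooled tokens) scores the
    embedding rows; the top-scoring rows are fused with the external
    pattern to form prompt tokens, which are prepended to the sequence.
    """
    pooled = tokens.mean(axis=0)            # (d,) external pattern from the input
    scores = embed_w @ pooled               # relevance of each embedding row
    top = np.argsort(scores)[-n_prompts:]   # most input-relevant internal patterns
    prompts = embed_w[top] + pooled         # fuse internal and external patterns
    return np.concatenate([prompts, tokens], axis=0)  # (n_prompts + n, d)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 32))
embed_w = rng.standard_normal((100, 32))
out = inject_prompts(tokens, embed_w, n_prompts=4)
```

Because the prompts depend on the pooled input, different downstream samples receive different prompt tokens, which is the "context-aware recalibration" the abstract refers to.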
Problem

Research questions and friction points this paper is trying to address.

Adapts parameter-efficient tuning for weather foundation models
Addresses variable heterogeneity and resolution diversity challenges
Achieves full-tuning performance with fewer trainable parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic prompting injects encoder embedding weights into input tokens
Fisher information guides critical parameter selection and update
Stochastic selection stabilizes parameter updates during backpropagation