Keep the Lights On, Keep the Lengths in Check: Plug-In Adversarial Detection for Time-Series LLMs in Energy Forecasting

📅 2025-12-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Time-series large language models (TS-LLMs) deployed in energy forecasting are vulnerable to globally optimized adversarial examples (AEs), yet their variable-length input capability hinders effective AE detection. Method: We propose a plug-and-play black-box detection framework that leverages the TS-LLM’s inherent variable-length input support—without architectural modification or retraining—to generate multiscale random subsequences and quantify prediction consistency, thereby identifying adversarial inputs. This establishes the first sampling-induced divergence-based AE detection paradigm. Contribution/Results: Evaluated on three benchmark energy datasets (ETTh2, NI, Consumption), our method achieves a mean detection accuracy of 98.2%, significantly outperforming existing localization-based approaches. It further demonstrates high practicality and strong robustness in real-world power Internet-of-Energy (IoE) scenarios.

📝 Abstract
Accurate time-series forecasting is increasingly critical for planning and operations in low-carbon power systems. Emerging time-series large language models (TS-LLMs) now deliver this capability at scale, requiring no task-specific retraining, and are quickly becoming essential components within the Internet-of-Energy (IoE) ecosystem. However, their real-world deployment is complicated by a critical vulnerability: adversarial examples (AEs). Detecting these AEs is challenging because (i) adversarial perturbations are optimized across the entire input sequence and exploit global temporal dependencies, which renders local detection methods ineffective, and (ii) unlike traditional forecasting models with fixed input dimensions, TS-LLMs accept sequences of variable length, increasing variability that complicates detection. To address these challenges, we propose a plug-in detection framework that capitalizes on the TS-LLM's own variable-length input capability. Our method uses sampling-induced divergence as a detection signal. Given an input sequence, we generate multiple shortened variants and detect AEs by measuring the consistency of their forecasts: Benign sequences tend to produce stable predictions under sampling, whereas adversarial sequences show low forecast similarity, because perturbations optimized for a full-length sequence do not transfer reliably to shorter, differently-structured subsamples. We evaluate our approach on three representative TS-LLMs (TimeGPT, TimesFM, and TimeLLM) across three energy datasets: ETTh2 (Electricity Transformer Temperature), NI (Hourly Energy Consumption), and Consumption (Hourly Electricity Consumption and Production). Empirical results confirm strong and robust detection performance across both black-box and white-box attack scenarios, highlighting its practicality as a reliable safeguard for TS-LLM forecasting in real-world energy systems.
Problem

Research questions and friction points this paper is trying to address.

How to detect adversarial examples targeting time-series LLMs in energy forecasting
Globally optimized perturbations and variable-length inputs defeat local, fixed-dimension detectors
Real-world energy systems need forecasting that remains robust under attack
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-in detection framework uses variable-length input sampling
Measures forecast consistency across shortened sequence variants
Detects adversarial examples via sampling-induced divergence signal
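The detection idea above can be sketched in a few lines: forecast from several randomly shortened variants of the input and flag the sequence when those forecasts disagree. This is a minimal illustration, not the paper's implementation; the `forecast` callable, the correlation-based consistency score, and all parameter values (`n_samples`, `min_frac`, `threshold`) are assumptions made for the sketch.

```python
import numpy as np

def detect_adversarial(forecast, series, horizon,
                       n_samples=8, min_frac=0.5, threshold=0.9, seed=0):
    """Flag `series` as adversarial when forecasts from randomly
    shortened variants diverge (sampling-induced divergence).

    `forecast(series, horizon)` is any black-box predictor that accepts
    variable-length inputs, as TS-LLMs do. Returns (flagged, score).
    """
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        # Draw a random sub-length and keep the most recent points,
        # exploiting the model's variable-length input support.
        length = int(rng.integers(int(min_frac * len(series)), len(series) + 1))
        preds.append(forecast(series[-length:], horizon))
    preds = np.asarray(preds, dtype=float)
    # Consistency score: mean pairwise correlation between forecasts.
    # Benign inputs yield stable (highly correlated) predictions;
    # perturbations optimized for the full sequence do not transfer
    # to shorter subsamples, driving the score down.
    corr = np.corrcoef(preds)
    score = float(corr[np.triu_indices(n_samples, k=1)].mean())
    return score < threshold, score
```

With a toy last-window forecaster, a smooth benign signal produces identical forecasts across variants and is not flagged:

```python
series = np.sin(np.linspace(0, 20, 200))
naive = lambda s, h: s[-h:]          # placeholder for a TS-LLM call
flagged, score = detect_adversarial(naive, series, horizon=24)
```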