Rethinking Multimodal Fusion for Time Series: Auxiliary Modalities Need Constrained Fusion

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge in multimodal time series forecasting where unconstrained fusion of auxiliary modalities—such as text or vision—often introduces irrelevant information that degrades predictive performance. To mitigate this issue, the authors propose the Controlled Fusion Adapter (CFA), a plug-and-play, low-rank adaptation module that selectively integrates only those cross-modal features aligned with temporal dynamics, without modifying the backbone model. CFA is compatible with diverse time series and text architectures and employs a controlled fusion mechanism to efficiently filter relevant information. Extensive evaluation across multiple datasets, encompassing over 20,000 experiments, demonstrates that CFA consistently outperforms existing fusion strategies, underscoring its effectiveness and generalizability.

📝 Abstract
Recent advances in multimodal learning have motivated the integration of auxiliary modalities such as text or vision into time series (TS) forecasting. However, most existing methods provide limited gains, often improving performance only on specific datasets or relying on architecture-specific designs that limit generalization. In this paper, we show that multimodal models with naive fusion strategies (e.g., simple addition or concatenation) often underperform unimodal TS models, which we attribute to the uncontrolled integration of auxiliary modalities that may introduce irrelevant information. Motivated by this observation, we explore various constrained fusion methods designed to control such integration and find that they consistently outperform naive fusion methods. Furthermore, we propose Controlled Fusion Adapter (CFA), a simple plug-in method that enables controlled cross-modal interactions without modifying the TS backbone, integrating only relevant textual information aligned with TS dynamics. CFA employs low-rank adapters to filter irrelevant textual information before fusing it into temporal representations. We conduct over 20K experiments across various datasets and TS/text models, demonstrating the effectiveness of the constrained fusion methods including CFA. Code is publicly available at: https://github.com/seunghan96/cfa/.
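The abstract describes the core mechanism: text features pass through a low-rank adapter before being fused into the time series representation, so the amount of auxiliary information entering the backbone is constrained rather than added freely. The following is a minimal NumPy sketch of that general idea only; the function, weight, and gate names are illustrative assumptions, not the paper's actual API or implementation.

```python
import numpy as np

def low_rank_fusion_adapter(h_ts, h_text, rank=4, gate=0.5, seed=0):
    """Sketch of constrained cross-modal fusion via a low-rank adapter.

    h_ts:   time-series hidden states, shape (..., d_ts)
    h_text: text features,             shape (..., d_text)
    rank:   adapter bottleneck width (rank << d_ts, d_text)
    gate:   scalar in [0, 1] controlling how much text enters;
            gate = 0 recovers the unimodal TS model.

    All names here are hypothetical; weights are random for the sketch
    (in practice they would be the only trained parameters, leaving
    the TS backbone frozen/unmodified).
    """
    d_ts, d_text = h_ts.shape[-1], h_text.shape[-1]
    rng = np.random.default_rng(seed)
    # Low-rank bottleneck: down-project text features to `rank` dims,
    # then up-project into the time-series representation space.
    w_down = rng.standard_normal((d_text, rank)) / np.sqrt(d_text)
    w_up = rng.standard_normal((rank, d_ts)) / np.sqrt(rank)
    adapted = h_text @ w_down @ w_up
    # Constrained fusion: gated residual add instead of naive
    # addition/concatenation of the raw text features.
    return h_ts + gate * adapted
```

The gate makes the "controlled" aspect explicit: at `gate=0` the output is exactly the unimodal representation, so the adapter can only help to the extent the projected text features align with the temporal dynamics.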
Problem

Research questions and friction points this paper is trying to address.

multimodal fusion
time series forecasting
auxiliary modalities
constrained fusion
irrelevant information
Innovation

Methods, ideas, or system contributions that make the work stand out.

constrained fusion
multimodal time series forecasting
Controlled Fusion Adapter
low-rank adapter
auxiliary modality integration
Seunghan Lee
Yonsei University
Deep Learning, Machine Learning
Jun Seo
LG AI Research, Seoul, South Korea
Jaehoon Lee
LG AI Research, Seoul, South Korea
Sungdong Yoo
LG AI Research, Seoul, South Korea
Minjae Kim
LG AI Research, Seoul, South Korea
Tae Yoon Lim
LG AI Research, Seoul, South Korea
Dongwan Kang
LG AI Research, Seoul, South Korea
Hwanil Choi
LG AI Research, Seoul, South Korea
SoonYoung Lee
LG AI Research, Seoul, South Korea
Wonbin Ahn
LG AI Research, Seoul, South Korea