Robust Disentangled Counterfactual Learning for Physical Audiovisual Commonsense Reasoning

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of physical audio-visual commonsense reasoning under multimodal missingness, where existing methods suffer from inadequate causal reasoning due to entangled static/dynamic visual features, absence of counterfactual modeling, and poor cross-modal robustness. To this end, we propose a joint framework integrating a disentangled sequence encoder with counterfactual learning: (1) a variational autoencoder coupled with disentangled representation learning separates video into static structural and dynamic motion latent factors; (2) counterfactual intervention modeling explicitly captures physical causal relationships among objects; (3) a shared-and-private feature decomposition mechanism enables robust reconstruction under modality dropout. The framework is plug-and-play and seamlessly integrates with vision-language models. Extensive experiments demonstrate significant improvements in accuracy and robustness to modality missingness across multiple benchmarks, achieving state-of-the-art performance.

📝 Abstract
In this paper, we propose a new Robust Disentangled Counterfactual Learning (RDCL) approach for physical audiovisual commonsense reasoning. The task aims to infer objects' physical commonsense from both video and audio input, with the main challenge being how to imitate human reasoning ability, even when modalities are missing. Most current methods fail to take full advantage of the distinct characteristics of multi-modal data, and the lack of causal reasoning ability in models impedes progress in inferring implicit physical knowledge. To address these issues, our proposed RDCL method decouples videos into static (time-invariant) and dynamic (time-varying) factors in the latent space via a disentangled sequential encoder, which adopts a variational autoencoder (VAE) and maximizes mutual information with a contrastive loss function. Furthermore, we introduce a counterfactual learning module to augment the model's reasoning ability by modeling physical knowledge relationships among different objects under counterfactual intervention. To alleviate the incomplete-modality issue, we introduce a robust multimodal learning method that recovers missing data by decomposing features into shared and modality-specific components. Our proposed method is a plug-and-play module that can be incorporated into any baseline, including VLMs. In experiments, we show that our method improves the reasoning accuracy and robustness of baseline methods and achieves state-of-the-art performance.
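The static/dynamic decoupling described above can be illustrated with a toy sketch. This is not the authors' implementation: the projection weights, latent sizes, and pooling choice below are all hypothetical, NumPy-only stand-ins for the learned VAE encoder. The key idea it shows is that the static code is computed from a time-pooled summary (so it cannot carry motion), the dynamic codes are computed per frame, and both are sampled with the VAE reparameterization trick; an InfoNCE-style contrastive score then serves as the mutual-information surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_disentangled(frames, d_static=4, d_dynamic=4):
    """Toy disentangled sequential encoder (hypothetical, NumPy-only).

    frames: (T, D) per-frame video features.
    Returns one time-invariant static code z_s and per-frame dynamic
    codes z_d, each sampled via the VAE reparameterization trick.
    """
    T, D = frames.shape
    # Hypothetical projection weights; a real encoder would learn these.
    W_s = rng.standard_normal((D, 2 * d_static)) / np.sqrt(D)
    W_d = rng.standard_normal((D, 2 * d_dynamic)) / np.sqrt(D)

    # Static factor: pool over time so z_s cannot encode motion.
    pooled = frames.mean(axis=0)                    # (D,)
    mu_s, logvar_s = np.split(pooled @ W_s, 2)      # (d_static,) each
    z_s = mu_s + np.exp(0.5 * logvar_s) * rng.standard_normal(d_static)

    # Dynamic factors: one latent per frame.
    mu_d, logvar_d = np.split(frames @ W_d, 2, axis=1)
    z_d = mu_d + np.exp(0.5 * logvar_d) * rng.standard_normal((T, d_dynamic))
    return z_s, z_d

def infonce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss: a lower bound on mutual
    information that pulls the anchor toward its positive code and
    pushes it away from negatives."""
    sims = np.array([anchor @ positive] + [anchor @ n for n in negatives]) / tau
    sims -= sims.max()                              # numerical stability
    return -np.log(np.exp(sims[0]) / np.exp(sims).sum())
```

In a trained model the contrastive loss would, for instance, treat static codes of two clips of the same object as a positive pair and codes of other objects as negatives.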
Problem

Research questions and friction points this paper is trying to address.

Robust disentangled counterfactual learning for audiovisual reasoning
Inferring physical commonsense from incomplete multimodal data
Enhancing causal reasoning in models for implicit knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled sequential encoder
Counterfactual learning module
Robust multimodal learning method
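The robust multimodal learning idea, recovering a missing modality from shared features, can be sketched as follows. Again this is a hypothetical illustration, not the paper's code: the projections `P_shared`, `P_private`, and the decoder `D_audio` are random stand-ins for learned parameters. It shows the decomposition of each modality into a shared component (mapped into a common space) and a private component, and how a dropped audio input is approximated from the video's shared component alone.

```python
import numpy as np

rng = np.random.default_rng(1)
D, D_SHARED = 16, 8

# Hypothetical learned projections (random here): each modality is split
# into a shared part in a common space and a modality-private part.
P_shared = {m: rng.standard_normal((D, D_SHARED)) / np.sqrt(D)
            for m in ("video", "audio")}
P_private = {m: rng.standard_normal((D, D - D_SHARED)) / np.sqrt(D)
             for m in ("video", "audio")}
# Hypothetical decoder mapping the common space back to audio features.
D_audio = rng.standard_normal((D_SHARED, D)) / np.sqrt(D_SHARED)

def decompose(x, modality):
    """Split one modality's feature into shared and private components."""
    return x @ P_shared[modality], x @ P_private[modality]

def recover_audio(video_feat):
    """When audio is dropped, estimate it from the video's shared
    component; the audio-private component is unrecoverable."""
    shared_v, _ = decompose(video_feat, "video")
    return shared_v @ D_audio
```

During training, the shared components of paired video and audio would be aligned (e.g. with a similarity loss) so that this cross-modal reconstruction becomes meaningful under modality dropout.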
Mengshi Qi
State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, China
Changsheng Lv
Beijing University of Posts and Telecommunications
Scene Graph Generation, Autonomous Driving
Huadong Ma
BUPT
Internet of Things, Multimedia