ReasonX: MLLM-Guided Intrinsic Image Decomposition

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Intrinsic image decomposition suffers from poor generalization to real-world scenes, primarily due to reliance on paired supervision from synthetic data. To address this, we propose an unsupervised cross-domain adaptation framework that leverages multimodal large language models (MLLMs) as perceptual discriminators for the first time. Specifically, the MLLM performs relative attribute comparisons—e.g., albedo, depth, surface normals, and illumination—on unlabeled in-the-wild images, generating GRPO-based reinforcement learning rewards to guide decomposition model optimization. Our approach requires no ground-truth pairs and is model-agnostic, enabling strong generalization across domains. Experiments demonstrate substantial improvements: albedo WHDR decreases by 9–25% on IIW; ETH3D depth error drops by up to 46%; and overall decomposition accuracy and robustness across all components are significantly enhanced in real-world settings.

📝 Abstract
Intrinsic image decomposition aims to separate images into physical components such as albedo, depth, normals, and illumination. While recent diffusion- and transformer-based models benefit from paired supervision from synthetic datasets, their generalization to diverse, real-world scenarios remains challenging. We propose ReasonX, a novel framework that leverages a multimodal large language model (MLLM) as a perceptual judge providing relative intrinsic comparisons, and uses these comparisons as GRPO rewards for fine-tuning intrinsic decomposition models on unlabeled, in-the-wild images. Unlike RL methods for generative models, our framework aligns conditional intrinsic predictors by rewarding agreement between the judge's relational assessments and analytically derived relations from the model's outputs. ReasonX is model-agnostic and can be applied to different intrinsic predictors. Across multiple base architectures and modalities, ReasonX yields significant improvements, including 9–25% WHDR reduction on IIW albedo and up to 46% depth accuracy gains on ETH3D, highlighting the promise of MLLM-guided comparative supervision to bridge low- and high-level vision reasoning.
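The abstract describes turning the judge's comparisons into GRPO rewards. A minimal sketch of the GRPO side of that loop, which normalizes a group of per-rollout rewards into advantages using the group mean as a baseline, might look as follows (function and variable names are illustrative assumptions, not the authors' implementation):

```python
# Sketch of GRPO-style advantage computation from scalar rewards.
# In the paper's setting, each reward could be how well one candidate
# decomposition agrees with the MLLM judge's relational assessments;
# the names below are assumed for illustration only.
from statistics import mean, pstdev

def grpo_advantages(rewards):
    """Normalize a group of rollout rewards into GRPO advantages.

    GRPO replaces a learned value baseline with the group mean: each
    sample's advantage is its reward minus the group average, scaled
    by the group standard deviation.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against a uniform-reward group
    return [(r - mu) / sigma for r in rewards]

# Four candidate decompositions of one image, scored by judge agreement.
rewards = [0.8, 0.5, 0.5, 0.2]
print(grpo_advantages(rewards))
```

Because the baseline is the group mean, advantages always sum to zero within a group, so above-average candidates are reinforced and below-average ones suppressed without training a separate value network.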
Problem

Research questions and friction points this paper is trying to address.

Improves generalization of intrinsic image decomposition to real-world scenarios
Uses MLLM-guided comparisons as rewards for fine-tuning on unlabeled images
Enhances accuracy across multiple intrinsic components like albedo and depth
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses MLLM as perceptual judge for relative comparisons
Applies GRPO rewards from comparisons to fine-tune models
Model-agnostic framework improving accuracy across architectures
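The reward itself comes from checking the model's outputs against the judge's relative comparisons. A hedged sketch of that check, assuming each judgment names two pixels and a relation for one attribute (all names and the tolerance value are illustrative assumptions, not details from the paper):

```python
# Illustrative reward: fraction of MLLM relational judgments that the
# predicted intrinsic map reproduces. Names and tolerance are assumed.

def comparison_reward(pred_map, judgments):
    """pred_map: dict mapping pixel id -> predicted value for one attribute
    (e.g., albedo). judgments: list of (pixel_a, pixel_b, relation) tuples
    from the MLLM judge, where relation is '>', '<', or '=' for a vs. b.
    Returns the fraction of judgments the prediction agrees with."""
    eps = 0.05  # tolerance for treating two predictions as equal (assumed)
    agree = 0
    for a, b, rel in judgments:
        diff = pred_map[a] - pred_map[b]
        if rel == '>' and diff > eps:
            agree += 1
        elif rel == '<' and diff < -eps:
            agree += 1
        elif rel == '=' and abs(diff) <= eps:
            agree += 1
    return agree / len(judgments) if judgments else 0.0

pred = {'p1': 0.9, 'p2': 0.3, 'p3': 0.32}
judgs = [('p1', 'p2', '>'), ('p2', 'p3', '='), ('p3', 'p1', '<')]
print(comparison_reward(pred, judgs))  # prints 1.0: all three relations hold
```

Because only ordinal relations between the judge and the model's own outputs are compared, no ground-truth intrinsic maps are needed, which is what makes the framework applicable to unlabeled in-the-wild images.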