AI Summary
This work addresses the significant degradation of garment structural features under low-light conditions, which severely undermines robotic grasping robustness. Existing approaches overlook the dynamic impact of illumination variations on the reliance on non-RGB modalities within multimodal fusion frameworks. To overcome this limitation, we propose a lighting-structure interaction compensation model that encodes continuous illumination intensity as a quantitative guidance signal, driving adaptive fusion between RGB and non-RGB modalities according to ambient lighting and thereby generating illumination-consistent grasping representations. Integrating multimodal perception, quantified illumination encoding, and deep learning-based grasping policies, our method substantially outperforms baselines on a newly curated garment grasping dataset, achieving 32%-44% higher grasping accuracy across diverse low-light environments and markedly improving generalization.
Abstract
Achieving accurate garment grasping under dynamically changing illumination is crucial for the all-day operation of service robots. However, reduced illumination in low-light scenes severely degrades garment structural features, causing a significant drop in grasping robustness. Existing methods typically enhance RGB features by exploiting the illumination-invariant properties of non-RGB modalities, yet they overlook how the dependence on non-RGB features shifts with lighting conditions, which can introduce misaligned non-RGB cues and thereby weaken the model's adaptability to illumination changes when utilizing multimodal information. To address this problem, we propose GraspALL, an illumination-structure interactive compensation model. The key innovation of GraspALL is encoding continuous illumination changes into quantitative references that guide adaptive feature fusion between RGB and non-RGB modalities under varying lighting intensities, thereby generating illumination-consistent grasping representations. Experiments on our self-built garment grasping dataset demonstrate that GraspALL improves grasping accuracy by 32-44% over baselines under diverse illumination conditions.
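To make the core idea concrete, here is a minimal sketch of illumination-gated multimodal fusion: a continuous illumination reading is mapped to a scalar weight that sets how much the fused representation relies on RGB versus non-RGB features. This is an illustrative toy, not the paper's architecture; the logistic gate, its constants (`k`, `midpoint`), and the convex-combination fusion are assumptions chosen only to demonstrate the mechanism.

```python
import numpy as np

def illumination_gate(lux, k=0.05, midpoint=100.0):
    """Map a continuous illumination reading (lux) to an RGB-reliance
    weight in (0, 1) via a logistic curve. Bright scenes -> weight near 1
    (trust RGB); dark scenes -> weight near 0 (lean on non-RGB cues).
    The constants k and midpoint are illustrative, not from the paper."""
    return 1.0 / (1.0 + np.exp(-k * (lux - midpoint)))

def fuse_features(rgb_feat, nonrgb_feat, lux):
    """Convex combination of modality features, gated by illumination."""
    w = illumination_gate(lux)
    return w * rgb_feat + (1.0 - w) * nonrgb_feat

# Stand-in feature vectors (a real model would use learned embeddings).
rgb = np.ones(4)     # RGB branch features
depth = np.zeros(4)  # non-RGB branch features (e.g. depth)

bright = fuse_features(rgb, depth, lux=500.0)  # fusion leans on RGB
dark = fuse_features(rgb, depth, lux=5.0)      # fusion leans on non-RGB
```

In a learned version, the gate would be a small network conditioned on the encoded illumination signal, but the behavior it must realize is the same: smoothly shifting modality reliance as lighting changes.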