GraspALL: Adaptive Structural Compensation from Illumination Variation for Robotic Garment Grasping in Any Low-Light Conditions

📅 2026-03-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the significant degradation of garment structural features under low-light conditions, which severely undermines robotic grasping robustness. Existing approaches overlook the dynamic impact of illumination variations on the reliance of non-RGB modalities within multimodal fusion frameworks. To overcome this limitation, we propose a lighting-structure interaction compensation model that encodes continuous illumination intensity as a quantitative guidance signal to drive adaptive fusion between RGB and non-RGB modalities according to ambient lighting conditions, thereby generating illumination-consistent grasping representations. Integrating multimodal perception, quantified illumination encoding, and deep learning–based grasping policies, our method substantially outperforms baseline approaches on a newly curated garment grasping dataset, achieving 32%–44% higher grasping accuracy across diverse low-light environments and significantly enhancing model generalization.

๐Ÿ“ Abstract
Achieving accurate garment grasping under dynamically changing illumination is crucial for the all-day operation of service robots. However, reduced illumination in low-light scenes severely degrades garment structural features, leading to a significant drop in grasping robustness. Existing methods typically enhance RGB features by exploiting the illumination-invariant properties of non-RGB modalities, yet they overlook how the model's reliance on non-RGB features shifts with lighting conditions, which can introduce misaligned non-RGB cues and thereby weaken adaptability to illumination changes when using multimodal information. To address this problem, we propose GraspALL, an illumination-structure interactive compensation model. The innovation of GraspALL lies in encoding continuous illumination changes into quantitative references that guide adaptive feature fusion between RGB and non-RGB modalities according to the prevailing lighting intensity, thereby generating illumination-consistent grasping representations. Experiments on a self-built garment grasping dataset demonstrate that GraspALL improves grasping accuracy by 32–44% over baselines under diverse illumination conditions.
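The core idea described above, using a quantified illumination signal to weight the fusion of RGB and non-RGB features, can be illustrated with a minimal sketch. Note this is an assumed toy formulation, not the paper's actual architecture: the sigmoid parameters (`k`, `midpoint`) and the convex per-channel blend are hypothetical stand-ins for the learned illumination encoding and fusion module.

```python
import math

def illumination_gate(lux: float, k: float = 0.05, midpoint: float = 50.0) -> float:
    """Map ambient illumination (in lux) to an RGB-reliance weight in (0, 1).

    Bright scenes -> weight near 1 (trust RGB structural cues);
    dark scenes  -> weight near 0 (lean on illumination-invariant non-RGB cues).
    k and midpoint are illustrative constants, not values from the paper.
    """
    return 1.0 / (1.0 + math.exp(-k * (lux - midpoint)))

def fuse(rgb_feat: list[float], nonrgb_feat: list[float], lux: float) -> list[float]:
    """Convex, illumination-weighted per-channel combination of the two feature vectors."""
    w = illumination_gate(lux)
    return [w * r + (1.0 - w) * n for r, n in zip(rgb_feat, nonrgb_feat)]

# Bright scene: fused features follow the RGB branch almost entirely.
bright = fuse([1.0, 0.0], [0.0, 1.0], lux=500.0)
# Dark scene: fused features shift toward the non-RGB branch.
dark = fuse([1.0, 0.0], [0.0, 1.0], lux=5.0)
```

In a learned system the gate would be a small network conditioned on the encoded illumination value rather than a fixed sigmoid, but the monotone "more light, more RGB reliance" behaviour is the same.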
Problem

Research questions and friction points this paper is trying to address.

garment grasping
illumination variation
low-light conditions
multimodal fusion
structural features
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive feature fusion
illumination-structure interaction
multimodal compensation
low-light grasping
garment manipulation