🤖 AI Summary
This work addresses the performance degradation in tactile few-shot class-incremental learning (FSCIL) caused by a lack of standardization in contextual factors—such as device variations and contact conditions—during data collection. To mitigate this issue, the authors propose a novel modeling paradigm termed Context-as-Transform (CaT), which disentangles context into a structured low-dimensional component and a high-dimensional residual component. These components are normalized and aligned via families of invertible transformations. Furthermore, an Uncertainty-Conditioned Prototype Calibration (UCPC) mechanism is introduced to refine class prototypes under contextual uncertainty. The framework incorporates a pseudo-context consistency loss to optimize the inverse transformations, effectively reducing contextual interference. Experiments on the HapTex and LMT108 benchmarks demonstrate that the proposed CaT-FSCIL approach significantly outperforms existing methods, confirming its effectiveness and robustness.
📝 Abstract
Few-Shot Class-Incremental Learning (FSCIL) is particularly susceptible to variation in the acquisition context, since only a few labeled samples are available per class. A typical scenario is tactile sensing, where the acquisition context (e.g., diverse devices, contact states, and interaction settings) degrades performance due to a lack of standardization. In this paper, we propose Context-as-Transform FSCIL (CaT-FSCIL) to tackle this problem. We decompose the acquisition context into a structured low-dimensional component and a high-dimensional residual component. The former is driven mainly by tactile interaction factors; we model it as an approximately invertible Context-as-Transform family and handle it via inverse-transform canonicalization optimized with a pseudo-context consistency loss. The latter arises mainly from platform and device differences; we mitigate it with an Uncertainty-Conditioned Prototype Calibration (UCPC) that calibrates biased prototypes and decision boundaries based on context uncertainty. Comprehensive experiments on the standard benchmarks HapTex and LMT108 demonstrate the superiority of the proposed CaT-FSCIL.
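To make the two mechanisms in the abstract concrete, here is a minimal toy sketch (not the authors' implementation): it assumes the structured context acts as an invertible affine transform on features, illustrates the pseudo-context consistency idea (a sampled pseudo-context applied to a canonical feature should be exactly undone by its inverse), and models UCPC as a hypothetical uncertainty-weighted shrinkage of a few-shot prototype toward a base-class mean. All function names and the affine/shrinkage forms are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Context-as-Transform canonicalization (hypothetical affine family) ---
# Assume the structured low-dimensional context acts on a feature z as an
# invertible affine transform T_{a,b}(z) = a * z + b.
def apply_context(z, a, b):
    return a * z + b

def invert_context(z, a, b):
    return (z - b) / a

# Pseudo-context consistency (toy version): sample a known pseudo-context,
# apply it to a canonical feature, invert it, and penalize any mismatch.
# For an exactly invertible family, the loss is ~0.
z = rng.normal(size=8)          # canonical (context-free) feature, toy
a, b = 1.5, -0.3                # sampled pseudo-context parameters
z_ctx = apply_context(z, a, b)
z_rec = invert_context(z_ctx, a, b)
consistency_loss = float(np.mean((z_rec - z) ** 2))

# --- Uncertainty-conditioned prototype calibration (toy shrinkage) ---
# Hypothetical form: blend a biased few-shot prototype with a base-class
# mean, with the blend weight given by a context-uncertainty score u in
# [0, 1]; higher uncertainty -> stronger calibration toward the base mean.
def calibrate_prototype(proto_fewshot, base_mean, u):
    return (1.0 - u) * proto_fewshot + u * base_mean

proto = rng.normal(size=8)      # prototype from a few noisy samples
base_mean = np.zeros(8)         # stand-in for a reliable base statistic
calibrated = calibrate_prototype(proto, base_mean, u=0.4)
```

In this sketch the consistency loss is trivially near zero because the inverse is exact; in the paper's setting the transform family is only approximately invertible and its inverse is learned, which is what the pseudo-context consistency loss optimizes.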