Sketch Down the FLOPs: Towards Efficient Networks for Human Sketch

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing lightweight image models fail on fine-grained sketch-based image retrieval (FG-SBIR) due to the inherent sparsity and abstraction of sketch data. Method: the first efficient, sketch-specific network for FG-SBIR, built from two plug-and-play components: a cross-modal knowledge distillation framework that transfers efficient photo networks to the sketch domain, and an RL-driven dynamic canvas selector that adapts computation to each sketch's abstraction level. Contributions/Results: the method reduces FLOPs by 99.37% (from 40.18G to 0.254G) and parameters by 84.89%, achieving the lowest computational cost among sketch networks, below even state-of-the-art lightweight photo models, while retaining retrieval accuracy (33.03% vs. the 32.77% full-network baseline). This work establishes an efficient, domain-specialised lightweight paradigm for sketch understanding.

📝 Abstract
As sketch research has collectively matured over time, its adaptation for at-mass commercialisation emerges on the immediate horizon. Despite an already mature research endeavour for photos, there is no research on efficient inference specifically designed for sketch data. In this paper, we first demonstrate that existing state-of-the-art efficient lightweight models designed for photos do not work on sketches. We then propose two sketch-specific components which work in a plug-and-play manner on any efficient photo network to adapt it to sketch data. We specifically chose fine-grained sketch-based image retrieval (FG-SBIR) as a demonstrator, as it is the most recognised sketch problem with immediate commercial value. Technically speaking, we first propose a cross-modal knowledge distillation network to transfer existing efficient photo networks to be compatible with sketch, which brings down the number of FLOPs and model parameters by 97.96% and 84.89%, respectively. We then exploit the abstract trait of sketch to introduce an RL-based canvas selector that dynamically adjusts to the abstraction level, which cuts the number of FLOPs down by a further two-thirds. The end result is an overall reduction of 99.37% of FLOPs (from 40.18G to 0.254G) when compared with a full network, while retaining the accuracy (33.03% vs 32.77%) -- finally making an efficient network for sparse sketch data that exhibits even fewer FLOPs than the best photo counterpart.
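The cross-modal distillation the abstract describes can be sketched as a feature-mimicking loss: a lightweight sketch student is trained so that its (projected) features match those of a full photo teacher. The function name, dimensions, and the choice of a plain MSE mimicking term are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def distill_loss(student_feat, teacher_feat, proj_w, alpha=0.5):
    """Cross-modal feature distillation term (illustrative sketch).

    student_feat: (d_s,) features from the lightweight sketch branch
    teacher_feat: (d_t,) features from the full photo teacher network
    proj_w:       (d_t, d_s) learned projection lifting the student
                  into the teacher's feature space
    alpha:        weight of the mimicking term in the total loss
    """
    projected = proj_w @ student_feat                    # align feature spaces
    mimic = np.mean((projected - teacher_feat) ** 2)     # feature-mimicking MSE
    return alpha * mimic
```

In training, this term would be added to the usual FG-SBIR retrieval loss, so the student inherits the teacher's embedding geometry while running at a fraction of the cost.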
Problem

Research questions and friction points this paper is trying to address.

Adapting photo-efficient models for sketch data
Reducing FLOPs in sketch-specific networks
Maintaining accuracy while minimizing computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modal knowledge distillation for sketch adaptation
RL-based canvas selector for dynamic abstraction adjustment
99.37% overall FLOPs reduction (97.96% from distillation alone) with retained accuracy
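The RL-based canvas selector above can be sketched as a tiny policy that picks an input canvas size per sketch, trading retrieval correctness against compute. The canvas sizes, FLOP costs, abstraction-score bias, and reward shape below are all hypothetical placeholders, not the paper's actual agent.

```python
import numpy as np

# Hypothetical canvas sizes and illustrative per-forward-pass costs in GFLOPs.
CANVAS_SIZES = [64, 128, 224]
FLOP_COST = {64: 0.02, 128: 0.08, 224: 0.254}

def select_canvas(abstraction_score, policy_logits):
    """Pick a canvas size from a learned policy (illustrative sketch).

    abstraction_score: scalar in [0, 1]; higher = sparser, more abstract sketch
    policy_logits:     raw per-size scores from a small policy network
    """
    # Bias the policy toward smaller canvases for highly abstract sketches.
    biased = np.asarray(policy_logits, float) - abstraction_score * np.arange(len(CANVAS_SIZES))
    probs = np.exp(biased - biased.max())
    probs /= probs.sum()                     # softmax over canvas choices
    return CANVAS_SIZES[int(np.argmax(probs))]

def reward(retrieval_correct, canvas_size, lam=1.0):
    """Reward retrieval success while penalising compute spent."""
    return float(retrieval_correct) - lam * FLOP_COST[canvas_size]
```

Training such a policy with a standard policy-gradient method would encourage small canvases whenever they suffice for retrieval, which is one plausible mechanism behind the further two-thirds FLOPs cut reported in the abstract.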
🔎 Similar Papers
No similar papers found.