🤖 AI Summary
To address two bottlenecks in resource-constrained crowd counting — dependence on point-level annotations and costly backbone computation — this paper proposes TCFormer, a lightweight Transformer architecture with only 5.0 million parameters that, for the first time, enables weakly supervised learning from image-level global count labels. Methodologically, TCFormer introduces: (1) a density-guided learnable weighted aggregation module that explicitly models the spatial density distribution; (2) a density-level classification loss that jointly optimizes counting accuracy and density discrimination; and (3) an efficient ViT-based feature extractor integrated with a weakly supervised joint training strategy. Evaluated on four major benchmarks — ShanghaiTech Part A/B, UCF-QNRF, and NWPU — TCFormer achieves a state-of-the-art accuracy-parameter trade-off with significantly fewer parameters. Its efficiency and performance establish a new paradigm for deploying crowd counting models on edge devices.
📝 Abstract
Crowd counting typically relies on labor-intensive point-level annotations and computationally intensive backbones, restricting its scalability and deployment in resource-constrained environments. To address these challenges, this paper proposes TCFormer, a tiny, ultra-lightweight, weakly-supervised Transformer-based crowd counting framework that achieves competitive performance with only 5 million parameters. Firstly, a powerful yet efficient vision transformer is adopted as the feature extractor, whose global context-aware capability provides semantically meaningful crowd features with a minimal memory footprint. Secondly, to compensate for the lack of spatial supervision, we design a feature aggregation mechanism termed the Learnable Density-Weighted Averaging module. This module dynamically re-weights local tokens according to predicted density scores, enabling the network to adaptively modulate regional features based on their density characteristics without additional annotations. Furthermore, this paper introduces a density-level classification loss, which discretizes crowd density into distinct grades, thereby regularizing the training process and enhancing the model's discriminative power across varying levels of crowd density. Consequently, although TCFormer is trained under a weakly-supervised paradigm using only image-level global counts, the joint optimization of the count and density-level losses enables the framework to achieve high estimation accuracy. Extensive experiments on four benchmark datasets (ShanghaiTech Part A/B, UCF-QNRF, and NWPU) demonstrate that our approach strikes a superior trade-off between parameter efficiency and counting accuracy, making it a practical solution for crowd counting on edge devices.
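The two core ideas in the abstract — re-weighting local tokens by predicted density scores, and discretizing the global count into density grades for an auxiliary classification target — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the projection vector `w_density`, the token shapes, and the bin edges are all assumptions made for the example.

```python
import numpy as np

def density_weighted_average(tokens, w_density):
    """Density-weighted token aggregation (illustrative sketch).

    tokens:    (N, D) array of local token features from the backbone.
    w_density: (D,) hypothetical learnable projection that scores each
               token's crowd density.
    Returns a (D,) global feature in which denser regions contribute more.
    """
    scores = tokens @ w_density              # (N,) per-token density scores
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()                 # weights sum to 1 over tokens
    return weights @ tokens                  # (D,) weighted global feature

def density_level(count, bin_edges):
    """Discretize a global count into a density grade.

    bin_edges are assumed thresholds; the resulting integer grade serves
    as the target of the density-level classification loss.
    """
    return int(np.searchsorted(bin_edges, count, side="right"))

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 8))     # 16 tokens, 8-dim features
w = rng.standard_normal(8)
global_feat = density_weighted_average(tokens, w)
print(global_feat.shape)                  # (8,)
print(density_level(37.0, bin_edges=[10, 50, 200, 800]))  # grade 1
```

In training, the weakly supervised objective would combine a regression loss on the predicted count with a cross-entropy loss on the predicted density grade; both targets derive from the same image-level count, so no point annotations are needed.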