🤖 AI Summary
Existing lightweight vision models struggle to balance parameter count, computational cost, and performance, while inadequately modeling human visual mechanisms. Inspired by the human visual system's tendency to process scenes holistically before focusing on details, and to retain global context even during local attention, this work proposes a Global-to-Parallel Multi-scale Encoding (GPM) mechanism and introduces H-GPE, a lightweight network architecture. H-GPE employs a Global Insight Generator (GIG) to capture holistic semantics, while parallel branches concurrently model mid-to-large-scale relationships and fine-grained textures, enabling synergistic integration of global and local features. Evaluated across image classification, object detection, and semantic segmentation, H-GPE consistently outperforms state-of-the-art lightweight models with significantly fewer FLOPs and parameters, achieving a superior trade-off between accuracy and efficiency.
📝 Abstract
Lightweight vision networks have witnessed remarkable progress in recent years, yet achieving a satisfactory balance among parameter scale, computational overhead, and task performance remains difficult. Although many existing lightweight models reduce computation considerably, they often do so at the expense of a substantial increase in parameter count (e.g., LSNet, MobileMamba), which still hinders deployment on resource-limited devices. In parallel, some studies draw inspiration from human visual perception, but their modeling tends to oversimplify the visual process and thus fails to reflect how perception truly operates. Revisiting the cooperative mechanism of the human visual system, we propose GPM (Global-to-Parallel Multi-scale Encoding). GPM first employs a Global Insight Generator (GIG) to extract holistic cues, and then processes features of different scales through parallel branches: LSAE emphasizes mid-/large-scale semantic relations, while an Inverted Residual Block (IRB) preserves fine-grained texture information, jointly enabling coherent representation of global and local features. As such, GPM conforms to two characteristic behaviors of human vision: perceiving the whole before focusing on details, and maintaining broad contextual awareness even during local attention. Built upon GPM, we further develop the lightweight H-GPE network. Experiments on image classification, object detection, and semantic segmentation show that H-GPE achieves strong performance with a balanced footprint in both FLOPs and parameters, delivering a more favorable accuracy-efficiency trade-off than recent state-of-the-art lightweight models.
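To make the global-then-parallel data flow concrete, the following is a minimal NumPy sketch of the GPM idea described above, not the authors' implementation: a stand-in "global insight" descriptor (global average pooling) conditions two parallel branches, a coarse branch that models mid-/large-scale structure (here approximated by pooling to a coarse grid and upsampling) in place of LSAE, and a fine branch that preserves local texture (here a 3x3 mean filter) in place of the IRB; the branch outputs are fused additively. All function names and the fusion rule are illustrative assumptions.

```python
import numpy as np

def global_insight(x):
    # Stand-in for the Global Insight Generator (GIG): a channel-wise
    # global average producing a holistic cue, broadcast over space.
    return x.mean(axis=(1, 2), keepdims=True)  # shape (C, 1, 1)

def coarse_branch(x, k=4):
    # Stand-in for LSAE (assumption): average-pool to a coarse k-strided
    # grid to capture mid-/large-scale relations, then upsample back
    # by nearest-neighbor repetition.
    C, H, W = x.shape
    pooled = x.reshape(C, H // k, k, W // k, k).mean(axis=(2, 4))
    return np.repeat(np.repeat(pooled, k, axis=1), k, axis=2)

def fine_branch(x):
    # Stand-in for the IRB (assumption): a local 3x3 mean filter that
    # keeps fine-grained texture information at full resolution.
    out = np.zeros_like(x)
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    for dy in range(3):
        for dx in range(3):
            out += padded[:, dy:dy + x.shape[1], dx:dx + x.shape[2]]
    return out / 9.0

def gpm_block(x):
    # Global-to-parallel flow: the holistic cue modulates the input,
    # then both branches run in parallel and are fused additively.
    g = global_insight(x)
    return coarse_branch(x * g) + fine_branch(x * g)

x = np.random.rand(8, 16, 16)  # (channels, H, W) feature map
y = gpm_block(x)
print(y.shape)  # → (8, 16, 16): resolution is preserved
```

The key property the sketch illustrates is that both parallel branches see the globally conditioned features (`x * g`), matching the stated behavior of retaining broad context even during local processing.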