NeuroMamba: Multi-Perspective Feature Interaction with Visual Mamba for Neuron Segmentation

πŸ“… 2026-01-22
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the complex morphology and ambiguous boundaries of neurons, which hinder accurate segmentation in electron microscopy data. Existing CNNs lack long-range modeling capability, while patch-based Transformers often lose fine voxel-level detail. To overcome these limitations, the authors propose NeuroMamba, a framework built on the Visual Mamba architecture that combines patch-free global state-space modeling with local feature extraction. The method sharpens boundary discrimination through channel-wise gating, introduces a resolution-adaptive, spatially continuous scanning mechanism, and fuses multi-view features via cross-modulation. Evaluated on four public electron microscopy datasets, NeuroMamba achieves state-of-the-art performance, significantly improving segmentation accuracy for both anisotropic and isotropic volumes.
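The two fusion ideas the summary mentions, channel-wise gating of local features and cross-modulation between the local and global streams, can be sketched in a few lines. This is a minimal numpy illustration of the general mechanisms, not the paper's implementation (the source code is not yet released); the function names, the sigmoid gates, and the additive fusion rule are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_gate(features, gate_logits):
    """Channel-wise gating: scale each channel by a sigmoid gate.

    features:    (C, D, H, W) voxel feature map
    gate_logits: (C,) per-channel logits (hypothetical stand-in for the
                 learned gate in the paper's BDFE)
    """
    gates = sigmoid(gate_logits)[:, None, None, None]  # broadcast over voxels
    return features * gates

def cross_modulate(local_feat, global_feat):
    """Cross-modulation sketch: each stream re-weights the other, then sum.

    The paper's exact fusion rule is not public; this shows only the
    general pattern of mutual gating between two feature streams.
    """
    return local_feat * sigmoid(global_feat) + global_feat * sigmoid(local_feat)

# Toy example on a small 3-D volume (4 channels, 2x8x8 voxels)
rng = np.random.default_rng(0)
local_feat = rng.standard_normal((4, 2, 8, 8))   # stand-in for BDFE output
global_feat = rng.standard_normal((4, 2, 8, 8))  # stand-in for SCFE output
gated = channel_gate(local_feat, rng.standard_normal(4))
fused = cross_modulate(gated, global_feat)
print(fused.shape)  # (4, 2, 8, 8)
```

Both operations preserve the feature-map shape, so the fused tensor can feed a standard segmentation head unchanged.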

πŸ“ Abstract
Neuron segmentation is the cornerstone of reconstructing comprehensive neuronal connectomes, which is essential for deciphering the functional organization of the brain. The irregular morphology and densely intertwined structures of neurons make this task particularly challenging. Prevailing CNN-based methods often fail to resolve ambiguous boundaries due to the lack of long-range context, whereas Transformer-based methods suffer from boundary imprecision caused by the loss of voxel-level details during patch partitioning. To address these limitations, we propose NeuroMamba, a multi-perspective framework that exploits the linear complexity of Mamba to enable patch-free global modeling and synergizes this with complementary local feature modeling, thereby efficiently capturing long-range dependencies while meticulously preserving fine-grained voxel details. Specifically, we design a channel-gated Boundary Discriminative Feature Extractor (BDFE) to enhance local morphological cues. Complementing this, we introduce the Spatial Continuous Feature Extractor (SCFE), which integrates a resolution-aware scanning mechanism into the Visual Mamba architecture to adaptively model global dependencies across varying data resolutions. Finally, a cross-modulation mechanism synergistically fuses these multi-perspective features. Our method demonstrates state-of-the-art performance across four public EM datasets, validating its exceptional adaptability to both anisotropic and isotropic resolutions. The source code will be made publicly available.
Problem

Research questions and friction points this paper is trying to address.

neuron segmentation
long-range dependencies
voxel-level details
boundary ambiguity
irregular morphology
Innovation

Methods, ideas, or system contributions that make the work stand out.

NeuroMamba
Visual Mamba
patch-free modeling
multi-perspective feature fusion
neuron segmentation
Liuyun Jiang
State Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
Yizhuo Lu
Institute of Automation, Chinese Academy of Sciences
Artificial intelligence, neural decoding
Yanchao Zhang
Professor of Electrical, Computer, and Energy Engineering, Arizona State University
Network and distributed system security, wireless networks, mobile computing
Jiazheng Liu
State Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Hua Han
State Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing, China