🤖 AI Summary
This work addresses boundary ambiguity and structural uncertainty in camouflaged object detection by proposing a parameter-free injection of edge priors during the early denoising stages of a diffusion model. For the first time, boundary information is incorporated into the generative process without introducing additional parameters. A unified optimization objective jointly models spatial accuracy, structural constraints, and uncertainty-aware supervision, enabling coherent learning of global semantics and fine-grained boundaries. The proposed method achieves state-of-the-art performance on the CAMO, COD10K, and NC4K benchmarks, consistently outperforming existing approaches across multiple metrics, including $S_m$, $F_\beta^{w}$, $E_m$, and $MAE$, while significantly enhancing boundary sharpness and reducing false detections.
📝 Abstract
Bi-CamoDiffusion is introduced, an evolution of the CamoDiffusion framework for camouflaged object detection. It integrates edge priors into early-stage embeddings via a parameter-free injection process, which enhances boundary sharpness and reduces structural ambiguity. This is governed by a unified optimization objective that balances spatial accuracy, structural constraints, and uncertainty supervision, allowing the model to capture both the object's global context and its intricate boundary transitions. Evaluations on the CAMO, COD10K, and NC4K benchmarks show that Bi-CamoDiffusion surpasses the baseline, delivering sharper delineation of thin structures and protrusions while also minimizing false positives. Moreover, the model consistently outperforms existing state-of-the-art methods across all evaluated metrics, including $S_m$, $F_\beta^{w}$, $E_m$, and $MAE$, demonstrating more precise object-background separation and sharper boundary recovery.
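To make the parameter-free injection idea concrete, the sketch below shows one plausible reading: a precomputed edge map is blended into the noisy sample only during the early (high-noise) denoising steps, with a blend weight given by a fixed schedule of the timestep rather than any learned parameter. The function name `inject_edge_prior`, the linear ramp schedule, the `t_cutoff` fraction, and the maximum blend weight of 0.5 are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def inject_edge_prior(x_t, edge_map, t, T, t_cutoff=0.3):
    """Hypothetical parameter-free edge-prior injection (illustrative only).

    Blends a precomputed edge map into the noisy mask x_t during the
    early denoising steps only. No learned parameters are introduced:
    the blend weight is a fixed function of the normalized timestep.
    """
    # Normalized timestep: 1.0 at the start of denoising, 0.0 at the end.
    tau = t / T
    if tau < 1.0 - t_cutoff:
        # Late steps: leave the sample untouched so fine details are
        # refined purely by the denoiser.
        return x_t
    # Linear ramp inside the early window: strongest injection at the
    # very first step (tau = 1.0), fading to zero at the cutoff.
    w = (tau - (1.0 - t_cutoff)) / t_cutoff
    return (1.0 - 0.5 * w) * x_t + (0.5 * w) * edge_map
```

As a usage example, at the first denoising step (`t == T`) the output is an equal blend of the sample and the edge map, while for the final 70% of steps (with the default `t_cutoff=0.3`) the sample passes through unchanged.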