Task-specific Scene Structure Representations

📅 2023-01-02
🏛️ AAAI Conference on Artificial Intelligence
📈 Citations: 7
Influential: 0
🤖 AI Summary
This work addresses the challenge of modeling task-specific scene structures in low-level vision tasks. We propose a lightweight Scene Structure Guidance Network (SSGNet), the first to formulate graph partitioning as a learnable neural module grounded in spectral clustering theory. Our structural guidance module is trained end-to-end in a fully unsupervised manner via two novel self-supervised losses, enabling direct learning of task-relevant structural representations without ground-truth supervision. Key contributions include: (1) the first unsupervised framework for task-specific structure extraction in low-level vision; (2) a plug-and-play modular design with only 56K parameters; and (3) state-of-the-art performance on joint upsampling and image denoising, with strong cross-dataset generalization. The source code is publicly available.
📝 Abstract
Understanding the informative structures of scenes is essential for low-level vision tasks. Unfortunately, it is difficult to obtain a concrete visual definition of the informative structures because the influence of visual features is task-specific. In this paper, we propose a single general neural network architecture for extracting task-specific structure guidance for scenes. To do this, we first analyze traditional spectral clustering methods, which compute a set of eigenvectors to model a segmented graph forming small compact structures on image domains. We then unfold the traditional graph-partitioning problem into a learnable network, named Scene Structure Guidance Network (SSGNet), to represent the task-specific informative structures. SSGNet yields a set of coefficients of eigenvectors that produces explicit feature representations of image structures. In addition, SSGNet is lightweight (56K parameters) and can be used as a plug-and-play module for off-the-shelf architectures. We optimize SSGNet without any supervision by proposing two novel training losses that enforce task-specific scene structure generation during training. Our main contribution is to show that such a simple network can achieve state-of-the-art results for several low-level vision applications, including joint upsampling and image denoising. We also demonstrate that SSGNet generalizes well to unseen datasets, compared to existing methods that use structural embedding frameworks. Our source codes are available at https://github.com/jsshin98/SSGNet.
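To make the spectral-clustering step that the paper unfolds concrete, the sketch below (not the paper's SSGNet, just the classical formulation it builds on, using only NumPy) partitions a toy image by thresholding the second-smallest eigenvector (the Fiedler vector) of a symmetric normalized graph Laplacian built from pixel-intensity affinities. The image, `sigma`, and the 4-connected neighborhood are illustrative assumptions.

```python
import numpy as np

def spectral_partition(img, sigma=0.5):
    """Two-way spectral cut of a grayscale image (classical, not SSGNet)."""
    h, w = img.shape
    n = h * w
    W = np.zeros((n, n))
    # Gaussian affinity between 4-connected neighbors based on intensity.
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    j = ny * w + nx
                    a = np.exp(-((img[y, x] - img[ny, nx]) ** 2) / sigma**2)
                    W[i, j] = W[j, i] = a
    d = W.sum(axis=1)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt
    _, eigvecs = np.linalg.eigh(L)
    # The second-smallest eigenvector (Fiedler vector) encodes the cut.
    fiedler = eigvecs[:, 1].reshape(h, w)
    return (fiedler > 0).astype(int)

# Toy image: bright right half, dark left half -> clean two-way split.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
labels = spectral_partition(img)
```

SSGNet replaces this expensive per-image eigendecomposition with a learned network that directly predicts eigenvector coefficients, which is what makes the 56K-parameter plug-and-play design feasible.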
Problem

Research questions and friction points this paper is trying to address.

Extracting task-specific scene structures for vision tasks
Unfolding graph partitioning into learnable network architecture
Enhancing low-level vision applications with lightweight guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unfolds graph partitioning into learnable network
Light-weight plug-and-play eigenvector coefficients module
Unsupervised training with novel structure losses
Jisu Shin
AI Graduate School, GIST, South Korea
SeungHyun Shin
AI Graduate School, GIST, South Korea
Hae-Gon Jeon
School of Computing, Yonsei University
Computer Vision · Computational Photography · AI for Social Good · Creative AI