"I Know It When I See It": Mood Spaces for Connecting and Expressing Visual Concepts

📅 2025-04-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of modeling abstract visual concepts—such as “melancholy” or “vitality”—that are difficult to define precisely yet instantly recognizable to humans. Methodologically, it introduces an image-level manipulation framework grounded in fibration-based feature compression and decompression, coupled with an unsupervised loss derived from the principal-eigenvector structure of an affinity graph; this yields a locally linear, compact Mood Space learned without any fine-tuning. By distilling pretrained features and learning pairwise token similarities, the framework constructs a low-dimensional semantic space from only 2–20 exemplar images. The resulting space is 50–100× more compact than the pretrained features and trainable in under one minute. It supports zero-shot visual analogy, object averaging, and pose transfer, achieving state-of-the-art performance on cross-domain abstract-concept expression tasks.
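The summary's "unsupervised loss derived from the principal-eigenvector structure of an affinity graph" can be sketched as follows. This is a hypothetical illustration, not the authors' exact recipe: the affinity kernel (cosine similarity here), the number of eigenvectors `k`, and the alignment-based loss are all assumptions.

```python
import numpy as np

def top_eigenvectors(features, k=3):
    """Top-k eigenvectors of a cosine-similarity affinity matrix.

    features: (n_tokens, dim) array of image-token features.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    affinity = f @ f.T                      # pairwise cosine similarities
    _, vecs = np.linalg.eigh(affinity)     # eigenvalues in ascending order
    return vecs[:, -k:][:, ::-1]           # leading eigenvectors first

def eigenstructure_loss(student_tokens, teacher_tokens, k=3):
    """Penalize mismatch between the eigenvector structures of the
    compressed (student) and pretrained (teacher) token affinities."""
    u = top_eigenvectors(student_tokens, k)
    v = top_eigenvectors(teacher_tokens, k)
    # eigenvectors are sign-ambiguous, so compare absolute alignment
    align = np.abs(np.sum(u * v, axis=0))
    return float(np.mean(1.0 - align))

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 64))
loss = eigenstructure_loss(tokens, tokens)  # identical inputs: near-zero loss
```

Defining the loss on the leading eigenvectors, rather than on the full affinity matrix, focuses learning on the coarse hierarchical grouping of tokens while ignoring fine-grained noise.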

📝 Abstract
Expressing complex concepts is easy when they can be labeled or quantified, but many ideas are hard to define yet instantly recognizable. We propose a Mood Board, where users convey abstract concepts with examples that hint at the intended direction of attribute changes. We compute an underlying Mood Space that 1) factors out irrelevant features and 2) finds the connections between images, thus bringing relevant concepts closer. We invent a fibration computation to compress/decompress pre-trained features into/from a compact space, 50-100x smaller. The main innovation is learning to mimic the pairwise affinity relationship of the image tokens across exemplars. To focus on the coarse-to-fine hierarchical structures in the Mood Space, we compute the top eigenvector structure from the affinity matrix and define a loss in the eigenvector space. The resulting Mood Space is locally linear and compact, allowing image-level operations, such as object averaging, visual analogy, and pose transfer, to be performed as a simple vector operation in Mood Space. Our learning is efficient in computation without any fine-tuning, needs only a few (2-20) exemplars, and takes less than a minute to learn.
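The abstract's claim that image-level operations reduce to "a simple vector operation in Mood Space" can be illustrated with a small sketch. The embeddings below are random stand-ins, not real Mood Space coordinates, and the assumption that pose occupies a known sub-block of the vector is purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical Mood Space embeddings of four exemplar images
z_cat, z_kitten, z_dog = (rng.normal(size=8) for _ in range(3))

# Visual analogy: "cat is to kitten as dog is to ?"
z_puppy = z_dog + (z_kitten - z_cat)

# Object averaging: the midpoint between two exemplars
z_blend = 0.5 * (z_cat + z_dog)

# Pose transfer: swap in another image's pose component
# (assumes pose lives in a known sub-block of the vector)
pose_dims = slice(0, 4)
z_transfer = z_cat.copy()
z_transfer[pose_dims] = z_dog[pose_dims]
```

Because the space is locally linear, each edited vector can be decompressed back to image features; the heavy lifting is in learning a space where this arithmetic is meaningful.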
Problem

Research questions and friction points this paper is trying to address.

Modeling abstract visual concepts without clear labels
Compressing image features into a compact Mood Space
Enabling image operations via vector math in Mood Space
Innovation

Methods, ideas, or system contributions that make the work stand out.

Computes Mood Space to connect and express visual concepts
Uses fibration computation to compress pre-trained features
Learns pairwise affinity relationships of image tokens