MultiBind: A Benchmark for Attribute Misbinding in Multi-Subject Generation

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses attribute misbinding in multi-subject image generation, where identity, appearance, or pose is erroneously assigned to a non-target subject, a failure that existing evaluation metrics diagnose poorly. To this end, we introduce MultiBind, the first benchmark grounded in real multi-person photographs; each instance provides slot-aligned subject crops, masks, bounding boxes, standardized reference images, and structured prompts. We further propose a dimension-wise confusion evaluation protocol that matches generated subjects to ground-truth slots and systematically quantifies cross-subject attribute misbinding. We formally define and categorize interpretable failure modes (drift, swap, dominance, and blending) and employ differential similarity matrices to disentangle self-degradation from inter-subject interference. Experiments demonstrate that MultiBind uncovers binding failures in state-of-the-art models that conventional metrics fail to detect.
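
To make the benchmark's per-instance assets concrete, the sketch below shows one plausible way to organize them in code. It is inferred from the summary only; all class and field names are hypothetical, not MultiBind's actual schema.

```python
# Hypothetical layout of one MultiBind instance, inferred from the summary.
# Field names and types are assumptions, not the benchmark's real schema.
from dataclasses import dataclass, field


@dataclass
class SubjectSlot:
    crop_path: str                     # slot-ordered crop of this subject
    mask_path: str                     # segmentation mask for the crop
    bbox: tuple[int, int, int, int]    # (x, y, w, h) in the source photograph
    reference_path: str                # standardized reference image for the slot


@dataclass
class MultiBindInstance:
    photo_path: str                    # real multi-person photograph
    background_path: str               # inpainted background reference
    prompt: str                        # dense, entity-indexed prompt
    slots: list[SubjectSlot] = field(default_factory=list)
```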

📝 Abstract
Subject-driven image generation is increasingly expected to support fine-grained control over multiple entities within a single image. In multi-reference workflows, users may provide several subject images, a background reference, and long, entity-indexed prompts to control multiple people within one scene. In this setting, a key failure mode is cross-subject attribute misbinding: attributes are preserved, edited, or transferred to the wrong subject. Existing benchmarks and metrics largely emphasize holistic fidelity or per-subject self-similarity, making such failures hard to diagnose. We introduce MultiBind, a benchmark built from real multi-person photographs. Each instance provides slot-ordered subject crops with masks and bounding boxes, canonicalized subject references, an inpainted background reference, and a dense entity-indexed prompt derived from structured annotations. We also propose a dimension-wise confusion evaluation protocol that matches generated subjects to ground-truth slots and measures slot-to-slot similarity using specialists for face identity, appearance, pose, and expression. By subtracting the corresponding ground-truth similarity matrices, our method separates self-degradation from true cross-subject interference and exposes interpretable failure patterns such as drift, swap, dominance, and blending. Experiments on modern multi-reference generators show that MultiBind reveals binding failures that conventional reconstruction metrics miss.
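
Since the evaluation protocol is the core technical contribution, a minimal sketch may help. The code below assumes cosine similarity over unit-normalized specialist embeddings and Hungarian matching for slot assignment; the paper's exact matching rule and scoring details are not stated here, and every function name is hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def cosine_sim(A, B):
    """Pairwise cosine similarity: S[i, j] = cos(A[i], B[j])."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T


def dimension_confusion(gen_embs, gt_embs, ref_embs):
    """Differential confusion matrix for one dimension (e.g. face identity).

    gen_embs: (n, d) specialist embeddings of the n generated subject crops.
    gt_embs:  (n, d) embeddings of the ground-truth subject crops, slot-ordered.
    ref_embs: (n, d) embeddings of the canonical slot reference images.
    """
    S_gen = cosine_sim(gen_embs, ref_embs)  # generated subjects vs. slot refs
    S_gt = cosine_sim(gt_embs, ref_embs)    # real crops vs. slot refs (baseline)

    # Assign each generated subject to a ground-truth slot by maximizing total
    # similarity, then reorder rows so row i is the subject matched to slot i.
    rows, cols = linear_sum_assignment(-S_gen)
    S_aligned = S_gen[rows[np.argsort(cols)]]

    # Subtracting the ground-truth matrix removes similarity the real subjects
    # already share, isolating changes introduced by the generator:
    #   D[i, i] strongly negative             -> drift (self-degradation) of slot i
    #   D[i, j] and D[j, i] strongly positive -> swap between slots i and j
    #   one column elevated across all rows   -> dominance of that slot's subject
    #   one row mildly elevated everywhere    -> blending of several identities
    return S_aligned - S_gt
```

In a full pipeline, a function like this would presumably run once per dimension (face identity, appearance, pose, expression), each with its own specialist embedder.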
Problem

Research questions and friction points this paper is trying to address.

attribute misbinding
multi-subject generation
cross-subject interference
subject-driven image generation
entity-indexed prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

attribute misbinding
multi-subject generation
slot-based evaluation
dimension-wise confusion
entity-indexed prompting