🤖 AI Summary
While current large multimodal models (LMMs) excel on standard vision-language benchmarks, they often fail to align with human-centered AI (HCAI) principles such as fairness, ethics, and empathy. Method: We introduce HumaniBench, the first multimodal benchmark explicitly designed to evaluate HCAI compliance, comprising 32K real-world image-question pairs spanning seven principles. It combines GPT-4o-assisted annotation with expert cross-verification and quantifies HCAI principles within a unified evaluation framework featuring seven task types: open- and closed-ended visual question answering (VQA), multilingual QA, visual grounding, empathetic captioning, and robustness tests. Contribution/Results: We publicly release a standardized dataset, a fine-grained scoring protocol, and evaluation code. Benchmarking 15 state-of-the-art LMMs reveals pervasive weaknesses, including poor robustness, inaccurate visual grounding, and trade-offs between human-aligned behavior and factual accuracy, establishing a reproducible diagnostic foundation for HCAI-driven model improvement.
📝 Abstract
Large multimodal models (LMMs) now excel on many vision-language benchmarks, yet they still struggle with human-centered criteria such as fairness, ethics, empathy, and inclusivity, which are key to aligning with human values. We introduce HumaniBench, a holistic benchmark of 32K real-world image-question pairs, annotated via a scalable GPT-4o-assisted pipeline and exhaustively verified by domain experts. HumaniBench evaluates seven Human-Centered AI (HCAI) principles, namely fairness, ethics, understanding, reasoning, language inclusivity, empathy, and robustness, across seven diverse tasks, including open- and closed-ended visual question answering (VQA), multilingual QA, visual grounding, empathetic captioning, and robustness tests. Benchmarking 15 state-of-the-art LMMs (open- and closed-source) reveals that proprietary models generally lead, though robustness and visual grounding remain weak points, and some open-source models struggle to balance accuracy with adherence to human-aligned principles. HumaniBench is the first benchmark purpose-built around HCAI principles. It provides a rigorous testbed for diagnosing alignment gaps and guiding LMMs toward behavior that is both accurate and socially responsible. Dataset, annotation prompts, and evaluation code are available at: https://vectorinstitute.github.io/HumaniBench
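To make the evaluation setup concrete, below is a minimal sketch of a per-principle scoring loop of the kind such a benchmark implies. All field names (`image`, `question`, `answer`, `principle`, `task`), the `model` callable, and the exact-match scorer are illustrative assumptions, not the released API; the actual dataset schema and the fine-grained scoring protocol are defined in the evaluation code at the project page above.

```python
import json
from collections import defaultdict

# Hypothetical JSONL record schema (the released dataset's real fields may differ):
# {"image": "path.jpg", "question": "...", "answer": "...",
#  "principle": "fairness", "task": "open_vqa"}

def evaluate(model, dataset_path):
    """Score a model's answers, grouped by (HCAI principle, task)."""
    scores = defaultdict(list)
    with open(dataset_path) as f:
        for line in f:
            ex = json.loads(line)
            # `model` is any callable mapping (image, question) -> answer string;
            # plug in an open- or closed-source LMM client here.
            prediction = model(ex["image"], ex["question"])
            # Placeholder scorer: exact match stands in for the paper's
            # fine-grained scoring protocol (e.g., rubric- or judge-based).
            score = float(prediction.strip().lower() == ex["answer"].strip().lower())
            scores[(ex["principle"], ex["task"])].append(score)
    # Mean score per (principle, task) cell.
    return {key: sum(vals) / len(vals) for key, vals in scores.items()}
```

Grouping scores by principle and task, as above, is what lets such a benchmark surface per-dimension weaknesses (e.g., strong VQA accuracy alongside weak robustness) rather than a single aggregate number.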