🤖 AI Summary
This work addresses a limitation of existing evaluation protocols for omnimodal models, which rely predominantly on textual outputs and therefore fail to assess whether such models can generate contextually appropriate speech in multimodal settings. To bridge this gap, we introduce OmniACBench, the first benchmark specifically designed to evaluate context-aware speech generation in omnimodal models. It integrates spoken instructions, textual scripts, and visual inputs into controllable speech generation tasks targeting six acoustic attributes: speech rate, phonation, pronunciation, emotion, global accent, and timbre. Experiments across eight state-of-the-art models show that, despite strong performance on conventional text-based tasks, they exhibit significant deficiencies in context-driven speech generation. The benchmark further pinpoints the critical bottleneck in leveraging multimodal context for expressive speech synthesis and identifies three representative failure modes, validating the challenge and utility of OmniACBench.
📝 Abstract
Most testbeds for omni-modal models assess multimodal understanding via textual outputs, leaving it unclear whether these models can properly speak their answers. To study this, we introduce OmniACBench, a benchmark for evaluating context-grounded acoustic control in omni-modal models. Given a spoken instruction, a text script, and an image, a model must read the script aloud with an appropriate tone and manner. OmniACBench comprises 3,559 verified instances covering six acoustic features: speech rate, phonation, pronunciation, emotion, global accent, and timbre. Extensive experiments on eight models reveal their limitations in the proposed setting, despite their strong performance on prior textual-output evaluations. Our analyses show that the main bottleneck lies not in processing individual modalities, but in integrating multimodal context for faithful speech generation. Moreover, we identify three common failure modes (weak direct control, failed implicit inference, and failed multimodal grounding), providing insights for developing models that can verbalize responses effectively.