🤖 AI Summary
This work addresses the challenge of high computational cost and the trade-off between efficiency and accuracy in long-form video understanding by proposing A4VL, a multi-agent perception-action alliance. A4VL leverages an event-driven chunking strategy and a cue-guided alignment mechanism within iterative perception-action cycles, enabling agents to collaboratively extract query-relevant cues, localize key segments, and refine answers through cross-review and consensus mechanisms. Evaluated on five mainstream VideoQA benchmarks, A4VL outperforms 18 representative vision-language models and 10 long-video optimization approaches while significantly reducing inference latency, thereby achieving substantial gains in processing efficiency without compromising accuracy.
📝 Abstract
This paper presents a multi-agent perception-action exploration alliance, dubbed A4VL, for efficient long-video reasoning. A4VL operates in a multi-round perception-action exploration loop with a selection of VLM agents. In each round, the team of agents performs video question answering (VideoQA) via perception exploration followed by action exploration. During perception exploration, each agent learns to extract query-specific perception clue(s) from a few sampled frames and performs clue-based alignment to find the video block(s) most relevant to the query-specific event. During action exploration, A4VL performs video reasoning in three steps: (1) each agent produces its initial answer with a rationale, (2) all agents collaboratively score one another through cross-reviews and relevance ranking, and (3) based on whether a satisfactory consensus is reached, A4VL either starts a new round of perception-action deliberation by pruning (e.g., filtering out the lowest-performing agent) and re-staging (e.g., perception-action exploration based on new clues and matching blocks), or concludes by producing its final answer. The integration of the multi-agent alliance through multi-round perception-action exploration, coupled with event-driven partitioning and cue-guided block alignment, enables A4VL to scale effectively to real-world long videos while preserving high-quality video reasoning. Evaluation results on five popular VideoQA benchmarks show that A4VL outperforms 18 existing representative VLMs and 10 recent methods optimized for long-video reasoning, while achieving significantly lower inference latency. Our code is released at https://github.com/git-disl/A4VL.
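The multi-round perception-action loop described above can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: the agent functions (`perceive`, `answer`, `cross_review`) are hypothetical stubs standing in for the VLM agents, event-driven chunking, and cue-guided block alignment of the real system, and the deterministic scores exist only to exercise the control flow.

```python
# Minimal sketch of an A4VL-style perception-action loop.
# Assumption: agent internals are stubbed; the real system uses VLM agents
# with event-driven video chunking and cue-guided block alignment.

def perceive(agent, video_blocks, query):
    # Perception exploration (stub): each agent would extract query-specific
    # cues from sampled frames and align them to the most relevant blocks.
    return video_blocks[:2]  # hypothetical: keep the two best-aligned blocks

def answer(agent, blocks, query):
    # Action exploration step 1 (stub): produce an answer with a rationale.
    return {"agent": agent, "answer": f"ans-{agent}", "rationale": "..."}

def cross_review(answers):
    # Action exploration step 2 (stub): agents score one another; here the
    # scores are deterministic placeholders for illustration.
    return {a["agent"]: 1.0 / (i + 1) for i, a in enumerate(answers)}

def a4vl_loop(agents, video_blocks, query, max_rounds=3, threshold=0.5):
    for _ in range(max_rounds):
        answers = [answer(a, perceive(a, video_blocks, query), query)
                   for a in agents]
        scores = cross_review(answers)
        best = max(answers, key=lambda a: scores[a["agent"]])
        # Step 3: consensus check (stub) -- conclude if the top score
        # clears the threshold, or no further pruning is possible.
        if scores[best["agent"]] >= threshold or len(agents) == 1:
            return best["answer"]
        # Otherwise prune the lowest-scoring agent and re-stage a new
        # round of perception-action exploration.
        worst = min(agents, key=lambda a: scores[a])
        agents = [a for a in agents if a != worst]
    return best["answer"]

print(a4vl_loop(["A", "B", "C"], ["block1", "block2", "block3"], "query"))
# prints "ans-A"
```

The loop terminates either by consensus or after `max_rounds` rounds, mirroring the conclude-or-re-stage decision in the abstract.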