🤖 AI Summary
This work addresses the lack of structured semantic representations and the opacity of multimodal reasoning in traffic video understanding by proposing a scene graph–based ReAct-style multimodal agent. The method integrates object detection, multi-object tracking, and lane extraction to construct dynamic traffic scene graphs, which are then coupled with a large language model to enable explainable, collaborative reasoning through symbolic graph queries and tool invocation. By introducing structured scene graphs into the ReAct framework for the first time, the approach not only achieves competitive accuracy on the TUMTraffic VideoQA benchmark but also delivers a transparent and traceable reasoning process.
📝 Abstract
We present the Scene-Graph Based Multi-Modal Traffic Agent (SGTA), a modular framework for traffic video understanding that combines structured scene graphs with multi-modal reasoning. SGTA constructs a traffic scene graph from roadside videos using object detection, multi-object tracking, and lane extraction, then performs tool-based reasoning over both symbolic graph queries and visual inputs. It adopts the ReAct paradigm, interleaving large language model reasoning traces with tool invocations to enable interpretable decision-making for complex video questions. Experiments on a selected sample of the TUMTraffic VideoQA dataset demonstrate that SGTA achieves competitive accuracy across multiple question types while providing transparent reasoning steps. These results highlight the potential of integrating structured scene representations with multi-modal agents for traffic video understanding.
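To make the pipeline concrete, the following is a minimal sketch of a ReAct-style Thought → Action → Observation step over a traffic scene graph. All names (`SceneGraph`, `react_step`, the `in_lane` relation) are illustrative assumptions, not SGTA's actual schema or implementation; a real agent would let the LLM choose which tool to invoke, while here the routing is stubbed for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    """Hypothetical minimal traffic scene graph: nodes are tracked objects
    (from detection/tracking), edges are symbolic relations such as lane
    membership (from lane extraction)."""
    nodes: dict = field(default_factory=dict)   # object id -> attributes
    edges: list = field(default_factory=list)   # (src, relation, dst) triples

    def query(self, relation: str, dst: str) -> list:
        """Symbolic graph query: ids of nodes related to `dst` by `relation`."""
        return [s for s, r, d in self.edges if r == relation and d == dst]

def react_step(question: str, graph: SceneGraph) -> dict:
    """One ReAct-style step: emit a thought, invoke the graph-query tool,
    record the observation, and derive an answer. The full returned trace
    is what makes the reasoning transparent and traceable."""
    thought = f"To answer '{question}', count objects linked to lane_1."
    observation = graph.query("in_lane", "lane_1")          # tool invocation
    return {"thought": thought,
            "action": "graph_query(relation='in_lane', dst='lane_1')",
            "observation": observation,
            "answer": len(observation)}

g = SceneGraph(
    nodes={"car_1": {"class": "car"}, "car_2": {"class": "car"},
           "truck_1": {"class": "truck"}},
    edges=[("car_1", "in_lane", "lane_1"),
           ("truck_1", "in_lane", "lane_2"),
           ("car_2", "in_lane", "lane_1")],
)
trace = react_step("How many vehicles are in lane 1?", g)
print(trace["answer"])  # 2
```

Because every step returns an explicit thought/action/observation record, the agent's answer can be audited against the underlying graph, which is the property the abstract refers to as transparent reasoning.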