See2Refine: Vision-Language Feedback Improves LLM-Based eHMI Action Designers

📅 2026-02-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses two challenges: autonomous vehicles lack natural communication channels with other road users, and existing external Human-Machine Interface (eHMI) designs struggle to adapt to dynamic traffic scenarios while relying heavily on manual annotation. To overcome these limitations, the authors propose See2Refine, the first framework to introduce a vision-language model (VLM) as an automated source of perceptual feedback, working in concert with a large language model (LLM) to form a closed-loop system. The loop enables unsupervised generation and iterative refinement of eHMI behaviors across modalities such as light bars, animated eyes, and robotic arms. Experiments demonstrate that the approach consistently outperforms baseline methods across diverse eHMI modalities and LLM scales, and that VLM-based evaluations align closely with human preferences, supporting the framework's generalizability and effectiveness.
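
To make the closed loop concrete, the sketch below shows one plausible generate-evaluate-revise procedure for the setup described above. The function names, the 1-5 appropriateness scale, and the stopping rule are illustrative assumptions, not the authors' implementation.

```python
from typing import Callable, List, Tuple

def refine_ehmi_action(
    propose: Callable[[str, List[str]], dict],           # LLM designer: (context, critiques) -> action spec (assumed)
    evaluate: Callable[[dict, str], Tuple[float, str]],   # VLM judge: (action, context) -> (score, critique) (assumed)
    context: str,
    max_rounds: int = 5,
    target_score: float = 4.5,                            # assumed 1-5 perceived-appropriateness scale
) -> dict:
    """Generate, evaluate, and revise an eHMI action using only VLM feedback."""
    critiques: List[str] = []
    action = propose(context, critiques)                  # initial context-conditioned proposal
    for _ in range(max_rounds):
        score, critique = evaluate(action, context)       # perceptual check of the candidate action
        if score >= target_score:                         # judged appropriate enough: stop refining
            break
        critiques.append(critique)                        # keep the critique history for the designer
        action = propose(context, critiques)              # ask the LLM for a revised action
    return action
```

The callables stand in for the paper's LLM action designer and VLM evaluator; the point is that no human-annotated feedback enters the loop.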

📝 Abstract
Automated vehicles lack natural communication channels with other road users, making external Human-Machine Interfaces (eHMIs) essential for conveying intent and maintaining trust in shared environments. However, most eHMI studies rely on developer-crafted message-action pairs, which are difficult to adapt to diverse and dynamic traffic contexts. A promising alternative is to use Large Language Models (LLMs) as action designers that generate context-conditioned eHMI actions, yet such designers lack perceptual verification and typically depend on fixed prompts or costly human-annotated feedback for improvement. We present See2Refine, a human-free, closed-loop framework that uses vision-language model (VLM) perceptual evaluation as automated visual feedback to improve an LLM-based eHMI action designer. Given a driving context and a candidate eHMI action, the VLM evaluates the perceived appropriateness of the action, and this feedback is used to iteratively revise the designer's outputs, enabling systematic refinement without human supervision. We evaluate our framework across three eHMI modalities (lightbar, eyes, and arm) and multiple LLM sizes. Across settings, our framework consistently outperforms prompt-only LLM designers and manually specified baselines in both VLM-based metrics and human-subject evaluations. Results further indicate that the improvements generalize across modalities and that VLM evaluations are well aligned with human preferences, supporting the robustness and effectiveness of See2Refine for scalable action design.
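
For concreteness, the illustrative schema below shows one way the designer's per-modality actions (lightbar, eyes, arm) could be represented as structured output for the VLM to inspect. All field names, value choices, and the example yielding scenario are assumptions, not the schema used in the paper.

```python
from dataclasses import dataclass

@dataclass
class LightbarAction:
    pattern: str          # e.g. "sweep", "pulse", "solid"
    color: str            # e.g. "cyan" to signal yielding
    frequency_hz: float   # blink/sweep rate

@dataclass
class EyesAction:
    gaze_target: str      # e.g. "pedestrian", "road_ahead"
    expression: str       # e.g. "attentive", "neutral"

@dataclass
class ArmAction:
    gesture: str          # e.g. "wave_through", "stop_palm"
    speed: str            # e.g. "slow", "normal"

# Hypothetical designer output for a single context ("yielding to a pedestrian
# at a crosswalk"); each modality gets its own action that the VLM can rate.
yielding_action = {
    "lightbar": LightbarAction(pattern="sweep", color="cyan", frequency_hz=1.0),
    "eyes": EyesAction(gaze_target="pedestrian", expression="attentive"),
    "arm": ArmAction(gesture="wave_through", speed="slow"),
}
```
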
Problem

Research questions and friction points this paper is trying to address.

eHMI
automated vehicles
LLM-based action design
perceptual verification
human-free feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-language model
LLM-based action design
eHMI
closed-loop refinement
perceptual feedback