🤖 AI Summary
This study addresses the underexplored role of images in conversational search clarification, specifically how their effects differ between two tasks: answering clarification questions and query reformulation. Through a user study with 73 participants, the authors systematically compare multimodal (image-enhanced) and text-only clarification strategies, examining how task type and user expertise moderate their impact on user behavior and retrieval performance. The findings reveal a task-dependent value of visual cues: images significantly improve the accuracy of reformulated queries and subsequent retrieval effectiveness, and they increase user preference and engagement during clarification interactions. In the clarification-answering task, however, text-only strategies yield better user performance. These results suggest that visual augmentation in conversational search should be deployed strategically, based on the specific task demands and user characteristics.
📝 Abstract
Conversational search (CS) systems increasingly employ clarifying questions to refine user queries and improve the search experience. Previous studies have demonstrated the usefulness of text-based clarifying questions in enhancing both retrieval performance and user experience. While images have been shown to improve retrieval performance in various contexts, their impact on user performance when incorporated into clarifying questions remains largely unexplored. We conduct a user study with 73 participants to investigate the role of images in CS, specifically examining their effects on two search-related tasks: (i) answering clarifying questions, and (ii) query reformulation. We compare the effects of multimodal and text-only clarifying questions on both tasks within a CS context from multiple perspectives. Our findings reveal that while participants showed a strong preference for multimodal questions when answering clarifying questions, preferences were more balanced in the query reformulation task. The impact of images varied with both task type and user expertise: in answering clarifying questions, images helped maintain engagement across expertise levels, while in query reformulation they led to more precise queries and improved retrieval performance. Interestingly, when answering clarifying questions, text-only setups yielded better user performance, as they provided more comprehensive textual information in the absence of images. These results offer valuable insights for designing effective multimodal CS systems, highlighting that the benefits of visual augmentation are task-dependent and should be implemented strategically based on the specific search context and user characteristics.