Guidance Source Matters: How Guidance from AI, Expert, or a Group of Analysts Impacts Visual Data Preparation and Analysis

📅 2025-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how the provenance of analytical guidance (an AI system, a human expert, a group of analysts, or an unattributed source) affects users' data analysis behavior and subjective perceptions. With guidance quality held constant across conditions, five experimental groups (N = 250) in a preregistered between-subjects study used a custom visual analytics tool to perform exploratory tasks. The authors collected behavioral metrics (e.g., guidance request frequency, adoption rate) alongside perceptual measures (e.g., trust, post-task perceived gain, regret). Results show that guidance provenance significantly modulates stage-specific behavior (e.g., exploration vs. verification) and affective responses, challenging the assumption that high-quality guidance is functionally interchangeable across sources. Notably, AI-provided guidance yielded both the highest perceived gain and the strongest regret, and systematic differences emerged across sources in trust formation and guidance utilization. This work provides the first systematic empirical evidence of an independent effect of guidance provenance, advancing theory and practice in trustworthy AI design and human-AI collaborative analytics.

📝 Abstract
The progress in generative AI has fueled AI-powered tools like co-pilots and assistants to provision better guidance, particularly during data analysis. However, research on guidance has not yet examined the perceived efficacy of the source from which guidance is offered and the impact of this source on the user's perception and usage of guidance. We ask whether users perceive all guidance sources as equal, with particular interest in three sources: (i) AI, (ii) human expert, and (iii) a group of human analysts. As a benchmark, we consider a fourth source, (iv) unattributed guidance, where guidance is provided without attribution to any source, enabling isolation of and comparison with the effects of source-specific guidance. We design a five-condition between-subjects study, with one condition for each of the four guidance sources and an additional (v) no-guidance condition, which serves as a baseline to evaluate the influence of any kind of guidance. We situate our study in a custom data preparation and analysis tool wherein we task users to select relevant attributes from an unfamiliar dataset to inform a business report. Depending on the assigned condition, users can request guidance, which the system then provides in the form of attribute suggestions. To ensure internal validity, we control for the quality of guidance across source-conditions. Through several metrics of usage and perception, we statistically test five preregistered hypotheses and report on additional analysis. We find that the source of guidance matters to users, but not in a manner that matches received wisdom. For instance, users utilize guidance differently at various stages of analysis, including expressing varying levels of regret, despite receiving guidance of similar quality. Notably, users in the AI condition reported both higher post-task benefit and regret.
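The abstract does not name the statistical tests used for the five preregistered hypotheses. As one plausible illustration of the kind of omnibus comparison such a five-condition between-subjects design calls for, the sketch below implements a Kruskal-Wallis H statistic in pure Python and applies it to a behavioral metric (guidance adoption rate) across the five conditions. The condition labels and values are hypothetical, not data from the study.

```python
from itertools import chain

def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic for k independent samples.

    Ranks all observations jointly (average ranks for ties), then
    compares rank sums across groups. Tie correction is omitted
    for simplicity.
    """
    data = list(chain.from_iterable(groups))
    n = len(data)
    # Assign ranks (1-based), averaging over tied values.
    order = sorted(range(n), key=lambda i: data[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and data[order[j + 1]] == data[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    # Accumulate squared rank sums per group.
    total = 0.0
    start = 0
    for g in groups:
        r = sum(ranks[start:start + len(g)])
        total += r * r / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * total - 3 * (n + 1)

# Hypothetical adoption rates for the five study conditions.
conditions = {
    "AI":           [0.80, 0.90, 0.70, 0.85],
    "expert":       [0.60, 0.65, 0.70, 0.60],
    "analyst group":[0.55, 0.60, 0.50, 0.65],
    "unattributed": [0.50, 0.45, 0.55, 0.50],
    "no guidance":  [0.20, 0.25, 0.30, 0.20],
}
H = kruskal_wallis_h(list(conditions.values()))
```

A nonparametric test is a reasonable default here because behavioral counts and rates from a usability study are often non-normal; in practice the H statistic would be compared against a chi-squared distribution with k - 1 degrees of freedom, followed by pairwise post-hoc comparisons between sources.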
Problem

Research questions and friction points this paper is trying to address.

Data Analysis
User Behavior
Guidance Source Impact
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-guided data analysis
user experience
guidance source impact