🤖 AI Summary
This study investigates the psychological mechanism by which users reject offers, particularly excessively high discounts (e.g., 85%), due to perceived "phantom costs" (i.e., implicit, non-monetary trade-offs) when interacting with human versus robotic sales agents. Method: A controlled car-buying simulation (N = 855) employed a 2×2 factorial design manipulating agent type (human/robot) and autonomy (autonomous/non-autonomous), crossed with discount level (5% vs. 85%). Contribution/Results: Robotic agents significantly reduced phantom cost perceptions because they were seen as lacking self-interest, which attenuated suspicion about ulterior motives. Although high discounts heightened motive-related skepticism, perceived benefits still positively predicted purchase intention. Critically, phantom costs generalized beyond the agent to the associated product and the agent's manager, a novel finding, and agent type systematically weakened this effect by mitigating motivational attributions. These results provide empirical grounding for human-AI trust modeling and ethically informed AI design.
📝 Abstract
People often reject offers that are too generous due to the perception of hidden drawbacks, referred to as "phantom costs." We hypothesized that this perception, and the resulting decisions, vary based on the type of agent making the offer (human vs. robot) and the degree to which the agent is perceived to be autonomous or to have the capacity for self-interest. To test this conjecture, participants (N = 855) engaged in a car-buying simulation in which a human or robot sales agent, described as either autonomous or not, offered either a small (5%) or large (85%) discount. Results revealed that the robot was perceived as less self-interested than the human, which reduced the perception of phantom costs. While larger discounts increased phantom costs, they also increased purchase intentions, suggesting that perceived benefits can outweigh phantom costs. Importantly, phantom costs were attributed not only to the agent participants interacted with, but also to the product and the agent's manager, highlighting at least three sources of suspicion. These findings deepen our understanding of to whom people assign responsibility and of how perceptions shape both human-human and human-robot interactions, with implications for ethical AI design and marketing strategies.