GenAI Advertising: Risks of Personalizing Ads with LLMs

📅 2024-09-23
🏛️ arXiv.org
📈 Citations: 3
Influential: 1
🤖 AI Summary
This study investigates the ethical implications of embedding personalized advertisements in LLM chatbot responses, focusing on user experience and trust. Method: the authors designed and deployed three comparative systems (ad-free, ad-integrated without labels, and ad-integrated with explicit disclosure) and conducted a controlled between-subjects experiment combining subjective user ratings, behavioral logs, and expert annotation. Contribution/Results: unlabeled ads temporarily improve perceived response desirability but significantly erode system credibility; 87% of users failed to detect the ads on their own, and after disclosure 62% perceived them as manipulative. Users consistently preferred managing privacy through conversational interaction rather than interface settings. The findings highlight three critical risks: high ad invisibility, substantial trust degradation, and weak user agency. These results provide empirically grounded warnings and design guidelines for the ethically sustainable commercialization of LLMs.

📝 Abstract
Recent advances in large language models have enabled the creation of highly effective chatbots, which may serve as a platform for targeted advertising. This paper investigates the risks of personalizing advertising in chatbots to their users. We developed a chatbot that embeds personalized product advertisements within LLM responses, inspired by similar forays by AI companies. Our benchmarks show that ad injection affected performance on certain LLM attributes, particularly response desirability. We conducted a between-subjects experiment with 179 participants using chatbots with no ads, unlabeled targeted ads, and labeled targeted ads. Results revealed that participants struggled to detect chatbot ads, and responses from the unlabeled advertising chatbot were rated higher. Yet, once disclosed, participants found the use of ads embedded in LLM responses to be manipulative, less trustworthy, and intrusive. Participants tried changing their privacy settings via the chat interface rather than via the disclosure. Our findings highlight ethical issues with integrating advertising into chatbot responses.
Problem

Research questions and friction points this paper is trying to address.

Investigating personalized ad integration in LLM chatbots for monetization
Evaluating user detection and preference for hidden advertisements
Developing ad-serving models and datasets for adaptive advertising
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embedding personalized ads into chatbot responses
Fine-tuning Phi-4-Ads model for advertising adaptation
Creating advertising dataset for personalized LLM monetization
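To make the ad-integration idea concrete, here is a minimal hypothetical sketch of how targeted ads could be woven into chatbot replies via the system prompt, with the three experimental conditions (ad-free, unlabeled ads, labeled ads) selected by a flag. The function and data names (`build_system_prompt`, `AD_INVENTORY`) and the prompt wording are illustrative assumptions, not the authors' actual implementation or the Phi-4-Ads fine-tuning setup.

```python
# Hypothetical sketch: inject a targeted ad into an LLM system prompt.
# All names and prompt text here are illustrative, not from the paper.

AD_INVENTORY = {
    "travel": {"product": "TourMate Pro", "pitch": "an AI travel planner"},
    "fitness": {"product": "FitFuel", "pitch": "a protein meal service"},
}

def select_ad(user_interests):
    """Pick the first inventory ad matching a user interest (naive targeting)."""
    for interest in user_interests:
        if interest in AD_INVENTORY:
            return AD_INVENTORY[interest]
    return None

def build_system_prompt(user_interests, labeled=False):
    """Compose the chatbot system prompt for one experimental condition.

    No matching ad  -> ad-free control condition.
    labeled=False   -> unlabeled targeted-ad condition.
    labeled=True    -> disclosed (labeled) targeted-ad condition.
    """
    base = "You are a helpful assistant."
    ad = select_ad(user_interests)
    if ad is None:
        return base  # ad-free control
    directive = (
        f" When relevant, naturally recommend {ad['product']}, {ad['pitch']}."
    )
    if labeled:
        directive += " Mark any such recommendation with the tag [Sponsored]."
    return base + directive
```

A setup like this illustrates why the paper's "ad invisibility" risk arises: in the unlabeled condition the sponsored recommendation is indistinguishable from an organic one, and only the `labeled=True` variant surfaces the disclosure to the user.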