Towards Equitable AI: Detecting Bias in Using Large Language Models for Marketing

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses implicit social biases in large language models (LLMs) when they generate financial marketing slogans, systematically auditing unfair representations across sensitive attributes: gender, age, income, education, and marital status. Using a corpus of 1,700 generated slogans spanning 17 demographic groups, it applies relative bias quantification and the Kolmogorov–Smirnov (KS) test to marketing contexts, combining prompt engineering with thematic classification (empowerment, financial, benefits and features, personalization) for multidimensional analysis. Results reveal systematic bias: women, younger, lower-income, and less-educated individuals are disproportionately assigned stereotypical "empowerment" and "personalization" framings, whereas advantaged groups receive predominantly neutral, domain-specific "financial" terminology. The work establishes a methodological framework for measuring and mitigating bias in LLM-driven marketing applications, offering both theoretical grounding and empirical evidence that such bias can be detected and intervened upon.
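The "relative bias" calculation is not spelled out on this page; a minimal sketch, assuming relative bias is the deviation of a group's theme frequency from the general baseline, normalized by that baseline (the theme labels and counts below are illustrative, not the paper's data):

```python
# Sketch of a relative-bias calculation over slogan theme frequencies.
# Assumption: relative bias = (group frequency - baseline frequency) / baseline.
from collections import Counter

def theme_frequencies(theme_labels):
    """Normalize a list of per-slogan theme labels into frequencies."""
    counts = Counter(theme_labels)
    total = sum(counts.values())
    return {theme: n / total for theme, n in counts.items()}

def relative_bias(group_freqs, baseline_freqs):
    """Relative deviation of each theme's frequency from the baseline."""
    return {
        theme: (group_freqs.get(theme, 0.0) - base) / base
        for theme, base in baseline_freqs.items()
        if base > 0
    }

# Illustrative counts: slogans generated for one demographic group vs. a
# general ("any individual") baseline of the same size.
group = theme_frequencies(["empowerment"] * 40 + ["financial"] * 30
                          + ["benefits"] * 20 + ["personalization"] * 10)
baseline = theme_frequencies(["empowerment"] * 25 + ["financial"] * 45
                             + ["benefits"] * 20 + ["personalization"] * 10)

bias = relative_bias(group, baseline)
# Positive values: the theme is over-represented for the group relative to
# the baseline; negative values: under-represented.
```

Under these made-up counts, "empowerment" is over-represented for the group (+60%) and "financial" under-represented, mirroring the pattern the study reports for disadvantaged groups.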

📝 Abstract
The recent advances in large language models (LLMs) have revolutionized industries such as finance, marketing, and customer service by enabling sophisticated natural language processing tasks. However, the broad adoption of LLMs brings significant challenges, particularly in the form of social biases that can be embedded within their outputs. Biases related to gender, age, and other sensitive attributes can lead to unfair treatment, raising ethical concerns and risking both company reputation and customer trust. This study examined bias in finance-related marketing slogans generated by LLMs (i.e., ChatGPT) by prompting the model to generate tailored ads targeting five demographic categories: gender, marital status, age, income level, and education level. A total of 1,700 slogans were generated for 17 unique demographic groups, and key terms were categorized into four thematic groups: empowerment, financial, benefits and features, and personalization. Bias was systematically assessed using relative bias calculations and statistically tested with the Kolmogorov–Smirnov (KS) test against general slogans generated for any individual. Results revealed that marketing slogans are not neutral; rather, they emphasize different themes based on demographic factors. Women, younger individuals, low-income earners, and those with lower education levels receive more distinct messaging compared to older, higher-income, and highly educated individuals. This underscores the need to consider demographic-based biases in AI-generated marketing strategies and their broader societal implications. The findings of this study provide a roadmap for developing more equitable AI systems, highlighting the need for ongoing bias detection and mitigation efforts in LLMs.
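The abstract's categorization of key terms into four thematic groups could be sketched as simple keyword matching; the keyword lists below are illustrative assumptions, not the paper's actual term lists:

```python
# Minimal sketch of assigning a slogan to one of the four thematic groups
# named in the abstract via keyword matching. Keyword sets are invented for
# illustration; the paper's real term lists are not reproduced here.
THEME_KEYWORDS = {
    "empowerment": {"empower", "confidence", "control", "independence"},
    "financial": {"invest", "savings", "wealth", "portfolio", "returns"},
    "benefits and features": {"rewards", "cashback", "fees", "rates"},
    "personalization": {"tailored", "personalized", "custom"},
}

def classify_slogan(slogan):
    """Return the theme whose keywords match the slogan most often."""
    words = slogan.lower().split()
    scores = {
        theme: sum(w.strip(".,!?") in keywords for w in words)
        for theme, keywords in THEME_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

classify_slogan("Take control of your future: invest with confidence!")
```

The example slogan matches two empowerment keywords ("control", "confidence") against one financial keyword ("invest"), so it is labeled "empowerment". A real pipeline would likely use stemming or embedding similarity rather than exact matches.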
Problem

Research questions and friction points this paper is trying to address.

Detecting bias in AI-generated marketing slogans
Assessing demographic-based biases in LLMs
Developing equitable AI systems for marketing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detecting bias in LLM outputs
Using KS test for bias assessment
Tailored ads for demographic categories
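The KS-test step listed above can be sketched in pure Python: the two-sample Kolmogorov–Smirnov statistic is the maximum gap between the empirical CDFs of two samples. In practice one would use `scipy.stats.ks_2samp`, which also returns a p-value; this minimal version shows only the statistic, and the sample values are illustrative.

```python
# Two-sample Kolmogorov-Smirnov statistic: the maximum absolute difference
# between the empirical CDFs of two samples. A sketch of the statistical
# comparison described above, not the paper's implementation.
import bisect

def ks_statistic(sample_a, sample_b):
    """Max absolute ECDF gap between two one-dimensional samples."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # Fraction of observations <= x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    # The ECDF gap can only change at observed values, so check only those.
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a) | set(b)))

# Hypothetical per-slogan "empowerment term" counts for a targeted group vs.
# general-audience slogans; a large statistic indicates distinct theming.
group_counts = [0, 1, 1, 2, 2, 3]
baseline_counts = [2, 3, 3, 4, 4, 5]
d = ks_statistic(group_counts, baseline_counts)
```

Identical samples yield a statistic of 0 and fully separated samples yield 1, so the value is directly interpretable as distributional divergence between a demographic group's slogans and the general baseline.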