A Large-Scale Empirical Analysis of Custom GPTs' Vulnerabilities in the OpenAI Ecosystem

📅 2025-05-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses pervasive security vulnerabilities in custom GPTs within the OpenAI ecosystem through the first large-scale empirical risk assessment. We conducted automated red-teaming and multi-metric quantitative analysis on 14,904 custom GPTs from the OpenAI Store, systematically identifying seven exploitable threat categories—including role-playing attacks, system prompt leakage, and phishing content generation—and establishing a multidimensional risk ranking framework. Our analysis reveals, for the first time, a strong positive correlation between the prevalence of high-risk vulnerabilities (e.g., role-playing exploitation: 96.51%, system prompt leakage: 92.20%, phishing generation: 91.22%) and their adoption frequency across the customization layer—demonstrating that foundational model security flaws are not only inherited but amplified in downstream customizations. Critically, over 95% of evaluated models lack even basic safeguards, underscoring the urgent need for robust security governance in custom AI deployment scenarios.

📝 Abstract
Millions of users leverage generative pretrained transformer (GPT)-based language models developed by leading model providers for a wide range of tasks. To support enhanced user interaction and customization, many platforms, such as OpenAI, now enable developers to create and publish tailored model instances, known as custom GPTs, via dedicated repositories or application stores. These custom GPTs empower users to browse and interact with specialized applications designed to meet specific needs. However, as custom GPTs see growing adoption, concerns regarding their security vulnerabilities have intensified. Existing research on these vulnerabilities remains largely theoretical, often lacking empirical, large-scale, and statistically rigorous assessments of associated risks. In this study, we analyze 14,904 custom GPTs to assess their susceptibility to seven exploitable threats, such as roleplay-based attacks, system prompt leakage, phishing content generation, and malicious code synthesis, across various categories and popularity tiers within the OpenAI marketplace. We introduce a multi-metric ranking system to examine the relationship between a custom GPT's popularity and its associated security risks. Our findings reveal that over 95% of custom GPTs lack adequate security protections. The most prevalent vulnerabilities are roleplay-based attacks (96.51%), system prompt leakage (92.20%), and phishing content generation (91.22%). Furthermore, we demonstrate that OpenAI's foundational models exhibit inherent security weaknesses, which are often inherited or amplified in custom GPTs. These results highlight the urgent need for enhanced security measures and stricter content moderation to ensure the safe deployment of GPT-based applications.
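The abstract describes aggregating per-threat red-teaming outcomes into a multi-metric risk score per custom GPT. A minimal sketch of how such a score could be computed is shown below; the category names listed, the binary pass/fail outcomes, and the equal default weighting are all assumptions for illustration, since the paper's actual metrics and weights are not given here.

```python
# Hypothetical multi-metric risk score for one custom GPT.
# Four of the paper's seven threat categories are named in the abstract;
# the remaining three are omitted rather than guessed.
THREAT_CATEGORIES = [
    "roleplay_attack",
    "system_prompt_leakage",
    "phishing_generation",
    "malicious_code_synthesis",
    # ... remaining categories from the paper's taxonomy
]

def risk_score(findings, weights=None):
    """Aggregate per-category red-teaming outcomes (True = exploitable)
    into a single risk score in [0, 1]."""
    cats = list(findings)
    if weights is None:
        weights = {c: 1.0 for c in cats}  # assumed equal weighting
    total = sum(weights[c] for c in cats)
    return sum(weights[c] for c in cats if findings[c]) / total

# Example: a GPT found susceptible to three of four probed threats.
gpt = {
    "roleplay_attack": True,
    "system_prompt_leakage": True,
    "phishing_generation": True,
    "malicious_code_synthesis": False,
}
print(risk_score(gpt))  # 0.75
```

Scores like this could then be ranked against popularity metrics (e.g., store ratings or conversation counts) to probe the popularity-risk correlation the study reports.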
Problem

Research questions and friction points this paper is trying to address.

Assessing security vulnerabilities in 14,904 custom GPTs
Analyzing prevalence of exploitable threats like prompt leakage
Identifying inherent security weaknesses in OpenAI's foundational models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzed 14,904 custom GPTs for vulnerabilities
Introduced multi-metric ranking system for risks
Found over 95% lack adequate security protections