Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online

📅 2024-08-15
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the growing online trust crisis and the privacy risks posed by AI-generated synthetic identities, this paper proposes personhood credentials (PHCs): a privacy-preserving, non-biometric digital credential framework that can be deployed locally or globally. PHCs build on anonymous credentials and zero-knowledge proofs and are issued by a range of trusted institutions, letting users cryptographically attest that they are real people without revealing personally identifiable information. Unlike CAPTCHAs, which are vulnerable to increasingly capable AI, or mandatory real-name registration, which compromises user privacy, PHCs balance anonymity with verifiable authenticity. The paper makes the case for PHCs along three dimensions: mitigating AI-driven identity abuse, strengthening platform-level trust, and safeguarding user privacy. It closes with a coordinated implementation roadmap spanning policy frameworks, technical infrastructure, and interoperability standards.
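The flow the summary describes — a trusted issuer attests to a person's humanity, and the person later proves it to services without being linkable across them — can be sketched in a few lines of Python. This is a toy illustration with hypothetical names (`issue_credential`, `service_pseudonym`, etc.), not the paper's protocol: a real PHC system would use blind/anonymous-credential signatures and zero-knowledge proofs, whereas the HMAC stand-in below requires the verifier to share the issuer's secret key and reveals the commitment on verification.

```python
import hashlib
import hmac
import secrets

# Toy stand-in for the issuer's signing key. In a real anonymous-credential
# scheme the issuer publishes a verification key and signs blindly; here the
# same secret both issues and verifies, which a real deployment would not do.
ISSUER_KEY = secrets.token_bytes(32)


def make_commitment(user_secret: bytes) -> bytes:
    """User-side commitment to a private secret (hypothetical construction)."""
    return hashlib.sha256(b"phc-commit|" + user_secret).digest()


def issue_credential(person_commitment: bytes) -> bytes:
    """Issuer attests: 'this commitment belongs to a verified real person'.
    Issued once, after out-of-band personhood verification."""
    return hmac.new(ISSUER_KEY, person_commitment, hashlib.sha256).digest()


def verify_credential(person_commitment: bytes, tag: bytes) -> bool:
    """Service-side check of the issuer's attestation. A real PHC would verify
    a zero-knowledge proof instead of seeing the commitment directly."""
    expected = hmac.new(ISSUER_KEY, person_commitment, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


def service_pseudonym(user_secret: bytes, service_id: str) -> bytes:
    """Per-service pseudonym: stable within one service, so the service can
    rate-limit one credential per person, but unlinkable across services."""
    return hashlib.sha256(b"phc-pseud|" + user_secret + service_id.encode()).digest()
```

The per-service pseudonym is the piece that reconciles anonymity with accountability: a platform can enforce "one account per person" using the pseudonym alone, while two platforms comparing pseudonyms learn nothing about whether they share a user.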

📝 Abstract
Anonymity is an important principle online. However, malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations, intensifying the challenge of balancing anonymity and trustworthiness online. In this paper, we analyze the value of a new tool to address this challenge: "personhood credentials" (PHCs), digital credentials that empower users to demonstrate that they are real people -- not AIs -- to online services, without disclosing any personal information. Such credentials can be issued by a range of trusted institutions -- governments or otherwise. A PHC system, according to our definition, could be local or global, and does not need to be biometrics-based. Two trends in AI contribute to the urgency of the challenge: AI's increasing indistinguishability from people online (i.e., lifelike content and avatars, agentic activity), and AI's increasing scalability (i.e., cost-effectiveness, accessibility). Drawing on a long history of research into anonymous credentials and "proof-of-personhood" systems, personhood credentials give people a way to signal their trustworthiness on online platforms, and offer service providers new tools for reducing misuse by bad actors. In contrast, existing countermeasures to automated deception -- such as CAPTCHAs -- are inadequate against sophisticated AI, while stringent identity verification solutions are insufficiently private for many use-cases. After surveying the benefits of personhood credentials, we also examine deployment risks and design challenges. We conclude with actionable next steps for policymakers, technologists, and standards bodies to consider in consultation with the public.
Problem

Research questions and friction points this paper is trying to address.

Privacy Protection
AI Identity Verification
Cybersecurity
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-Based Authentication
Privacy Protection
Digital Certification
Steven Adler
OpenAI
Zoe Hitzig
Harvard Society of Fellows
Shrey Jain
Microsoft
Catherine Brewer
University of Oxford
Wayne Chang
SpruceID
Renée DiResta
Independent Researcher, SpruceID
Eddy Lazzarin
a16z crypto
Sean McGregor
UL Research Institutes
Wendy Seltzer
Tucows
Divya Siddarth
Massachusetts Institute of Technology
Nouran Soliman
Massachusetts Institute of Technology
Tobin South
Massachusetts Institute of Technology
Connor Spelliscy
Decentralization Research Center
Manu Sporny
Digital Bazaar
Varya Srivastava
University of Oxford
John Bailey
American Enterprise Institute
Brian Christian
University of Oxford
Artificial Intelligence, Machine Learning, Cognitive Science, Computational Neuroscience
Andrew Critch
UC Berkeley, Department of Electrical Engineering and Computer Sciences
Mathematics, Statistics, Artificial Intelligence, Machine Learning
Ronnie Falcon
OpenMined
Heather Flanagan
Independent Researcher, SpruceID
Kim Hamilton Duffy
Decentralized Identity Foundation
Eric Ho
Goodfire
Claire Leibowicz
Head of AI and Media Integrity, Partnership on AI; PhD Candidate, Oxford Internet Institute
Responsible AI, Synthetic and Manipulated Media, Digital Culture
Srikanth Nadhamuni
eGovernments Foundation
A. Rozenshtein
University of Minnesota Law School
David Schnurr
OpenAI
Evan Shapiro
Mina Foundation
Lacey Strahm
OpenMined
Andrew Trask
University of Oxford and OpenMined
Deep Learning, Differential Privacy, Secure Multi-Party Computation, Federated Learning, Natural Language Processing
Zoe Weinberg
ex/ante
Cedric Whitney
School of Information, University of California, Berkeley
Tom Zick
Harvard
Law and Technology, Ethical AI, Reinforcement Learning