🤖 AI Summary
Generative AI (GAI) systems frequently omit or erase salient identity attributes—such as gender and disability—in image captioning, inflicting systemic representational harm on indirect users (e.g., image owners) who neither operate the GAI nor consent to such erasure.
Method: This paper reframes prompt injection not as an adversarial exploit but as an empowerment mechanism, proposing lightweight identity directives embedded directly into image metadata. These directives explicitly instruct GAI models to preserve the owner’s gender and disability identities during caption generation. The approach requires no platform-level intervention or model fine-tuning, enabling marginalized users to assert self-representation through autonomous content annotation.
Contribution/Results: Empirical case studies demonstrate significant improvements in both accuracy and visibility of identity attributes in generated captions. The method enhances representational agency and bias resistance for indirect users within AI-mediated content ecosystems, establishing a novel, accessible pathway for equitable AI participation.
📝 Abstract
Generative AI risks such as bias and lack of representation impact people who do not interact directly with GAI systems, but whose content does: indirect users. Several approaches to mitigating harms to indirect users have been described, but most require top down or external intervention. An emerging strategy, prompt injections, provides an empowering alternative: indirect users can mitigate harm against them, from within their own content. Our approach proposes prompt injections not as a malicious attack vector, but as a tool for content/image owner resistance. In this poster, we demonstrate one case study of prompt injections for empowering an indirect user, by retaining an image owner's gender and disabled identity when an image is described by GAI.