WhatsAI: Transforming Meta Ray-Bans into an Extensible Generative AI Platform for Accessibility

📅 2025-05-14
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Problem: Proprietary wearable multimodal AI systems (e.g., Meta Ray-Bans) lack openness and modifiability, preventing blind and visually impaired (BVI) developers from leading accessible innovation. Method: We introduce the first open-source, hackable full-stack framework tailored for BVI developers, transforming the Ray-Bans into an edge-cloud collaborative generative AI platform. It combines lightweight vision-language models (VLMs) with traditional ML models for real-time scene description, object detection, and OCR, and pioneers the Accessible Artificial Intelligence Implementations (AAII) paradigm, which uses WhatsApp for low-barrier, voice-first interaction. Contribution/Results: The system achieves end-to-end latency under 1.2 seconds and delivers speech feedback accurate enough for practical use. It has already catalyzed several BVI-led derivative application prototypes, advancing community-driven democratization of visual accessibility technologies.
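The summary does not name the specific models used. As a rough illustration of the scene-description step, the sketch below runs a lightweight captioning VLM over a single frame; BLIP and the frame.jpg path are illustrative stand-ins, not choices confirmed by the paper.

```python
# Minimal sketch: describe one camera frame with a lightweight
# captioning VLM. BLIP is a stand-in; the paper's exact models
# are not named in this summary.
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

frame = Image.open("frame.jpg")  # hypothetical frame captured from the glasses
inputs = processor(images=frame, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(output_ids[0], skip_special_tokens=True)
print(caption)  # e.g., "a kitchen counter with a kettle"
```

In a voice-first flow like the one described above, the caption string would then be sent back to the user as a WhatsApp message and read aloud by their screen reader.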

📝 Abstract
Multi-modal generative AI models integrated into wearable devices have shown significant promise in enhancing the accessibility of visual information for blind or visually impaired (BVI) individuals, as evidenced by the rapid uptake of Meta Ray-Bans among BVI users. However, the proprietary nature of these platforms hinders disability-led innovation of visual accessibility technologies. For instance, OpenAI showcased the potential of live, multi-modal AI as an accessibility resource in 2024, yet none of the presented applications have reached BVI users, despite the technology being available since then. To promote the democratization of visual access technology development, we introduce WhatsAI, a prototype extensible framework that empowers BVI enthusiasts to leverage Meta Ray-Bans to create personalized wearable visual accessibility technologies. Our system is the first to offer a fully hackable template that integrates with WhatsApp, facilitating robust Accessible Artificial Intelligence Implementations (AAII) that enable blind users to conduct essential visual assistance tasks, such as real-time scene description, object detection, and Optical Character Recognition (OCR), utilizing standard machine learning techniques and cutting-edge visual language models. The extensible nature of our framework aspires to cultivate a community-driven approach, led by BVI hackers and innovators to tackle the complex challenges associated with visual accessibility.
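Of the visual assistance tasks the abstract lists, OCR is the one most commonly handled by standard off-the-shelf tooling. The sketch below shows one plausible shape for that step; pytesseract is an illustrative stand-in, as the abstract does not name the paper's actual OCR engine.

```python
# Minimal sketch of the OCR task named in the abstract, assuming a
# standard engine. pytesseract and "frame.jpg" are illustrative
# stand-ins; the Tesseract binary must be installed separately.
from PIL import Image
import pytesseract

frame = Image.open("frame.jpg")            # hypothetical captured frame
text = pytesseract.image_to_string(frame)  # run Tesseract OCR on the frame
print(text.strip() or "No text detected.")
```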
Problem

Research questions and friction points this paper is trying to address.

Overcoming proprietary barriers in wearable AI for BVI accessibility
Enabling BVI-led innovation in visual assistance technologies
Democratizing development of customizable wearable visual aids
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extensible framework for Meta Ray-Bans customization
Integration with WhatsApp for accessible AI implementations (see the sketch after this list)
Community-driven approach led by BVI innovators
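To make the WhatsApp integration concrete, the sketch below shows a minimal webhook loop over the WhatsApp Business Cloud API: an inbound image triggers a description reply, and text messages are echoed. TOKEN, PHONE_NUMBER_ID, and describe_image() are hypothetical placeholders; the paper's actual integration details are not given in this summary.

```python
# Minimal sketch of a WhatsApp-mediated assistance loop, assuming the
# WhatsApp Business Cloud API. Credentials and describe_image() are
# hypothetical placeholders, not the paper's implementation.
import requests
from flask import Flask, request

app = Flask(__name__)
TOKEN = "YOUR_ACCESS_TOKEN"         # hypothetical credential
PHONE_NUMBER_ID = "YOUR_NUMBER_ID"  # hypothetical sender id

def send_text(to: str, body: str) -> None:
    """Reply to a user over the WhatsApp Cloud API."""
    requests.post(
        f"https://graph.facebook.com/v19.0/{PHONE_NUMBER_ID}/messages",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"messaging_product": "whatsapp", "to": to,
              "type": "text", "text": {"body": body}},
        timeout=10,
    )

def describe_image(media_id: str) -> str:
    # Stub: a full system would download the media by id and run a
    # captioning pipeline like the one sketched earlier.
    return "Image received; description pipeline not wired up in this sketch."

@app.post("/webhook")
def webhook():
    # Pull inbound messages out of the webhook payload and reply.
    value = request.json["entry"][0]["changes"][0]["value"]
    for msg in value.get("messages", []):
        if msg["type"] == "image":
            send_text(msg["from"], describe_image(msg["image"]["id"]))
        elif msg["type"] == "text":
            send_text(msg["from"], f"Received: {msg['text']['body']}")
    return "ok", 200
```

A real deployment would also handle WhatsApp's GET verification handshake and validate webhook signatures; both are omitted here for brevity.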