
GuidePup

AI-powered vision assistant helping visually impaired users understand their surroundings through real-time camera analysis and voice feedback.

Status: App Store Review · Started: Oct 2025 · Lead: Charlie Han

🎯 The Problem

Over 2.2 billion people worldwide have vision impairment. Many rely on others to describe their surroundings, read signs, or navigate unfamiliar spaces.

Limited Independence

Daily tasks like reading menus, identifying products, or navigating new places require assistance.

Existing Tools Fall Short

Most apps only do OCR or object detection—they don't provide contextual understanding.

Cost Barriers

Professional assistive devices can cost thousands of dollars.

👥 Users & Impact

  • 2.2B: people with vision impairment globally
  • Free: open-source, no subscription
  • Real-time: instant voice feedback

Target Users

  • Visually impaired individuals seeking daily independence
  • Elderly users with declining vision
  • Caregivers and accessibility advocates
  • Schools and organizations serving blind communities

🎬 Demo

🌐 Web demo coming soon

Try GuidePup in your browser, no install required. The web demo is currently in development.

Video Walkthrough

▶️ Demo video coming soon

A 2-3 minute walkthrough of core features.


🚀 How to Run

📱 Install via TestFlight

  1. Install TestFlight from the App Store
  2. Open the invite link (coming soon)
  3. Tap "Accept" and then "Install"
  4. Grant camera and microphone permissions when prompted
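
💻 From Source

A standard Expo workflow should apply; the exact steps below are assumptions, since this page doesn't list the repository link.

  1. Clone the repository and open it in a terminal
  2. Install dependencies with npm install
  3. Start the dev server with npx expo start
  4. Press i for the iOS simulator, or scan the QR code with the Expo Go app
  5. Enter your OpenAI API key in the app's settings (it stays on device)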

🏗️ Architecture

┌─────────────────────────────────────────────────────────────┐
│                        GuidePup App                         │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐  │
│  │   Camera     │───▶│   Frame      │───▶│   GPT-4      │  │
│  │   Module     │    │   Capture    │    │   Vision     │  │
│  └──────────────┘    └──────────────┘    └──────────────┘  │
│                                                  │          │
│                                                  ▼          │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐  │
│  │   Audio      │◀───│   Text-to-   │◀───│   Response   │  │
│  │   Output     │    │   Speech     │    │   Parser     │  │
│  └──────────────┘    └──────────────┘    └──────────────┘  │
│                                                             │
├─────────────────────────────────────────────────────────────┤
│  Storage: AsyncStorage │ API: OpenAI │ Platform: iOS/Expo  │
└─────────────────────────────────────────────────────────────┘
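
The capture → describe → speak loop above is small enough to sketch end to end. The TypeScript below is a minimal illustration, not the app's actual code: the helper name describeFrame, the prompt wording, and the model name are assumptions, while the OpenAI Chat Completions payload shape and the expo-speech call are standard APIs. A frame would typically come from expo-camera, e.g. cameraRef.current?.takePictureAsync({ base64: true }).

```typescript
import * as Speech from "expo-speech";

// Illustrative helper: send one base64 JPEG frame to a GPT-4-class
// vision model and speak the returned description aloud.
async function describeFrame(base64Jpeg: string, apiKey: string): Promise<void> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o", // assumed; any vision-capable GPT-4 model fits here
      max_tokens: 120,
      messages: [
        {
          role: "user",
          content: [
            {
              type: "text",
              // Prompt wording is an assumption, not GuidePup's real prompt.
              text: "Describe this scene for a blind user in one or two short sentences. Focus on layout, people, and anything actionable.",
            },
            {
              type: "image_url",
              image_url: { url: `data:image/jpeg;base64,${base64Jpeg}` },
            },
          ],
        },
      ],
    }),
  });

  const data = await response.json();
  const description: string = data.choices[0].message.content;

  // Read the result aloud (see "Voice Feedback" below for options).
  Speech.speak(description);
}
```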
                            

Tech Stack

React Native · Expo · GPT-4 Vision · iOS Speech · TypeScript

✨ Key Features

📷 Real-time Analysis

Point your camera and get instant AI-powered descriptions of your surroundings.

🗣️ Voice Feedback

Natural text-to-speech reads descriptions aloud. Adjustable speed and voice.
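
With expo-speech (consistent with the iOS Speech entry in the tech stack), speed and voice map directly onto the rate and voice options. A minimal sketch; the function name is illustrative, and real voice identifiers come from Speech.getAvailableVoicesAsync().

```typescript
import * as Speech from "expo-speech";

// Speak a description with a user-chosen rate and (optionally) voice.
function speakDescription(text: string, rate: number, voiceId?: string): void {
  Speech.stop(); // interrupt any utterance still playing
  Speech.speak(text, {
    rate,           // 1.0 is normal speed
    voice: voiceId, // undefined falls back to the system default voice
    language: "en-US",
  });
}
```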

🎯 Context Aware

Understands scenes, not just objects. "A crowded coffee shop with an empty table near the window."

♿ Accessibility First

Built with VoiceOver support, high contrast, and large touch targets.
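
In React Native, VoiceOver support comes down to the standard accessibility props. A hypothetical capture button, assuming nothing about GuidePup's actual components:

```tsx
import React from "react";
import { Pressable, Text, StyleSheet } from "react-native";

// Hypothetical capture button: VoiceOver-labeled, high contrast,
// and well above Apple's 44x44pt minimum touch target.
export function CaptureButton({ onPress }: { onPress: () => void }) {
  return (
    <Pressable
      onPress={onPress}
      accessible
      accessibilityRole="button"
      accessibilityLabel="Describe surroundings"
      accessibilityHint="Takes a photo and reads a description aloud"
      style={styles.button}
    >
      <Text style={styles.label}>Describe</Text>
    </Pressable>
  );
}

const styles = StyleSheet.create({
  button: {
    minHeight: 88,
    justifyContent: "center",
    alignItems: "center",
    backgroundColor: "#000", // high contrast against the white label
    borderRadius: 12,
  },
  label: { color: "#fff", fontSize: 24, fontWeight: "bold" },
});
```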

📴 Offline History

Review past descriptions offline. Useful for remembering locations or items.
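
The architecture diagram lists AsyncStorage, so history plausibly persists as a small JSON list. A sketch; the storage key and entry shape are assumptions.

```typescript
import AsyncStorage from "@react-native-async-storage/async-storage";

// Hypothetical shape and storage key for saved descriptions.
interface HistoryEntry {
  text: string;
  timestamp: number;
}

const HISTORY_KEY = "guidepup.history"; // assumed key name

// Prepend the newest description so loadHistory() returns most recent first.
async function saveDescription(text: string): Promise<void> {
  const raw = await AsyncStorage.getItem(HISTORY_KEY);
  const history: HistoryEntry[] = raw ? JSON.parse(raw) : [];
  history.unshift({ text, timestamp: Date.now() });
  await AsyncStorage.setItem(HISTORY_KEY, JSON.stringify(history));
}

async function loadHistory(): Promise<HistoryEntry[]> {
  const raw = await AsyncStorage.getItem(HISTORY_KEY);
  return raw ? JSON.parse(raw) : [];
}
```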

🔒 Privacy Focused

Images are processed in real time and never stored. Your API key stays on your device.
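
For the key itself, Expo's SecureStore (the device keychain) is a natural fit; whether GuidePup uses it rather than AsyncStorage is an assumption, as is the key name below.

```typescript
import * as SecureStore from "expo-secure-store";

const API_KEY_STORAGE = "guidepup.openai_api_key"; // assumed key name

// Persist the user's OpenAI key in the device keychain; it never leaves
// the device except in the Authorization header of direct API calls.
async function saveApiKey(key: string): Promise<void> {
  await SecureStore.setItemAsync(API_KEY_STORAGE, key);
}

async function getApiKey(): Promise<string | null> {
  return SecureStore.getItemAsync(API_KEY_STORAGE);
}
```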

📊 Metrics & Results

  • ~2s: average response time
  • 95%: scene accuracy (internal testing)
  • 1: App Store submission
  • 100%: VoiceOver compatible

Testing Notes

Tested with 5 visually impaired beta users. Key feedback: "Finally an app that tells me what's happening, not just what objects are in frame."

🗺️ Roadmap

v1.0 — Core App

Camera capture, GPT-4 Vision integration, voice output

v1.1 — App Store Submission

Polish UI, accessibility audit, submit for review

v1.2 — Beta Testing

TestFlight rollout, gather user feedback

v2.0 — Navigation Mode

Step-by-step guidance, obstacle detection

v2.1 — Android

Port to Android, expand reach

🙏 Credits

  • Project Lead: Charlie Han
  • Development: Atrak Team
  • AI/ML: OpenAI GPT-4 Vision API
  • Framework: React Native + Expo

Special thanks to our beta testers from the blind community who provided invaluable feedback.