OpenClaw on Meta Glasses helps you control your AI agent using voice commands directly from your smart glasses.
This guide walks you through the complete setup, from configuration to live usage, in a simple, step-by-step process.
What is OpenClaw on Meta Glasses?
OpenClaw on Meta Glasses is a setup where:
- Glasses capture what you see and hear
- AI understands your request
- OpenClaw performs the action
You speak → AI understands → Task gets done
System Architecture Overview
| Component | Role |
|---|---|
| Meta Glasses | Capture voice and video |
| VisionClaw App | Connects everything |
| AI (Gemini) | Understands commands |
| OpenClaw | Performs actions |
System Requirements
| Requirement | Details |
|---|---|
| Phone | Android or iPhone |
| OS | Android 14+ / iOS 17+ |
| AI | Gemini API key |
| Backend | OpenClaw (hosted or local) |
Step-by-Step Guide
Get Gemini API Key
Required for AI processing.
- Go to Google AI Studio
- Sign in
- Create API key
- Copy and save it
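Before wiring the key into the app, you can confirm it works by listing the models it has access to via the public Gemini REST API. The `GEMINI_API_KEY` environment variable name below is just a convention, not something the tooling requires:

```shell
# Reads the key from an environment variable so it never appears in shell history.
GEMINI_API_KEY="${GEMINI_API_KEY:-}"
MODELS_URL="https://generativelanguage.googleapis.com/v1beta/models?key=${GEMINI_API_KEY}"

if [ -n "$GEMINI_API_KEY" ]; then
  # A valid key returns a JSON list of models; a bad key returns an error object.
  curl -s "$MODELS_URL"
else
  echo "Set GEMINI_API_KEY first: export GEMINI_API_KEY=your-key"
fi
```

If the response is a JSON error rather than a model list, regenerate the key in Google AI Studio before continuing.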
Set Up OpenClaw
You have two options:
Option 1: Easy Method (Recommended)
Use a hosted platform like Ampere.sh:
- Go to Ampere.sh and create an account
- Deploy OpenClaw from the dashboard
- Copy your API endpoint and gateway token
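Once deployed, a quick terminal check confirms the endpoint is reachable with your token. The URL, the `Authorization` header scheme, and the `/health` path below are all placeholders; use whatever your Ampere.sh dashboard actually shows:

```shell
# Substitute the endpoint and token shown on your Ampere.sh dashboard.
OPENCLAW_ENDPOINT="https://your-deployment.ampere.sh"
GATEWAY_TOKEN="your-gateway-token"

if [ "$GATEWAY_TOKEN" = "your-gateway-token" ]; then
  echo "Edit OPENCLAW_ENDPOINT and GATEWAY_TOKEN above, then re-run."
else
  # -i prints the response headers, so an auth failure (401/403) is easy to spot.
  curl -si -H "Authorization: Bearer ${GATEWAY_TOKEN}" "${OPENCLAW_ENDPOINT}/health" | head -n 1
fi
```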
Option 2: Manual Local Setup
If you want full control:
```shell
npm install -g openclaw
openclaw setup
openclaw gateway start
```

For gateway connection help, see the OpenClaw Gateway pairing guide.
- Port: 18789
- Enable gateway
- Use same Wi-Fi as phone
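After `openclaw gateway start`, you can verify from another terminal that something is actually listening on port 18789. This is a plain netcat port probe, not an OpenClaw-specific endpoint:

```shell
PORT=18789
MSG=""

if ! command -v nc >/dev/null 2>&1; then
  MSG="install nc (netcat) to run this check"
elif nc -z localhost "$PORT" 2>/dev/null; then
  # nc -z probes the port without sending data; exit 0 means something accepted the connection.
  MSG="gateway reachable on port $PORT"
else
  MSG="nothing listening on port $PORT - is the gateway running?"
fi

echo "$MSG"
```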
Install VisionClaw App
You can install VisionClaw on both Android and iOS.
iOS Setup
```shell
git clone https://github.com/sseanliu/VisionClaw.git
cd VisionClaw/samples/CameraAccess
open CameraAccess.xcodeproj
```

- Open in Xcode
- Connect device
- Click Run
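If you prefer the command line to clicking Run, `xcodebuild` can drive the same build. The scheme name below is an assumption; run `xcodebuild -list` in the project directory to confirm it:

```shell
# The scheme name is an assumption; check it with `xcodebuild -list`.
PROJECT="CameraAccess.xcodeproj"
SCHEME="CameraAccess"

if command -v xcodebuild >/dev/null 2>&1; then
  # Builds for a generic iOS device; installing on hardware still needs signing set up in Xcode.
  xcodebuild -project "$PROJECT" -scheme "$SCHEME" -destination 'generic/platform=iOS' build
else
  echo "xcodebuild not found (requires macOS with Xcode); use the Xcode GUI instead"
fi
```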
Android Setup
```shell
git clone https://github.com/sseanliu/VisionClaw.git
```

- Open CameraAccessAndroid in Android Studio
- Add GitHub token (for SDK access)
- Build and run app
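GitHub tokens are normally supplied through a Gradle property rather than hard-coded in source. The property name `githubToken` below is a guess; check the project's Gradle files or README for the exact key they read:

```shell
# Property name is a guess; check the project's Gradle files for the exact key they read.
# Appends to the project-local gradle.properties (keep this file out of version control).
cat >> gradle.properties <<'EOF'
githubToken=YOUR_GITHUB_TOKEN
EOF
```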
Add API Keys
Open config file:
- iOS → Secrets.swift
- Android → Secrets.kt
Add:
- Gemini API Key
- OpenClaw Host URL
- OpenClaw Port (18789)
- Gateway Token
Save and rebuild app.
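On iOS the result looks something like the sketch below, written here as a shell heredoc so the whole file is visible at once. The constant names are illustrative; match whatever declarations the repo's `Secrets.swift` actually expects:

```shell
# Constant names are illustrative - mirror the declarations the repo's Secrets.swift expects.
cat > Secrets.swift <<'EOF'
let geminiAPIKey = "YOUR_GEMINI_API_KEY"
let openClawHost = "192.168.1.50"      // machine running the OpenClaw gateway
let openClawPort = 18789
let gatewayToken = "YOUR_GATEWAY_TOKEN"
EOF
```

The Android `Secrets.kt` carries the same four values in Kotlin syntax.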
Enable Developer Mode
- Open Meta View app
- Go to Settings
- Tap app version multiple times
- Enable Developer Mode
Connect Meta Glasses
- Pair glasses with Meta app
- Open VisionClaw
- Tap Start Streaming
- Tap AI button
Start Using
Speak commands naturally. Examples:
- "What am I looking at?"
- "Send a message"
- "Add reminder"
Once your agent is live, you can explore more automations and tasks. See how to automate with AI agents for ideas on what OpenClaw can do next.
Common Setup Issues
| Issue | Fix |
|---|---|
| AI not responding | Check API key |
| OpenClaw not connecting | Check host + port |
| Build error | Check SDK / dependencies |
| Mic/camera not working | Enable permissions |
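Most of the table above can be checked from one small script. It only inspects local state (key present in the environment, port open), so it confirms the pieces are in place, not that the key or token are valid. The `GEMINI_API_KEY` variable name is the same convention used earlier in this guide:

```shell
PORT=18789
REPORT=""

# Check 1: is a Gemini key present in the environment?
if [ -n "${GEMINI_API_KEY:-}" ]; then
  REPORT="gemini key: set"
else
  REPORT="gemini key: missing"
fi

# Check 2: is anything listening on the gateway port?
if command -v nc >/dev/null 2>&1 && nc -z localhost "$PORT" 2>/dev/null; then
  REPORT="$REPORT; gateway port $PORT: open"
else
  REPORT="$REPORT; gateway port $PORT: closed (or nc unavailable)"
fi

echo "$REPORT"
```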
Frequently Asked Questions
Can I install OpenClaw directly on Meta glasses?
No. The glasses only capture audio and video; OpenClaw runs on your phone (through VisionClaw) and a hosted or local backend.
Can I use OpenClaw on both Android and iPhone?
Yes. VisionClaw runs on both Android and iOS.
Is coding required to set this up?
No. The hosted option requires no coding, and the manual setup only involves a few terminal commands.
Does this work in real-time?
Yes. Commands are processed as you speak while the glasses stream to the VisionClaw app.
Is this setup safe to use?
Your API keys and gateway token live in local config files on your phone. Keep them private and out of version control.
Skip the Complex Setup
Setting up VisionClaw, Xcode, Android Studio, and API keys can be time-consuming. Use Ampere.sh to deploy OpenClaw instantly — no coding, no local server, works with your glasses right away.
Deploy on Ampere.sh →