Magic Lens: Vibing a Disney-Like Experience

2026 · AI Vibe Coding · Immersive Mobile · Personal

Designed and shipped an AI-powered immersive character experience proof of concept — camera, voice, visual effects — running in a mobile browser. Built solo in 12 hours across 3 days.

The Proof of Concept

Disney's most immersive experiences cost billions and require a guest to travel to them. I wanted to know what the minimum viable version of that magic looked like — on a phone, in a browser, with no engineering team.

Magic Lens is the answer: point your camera at any environment and a cast of AI characters narrate what they see, in voice, with animated sprites overlaid on the live camera feed. And there’s even a lightweight storyline.

Impact

Concept to deployed, shareable web app in 12 hours across 3 days

Zero engineering team — designed, built, and debugged solo with AI assistance

Full pipeline working on iPhone: camera → Claude vision → character narration → ElevenLabs voice → canvas effects

Established a repeatable AI design-to-code workflow across Figma, Cursor, and Claude
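The pipeline above can be sketched as a chain of injected stage functions. Everything here — the type names, the stage signatures — is illustrative, not the project's actual code; the shape simply mirrors the camera → vision → narration → voice → canvas flow described in the bullet.

```typescript
// Hypothetical sketch of the Magic Lens per-frame pipeline.
// Each stage is injected so the flow can be exercised with stubs;
// all names and types are invented for illustration.

type SceneDescription = { summary: string };
type Narration = { character: string; line: string };

interface PipelineStages {
  captureFrame: () => Promise<string>;                          // base64 camera frame
  describeScene: (frame: string) => Promise<SceneDescription>;  // Claude vision call
  narrate: (scene: SceneDescription) => Promise<Narration>;     // in-character line
  speak: (n: Narration) => Promise<void>;                       // ElevenLabs TTS
  renderSprite: (n: Narration) => void;                         // canvas overlay
}

async function runMagicLensTick(stages: PipelineStages): Promise<Narration> {
  const frame = await stages.captureFrame();
  const scene = await stages.describeScene(frame);
  const narration = await stages.narrate(scene);
  stages.renderSprite(narration); // show the sprite while audio plays
  await stages.speak(narration);
  return narration;
}
```

Injecting the stages is what makes a one-person build debuggable: each hop (vision, narration, voice) can be swapped for a stub when chasing a failure in the others.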

The Problem

The obvious version of this demo would be embarrassing.

An AI that looks at a photo and says "I see a desk, a lamp, and some books" is a novelty, not an experience. The design challenge wasn't technical — it was avoiding that trap. Disney's Imagineers don't label environments; they interpret them through character and story. A park bench isn't furniture; it's where a character might rest on their journey. That's the bar this project was trying to clear.

The harder question: could a single person, with no backend engineering background, build something that clears it — using AI as both the creative medium and the production tool?

The live experience: Claude narrates the scene in character, the voice plays through AirPods, and an animated sprite appears over the camera feed. It's all tied into a lightweight storyline that brings meaning and purpose to each scene and inspires the user to keep exploring.

My Approach

Framed

Started with the experience design question, not the technical one. Four character archetypes — each with a distinct voice, visual signature, and narrative style — selected by AI based on what the camera sees. Context-adaptive narration, shifting tone the way Disney transitions between lands.
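The "selected by AI based on what the camera sees" step can be sketched as a pure mapping from scene description to archetype. The archetype names and keyword cues below are invented for illustration — the case study doesn't name the four characters, and in practice the vision model itself can make this call rather than a keyword table.

```typescript
// Hypothetical sketch of context-based character selection.
// Archetype names and cue words are placeholders, not the real cast.

type Archetype = "explorer" | "jester" | "sage" | "sprite";

const cues: Record<Archetype, string[]> = {
  explorer: ["outdoor", "trail", "park", "street"],
  jester: ["toys", "clutter", "colorful"],
  sage: ["books", "desk", "library"],
  sprite: ["garden", "plants", "water"],
};

function selectArchetype(sceneSummary: string): Archetype {
  const text = sceneSummary.toLowerCase();
  for (const [archetype, words] of Object.entries(cues) as [Archetype, string[]][]) {
    if (words.some((w) => text.includes(w))) return archetype;
  }
  return "explorer"; // default narrator when nothing matches
}
```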

Built

Generated the full stack with AI assistance: Next.js app, Claude Vision API integration, ElevenLabs TTS pipeline, canvas-based particle effects, animated character sprites. Deployed to Vercel from day one so testing happened on the real target, not a simulator.

Tested

Took it outdoors. Real-world use revealed what desk testing missed: voice drop-outs when ElevenLabs quota ran out, two-call latency breaking conversational rhythm, visible controls that invited wrong interactions. Each finding drove a design change.

Shipped

Redesigned the UI in Figma, translated it to code through a three-tool pipeline (Figma → Cursor → Claude), and diagnosed a production deployment failure caused by a serverless bundling constraint that only appeared on Vercel — invisible in local testing.
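The case study doesn't name the exact bundling constraint, so this is purely illustrative: one common class of Vercel-only failure is a server-side dependency that works locally but breaks once bundled into a serverless function, and Next.js exposes a config escape hatch for it (option name and placement vary by Next.js version).

```javascript
// Illustrative next.config.js — "some-native-package" is a placeholder,
// not this project's actual dependency, and the real fix may differ.
/** @type {import('next').NextConfig} */
const nextConfig = {
  // In Next.js 15+, keep the named package external to the serverless
  // bundle instead of letting the tracer bundle (and break) it.
  serverExternalPackages: ["some-native-package"],
};

module.exports = nextConfig;
```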

Before and after the Figma redesign: emoji buttons replaced with SVG icons, character selector hidden, controls minimal enough to disappear into the experience.

The Outcome

A working proof of concept demonstrating that AI can function as a medium for immersive narrative experience — not just a utility. Four Disney-inspired characters self-select based on scene context, narrate in ElevenLabs voice through AirPods, and appear as animated sprites over the live camera feed. It runs in Safari. No app install. No special hardware. Just a URL.

More practically: a demonstration that one designer, fluent in current AI tools, can design, build, debug, and ship experiences of this complexity in days, not months. The workflow itself — moving fluidly between Figma, Cursor, and Claude, understanding what each tool does best and where the handoffs belong — is as much the deliverable as the app.

From 100% Vercel error rate to 0% — diagnosing a serverless bundling constraint that worked locally and failed in production.

The Insight

The most interesting finding wasn't technical. When AI tools generate code from a design, they don't just translate — they interpret. The Figma spec showed six buttons floating over the camera feed with no container. The AI added a frosted-glass bar that wasn't there. The addition was arguably an improvement. But it wasn't the design.

That moment captures something important about where AI-assisted design is heading. The designer's review role is shifting from "does this match the spec?" to "is the AI's interpretation better or worse than my intent?" That's a different skill than traditional QA — and one that will matter more as these tools mature. The designer who can evaluate AI creative departures, not just verify compliance, is the one who stays in control of the work.