AI, Embodied.

We asked an impossibly complex question: What would it take to free AI from its digital prison and give it a true, physical place in our world?

For too long, two parallel frontiers have defined our digital world, each waiting for the other. We’ve had “book smart” AI – a powerful brain locked in a text box, unable to see, move, or understand our world in context. And we’ve had AR, which gave us digital objects that could exist in our world, but couldn’t understand it. We’ve had a disembodied brain and a mindless body, and the true leap forward was never about making one better, but about finally fusing them.

Project Jade, a new Peridot embodied AI experience and a technological marvel, is the story of what happens when you finally fuse them. It is an experience born not from one breakthrough, but from the convergence of four distinct, cutting-edge technologies.

We didn’t just build an app. We gave AI senses.

A vision video showcasing Dot as a truly embodied, spatially-aware AI companion.

It started with giving it eyes.

The entire experience begins with cutting-edge AR hardware. Running on Snap's AR glasses, Spectacles, Project Jade gives our AI a human-like, first-person perspective. For the first time, an AI can see what you see, as you see it.

But seeing isn't understanding. Next, we gave it a sense of place.

This was the greatest challenge: teaching an AI the invisible rules of a physical space. We solved one of the hardest problems in augmented reality by fusing three technologies:
  1. Niantic's Visual Positioning System (VPS) anchors the AI to the real world with centimeter-level accuracy.
  2. A proprietary pathfinding system ingests 2D floorplans to create a 3D map, teaching the AI every possible route.
  3. Real-time semantic understanding allows it to recognize and avoid real-world obstacles and hazards.
The AI now has a body. It knows where it is, where you are, and can physically navigate complex spaces without walking through walls.
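In miniature, the floorplan-to-route step can be sketched as a classic grid search. The toy example below is our own illustrative assumption, not Niantic's proprietary pathfinding system: it treats a tiny "floorplan" as an occupancy grid and runs A* search to find a route around marked obstacles.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* over a 2D occupancy grid (0 = walkable, 1 = wall/obstacle).

    Returns a list of (row, col) cells from start to goal, or None if
    the goal is unreachable. A toy stand-in for a full
    floorplan-to-3D-map pipeline.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = count()  # tiebreaker so the heap never compares cells directly
    frontier = [(h(start), next(tie), 0, start, None)]
    came_from = {}           # cell -> predecessor on the best path
    best = {start: 0}        # cheapest known cost to reach each cell
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue  # already expanded via a cheaper route
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:  # walk predecessors back to start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None

# A 4x4 "floorplan": 1s are walls the companion must route around.
floorplan = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
path = astar(floorplan, (0, 0), (3, 3))
```

Because the Manhattan heuristic never overestimates on a grid, the route returned is optimal; a production system would search a 3D navmesh and fold in live semantic obstacle data rather than a static grid.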

With sight and a sense of place, we gave it a brain.

An AI that can see and move needs to be able to think. A multi-modal generative AI system acts as its brain. It can identify objects in its field of vision, access a deep knowledge base, and form memories.
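As a loose illustration of that perceive–recall–remember loop, consider the sketch below. Every name in it is hypothetical, and the real brain is a multi-modal generative model, not a dictionary lookup; the sketch only shows how the three capabilities compose.

```python
from dataclasses import dataclass, field

@dataclass
class CompanionBrain:
    """Toy sketch of the brain loop: perceive, consult knowledge, remember."""
    knowledge: dict = field(default_factory=dict)  # object label -> known fact
    memories: list = field(default_factory=list)   # episodic log of sightings

    def observe(self, label: str) -> str:
        # Identify an object in view, look it up, and form a memory of it.
        fact = self.knowledge.get(label, "something I haven't learned about yet")
        self.memories.append(label)
        return f"I see {label}: {fact}"

    def recall(self, label: str) -> bool:
        # Has the companion seen this object before?
        return label in self.memories

brain = CompanionBrain(knowledge={"fern": "a vascular plant that loves shade"})
reply = brain.observe("fern")
```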

Finally, to make it a companion, we gave it a soul.

A walking, thinking machine is still just a machine. To make it a friend, it needed to connect. In partnership with Hume AI, we integrated an Empathetic Voice AI. This isn't a robotic assistant. It's an advanced conversational AI that delivers emotionally intelligent dialogue with sub-second response times. It understands tone and responds with a nuanced, empathetic voice.

The Result: The First True Real-World AI Companion

This convergence is the breakthrough.

For the first time, an AI is truly embodied. It is not a chatbot on a screen or a digital object “stuck” to a wall. It is a persistent, shared companion that multiple people can see and interact with in the same physical space. It understands its environment and, for the first time, can move through it with you.

And because this AI is embodied, its voice becomes revolutionary. It ushers in a new paradigm of interaction: a truly context-aware conversational interface. It can identify what you’re looking at and hold a natural, flowing conversation about it. It can recommend a place to visit and then physically guide you there.
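The "identify what you're looking at" step can be approximated with simple geometry. The sketch below is our own assumption, not Project Jade's implementation; a production system would fuse head pose, eye tracking, and semantic segmentation. It simply picks the labeled object whose direction from the viewer best aligns with the gaze ray.

```python
import math

def looked_at(gaze_origin, gaze_dir, objects, max_angle_deg=15.0):
    """Return the label of the object the user is looking at, or None.

    objects maps label -> (x, y, z) world position. The winner is the
    object whose direction from gaze_origin makes the smallest angle
    with gaze_dir, provided that angle is within max_angle_deg.
    """
    def norm(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    g = norm(gaze_dir)
    best_label, best_angle = None, max_angle_deg
    for label, pos in objects.items():
        d = norm(tuple(p - o for p, o in zip(pos, gaze_origin)))
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(g, d))))
        angle = math.degrees(math.acos(dot))  # angular offset from gaze ray
        if angle < best_angle:
            best_label, best_angle = label, angle
    return best_label

# A viewer at eye height, looking straight ahead along +z.
scene = {"mural": (0.0, 1.6, 3.0), "bench": (2.5, 0.5, 2.0)}
target = looked_at((0.0, 1.6, 0.0), (0.0, 0.0, 1.0), scene)
```

Once a target is resolved this way, it can be handed to the conversational layer as context, and to the pathfinder as a navigation goal.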

This is the leap from a “voice assistant” that answers to an “AI companion” that explores. It’s the technology that turns a simple walk into a shared journey of discovery.

A demo video, showcasing Project Jade in action on Snap Spectacles.

A Blueprint for the Future of Interaction

Project Jade is not just a demo. It is the blueprint for a new era of interaction.

This fusion of hardware, geospatial mapping, generative AI, and emotional voice is the breakthrough that finally unlocks AI from the text box. This is the technology that will power the next generation of intelligent robotics, hyper-contextual guides, and truly helpful companions that can find connection and joy in the world right alongside us.

A History of Real-World Innovation

Niantic Spatial is the team pioneering AI that understands the physical world. Led by Founder and CEO John Hanke, our mission is to build spatial intelligence that helps people and machines better understand, navigate, and engage with their surroundings.

Our legacy builds on over 25 years of leadership in geospatial AI, mapping, and AR. Originally part of Google, our team pioneered ‘Keyhole’, the 3D mapping application that laid the groundwork for Google Earth and Google Maps. As Niantic, Inc., we launched Pokémon GO, a cultural phenomenon that blended the physical and virtual worlds for hundreds of millions of people and has been downloaded over 1 billion times.

Today, our 4-time Webby Award-winning Peridot franchise is a flagship for this boundary-pushing innovation. An immersive experience focused on inventing the future, it merges cutting-edge advancements in augmented reality, generative AI, and spatial computing to enhance gameplay in meaningful ways. With experiences across web, mobile, mixed-reality headsets, and AR glasses, Peridot is our living lab for what’s next – and Project Jade is its furthest evolution.