Google Launches Project Genie AI for Real-Time Interactive World Creation

Google CEO Sundar Pichai has announced the launch of Project Genie, a new experimental artificial intelligence tool from Google DeepMind that allows users to create and explore interactive digital worlds in real time.

Pichai shared the announcement on X, formerly known as Twitter, stating that he has been personally testing the tool. He described the experience as impressive and highlighted its ability to generate interactive environments using simple prompts.

“Project Genie is a prototype web app powered by Genie 3, Nano Banana Pro, and Gemini that lets you create your own interactive worlds,” Pichai wrote. “I’ve been playing around with it a bit, and it’s out of this world.”

The rollout makes Project Genie available to Google AI Ultra subscribers in the United States as Google pushes deeper into the competitive artificial intelligence market. The company said early access will help it gather feedback to guide future research and product development.

Google DeepMind CEO Demis Hassabis also confirmed the launch in a separate post on X. He said the tool represents a major step forward in world modeling and interactive simulation.

“Thrilled to launch Project Genie, an experimental prototype of the world’s most advanced world model,” Hassabis wrote. “Create entire playable worlds to explore in real time from a simple text prompt.”

In an official statement, Google said it has begun rolling out access to Project Genie for U.S.-based Google AI Ultra subscribers aged 18 and above, with plans to expand availability to more regions over time.

Project Genie is built on Genie 3, a general-purpose world model that Google DeepMind first previewed in August. Unlike traditional 3D environments that rely on pre-built scenes, Genie 3 generates the path ahead dynamically as users move through a world.

As users interact with the environment, the system predicts and generates what comes next in real time, allowing the AI to simulate physics, movement, and interactions in evolving digital spaces rather than static, pre-built settings.

The prototype combines Genie 3 with Gemini and Nano Banana Pro, two other Google AI models. Together, these systems enable users to build, explore, and modify digital worlds rather than simply viewing them.

Users can create environments using text prompts or uploaded images. They can specify characters, settings, and modes of movement such as walking, driving, or flying. Users can also choose first-person or third-person perspectives before entering a world.

After creation, users can freely explore the environment. The system adjusts the world in real time based on movement and camera direction. Users can also remix existing worlds by modifying prompts or selecting examples from a gallery. Completed experiences can be recorded and downloaded as videos.

Google DeepMind said Project Genie remains an early research prototype and has several limitations. Generated worlds may not always match user prompts. Physics may not appear realistic, and characters can be difficult to control. Each generated experience is currently limited to 60 seconds.

Despite these constraints, Google said Project Genie plays a key role in its broader artificial intelligence strategy. The company views the project as part of its long-term effort to develop artificial general intelligence, or AGI, capable of operating across a wide range of real-world scenarios.