Why World Models Are AI's Next Big Thing


LLMs, or large language models, are the technological foundation of today's AI. Chatbots such as ChatGPT and Gemini use LLMs to create the natural-sounding text you see on your screen. But LLMs may not be the most consequential AI technology.

"These LLMs will be a massively important component of the final AI system," Google DeepMind CEO Demis Hassabis told Bloomberg at the World Economic Forum. "The only question in my mind is, is it the only component?"

Hassabis goes on to say that further breakthroughs will be needed for the next generation of AI systems to work together seamlessly. One of those "very important" breakthroughs is world models. World models are built to translate our physical world -- the laws of physics, how objects move and interact -- into a digital representation that AI can understand. They're less concerned with generating words and more focused on understanding the natural world, something current AI models are bad at.

You likely won't interact with world models the way you do with LLM-powered tech, such as chatbots. Instead, world models will work behind the scenes, helping AI create realistic videos, guide surgical robots and improve autonomous vehicles' driving. They're important building blocks in developing what's called physical AI -- tech that not only understands our world but can take actions in it.

A number of AI pioneers have signaled a shift toward building world models. Yann LeCun recently left his role leading Meta's AI efforts to join a startup focused on building them. Fei-Fei Li, colloquially known as the godmother of AI, has said spatial intelligence -- the ability to understand your physical environment -- is the next frontier for tech innovation.

"Spatial intelligence will transform how we create and interact with real and virtual worlds -- revolutionizing storytelling, creativity, robotics, scientific discovery and beyond," she wrote in a November blog post.

Nvidia CEO Jensen Huang also dedicated a portion of his CES 2026 keynote to the company's efforts in world models. Building an AI model that's grounded in the laws of physics and ground truth starts with the data used for training, Huang said. 

AI models of every flavor require immense quantities of data to build and refine their outputs. Typically, AI companies rely on content created by real humans -- with and without their permission -- which has led to major legal showdowns. World models, by contrast, can also be built with nonhuman data, such as simulations. That data is essential to building world models that can reason and make cause-and-effect judgments.

Nvidia's world model Cosmos uses text, image and video to understand the physical world. (Nvidia/Screenshot by CNET)

One area where Nvidia is applying world models is self-driving cars. In a live demo, Nvidia showed how Cosmos uses a car's sensors to track its own position and that of every other nearby car on the road, creating a live video of its surroundings. Developers can use that information to run scenarios, like car accidents, to see how the vehicle would respond and make necessary safety improvements. Synthetic data -- data that isn't human-generated -- can also work in tandem with world models to help predict rare "edge cases."

As AI continues to be integrated into every part of our online lives, it's essential that it can understand our physical world, rather than continue to hallucinate and make mistakes. Renewed research and investment from industry leaders in spatial intelligence, world models and physical AI show that the industry isn't just going to build more chatbots -- it's working on building AI that's more rooted in our reality, rather than the other way around.
