Meta has launched a new artificial intelligence system called V-JEPA 2, aimed at transforming how machines understand and navigate the physical world.
The open-source model, revealed on Wednesday at the VivaTech conference in Paris, mimics human reasoning to anticipate physical outcomes, such as a ball rolling off a table or an object remaining present when it moves out of sight.
Unlike traditional models that rely on annotated images or video, V-JEPA 2 makes its predictions in an abstract “latent space” to simulate real-world dynamics, marking a shift from language-based AI to more spatially aware systems.
New AI breakthrough simulates human-like reasoning in real time
The V-JEPA 2 model represents Meta’s latest advance in AI systems known as “world models”, a concept gaining traction among developers seeking to move beyond large language models.
These systems attempt to build internal simulations of reality that help machines predict outcomes and plan actions accordingly.
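The predict-then-plan loop these systems rely on can be sketched in a few lines. This is a deliberately toy illustration, not Meta’s architecture: the state representation, the dynamics, and the planner below are all hypothetical stand-ins for what a learned world model would provide.

```python
# Toy "world model": predict the consequence of an action in a simple
# state space, then plan by simulating candidate actions and keeping
# the one whose predicted outcome lands closest to the goal.
# Purely illustrative; the dynamics and planner are hypothetical,
# not V-JEPA 2's actual design.

def predict(state, action):
    """One step of toy dynamics: (position, velocity) under an applied
    force, with unit mass and a 0.1 s timestep."""
    pos, vel = state
    dt = 0.1
    vel = vel + action * dt   # force changes velocity
    pos = pos + vel * dt      # velocity changes position
    return (pos, vel)

def plan(state, goal_pos, candidate_actions, horizon=10):
    """Pick the constant action whose predicted rollout ends closest to
    the goal position -- 'predict consequences, then choose actions'."""
    def rollout_error(action):
        s = state
        for _ in range(horizon):
            s = predict(s, action)
        return abs(s[0] - goal_pos)
    return min(candidate_actions, key=rollout_error)

best = plan(state=(0.0, 0.0), goal_pos=1.0,
            candidate_actions=[-2.0, -1.0, 0.0, 1.0, 2.0])
```

In a real world model the `predict` function would be a learned network operating on latent representations rather than hand-written physics, but the control loop, simulate forward, compare outcomes, act, is the same idea.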
According to Meta, V-JEPA 2 can perform this reasoning without needing labelled video footage, setting it apart from existing generative AI systems like ChatGPT or Gemini.
The model is built to enable real-time spatial understanding for AI-driven technologies such as autonomous vehicles, warehouse robots, and drone delivery systems.
In a video presentation, Meta’s Chief AI Scientist Yann LeCun described V-JEPA 2 as an “abstract digital twin of reality” that allows AI to “predict consequences of its actions” and “plan a course of action to accomplish a given task.”
Meta expands AI focus with $14B Scale AI investment
Meta’s launch of V-JEPA 2 comes at a time when the company is doubling down on its AI ambitions.
The tech giant is reportedly investing $14 billion into Scale AI, a San Francisco-based startup that supplies training data for machine learning.
The firm, founded by Alexandr Wang, is expected to play a key role in Meta’s next phase of AI development.
According to people familiar with the matter, Wang is also being hired to lead key AI initiatives at Meta.
This investment aligns with CEO Mark Zuckerberg’s stated goal of embedding AI into Meta’s core offerings.
The company is not only looking to improve Facebook and Instagram’s user experience through AI but also to develop long-term capabilities in robotics and autonomous systems.
Competition heats up among world model developers
Meta’s efforts follow a growing trend in AI research towards world modelling.
In September last year, AI researcher Fei-Fei Li raised $230 million for a startup called World Labs, which is also focused on building large-scale world models.
Meanwhile, Google’s DeepMind unit is testing its own world model project called Genie, designed to simulate games and virtual environments in real time.
Unlike large language models that interpret and generate text, world models prioritise spatial understanding, causal reasoning, and prediction.
These models could become essential for any AI operating in dynamic, real-world environments, from delivery bots to factory automation systems.
How V-JEPA 2 could reshape AI applications
Meta has made V-JEPA 2 open-source, allowing developers to access, test, and integrate it into a variety of use cases.
This includes devices that must navigate their surroundings with minimal human input and without context from labelled data.
According to Meta, the model’s reliance on streamlined spatial reasoning rather than large volumes of labelled data could make it more efficient, adaptable, and scalable than existing AI models.
The implications go beyond logistics and robotics.
If world models like V-JEPA 2 continue to develop as expected, they may pave the way for AI to operate autonomously in unfamiliar environments, opening up use cases in fields such as healthcare, agriculture, and even disaster relief.
Meta shared that the launch marks a key milestone in its long-term AI roadmap, especially as competition from OpenAI, Microsoft, and Google intensifies.
As world models become more central to AI progress, V-JEPA 2 positions Meta to take a leading role in the race to develop general-purpose artificial intelligence that can think and act more like a human in the real world.
The post Meta unveils V-JEPA 2: AI model predicts real-world movement without video data appeared first on Invezz