- Everyone is focused on agentic AI, but physical AI is also starting to break through
- Physical AI faces a range of hurdles, including a lack of the right kind of data needed to train the models
- Still, half of the top AI vendors are expected to offer physical AI products within the next three years
First, there was machine learning. Then, generative AI. Now, we're barreling toward agentic AI with gusto. But while agentic AI may seem like (and indeed is) a massive leap forward, it's a waypoint rather than an endpoint. So, what's next? It seems there's already an answer: physical AI.
What is physical AI?
Put simply, physical AI refers to AI that machines use to understand and carry out complex interactions with the physical world. Think robots, drones, autonomous vehicles and the like.
Nvidia CEO Jensen Huang thrust physical AI into the general consciousness during his January 2025 CES keynote. But the term has been popping up more frequently in recent months.
Huang dedicated an entire segment of his GTC DC keynote in October to the topic, and we've also seen the term mentioned in recent releases from companies like AWS, Cisco, Red Hat, Accenture, Akamai, Richtech and more. Gartner flagged physical AI as one of its top strategic technology trends for 2026, and even the World Economic Forum (WEF) wrote a whitepaper about it.
So, what is physical AI good for? Well, as the WEF noted in its paper, “These intelligent robotic systems combine perception, reasoning and action, enabling a level of autonomy and adaptability that marks a critical juncture in industrial automation. By bridging the digital and physical realms, physical AI promises to reimagine how industrial systems function – from factory floors to supply chains.”
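To make that "perception, reasoning and action" framing concrete, here's a minimal, hypothetical sketch of how such a control loop might be structured. Every class and method name here is illustrative; this is not any vendor's actual robotics SDK, just the classic sense-plan-act pattern the WEF is describing.

```python
# Hypothetical sketch of the perception-reasoning-action loop the WEF
# describes. All names are illustrative, not a real robotics API.
import time
from dataclasses import dataclass


@dataclass
class Observation:
    camera_frame: bytes          # raw sensor data, e.g. an RGB image
    joint_positions: list[float]


@dataclass
class Action:
    joint_velocities: list[float]


class PhysicalAIAgent:
    def perceive(self, obs: Observation) -> dict:
        """Turn raw sensor readings into a structured world state."""
        # In practice this would run perception models (object detection,
        # pose estimation, etc.). Here we just pass the data through.
        return {"frame": obs.camera_frame, "joints": obs.joint_positions}

    def reason(self, world_state: dict, goal: str) -> Action:
        """Decide what to do next, given the world state and a goal."""
        # A real system might query a foundation model here; we return
        # a zero-velocity (do-nothing) action as a placeholder.
        return Action(joint_velocities=[0.0] * len(world_state["joints"]))

    def act(self, action: Action) -> None:
        """Send the chosen action to the robot's actuators."""
        print(f"Commanding velocities: {action.joint_velocities}")


def control_loop(agent: PhysicalAIAgent, get_observation, goal: str,
                 steps: int = 100, hz: float = 10.0) -> None:
    """Run perceive -> reason -> act at a fixed rate."""
    for _ in range(steps):
        obs = get_observation()
        state = agent.perceive(obs)
        action = agent.reason(state, goal)
        agent.act(action)
        time.sleep(1.0 / hz)
```

The point of the loop structure is the bridging the WEF mentions: sensing pulls the physical world into the digital realm, and actuation pushes decisions back out into it.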
Companies like Nvidia have also argued that physical AI has the potential to help fill gaps created by labor shortages.
Overcoming AI hurdles
But enabling physical AI isn't as simple as plunking a large language model into a robot. As consulting firm Arthur D. Little noted in a recent blog, LLMs "lack any real sensory information and knowledge about the world and how it works conceptually." Thus, physical AI "must be grounded in sensory data" in order to "move beyond narrow, task-specific functions."
The problem is there's a shortage of the right kind of real-world data to train physical AI systems, according to both the WEF and Arthur D. Little. Nvidia thinks the answer is simulation (i.e., digital twins) and world foundation models, which are models that understand the state of the physical world and can generate synthetic data about it. Available world foundation models include Nvidia's Isaac and Cosmos and Covariant's RFM-1.
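To illustrate why simulation helps with the data shortage, here's a simplified, hypothetical example of the kind of synthetic-data loop a digital twin enables. The Simulator class is a stand-in, not Nvidia's or Covariant's actual APIs; the key ideas it sketches are domain randomization and free ground-truth labels.

```python
# Hypothetical illustration of generating synthetic training data from a
# simulated world (a digital twin). The Simulator class is a stand-in,
# not Cosmos, Isaac, RFM-1 or any real SDK.
import random


class Simulator:
    """Toy digital twin: renders scenes and reports ground-truth labels."""

    def randomize(self) -> None:
        # Domain randomization: vary lighting and object poses so models
        # trained on synthetic data transfer better to the real world.
        self.lighting = random.uniform(0.2, 1.0)
        self.object_pose = [random.uniform(-1.0, 1.0) for _ in range(3)]

    def render(self) -> dict:
        # A real simulator would return an image; we return scene params.
        return {"lighting": self.lighting, "pose": self.object_pose}

    def ground_truth(self) -> dict:
        # Simulation yields perfect labels for free -- no manual annotation.
        return {"object_position": self.object_pose}


def generate_dataset(n_samples: int) -> list[tuple[dict, dict]]:
    """Produce (observation, label) pairs for training a perception model."""
    sim = Simulator()
    dataset = []
    for _ in range(n_samples):
        sim.randomize()  # new random scene each iteration
        dataset.append((sim.render(), sim.ground_truth()))
    return dataset


if __name__ == "__main__":
    data = generate_dataset(1000)
    print(f"Generated {len(data)} labeled synthetic samples")
```

The appeal is that a simulator can churn out unlimited, perfectly labeled scenarios, including rare or dangerous ones that would be impractical to capture with a physical robot.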
Then there’s the whole problem of power efficiency and stuffing sufficient compute muscle inside robots. This is a familiar can of worms for anyone working on edge compute use cases.
Arthur D. Little highlighted safety as another key issue that needs to be addressed before physical AI hits primetime. Why? “AI that acts directly on the physical environment can cause harm to humans,” the company stated in its blog.
While there aren’t easy solutions to these challenges, there’s enough potential in physical AI for folks like Nvidia to press ahead. Potential use cases span ecommerce fulfillment, electronics manufacturing, healthcare and more.
Gartner has predicted that half of the top AI vendors will offer physical AI products by 2028, and that 80% of warehouses will use robotics or automation by the same year.