Robotics Is Having Its ChatGPT Moment: The Startups Turning AI Into Labor
In November 2022, OpenAI released ChatGPT and the world suddenly understood what large language models could do. Within two months, a hundred million people had tried it. Within a year, nearly every Fortune 500 company had an AI strategy. The technology had existed for years before that moment, but the interface, the experience, and the timing aligned to create a phase transition in public understanding and commercial adoption.
Robotics is hitting that same inflection point right now. The underlying technologies (foundation models for physical manipulation, sim-to-real transfer learning, affordable actuators, computer vision, and natural language control) have all crossed critical thresholds simultaneously. For the first time, robots can be taught new tasks in hours instead of months. They can handle objects they have never seen before. They can understand spoken instructions and figure out how to execute them in the physical world.
On TBPN, this has been one of the most discussed topics of the past six months, particularly after the team's deep-dive conversations with Humble Robotics and other companies building in this space. This article lays out the full landscape of why robotics is exploding, who is winning, and what it means for the future of work.
Why the "ChatGPT Moment" Analogy Works for Robotics
The ChatGPT analogy is not hype. It is structurally accurate, and understanding why reveals where the robotics industry is heading.
Before ChatGPT: The Technology Existed but Nobody Cared
Large language models existed before ChatGPT. GPT-3 launched in 2020. Google had been building transformer models since 2017. Academic papers on the technology had been published for years. But outside of a small community of AI researchers and developers, nobody was using these models for anything practical. The technology was impressive in demos but inaccessible to normal people.
Robotics has been in this exact same position. Boston Dynamics has been releasing viral videos of its robots for over a decade, including Atlas doing backflips and parkour. Industrial robots have been assembling cars for decades. But the gap between "impressive demo" and "useful product that normal businesses can deploy" has remained enormous. A factory that wanted to automate a new task with a traditional industrial robot would spend $500,000 on the hardware, another $500,000 on integration, and wait six to twelve months before the system was operational.
The Interface Breakthrough
What ChatGPT did for language models was make them accessible through a simple chat interface. What foundation models are doing for robotics is making robot programming accessible through natural language and demonstration. Instead of writing code to specify every joint angle and force profile for a manipulation task, an operator can now show a robot what to do a few times, describe the goal in plain English, and the robot figures out the rest.
This is the unlock. The technology shift is not just about making robots more capable. It is about making them dramatically easier to deploy. And that changes the economics of automation for every business that has physical operations.
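The workflow is simple enough to sketch. The toy below records a few hand-guided waypoint trajectories and averages them into one replayable path. Real systems learn closed-loop policies rather than replaying fixed trajectories, so treat this as an illustration of the interaction model, not the underlying method, and every name and number in it is made up.

```python
# Minimal teach-by-demonstration sketch: record joint-space waypoints
# from a few human-guided runs, average them, and replay the result.
# Illustrative only; production systems learn adaptive policies.

def average_demonstrations(demos):
    """Average several waypoint trajectories of equal length."""
    n = len(demos)
    length = len(demos[0])
    averaged = []
    for t in range(length):
        # Mean of each joint angle across the recorded demonstrations.
        joints = [demo[t] for demo in demos]
        averaged.append([sum(q) / n for q in zip(*joints)])
    return averaged

# Three hand-guided demonstrations of a 2-joint reach (radians).
demos = [
    [[0.00, 0.00], [0.50, 0.25], [1.00, 0.50]],
    [[0.02, 0.00], [0.52, 0.27], [1.02, 0.48]],
    [[-0.02, 0.00], [0.48, 0.23], [0.98, 0.52]],
]

trajectory = average_demonstrations(demos)
for waypoint in trajectory:
    print([round(q, 3) for q in waypoint])
```

The point is the ratio of effort: three short demonstrations stand in for what used to be weeks of joint-by-joint programming.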
Foundation Models for Robotics: The Core Technology Shift
The most important technical development driving the robotics revolution is the emergence of foundation models specifically designed for physical manipulation and navigation.
What Are Robotic Foundation Models?
Just as GPT-4 is a general-purpose language model that can be fine-tuned for specific text tasks, robotic foundation models are general-purpose manipulation models that can be fine-tuned for specific physical tasks. These models are trained on massive datasets of robot interactions, both real and simulated, learning the general principles of how to grasp objects, move through space, and manipulate physical things.
The breakthrough papers and projects that catalyzed this shift include:
- RT-2 (Google DeepMind): Demonstrated that vision-language-action models could transfer knowledge from internet-scale data to robot control. A robot trained with RT-2 could follow instructions like "pick up the extinct animal" and correctly identify and grasp a toy dinosaur, even though it had never been explicitly programmed to do so
- Octo (UC Berkeley): An open-source foundation model for robot manipulation that can be fine-tuned for new tasks with as few as ten demonstrations
- RT-X (Open X-Embodiment): A collaborative dataset of over one million robot trajectories from 22 different robot types, enabling models that generalize across different physical platforms
- GR-1/GR-2 (multiple labs): Generalist robot policies that combine language understanding, visual perception, and physical control in unified architectures
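The pretrain-then-adapt pattern these projects share can be shown with a deliberately tiny stand-in: start from "pretrained" weights and run plain gradient descent on ten demonstration (state, action) pairs. A real fine-tune updates a large neural network from a learned checkpoint; this linear version only shows the shape of the workflow, and all numbers are invented.

```python
# Toy illustration of few-shot adaptation: a "pretrained" linear
# policy is adjusted with gradient descent on ten demonstrations.
# The structure, not the scale, is the point.

def fine_tune(weights, demos, lr=0.25, epochs=200):
    """Gradient descent on squared error over demonstration pairs."""
    w0, w1 = weights
    for _ in range(epochs):
        g0 = g1 = 0.0
        for (s0, s1), action in demos:
            pred = w0 * s0 + w1 * s1   # linear policy: action from state
            err = pred - action
            g0 += err * s0
            g1 += err * s1
        n = len(demos)
        w0 -= lr * g0 / n
        w1 -= lr * g1 / n
    return w0, w1

# "Pretrained" weights: roughly reasonable, but wrong for this task.
pretrained = (0.5, 0.5)

# Ten demonstrations of the target behavior: action = 2*s0 - s1.
demos = [((i, j), 2 * i - j) for i in range(5) for j in range(2)]

w0, w1 = fine_tune(pretrained, demos)
print(round(w0, 2), round(w1, 2))   # recovers roughly (2.0, -1.0)
```

Ten examples are enough here because the policy only has two parameters; the remarkable empirical result behind projects like Octo is that something similar holds even when the policy has billions.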
Sim-to-Real Transfer: Training Robots in Digital Worlds
One of the most significant breakthroughs enabling the robotics explosion is sim-to-real transfer, the ability to train robots in simulation and have those skills work in the real world. This changes the economics of robot training fundamentally.
Training a robot in the real world is slow, expensive, and potentially dangerous. The robot can only attempt one task at a time. It might break things. It might break itself. And collecting enough training data for a foundation model requires millions of interactions, which would take years in real time.
In simulation, a company can run thousands of parallel instances of a robot, each attempting different tasks, collecting millions of data points per day. NVIDIA's Isaac Sim and similar platforms have made these simulations realistic enough that skills learned in the virtual world transfer to physical robots with minimal additional training. The gap between simulated and real performance, once a chasm, has narrowed to something manageable.
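A key ingredient here is domain randomization: each simulated instance gets slightly different physics, so the trained behavior cannot overfit any single simulator setting. The toy below, built on assumed 1-D point-mass dynamics with randomized friction, picks the controller gain that performs best on average across 200 randomized instances and then checks it against an unseen "real-world" friction value. Everything in it is illustrative.

```python
import random

def simulate(gain, friction, steps=50):
    """One rollout of a 1-D point mass pushed toward position 1.0."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        force = gain * (1.0 - pos)          # proportional controller
        vel += 0.1 * (force - friction * vel)
        pos += 0.1 * vel
    return abs(1.0 - pos)                   # final tracking error

def train_with_randomization(n_envs=200, seed=0):
    """Pick the gain with the lowest average error across many
    simulated instances whose friction is randomized."""
    rng = random.Random(seed)
    frictions = [rng.uniform(0.5, 2.0) for _ in range(n_envs)]
    best_gain, best_err = None, float("inf")
    for gain in [0.2, 0.5, 1.0, 2.0, 4.0]:
        err = sum(simulate(gain, f) for f in frictions) / n_envs
        if err < best_err:
            best_gain, best_err = gain, err
    return best_gain

gain = train_with_randomization()
real_error = simulate(gain, friction=1.3)   # unseen "real world" friction
print(gain, round(real_error, 4))
```

The 200 rollouts here run in milliseconds; platforms like Isaac Sim apply the same idea to full physics simulations with thousands of parallel instances on GPUs.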
The Humanoid Race: Figure, 1X, Agility, and Tesla
No segment of robotics captures public imagination like humanoid robots. The premise is straightforward: build robots shaped like humans so they can operate in environments designed for humans, using tools designed for human hands, navigating spaces designed for human bodies.
Figure: The Venture-Backed Frontrunner
Figure has raised over $1.5 billion in venture capital, including investments from Microsoft, NVIDIA, OpenAI, and Jeff Bezos. Their Figure 02 humanoid is currently deployed in a pilot program at BMW's manufacturing facility in Spartanburg, South Carolina, performing tasks like loading trays, inspecting parts, and operating alongside human workers.
What sets Figure apart is their integration of large language models into the robot's control system. Figure 02 can hold a conversation with a human coworker, understand multi-step instructions, and explain what it is doing and why. This is not just a party trick. It is a fundamental requirement for robots that work alongside humans in unstructured environments.
1X Technologies: The Norwegian Dark Horse
1X Technologies, backed by OpenAI, has taken a different approach with their NEO humanoid. Rather than targeting manufacturing first, 1X is focused on building a robot that can operate in home environments, performing household tasks like cleaning, organizing, and basic maintenance. Their thesis is that the home market is ultimately larger than the industrial market, and that solving the harder problem of unstructured home environments will create technology that trivially transfers to more structured industrial settings.
Agility Robotics: Purpose-Built for Warehouses
Agility Robotics has been more pragmatic. Their Digit humanoid is specifically designed for warehouse operations, particularly the "last mile" of warehouse automation: picking items off shelves, loading and unloading containers, and moving goods between conveyor systems and storage locations. Agility has a manufacturing facility in Salem, Oregon, capable of producing thousands of Digits per year and has signed commercial agreements with Amazon and other logistics companies.
Tesla Optimus: The Wildcard
Tesla's Optimus program remains the most ambitious and most uncertain player in the humanoid space. Elon Musk has claimed that Optimus will eventually be priced under $20,000 and will perform a wide range of household and industrial tasks. The engineering demonstrations have shown steady progress, but the gap between Tesla's demos and the capabilities of competitors like Figure and Agility remains a subject of debate in the robotics community.
What Tesla does have is an unmatched ability to manufacture at scale and drive down costs. If Optimus reaches even basic commercial viability, Tesla's manufacturing infrastructure could make humanoid robots a mass-market product in a way that no startup can match.
First Real Markets: Where Robots Make Money Today
While humanoid robots dominate the headlines, the robotics industry is already generating significant revenue in less glamorous but highly practical applications.
Warehouses and Logistics
The warehouse automation market exceeds $20 billion annually and is growing at 15-20% per year. Companies like Amazon Robotics (formerly Kiva Systems), Locus Robotics, and 6 River Systems have deployed hundreds of thousands of robots in fulfillment centers worldwide. The newest generation of warehouse robots can pick individual items from shelves, a task that was considered too difficult for robots just five years ago.
Manufacturing
Traditional industrial robots from FANUC, ABB, and KUKA have been automating manufacturing for decades, but AI is transforming what they can do. New collaborative robots, or cobots, from companies like Universal Robots can work alongside humans safely, learn new tasks through demonstration, and adapt to variations in parts and processes without reprogramming.
Data Center Maintenance
As data centers proliferate to support AI workloads, the need for automated maintenance has become urgent. Robots that can swap hard drives, connect cables, inspect equipment, and monitor environmental conditions are being deployed by Google, Microsoft, and hyperscale data center operators. This is a market that barely existed five years ago and is now growing rapidly.
Agriculture
Agricultural robots for harvesting, weeding, and monitoring crops represent a massive market opportunity. Companies like Aigen and Carbon Robotics are deploying autonomous systems that can identify and remove weeds without herbicides, harvest delicate fruits, and monitor plant health at scale. The labor shortage in agriculture makes this one of the most economically compelling applications for robotics.
We cover these market dynamics daily on TBPN. If you are in the robotics space or investing in it, having a TBPN mug on your desk is a signal to everyone on your Zoom calls that you are tracking the smartest analysis in tech.
VC Funding: The Numbers Behind the Boom
Venture capital investment in robotics has reached record levels. In 2025, robotics startups raised over $12 billion globally, more than double the 2023 total. The first quarter of 2026 is on pace to exceed even that record.
Key funding rounds that illustrate the scale of investment:
- Figure: $675 million Series B at a $2.6 billion valuation (2024), followed by additional rounds totaling over $1 billion
- Physical Intelligence (PI): $400 million to build foundation models for robot manipulation
- Skild AI: $300 million for a general-purpose robotic brain
- 1X Technologies: $100 million Series B for humanoid robots
- Covariant: $222 million for AI-powered robotic picking in warehouses
The funding is coming from a mix of traditional venture capital firms like Sequoia, Andreessen Horowitz, and Khosla Ventures, along with strategic investors including NVIDIA, Microsoft, Amazon, and OpenAI. The strategic interest is particularly telling: these companies are betting that robotics will be a major application of the AI systems they are building.
Hardware Cost Curves: Why Robots Are Getting Cheap
The cost of building a capable robot has dropped dramatically over the past five years, driven by several converging trends:
- Actuator costs: Electric actuators, the motors that move robot joints, have dropped from $5,000-10,000 per unit to $500-1,000 thanks to advances in motor design and manufacturing scale driven by the electric vehicle industry
- Sensor costs: LiDAR sensors that cost $75,000 in 2015 now cost under $500. Cameras suitable for robot vision cost under $50. IMUs (inertial measurement units) cost pennies
- Compute costs: The processors needed to run AI models on a robot have become dramatically more powerful and efficient. NVIDIA's Jetson platform provides GPU-accelerated computing for robots at a fraction of the cost of previous solutions
- Software costs: Open-source frameworks like ROS 2 (Robot Operating System), combined with foundation models that reduce the need for custom programming, have slashed the software development cost of new robot applications
The result is that a capable mobile manipulation robot that would have cost $500,000 five years ago can now be built for under $50,000 in parts. At scale production, humanoid robots could potentially reach the $20,000-30,000 range within the next three to five years.
The Labor Market Impact: What Changes and When
The question everyone asks about robotics is whether robots will take human jobs. The answer is nuanced and time-dependent.
Near Term (2026-2028): Augmentation, Not Replacement
In the near term, robots will primarily fill roles that humans either cannot or do not want to fill. The United States has approximately 8.5 million unfilled jobs, concentrated in manufacturing, warehousing, agriculture, and food service. Robots that can perform routine physical tasks in these industries are more likely to address labor shortages than to displace existing workers.
Medium Term (2028-2032): Task Automation Within Roles
As robots become more capable, they will automate specific tasks within broader job roles. A warehouse worker who currently spends 60% of their time walking and carrying items might see those tasks automated while retaining responsibility for quality control, exception handling, and supervision. This changes the nature of work rather than eliminating it.
Long Term (2032+): Structural Labor Market Shifts
If humanoid robots achieve the cost and capability levels that companies like Figure and Tesla are targeting, the long-term impact on the labor market could be profound. A $20,000 robot that can perform eight hours of physical labor per day at a cost of approximately $3-4 per hour would be dramatically cheaper than human labor in most countries. The economic incentive to deploy these systems would be overwhelming.
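That $3-4 figure is easy to sanity-check with back-of-envelope arithmetic. The inputs below (a three-year service life, maintenance at 10% of purchase price per year, and a 0.5 kW average power draw at $0.15 per kWh) are our illustrative assumptions, not reported numbers:

```python
# Amortized hourly cost of a hypothetical $20,000 robot.
# Every input here is an assumption for illustration.

PRICE = 20_000          # purchase price, dollars
YEARS = 3               # assumed service life
HOURS_PER_DAY = 8
DAYS_PER_YEAR = 365

total_hours = YEARS * DAYS_PER_YEAR * HOURS_PER_DAY        # 8,760 hours
capital = PRICE / total_hours                              # ~$2.28/hour
maintenance = (0.10 * PRICE) / (DAYS_PER_YEAR * HOURS_PER_DAY)  # ~$0.68/hour
energy = 0.5 * 0.15                                        # 0.5 kW at $0.15/kWh

hourly_cost = capital + maintenance + energy
print(round(hourly_cost, 2))   # roughly $3/hour under these assumptions
```

Swap in a longer service life or a second shift per day and the figure drops further, which is why the economics look so lopsided at these price points.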
The policy implications are significant and are a frequent topic of discussion on TBPN's daily livestream. How societies handle this transition, through retraining programs, universal basic income, new forms of taxation, or other mechanisms, will be one of the defining political questions of the next decade.
TBPN's Framework: The Three Waves of Robotics Commercialization
Based on our ongoing coverage of the robotics industry, TBPN has developed a three-wave framework for understanding how robotics will commercialize:
Wave 1 (Now-2027): Structured single-task automation. Robots that perform one specific task in a controlled environment. Examples: warehouse picking robots, palletizing systems, autonomous forklifts. This wave is already generating billions in revenue.
Wave 2 (2027-2030): Multi-task operation in semi-structured environments. Robots that can perform multiple tasks in environments with some variability. Examples: manufacturing cobots that switch between assembly tasks, agricultural robots that handle multiple crop types, service robots in hotels and restaurants. This wave is where foundation models make the biggest difference.
Wave 3 (2030+): General-purpose robots in unstructured environments. Robots that can navigate and perform useful work in any human environment. Examples: household robots, construction robots, general-purpose humanoids. This wave requires breakthroughs in both AI and hardware that are not yet fully realized but are on the visible horizon.
If you are a founder building in robotics, an investor evaluating opportunities, or simply someone who wants to understand one of the most transformative technology shifts in human history, TBPN is where this story gets covered every single day. Grab a TBPN t-shirt and join the thousands of founders, investors, and engineers who tune in at 11 AM PT.
Frequently Asked Questions
What is a foundation model for robotics and why does it matter?
A foundation model for robotics is a large AI model trained on massive datasets of robot interactions that learns general principles of physical manipulation and navigation. Just as GPT-4 learned general language skills from internet text and can be adapted to specific tasks through prompting or fine-tuning, robotic foundation models learn general manipulation skills from millions of robot demonstrations and can be adapted to new tasks with minimal additional training. This matters because it reduces the time and cost of teaching a robot a new task from months to hours, fundamentally changing the economics of automation. Companies like Physical Intelligence, Google DeepMind, and Skild AI are leading the development of these models.
Will humanoid robots really be available for under $20,000?
The $20,000 price point for humanoid robots is ambitious but not unrealistic within the next five to seven years. The key cost drivers (actuators, sensors, compute, and structural materials) are all on declining cost curves driven by manufacturing scale from adjacent industries like electric vehicles and consumer electronics. Tesla's manufacturing expertise and vertical integration make them the most likely candidate to hit this price point first, though Figure and other well-funded startups are also targeting aggressive pricing. The precedent is industrial robot arms, which cost over $100,000 twenty years ago and are now available for a fraction of that price, with collaborative arms from companies like Universal Robots starting in the low tens of thousands of dollars.
Which industries will be most affected by robotics in the next five years?
Warehousing and logistics will see the most immediate impact, with autonomous mobile robots and picking systems becoming standard equipment in fulfillment centers. Manufacturing will continue its decades-long automation trend, accelerated by AI-powered cobots that are easier to deploy. Agriculture will see significant adoption of autonomous harvesting and weeding robots as labor shortages intensify. Food service, particularly quick-service restaurants, will begin deploying robots for food preparation and delivery. Data center maintenance is an emerging but rapidly growing market. Healthcare logistics, including automated pharmacy fulfillment and hospital supply delivery, will expand significantly.
How does TBPN cover the robotics industry?
TBPN covers robotics as part of its daily live tech show, airing from 11 AM to 2 PM PT on YouTube and X. The show features regular segments on robotics news, funding rounds, and product launches, along with interviews with founders building in the space, including deep conversations with companies like Humble Robotics. John Coogan and Jordi Hays bring a unique perspective that combines technical understanding with venture capital insight, making TBPN's robotics coverage particularly valuable for founders, investors, and engineers working in the industry. Past episodes covering robotics are available on YouTube and as podcast episodes on all major platforms.
