Blog — Jan. 27, 2026
By Neil Barbour
Robots took a decidedly unhurried step forward in the physical AI era at CES from Jan. 6–9, 2026, as over 140,000 people poured into Las Vegas to bear witness to an emerging class of automatons folding laundry, carrying free weights and playing ping pong.
There are two key dynamics pushing functional humanoid robots closer to reality. The first is perhaps something everyone saw coming: As AI has become more capable over the past half decade, the industry has naturally wondered what all that advanced reasoning would look like in the real world. Having a computer write a poem is great and all, but wouldn’t a robot doing chores be more useful?
That's where the term physical AI comes into play. It's the act of bringing AI into the physical world in the form of machinery. The same large-model approach that helps something like ChatGPT string words together coherently can help a robot figure out its next step, or a driverless car its next turn.
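The parallel can be sketched in a few lines: like a language model choosing its next word, a robot policy scores candidate actions and picks the likeliest. Everything below (function names, actions, probabilities) is invented for illustration; real vision-language-action systems are vastly more complex.

```python
# Hypothetical sketch of the "next step" framing: next-token prediction
# and next-action prediction share the same basic shape. All scores and
# names here are made up for illustration.

def next_token(context):
    # Toy language model: score candidate words, return the likeliest.
    scores = {"poem": 0.7, "essay": 0.2, "recipe": 0.1}
    return max(scores, key=scores.get)

def next_action(observation):
    # Toy robot policy: score candidate motor commands the same way.
    scores = {"grasp_towel": 0.6, "open_washer_door": 0.3, "release": 0.1}
    return max(scores, key=scores.get)

print(next_token("write me a"))              # -> poem
print(next_action({"towel_visible": True}))  # -> grasp_towel
```

The framing is the point: swap the vocabulary of words for a vocabulary of motor commands and the prediction problem looks much the same, which is why advances in language models have pulled robotics along with them.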
The issue, so far at least, is that the robots are a little sluggish. Watching LG Corp.'s CLOiD try to do laundry was an exercise in extreme patience as the robot painstakingly put a single towel in a washing machine, taking a little over 30 seconds to do so.
They’ll have to speed that up a bit if they wish to deliver a more practical humanoid helper agent.
Perhaps that's where NVIDIA Corp. comes in. We were invited to a media Q&A with President and CEO Jensen Huang, during which he highlighted the potential of physical AI and the work his company is doing behind the scenes with its Cosmos Reason 2 visual language model.
The second dynamic advancing practical robotics revolves around good old-fashioned engineering. As Elon Musk famously said of his own robotics ambitions, "Hands are hard," reflecting how difficult it was to build a robot that could actually pick something up. The current wave of robots can move, lift and carry things in dynamic environments in ways that their predecessors could not.
For instance, WIRobotics' Allex is a robot capable of lifting 30 kilograms with hands that can also high-five a human and make heart hands. This functionality is the result of a complex array of sensors that can feel up to 100 grams of fingertip force, as well as 20 joints with 15 degrees of motion. There are also advancements in balancing and "backdrivability" in the arms, waist and neck that allow Allex to be pliable, nimble and responsive.
AgiBot Innovation (Shanghai) Technology Co. Ltd. offered a similar demo at NVIDIA's booth, and we talked to smaller startups such as Matrix and Ensuring Operational Safety Ltd., which are working specifically on robotic hands.
But will any of this actually make it to market? Most of the teams we talked to said their robots would be working their way into research or pilot programs over the next few years, with few details on timing and even fewer details on pricing.
That means humanoid robots could one day, in the not-too-distant future, start slotting into warehouses and manufacturing, but it seems far less likely that the average consumer will soon have access to another set of hands to take on the drudgery of daily chores.
Instead, consumers will have to make do with pricey robotic pets, which we saw plenty of on the floor.
For instance, Hengbot's Sirius is a small, dog-like robot with impressive locomotion that can take commands such as sit and beg. Sentigent Technology's Rovar X3 is an outdoor companion robot that can maneuver across complex terrain on all-terrain tires.
XREAL, Lumus key players in smart glasses glow-up
There were not as many smart glasses on the show floor this year, but those that showed up made a strong statement.
We talked with waveguide developer Lumus Ltd., which is taking a bit of a victory lap after announcing in December that it is driving the visualization tech in the Meta Ray-Ban Displays. Meta deployed a display with a 20-degree field of view, ideal for what is known as "data snacking."
But Lumus had grander ambitions at the show, showing its ZOE waveguide solution capable of displaying a 70-degree field of view.
Lumus has had one of the better waveguide demos for a few years, with superb color accuracy and brightness, which likely helped it land the Meta partnership.
However, it's unclear who the ideal partner might be for the wider FOV in the near term as we suspect that many of the larger companies exploring smart glasses (Apple Inc., Samsung Electronics Co. Ltd.) will probably also lean toward unobtrusive displays over immersion. That said, if and when the demand for more robust data visualization starts to ramp, Lumus will have been refining the process.
Listen to our new podcast Data & Dimensions Ep. 1: The Future of XR Hardware
That is not to say that immersive smart glasses aren't already in the market and making big moves.
We got to demo the XREAL 1S, which bumps up the specs on the previous model while cutting $50 off the asking price (down to $450). XREAL also had a gaming-themed model on display with partner ROG.
XREAL Inc.'s glasses are more personal media viewers than digital assistants. They use birdbath optics instead of waveguide, which cuts the price but requires a thicker stack of tech in the lenses. Plus, they're shaded like sunglasses for the sake of perceived image brightness.
The next step comes via XREAL's partnership with Alphabet Inc., which is set to produce the second Android XR device (after the Samsung Galaxy XR) sometime in the next 18 months.
XREAL's reliance on birdbath technology could really give it a pricing edge in the mixed reality space currently occupied by Apple's Vision Pro, and while it would likely be more expensive than the Meta Quest 3, it would also be substantially smaller.
A true disruptor in the console space?
The NEX Team Inc. was keeping its foot on the gas after reportedly outselling the Xbox Series consoles at retail in the US over the 2025 holiday shopping season. The NEX Playground is an Android-based gaming console that tracks user movements through a front-facing camera and uses them as game inputs.
For instance, we played a game that let us swing a real Wiffle bat to simulate a home run derby. The game even gave us tips for a better swing.
NEX Playground made a big push into physical retail over the past year, with interactive displays in Target Corp. and Walmart Inc., which seemed to win over customers in a big way. The company's strategy underscores accessibility and a family-friendly atmosphere that some of the larger players in the space have had a harder time keying into in recent years.
CEO and co-founder David Lee said the Playground had an installed base of 800,000 units and that he expects it to break 1 million this year. That's a far cry from the 120 million Switch units or the 74 million PS5 consoles we estimate were installed as of the end of 2025, but it's a strong start for a newcomer. Lee hopes to keep the momentum going in the year ahead with more brand partnerships, more sports titles and internet multiplayer options.
TVs are all about the color gamut
LG, Samsung and TCL Electronics Holdings Ltd. all had big, impressive RGB TVs at the show. RGB TV is a potential successor to OLED as the high-end display technology: it uses red, green and blue LED backlights with local dimming to reproduce more of the perceptible color gamut.
That marks a key difference in the way vendors have talked about TVs in the past few years: There's less emphasis on reaching true black or dazzling viewers with nits. Now it's about how many colors a set can display.
Still, many in the industry maintain that the lack of true pixel-level control means RGB TVs still have a way to go to replace OLED. What's more, RGB TVs are still prohibitively expensive. Like, midsize sedan levels of expensive.
Computer chips that cool themselves
We took a briefing with xMEMS Labs Inc. regarding a novel approach to cooling electronics with an air pump on a chip. The idea is that a very small chip could be inserted into a phone to actively cool the components, unlocking some computational headroom. The concept could even be deployed directly into chip packaging, meaning your next CPU might have its own embedded cooling solution. xMEMS says it already has some partners in the smartphone segment, with products potentially appearing this year.
Reducing the energy cost of sensing
We took a short briefing from chip company Innatera Nanosystems B.V., which is building "neuromorphic" microcontrollers that aim to improve efficiency in smart homes, speakers, and wearables. Their secret sauce is processing sensor data on-device in a "brain-like," always-on manner to reduce power costs.
Think of how your smart doorbell camera turns on whenever it detects movement. Innatera claims its chips can accurately sense humans and cut false positives, and thus waste less power. Innatera's Pulsar neuromorphic microcontroller recently entered mass production and has already secured a number of customers in the smart home and consumer space.
Economics of Streaming Media is a regular feature from S&P Global Market Intelligence Kagan.