17 Feb, 2026

Tesla and Waymo are moving forward with competing autonomous vehicle systems. Source: Tesla.
This is the second in a two-part series exploring the pivotal debates shaping the future of autonomous vehicles in the US.
As US lawmakers look to establish a federal approach to regulating autonomous vehicles, competing technologies are emerging that could complicate questions over which sensor architectures and what types of testing are necessary to prove a system is safe.
As autonomous vehicle makers look to balance safety, scale and cost, a debate is growing over the two competing architectures dominating the market: sensor fusion and vision-only. The sensor fusion approach integrates data from multiple sources, including light detection and ranging (lidar), cameras, radar and ultrasonic sensors, to create a comprehensive environmental model. The model offers sensor redundancy but also costs more for manufacturers and consumers. By comparison, vision-only or camera-only systems rely solely on cameras and neural networks to infer depth and identify objects. They do not measure distance directly; instead, they use computer vision algorithms to interpret visual data.
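The difference between the two architectures can be sketched in simplified Python. This is an illustration only, not any company's actual perception pipeline; the fusion weights and the pinhole-camera depth formula are assumptions chosen for the example:

```python
def fuse_distance(lidar_m: float, radar_m: float, camera_m: float) -> float:
    """Sensor fusion: blend direct range measurements (lidar, radar)
    with a camera-based estimate. Weights are illustrative only."""
    weights = {"lidar": 0.5, "radar": 0.3, "camera": 0.2}
    return (weights["lidar"] * lidar_m
            + weights["radar"] * radar_m
            + weights["camera"] * camera_m)

def vision_only_distance(pixel_height: float, known_height_m: float,
                         focal_length_px: float) -> float:
    """Vision-only: no direct range sensor; depth is inferred from image
    geometry (here, the classic pinhole-camera relation)."""
    return known_height_m * focal_length_px / pixel_height

# Fusion combines three independent range estimates of the same object ...
fused = fuse_distance(lidar_m=49.8, radar_m=50.4, camera_m=51.0)
# ... while vision-only infers depth from a single camera image of an
# object with a known real-world height (e.g., a 1.5 m car).
inferred = vision_only_distance(pixel_height=30.0, known_height_m=1.5,
                                focal_length_px=1000.0)
print(round(fused, 2), round(inferred, 2))
```

The sketch shows why redundancy is the selling point of fusion (three estimates cross-check each other) and why vision-only is cheaper (one sensor type, with depth recovered in software).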
In crafting new laws and standards that will govern the industry going forward, regulators will need to determine whether safety standards should mandate specific hardware and software configurations or allow manufacturers flexibility in meeting benchmarks that are based on meaningful performance metrics.
"A federal framework for autonomous vehicles is not about picking winners," Sen. Ted Cruz (R-Texas), chairman of the US Senate Committee on Commerce, Science, and Transportation, said during a February hearing. "It is about setting clear rules, improving safety [and] creating American jobs."
Competing approaches
The two leaders in the autonomous vehicle space illustrate the architectural divide.
Alphabet Inc.'s Waymo LLC takes a sensor fusion approach, with its Waymo Driver system using over a dozen cameras, four lidar sensors, and radar units for 360-degree perception.
"While cameras on conventional cars can struggle with raindrops, road grime, and ice, our system features integrated cleaning systems to maintain visibility. In conditions where a camera's view may be limited, our lidar and radar provide the necessary redundancy to maintain the Waymo Driver's perception," the company said in a Feb. 12 blog post announcing the release of its 6th generation Waymo Driver. The new system was designed with a "streamlined configuration that drives down costs."
ARK Investment Management LLC estimates that Waymo autonomous vehicles have historically cost more than $100,000 to produce, with the sensor suite alone exceeding $40,000. By comparison, Tesla Inc.'s Model 3, with sensors included, costs $40,000, according to ARK.
The cost differential translates to operations: ARK projects Tesla's cost per mile could be 30% to 40% lower than Waymo's at scale, primarily due to Tesla's vertically integrated production and elimination of expensive lidar hardware.
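ARK's projection implies a simple relationship between the two fleets' per-mile economics. A minimal sketch, using a purely hypothetical baseline cost per mile for Waymo (the $2.00 figure is an assumption for illustration, not an ARK or Waymo number):

```python
def projected_tesla_cost(waymo_cost_per_mile: float, reduction: float) -> float:
    """Apply ARK's projected 30%-40% per-mile cost reduction to a
    hypothetical Waymo baseline."""
    return waymo_cost_per_mile * (1.0 - reduction)

baseline = 2.00  # hypothetical Waymo cost per mile in dollars (assumption)
low = projected_tesla_cost(baseline, 0.30)   # 30% lower than baseline
high = projected_tesla_cost(baseline, 0.40)  # 40% lower than baseline
print(f"Projected Tesla cost per mile: ${high:.2f}-${low:.2f}")
```

Whatever the true baseline turns out to be, the projected range scales linearly with it, which is why the hardware savings compound at fleet scale.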
Tesla, by contrast, has taken a camera-only approach since removing radar from its vehicles in 2021. That system has been deployed across more than 4 million vehicles, but Tesla does not publicly break out development costs or hardware expenses for its full self-driving system, making direct market comparisons difficult.
"Compared to cameras, lidar and radars are considerably more expensive to install, maintain and replace," Steve Wang, a partner at Caldwell who advises companies on intellectual property monetization and tech transactions, said in an interview. "So from a cost standpoint, those are the things that car manufacturers also had to take into consideration for hardware designs."
Beyond cost, Tesla said its camera-only approach offers another benefit: its self-driving vehicles blend in with other traffic.
"Unlike our competitors, our Robotaxi fleet blends in the markets we operate in since they don't have extra sensor sets or peripherals, which make them stick out," Tesla CFO Vaibhav Taneja said during an October 2025 earnings call. "This is an underappreciated aspect of our current vehicle offerings, which are all designed for autonomous driving."
Waymo and Tesla did not respond to emailed requests to discuss production costs.
Safety scenarios
In terms of safety, Waymo has developed what it calls a comprehensive "safety case" approach that extends beyond mileage accumulation. In a peer-reviewed study published in February, Waymo outlined its framework for "building a credible case for safety" through "determination of absence of unreasonable risk." The company has cited 32 peer-reviewed papers validating its approach, including studies comparing its crash rates to human benchmarks across 56.7 million rider-only miles.
Tesla, which began rolling out its robotaxis in Austin, Texas, in the middle of 2025, released a full self-driving safety report late in the year. This showed that Teslas with full self-driving enabled drove 5.1 million miles before a major collision, versus the US average of 699,000 miles.
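Tesla's reported figures work out to roughly a sevenfold difference, a ratio worth computing explicitly since the raw mileage numbers are the only public comparison available:

```python
# Figures as reported in Tesla's full self-driving safety report
fsd_miles_per_collision = 5_100_000   # miles per major collision, FSD enabled
us_avg_miles_per_collision = 699_000  # US average, per the same report

ratio = fsd_miles_per_collision / us_avg_miles_per_collision
print(f"FSD drove {ratio:.1f}x more miles per major collision")  # ~7.3x
```

The ratio is only as meaningful as the underlying methodology, which critics note is self-reported and not independently audited against comparable driving conditions.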
Critics of the camera-only approach argue that it does not offer the same level of safety as sensor fusion, especially in scenarios with reduced visibility, such as heavy rain or snow.
Jon Miller, chief business officer at Nexar, which supplies real-world driving data to autonomous vehicle developers, said in an interview that the path to autonomous vehicle deployment runs through comprehensive exposure to these rare, high-risk scenarios.
"There's a very common consensus that edge cases are the bottleneck to getting to mass market deployment," Miller said. "[Autonomous vehicles] need to prove that they can handle edge cases in order to be on the roads. And a framework for that does not exist today."
But executives in the camera-only space say those concerns are overblown.
"Our AI is strong enough, trained enough and sophisticated enough to handle those use cases," said Eran Ofir, CEO of autonomous driving software company Imagry Inc. "We drive at night, we drive in rain, we even drive on snow. We have a unique network that is handling unknown objects — sofas that fell on the road from a moving truck, rocks left in the street, a child on a sled in a snowy environment."
Imagry builds a vision-only, mapless autonomous driving stack that relies on cameras and software rather than lidar, radar or high-definition maps. The company's system is already being deployed in pilot public transportation projects across Europe and Japan, primarily in autonomous buses. According to Ofir, a fully equipped vehicle using its system can be built for roughly $70,000 — about a third of the cost of many lidar-heavy systems. Ofir said that in 2027, only Tesla and Imagry would be able to provide robotaxis at that price point.
"Every decision the machine has taken in the last few years was supervised, vetted by humans," Ofir said. "It's like sitting next to my daughter when she learned how to drive — that's how you cover the edge cases."