Robotaxis Need Lidar. Tesla Needs Scale.

Robotaxis use lidar because they need redundant perception without a human fallback. Tesla avoids it because its autonomy bet depends on cheap cameras, neural networks, and millions of consumer cars collecting data.

Peter Soida · May 11, 2026 · 5 min read

Most debates about self-driving cars get stuck on the wrong question.

Is lidar better than cameras? Is radar enough? Is Tesla reckless for avoiding lidar? Is Waymo overengineering the problem?

The better question is simpler: what kind of autonomy is the system trying to sell?

A robotaxi and a consumer car are not the same product. They may both promise “self-driving,” but they live under different constraints. A robotaxi has to operate without a human fallback. A consumer car, at least today, is still usually sold as a driver-assistance system with a human in the loop.

That difference explains why Waymo uses lidar and Tesla does not.

The Same Problem, Two Different Businesses

Waymo’s car is not just a car. It is a service vehicle.

The customer does not buy it. The customer rides in it. There is no owner sitting behind the wheel, ready to correct the system. The vehicle has to understand the world, make decisions, and complete the ride without asking a human driver to intervene.

That is why Waymo’s sensor stack is built around redundancy. Its system combines lidar, cameras, radar, and compute. Lidar gives the vehicle a detailed 3D view of its surroundings by sending out laser pulses and measuring how they return. Cameras interpret visual information. Radar helps with distance, speed, and motion.
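The ranging principle behind lidar is simple time-of-flight arithmetic: a pulse travels out to a surface and back at the speed of light, so the round-trip time gives the distance directly. A minimal sketch of that calculation (illustrative only, not any vendor's implementation):

```python
# Time-of-flight ranging: the measurement principle behind lidar.
# A laser pulse travels to a surface and back; the round-trip time
# at the speed of light yields distance = (c * t) / 2.

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to a surface, given the pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A pulse that returns after ~200 nanoseconds hit something ~30 m away.
print(f"{distance_from_round_trip(200e-9):.1f} m")  # 30.0 m
```

Repeating this measurement millions of times per second across a sweep of angles is what produces the dense 3D point cloud the rest of the stack consumes.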

For a robotaxi, expensive sensors can make economic sense because the vehicle is a revenue-generating asset. If a car is operating all day as part of a fleet, the cost of the sensor stack can be spread across thousands of rides.
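The amortization argument is easy to make concrete. With hypothetical numbers (the figures below are illustrative assumptions, not actual Waymo costs), even an expensive sensor stack shrinks to pocket change per ride:

```python
# Back-of-the-envelope amortization. All figures are assumptions
# chosen for illustration, not real fleet economics.
sensor_stack_cost = 50_000        # assumed hardware cost, USD
vehicle_lifetime_years = 4
rides_per_day = 30

total_rides = vehicle_lifetime_years * 365 * rides_per_day  # 43,800 rides
cost_per_ride = sensor_stack_cost / total_rides

print(f"${cost_per_ride:.2f} per ride")  # $1.14 per ride
```

A consumer car enjoys no such amortization: the buyer pays the full sensor bill up front, which is exactly why the same hardware is a tax in one business model and a rounding error in the other.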

Tesla is building for a different world.

Tesla wants autonomy to emerge from mass-market cars already on the road. That means the system has to work inside vehicles that can be produced, sold, and upgraded at enormous scale. In that model, every expensive sensor becomes a tax on distribution.

So Tesla’s bet is not just technical. It is economic.

Robotaxis can afford complexity because they are fleet infrastructure. Tesla needs simplicity because it wants autonomy to scale through consumer vehicles.

Lidar Is Not Elegance. It Is Insurance.

Lidar is attractive because it gives an autonomous system a direct way to measure the geometry of the world.

A camera sees pixels. A radar detects objects and motion. Lidar helps build a precise 3D map of nearby surroundings.

That matters when the vehicle has no human backup.

If a robotaxi is driving through a city, it needs to detect pedestrians, cyclists, construction zones, unusual vehicles, unexpected obstacles, and changing road conditions. A redundant sensor stack gives the system multiple ways to interpret the same environment.
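What redundancy buys can be shown with a toy sketch. This is deliberately simplified, a majority vote over boolean detections, whereas real stacks fuse raw measurements with far more sophisticated probabilistic methods; but it captures why independent modalities matter:

```python
# Toy illustration of sensor redundancy (not a production fusion algorithm):
# each modality independently reports whether it sees an obstacle, and a
# simple two-of-three vote decides. Real systems fuse raw measurements.

def obstacle_confirmed(camera: bool, lidar: bool, radar: bool) -> bool:
    """Treat an obstacle as real if at least two of three sensors agree."""
    return sum([camera, lidar, radar]) >= 2

# A camera blinded by sun glare is outvoted by lidar and radar.
print(obstacle_confirmed(camera=False, lidar=True, radar=True))   # True

# A single spurious radar return does not trigger a phantom brake.
print(obstacle_confirmed(camera=False, lidar=False, radar=True))  # False
```

The point of the sketch: a failure in any single modality degrades the system instead of breaking it, which is precisely the property a vehicle without a human fallback has to have.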

This is not elegance.

It is risk management.

The same logic appears in some advanced consumer automation systems as well. Mercedes-Benz, for example, uses a mix of radar, lidar, ultrasonic sensors, cameras, and positioning systems in its Drive Pilot system.

The reason is straightforward: once the car takes more responsibility from the driver, redundancy becomes harder to avoid.

When the car is responsible, the system needs backup layers.

Tesla Is Betting That Vision Will Scale

Tesla’s argument is different.

If humans drive mostly with vision, Tesla argues, cars should eventually be able to do the same with cameras and neural networks. The company’s Full Self-Driving product is still supervised and requires active driver attention, but the broader Tesla autonomy strategy depends on learning from a massive fleet of consumer cars.

In this view, lidar is not just expensive. It may be a distraction.

If the hard problem is visual understanding, then adding lidar can create the feeling of progress without solving the deeper intelligence problem. A car still needs to understand lanes, signs, intent, motion, edge cases, and human behavior.

Tesla’s advantage, if it works, is scale.

Millions of cars can collect driving data. Millions of miles can feed training loops. Software can improve across the fleet. The sensor suite can remain cheaper and easier to distribute.

That is the core of the Tesla bet: autonomy as a data network, not a specialized hardware stack.

The Real Split Is Control vs Scale

This debate is often framed as safety versus ambition, or engineering versus ideology.

But the deeper split is between two strategies for building intelligence.

Waymo is building autonomy as a controlled service. It can choose the cities, map the roads, maintain the fleet, inspect the sensors, and operate inside a managed environment.

Tesla is trying to build autonomy as a scalable consumer platform. It wants the system to work across many places, many drivers, and many cars without turning every vehicle into a lidar-covered robotaxi.

Both approaches have logic.

Waymo’s approach is more operationally conservative. It makes sense when the company owns the service experience and cannot depend on a driver.

Tesla’s approach is more scalable if the underlying intelligence becomes good enough. But that is a big “if.” Cameras plus neural networks do not automatically equal human perception. The fact that humans drive with eyes does not mean machines can easily do the same.

The Sensor Stack Reveals the Business Model

The lesson goes beyond self-driving cars.

Technical architecture usually reveals the business model underneath.

Waymo needs lidar because it is selling rides without drivers. Tesla avoids lidar because it is trying to turn ordinary cars into a distributed autonomy network.

One system optimizes for reliability inside a managed service. The other optimizes for scale across consumer hardware.

That is the real tradeoff.

Builders face versions of this decision all the time. Do you build a more controlled, expensive, reliable system? Or do you build a simpler system that can improve through scale?

Robotaxis need lidar because they need redundancy.

Tesla needs scale because its entire autonomy story depends on the fleet.

Next read: Tesla Is Betting on Data, Not Sensors.

Sources / further reading: Waymo Driver sensor stack, Mercedes-Benz Drive Pilot, Tesla Full Self-Driving Supervised.
