> Right now if you dropped a Waymo and Tesla FSD on an unsealed road in Michigan, one of them will drive at least as well as a human learner driver and the other will probably refuse to move
One will drive and likely get into an accident, and the other will responsibly not do what it wasn't designed for. But if you took away Waymo's responsible disposition, I'm sure it would drive better than the Tesla, because it's just way further along in development: more and better sensors, more rigorous testing and simulation, and more corner cases that absolutely had to be dealt with to roll out a real self-driving product.
What are you basing this prediction on? There is extensive independent documented video evidence of FSD beta driving and navigating on unsealed roads in Michigan with no human intervention required. This includes driving at night, in the rain, and with a diverse array of obstacles.
> [Waymo is] just way further along in development
This may well be true. However, there's no publicly available information to make any objective assessment. For better or worse (and I see valid arguments both ways), Tesla is airing their dirty laundry in public for all to see: their system can be assessed by objective third parties in privately owned vehicles, without the blessing or oversight of the parent company.
By comparison, we have no way of assessing Waymo's stack objectively. For example, we don't know to what extent corner cases are embedded into the software or handled by humans remotely. Was the 15th Avenue incident resolved by human intervention or by an improvement to the stack? To what extent are their vision and planning stacks capable of universality, or are they over-fitted to regional specifics? No objective analysis is possible, so no comparison is possible.
As someone who works at one of the "actual AV companies", there are just as many examples of FSD making mistakes that show Tesla's approach is broken in unacceptable ways, so assuming it will get in an accident is very fair.
What laypeople don't seem to get is that AV mistakes don't follow the same "scale of alarm" as human mistakes.
Tesla is making mistakes that, for a human, might not be world-ending, but for an AV make absolutely no sense.
Classification issues in AVs are supposed to be the Achilles' heel, but Tesla is still running into situations where perception works correctly and the AV just completely ignores ground rules that any serious AV must be built around.
> Was the 15th Avenue incident resolved by human intervention or an improvement to the stack? To what extent is their vision and planning stacks capable of universality or over-fitted to regional specifics? No objective analysis is possible, so no comparison is possible.
It doesn't matter when the product you're comparing it to ignores stop signs. Tesla's FSD approach is the definition of the local maximum problem for AVs. It makes progress on the axis that impresses laypeople by sacrificing progress on the axes that matter to the long-term success of an AV. Geofencing and arbitrary sensor limitations are just the tip of that iceberg...