I am intrigued by Teslas and am considering purchasing one. However, I would feel better about the purchase if there were absolutely no Autopilot software or hardware on board. Ideally that would mean zero Autopilot code whatsoever, including non-Autopilot builds of all system and control software, so there is no chance the car could ever engage Autopilot.
Is it possible to buy a Tesla completely devoid of Autopilot, or is this type of accident potentially a risk in all of them?
Every Tesla you can buy new ships with Autopilot hardware and software for Level 2 driver-assistance features. Certain older builds have different hardware.
There is an additional paid option called "Full Self-Driving". It enables extra functionality such as Summon and the car changing lanes itself when you signal a lane change. The name is borderline fraud, as people who leased early Model 3s received nothing close to the functionality described.
Autopilot is never active unless the driver enables it. You pull a stalk to turn on adaptive cruise control, which only manages your speed and does no steering. Another pull of the stalk turns on Autosteer, which is really just fairly fancy lane-keeping assist. You can buy a Tesla and never use Autopilot. There is automatic emergency braking that you can't disable, but that's true of nearly all new vehicles.
It is an extra-cost option. Don't specify it and it won't be enabled. The hardware and software are still present in the vehicle, just turned off.
There has never been a Tesla incident, as far as I know, where the system turned itself on. Every single crash has followed deliberate engagement by the driver.
Unintended acceleration in Toyotas was blamed on bit flips caused by cosmic rays, and I think that explanation actually held up in the end. So unless you'd like to test the CPUs near nuclear reactors and electron guns, I'd say there is a (cosmically) small risk of Autopilot turning on despite never being enabled.
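To make that concrete, here's a toy C sketch (hypothetical names, nothing to do with Tesla's actual firmware) of how a single-event upset could flip a "disabled" flag to "enabled", and one common embedded mitigation: store the flag next to its bitwise complement and treat any mismatch as corruption.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical engagement flag stored with a redundant complement
 * copy. Real safety-critical ECUs use variations of this idea
 * (inverse-value storage, CRCs over state, ECC RAM). */
typedef struct {
    uint8_t value;      /* 0x00 = disabled, 0xFF = enabled */
    uint8_t complement; /* invariant: must always equal ~value */
} guarded_flag;

static bool flag_is_enabled(const guarded_flag *f) {
    /* A bit flip in either copy breaks the invariant, so we
     * treat corrupted state as disabled (fail safe). */
    if ((uint8_t)(f->value ^ f->complement) != 0xFF) {
        return false; /* corruption detected */
    }
    return f->value == 0xFF;
}

int main(void) {
    guarded_flag autopilot = { .value = 0x00, .complement = 0xFF };

    /* Simulate a cosmic-ray single-event upset: flip one bit. */
    autopilot.value ^= 0x01;

    /* A naive `if (autopilot.value)` now reads the flag as enabled;
     * the guarded check refuses the corrupt state instead. */
    printf("naive read:   %s\n", autopilot.value ? "enabled" : "disabled");
    printf("guarded read: %s\n", flag_is_enabled(&autopilot) ? "enabled" : "disabled");
    return 0;
}
```

Layered with ECC memory and watchdogs, this is why the residual risk is cosmically small rather than zero.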
Wow, that's crazy! I hadn't even thought about that, but I can totally see how it could happen. I imagine the only reliable way to prevent it would be multiple redundant systems operating by consensus.
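For what it's worth, that's roughly what avionics-style triple modular redundancy does. A minimal sketch of the voting step, purely my own illustration: three independent replicas of the state are compared bit by bit, and the majority wins, so one corrupted replica gets outvoted.

```c
#include <stdio.h>
#include <stdint.h>

/* Bitwise 2-out-of-3 majority vote: each output bit is set only
 * if at least two of the three replicas agree on it. */
static uint8_t majority3(uint8_t a, uint8_t b, uint8_t c) {
    return (a & b) | (a & c) | (b & c);
}

int main(void) {
    /* Three replicas of a hypothetical "engaged" flag, all disabled. */
    uint8_t r1 = 0x00, r2 = 0x00, r3 = 0x00;

    r2 ^= 0x01; /* a bit flip corrupts one replica */

    /* The two healthy replicas outvote the corrupted one. */
    printf("voted state: 0x%02X\n", majority3(r1, r2, r3)); /* prints 0x00 */
    return 0;
}
```

The same idea scales from a single flag up to whole redundant computers, provided the replicas fail independently.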
As programmers, we know that the only way to be 100% certain code will never execute is for it not to be present at all. I'm specifically asking whether that safety feature -- the complete absence of Autopilot -- is available as an option.
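To spell out why presence matters, here's a hypothetical C sketch contrasting a runtime gate, where the feature is compiled in and one bad condition away from running, with a compile-time gate, where the feature's code is simply not in the binary. The latter is the guarantee I'm after.

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical runtime gate: the feature code is still in the
 * binary, so any bug or corruption in this check can reach it. */
static bool autopilot_purchased = false;

static void engage_autopilot(void) {
    puts("autopilot engaged");
}

int main(void) {
    if (autopilot_purchased) {  /* runtime gate: can misfire */
        engage_autopilot();
    }

#ifdef AUTOPILOT_BUILD
    /* Compile-time gate: without -DAUTOPILOT_BUILD this call is
     * never compiled at all -- the "not present" case, which no
     * runtime fault can ever reach. */
    engage_autopilot();
#endif
    return 0;
}
```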
Can't you just not use it? The driver in the crash activated Autopilot deliberately shortly before the crash, then took his hands off the wheel. I can't comment on Autopilot's general reliability, but using it appears to be entirely optional...
As a software engineer, it's hard to trust code that you know has flaws, because one of those flaws could cause Autopilot to turn itself on. Extremely unlikely, I know, but it's a nagging thought. It would be nicer if there were a mechanical cutoff (not really practical in this case) so you could be absolutely sure.
I know, I know -- you can design software so that this type of flaw is impossible, but as a user I have no way of verifying that, and my trust has already been eroded by the very feature I'm trying to disable, so I really do want to verify it.
That can be enabled remotely: "Full Self-Driving Capability is available for purchase post-delivery, prices are likely to increase over time with new feature releases"
That means the hardware and software are present in the vehicles sold via the website. Hence my question.