Elon’s recent comments about the progress of Full Self-Driving capability for Tesla’s cars point to the end of next year as the time when the machine intelligence will be sufficiently developed to do the job. Once that happens, it should improve fairly quickly, because the fleet learning is largely AI-driven rather than human-coded.
Elon said that he expects it to be about 2-3X better than humans in terms of accident statistics.
When we can actually start using it, however, will be gated mainly by certification and law. The recent Uber accident has not helped.
It’s worth thinking about what will be the decisive turning point that lets Tesla convince regulators that it’s ready.
In principle, it’s when you can show that the likelihood of loss is greater with a human driver, by an appreciable margin.
So Elon has set a target of 2-3X better at the start, as measured by overall statistics of human vs. AI accident data. (It will likely improve to 10-100X humans pretty rapidly due to the intrinsic power of machine learning to self-improve, like AlphaGo in the game realm.)
But there’s a nuance here.
It’s not enough to look at the overall averages, because that includes all drivers, some of whom have severe impairments, or simply aren’t that skilled. In fact, a disproportionate number of losses come from those drivers who have some disadvantage.
Thought experiment - if your kid’s friend is going to drive them somewhere, do you consider how competent they are? Do you encourage them to ride with someone who’s got it together, rather than someone with chronic accidents? I certainly do, and my kids do this now themselves. We don’t get into cars where we know the driver has deficiencies. Car accidents are the leading cause of death if you’re 18. We go with someone else.
That illustrates a point: I think the benchmark won’t be gross actuarial data for all drivers, but rather the accident rates for the population of drivers who meet some threshold of skill.
Let’s say 80% of all accidents are by those below the threshold, and only 20% are with those of demonstrated competency above the threshold.
Under that split, if above-threshold drivers also account for most of the miles driven, their accident rate is roughly a fifth of the overall average. That would move the goalpost about 5X.
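The goalpost arithmetic above can be sketched in a few lines. All of the numbers here are hypothetical: the 80/20 accident split comes from the thought experiment, and the share of miles driven by above-threshold drivers is an assumption I've added for illustration (the closer it gets to 100%, the closer the shift gets to a full 5X).

```python
# Illustrative sketch of the threshold argument (all numbers hypothetical).
# Assumes accidents split 80/20 between below- and above-threshold drivers,
# and that above-threshold drivers log most of the miles driven.

overall_rate = 1.0            # normalized accident rate across all drivers
good_accident_share = 0.20    # share of accidents caused by above-threshold drivers
good_mileage_share = 0.90     # assumed share of miles driven by that group

# Per-mile accident rate for the above-threshold group alone.
good_rate = overall_rate * good_accident_share / good_mileage_share

# How far the goalpost moves when the benchmark switches from the
# overall average to the above-threshold group.
goalpost_shift = overall_rate / good_rate
print(f"goalpost moves {goalpost_shift:.1f}X")  # ~4.5X under these assumptions

# An AI that is 3X better than the above-threshold group is then roughly
# goalpost_shift * 3 times better than the overall average.
effective_target = goalpost_shift * 3
print(f"effective target vs. overall average: {effective_target:.1f}X")
```

With these assumed numbers the shift is about 4.5X, and an AI that clears the 3X bar against good drivers would be roughly 13X better than the overall average - which is why benchmarking against skilled drivers is such a different bar than benchmarking against everyone.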
That is, FSD should be 2-3X better than the segment of drivers who have very good skill.
When the dust settles, I think lawmakers will want data showing that the AI driver is, say, 3X better than a very good human driver.
That would mean trusting your kid’s life to a machine is 3X safer than trusting it to another human, even when you count only good drivers. That’s a very salable proposition.
I think this will evolve in legislative discussion over the next year, and this higher threshold is likely where we’ll end up.
That said, I have very high confidence that we’ll get there. I just believe it matters how we arrive at certification. It must be inarguable that we’re all better off, so folks accept it with minimal pushback.
Being 12 months early, only to see a dramatic and violent failure that a skilled driver might have avoided - that could set certification back years because of irrational public outrage. It probably works better to hold an unassailable position and get to the finish line faster.
No system will have zero accidents, but if we’ve got data that says you’re 3X more likely to survive than with very good humans, then we’re fully into the new era, and there’s no stopping it.
I think Elon will likely tune how he presses for certification based on some form of this threshold logic.
The faster we get all the way there, the less we get set back, and the more lives we save.