On the Threshold of Full Self Driving

Elon’s recent comments about the progress of full self driving capability for Tesla’s cars point to the end of next year as when the machine intelligence will be sufficiently developed to do the job. Once that happens, it should get better fairly quickly because the fleet learning is largely AI-driven rather than human coded.

Elon said that he thinks Tesla will aim for about 2-3X better than humans in terms of accident statistics.

When we can actually start using it, however, will be gated mainly by certification and law. The recent Uber accident has not helped.

It’s worth thinking about what will be the decisive turning point that lets Tesla convince regulators that it’s ready.

In principle, it’s when you can show that the likelihood of loss is greater with a human driver, by an appreciable margin.

So Elon has set a target of 2-3X better at the start, as measured by overall statistics of human vs. AI accident data. (It will likely improve to 10-100X humans pretty rapidly due to the intrinsic power of machine learning to self-improve, like AlphaGo in the game realm.)

But there’s a nuance here.

It’s not enough to look at the overall averages, because that includes all drivers, some of whom have severe impairments, or simply aren’t that skilled. In fact, a disproportionate number of losses come from those drivers who have some disadvantage.

Thought experiment - if your kid’s friend is going to drive them somewhere, do you consider how competent they are? Do you encourage them to ride with someone who’s got it together, rather than someone with chronic accidents? I certainly do, and my kids do this now themselves. We don’t get into cars where we know the driver has deficiencies. Car accidents are the most likely cause of death if you’re 18. We go with someone else.

That illustrates a point: I think the benchmark won’t be gross actuarial data for all drivers, but rather the accident rates for the population of drivers who meet some threshold of skill.

Let’s say 80% of all accidents are caused by those below the threshold, and only 20% by those of demonstrated competency above it.

That would move the goalpost 5X.

That is, FSD should be 2-3X better than the segment of drivers who have very good skill.
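
To make the arithmetic concrete, here’s a back-of-envelope sketch in Python. The 80/20 accident split is the hypothetical from above; the skilled drivers’ share of miles is my own added assumption, and the shift factor depends on it (it approaches 5X as the below-threshold share of miles shrinks):

```python
# Back-of-envelope sketch of the goalpost shift. The 80/20 accident split
# is the thread's hypothetical; the mileage split is an assumption for
# illustration, not a real statistic.

accidents_good = 0.20   # share of accidents caused by above-threshold drivers
miles_good = 0.80       # assumed share of total miles they drive

# Accident rate of the skilled segment, relative to the all-driver average:
rel_rate_good = accidents_good / miles_good        # 0.25x average here

goalpost_shift = 1 / rel_rate_good                 # ~4X with these numbers
print(f"goalpost moves ~{goalpost_shift:.0f}X")

# "2-3X better than good drivers" restated against the overall average:
for k in (2, 3):
    print(f"{k}X better than skilled = {k * goalpost_shift:.0f}X better than average")
```

Under these assumptions, a 2-3X target against skilled drivers works out to roughly 8-12X better than the all-driver average.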

When the dust settles, I think lawmakers will want facts that show the AI driver is, say, 3X better than a very good human driver.

That would mean trusting your kid’s life to a machine is 3X safer than another human, even if you consider only good drivers. That’s a very salable proposition.

I think this will evolve in legislative discussion over the next year, and this higher threshold is likely where we’ll end up.

That said, I have very high confidence that we’ll get there. I just believe it matters how we arrive at certification. It must be inarguable that we’re all better off, so folks accept it with minimal pushback.

Being 12 months early but suffering a dramatic and violent failure that a skilled driver might have avoided - that could set certification back years because of irrational public outrage. It probably works better to have an unassailable position, to get to the finish line faster.

No system will have zero accidents, but if we’ve got data that says you’re 3X more likely to survive than with very good humans, then we’re fully into the new era, and there’s no stopping it.

I think Elon will likely tune how he presses for certification, based on some form of this threshold logic.

The faster we get all the way there, the less we get set back, and the more lives we save.

bill | March 24, 2018

I disagree with factoring in competency, since I believe you have to consider the average of all drivers.

Mark K | March 24, 2018

Another way to look at the competency dimension is to think about this scenario -

They play the video on national news (as with Uber’s recent accident), and you watch it.

If you’re a skilled driver and you feel you could have dodged that bullet, you disclaim the AI pilot.

If it performed heroically, better than most could see themselves doing, you’ll accept the accident as unavoidable, and not demand they ground the fleet.

To compare, on video of low-skill human drivers, we often find ourselves armchair quarterbacking their moves.

The AI pilot will be held to a much higher standard in the court of public opinion.

Whenever we play back the moves under the microscope, and we will, because there are 8 cameras logging it, we will expect the bot to be as good as the best of us. If not, we’ll say it killed the victim.

That’s the reckoning we face. It can and will be won, because we’ll keep adding AI and sensor power until it’s clearly superior to us. We just want to avoid the Luddite debate after a horrific event that costs us years of regulatory delay, and thousands of lives that could have been saved if we got there sooner.

cb500r | March 25, 2018

I also disagree with your competency conclusion.
The world is an average of everyone, so if the system is 2X better than average, not using it in general would still mean double the deaths.
I believe I'm a skilled driver. For sure I was heroic when I was young. My skill level now is to realize that I can get tired, worn out, unfocused... In all those situations, even being the best driver in the world, I think I would lose against a computer in dangerous situations.
Something that needs improving before we trust the cars more is visibility into what the system thinks. I'm frequently frightened by how close it passes parked cars, or cars in a turn; yesterday it hit the curb of a traffic island.
I would like to trust AI, but as long as Elon is not officially paying for something like this, I don't feel confident.

Mark K | March 25, 2018

Numerically, if it’s simply better than average, we’re ahead. That’s very rational.

Psychologically, the public and politicians often decide to do irrational, asymmetric things.

Legislators will push back on one case of a particularly errant bot, and let a thousand cases of humans behaving badly slide through. Machines are judged much more harshly than us.

SimpleSimon | March 25, 2018

Better than the average driver is not really a benchmark that's going to impress legislators or especially insurance companies (where the real power lies).

I imagine an exhaustive battery of tests, on closed courses, designed to measure full self driving against professional test drivers, testing for the most common AND the most uncommon scenarios. When the FSD performs demonstrably better than a skilled human in challenging conditions then the adoption of this technology will surge forward.

Still, those unlikely and rare situations where a driver has to choose between crashing into another car or driving into a row of hedges will always be a black mark against FSD if it doesn't make the same judgment a human would make, but that's going to have to be an acceptable shortcoming when weighed against the greater good.

jordanrichard | March 25, 2018

Not to be a wet blanket on this whole FSD thing, but relatively speaking, getting a car to drive on its own is the easy part. Getting every state to change its laws to allow FSD on their roads is the biggest hurdle.

PBEndo | March 25, 2018

I see only a small number of states allowing FSD initially. Most states will wait for conclusive data from those states to show that it is truly safer, on average, by a large margin. The data will take time to acquire so there may be a very slow roll-out nationwide.
Additional complicating factors:
1. Multiple systems - FSD technologies will vary from one manufacturer to another.
2. A given manufacturer may have different versions of AP on the road simultaneously, as Tesla does currently.
3. Will each new model of car require new FSD testing, as is currently done for crash testing? What level of changes/improvements will cross the threshold for a new round of testing before the car is allowed on the streets? Could an OTA software update disallow FSD in a car in a state that requires a new certification process?
4. States may allow FSD purely on testing or crash data results, but they may also impose certain hardware requirements, e.g. California may require more cameras or data logging capability while Nevada may enact a law requiring a certain number of sensors, LIDAR, etc.

I think most states will find the overall situation very complicated. They will be genuinely concerned about safety and their own liability and default to "erring on the safe side". For the state governments, "safe side" will mean no FSD even if it is actually safer. The last few states to allow FSD after the majority of states have approved it run the risk of legal action if it can be proven they are increasing danger by not allowing FSD.

Even if all of the above is wrong and the states all eagerly allow a smooth and timely transition to FSD, their decisions will be affected by lobbying efforts, by the various interested parties, to either allow or disallow FSD on a state-by-state basis. You can bet that manufacturers that are behind the curve on FSD tech will be pushing hard to keep it from being allowed until they are ready.

Teslaguy | March 25, 2018

Excellent discussion. All the points mentioned show me that this is going to be a long, drawn-out process... longer than Elon estimates.

Bighorn | March 25, 2018

Elon estimated 6 years last time I remember him mentioning it—I think he said 3 years to develop and 3 years to validate and get laws changed. I’m guessing that was about a year and a half ago?

carlk | March 25, 2018

The Uber accident made me realize one more advantage of Tesla's FSD approach. Everyone else is aiming at driverless ride hailing and taxi services. That requires full Level 5 approval, which may not be that easy to obtain, and without it those cars are pretty useless for their intended purpose. Tesla has the same goal eventually, but it could also start by releasing a slightly less capable system that needs a "driver" there paying some degree of attention, yet takes you from point A to point B with little or no intervention. That could already serve many Tesla buyers well.

Bighorn | March 25, 2018

The article said tech would take 5-6 years then years more for regulatory approval.

kevin | March 25, 2018

Many states are looking at autonomous vehicles and quite a few have passed legislation, at least to allow testing, including testing without a human operator.

http://www.ncsl.org/research/transportation/autonomous-vehicles-self-dri...

p.c.mcavoy | March 25, 2018

carlk | March 25, 2018
“... Tesla has the same goal eventually, but it could also start by releasing a slightly less capable system that needs a "driver" there paying some degree of attention, yet takes you from point A to point B with little or no intervention. That could already serve many Tesla buyers well.”
——————————

I’m not advocating one approach or the other as right or best, but some view systems that fall in the middle ground (Level 3ish, capable of largely full self driving in some conditions but still needing the driver to be prepared to take over) as a bad idea with actually fairly high risk. The argument is that drivers, seeing how well the system performs in some situations, quickly become complacent, expect it to always perform that well, and end up over-trusting the capability of the system. That was one of the areas cited in the NHTSA and NTSB investigations around the human-machine interface.

I know from my own behavior at times that I need to check myself on whether I’m mentally checking out too much when using AP1. I have not dug deeply into the new Cadillac system, where they tout full hands-free interstate driving, but I am curious what they may be doing, potentially via a vision system similar to the camera installed in the Model 3, to judge driver engagement, with some fashion of warning for distracted drivers.

Stiction | March 25, 2018

CA is still the biggest market for cars, and it does have pretty good weather for FSD, so I suspect it's a good bet that they will wrangle it out and the other states will adopt, perhaps with tweaks.

TeslaTap.com | March 25, 2018

I expect FSD to be rolled out in specific use cases first. I know some think FSD will be point to point, but at first it is more likely to be active on segments - perhaps onramp to offramp (more intelligent than EAP today). There could be many limitations at first too - perhaps no snow or ice. I worry some expect instant perfection with FSD (from any automaker). We may get FSD sooner with limitations, and that allows more development and testing on the hardest cases that are not supported at first.

bill | March 25, 2018

I think if they allow FSD functionality while requiring the driver to stay engaged, then it will be very easy for the technology, coupled with human oversight, to be far safer than a pure human driver.

I think they should allow all the capabilities while requiring the driver to pay attention. Probably by watching the driver's eyes and hands.

I am part of a study MIT is doing where they put three cameras in my car to record my head and my steering wheel, and also tap into the CAN bus so they can get telemetry from the car. This study has already been going on for at least two years. They are looking at more than just Teslas.

I am convinced that I am safer when I am driving with Autopilot on. How many times have you been distracted by something, drifted into the other lane as a result, and were only saved by the fact that another car wasn't there?

How many people die or are killed each year by people falling asleep at the wheel? That should also be eliminated by using Tesla AP. The only accidents will be where someone fell asleep and AP doesn't handle something like construction.

This technology can save lives today if implemented with the proper safeguards to prevent people from relying on it more than they should.

SimpleSimon | March 25, 2018

As I mentioned earlier, the insurance companies will decide how acceptable this technology is. Will they give discounts for FSD like they do for ABS or anti-theft devices, or will they add a premium for cars so equipped because they don’t have enough data to calculate risk?

There’s no point in rolling out a technology that you can’t insure.

jerrykham | March 25, 2018

Insurance will definitely be an issue. If you think people blame Autopilot for collisions a lot now (almost always incorrectly), wait until it really is the car driving, without a human being asked to keep their hands on the wheel. The manufacturer isn't going to want to take that liability. The occupant of the vehicle (no longer the driver, at least during FSD scenarios) isn't going to want to take it either (although the insurance company will surely try to put the burden on the occupant). Then cue up all the lawsuits attempting to show the fault was with the software...

carlk | March 25, 2018

@SimpleSimon That's why Tesla is starting to get involved in auto insurance. Insurance companies are only interested in profits, not in promoting new technology, especially new technology that could significantly reduce their revenues.

jordanrichard | March 25, 2018

P.c.mcavoy, I did do some digging and posted a thread in the General forum. To summarize, GM/Cadillac says not to use “Supercruise” in bad weather like rain or fog. It will not change lanes, requires a good GPS signal, only works on already-mapped routes, and requires On-Star, which is free for 3 years and a paid service beyond that. If after 3 years you decide not to use On-Star, Supercruise becomes disabled, because it requires the maps to be updated every 7 months. Those map updates happen via On-Star. So, don’t pay for On-Star and the maps become 7 months old, and Supercruise goes away.

p.c.mcavoy | March 25, 2018

@jordanrichard - My comment about the GM/Cadillac system was in no way intended to assert or imply that I felt it was a superior system to Tesla AP1 or EAP. My comment was that I’m curious what they are going to do to ensure that the driver stays engaged, per the human/machine interface requirements in the NHTSA guideline document.

The requirements to operate, such as a good GPS signal, only mapped routes, not to be used in bad weather, etc., are what the NHTSA guidance defines as the Operational Design Domain, which is one of the expected elements to be considered along with the Human Machine Interface. That’s the element I’m curious about: what GM points to in terms of the Human Machine Interface aspects of the system.

SbMD | March 25, 2018

@pc - the Cadillac system has a facial recognition camera that sits on the steering column and continually checks the driver. It looks to ensure that the driver's eyes are open and facing the road. If the driver takes their attention off the road, the system will warn the driver to return their attention or risk disengaging the system.

Mark K | March 25, 2018

Excellent comments, thanks all.

Bighorn - Yes, Elon has been sober from the start that it will take time for approval after the technical milestone is reached. The question is: how can we make the approval process wiser and less political? If we’re smart, we can save lives sooner.

Political warfare is one of the most destructive forces in the world today. It turns reasonable people with shared interests into enemies, shunting intellect into discrediting their counterparts instead of producing greater good. Charting a wise course through the social dimensions of FSD will bring this lifesaving technology to the world sooner.

CarlK - exactly right. Tesla is unique among the FSD competitors in choosing this AI shadow-assistant strategy for the global fleet, rather than narrow, local Level 5 autonomous taxi experiments. Assistants circumvent much of the regulatory confusion, while advancing the technology as the debate rages.

If we’re lucky, Tesla will quietly arrive at a point where it is so demonstrably safer, even politicians will find it beyond argument.

jordanrichard | March 25, 2018

P.c.mcavoy, oh I didn’t think you were saying that. I also misunderstood what you were saying: I misread the comment about a facial recognition camera as meaning one that faces outward and recognizes people in the car’s path.

It’s been a long weekend...

PBEndo | March 25, 2018

It's a good thing the right to drive one's car isn't protected by a constitutional amendment. Otherwise, FSD would never happen.

p.c.mcavoy | March 25, 2018

@jordanrichard - No problem. I appreciate your and @SbMD’s comments about what GM appears to actually be doing. As I said, I’ve not really had a chance to check into it, but it appears I was on the right track with my suspicion that they are using a camera/vision system to monitor the driver for engagement via facial recognition. I’m curious what the user prompts or “nags” are like with their system when the driver starts to look at their phone, etc.

jordanrichard | March 26, 2018

Caddy is using a green light strip built into the steering wheel, which is IMHO a pretty good idea, as it is the closest thing to you. Like the Tesla system, if you don't react after a certain amount of time, the system will kick off.

dborn @nsw.au | March 26, 2018

From experience, whatever Elon says in relation to timing, double it at least, and then plan for further extension of time in an open-ended fashion.
See new Nav system, new user interface, etc.

Mark K | March 27, 2018

Wow! 8% drop on the stock in one day.

Why? Many things - a fiery accident, flame articles on the Model 3 ramp, and FSD.

Really the biggest issue for the whole sector - new concern about regulatory delay affecting when autonomous driving will become available and profitable.

Shazaam!

On the heels of the Uber incident, and the WSJ article that wrongly implicates Autopilot in the fiery Model X crash, the market irrationally concludes that the FSD leaders don’t have the momentum they thought.

This will settle down, and the stock will resume its rise soon enough, but how surprising that the whole psychology issue with FSD comes to the fore so soon after this topic was posted.

Handling public psychology to certify FSD intelligently and rationally may do more to shorten the lead time than adding 200 more engineers.

The NTSB said they’re focused on the fire aspects and how to manage when it happens; though Autopilot was active, they’re not investigating that.

And the WSJ article weaves a thread that links the fire to the Uber accident under the umbrella of FSD (subtly and implicitly). And bang! 5 billion dollars disappears (temporarily).

I used to trust that newspaper, but when it comes to all things Tesla, even good news is somehow bad.

SbMD | March 27, 2018

The entire market was down. Wouldn't read too much into it.

Mark K | March 27, 2018

Yeah, Dow is down 1.4% overall on general grousing, but Tesla took an 8.2% hit.

SimpleSimon | March 28, 2018

NVIDIA took a similar 8% hit due to suspending its FSD testing.

hoffmannjames | March 28, 2018

Regarding measuring FSD's competency with respect to a human driver, I think a lot of the talk right now about FSD being X times better than the average driver is mostly speculative. So, why not just give the FSD system a comprehensive driving test, similar to what humans have to pass to get their driver's license? Some type of driving test would be the best way to measure, in a quantitative manner, how good the FSD is compared to a human driver. Have the FSD, with no human at all inside, complete a driving test with just a starting point and finishing destination in its nav, no preprogrammed route, that includes everything from highways to local roads, construction zones, residential zones, unexpected obstacles in the road, and changes in navigation. That would be a good test of the FSD system. And you could have a competent human driver do the same test and compare results.

I am sure the DMV already has similar tests that they require of FSD but I think requiring that the FSD get its "driving license" by completing a comprehensive driving test would be a good idea. You could also have a part of the test similar to the written part of a driver's license test where the FSD computer has to identify a ton of different road signs.
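
To sketch how such a "driving license" for FSD might be scored, here's a toy example; the scenario list and the pass-all rule are purely illustrative assumptions, not any actual DMV procedure:

```python
# Hypothetical pass/fail scoring for an FSD "driving license" test.
# Scenario names and the all-must-pass rule are illustrative only.

SCENARIOS = [
    "highway merge", "construction zone", "residential streets",
    "unexpected obstacle", "mid-route navigation change", "sign recognition",
]

def licensed(results: dict) -> bool:
    """Grant the license only if every scenario was handled without intervention."""
    return all(results.get(s, False) for s in SCENARIOS)

fsd_run = {s: True for s in SCENARIOS}
human_run = dict(fsd_run, **{"construction zone": False})

print("FSD licensed:", licensed(fsd_run))      # True
print("Human licensed:", licensed(human_run))  # False - failed one scenario
```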

SimpleSimon | March 28, 2018

@hoffmannjames You're a genius. That's an amazing thought, and I should know, because just 30 short posts above yours I wrote this: "I imagine an exhaustive battery of tests, on closed courses, designed to measure full self driving against professional test drivers, testing for the most common AND the most uncommon scenarios. When the FSD performs demonstrably better than a skilled human in challenging conditions then the adoption of this technology will surge forward."

Great minds!!

inconel | March 28, 2018

If a program passes the test are we guaranteed it will work well on all cars with their sensors potentially adjusted slightly differently (manufacturing tolerance)? At least for humans we test both together as we cannot yet decouple the sensors from the neural network and its underlying program :)

Mark K | March 28, 2018

That’s essentially the thesis of this thread. The bar will be set higher than just 3X the average of all human drivers.

It will have to be at least better than very good drivers, and likely much higher than that.

If we start defining and aiming for that necessary bar, I think we’ll end up getting certified faster overall, and greatly reduce the chance of a massive setback from public anger.

If we accept this logic early on, we’ll choose better strategies to get there.

bill | March 29, 2018

@hoffmannjames

"So, why not just give the FSD system a comprehensive driving test, similar to what humans have to pass to get their driver's license? "

What FSD System?

bill | March 29, 2018

We should test the current AP with a driver against a driver without AP and see how they differ.

Madatgascar | March 29, 2018

An even bigger fallacy than comparing the accident rates of FSD to all human drivers is comparing FSD accident rates to the rates for all cars. You’re comparing FSD systems that operate only in the safest car on the planet to the data set for all cars, including jalopies with no modern safety equipment that get in accidents just because their wheels fall off.

Mark K | March 30, 2018

Madatgascar - that’s mathematically quite true, if the metric is injury to occupants.

Risk of injury in a Tesla MS is dramatically lower than the average for all cars, especially in front-end collisions, where the MS has 3X the crumple zone. That would unfairly tilt the statistical evaluation in Tesla’s favor.

But if you look at accident rates separately from injury, you get a purer measure of the skill of the pilot.

Either way, all of these considerations argue for setting Tesla’s target higher than an average skill human.
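
One way to see the separation is to factor injuries per mile into a pilot term and a vehicle term. A small sketch with invented numbers (none of these rates come from real data):

```python
# injuries/mile = (accidents/mile) * (injuries/accident)
# The first factor reflects the pilot's skill (human or FSD); the second
# reflects the vehicle's crashworthiness. All numbers below are made up.

fleet = {
    # name: (accidents per million miles, injuries per accident)
    "average car, average driver": (2.0, 0.30),
    "Tesla, average driver":       (2.0, 0.10),  # same skill, safer structure
    "Tesla on FSD":                (0.8, 0.10),  # only this row changes the pilot
}

for name, (acc_rate, inj_per_acc) in fleet.items():
    print(f"{name}: {acc_rate * inj_per_acc:.2f} injuries per million miles")

# Comparing injuries/mile between the first and last rows mixes both effects;
# comparing accidents/mile (2.0 vs 0.8) isolates the pilot.
```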

Mark K | March 30, 2018

There’s another dimension to this whole topic - insurance.

If you think through the whole FSD transition, you can make an argument for Tesla offering its own insurance, to warrant performance and create cost efficiency to speed the transition.

After M3 production is humming and cash is really flowing, insurance can be a big lever to resolve consumer acceptance issues.

All problems are opportunities, waiting to be turned into profits.

carlk | March 30, 2018

Tesla does not have to turn on full driverless, the kind that does not require a driver in the car, in one shot. It can start by releasing "super AP" features and gradually improve to true FSD. I'd be very happy if the car could take me from home to work even if I still need to sit in the driver's seat, ready to take over on a moment's notice, but not required to pay constant attention all the time. That's another advantage of the Tesla model over the rest, who are only developing cars for driverless ride hailing or taxi services. They will likely start with a long phase-in stage with a backup driver in every car, but that makes those cars no different from human-driven cars for their intended purposes. During that time, probably a long time before true FSD is proved and approved, those companies will have to spend as much or more to run a few of those cars, while Tesla can sell a lot of cars for people to enjoy the benefit every day. Not to mention that all of them are continuously helping Tesla's machine learning system. I have no doubt that Tesla, with its thoughtful planning of this strategy, will come out the winner in this race.

p.c.mcavoy | March 30, 2018

“... ready to take over on a moment's notice, but not required to pay constant attention all the time.”

Am I the only one who finds a bit of a contradiction in that statement? Not sure how I’d know I needed to take over when I’ve not been paying attention. That’s also why some argue that a Level 3 type of system has inherent risks, due to drivers becoming overly reliant on the system such that they are not paying attention and are unaware when they do need to resume control.

hoffmannjames | March 30, 2018

@carlk

I definitely like your idea of Tesla releasing "Super AP" features. Of course, I think only owners who have purchased the FSD option should get them.

carlk | March 30, 2018

At this point you are. ;)

Moment's notice means that when FSD senses a situation that is confusing, say weather or road work, it will just slow down or stop safely and let the driver make a decision. At that point the system should already be Level 4.x and able to handle, say, 99.99%+ of situations totally independently. How much driver involvement is required, which can gradually decrease too, is between Tesla and the driver: how much confidence each has and how much responsibility each is willing to take. A car that does not need a driver inside will definitely need full legislative approval and likely will not be easy to have soon.

carlk | March 30, 2018

The above post was a reply to @p.c.mcavoy.

carlk | March 30, 2018

hoffmannjames
"I definitely like your idea of Tesla releasing "Super AP" features. Of course, I think only owners who have purchased the FSD option should get them."

That was what I was thinking too. It is super AP because it has already passed AP capability and moved into FSD territory, utilizing the full set of FSD sensors. The point is just that Tesla does not need to get to the point of true FSD (no driver needed in the car) before making the system useful to owners.
