Watched the self driving car special last night. If you watched it too, please share your thoughts here.
EDIT: And no one should call the segment "FUD". This is PBS NOVA, a series that has been running for 43 years straight now...
Wow, no one else saw this special?
I watched it and found it excellent, with a sobering look at both the need for and the challenges of autonomous driving. It did seem to irritate a few of the fanbois on this forum because it implied L4/L5 may be farther out than some are claiming. Since my wife and I don't have kids (to drive us around when we get old), and I am well aware of how dangerous driving is, I am rooting for L4/L5 within the next decade.
I love the tech, but we have kind of heard the economic justifications before with the current "Car When You Need It" companies like Reach, which are all going out of business. People just didn't find it useful. All self driving does is replace the driver; the reasons those companies are failing economically remain, and they apply to robotaxis too.
As Professor Rus from MIT points out, it's not the easy drive down the interstate with adaptive cruise and auto steer that's the issue; it's all the crazy things people do that other people easily adapt to. The exceptions to the rules. Musk is a bit disingenuous in saying all those millions of miles of driving data are useful; they aren't. It is the few exceptional nuggets that matter, and the sheer mass of data makes those harder to find. She explains that the hard work is putting the car in those situations and trying to program for them. Hence the show's example of that tricky intersection.
The other reason the robotaxi thing doesn't work out is that everyone needs the cars at the same time. Going to work and coming home from work is 90% of the driving, so robotaxis don't eliminate the need for lots of cars; buses and trains do that.
I think Google's head guy had the best reason for developing "self driving," and it was based on smart cars on smart highways, where it could nearly triple a road's carrying capacity. It would work best at rush hour on main commuter routes built to help the cars read the road.
Thank you for the thoughtful contribution Fish.
You had to love the DeLorean doing donuts on its own.
@billtphotoman - I share your hope that we will have L4/L5 by the time we Boomers/Gen X'ers need it, in the next decade. Sooner would be okay with me too. By year end 2019 - not going to happen.
I found it balanced and "enjoyable," especially with all I've learned before and after my delivery on 3/30/19. I also think the Google take on smart cars/highways was excellent and will get more attention; in my case, being in Boston, it would truly dial up the quality of highway life.
My wife & I don't have children either, and FSD is one of the reasons I bought a Tesla. We watched the NOVA show, and it was quite informative. Mostly opinion though, once the current state of development was described. I was a bit surprised they had as much of Musk on it as they did, although it was all footage from elsewhere, not developed for NOVA. I was also surprised the show's producers did not even touch on the widely differing business cases employed - how so many developers buy ICE cars, mount stuff on them, and then pay people to feed AIs and ride around in the cars, compared to Tesla mostly getting paid by its customers for all that instead of spending money on it, and how as a result Tesla has thousands of times more AI-feeding cars on the roads, etc.
While FSD was a reason to buy one, after a bit more than a year of ownership I have yet to even try EAP. So I'm not in a hurry to tell the judge not to worry, my car will drive me.
"Mostly opinion though, once the current state of development was described." -@gballant4570
Mostly informed, data-based analysis by the top science experts in the world on autonomous driving... a bit different.
Excellent broadcast, laying out the various approaches to FSD. Judging by the show, and my experience with Autopilot and Summon, we have a long way to go.
@FISHEV - "Musk is a bit disingenuous in saying all those millions of miles of driving data are useful, they aren't."
Of course they are. The more miles they accumulate, the more edge case situations they'll have to analyze. No, not every mile is equally valuable, but none of them are useless, if for no other reason than to reinforce what's already been learned.
Millions of miles of driving data are useful simply because they contain a LOT of edge cases. As an engineer, IMHO, it's clear to me that Tesla isn't streaming data from all the cameras on their cars back to the mothership all the time. Hell, they have a hard enough time streaming three (OK, four) cameras to a locally-attached USB flash drive. 99% of the video from those millions of miles of driving is analyzed by the car, determined to be "uninteresting", and discarded - though statistics are likely kept and reported about the "uninteresting" parts.
What they do have the ability to do, once the car analyzes video and determines something "interesting" has happened, is capture the video of the situation and send it back to the mothership to be incorporated into the learning data - which makes the same (or similar) situations in the future "uninteresting".
That's why the "millions of miles" are useful - the ability to extract and use the edge cases that they contain.
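To make the triage idea above concrete, here is a toy sketch of that "analyze locally, upload only the interesting clips, keep statistics on the rest" loop. Every name, structure, and threshold here is my own invention for illustration; this is not Tesla's actual pipeline.

```python
# Hypothetical on-car edge-case triage: score each clip against what the
# model has already learned, upload only the poorly-explained ("interesting")
# events, and keep only counts for the rest. All names/thresholds invented.

def novelty_score(clip_features, learned_model):
    """Return 0.0-1.0: how poorly the current model explains this clip."""
    return learned_model.get(clip_features, 1.0)  # unseen pattern => 1.0

def triage(clips, learned_model, threshold=0.8):
    uploads, stats = [], {"total": 0, "uninteresting": 0}
    for clip in clips:
        stats["total"] += 1
        if novelty_score(clip["features"], learned_model) >= threshold:
            uploads.append(clip)          # send back to the mothership
        else:
            stats["uninteresting"] += 1   # discard video, keep the count
    return uploads, stats

# Toy run: the model already "explains" highway cruising well, so only the
# unexplained swerve event gets uploaded.
model = {"highway_cruise": 0.05}
clips = [
    {"id": 1, "features": "highway_cruise"},
    {"id": 2, "features": "sudden_swerve"},
    {"id": 3, "features": "highway_cruise"},
]
uploads, stats = triage(clips, model)
print([c["id"] for c in uploads], stats)  # -> [2] {'total': 3, 'uninteresting': 2}
```

The key point the sketch illustrates: the value of "millions of miles" is not the raw footage, it's the filter that extracts the rare clips the current model can't yet explain.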
@dsvick and Frank99 - I don't doubt more data is useful, but as someone pretty familiar with the limits of digital cameras, I wonder about the completeness of the data. The forward cameras are pretty low resolution, and dynamic range is always a challenge with cameras. I also wonder, if the driver swerves to avoid something the car wasn't able to detect (a fist-sized rock in the travel lane, etc.), whether that occurrence is sent to mother Tesla.
Isn’t it millions with “B”?!
There was a failure to mention that, with the Brown crash (running under the semi trailer), he was driving an early Model S and, if memory serves, AP1. It might also have still been in the era when Tesla was using Mobileye; note the mention of ME at the beginning of the program. The system mistook the trailer for an overhead sign, and he was watching a Harry Potter movie on a portable DVD player.
The program also failed to mention that investigators found a 40% reduction in potential accidents when using AP. So while being factual, the program left out a few details.
I watched it and enjoyed it for the most part. I was really curious about that first Uber-AP-caused death (RIP), and found it interesting that Uber had overridden the car's AEB. I did not know prior to watching the show that the backup driver was distracted and watching a video on her phone. Also, I'm glad I live in Cali and not Florida, where an 18-wheeler can cut across a highway like in the two cases shown that involved fatalities.
I have been putting together a list of FSD edge cases, and was glad to see they showed a few of them.
"The program also failed to mention that investigators found a reduction of potential accidents by 40% when using AP, So while being factual, the program left out a few details." -@rlwrw
Just going from memory, I recall the show reporting Tesla's position that using AP results in fewer accidents than not using AP. Perhaps watch the show again, as you may have a "bias filter" in place.
FISH, what do you think is the better way to build and continuously improve software:
A bunch of people sitting around in a room thinking about all the scenarios drivers face, or
Data and videos from real life that can be used by AI to learn, in addition to people thinking about use cases?
Let's say the FSD team is looking at scenarios where animals jump out onto the street. Tesla software teams are going to think about the main scenarios: What should the car do on a straight road? On a curve? How big is the animal? If the car can't stop in time and a severe evasive maneuver is required, should it swerve left or right, depending on the velocities of the car and the animal crossing the street? Is there enough room on the road shoulder to swerve right? Etc. Just this one scenario of an animal jumping onto the road would require weeks or months of thinking through all the sub-scenarios and data points, identifying the decisions the car has to make, deciding on the "best" option (whatever that may be), and executing the maneuvers.
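To see how fast that hand-written case analysis multiplies, here is a toy rule-based sketch of just this one scenario family. Every rule, threshold, and unit is invented for illustration; it is not from any real FSD code.

```python
# Toy hand-written handler for one scenario family: "animal enters roadway".
# All rules and thresholds are invented for illustration only; the point is
# how quickly the enumerated sub-cases multiply.

def animal_response(speed_mps, stop_distance_m, animal_distance_m,
                    animal_size, shoulder_clear, on_curve):
    if animal_distance_m > stop_distance_m:
        return "brake"                       # can stop in time
    if animal_size == "small":
        return "brake"                       # don't swerve for small animals
    if on_curve:
        return "brake_hard"                  # swerving mid-curve is riskier
    if shoulder_clear:
        return "swerve_right"
    return "swerve_left_if_oncoming_clear"   # yet another sub-case to write...

# Even this toy version already has 2 sizes x 2 shoulder states x 2 road
# geometries x many speed/distance combinations to reason through by hand.
print(animal_response(25.0, 60.0, 80.0, "large", False, False))  # -> brake
print(animal_response(25.0, 60.0, 40.0, "large", True, False))   # -> swerve_right
```

Each branch in a sketch like this is something a team has to anticipate, debate, and test; learned systems instead extract these distinctions from recorded examples.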
Tesla software teams could spend the next 10 years identifying all the scenarios drivers face and it would not come close to what AI can learn from data and videos from millions/billions of actual miles driven.
As a resident here in Phoenix, watching the clip where the Uber car is driving on a dark road and the pedestrian suddenly appears out of the darkness just pisses me off.
IMHO, it's an intentionally modified video released by Uber to make the accident appear unavoidable, and every news team ignores that. The road the accident occurred on is exceptionally well lit - you could safely drive 60 mph on it with your headlights off and easily see a pedestrian 5 seconds ahead. It appears to me that Uber intentionally darkened the video. Rat b*stards, in keeping with their "frat brother" C-suite culture.
“Hey Frank, don’t hold back, tell us what you really think.”
It was an interesting program. If you are interested in low-level information about Tesla's FSD hardware and software, watch the Tesla Autonomy Day video on YouTube.
They provide a tremendous amount of information about the chip in HW3; it has a staggering amount of processing power.
-@bjrosen Thanks for sharing. I will take a gander, as I have the HW3 chip. The question is, are the cameras, ultrasonic sensors and radar a sufficient "feed" for it? My guess is, "No... not by a long shot..." I suspect the current fleet will be used to collect data and test things for the next generation (updated or otherwise) M3, Y, PU truck fleet... You heard the prediction here first ;-)
While I have zero "evidence" to say you are wrong, I would seriously hope and expect that Tesla's engineers took into account things like required sight distances, required views and sensors, etc. Gathering those requirements seems like the very first thing you would do when designing the system.
Missing them would be a HUGE oversight and something they would likely never recover from.
I don't have any facts or evidence either, other than what's publicly available like sensor range, etc. I base my "opinion" on my engineering education and background only.
^^^ Tesla could always offer a retrofit program for the sensors, should what I predict become fact - much like the rumored HW2.5-to-HW3 retrofit program.
I do agree with that, and they would have to IMO to save consumer faith. If I found my car could not do what was advertised, I would not be happy and would not be looking to buy another Tesla anytime soon. I am fine with the incremental process and building to that. But if I was suddenly told never? ...
-@Joshan " If I found my car could not do what was advertised I would not be happy and looking to buy another Tesla anytime soon."
^^^ THIS has already happened to "earlier than M3" Tesla customers/owners (EAP and FSD promises, broken). There was even a lawsuit and settlement because people didn't receive what they paid for, or didn't receive it in a timely fashion. FSD is the poster child and continues to be. Those of us with limited "FSD functionality" have nothing more than rebranded EAP functionality. Today's "FSD" is 100% contained within the older EAP package - meaning if you are an EAP owner, purchasing FSD gets you absolutely no additional functionality today, only a promise for tomorrow... Feel free to correct my understanding if I have misspoken.
Nope, that's all correct as far as I know. I did buy FSD even though I already had EAP, as I am hoping it gets me HW3 sooner rather than later. Foolish? Maybe... time will tell.
Any news on when you will get your free retrofitted HW3 computer? There's all this talk about "feature complete" by year end, yet I haven't seen, read or otherwise heard that Tesla has even started to roll out the retrofit program. How can both statements ("FSD by year end"... "No HW3 upgrades as promised to HW2.5 owners"...) be accurate in the same universe, where the laws of physics are the same?
"Feature complete" is a software term. It does not mean you are done; it just means that all of the features required for the minimally viable product are in the code to meet the requirements in the design doc.
I do not think every car having HW3 installed is a requirement for that statement.
Last I read on Twitter, Elon said starting in December, if I remember right. It will be a cluster F though... They are already having service stress, and adding in a couple hundred thousand (no clue on the real number) retrofits is a LOT of man-hours.
After I re-read it, my last post made it sound like I believed it would be feature complete by the end of this year. I am NOT saying I am drinking that Kool-Aid, was just responding in general, hah!
-@Joshan Agreed, "feature complete" is not well defined as communicated. I hope your new computer arrives in December. I read that Tesla plans to set up high-throughput HW3 installation centers solely for the purpose of doing all the upgrades. Hopefully that is true.
-@jebinc I agree with you; the current sensor suite doesn't seem to be reliable enough for the job. On numerous occasions I've seen the car disable simple things like lane assist. Sometimes it's fairly obvious why (it's raining), but sometimes it's not as clear. To really do FSD, the car's sensors need to operate in a wide range of conditions.
The HW3 chip, on the other hand, is incredibly impressive. I've been designing processors since the 1970s, including a couple of supercomputers, and I think they've made all the right choices. The overall performance is 140 trillion operations per second. To emulate that on standard CPUs would require something like 10 or 20 thousand processors. It's not an apples-to-apples comparison: most of their operations are very simple 8-bit multiply/adds, while general purpose CPUs operate on 32- or 64-bit integers and 64-bit floating point numbers. But the beauty of designing a custom single-purpose processor is that you can do just what you need to do and not waste gates - or, more importantly, power - on functionality you don't need. They looked at the power cost of every operation and optimized the design so that it could do what they need and still fit in their power budget of 100W; they came in under that, at only 70W. That's considerably less than a high-end desktop CPU.

What I can't judge is whether the overall problem is well enough understood, or whether neural nets are a sufficient solution to it; I'm skeptical.

BTW, at the end of the video Musk says the next generation will be about 3X over this generation, and that sounds right to me. They've made the big leap from a more general-purpose architecture, in the Nvidia chips they used in HW2.5, to a fully custom architecture in HW3, which yielded a 22X performance jump. But that's done; future improvements will just be Moore's law scaling. The HW3 chip is built in a 14nm process, and the next one will be in a 7nm process. Because this is an embarrassingly parallel problem, they can more or less scale the number of components with the process density, so a 2X to 3X increase in processing power is certainly doable.
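The "10 or 20 thousand processors" and "2X to 3X" figures above can be sanity-checked with quick arithmetic. The 140 TOPS and 14nm/7nm figures are from the post; the ~10 giga-ops-per-second of comparable 8-bit work per general-purpose CPU is my own rough assumption.

```python
# Back-of-envelope check on the HW3 numbers quoted above. The 140 TOPS
# figure is from the post; the ~10 GOPS-per-CPU estimate is an assumption.

hw3_ops_per_sec = 140e12          # 140 trillion ops/s (quoted)
cpu_ops_per_sec = 10e9            # assume ~10 GOPS of useful 8-bit work/CPU

cpus_needed = hw3_ops_per_sec / cpu_ops_per_sec
print(f"CPU-equivalents: {cpus_needed:,.0f}")  # lands in the quoted 10k-20k range

# Process scaling: 14 nm -> 7 nm naively quadruples transistor density for
# the same die area, but power and yield limits eat some of that, so the
# quoted 2X-3X next-generation gain is plausible.
density_gain = (14 / 7) ** 2
print(f"Naive density gain: {density_gain:.0f}x")
```

So the poster's "10 or 20 thousand CPUs" claim holds under a reasonable per-CPU throughput assumption, and the 2X-3X projection sits comfortably below the naive 4X density ceiling.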
Well said. Agree 100%
@Frank99. Interesting info about the road. From what little I have seen of that video, I think you are right that Uber must have darkened it. Thanks for sharing.
I also have an engineering background, no facts other than what Tesla publishes, a fair amount of knowledge about lenses, sensors, etc., and I am in the camp of the sensors not being up to true L4/L5. The strongest case I have for the sensors being inadequate is some back-of-the-envelope math based on the side cameras' published 60m range. If I want to make an unprotected turn onto a road with a 40 MPH or higher speed limit (about 60 feet per second), the side cameras can only "see" a car or motorcycle about 3 seconds away. That isn't nearly enough time, in my opinion, and as speeds climb it only gets worse. So I think a retrofit of the side cameras with camera(s) as capable as the front cameras would be required, plus a wiper system to keep the glass in front of them clear of rain. Given the brain power Tesla has, I am sure they are aware of this. I just suspect the original requirement for "FSD" probably only covered travel on controlled-access roads; otherwise the side cameras would be as capable as the front cameras. I have yet to find anywhere Tesla claims "FSD" = SAE L4/L5. If that is the case, they aren't really on the hook for giving L4/L5 to people who bought "FSD". I will stick with EAP until my next Tesla in 2029 or so. I am quite confident Teslas with fully baked L4/L5 will be available by then, and I do think Tesla will get there before the other automakers.
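The back-of-the-envelope math above checks out; the only input beyond unit conversions is the 60m published camera range quoted in the post.

```python
# Reproduce the side-camera sight-time estimate from the post above.
# The 60 m range is the published spec quoted there; conversions are exact.

camera_range_m = 60.0
cross_traffic_mph = 40.0

range_ft = camera_range_m * 3.28084            # ~197 ft
speed_fps = cross_traffic_mph * 5280 / 3600    # ~58.7 ft/s

warning_time_s = range_ft / speed_fps
print(f"Time to react at {cross_traffic_mph:.0f} mph: {warning_time_s:.1f} s")
# about 3.4 s, consistent with the "~3 seconds" estimate above;
# at 55 mph the same range gives only ~2.4 s
```

Since the warning time scales inversely with cross-traffic speed, every increase in the posted limit shrinks the margin, which is the core of the argument.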
@billtphotoman - I thought Elon stated in the Autonomy Day talk that FSD will eventually be level 4. What I think gets too little discussion is what FSD actually is.
In my mind, Tesla's FSD will be a geo-fenced system with clear limitations. Before the regulators approve it, Tesla will offer FSD that requires nags; there is no other way to release before regulators give approval. So FSD may not be entirely hands-free or attention-free at first.
It may well work on 99% of routes, taking you from home to work and other locations, but there will be areas it will either avoid or not go. There will be weather conditions it can't handle - whiteouts, hurricanes, etc. Some restrictions may be relaxed over time, but it could be many years until FSD handles everything. And there are some situations that really should be avoided, but humans drive them anyway (often to their peril). Should FSD attempt these "bad" cases? Probably not.
Perhaps level 5 can handle unmarked dirt roads, complex intersections (Osaka, Japan has a crazy 8-way intersection I've been through), Rome's 4-5 lane traffic circles, and many other weird cases. That will take quite a bit longer. Yet FSD with limitations could still handle 99% or more of daily driving routes for most people and should be able to drive around geo-fenced problem areas.
Back on topic - I also liked the PBS special. My minor note is that quite a few of the Tesla cases were old AP1 with Mobileye and/or old software that has since been superseded by new hardware and software. Tesla's rapid advancements are hard to capture in such a review. I wish they had shown some of the many videos of AP actually preventing crashes.
When I purchased my FSD, this was the description: https://electrek.co/wp-content/uploads/sites/3/2018/06/Screen-Shot-2018-...
Note that it does not state L4 or L5, just a description of what it will someday do. To me the description sounds like L4 ("in almost all circumstances" is not "all"). However, it does imply someone in the driver's seat, with no action required. So I guess no napping in the back seat?
Our math is one and the same. Personally, I think we will just continue to see L2 improvements and some "beta" stuff that will forever remain "beta," unless a sensor upgrade and rear radar are added to existing cars. I suspect Tesla will quietly upgrade sensors and such as the M3 lineage ages. For us existing owners, it's a "TBD".
"Of course they [millions of miles of boring Tesla driving] are [useful]." -@dsvick
Nope, and ALL the experts explain why, from different perspectives. That stuff is easy; it is the difficult situations that are the key to developing autonomous driving. They go over some examples in the narration to demonstrate this - the odd intersection, the wrong-way driver.
Why did Tesla break up with MobilEye?
"its an Intentionally modified video released by Uber to make the accident appear unavoidable" -@Frank99
No one has ever said Uber modified the video. There was an entire NHTSA investigation, including that video from the car's cameras. Once someone was killed, all that data became property of the police, not Uber. You can hear the police going over this with the Uber rep in the video.
The blame went to:
1. Driver for not looking at road.
2. Uber for turning off Volvo's auto braking.
3. Uber, for having hw/sw that saw the biker with enough time to stop but neither stopped the car nor alerted the driver.
4. Biker for crossing illegally.
No one can convince me that there are very many Tesla buyers of FSD who do not know exactly what they are investing in. Those who actually might not are either lazy, stupid, or both.
"What do you think is better way to build and continuously improve software" -@95Dawg
Have no idea. The show outlined how current research is done by the leading players.