Some folks have no problems, other folks have problems a and b, and still other folks have problems c and d. Same version of software. Any hypotheses?
Without specific examples of the differences, I'd have to say that with neural-net processing, and with no two situations being exactly the same, we should expect differences in behavior - not drastically different, but certainly not exactly the same. If you are talking about basic functionality, it could be explained by differences in hardware. With vehicles that are undergoing continuous changes, I would expect software to behave differently with different hardware.
Some of it, I think, is different versions of hardware; some cars may have different components from different sources that do the same thing.
@JarvisM3: I agree.
Question is - how many variations are there in each hardware series (i.e., 1, 1.5, 2, 2.5)?
I think it is a matter of expectations. I find that sometimes (especially after an update with new behaviors) I get impatient with the autopilot and start expecting more than the system is capable of. After a while, I get accustomed to what the system can do and I don't get frustrated with it. Most of the things people report as "problems" are just limitations of the system.
To be clear, I'm not saying no one has had a legitimate "problem" with autopilot. I'm saying a lot of the issues I read about are just due to expecting too much from the software at this point.
As others have noted - hardware variability, condition variability, order of operations variability (this was particularly important in the "car stays *on* when I get out!" issue because it was related to when the foot came off the brake in relation to the door being opened, as one example), level of annoyance variability (some people don't consider things issues that others might). To some degree, it may also be related to which updates have been added in the past, since some cars skip some updates. There may be a small packet here or there with a bit of duplicate code from a prior update that's now creating a bug because it doesn't know which to go with -- or it's trying to do something already done, or it thinks something was done that wasn't (camera on, for example).
Many seem to think software is smart. It's not. The designers are, but when you have multiple people working on multiple versions, a small typo or slight syntax change can make a big difference. I'd expect they have a number of software editors for this, but people are still human and can miss things. I'd expect critical systems code doesn't change often (which is why the car will still work while rebooting), but other "layers" that we get in updates can have larger perceived impacts than one would expect. Also -- butterfly effect: we changed this thing over here for the auto-wipers (which rely on the cameras) to make them better, but didn't take into consideration how it would impact the rear camera with that delayed-start code (so someone can get in the car without getting splashed by the wipers). Now we need to reconsider how we're coding that to exclude the rear camera.
Why does the same software act differently on everyone's phone or computer?
It's the same situation for any product, hardware or software. Too many variables in this world.
In many instances, it is not the software, but rather the owner not familiarizing themselves with the various settings. Reading the owner's manual would certainly reduce many of the so-called software issues. There are also software differences due to the latest version not having been updated in the car. So far, in my case, it has been a matter of not going into panic mode, but just working through the issue, whether it be settings, having only one Bluetooth phone (Samsung) connected, or simply doing a reboot to clear a problem.
I’ve always suspected the non-deterministic nature of many of these software issues is caused by multi-threading issues and resource contention in the software. It depends not just on the driving conditions, but what else is going on in the car, different settings, and so on.
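In toy form, here's the kind of multi-threading bug I mean (hypothetical Python, obviously nothing like the car's actual code): two threads share a counter, and whether updates get lost depends entirely on how the thread scheduler happens to interleave them that run.

```python
import threading

counter = 0

def unsafe_increment(n):
    """Increment a shared counter without a lock. The read-modify-write
    below is not atomic, so another thread can sneak in between the read
    and the write and its update gets overwritten (lost)."""
    global counter
    for _ in range(n):
        tmp = counter      # read
        tmp += 1           # modify
        counter = tmp      # write - may clobber the other thread's update

threads = [threading.Thread(target=unsafe_increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Ideally 200000, but depending on scheduling the result can fall short,
# and by a different amount on every run - classic non-determinism.
print(counter)
```

Same code, same machine, different answer each run - which is exactly why these bugs show up for some owners and never reproduce for others.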
The variable is "folks", just sayin'
I find that a lot of Tesla cars have a short between the steering wheel and the accelerator pedal...
Everything else is normally just hardware differences.
I've wondered the same thing. For example, on 8.5, several users reported display weirdness--orange blobs, scrambled jittery maps and such. And documented it for all of us to see. I never had that once. Many have reported the backup camera being black for a few seconds on various releases. It's always come on instantly for me. I've had the car spontaneously recalculate a route over and over and over, but haven't heard about that from anyone else (and it went away several updates ago). So I think there's more to it than just the human factor of who's in the car.
Finally, someone who gets my question. Yay, CharleyBC
Humans are terrible at agreeing to what they observe.
Same hardware and same software will behave exactly the same in identical circumstances.
My kid and I get a new iPhone together each time and get the same model. He has issues 10x faster than me... It's the same thing with any computerized system. Differences in how they are used, how they are configured, expectations, and maintenance.
@Magic... totally agree. Eyewitnesses who see the same event can have drastically different recollections of said event... lots of empirical studies back this up. This is true even when people watch the same video footage
The backup camera is a classic. Sometimes I notice a black screen for a few seconds and other times I don't; it's inconsistent. The kicker is that I am also inconsistent about when I look at the screen for a backup image. Sometimes I look right away after putting it in R and sometimes I don't (so maybe the image was black and I just didn't notice those times).
@rdavis, I thought the short was between the steering wheel and the seat.
@TG: that's when I am driving ….
@jim, Heh heh.
@TG - who you callin' short? I'm 5'7", TYVM. ;)
Sorry @hokiegir1 - I knew I should have said 'short circuit'.
But then I probably would have offended some robots too.
Same software, slightly different hardware in some cases, but vastly different environmental input variability.
Btw, people often seem to lump the AI software/hardware into the mix when talking about bugs but the vast majority of bugs are unrelated to the AI hardware or software. Those functions tend to be very stable.
@billlake2000 Does raise a good question. Even with the same software build, some people see problems that other people rarely see.
1) Inconsistent definitions of "rarely" - just the way humans are different
2) Some subtle difference in the way the car is used which results in latent issues surfacing for some people and not others
3) Slightly different hardware because cars built at different times have slightly different components, or as components age they can behave slightly differently (ex: 0.1v difference...)
Generally, from a software development perspective, #2 is the most common. There are thousands of different use cases and each generates a different path through the code base.
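Point #2 is easy to reproduce in miniature: the same function works fine for nearly every user, and one unusual usage pattern hits the broken path. (Hypothetical example, nothing to do with Tesla's actual code.)

```python
def efficiency_wh_per_mile(wh_used, miles_driven):
    """Fine for every driver who has actually moved the car; the rare
    user who opens this stat on a brand-new trip (0 miles driven)
    crashes it with a divide-by-zero."""
    return wh_used / miles_driven

# Common path: works for almost everyone.
print(efficiency_wh_per_mile(250.0, 1.0))   # 250.0

# Rare path: only the user with this exact usage pattern sees the bug.
try:
    efficiency_wh_per_mile(0.0, 0.0)
except ZeroDivisionError:
    print("a bug only this one user ever sees")
```

Thousands of drivers, thousands of slightly different paths through the code - and only some paths have the landmines.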
Actually, I think I have a different take on this. And it comes from one of the speakers at autonomy day and my rough understanding on how neural network computers work.
First off: The autonomy guy speaker, the one involved in the mass deployment of the auto-driving stuff (i.e., neural network databases..) stated that the NN was being deployed _faster_ than firmware updates. That was quick, but it's important: That means that, never mind that one is running 2019.8.5 or 2019.12.1 or whatever, across the fleet, cars in each point release are likely running different NN loads! And _that_ stuff doesn't show up on the car's data screen anywhere.
Next: The fundamental idea behind the NN is that there's a bunch of neurons, arranged in layers and all cross-connected, with the raw inputs (say, on the left) flowing through to the outputs (say, on the right). Each neuron has multiple inputs; those inputs get weights; and the weighted sum determines the output level of the neuron we're talking about. Each layer takes its inputs only from neurons earlier in the array, so we're talking a lot of computational processes here. Eventually, it all comes out on the right as decisions: Is that a tree? A raindrop? A semi-trailer with a drug-induced hallucinating driver heading right for one?
Now comes the kicker. After all that data comes out, there's _feedback_ back into the neural network array that _changes the weighting_ on all those neurons. So, if one thought that that was a tree, and, for various reasons, that was true, then feedback can be sent back into the array reducing the weights of the it_wasn't_a_tree gang and increasing the weights of the it-sure-is-a-tree gang. That's the learning component.
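For the curious, here's that whole idea in miniature - a toy two-layer net in plain Python with a crude feedback step that nudges the output weights toward the "it-sure-is-a-tree" answer. (All the numbers, layer sizes, and the update rule are made up for illustration; real automotive NNs are vastly bigger and trained very differently.)

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of inputs, squashed to (0, 1) by a sigmoid."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Tiny net: 2 inputs -> 2 hidden neurons -> 1 output ("is it a tree?" score)
w_hidden = [[0.5, -0.4], [0.3, 0.8]]   # made-up weights
b_hidden = [0.0, 0.0]
w_out = [0.7, -0.2]
b_out = 0.0

def forward(x):
    h = [neuron(x, w_hidden[j], b_hidden[j]) for j in range(2)]
    return neuron(h, w_out, b_out)

x = [0.9, 0.1]              # some made-up sensor features
before = forward(x)         # the net's "tree" score, pre-feedback

# Feedback: it really was a tree (target 1.0), so push the output-layer
# weights toward the it-sure-is-a-tree answer.
target = 1.0
error = target - before     # positive, since before < 1.0
h = [neuron(x, w_hidden[j], b_hidden[j]) for j in range(2)]
lr = 0.5                    # learning rate
w_out = [w + lr * error * hj for w, hj in zip(w_out, h)]
b_out += lr * error

after = forward(x)
print(before, after)        # the "tree" score goes up after the feedback step
```

Two cars carrying different weight sets run exactly this same code yet give different answers - which is the whole point of the thread.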
So, the crowd at Tesla sends out all the weights periodically (and not necessarily along with the firmware loads) and hands them off to the local processors. If I were them, I'd let the neural processor on the car crank on the weights for a time, then collect that data back from multiple cars and use it to improve the overall algorithm; lather, rinse, repeat.
But what does that mean for individual drivers? It means that every so often, if one has a neural database that consistently is getting things wrong, a data dump from the mothership is going to change that database. And if one has a really smart NN, by hook, crook, chance, and by whatever the driver is doing that affects the whole dance, sometimes the data dump from the mothership is going to make the car stupider. Overall I'd expect things would get better over time.
But compare individual cars with each other? They'd be all over the map. Different firmware, different NN load, NN loads self-modifying as the car and driver goes; wowzers.
I'd imagine that an individual car can't get too different from another car. But the differences are likely there, and real. Fun.
I have always wondered if it had to do with the sequence order of updates. A friend of mine with the same LR-RWD has had updates that I have not received and vice-versa. We eventually end up with the same update but in different sequences.
One good example of what seems to be different operation was the slow power up on MCU2 (I think now fixed).
Some owners got into the car, took 15-20 seconds to buckle up, and then pressed the brake to turn on the car. Zero "powering up" messages. Others hopped in quickly and immediately pressed the brake. The "Car powering up" message appeared and would not let them go into drive for 10-15 seconds. Both are actually the identical internal software operation (i.e., it takes ~20 seconds for some internal processor to come online), but the slower owner never saw the message that annoyed the faster owners.
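The power-up example boils down to a few lines (a sketch of the behavior as described above, with an assumed 20-second boot time - not Tesla's actual logic):

```python
BOOT_TIME_S = 20  # assumed time for the internal processor to come online

def powering_up_message_shown(seconds_until_brake_press):
    """Same internal boot happens either way; the driver only sees the
    'Car powering up' message if the brake is pressed before boot finishes."""
    return seconds_until_brake_press < BOOT_TIME_S

print(powering_up_message_shown(25))   # slow driver: boot already done, no message
print(powering_up_message_shown(2))    # quick driver: sees (and resents) the message
```

Identical software, identical hardware - the only variable is the driver's buckle-up speed.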
Another case is how often people state a problem, but fail to state what hardware or software they are on. It often makes a big difference. Then people respond "I never have problem x", but they may be on different hardware (MCU1/MCU2/AP1/AP2/HW2.0/HW2.5, etc.). Only when you're comparing the same hardware and software version can you expect consistency.
@Tronguy: thanks for that little essay. I love learning stuff.
That's not exactly how it works.
The car doesn't learn. The neural network database is trained, and then the results of that training are downloaded into all the cars.
@M3BlueGeorgia: You and my wife think the same, that the local processor doesn't learn over time. On the other hand: It's not exactly rocket science for a NN computer to do the self-training bit, researchers do that all the time. So... I have no idea, and if you've got the inside information, I bow to your superior knowledge. But to my mind, the situation strongly reminds me of ye bitcoin miners, who sometimes enlist the computational power of zillions of normal PCs (sometimes without the owners of those PCs knowing) to calculate the value of the next bitcoin, each little PC doing its part in a distributed network of computational nodes.
So, with this big-ass fleet of Teslas.. Why _wouldn't_ Tesla use all that computational NN horsepower out there to help things along back at the mothership?
In fact, now that I think of it, back at the Autonomy Day presentation, didn't at least one of the speakers say that the Teslas in the field were "ghosting" the actual drivers of the cars, comparing what the NN was coming up with to what the driver was actually doing? And that the results of the ghosting were being sent back to the mothership? Um. Don't know, really, but I wonder: Are the results of the ghosting also affecting the EAP/SD aspects of the car _without_ always going through the mothership to do so?
It's fun to speculator, but I suspect that nobody at Tesla is going to breathe a word of anything like this to the public, for proprietary reasons if nothing else.
Argh. Speculate, not speculator.
Wow, what a dumb question. Do you even have a computer or phone or anything else using software/firmware?
seems to me that almost everyone is driving on different roads...
Autopilot is going to "like" the road striping on some roads better than others.
And in different conditions... zillions of variables. Dirty cameras, weather, sun,
As M8B said, the same hardware and software will behave the same IN IDENTICAL CONDITIONS.
And the conditions are almost never gonna be the same.
AND, yeah, software has bugs. And some bugs are "timing issues" which are difficult to figure out, and can make reproducing a particular problem very difficult. Multiple specific things have to happen at just the right times and then bingo - the software does something unexpected. Multiple people don't see the issue because they didn't have all the conditions JUST RIGHT to trigger the issue. But some do see it.
Dang, typed a long response and stupid phone/browser loses it.
I'll leave it at @M3Blue +1. The Tesla fleet is the wrong distributed compute platform for actual training of the neural net. SETI@home possibly, Bitcoin mining maybe, but not NN training and analysis.
I think Elon was simply humoring that guy during the Autonomy Day Q&A.
A behavior that drives me a little batty is the variable time it takes the car to lock after I leave it parked. Sometimes it locks when I'm 5 feet away. Sometimes (much more often) I can barely hear the beep of the horn (>50 feet away). Any explanations?
@syclone, Many explanations...
The car is not communicating with and/or checking the Bluetooth connection at all times. For example, if the phone and car ping each other every 15 seconds, then one time the ping may happen just as you move out of range, while another time you could be a full 15 seconds of walking further away before the next ping notices you're gone.
Additionally, the communication will vary based upon direction you walk away, orientation of phone, obstacles, etc. For example, a human is basically a big bag of water and just having the phone in back pocket can reduce or prevent detection as you approach the car. Same when you walk away.
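A back-of-the-envelope model of the ping-interval effect (the 15-second interval, walking speed, and Bluetooth range are all assumed numbers for illustration - I don't know Tesla's actual values):

```python
PING_INTERVAL_S = 15.0    # assumed phone/car check interval
WALK_SPEED_FT_S = 4.0     # brisk walking pace, ~4 ft/s
RANGE_FT = 30.0           # assumed distance at which the phone drops out

def lock_distance(phase_s):
    """Distance from the car when the lock fires, given where in the ping
    cycle (0..PING_INTERVAL_S seconds) you were at the moment you dropped
    out of range. The car only notices you're gone at the next ping."""
    wait_s = (PING_INTERVAL_S - phase_s) % PING_INTERVAL_S
    return RANGE_FT + wait_s * WALK_SPEED_FT_S

best = lock_distance(0.0)      # a ping fires right as you leave range: ~30 ft
worst = lock_distance(0.001)   # just missed a ping, wait ~15 s more: ~90 ft
print(best, worst)
```

So with these assumptions alone, the same walk at the same speed can lock the car anywhere from ~30 to ~90 feet away, before you even add in antenna orientation or body-blocking effects.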
You can test this a bit by controlling the position of the phone as you move away, as well as your speed of movement, to determine which it is, or whether you have other quirkier effects going on, such as a contribution from the phone's own behavior.
This was discussed at length long ago and tested but I don't remember the results. You might be able to Google and find the old thread.