I find it strange that Tesla cameras don’t detect street lights and stop on reds/yellows. I would think this function would be in their sweet spot.
They are working on it. It's really a very hard problem. You're used to a simple vertical traffic light, but sometimes lights are mounted horizontally, and then there are the lights from hell. One post in San Francisco has about 10 indicators: strange ones like trolley lights on the same post as car traffic lights with various turn indicators. Then there are places that use sodium street lights (very yellow), as San Jose used to have. As you drove up to a traffic light, it was common to have a street light behind it that looked like a yellow traffic light. It takes more than a little thought to figure out the real state, even for humans!
Hopefully you'll never encounter this one: https://www.oddee.com/wp-content/uploads/_media/imgs/articles2/a97092_11...
(it's not real)
Recognizing traffic lights/signs is only part of the challenge; doing it fast enough is another. The current on-board computer may not be powerful enough to process all the information and make the right decision within a fraction of a second. That is probably why HW3 or beyond is needed.
Incredibly difficult challenge that requires many case studies. It may be some places are so difficult they will have to be geofenced out until they can make them reliable enough.
I suspect it's easy to do 99% of the time. Unfortunately, the other 1 time out of 100, you blow through the light, possibly T-boning someone, getting T-boned, or worse, possibly hitting a pedestrian.
You've got to get to perhaps 99.99999% before it's good enough to count on.
@Earl and Nagin
That's the real issue ... getting to 100% red light detection with very few false positives.
Then there's accurately seeing that the red light in front and slightly to the left is for the left turn lane, while the green light up and maybe slightly to the right is for your lane of travel.
Stop lights are easy. A four-way stop sign is a tougher problem.
I suggest Tesla build a large-scale real-traffic test site and a fleet of several hundred self-driven Tesla cars, with robots simulating bicycles, motorcycles, pedestrians, and animals riding/walking along or across streets to mimic various real-life urban traffic and weather conditions. They could also invite human drivers to participate in the testing with their insurance covered. The test site could be easily reconfigured so that one week it simulates some of the most complicated New York City traffic, another week it mimics LA commuting, and so on, all while collecting real traffic accident data from the corresponding real cities as the test site runs.
This testing could go on 24 hours a day, 7 days a week, to refine and test FSD software and accumulate enough data to prove to state and federal governments that Tesla FSD is safer than human drivers in all driving conditions.
Does anyone want to participate in testing? I am willing to spend a day there :) .
Why would they need to pay for a fleet of several hundred cars and a fake test place when they have hundreds of thousands of them in the real world where the actual algorithms are already being tested? The cars aren't actually following the instructions of the FSD software but the detection algorithms are all being tested on every car since they started with AP2.
This is why many of us believe Tesla will have FSD many years before anyone else does. Tesla will have (actually, they already have) millions of miles of testing, while Google and the other little guys have to pay a lot of money for only thousands of miles of testing.
I'm not sure, but I think Earl and Nagin are right. Or they is right, if they is only one entity.
+1 @E&A, I was thinking the exact same thing. Nothing better than testing with real-world data.
I can think of many reasons why; here are some:
1. Good quality control is essential to gain public confidence and regulatory approval for adopting FSD as Elon envisions it. It is the same reason a drug company needs to run many clinical trials before a new drug is approved. As a counter-example, you saw what happened to the Boeing 737 Max as a result of poor software testing. The way Tesla currently does software testing is inadequate, to put it mildly. A bad iPhone bug may inconvenience billions of people, but no one will die; a bad bug in a million Tesla cars, on the other hand, will kill people. A different software testing standard needs to be established.
2. So far, the large amount of data you were referring to is mainly data for TACC and Autosteer, nothing yet for FSD, and all of it gathered under a human driver's supervision. In terms of complexity and accident rate, a million miles of freeway driving is probably worth less than ten thousand miles on city streets. Among those "millions" of miles, I added about 3,000 myself; I know the quality of that data, and how many times the system did not work or worked only partially.
3. If Tesla can demo a hundred self-driving cars in the most difficult driving conditions and harshest environments, that says much more than the "millions of miles" record Tesla frequently cites. Most importantly, the purpose is not to convince you or me, who already drive Teslas and are confident FSD will eventually work, but the majority of the public, who are not convinced that Tesla FSD cars are safer than human drivers, and who are right to be skeptical. If Tesla makes the test site data transparent to the public, people can see it, try it, and test it themselves; that will be much more convincing than "millions of miles" of driving records stored in a black box.
4. With a controlled test environment and full monitoring equipment, Tesla could quickly identify the causes of close calls and accidents.
the list can go on and on.
The Magratheans can build a duplicate planet as a simulator, just sayin'.
You've redefined the word "simulator" there. My understanding is that Tesla sells duplicate cars, not simulators of cars.
I thought you wanted a simulation of the real world to test real stuff? The Magratheans specialize in making stuff like that.
When you call, ask for SlartyB, and give him my referral number #27182818. One more and I get a free upgrade to a rotating iron core.
"Is it hard for AI cameras to recognize traffic lights?" Not so much.
But it has to detect the correct traffic light when multiple traffic lights are in view, and recognize where to stop, with nearly 100% accuracy.
Think about 1 million cars, 10 crossings a day, for 1 year with no accidents: that requires a failure rate below 1/3,650,000,000 (0.0000000274%), which is hard.
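The arithmetic behind that failure-rate figure can be checked with a quick back-of-envelope sketch (the fleet numbers are just the illustrative ones from the comment above, not real Tesla figures):

```python
# Back-of-envelope: required per-crossing reliability for a large fleet.
# Numbers are illustrative, taken from the comment above.
cars = 1_000_000
crossings_per_day = 10
days = 365

total_crossings = cars * crossings_per_day * days  # 3,650,000,000 per year
max_failure_rate = 1 / total_crossings             # to expect zero accidents

print(f"{total_crossings:,} crossings per year")
print(f"max tolerable failure rate: {max_failure_rate:.3e} "
      f"({max_failure_rate * 100:.10f}%)")
```

Even one failure per billion crossings would still mean several crashes a year at this scale, which is why the reliability bar is so extreme.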
Elon Musk said that the developer edition of autopilot in his car stops at red lights. So have a little patience.
The 8.5 release notes say "some" or "certain" cases or instances of red light detection. So yeah, it's not as easy as one might imagine. It will get there; it's just a matter of time.
Traffic lights shouldn't be as hard to navigate safely as you might think because they are typically not a single light but a set of _six_ lights*.
You have two groups of 3 lights: A and B. The car just needs to determine which of lights A:red, A:yellow, A:green, B:red, B:yellow and B:green are on.
Because of this there is likely enough error checking you can do to make it reasonably free from catastrophic mistakes:
- If no light in group A is on, error out and alert the driver
- If more than one light in group A is on, error out and alert the driver
- If no light in group B is on, error out and alert the driver
- If more than one light in group B is on, error out and alert the driver
- If the lights in group A don't match the lights in group B then error out and alert the driver
- If any of the lights are not on solidly but are blinking then error out and alert the driver
- If the car notices the lights change state start over
- If any light does not change state in the proper order then error out and alert the driver
So there is so much error checking you can do that it doesn't seem likely a car will just plow through an intersection because it misread a light.
*Obviously there are different light setups as well but this is probably the most common.
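The cross-checking rules above could be sketched roughly like this (a hypothetical illustration only; the function name, the dict-based light representation, and the "return None means alert the driver" convention are all my own assumptions, not anything from Tesla's software):

```python
# Hypothetical sketch of the error-checking rules above: two 3-lamp
# signal heads (A and B) must each show exactly one lamp, must agree
# with each other, and must not be flashing. Any violation means
# "error out and alert the driver", represented here by None.

STATES = ("red", "yellow", "green")

def check_lights(group_a, group_b, flashing=False):
    """group_a / group_b: dicts like {"red": True, "yellow": False, "green": False}.
    Returns the agreed color, or None to signal the driver must take over."""
    on_a = [c for c in STATES if group_a.get(c)]
    on_b = [c for c in STATES if group_b.get(c)]
    if len(on_a) != 1:   # zero or multiple lamps lit in group A
        return None
    if len(on_b) != 1:   # zero or multiple lamps lit in group B
        return None
    if on_a != on_b:     # the two heads disagree
        return None
    if flashing:         # blinking lamp: not a normal steady state
        return None
    return on_a[0]

# Both heads steadily agree on red -> "red"
print(check_lights({"red": True, "yellow": False, "green": False},
                   {"red": True, "yellow": False, "green": False}))
```

The state-change ordering check (red must follow yellow, etc.) would need the previous reading as extra input, but it fits the same pattern: any impossible transition returns None.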
Every time you have to find the traffic lights on the "I am not a robot" thing, you are working on the light-recognition algorithm.
Everything is in motion. Weather compounds the complexity of the challenge. It is hugely difficult.
So to miss a red light and plow into an intersection, the car would have to misread _4_ lights: A:red, A:green, B:red and B:green. So even if the probability of correctly reading an individual light isn't great, the probability of misreading all four of those lights is extremely low.
For example, if the probability of correctly reading an individual light is only 99% (that's really bad), the probability of misreading the whole set and driving through a red light is 1 in 100,000,000, which is probably low enough.
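That 1-in-100,000,000 figure follows directly from multiplying the four per-light error probabilities, which assumes the four misreads are independent (glare or occlusion affecting all lamps at once would break that assumption):

```python
# Checking the arithmetic above: if each lamp is read correctly with
# probability 0.99, and blowing a red requires independently misreading
# all four of A:red, A:green, B:red, B:green:
p_misread_one = 1 - 0.99          # 1% chance of misreading a single lamp
p_blow_red = p_misread_one ** 4   # all four must be misread together

print(p_blow_red)  # ~1e-08, i.e. about 1 in 100,000,000
```

The independence assumption is the weak point: a low sun behind the signal degrades all four readings at once, so the real-world error rate would be worse than this multiplication suggests.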
And add the complexity of red, yellow, and green arrows on the same signal with "regular" red, yellow and green lamps.
"And add the complexity of red, yellow, and green arrows ..."
That's why I said:
"Obviously there are different light setups as well but this is probably the most common."
If you have accurate maps, the car knows where the traffic lights are. That should aid dramatically, since the car doesn't need to find where the traffic lights are located, just read the status of the lights at preset locations.
However: "accurate maps" seems to be an oxymoron based on speed limits knowledge, where the map-based speed limit is wrong a surprisingly high percentage of time. And there appears to be no way for the car to learn the correct speed limit.
Temporary traffic lights are an issue, though much less of an issue than temporary speed limits, but a problem nevertheless.
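The map-aided idea above can be sketched in a few lines (everything here is hypothetical and illustrative: the class, the function, and the idea of a `project` callback that maps a known light position into the camera frame are my own assumptions, not any real perception pipeline):

```python
# Illustrative sketch: with an accurate map, the car can project known
# traffic-light positions into the current camera frame and classify
# only those small regions, instead of searching the whole image.
# All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class MapLight:
    light_id: str
    lat: float
    lon: float
    height_m: float   # mounting height, helps the projection

def regions_of_interest(map_lights, project):
    """project: a (hypothetical) function mapping a MapLight to a
    bounding box in the current frame, or None if it's off-screen."""
    rois = {}
    for light in map_lights:
        box = project(light)
        if box is not None:
            rois[light.light_id] = box   # classify only these crops
    return rois
```

Note this helps only with *known* lights; a temporary signal at a construction site is absent from the map, so the car still needs a full-frame detector as a fallback, which is exactly the problem raised above.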
Trying to find an image of a nightmare traffic light in San Francisco I've seen before. Didn't find it, but found this video of a Cruise Automation FSD prototype. A great collection to show why FSD is hard! http://gmauthority.com/blog/2019/02/cruise-automation-navigates-tricky-s...
You also have to figure in train-crossing lights that flash on and off. You have to be able to tell the difference between traffic lights and neon-lit signs off in the distance and to the sides of the car. This stuff gets really tricky for an AI system to decipher.
Maybe add into this discussion, flashing yellow bus lights, the bus's stop sign on the side, etc...
Anything that is very obvious to us, that we do without even thinking, is actually a hard problem for a machine. Even something as simple as reaching for a door handle and opening it seems obvious to us, but programming a machine to do it is hard. There are just too many variables. Our brain is such a wonderful thing; it's hard to believe all it can do.
@Carl, I understand where you are coming from, but you are oversimplifying a little too much. Yes, I read your caveat, but that isn't enough, because you are only referencing the lights in one setup. And you say your example is the "most common", but having a solution for the most common case won't keep the car from making a wrong decision in all the other situations. The bigger issue, I think, is not correctly identifying which light is on, but figuring out which set of lights applies to the lane you are in. There are so many lights that don't line up with the lanes, for all kinds of reasons. That, I think, is the bigger issue.
As others have mentioned, this is problematic. In addition to some of the challenges already pointed out:
1) Driving into the sun with the sun directly behind the light. Humans can use the visor or a hand to block the sun; the car's camera can only stop down an aperture, which may not be enough.
2) When the intersections are close together, making sure to read the closest stop light
3) Getting the left turn signal unit and through-traffic signal unit correct. This isn't as straightforward as it would seem, because some intersections contain more than one signal unit for through traffic.
4) Some cities do their own weird things with signals. For example, in several Texas cities the left signal will flash a yellow arrow when a left turn is legal but not protected, whereas most cities just show this as a green signal on the left turn signal unit.
My point wasn't that the problem is easy to solve but that there is enough redundant data and inter-dependencies in a system of traffic lights to make it extremely improbable for a self-driving car AI to misread a set of lights in such a way that it can't detect that it misread the set of lights.
In other words, if the car misreads the lights for whatever reason (lane doesn't line up with light, etc.) the AI should easily be able to deduce that its interpretation of the lights must be wrong because its interpretation breaks the known constraints. I.e., the car will _know_ it is misreading the lights and bail out and alert the driver to take over.
Waymo has been self-driving around here in Chandler, AZ for the last 2 years, detecting traffic lights in its white Chrysler Pacifica minivans, so it can't be too hard to do? Oddly, I rarely see the Waymos on the highway, so maybe they haven't figured that out yet?
@cmdo - Waymo here in Mountain View (their headquarters) has cars driving around all the time. I see at least one a day. I've never seen one on the freeway, and it looks like they are only supporting 40 mph and below: mostly I see them at 25-30 mph or less, never above 40 mph.
Incredibly hard, but someday they may get it as good as a fifth grader, hahaha! This problem is going to take a lot more AI than they think it will.
Considering that about 700 people die each year in crashes caused by running red lights, it looks like RI is still not quite there yet either.