I find it strange that Tesla cameras don’t detect street lights and stop on reds/yellows. I would think this function would be in their sweet spot.
They are working on it. It's really a very hard problem. You're used to a simple vertical traffic light, but sometimes they are mounted horizontally, and then there are some that are the lights from hell. One in San Francisco has about 10 indicators on the post - strange ones like trolley lights on the same post as car traffic lights with various turn indicators. Then there are places that use sodium street lights (very yellow), as San Jose used to have. As you drove up to a traffic light, it was common to have a street light behind it that looked like a yellow traffic light. It takes more than a little thought to figure out what the real state is, even for humans!
Hopefully you'll never encounter this one: https://www.oddee.com/wp-content/uploads/_media/imgs/articles2/a97092_11...
(it's not real)
Recognizing traffic lights/signs is only part of the challenge; doing it fast enough is another. The current on-board computer may not be powerful enough to process all the information and make the right decision within a fraction of a second. That is probably why HW3 or beyond is needed.
Incredibly difficult challenge that requires many case studies. It may be some places are so difficult they will have to be geofenced out until they can make them reliable enough.
I suspect it's easy to do 99% of the time. Unfortunately, the other 1 time out of 100, you blow through the light, possibly T-boning someone, getting T-boned, or worse - possibly hitting a pedestrian.
Got to get to perhaps 99.99999% before it's good enough to count on.
@Earl and Nagin
That's the real issue ... Getting to 100% red light detection with very few false negatives.
Then there's accurately seeing that the red light in front and slightly to the left is for the left turn lane, while the green light up and maybe slightly to the right is for your lane of travel.
Stop lights are easy. A four-way stop sign is a tougher problem.
I suggest Tesla build a large-scale real-traffic test site and a fleet of several hundred self-driven Tesla cars, with robots simulating bicycles, motorcycles, pedestrians, and animals riding/walking along or across streets to mimic various real-life urban traffic/weather conditions. They can also invite human drivers to participate in the testing with their insurance covered. The test site can be easily reconfigured, so that one week it simulates some of the most complicated New York City traffic, another week it mimics LA commuting, and so on. Meanwhile, Tesla can collect real traffic accident data from those real locations while the simulated test site is undergoing testing.
This testing can go on 24 hours a day, 7 days a week, to refine and test the FSD software and accumulate a sufficient amount of data to prove to state and federal governments that Tesla FSD is safer than human drivers in all driving conditions.
Does anyone want to participate in the testing? I am willing to spend a day there :)
Why would they need to pay for a fleet of several hundred cars and a fake test site when they have hundreds of thousands of them in the real world, where the actual algorithms are already being tested? The cars aren't actually following the instructions of the FSD software, but the detection algorithms are being tested on every car since they started with AP2.
This is why many of us believe Tesla will have FSD many years before anyone else does. Tesla will have (actually, they already have) millions of miles of testing, while Google and the other little guys have to pay a lot of money for only thousands of miles of testing.
I'm not sure, but I think Earl and Nagin are right. Or they is right, if they is only one entity.
+1 @E&A, I was thinking the exact same thing. Nothing is better than testing with real-world data.
I can think of many reasons why; here are some:
1. Good quality control is essential to gain public confidence and regulatory approval for adopting FSD as Elon envisioned. It is the same reason a drug company needs to do many clinical trials before a new drug is approved. As a counter-example, you saw what happened to the Boeing 737 Max as a result of poor software testing. The current way Tesla does software testing is inadequate, at a minimum. A bad iPhone bug may bring inconvenience to billions of people, but no one will die; a bad bug in a million Tesla cars, on the other hand, will kill. A different software testing standard needs to be established.
2. So far, the large amount of data you were referring to is mainly data for TACC and Autosteer, nothing yet for FSD, and all of it was collected under a human driver's supervision. In terms of complexity and accident rate, a million miles of freeway driving is probably worth less than 10 thousand miles on city streets. Among those "millions" of miles, I added about 3,000, so I know the quality of these data, and how many times it did not work or worked only partially.
3. If Tesla can demo a hundred self-driving cars in the most difficult driving conditions or harsh environments, that says much more than the "millions of miles" record Tesla frequently refers to. Most importantly, the purpose is not to prove it to you or me, who already drive Tesla cars and have confidence that Tesla will make FSD work eventually. The majority of the public is not convinced that Tesla FSD cars are safer than human drivers, and they are right to believe so. If Tesla can make the test-site data transparent to the public, so they can see it, try it, and test it themselves, that will be much more convincing than "millions of miles" of driving records stored in a black box.
4. Having a controlled test environment with all the monitoring equipment, Tesla can quickly identify the causes of close calls and accidents.
The list can go on and on.
The Magratheans can build a duplicate planet as a simulator, just sayin'.
Here you've redefined the word "simulator". My understanding is that Tesla sells duplicate cars, not simulators of cars.
I thought you wanted a simulation of the real world to test real stuff? The Magratheans specialize in making stuff like that.
When you call, ask for SlartyB - and give him my referral number, #27182818. One more and I get a free upgrade to a rotating iron core.
"Is it hard for AI cameras to recognize traffic lights?" Not so much.
But it has to detect the correct traffic light when multiple traffic lights are in view, and it has to recognize where to stop.
And it has to do it with almost 100% accuracy.
Think about 1 million cars, 10 crossings a day, for 1 year, with no accidents: a failure rate of 1 in 3,650,000,000 (0.0000000274%) is hard.
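A quick back-of-envelope sketch of that arithmetic, using the figures from the post (not real fleet data):

```python
# The failure-rate arithmetic above, spelled out. The figures are the
# ones from the post, not real fleet data.
cars = 1_000_000
crossings_per_day = 10
days = 365

total_crossings = cars * crossings_per_day * days  # 3,650,000,000 crossings
required_failure_rate = 1 / total_crossings        # about 2.74e-10

# Expressed as a percentage, that is roughly 0.0000000274 %
print(f"{required_failure_rate:.3e}")
```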
Elon Musk said that the developer edition of autopilot in his car stops at red lights. So have a little patience.
The 8.5 release note says "some" or "certain" cases or instances of red light detection. So yeah, it's not as easy as one might imagine. It will get there. It's just a matter of time.
Traffic lights shouldn't be as hard to navigate safely as you might think because they are typically not a single light but a set of _six_ lights*.
You have two groups of 3 lights: A and B. The car just needs to determine which of lights A:red, A:yellow, A:green, B:red, B:yellow and B:green are on.
Because of this there is likely enough error checking you can do to make it reasonably free from catastrophic mistakes:
- If less than one light in group A is on then error out and alert the driver
- If more than one light in group A is on then error out and alert the driver
- If less than one light in group B is on then error out and alert the driver
- If more than one light in group B is on then error out and alert the driver
- If the lights in group A don't match the lights in group B then error out and alert the driver
- If any of the lights are not on solidly but are blinking then error out and alert the driver
- If the car notices the lights change state start over
- If any light does not change state in the proper order then error out and alert the driver
So there is so much error checking you can do it doesn't seem likely a car will just plow through an intersection because it misread a light.
*Obviously there are different light setups as well but this is probably the most common.
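The cross-checks in that list can be sketched in a few lines. This is purely a hypothetical illustration of the redundancy idea, not Tesla's actual logic; the set-based representation of each light group is an assumption made for the example.

```python
# Hypothetical sketch of the cross-checks described above. Each light
# group is assumed to be reported as a set of lit lamps, e.g. {"red"}.

VALID_STATES = ({"red"}, {"yellow"}, {"green"})
NEXT_STATE = {"red": "green", "green": "yellow", "yellow": "red"}

def check_groups(group_a, group_b):
    """Return True if the two redundant light groups pass every check,
    False if the car should error out and alert the driver."""
    for group in (group_a, group_b):
        if len(group) != 1:           # zero lamps lit, or more than one
            return False
        if group not in VALID_STATES: # an unrecognized lamp colour
            return False
    return group_a == group_b         # the two groups must agree

def check_transition(old_color, new_color):
    """Lights must change state in the proper order:
    red -> green -> yellow -> red."""
    return NEXT_STATE.get(old_color) == new_color
```

Any failed check maps to "error out and alert the driver" rather than guessing, which is the whole point of the redundancy argument above.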
Every time you have to find the traffic lights on the "I am not a robot" thing, you are working on the light-recognition algorithm.
Everything is in motion. Weather compounds the complexity of the challenge. It is hugely difficult.
So to miss a red light and plow into an intersection the car would have to misread _4_ lights: A:red, A:green, B:red and B:green. So even if the probability of correctly reading an individual light isn't great the probability of misreading all four of those lights is extremely low.
For example, if the probability of correctly reading an individual light is only 99% (that's really bad), the probability of misreading the set of lights and driving through a red light is 1 in 100,000,000, which is probably low enough.
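For the curious, the arithmetic works out like this (the 99% per-light figure is the post's deliberately pessimistic assumption, not a measured number, and the misreads are assumed independent):

```python
# Probability of misreading all four relevant lights (A:red, A:green,
# B:red, B:green), assuming independent misreads.
p_correct = 0.99                   # pessimistic per-light accuracy (assumed)
p_blow_red = (1 - p_correct) ** 4  # must misread all four lights at once
# p_blow_red is about 1e-08, i.e. roughly 1 in 100,000,000
```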
And add the complexity of red, yellow, and green arrows on the same signal with "regular" red, yellow and green lamps.
"And add the complexity of red, yellow, and green arrows ..."
That's why I said:
"Obviously there are different light setups as well but this is probably the most common."
If you have accurate maps, the car knows where the traffic lights are. That should aid dramatically, since the car doesn't need to find where the traffic lights are located, just read the status of the lights at preset locations.
However, "accurate maps" seems to be an oxymoron based on speed-limit knowledge, where the map-based speed limit is wrong a surprisingly high percentage of the time. And there appears to be no way for the car to learn the correct speed limit.
Temporary traffic lights are an issue, though much less of an issue than temporary speed limits, but a problem nevertheless.
Trying to find an image of a nightmare traffic light in San Francisco I've seen before. Didn't find it, but found this video of a Cruise Automation FSD prototype. A great collection to show why FSD is hard! http://gmauthority.com/blog/2019/02/cruise-automation-navigates-tricky-s...
You also have to figure in train-crossing lights that flash on and off. You have to be able to tell the difference between traffic lights and neon-lit signs off in the distance and to the sides of the car. This stuff gets really tricky for an AI system to decipher.
Maybe add to this discussion flashing yellow bus lights, the bus's stop sign on the side, etc.
Anything that is very obvious to us, and that we do without even thinking, is actually a hard problem for a machine. Even something as simple as reaching for a door handle and opening it seems obvious to us, but programming a machine to do it is hard. There are just too many variables. Our brain is such a wonderful thing; it's hard to believe all it can do.
@Carl, I understand where you are coming from, but you are oversimplifying a little too much. Yes, I read your caveat, but that isn't enough, because you are only referencing the lights in the setup. And you say that your example is the "most common", but having a solution for the most common case won't keep the car from making a wrong decision in all the other situations. The bigger issue, I think, is not even correctly identifying which light is on, but figuring out which set of lights applies to the lane you are in. There are so many lights that don't line up with the lanes for all kinds of reasons, and that, I think, is the bigger issue.
As others have mentioned, it's problematic. In addition to some of the challenges already pointed out:
1) Driving into the sun with the sun directly behind the light. Humans can use the visor or a hand to block the sun; the car's camera can only stop down its aperture, which may not be enough.
2) When intersections are close together, making sure to read the closest stop light.
3) Getting the left-turn signal unit and the through-traffic signal unit right. This isn't as straightforward as it would seem, because some intersections contain more than one signal unit for through traffic.
4) Some cities do their own weird things with signals. For example, in several Texas cities the left signal will flash a yellow arrow when a left turn is legal but not protected, whereas most cities just show this as a green signal on the left-turn signal unit.
My point wasn't that the problem is easy to solve but that there is enough redundant data and inter-dependencies in a system of traffic lights to make it extremely improbable for a self-driving car AI to misread a set of lights in such a way that it can't detect that it misread the set of lights.
In other words, if the car misreads the lights for whatever reason (lane doesn't line up with light, etc.) the AI should easily be able to deduce that its interpretation of the lights must be wrong because its interpretation breaks the known constraints. I.e., the car will _know_ it is misreading the lights and bail out and alert the driver to take over.
Waymo's self-driving white Chrysler Pacifica minivans have been detecting traffic lights around here in Chandler, AZ for the last 2 years, so it can't be too hard to do? Oddly, I rarely see the Waymos on the highway - so maybe they haven't figured that out yet?
@cmdo - Waymo here in Mountain View (headquarters) has their cars driving around all the time. I see at least one a day. I've never seen one on the freeway, and it looks like they are only supporting 40 mph and below. I mostly see them at 25-30 mph or less, never above 40 mph.
Incredibly hard, but someday they may get it as good as a fifth grader, hahaha! This problem is going to take a lot more AI than they think.
Considering that about 700 people die each year in red-light-running crashes, it looks like the RI is still not quite there yet either.
Hmmm... another resurrected necro-post.
OPENED: April 11, 2019
LAST RELEVANT POST: April 30, 2019
RESURRECTED: February 5, 2020
Nothing interesting about it. Just the spammers doing their useless thing.
Now, now, Red and EVRider. I am sure that a senior at the prestigious Catholic University of Zimbabwe has completely solved, in his final semester, an engineering challenge that corporations with $$$ and loads of PhDs have been working on.
In order to gain this valuable data, please enter your bank's routing information in the comments below...
Great big red, orange and green lights arranged vertically should be hard to miss. What's more challenging is recognising filter signals, zebra crossings, lollipop people, and knowing the different rules between toucan, pelican and pegasus crossings. Mini-roundabouts are fun as well, as are speed bumps.
Would you like to be the person writing the code to navigate the magic roundabout? https://en.wikipedia.org/wiki/Magic_Roundabout_(Swindon)
The real answer is to simplify road layouts and have roads communicate with cars. There’s been a good news story in the UK today - a long distance automated drive in a Leaf with a part Govt-funded development. Relevance is that the UK is actively working on self-driving. Ultimately good for research and the regulatory changes that are needed to enable the technology.
@andy: Simplifying road layouts and making them communicate with cars isn’t going to help with existing roads. If FSD can’t handle what exists today, it won’t happen.
I’m sure it’ll happen. Research into self driving already has Government backing. Reading the comments about the US road layouts and behaviour it sounds as though the US has continued to have a relatively simple system.
In the UK, over the last 20 years, the roads and signage have become increasingly complicated to try and control congestion, behaviour and to try and make more use of the space. It’s got to the point where it is difficult for a human, let alone a computer. Self driving will enable changes in road layouts, but will also need changes. There’s a big safety case for self-driving and self-driving will help to drive efficient use of space while reducing congestion and energy usage. It won’t be long before the balance of regulation tips in favour.
No, not easy.
A few not-so-happy use cases come to mind (plenty more if we really think about it):
- Fog, rain, snow, hail, very dusty construction area, steam coming out of manhole, etc.
- Brightly lit business signs near traffic signals. Neon signs
- Hilly areas (where taillights could look like a red-only traffic light at the expected height in the camera's line of sight)
- Traffic lights on cables (instead of fixed posts) that are flailing around in high wind
- Multiple directional signal lights. Directional signals where a lane has both left arrow and straight only arrow greens.
- Red/green signals for bicycles that are next to signal for cars.
- Signals for buses only.
- A "foresignal" that shows the state of a traffic light behind a sharp corner. Tesla might mistake it for the actual traffic light and stop 200 meters before the intersection.
Low sun and box junctions can be added to the list for consideration at lights.
You also want the car to be able to read obstructions in the road - pot holes, speed bumps and debris for example.
It'll come, a step at a time.
Also, traffic lights that are out entirely, or with just one lamp failed. It seems simple when you initially think about it, but when you add all the variables humans just know how to deal with, it becomes a lot more complicated.
The low-sun issue alone seems to be a big enough problem. I have had the car tell me to immediately take control because of it quite a few times at this time of year.
In addition to the challenges mentioned, some logic needs to be built into the system to help prevent spoofing by attackers. The equivalent of cyber criminals will attack autonomous systems as soon as they become prevalent. Specifically, in the case of traffic light recognition, they will study how the system works and figure out ways to spoof it (i.e., make it think a light is green when it is really red). Sadly, abuse cases always need to be considered, because criminals are always out there. I am sure the brain trust at Tesla is already on this, but it is yet another challenge to be overcome.
Object detection and classification are simple technical problems now. Get a large enough training data set, have humans label it, feed it to a neural network model, and back-propagate the results. It's easy to ensure the model is not biased towards good weather by including bad weather imagery in the training and testing sets. If a human can see and identify it, so can a machine; in many cases the machine can do it better.
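As a toy illustration of the classification half only (a stand-in: real systems use trained neural networks as described above, and the RGB centroids below are invented for the example):

```python
# Illustrative toy only: a nearest-centroid colour classifier standing in
# for the (far more capable) neural-network approach described above.
# The RGB centroid values here are invented for the example.
CENTROIDS = {
    "red":    (220, 40, 40),
    "yellow": (230, 200, 40),
    "green":  (40, 200, 90),
}

def classify_lamp(rgb):
    """Label an averaged lamp-pixel colour with the nearest centroid."""
    def dist2(a, b):
        # squared Euclidean distance in RGB space
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda name: dist2(rgb, CENTROIDS[name]))
```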
Situational inference is MUCH harder. As a common example, consider two roads that intersect at a 45 degree angle. You pull up to a stop on the left side of the acute angle of the intersection. The traffic light for the road to your right may well be visible and directly in front of your lane. You can easily tell it is not your light, because you know you are approaching a 45 degree intersection and can tell from the orientation of the light that it is meant for the other road.
Such a problem is much harder for a machine. This is the fundamental problem of general AI. Machines can process data more accurately and efficiently than humans, but dealing with ambiguity and drawing inferences from general knowledge in complex scenarios are very hard technical challenges. Doing this with sufficient reliability to trust your life to it is the main obstacle to self-driving cars.
I wish the traffic light display on the screen was bigger. There is a lot of room and my near vision is poor, so I often can't see the lights on the screen. It does not have to look like a traffic light, just a big red, yellow or green dot (or arrow) would do.
I imagine the future will bring us more relevant tech: the "lights" will beam a signal at some frequency.
We are in the nursery of driving tech, and in 100 years our kids (or we) will laugh at the concept of getting cars to read signs and light signals. Would it be like teaching horses to read stop signs, to give way, to decide whether to stop for the proverbial runaway school bus?
Then, to help the horse make the decision, we put a monkey on its back to drive it, while we sit in the pumpkin carriage discussing the latest knitting stitches.