Autopilot and intersections

Just thinking that dealing with stop signs/lights and intersections will never be automatic unless each intersection and all vehicles are equipped with some sort of interactive guidance control. Maybe common stop signs would be easy enough to at least get a car to stop, using the system that reads speed limit signs, but after that I wonder if anyone has an idea of how that may be handled in the future. If the sign was obscured or knocked down...it could be very bad for somebody.

On freeway driving, I wonder how autopilot will handle motorcycles lane splitting.

Lubdub | October 13, 2014

As I tell my wife, "If you hit them, it is their fault."

Red Sage ca us | October 13, 2014

The forward camera also reads signal lights.

carlgo | October 13, 2014

A simple light might be detected, but what about those big intersections with lots of hanging lights? It is sometimes hard to figure out which lights govern which lanes, and then there are the options of turning under certain conditions. It just seems like an area that will require human guidance. My worry is that people will be playing on their iToys and sailing through intersections.

Years ago there was a story in the news about a guy from another country who came to the US and rented an RV. He put it on cruise control and then walked back to make himself a snack...

grega | October 13, 2014

To operate effectively, it doesn't need to be smart enough to fully understand lights. Not unless it needs to be 100% autonomous. It just needs to ask the driver to drive, with clear and adequate notice.

So at a set of lights the following might happen:
- Easily understood red or green and traffic flow. Car keeps control.
- Easily understood red light with dubious understanding beyond that - stop the car at the light and tell the driver to take over for the intersection
- Dubious understanding - give adequate notice to driver.

I assume if you're turning that'll be done by the driver for some time.

The car needs three actions when the driver is needed (roughly sketched below):
1) a subtle signal as early as possible, so the driver knows it's coming up in the next 20-30 seconds.
2) an obvious signal, with more than adequate time for the driver to respond.
3) an ability to SAFELY stop if the driver doesn't respond.
(plus a general safety principle that if the driver is incapacitated/unavailable, the car notices and reacts as early as possible regardless of road conditions).
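Roughly, that escalation might look like the sketch below. The timings, names, and the idea that hands-on-wheel counts as a response are all my own assumptions, nothing Tesla has published:

```python
# Sketch of the three-step handoff above, under assumed timings.
# All names and thresholds here are hypothetical illustrations.

from enum import Enum, auto


class HandoffState(Enum):
    MONITORING = auto()    # autopilot in control, nothing demanding attention yet
    SUBTLE_ALERT = auto()  # step 1: early, low-key chime / icon
    URGENT_ALERT = auto()  # step 2: obvious signal, ample time to respond
    SAFE_STOP = auto()     # step 3: driver never responded, slow down and stop safely


def next_state(time_to_intersection_s: float, driver_has_taken_over: bool) -> HandoffState:
    """Pick the alert level from time-to-intersection and whether the driver responded."""
    if driver_has_taken_over:
        return HandoffState.MONITORING
    if time_to_intersection_s > 30:
        return HandoffState.MONITORING
    if time_to_intersection_s > 20:
        return HandoffState.SUBTLE_ALERT
    if time_to_intersection_s > 8:
        return HandoffState.URGENT_ALERT
    return HandoffState.SAFE_STOP
```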

Red Sage ca us | October 13, 2014

Signal lights are fairly well standardized, and would be recognized by the camera. GPS and maps should also help. This is a very simple thing: IF, AND, OR, ELSE, WHEN...
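Taken at face value, that really is just a handful of conditionals. A toy sketch of the idea (the inputs and names are invented for illustration, and it simply hands back to the driver whenever the camera isn't confident):

```python
# Toy conditional logic for a detected signal light (illustrative only).
def decide(light_color: str, applies_to_my_lane: bool, detection_confident: bool) -> str:
    if not detection_confident or not applies_to_my_lane:
        return "ALERT_DRIVER"      # hand back to the human, per grega's scheme above
    if light_color == "red":
        return "STOP_AT_LINE"
    if light_color == "yellow":
        return "STOP_IF_SAFE"
    if light_color == "green":
        return "PROCEED"
    return "ALERT_DRIVER"          # occluded or unrecognised light
```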

Timo | October 14, 2014

Problem is that the car needs to understand which lights are relevant to it. That's not easy even for humans sometimes: I have seen one case where the light I needed to follow was on the curb of the opposite lane, partially behind a tree. I nearly ran a red light there.

There is a lot of brain activity you aren't aware of at all in an intersection that goes into figuring out the distinction between relevant and non-relevant information. The brain is very good at filtering out irrelevant information. Building an AI that does the same is not easy.

Mandatory xkcd link: http://xkcd.com/1425/

BrassGuy | October 14, 2014

I have seen important signs obstructed by trees. Once I got pulled over in a 30 zone. Later I went back and found the sign - completely obstructed until I was beside it.

I don't really care about Auto Pilot and intersections because I've always enjoyed driving, and even more now with a Model S.

It will probably be quite a few years before autonomous driving is anywhere near 100%. But where driving is the most boring and attention can wander, on long highway stretches, is where Auto Pilot can really shine even today.

Red Sage ca us | October 14, 2014

Though Autopilot can be used on surface streets, I believe it is meant for use on highways. Once again, just like cruise control. And you aren't forced to turn it on. Drive, or not drive. It's up to you.

carlgo | October 14, 2014

Good answers all. I think perhaps the immediate answer is grega's, in that it would be fairly easy to simply alert the driver of upcoming intersections, regardless of the lights or signage, and then hopefully they would stop sexting and actually drive for a while.

Constant alerts would be annoying for sure, and so drivers might simply switch off autopilot while driving in areas with lots of intersections to turn off the alerts. This might be the best solution regardless, and even better if there was at least some subtle indication of upcoming intersections. In fact, the car might suggest to the driver to do this if a bunch of intersections are on the route.

Timo | October 15, 2014

This autopilot is a bit scary to me just because of the behavior the previous posts indicate. It is not an autonomous robot driver. It can't do what humans can do (i.e. anticipate potential threats), so if people act like they do not need to follow the traffic, then this is not a safety improvement, it is a safety... what is the word for the opposite of improvement? Disimprovement?

I hope that Tesla also includes a black box, and all insurance is void if you have an accident when autopilot is engaged.

It is a bit too good without being perfect.

JeffreyR | October 15, 2014

"Danger improvement"?

Anemometer | October 15, 2014

Aha, now I see what the fuss is about and what people are thinking they missed out on. They expect the current system to be upgraded to full autopilot in all driving scenarios. Correct me if I'm mistaken, but there isn't a small server farm in the frunk running AI and image recognition software. Nope, didn't think so. Is the camera 200 megapixels at 24 fps with a 180-degree field of view, akin to a human eye?

I think what Tesla has built is for highway cruising only. But you'll still need to drive from the ramp into the city yourself, or the car will park up, assuming you have gone to sleep.

It can track moving and fixed objects, work out if they are warm or cold (living, or another car), and plot a course based on certain assumptions about movement. Basically some kind of "game-like" physics model based on the size and expected mass of each vehicle.
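For what that "game-like" model might mean in practice, here's a minimal sketch of constant-velocity extrapolation plus a crude closest-approach check. It's purely illustrative; it says nothing about what Tesla's hardware actually does, and real systems add filtering, sensor fusion and classification on top:

```python
# Constant-velocity extrapolation plus a crude closest-approach check.
# Purely illustrative; noise filtering, sensor fusion and classification are omitted.
from dataclasses import dataclass
import math


@dataclass
class Track:
    x: float   # position relative to our car, metres (forward)
    y: float   # metres (left = positive, say)
    vx: float  # relative velocity, m/s
    vy: float


def time_to_closest_approach(t: Track) -> float:
    """Seconds until the object is nearest to us; negative or inf means it's moving away."""
    speed_sq = t.vx ** 2 + t.vy ** 2
    if speed_sq == 0.0:
        return math.inf
    return -(t.x * t.vx + t.y * t.vy) / speed_sq


def on_collision_course(t: Track, safety_radius_m: float = 2.0) -> bool:
    """True if, extrapolating constant velocity, the object passes within the safety radius."""
    tca = time_to_closest_approach(t)
    if tca <= 0 or math.isinf(tca):
        return False
    closest_x = t.x + t.vx * tca
    closest_y = t.y + t.vy * tca
    return math.hypot(closest_x, closest_y) < safety_radius_m
```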

Can it tell the difference between a baby sitting in a pushchair with its back to you, and a cat in a cardboard box that suddenly leaps out into the path of the car, causing the autopilot to swerve into what it thought was an inanimate object? "Just" a pushchair?

What if a shopping trolley full of freshly baked bread rolls into the street, causing the car to emergency brake? If it works out it is on a collision course with the presumably alive basket of bread rolls, will it choose the inanimate cat box or the pushchair to hit?

Does the car recognise a carpet fitter's truck with the door open?

A Charlie Chaplin pane of glass?

What's the resolution like on the radar? Can it see a piece of dangling electric cable between two poles that have fallen over in a storm? At head height?

What about the trash lorry that currently has the trash lifted in the air? Can it work out that when the trash is lowered in 5 seconds' time, it will be in your path?

Does it know that the sound of an ambulance approaching your green traffic light means it is supposed to stop? Can it actually see the blue flashing light approaching from the right and work out it's on a collision course with you? Or is that just "noise", or out of shot?

What about a kid riding a bike perpendicular to your trajectory? The car will assume you are on a crossing collision course, not realising the cyclist will turn a corner. Your car might hit the brakes, causing the driver behind to hit you.

Or even a pedestrian walking to the edge of the pavement? How is the car to know whether they have seen you (they could be blind or deaf) or are able-bodied and just listening to an iPod? At what point does it decide "hey, this person is walking into my path" versus "they will stop and wait like a fully aware person"? Especially as EVs are quieter!

The scary thing is, if we drove like they'll have to programme the cars, we'd all be doing 10 mph in urban environments just in case. So they have to build in some "hopefully they won't do stupid stuff" risk-based algorithms. Cue legal action against manufacturers for not quite getting the algorithm right, because poor Johnny was not seen as risky enough to slow the car before he walked under it.

Maybe they build the brain so it can be easily upgraded to 8 x 128-core Xeons when they are available, though I still doubt that would drive a car as well as a human in all situations. Autopilot on cars is a bit harder than on planes.

Anemometer | October 15, 2014

What I meant by all that... this is the current state of Google's effort...

Google "hardware in the trunk of google self driving car"

I doubt Tesla got a 10-year jump on them using a couple of 8-core i7s, some webcams and parking sensors.

Red Sage ca us | October 15, 2014

IF, THEN, AND, OR, ELSE, WHEN, WHILE...

grega | October 15, 2014

I'm falling into the same trap, Anemometer.

We're accepting what it can do, but then pushing that limit further and wondering how it will do certain things. The answer is... it won't do certain things, even after 10 software upgrades. It'll do some things and not others.

The real safety issue is how well does it recognise when the driver is required, and how safely does it stop if the driver doesn't respond. (Even that second question might theoretically be unneeded - how safely does any car react if the driver doesn't respond? - but I think it'll be one of the most focussed on aspects).

Highway driving, traffic jam driving, auto speed limits, auto parking - all good (and easier) things to accomplish.

carlgo | October 15, 2014

At some point the computer will be called upon to make serious decisions. You are driving down the Big Sur Coast in your 1200 hp 2021 Roadster. There is a 1000' drop on your right and an uphill cliff on your left, no shoulders. A car comes at you in your lane. The computer has choices and it makes them in a millisecond. It knows that to steer you over the cliff is not acceptable. It senses that there are other cars coming up behind the offending car, so if it swerves to the left into the oncoming lane you will most likely head-on with one of them. However, they are further back and the braking is full-on. You will hit the other oncoming car at a slower speed than you would the first one, which is closer. It chooses a head-on solution. Your Tesla can take the hit, but the other car is a 1984 Beetle. Its driver will be killed.

At least we won't have to make the decision...

I know, kind of a downer, but totally rational from the perspective of the Tesla computer. You paid for it, it will protect you above all else.

Of course in time all cars will have this technology and communicate with each other besides. Nobody will be crossing into oncoming lanes except in the rare instance of gross mechanical failure. In this event, the second oncoming car would not be a 1984 VW and would itself get out of the way. No problem!

Red Sage ca us | October 16, 2014

There is a distinct difference between Autopilot, an Autonomous car, and an Artificial Intelligence. Tesla Motors is not likely to develop HAL 9000, SkyNET, or even Marvin the Paranoid Android to act as the control system for their cars. The human driver will still be engaged in the act of operating the vehicle for quite some time.

"The trouble with computers, of course, is that they're very sophisticated idiots. They do exactly what you tell them at amazing speed. Even if you order them to kill you. So if you do happen to change your mind, it's very difficult to stop them from obeying the original order. [stops computer from destroying Earth] But not impossible." -- Tom Baker as THE 'DOCTOR WHO'

Timo | October 16, 2014

They could do HAL9000 or SkyNET, but Marvin is waaaay out of their reach.

Anemometer | October 16, 2014

I've been having a read on how the Google autonomous car works... They have said 10 years before it's ready. They build a 3D map of everything: kerbs, lampposts, stop signs. It also has a LIDAR, a 64-beam laser range finder.

Despite all that, it can't deal with temporary stop lights, snow, or some types of heavy rain. And of course it is limited to driving only where everything has been fully mapped. It's not an AI system; it's programmed for scenarios. The reason they have a driver and an engineer continuously in the car is to find situations the car deals with poorly or unacceptably. The really interesting thing is that they had to program it to be a bit more aggressive to deal with stop junctions. I.e. it would wait for its turn, but if other drivers are rude, it edges forward a few inches to make a point. That's why in the Tesla the first step is that you make the decision when to change lane; the car doesn't know about lane-changing etiquette yet.

I'm now pretty sure the Tesla autopilot is specific to freeway driving. No pedestrians, dogs or oncoming traffic to deal with, in theory. The collision avoidance is helpful but not really a fail-safe for crap drivers, just an assist for when you are not concentrating properly. I've seen demos of the Volvo system where it works most of the time, but it can still hit objects it's meant to avoid.

grega | October 16, 2014

It would be useful for it to work on 2-way highways. Far too many accidents happen when someone's been driving far too many hours and veers across the road.

But... until it can avoid the driver coming the other way on the wrong side of the road as well as a person can, in theory it would be best to have an awake person at the wheel (forget the autopilot, I'm not really sure how well drivers react to that scenario in practice!).

mr.fabian.rappe | October 16, 2014

When all cars have some kind of positioning system onboard, you can make the cars arrange the best flow of traffic. That means the flow of traffic will allow one car to pass through the flow, and no traffic lights are needed.

Elon, let's eliminate the traffic lights, they are so inefficient!

The autonomous car isn't so hard to do, but it would depend on all other cars having some features like the one above. Communication between cars. The future.
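That "no traffic lights" idea has been studied as reservation-based intersection management: each car asks for a time slot to cross, and non-conflicting slots are granted. A toy sketch of the slot-granting step, with made-up names and no modelling of paths inside the intersection:

```python
# Toy slot-reservation manager for a signal-free intersection.
# A hypothetical sketch of the "cars negotiate, no lights" idea; real proposals
# for reservation-based intersection control are far more involved.

def grant_slot(requested_start: float, crossing_time: float,
               booked: list[tuple[float, float]]) -> tuple[float, float]:
    """Return the earliest (start, end) crossing slot that overlaps no existing booking."""
    start = requested_start
    for booked_start, booked_end in sorted(booked):
        if start + crossing_time <= booked_start:
            break                       # we fit entirely before this booking
        start = max(start, booked_end)  # otherwise wait until that booking clears
    return (start, start + crossing_time)


# Example: two cars already booked; a third asks to cross at t=2 s and needs 3 s.
print(grant_slot(2.0, 3.0, [(0.0, 3.0), (4.0, 6.0)]))   # -> (6.0, 9.0)
```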

centralvalley | October 16, 2014

I am not entirely convinced that the public is ready for any sort of autopilot or other computer-controlled driving in day-to-day situations.

License plate scanners used by law enforcement make too many mistakes. (My friend received a ticket by mail for a carpool lane violation 250 miles from his house. The license plate "scanned" belonged to him all right, but was on a Jeep that was registered non-op with the DMV and had been up on blocks for a complete overhaul. It took him two hours and a large package of paperwork to prove his car did not even resemble the photograph and was not drivable.) I presume that some of the technology utilized by these autopilots uses similar principles to license plate scanners.

Moreover, the vehicle codes vary from state-to-state and province-to-province. Will these cars load all the statutes and case law that apply to driving so the car complies with the law in that particular jurisdiction?

Our brains use all our sensory capabilities to process stimuli. They are amazing at filtering out irrelevant or contradictory information. We don't just rely upon vision. Yes, computers can "see" better than we can with radar, lidar, infrared, etc.

Auditory stimulation plays a large part in driving. To a much lesser degree, olfactory stimulation can alert us to a potential incident.

Have the Googlers and the Tesla engineers factored in hearing and smell to the processors to know directions of potential hazards or traffic incidents?

I think creating a self-driving car is a great academic challenge for those in the fields that would develop one. As a practical matter, a self-driving car is only as good as the weakest link in the entire spectrum of hardware and software. And I believe that no one knows what exactly the weakest link is today or will be tomorrow.

Anemometer | October 16, 2014

"The autonomous car isn't so hard to do, but it would depend on all other cars having some features like the one above. Communication between cars. The future."
You obviously don't write software for a living. ;)

I was a developer for 12 years, and for the latest 9 I have specialised in back-end system and integration testing. It's probably pretty easy to write some code where a car can follow a white line (as per the P85D launch ride).

But getting it to deal with other stuff is what takes the time. Software follows the typical Pareto rule: 20% is functionality and 80% is exception handling. The bits I posted earlier were just off the top of my head. I forgot to say the Google car doesn't avoid potholes. Not good if you have the 21" rims.

And then you have the bugs. Numerous studies have shown that for every 3 changes you make to the code, one of them will introduce an unintended bug. That could be as simple as using >= instead of >, what we call a boundary condition error. So if your software is working out the distance to impact of the several dozen objects it is tracking, there could be a delay of several hundred milliseconds before the code re-evaluates the distance to one of the objects that is closing in on you and it finally crosses the >= rather than the > threshold. And consider that this code has to identify moving objects and stationary objects that may move. There will be multiple threads running in the code, with higher priorities given to objects that are higher risk so they get evaluated more often. Running down a freeway is one thing, but what about driving through a mall car park packed with pedestrians... hundreds of them? They all need tagging and tracking, and their directions working out.
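To make the >= versus > point concrete, here is the kind of boundary condition bug I mean; a contrived example, not anything from a real autopilot codebase:

```python
# Contrived illustration of a boundary condition bug in a braking check.
BRAKE_THRESHOLD_S = 2.0   # requirement: brake when time-to-collision is 2 seconds or less

def should_brake_buggy(time_to_collision_s: float) -> bool:
    # Bug: strict '<' means a time-to-collision of exactly 2.0 s does NOT trigger braking.
    return time_to_collision_s < BRAKE_THRESHOLD_S

def should_brake_fixed(time_to_collision_s: float) -> bool:
    # '<=' includes the boundary value, as the requirement intended.
    return time_to_collision_s <= BRAKE_THRESHOLD_S

assert should_brake_fixed(2.0) is True
assert should_brake_buggy(2.0) is False   # the boundary case slips through
```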

Timo | November 4, 2014

(resurrecting old thread)

That is why an autonomous car would need proper AI with fuzzy logic, not just a bazillion scenarios. Proper pattern recognition.

Computer speed will not be a problem in just a few years; it's the software that needs to be smart, and that can be tough to accomplish. We need a smart computer to code an even smarter computer. And then we have reached the singularity.

Brian H | November 4, 2014

Which Musk dreads.

Timo | November 4, 2014

I wonder if the singularity is really required for a truly autonomous car.

It could be that in order to survive in properly chaotic traffic you really need brain power rivaling humans', which would mean that we have then reached a point where cars become "alive", changing their behavior based on things they learn.

The first truly sentient artificial being we create might well be a car: it must be self-aware because the decisions it makes are based on itself, the complexity of the outside world requires high intelligence, it has complex sensory "organs" to react to changes in the outside world, and it can act independently based on changes in that world. It even needs some sort of self-preservation "instinct". If you add some communication between cars and humans, then by all possible definitions of a sentient being you have one right there. All of Asimov's three laws of robotics are required: don't hurt people, obey the commands of humans, and self-preservation.

carlgo | November 4, 2014

"When all cars have some kind of positioning system onboard, you can make the cars arrange the best flow of traffic. That means the flow of traffic will allow one car to pass through the flow, and no traffic lights are needed."

Cars would be weaving through at high speed, alternately whizzing by at right angles and turning across the bows. It would be like a huge flock of bats all flying fast in huge numbers, in different directions, yet evidently not hitting each other.

Maybe I'll just wait for the Tesla quadracopter.

Brian H | November 5, 2014

The bats use ultrasonic collision-avoidance tech.

grega | November 5, 2014

@Timo
It's worth remembering that if you create an automated system that truly emulates human thought, you'll have a system with human failings too. A car that can get distracted by things, for example.

In terms of safety, we may find a human plus a rules-based autopilot is more effective, and eventually a synthetic intelligence combined with a rules-based autopilot. In either case, a more traditional software system to start with.

Nic727 | October 15, 2015

So can the car see a red light at an intersection?

Bubba2000 | October 16, 2015

A lot of traffic lights are networked and synchronized to optimize traffic flow, allow remote control during emergencies, etc. The autopilot in the car could be allowed to access this info in real time, with timestamps, and control the car in conjunction with other sensors, GPS data, etc. If for some reason the link is not working, the GPS info would warn the driver to take over. The Google Maps that Tesla uses has traffic light info and other data. Need a license, probably.
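If the car did get a live feed from networked signals, the consuming logic might look something like this sketch. The message format, field names, and staleness rule are assumptions; real signal-phase feeds exist but vary by city and vendor:

```python
# Sketch of consuming a timestamped signal-phase message, with a staleness check.
# The message format and field names are assumptions for illustration only.
import time
from typing import Optional

MAX_AGE_S = 2.0   # if the data is older than this, don't trust the link


def action_from_signal(msg: dict, now: Optional[float] = None) -> str:
    """msg example: {'phase': 'red', 'seconds_to_change': 12.4, 'timestamp': 1445000000.0}"""
    now = time.time() if now is None else now
    if now - msg["timestamp"] > MAX_AGE_S:
        return "ALERT_DRIVER"            # stale or broken link: hand back to the driver
    if msg["phase"] == "red":
        return "STOP_AT_LINE"
    if msg["phase"] == "green" and msg["seconds_to_change"] > 4.0:
        return "PROCEED"
    return "STOP_IF_SAFE"                # yellow, or a green that is about to change
```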

I think that all the data and sensors needed for an effective autopilot are in existence; we need the software to make it all work. With 32,000 deaths per year in the USA from motor vehicle accidents, plus crippling injuries, this tech is essential. In comparison, aviation deaths are around 1,100 per year in the USA.

Red Sage ca us | October 17, 2015

Onramp, to offramp. No signal lights.

Timo | October 19, 2015

A robotic driver can actually be a lot safer than humans. They can do tricks like smooth zipper merging when one lane merges into another, and they can see in multiple directions at the same time.

It's just a matter of software/hardware co-operation.

Autopilot's weakness is the fact that it needs a well-defined destination and can't deviate from the route for a sudden, unplanned case of something interesting somewhere. "Go there" is not a meaningful command unless the car can actually either read your mind or interpret your expression better than any human.
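The smooth zipper merge mentioned above is one of the more tractable pieces: pick the gap in the target lane that needs the smallest speed change. A simplified sketch with assumed inputs; real merging also has to negotiate with the other cars:

```python
# Simplified zipper-merge gap selection: choose the gap needing the least speed change.
# The inputs and scoring rule are assumptions for illustration.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Gap:
    center_ahead_m: float   # distance from our car to the gap's midpoint (positive = ahead)
    length_m: float         # free space between the two cars forming the gap
    lane_speed_mps: float   # speed of traffic in the target lane


def pick_gap(gaps: list[Gap], our_speed_mps: float, min_length_m: float = 8.0) -> Optional[Gap]:
    """Return the usable gap requiring the smallest speed adjustment, or None if there isn't one."""
    usable = [g for g in gaps if g.length_m >= min_length_m]
    if not usable:
        return None   # no safe gap: hold the lane or alert the driver
    return min(usable, key=lambda g: abs(g.lane_speed_mps - our_speed_mps))


# Example: merging at 25 m/s with two candidate gaps; the 27 m/s gap wins (smaller change).
print(pick_gap([Gap(12.0, 10.0, 27.0), Gap(-5.0, 15.0, 22.0)], our_speed_mps=25.0))
```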

drax7 | October 19, 2015

According to Mobileye, autonomous driving will require 8 cameras, and at least its EyeQ4 processor chip, which isn't out until 2016.

The current EyeQ3 processor is excellent for safety assistance and some autonomous driving on well-defined streets.

Check their website for further info.

Grinnin'.VA | October 20, 2015

@ Bubba2000 | October 16, 2015

>> I think that all the data and sensors needed for an effective autopilot are in existence; we need the software to make it all work. <<
^^ If you mean autonomous driving based on a driver's input of the address of the destination, I think you're wrong. See below.

@ Red Sage ca us | October 17, 2015

>> Onramp, to offramp. No signal lights. <<
^^ Yes, that's the main focus/goal for the current Tesla "Autopilot".
Many MS owners have tried the current versions and found some features quite useful in some driving situations. (I've done that many times.)

@ Timo | October 19, 2015

>> A robotic driver can actually be a lot safer than humans. They can do tricks like smooth zipper merging when one lane merges into another, and they can see in multiple directions at the same time. <<

^^ You're talking about the future, not now.
Furthermore, I believe that no MS that's on the road will ever be capable of achieving safe "robotic" driving, because the sensors on the current cars aren't adequate for such 'autonomous' driving. For example, the lack of suitable rear-facing sensors precludes safe 'robotic' or 'autonomous' passing of slow-moving cars on multiple-lane highways.

@ drax7 | October 19, 2015

>> According to Mobileye, autonomous driving will require 8 cameras, and at least its EyeQ4 processor chip, which isn't out until 2016. ... <<

Yes. That's why our current MS cars will never become autonomous driving cars.
I believe that the Mobileye front cameras on the current MS cars are capable of tracking multiple cars ahead. Consequently, the current MS cars may be upgraded by software to do safe automated merging in some traffic situations.