Towards Full Autonomy (Full Self Driving & Autopilot question)

Hi there,

I have some general questions about the self-driving capability of the upcoming FSD in all Teslas, with regard to achieving FULL autonomy:

1. How will the new cars with FSD navigate complicated intersections in cities?
2. Will they rely on any existing infrastructure for this? If so, what?
3. Will they use communication protocols such as V2V or V2X (vehicle-to-everything)? But isn't latency going to be an issue, especially with different telcos and given that average city speeds are close to 50 kph? (See the rough calculation after this list.)
4. Are there any upgrades to infrastructure that could be useful to enable full and safe autonomy? Something that provides all Teslas with more info about their surroundings? Is Govt. on board with this?
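
To put question 3 in perspective, here is a rough back-of-envelope calculation (the latency values are just my own guesses for illustration, not measured figures):

# Rough illustration: distance a car covers during a V2V/V2X message round trip.
# The latency values below are illustrative assumptions, not measured figures.

speed_kph = 50
speed_mps = speed_kph * 1000 / 3600          # ~13.9 m/s at typical city speeds

for latency_ms in (20, 100, 200):            # hypothetical end-to-end latencies
    distance_m = speed_mps * (latency_ms / 1000)
    print(f"{latency_ms:>4} ms latency -> car travels {distance_m:.1f} m before the data arrives")

# Output (approximate):
#   20 ms latency -> car travels 0.3 m before the data arrives
#  100 ms latency -> car travels 1.4 m before the data arrives
#  200 ms latency -> car travels 2.8 m before the data arrives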

Thanks in advance :)

Cheers.

bp | 21 May 2019

Tesla (Musk) believes their AP hardware & software can safely navigate using front-facing radar, 8 cameras, and proximity sensors, providing more data than a human driver has. With the exception of Nissan, the other manufacturers will also be using LIDAR to provide more image data.

Assuming the sensors are providing enough information, it then comes down to the ability of each manufacturer to develop software intelligent enough to drive more safely than a human. Because of the lack of standardization and because conditions on the road are constantly changing, FSD will only work if the onboard system is able to navigate based on the current circumstances (observing the impact of other vehicles, pedestrians, obstacles, potholes, police, construction, barriers, temporary signs, …).

At some point, we could see some roads (such as high speed highway lanes) designed solely for FSD vehicles, but that won't happen until FSD vehicles dominate the market - which won't be any time soon - so the FSD systems must be able to work on the current roadways.

V2V and V2I are interesting, but can only be used as an additional data source - and not relied on for safe operation of the vehicle. V2V and V2I won't provide much value when most vehicles and roads lack communication.

Tesla's strategy is to use fleet learning as much as possible to train the FSD software to operate under real world conditions. They're building a huge database of real world situations, plus they can roll out new FSD software and run it in shadow mode to test the software in the real world.

Other manufacturers are relying on simulation (which may not reflect the full range of actual road conditions) and/or a relatively small fleet of test vehicles.

It's still too early to tell which of these approaches will work. But it's unlikely we'll see any infrastructure improvements or deployment of V2X communication soon enough to help in the development and testing of FSD.

andy.connor.e | 21 May 2019

These questions are too specific for consumers to be answering.

David N | 22 May 2019

Looking at the current state of our highways and roads, I too have doubts. Too many complicated road issues: poor lane markings, potholes, road debris, etc.
I understand and like the goal; it's getting there that will take a lot longer than perhaps stated.

Madatgascar | 22 May 2019

Take Tesla's projections with more than a grain of salt. Elon is chronically over-optimistic and overly impressed with the technology. This is the guy who thinks VR is so realistic that we may already be living in a simulation. This is the guy who promised a fully automated alien dreadnought factory that turns raw materials into cars, only to later observe that "it turns out" humans are "underrated," very good at certain tasks, and play an important role on the assembly line. In the same way, we will ultimately hear that humans are really good at using their brains for odd things like judging water-filled potholes, and therefore still need to be involved in driving to some extent. FSD will become FSD*, redefined as needed, driver still in command. It will probably still be best in class, and some people probably will fall asleep in their cars, and it will be messy.

David N | 26 May 2019

Is the ultimate goal to have all cars "talk to each other" while moving/parking, etc.?
Seems to me that as long as humans are driving, there are going to be problems. A simple viewing of the nightly local news quickly reveals the crashes by drunk drivers, those on drugs, those who experience a medical episode, and then those who are just plain idiots.
I'm guessing that until the above are not allowed to drive, crashes will always happen, and people will continue to get hurt and die.
Elon may have the technology, but unfortunately the rest of the equation is far, far behind.

TeslaTap.com | 26 May 2019

@Madatgascar - They are projections, not promises. Yep, he's overly optimistic, but everyone else seems to be doom and gloom, with an attitude of "it will never work, so why even bother." I'd rather have an optimist in charge and get something great, even if it takes longer than desired. Better than a big pot of nothing.

Elon does have a sense of humor - it's just that some don't get it or don't understand humor. I forgot about the VR simulation - that was funny.

So how does a human who can't detect a water-filled pothole do better than FSD that can't detect a water-filled pothole? We all know there will be plenty of situations where FSD will not work - just like the situations that don't work for humans - hurricanes, snow whiteout conditions, mud slides, etc. The key is that if FSD works for MOST situations that we as humans also encounter and handle, it should be quite good. It has a 360-degree view at every moment, which we humans don't possess. The current EAP has already been avoiding accidents that no human could handle, and it's only going to get better. That said, it will also never be perfect. Some accidents will still occur no matter what. The key is whether it is much safer than a human driver, which is a fairly low bar to achieve. I think Tesla has already achieved safer-than-human with EAP. Now it needs to get even better and handle more conditions (e.g., city traffic).

CygnusX-1 | 26 May 2019

This is one question I've always had about the camera tech and FSD. How good really is the vision? Most (all) people have to take a vision test in order to be able to drive. Has anyone ever tested a Tesla - or, for that matter, asked Tesla for specific numbers? How would the forward-facing camera compare to a human? Would it be 20/20? 20/100? 20/10? This should be easy for Tesla to state and/or show. I personally have not seen anyone on YouTube post a video of what the camera sees with regard to signs at specific distances. I would think that would be interesting. All I've seen is video of people driving around with a camera (GoPro) facing outside and one on the display. There are vehicles, people, etc. that pass the car that don't even show up on the display. Does that mean the car didn't even see those items? Or is the display so laggy that the car sees them, but the engineers chose not to display them on screen for bandwidth reasons?

And don't tell me that it has radar. Radar can't see everything. It can't read signs or see stop lights, and apparently it can't see semis crossing the road. These things have to be improved significantly before FSD becomes a reality.

Don't get me wrong. I'm a full believer in FSD. As a matter of fact, I just purchased an inventory S, and having FSD was a requirement. I'm the type of person who buys cars and then drives them into the ground. I don't plan on getting rid of this new car for 8-10 years, and in that time I believe (hope) that we will have FSD.

TeslaTap.com | 27 May 2019

@Cygnus - Interesting idea, but I've not heard of any camera being rated on a 20/20-style scale like human vision. Most fixed-focus digital cameras are focused for a specific range. In the case of Tesla's cameras, they are focused for distance, not for reading a book 1 foot away as we humans are able to do. While we humans also have peripheral vision, its resolution is very poor, whereas the camera has about the same resolution at the frame edge as at the center. So I have no way to answer what you are asking, and it may not be all that important.
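
That said, here's a very rough way to compare it to the 20/20 scale. The sensor width and field of view below are assumed numbers for illustration only, since Tesla doesn't publish per-camera specs in that form:

# Back-of-envelope comparison of camera angular resolution vs. "20/20" vision.
# The sensor width and field of view are assumed values for illustration only;
# Tesla does not publish per-camera specs in this form.

horizontal_pixels = 1280      # assumed sensor width in pixels
horizontal_fov_deg = 50       # assumed field of view of the main forward camera

arcmin_per_pixel = (horizontal_fov_deg * 60) / horizontal_pixels
print(f"~{arcmin_per_pixel:.1f} arcminutes per pixel")   # ~2.3 with these numbers

# 20/20 vision resolves roughly 1 arcminute, so with these assumed numbers the
# camera would be a bit coarser than 20/20 at the center - but unlike the eye,
# it keeps that resolution all the way out to the frame edges.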

If you want to see what the cameras see (at least the one center and two rear-facing side cameras), just search for Tesla dashcam videos. These show what 3 of the 8 cameras view. Tesla also states the range of each camera (max distance), but I don't know what this means precisely. See the diagram here: https://www.tesla.com/autopilot

Now what the camera sees versus what the neural network sees is also interesting and perhaps more relevant. The hardware in HW2.x cannot process the full camera data, so it may down-res and/or select areas of the image of interest. Most of the time it may only use 1/4 of the pixels (just a guess on my part). This allows each frame to be processed for the 8 cameras. To deal with signs, it could boost the resolution of just the sign area to read it. The HW3 hardware is capable of processing frames at far higher resolution, likely needed for FSD.
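
Nobody outside Tesla knows exactly how the frames are handled, but the general idea of down-resing most of the frame while keeping a region of interest (like a sign) at full resolution looks something like this (purely illustrative, with made-up sizes):

import numpy as np

# Illustrative sketch only: down-sample a full camera frame, but keep a
# region of interest (e.g. around a detected sign) at full resolution.
# Frame size and ROI coordinates are made-up values.

frame = np.random.randint(0, 255, size=(960, 1280), dtype=np.uint8)  # fake frame

# Cheap 2x down-sampling: keep every other pixel in each direction (1/4 of the data).
low_res = frame[::2, ::2]

# Full-resolution crop of a hypothetical sign region (rows 100-200, cols 900-1050).
sign_roi = frame[100:200, 900:1050]

print(low_res.shape)   # (480, 640)  -> processed every frame
print(sign_roi.shape)  # (100, 150)  -> small patch kept at full detail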

Actually, radar does see items crossing the road, along with a lot of other reflections. It turns out there are so many reflections (road surface, signs, overpasses, etc.) that the software ignores all fixed reflections. A semi or anything else that is stationary is ignored by radar unless it can be tracked (i.e. a car that is in front of you that slows and then stops is seen). The current radar has a 32-beam view (very coarse), likely in an 8-wide by 4-high beam array. It measures the distance to targets in each of the radar beams. So it's more advanced than the single "beam" that people often associate with radar.
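
Here's a toy sketch of that "ignore everything fixed" behavior - the field names and numbers are made up, but it shows why a stalled car that was never tracked gets filtered out along with overpasses and signs:

# Toy illustration of why radar alone struggles with stationary objects.
# Field names and thresholds are made up for this sketch.

ego_speed = 25.0  # m/s, our own speed

returns = [
    {"id": "overpass",         "closing_speed": 25.0, "tracked": False},
    {"id": "road_sign",        "closing_speed": 25.0, "tracked": False},
    {"id": "stalled_car",      "closing_speed": 25.0, "tracked": False},  # never seen moving
    {"id": "stopped_lead_car", "closing_speed": 25.0, "tracked": True},   # tracked while it slowed
    {"id": "moving_car",       "closing_speed": 5.0,  "tracked": True},
]

for r in returns:
    looks_fixed = abs(r["closing_speed"] - ego_speed) < 0.5   # approaching at our own speed
    if looks_fixed and not r["tracked"]:
        print(f"{r['id']}: ignored (indistinguishable from fixed clutter)")
    else:
        print(f"{r['id']}: kept (moving target or already tracked)")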

Vision is needed to see a fixed object in the road path, and it is slowly getting better with improved software. HW3 with full frame processing should be able to better detect and deal with objects in the road. For now with EAP, owners must watch for objects in the road.

SCCRENDO | 27 May 2019

The resolution achieved by cameras today is amazing and can be achieved with truly small lenses. Larger lenses would only be required for extremely distant vision, which is unnecessary. Just witness the cameras in phones. The cameras just have to cover 360 degrees. The issue would be software and appropriate computer hardware. As a photographer, I believe all we need is available. When I see the abilities of Photoshop and Lightroom, I believe the abilities are there. Facial recognition, etc., has also achieved amazing utility. It's just a matter of fine-tuning the software and AI for Tesla's specific purpose. Unfortunately, going from 0-90% may be a lot easier than going from 90-100%.

Madatgascar | 27 May 2019

@TT and SCCRENDO - I agree, the vision part is better than a human driver. What is lagging is the cognitive reasoning part. I can tell by the edges of a water-filled pothole and the way the ripples move whether it's a danger or not. I know from context if it's safe to drive within inches of a row of parked cars, or if there is a chance a child might pop out between two cars. I can make eye contact with flagmen and other drivers. I already take many more things into account than my Tesla does in freeway driving, and on surface streets the variables multiply exponentially.

It is not enough to be better than human drivers. Our record is dragged down by humans who are too old, too young, too tired, too angry, or too drunk to drive. My wife is none of those things when she drives our Tesla, and I need her to be statistically safer than a competent adult driving the safest car in the world before FSD is “good enough.”

SCCRENDO | 27 May 2019

As I said. The AI needs to improve. But all the technology is there.

Dramsey | 27 May 2019

I keep wondering how things like traffic light recognition will work on AP2.0 cars with monochrome cameras. Of course the position of the light in the fixture is definitive, but interpreting that would seem to involve a lot more processing than just "red light".
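
One plausible approach (just my own sketch, not how Tesla actually does it) would be to use the vertical position of the lit lamp within the detected fixture, which works even without color:

# Sketch of position-based traffic light classification for a monochrome image.
# Not Tesla's actual method; it just shows the idea that color isn't strictly required.

def classify_light(fixture_top: float, fixture_bottom: float, lit_lamp_y: float) -> str:
    """Classify a standard vertical 3-lamp fixture by where the bright lamp sits."""
    relative = (lit_lamp_y - fixture_top) / (fixture_bottom - fixture_top)
    if relative < 1 / 3:
        return "red"      # top third of the fixture
    if relative < 2 / 3:
        return "yellow"   # middle third
    return "green"        # bottom third

print(classify_light(fixture_top=100, fixture_bottom=250, lit_lamp_y=120))  # red
print(classify_light(fixture_top=100, fixture_bottom=250, lit_lamp_y=230))  # green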

SCCRENDO | 27 May 2019

@Dramsey - I guess they could just upgrade one camera. But as stated, the position would be helpful.

bp | 28 May 2019

While (most) humans use a two-camera system with adjustable focus and depth perception, we are operating on only a small amount of data compared to what Tesla's camera and radar suite is capable of providing.

While Tesla's cameras may have limitations in field of view, depth, and resolution, the camera data streams continuously, and by processing successive images (likely on the much faster FSD V3 processor), the software may - and should - be able to extract higher-resolution information and provide more data about the surroundings than a human driver with two eyes and a limited field of view.

A bigger concern should be Tesla's reliance on radar rather than lidar. Will Tesla be able to solve the stationary-object problem (detecting when there is a stationary object in the road ahead)? The first major AP crash was with a stationary object (a semi) against a light background, and that was likely part of the reason why Tesla quickly moved away from Mobileye. In the last few days, a Tesla with AP2 hit a stalled vehicle and evidently didn't react prior to the accident. Will Tesla's sensors and processor be able to detect this situation and respond more safely than a human would?

TeslaTap.com | 28 May 2019

@bp - In these collisions with a stationary object, the human at the wheel didn't detect the stationary object either!

@Dramsey - In HW2.0 we often call the cameras monochrome, but they actually use RCCC filters. For each camera pixel there are four photoreceptors. Tesla uses Red, Clear, Clear, Clear for the four photoreceptors, so the car can detect red quite easily. HW2.5 changed to RCCB (Red, Clear, Clear, Blue), where green can be calculated if needed. The reason for clear is that it improves light sensitivity. The reason for multiple clears is even better light sensitivity - critical for seeing in the dark. It may be that HW2.0 has better night vision (3 clears vs 2), but my guess is the newer HW2.5 cameras have individual photoreceptors with better sensitivity, so night vision may be a wash. It seems camera tech gets better every year!
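
The "green can be calculated" part works roughly like this: a clear photoreceptor measures R+G+B together, so with known red and blue you can estimate green. A simplified sketch (a real demosaicing pipeline applies per-channel gains and interpolation):

# Simplified illustration of recovering green from an RCCB sensor pattern.
# A real demosaicing pipeline applies per-channel gains and interpolation;
# this just shows the basic arithmetic.

def estimate_green(clear: float, red: float, blue: float) -> float:
    """Clear ~ R + G + B, so G ~ Clear - R - B (clamped to a sensible range)."""
    return max(0.0, clear - red - blue)

# Hypothetical pixel values from neighboring photoreceptors (0-255 scale).
print(estimate_green(clear=180.0, red=60.0, blue=40.0))  # ~80 -> green component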

Dramsey | 29 May 2019

@TeslaTap.com,

Thanks for the explanation!