Upgrades for FSD on Jan 2017 Model S

My current Tesla was purchased in January 2017 with the FSD package. It is hardware version 2.0. What are the plans/options for making this vehicle actually FSD? Is Tesla going to replace the computers and hardware on all these cars to live up to the FSD package contract?

marika.appell | April 22, 2019

Good question! Lots of promises and speculation, but until that new CPU is installed in my car, I don't believe anything.

EVRider | April 22, 2019

Tesla confirmed this again during today’s Autonomy Day presentation.

TeslaTap.com | April 22, 2019

They are going to replace the HW2.0 and HW2.5 AP processors with HW3 for those who bought FSD. They are not making the other hardware changes that occurred between 2.0 and 2.5. This may mean no dashcam, as HW2.0 uses monochrome cameras. Far more details here if you're interested: https://teslatap.com/articles/autopilot-processors-and-hardware-mcu-hw-d... (updated today with new HW3 info and photo).

RedJ | April 22, 2019

@TT do you think there’s any credence to the story that HW2.0 cameras are firmware-updatable to be full color?

TeslaTap.com | April 23, 2019

@Red - No. The camera's photoreceptors are monochrome; you get color by adding tiny filters - red, green and blue. If the filters are not there, software/firmware cannot fix it. That's the simple explanation. The camera can be set to different modes, but that doesn't change which filters were installed during manufacture. For example, you can set an RGB color mode, but you'd get a mess of video output. The firmware is the same for color and monochrome cameras, and it is assumed the application knows which filters were installed.

For a bit more complexity, the HW2.0 cameras can see red and monochrome. Each pixel is made up of 4 photoreceptors - one with a red filter, and 3 monochrome. This technique improves the monochrome sensitivity (3 cells are combined for better night vision).

HW2.5 cameras also use 4 photoreceptors, but they have filters for red, green, green and blue. Green is doubled up to improve green sensitivity. If they used the same photoreceptors as HW2.0, the night sensitivity would be lower. It's possible the HW2.5 color cameras are a newer design with improved sensitivity per photoreceptor that works similarly to the HW2.0 cameras at night.
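
To make those filter layouts concrete, here's a rough Python sketch of a single 2x2 block of photosites in the two styles described above - one red plus three clear (monochrome) for the HW2.0-style cameras versus red/green/green/blue - and how averaging the clear photosites helps low-light sensitivity. Purely illustrative (the block layouts and the helper function are my own construction); the actual readout and processing in Tesla's cameras isn't public.

```python
import numpy as np

# Illustrative 2x2 color-filter-array blocks (not Tesla's actual sensor spec).
# HW2.0-style block per the post above: one red-filtered photosite, three clear.
HW20_BLOCK = [["R", "C"],
              ["C", "C"]]

# RGGB block (classic Bayer pattern), mentioned above for the rear camera.
RGGB_BLOCK = [["R", "G"],
              ["G", "B"]]

def summarize_block(raw_2x2, layout):
    """Collapse one 2x2 block of raw photosite values into per-filter averages.

    raw_2x2: 2x2 array of raw intensities read off the sensor.
    layout:  2x2 list of filter letters ("R", "G", "B", "C").
    """
    raw = np.asarray(raw_2x2, dtype=float)
    sums, counts = {}, {}
    for i in range(2):
        for j in range(2):
            f = layout[i][j]
            sums[f] = sums.get(f, 0.0) + raw[i, j]
            counts[f] = counts.get(f, 0) + 1
    return {f: sums[f] / counts[f] for f in sums}

# Example: a dim scene. Averaging the three clear photosites is what gives the
# HW2.0-style layout its better low-light sensitivity compared with RGGB,
# where every photosite sits behind a light-reducing color filter.
dim_scene = [[12, 40], [38, 42]]
print(summarize_block(dim_scene, HW20_BLOCK))  # {'R': 12.0, 'C': 40.0}
print(summarize_block(dim_scene, RGGB_BLOCK))  # {'R': 12.0, 'G': 39.0, 'B': 42.0}
```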

reed_lewis | April 23, 2019

@TT - Then how will HW2.0 be capable of handling different-colored stop lights? It seems to me that Tesla would want to upgrade the cameras to true RGB so that they don't have to write different code for RGB versus RMMM cameras. Plus, handling full FSD using monochrome cameras is almost the same thing as using Lidar in my mind.

ggendel | April 23, 2019

@reed_lewis I don't believe any cameras currently supplied do true RGB. The earlier ones did RCCC and the newer ones do RCCB. Of course, the luminance channels can be an approximation for green, and having a red channel means you can estimate yellow as well. However, RCCC does a poor job of discriminating blue, so in situations where you need to distinguish between green, yellow, and blue, the newer sensors are superior.
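
As a toy illustration of why that matters (the channel responses below are made-up numbers, not measurements from any real sensor): an RCCC sensor only ever reports a red value and a clear/luminance value, so a pure green light and a pure blue light produce identical readings; the blue channel in RCCB is what breaks the tie.

```python
# Idealized per-channel responses to pure-colored lights (assumed values,
# just to illustrate the point; not measured from any real sensor).
LIGHTS = {
    "red":    {"R": 0.9, "G": 0.1, "B": 0.1},
    "yellow": {"R": 0.8, "G": 0.7, "B": 0.1},
    "green":  {"R": 0.1, "G": 0.9, "B": 0.1},
    "blue":   {"R": 0.1, "G": 0.1, "B": 0.9},
}

def sensor_reading(light, pattern):
    """What a hypothetical RCCC or RCCB sensor reports for a given light."""
    r, g, b = light["R"], light["G"], light["B"]
    clear = r + g + b  # a clear photosite responds to all wavelengths
    if pattern == "RCCC":
        return {"R": r, "C": clear}
    if pattern == "RCCB":
        return {"R": r, "C": clear, "B": b}
    raise ValueError(pattern)

for name, light in LIGHTS.items():
    print(name, sensor_reading(light, "RCCC"), sensor_reading(light, "RCCB"))
# Under RCCC, "green" and "blue" produce the same (R, C) pair and cannot be
# told apart; RCCB's blue channel separates them.
```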

reed_lewis | April 23, 2019

Thinking about it, the only cameras that probably need to be full color are the three center cameras as they are the ones that need to see stoplights, etc.

So perhaps they will swap those out.

TeslaTap.com | April 23, 2019

@reed_lewis - I don't know how Tesla will handle traffic lights on HW2.0, but I assume it will be a combination of red-light detection and position within the light. Red will be bright on both the red and mono pixels, a green light will show up only on the mono pixels, and yellow will register on both red and mono but not as bright. It may not be important to distinguish between yellow and red on a traffic light - if you treat both as red!
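
A toy sketch of that red-plus-mono reasoning (the function name and thresholds below are purely hypothetical illustration, not anything Tesla has published):

```python
def guess_lamp_color(red_level, mono_level, bright=0.7, dim=0.2):
    """Very rough guess at a traffic-lamp color from an RCCC-style camera.

    red_level:  normalized brightness of the lit lamp on the red-filtered pixels
    mono_level: normalized brightness of the lit lamp on the clear (mono) pixels
    Thresholds are made-up illustration values, not tuned against real data.
    """
    if mono_level < dim:
        return "unlit"
    if red_level >= bright and mono_level >= bright:
        return "red"                # bright on both red and mono
    if red_level < dim:
        return "green"              # bright on mono only
    return "yellow/red"             # moderate on both; treat as stop to be safe

print(guess_lamp_color(0.9, 0.9))   # red
print(guess_lamp_color(0.05, 0.8))  # green
print(guess_lamp_color(0.5, 0.6))   # yellow/red
```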

@ggendel - You are right. I was trying to simplify the explanation with RGGB, which is only used on the rear camera. For those unclear about the RCCB-type camera used in 7 of the 8 cameras on HW2.5, it's Red, Clear, Clear and Blue. The clear (i.e. monochrome) pixels are used to boost sensitivity, as a green filter reduces the light the sensor sees. With a bit of computational power, you can reconstruct RGB video from RCCB output; the calculation is made on each pixel: Green = Monochrome - Red - Blue. Given the way the video is processed by the neural network, my guess is you don't have to do this calculation. Only for the dashcam video would it need to be done.
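
For anyone curious, a minimal sketch of that per-pixel reconstruction, assuming you already have demosaiced red, clear, and blue planes as arrays (the clear channel is treated as roughly red + green + blue; a real pipeline would also apply white balance, color correction, and gamma, all omitted here, and the function name is just for illustration):

```python
import numpy as np

def rccb_to_rgb(red, clear, blue):
    """Approximate RGB from demosaiced RCCB planes.

    Assumes the clear channel is roughly the sum of red, green, and blue light,
    so green is estimated as clear - red - blue, clipped to stay non-negative.
    """
    red, clear, blue = (np.asarray(p, dtype=float) for p in (red, clear, blue))
    green = np.clip(clear - red - blue, 0.0, None)
    return np.stack([red, green, blue], axis=-1)

# Tiny 1x2 example "image": a reddish pixel and a greenish pixel.
r = [[0.8, 0.1]]
c = [[1.0, 1.0]]
b = [[0.1, 0.1]]
print(rccb_to_rgb(r, c, b))
# [[[0.8 0.1 0.1]
#   [0.1 0.8 0.1]]]
```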