Submitted by SteveWin1 on Wed, 2018-11-14 12:11
Since there is very little overlap between cameras, how is Tesla going to give its cars depth perception? The radar only points forward and won't detect people, soccer balls, a mattress falling off the roof of the car in front of you, etc. Ultrasonic sensors don't see far enough away to be useful for avoiding obstacles while you're at speed.
Millions of years of evolution have created a pretty consistent trend. Animals (mainly predators) that need good depth perception have more than one eye pointing in the direction of the object they need to locate, and the distance between the eyes scales with the distance to the objects that are most important for them to locate. Teslas have more of a prey-animal configuration for their cameras -- they point in all directions, giving the car a huge "visual field," but there are only very small areas where the cameras overlap. (Rough numbers below on why that small overlap matters.)

If you watch the cartoons on your car's display, the car does a pretty good job of determining the bearing of other cars, but not the distance. They'll bounce toward and away from the cartoon of your car even while all the cars are stopped (excluding the car in front of you, which is located with radar). Seems like it might be a hard problem to solve. Thoughts?
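Here's a back-of-the-envelope sketch of the baseline problem, using the standard stereo relation Z = f·B/d (depth from focal length, baseline, and pixel disparity). The focal length, disparity error, and baseline values are made-up illustrative numbers, not Tesla specs; the point is just that depth error grows roughly as Z²/(f·B), so a narrow baseline gets noisy fast at distance.

```python
# Rough stereo-baseline math (illustrative numbers, not Tesla's actual setup).
# Depth from disparity: Z = f * B / d, so a fixed disparity error of
# +/- delta_d pixels gives a depth error of roughly Z^2 * delta_d / (f * B).

def depth_error(z_m, baseline_m, focal_px=1000.0, disparity_err_px=0.5):
    """Approximate depth uncertainty (meters) at range z_m for a given baseline."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

# Compare a narrow baseline (cameras clustered behind the windshield, ~5 cm apart,
# my guess) with a hypothetical wide baseline (cameras at each end of the windshield).
for baseline in (0.05, 1.2):
    for z in (10, 30, 60):  # ranges in meters
        err = depth_error(z, baseline)
        print(f"baseline {baseline*100:>4.0f} cm, range {z:>2d} m -> ~+/-{err:.1f} m")
```

With those assumed numbers, the clustered cameras are off by several meters at highway distances, while the wide pair stays under a meter. That's why the "distance between the eyes" point matters, and why the car would have to lean on something else (motion parallax, learned monocular cues, radar fusion) to judge range.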