A decent camera-only vision system should still be able to detect the wall. I was honestly shocked that Tesla failed this test so egregiously.
If you use two side-by-side cameras, you can determine the distance to a feature by measuring the offset in the feature's position between the two camera images. I had always assumed this was how Tesla planned to achieve camera-only FSD, but that would make too much sense.
https://www.intelrealsense.com/stereo-depth-vision-basics/
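The math behind that link is just similar triangles. Here's a minimal sketch with made-up numbers (the focal length, baseline, and disparity values below are hypothetical, not Tesla's actual hardware specs):

```python
# Stereo depth from disparity: Z = f * B / d, where
#   f = focal length in pixels
#   B = baseline (distance between the two cameras) in meters
#   d = horizontal pixel offset (disparity) of the feature between images

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to a feature seen by two rectified side-by-side cameras."""
    return focal_px * baseline_m / disparity_px

# e.g. 1000 px focal length, 10 cm baseline, 4 px disparity -> 25 m away
print(stereo_depth(1000.0, 0.10, 4.0))  # 25.0
```

Note how depth scales inversely with disparity: distant objects produce tiny offsets, which is why the baseline and image resolution limit how far out stereo works.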
Even if they wanted to avoid any redundant hardware and go with only one camera per direction, there's still a chance they could've avoided this kind of issue by using structure from motion, but that's much harder to do when the objects themselves could be moving.
https://en.m.wikipedia.org/wiki/Structure_from_motion
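To see why moving objects break structure from motion: with one camera, the car's own travel between frames acts as a virtual stereo baseline, and the same disparity math applies, but only if the scene is static. A toy sketch (all numbers hypothetical, and the geometry is simplified to a sideways translation):

```python
# Structure from motion with ONE camera: the camera's own travel between
# two frames plays the role of the stereo baseline. The static-scene
# assumption is baked in; a moving object silently breaks it.

def sfm_depth(focal_px: float, camera_move_m: float, disparity_px: float) -> float:
    """Depth estimate assuming only the camera moved between the two frames."""
    return focal_px * camera_move_m / disparity_px

focal_px = 1000.0    # hypothetical focal length in pixels
camera_move_m = 0.5  # camera translated 0.5 m between frames
true_depth_m = 25.0

# Static object: observed disparity reflects the full camera motion.
disparity_static = focal_px * camera_move_m / true_depth_m  # 20 px
print(sfm_depth(focal_px, camera_move_m, disparity_static))  # 25.0 m -- correct

# Moving object: it drifted 0.25 m the same way the camera did, so the
# relative motion (effective baseline) is only 0.25 m, halving the disparity.
disparity_moving = focal_px * (camera_move_m - 0.25) / true_depth_m  # 10 px
print(sfm_depth(focal_px, camera_move_m, disparity_moving))  # 50.0 m -- 2x too far
```

That last line is the failure mode: an object pacing the camera looks twice as far away as it really is, which is exactly the kind of error you can't afford in a driving stack.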