Quite - I don't have those sensors, but I do have madly myopic (sort of corrected) stereoscopic vision and a brain, with 30-odd years' experience in it. Oh, and I have ears. I generally drive with the window down in town: I can look left and listen right.
My office has an entrance onto the A37 in Yeovil (1). Bear in mind we drive on the left in the UK. Picture yourself in a car next to that "for sale" sign, trying to turn right. That white car will be doing 20-40 mph or much more. In the distance is a roundabout (Fiveways) which is large enough to enable a fast exit, and people love to accelerate out of a roundabout. As you can also see, this is a hill, so traffic from the other side comes in quite fast because cars have to brake to keep down to the speed limit of 30 mph. That's just one scenario.
Anyway, back to your idea that Tesla FSD (Full Self Driving) - i.e. RADAR, LIDAR etc. - is equivalent to "me": that's debatable. I do have my limitations, but I can reason about them, and I can reason about what FSD might mean as well.
You assert: "If human vision is good enough, computer vision is good enough." People and computers/cars/whatevs do not perceive things in the same way. I doubt very much that you have two cameras with a very narrow but high-res central field of view and a rather shabby peripheral view that is tuned for movement. My analogue "vision" sensors are also mounted on something that can move around (within the confines of a seatbelt). Yes, I do have to duck under the rear view mirror and peer around the A pillar etc.
I have no doubt that you have something like a camera with a slack handful of Google Corals trying to make sense of what is happening, but it is really, really complicated. I actually think that your best bet is not to try to replicate my (or your) sensors and actions but to think outside the box.
Have you ever considered a drone?
Cheers Jon
(1) - https://www.google.co.uk/maps/@50.9471642,-2.6382854,3a,75y,...