There is a world of difference between expectations and actual reality.
So much so that Tesla had to go back to the drawing board more than once after having announced that it was "easy" and a "done deal". There were clear reasons why Waymo chose to use hugely expensive LIDAR sensors and then geomapped every route the car would take. Essentially the Waymo software is a smart chimp that can't deal with the real world. Take an unprotected left turn, for instance. A Waymo Level 4 robotaxi (geofenced to the Phoenix area since 2016) will not attempt an unprotected left turn, what we would call an unfiltered turn. It will go miles out of its way, or leave people on the other side of the road from their destination.
Waymo has not solved this problem since 2016, and even though the car already knows from the geomapping whether the turn is filtered or not, it still can't make the turn.
Tesla has had to resolve these problems, and it has finally realised what we humans actually do. Simply put, we watch the road in 4D and take action based on evolving situations. 4D, you say? Yep, 3D vision over time. Think of a 3D movie: it is really 4D, 3D vision sustained over a period of time. That is what we do. Now to get a computer to do it, at 30fps per camera.
Then we also have up to 3 mirrors in the car. To replicate all this, Teslas have 8 cameras. 30fps × 8 cameras is one hell of a lot of frames of data to label, identify and feed back to a driving AI as reliable training instructions. Humans do this automatically, but only because we have spent 17 years learning to do it before anyone will let us behind a wheel. AI, when it first started driving, was like a 3-year-old. Tesla has reached roughly a 10-year-old with the current level of FSD. Now it needs the training and experience to take it from 10 to 17, when it is "mature" enough to sit a test.
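To get a feel for the scale, here is a back-of-the-envelope sketch. The 8 cameras and 30fps come from the paragraph above; the fleet size and hours driven per day are purely illustrative assumptions, not real figures.

```python
# Back-of-the-envelope frame counts. Camera count and frame rate are from the
# text above; fleet size and daily driving hours are illustrative assumptions.

CAMERAS = 8
FPS = 30
FLEET = 1_000_000        # assumed: roughly the fleet size mentioned in this piece
HOURS_PER_DAY = 1        # assumed: average driving time per vehicle per day

frames_per_car_per_second = CAMERAS * FPS                     # 240
frames_per_car_per_hour = frames_per_car_per_second * 3600    # 864,000
frames_per_fleet_per_day = frames_per_car_per_hour * HOURS_PER_DAY * FLEET

print(f"{frames_per_car_per_second} frames per second, per car")
print(f"{frames_per_fleet_per_day:,} frames per day across the assumed fleet")
```

Even with those modest assumptions, the number of frames per day runs into the hundreds of billions, which is why humans can't label it by hand.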
In order to do this, Tesla has had to create a monster training computer which can take data from 8 cameras across a million vehicles, recognise edge cases, label them and generate additional information as training data for the driving AI. That monster training computer is itself an AI, created specifically to train AI programs to drive.
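Conceptually, that pipeline boils down to something like the sketch below. Every name in it (the edge-case detector, the auto-labeller, the training set) is hypothetical and purely illustrative; this is not Tesla's actual system, just the shape of the idea.

```python
# A hypothetical sketch of an automated labelling pipeline. Names and
# structure are illustrative only, not Tesla's real implementation.

def build_training_batch(fleet_clips, edge_case_detector, auto_labeller):
    """Filter raw fleet video clips down to edge cases and label them."""
    training_set = []
    for clip in fleet_clips:
        # A clip is worth keeping if the current driving model finds it
        # surprising, e.g. it was uncertain or the human driver intervened.
        if edge_case_detector.is_edge_case(clip):
            labels = auto_labeller.label(clip)    # machine-generated labels
            training_set.append((clip, labels))   # fed back to the driving AI
    return training_set
```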
When Google wanted to create a game-playing AI, they spent 3 years writing the code for the AI, the intelligence logic if you will. During those 3 years they recorded session data from human and artificial players. When the software was complete and tested, it took 3 DAYS for the AI to ingest the training data and start playing at a very high level. Humans could still beat it, but every time it was beaten it became even harder to beat the next time.
This is what is happening with AI driving.
In between starting to write this and losing it because of PC replacement issues (Windows activation, etc.), I was reading up on the so-called "Tesla accidents". I noticed two things.
The first was that every driver who had an accident and did not get killed was convicted of offences ranging from driving without due care and attention to dangerous driving. That is because Tesla quite specifically says you must be alert, aware and in control of the vehicle when using Autopilot, because it is Level 2 and not really in charge of the vehicle. In fact it is like taking someone out to teach them to drive: you must be more aware than the person actually driving, except that with Autopilot it is like being in a dual-control vehicle.
The second thing I noticed was that Teslas have an issue. In fact not just Teslas, but every AI vehicle today. There is one edge case which causes a problem: the vehicle in front moves out of the lane, your vehicle starts to accelerate back up to the set speed, only to find there is an obstruction in the road, which is why the vehicle in front moved out of the lane. In this case, depending on the angle of the vehicles, the vision and radar sensors were in conflict. Radar could not really see the obstacle but vision could. That set up a conflict which blocked the automatic safety systems from slamming on the brakes.
In every case I can find (and there are not many for a million Tesla vehicles on the road), it seems the vehicle had between 3 and 4 seconds from the lane clearing to impact. Which means the vehicle in front made an extremely late avoidance and the vehicle following behind had very little reaction time. This happens with humans too, and it almost always results in an accident unless the person has very fast reactions.
Tesla has finally decided how to deal with this. They are ditching the radar and going with human senses: pure vision. This removes the conflict and allows the safety systems to react. Every other driving AI system is using blended sensors, setting the scene for exactly this conflict. The sketch below shows the difference.
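To be clear, this is not Tesla's actual code, just a toy illustration of why a naive "both sensors must agree" rule can veto braking when radar misses a stationary obstacle that the cameras can see, and why dropping the conflicting sensor removes that particular failure mode. All thresholds and confidence numbers are made up.

```python
# Illustration only: a naive fusion rule versus a single-sensor rule.
# Thresholds and confidence values are invented for the example.

def should_brake_fused(vision_confidence, radar_confidence, threshold=0.7):
    # Naive "both must agree" fusion: a radar miss vetoes a vision detection.
    return vision_confidence > threshold and radar_confidence > threshold

def should_brake_vision_only(vision_confidence, threshold=0.7):
    # Single-sensor rule: there is no second opinion to conflict with.
    return vision_confidence > threshold

# The edge case described above: cameras see the obstruction, radar does not.
vision, radar = 0.9, 0.2
print(should_brake_fused(vision, radar))   # False - the conflict blocks braking
print(should_brake_vision_only(vision))    # True  - the safety system can react
```

Real systems arbitrate between sensors in far more sophisticated ways, but the toy example captures the kind of deadlock being described.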
This is where we are with AI self-driving today: the circus chimp who has almost everything done for it, and the 10-year-old. It is hardly surprising that people have accidents and get killed when they treat these things as if they were a fully trained and certified driver.
There is video out there of what happens when you switch on GM Super Cruise on a road which is not geomapped. The thing goes crazy, tries to drive off the road, ignores traffic lights and, essentially, reacts like a toddler. That is the same abuse as sitting in the passenger seat of a Tesla whilst the uncertified juvenile drives the car.