There seems to be unstoppable momentum toward the development and deployment of autonomous vehicles. Almost every day there is a story about the latest advanced driver assistance system (ADAS), drone or supposedly intelligent robot. As this rush to market accelerates, we are also regularly reminded that these technologies remain in their infancy when it comes to full autonomy and the much-touted societal benefits it will bring.
For example, the Las Vegas self-driving bus was involved in a crash less than two hours into the first day of its career. It stopped when a human-driven truck in front of it stopped, as it was programmed to do, but was powerless when the truck then backed up into its front fender. Whichever vehicle was at fault, the slogan “Look Ma No Driver” in the front window of the bus reads like a child showing off. As we know, pride comes before a fall.
And this is not the first incident. In October, a remotely piloted drone hit a small passenger plane in Canada in what was possibly the first recorded collision of this type.
Earlier in the summer, we heard about Steve, the Washington DC security robot, who decided to reverse the path of human evolution by moving from the land into the water. Unfortunately, he was very poorly equipped for the swim and ended his day being unceremoniously fished out of the pond by his very human colleagues. If robots could be embarrassed, I am sure his cheeks would be very red.
Fortunately, in each of these cases no one was hurt. However, the possibility that any one of these incidents could have resulted in injury or worse is very real.
They also reveal the twofold nature of the safety challenge autonomous vehicles must address. First, they must keep safe the person or object being transported autonomously (like the passengers on the Las Vegas bus). Second, they must not pose a threat to themselves, or to the people and objects around them (like the drone, or Steve).
In most cases, before these autonomous vehicles can be deployed more widely, they have to meet a range of safety criteria and be signed off by a regulator or other authority.
This is one of the biggest challenges facing the developers of autonomous vehicles today. Waymo, the self-driving car division of Google, recently published a report stating that over 8 years its self-driving cars had amassed around 3.5 million road miles of experience. Contrast this with the 5 to 10 billion miles that experts estimate it will take to train self-driving algorithms to an approved level of confidence in their safety and performance. The math tells us that road testing alone is not a practical solution.
So what is? The same Waymo report described how, in parallel with physical road testing, simulation is enabling the company to drive 25,000 virtual autonomous vehicles a total of 8 million miles per day. These simulations span everything from the sensors, chips and processors to the vehicle dynamics and the virtual environment in which the autonomous vehicle moves. This virtual testing rests on a foundation of data management and is brought together within a single simulation platform. Nor do these simulations apply only to development: in operation, autonomous vehicles need to keep learning, and running in-operation simulations will be key to that ongoing learning.
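A quick back-of-envelope calculation using the figures quoted above (3.5 million road miles over 8 years, the 5-billion-mile low end of the expert estimate, and 8 million simulated miles per day) makes the gap concrete. The Python sketch below is purely illustrative arithmetic, not anything taken from the Waymo report itself:

```python
# Back-of-envelope comparison of road testing vs. simulation,
# using the figures quoted in this article.

ROAD_MILES = 3.5e6        # real road miles amassed over 8 years
ROAD_YEARS = 8
TARGET_MILES = 5e9        # low end of the 5-10 billion mile estimate
SIM_MILES_PER_DAY = 8e6   # virtual miles per day across 25,000 vehicles

road_miles_per_year = ROAD_MILES / ROAD_YEARS
years_on_road = TARGET_MILES / road_miles_per_year
years_in_sim = TARGET_MILES / (SIM_MILES_PER_DAY * 365)

print(f"Road testing alone: ~{years_on_road:,.0f} years")      # ~11,429 years
print(f"Simulation at current rate: ~{years_in_sim:.1f} years") # ~1.7 years
```

At the current pace of physical testing, reaching even the low end of the estimate would take millennia; at the reported simulation rate it takes under two years, which is why simulation carries the bulk of the load.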
Of course, if you are talking simulation, you have to be talking about ANSYS. Combining our deep multiphysics capabilities with a simulation platform built for autonomous vehicles, ANSYS is at the forefront of not only supporting companies in the race to bring an autonomous vehicle to market, but also helping them ensure their products are safe for their occupants and those around them.
If you are interested in learning more about how ANSYS simulation solutions for autonomous vehicles are helping reduce the number of crashes, smashes and splashes, then read our white paper or contact us directly.