Wednesday, October 12, 2016
Self-driving cars are nearing reality, thanks to advances in machine learning. But when it comes to "safety testing," machine learning is self-driving's Achilles' heel, according to safety experts.
Philip Koopman, a professor at Carnegie Mellon University, believes the biggest hole in the federal automated vehicles policy published in late September is the regulators' failure to grapple head-on with the fundamental difficulties of testing machine learning, a problem already well known to the scientific and engineering community.
"Mapping Machine Learning-based systems to traditional safety standards is challenging," Koopman said, "because the training data set does not conform to traditional expectations of software requirements and design."
In Koopman's opinion, the federal policy "should say that Machine Learning is an unusual, emerging technology." This acknowledgement would prompt regulators to ask more pointed questions about machine learning in their safety assessments.
"I'm not saying how to test the Machine Learning (ML) training data set," said Koopman. Rather, "I'm proposing that the DoT should demand from a carmaker or autonomous car platform vendor a written document that justifies why their ML-based autonomous vehicle is safe," he said.
Koopman has been involved in autonomous vehicle safety for 20 years. His experience ranges from participating in the Automated Highway System (AHS) program early in his career to working at the National Robotics Engineering Center on funded projects in autonomous vehicle safety and robotic system dependability.