
MIT and Microsoft make self-driving safer

By Tim Sandle, Feb 1, 2019 in Technology
Boston - As electric, autonomous vehicles continue to advance, the causes of errors are being addressed. One area that has led to some test-drive accidents is 'blind spots' in autonomous technology. A new solution aims to overcome this.
The solution comes from scientists based at the Massachusetts Institute of Technology (MIT) and technologists working for Microsoft. It is based on a new predictive model that assesses when an autonomous system has "learned" something in training that does not reflect what actually happens in the real world.
The risk is that these 'learned' behaviors can lead to dangerous and expensive problems when an autonomous vehicle has to assess and react to real-world scenarios. The biggest issue for autonomous cars is how they respond when a human-driven car does something unexpected that was never encountered in training.
An example cited is getting a self-driving car to distinguish between large white cars or vans and white ambulances with flashing red lights. In this scenario, the self-driving car should slow down to let an ambulance pass, but not do the same when a van carrying building materials is equally close by. A similar misinterpretation may occur with fire trucks.
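
To make the failure mode concrete, here is a hypothetical sketch (in Python, not code from the MIT/Microsoft research) of how such a blind spot can arise: if the training-time state representation discards the flashing-lights cue, an ambulance and a white van collapse into the same state, so any policy learned over that representation must pick one action for both. The feature names and data structure are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Vehicle:
        color: str
        size: str            # e.g. "small" or "large"
        flashing_lights: bool

    def training_state(v: Vehicle) -> tuple:
        # Hypothetical training-time representation: it keeps only
        # color and size, silently discarding the flashing-lights cue.
        return (v.color, v.size)

    ambulance = Vehicle(color="white", size="large", flashing_lights=True)
    white_van = Vehicle(color="white", size="large", flashing_lights=False)

    # Both vehicles map to the identical state, so a policy trained on
    # this representation cannot treat them differently -- a blind spot.
    assert training_state(ambulance) == training_state(white_van)
    print(training_state(ambulance))  # ('white', 'large')
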
By using the model to understand the weaknesses in how autonomous machine-learning systems are trained, the researchers are confident they can reduce the rate of errors and the mismatch between training sessions and life on the road.
The model applies not only to autonomous vehicles; it can also be applied to other types of robots, using the same principles of identifying and correcting 'blind spots'.
The new predictive model, a form of reinforcement learning, relies on human input: people flag the moments when an autonomous vehicle looks like it is about to make a mistake. By combining these human-observed errors with the original training data, the model provides greater assurance that the autonomous technology will avoid making dangerous errors.
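
A minimal sketch of that combination follows, assuming a simple state-feature layout and an off-the-shelf classifier; neither detail is from the published work. States visited in training (where no error was observed) are pooled with states a human flagged as near-mistakes, and a classifier then estimates how likely a new state is to be a blind spot.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row is a state feature vector, here a hypothetical
    # [is_white, is_large, lights_flashing] encoding.
    sim_states = np.array([
        [1, 1, 0],   # white van seen in simulation: system acted correctly
        [0, 0, 0],   # small dark car: system acted correctly
    ])
    sim_labels = np.zeros(len(sim_states))   # 0 = no error observed in training

    human_flagged = np.array([
        [1, 1, 1],   # large white vehicle with flashing lights:
                     # human flagged an imminent mistake
    ])
    human_labels = np.ones(len(human_flagged))  # 1 = human-observed near-mistake

    # Combine the original training data with the human feedback.
    X = np.vstack([sim_states, human_flagged])
    y = np.concatenate([sim_labels, human_labels])

    blind_spot_model = LogisticRegression().fit(X, y)

    # At run time, a high predicted probability signals "this may be a
    # blind spot: take a cautious action or defer to a human."
    new_state = np.array([[1, 1, 1]])  # ambulance-like state
    print(blind_spot_model.predict_proba(new_state)[0, 1])
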
According to MIT researcher Ramya Ramakrishnan: "The model helps autonomous systems better know what they don't know."
Ramakrishnan adds: "Many times, when these systems are deployed, their trained simulations don't match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans."