MIT and Microsoft make self-driving safer

The solution comes from scientists based at the Massachusetts Institute of Technology (MIT) and technologists working for Microsoft. It rests on a new predictive model that assesses when autonomous systems have “learned” something in training that does not really reflect what happens in the real world.

The risk here is that these ‘learned’ behaviours can lead to dangerous and expensive problems when an autonomous vehicle has to assess and react to real-world scenarios. The biggest issue for autonomous cars is how they react when a human-driven car does something unexpected that was never anticipated in training.

An example cited is getting a self-driving car to distinguish between large white cars or vans and white ambulances with red flashing lights. In this scenario, the self-driving car should slow down to let an ambulance pass, but should not do the same when a van carrying building materials is equally close by. A similar misinterpretation may occur with fire trucks.

By using the model to understand the weaknesses in how autonomous machine-learning systems are trained, the researchers are confident they can reduce the rate of errors and the mismatch between practice sessions and life on the road.

The model applies not only to autonomous vehicles; it can also be used with other types of robots, following the same principle of identifying and correcting ‘blind spots’.

The new predictive model, a form of reinforcement learning, is built on human input: people watch the system and flag when an autonomous vehicle looks like it is about to make a mistake. By combining these human-observed errors with the original training data, the model provides greater assurance that the autonomous technology will avoid making dangerous errors.
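To make the approach concrete, here is a minimal sketch in Python of how human-flagged errors might be combined with the original training data to learn where a system's blind spots lie. Everything in it is an assumption for illustration: the invented two-number encoding of a driving situation, the synthetic data, and the `blind_spot_model` name. It is not the researchers' actual method or code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical feature encoding of a driving situation:
# [vehicle_size, flashing_lights] -- both invented for this sketch.
sim_states = rng.random((200, 2))
sim_states[:, 1] = 0.0  # the training simulator never showed flashing lights

# Real-world monitoring: a human flags an error (label 1) whenever the
# system fails to yield to a vehicle with flashing lights.
real_states = rng.random((200, 2))
real_states[:, 1] = rng.integers(0, 2, size=200)
human_flagged = (real_states[:, 1] > 0.5).astype(int)

# Combine the original simulator data (assumed acceptable, label 0) with
# the human-labelled real-world data, and learn where errors concentrate.
X = np.vstack([sim_states, real_states])
y = np.concatenate([np.zeros(len(sim_states)), human_flagged])
blind_spot_model = LogisticRegression().fit(X, y)

# At run time, a situation predicted to be a blind spot triggers a
# conservative fallback (e.g. slowing down) instead of the trained policy.
ambulance_like = np.array([[0.9, 1.0]])  # large vehicle, lights on
p_error = blind_spot_model.predict_proba(ambulance_like)[0, 1]
if p_error > 0.5:
    print(f"Predicted blind spot (p={p_error:.2f}): slow down and defer.")
```

A production system would face far richer state features and conflicting human labels than a simple classifier like this can handle; the sketch only shows the basic flow of merging simulation data with human-flagged mistakes.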

According to MIT researcher Ramya Ramakrishnan: “The model helps autonomous systems better know what they don’t know.”

Ramakrishnan adds: “Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author, with interests in history, politics and current affairs.
