Robots can now see into the near future

Posted Dec 15, 2017 by Tim Sandle
A leap forward in robotic learning has taken place. New technology enables robots to imagine the future consequences of their actions, allowing them to work out how to manipulate objects they have never encountered before.
Shimon, the robotic marimba player, can listen to, understand, collaborate with, and surprise his human counterparts.
Georgia Institute of Technology
The new technology comes from computer scientists based at the University of California, Berkeley. By taking machine learning principles and building specialized robotic learning systems on top of them, the researchers have given robots a degree of precognition. This capability could one day help advance self-driving cars and lead to more intelligent robotic assistants for business operations.
Currently, the technology has been tested through an initial prototype that focuses on learning simple manual skills entirely from autonomous play. This is the foundation for more advanced robotics applications.
The new technology is called visual foresight. With it, a robot can predict what its camera will see if it performs a particular sequence of movements. The current technology only allows a robot to look a few seconds into the future. While improvements will follow, this relatively brief window is sufficient for a robot to work out how to manoeuvre specific objects on a surface without disturbing other objects in the same area.
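The article does not detail how predictions are turned into decisions. Below is a minimal illustrative sketch, not the researchers' actual method: candidate action sequences are scored by rolling a frame-prediction model forward and keeping the sequence whose final predicted frame best matches a goal image. The `predict_frames`, `plan`, and toy `model` names here are hypothetical stand-ins for a learned deep video-prediction model.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_frames(frame, actions, model):
    """Roll a (placeholder) frame-prediction model forward: given the
    current camera frame and a short action sequence, return the
    predicted future frames. `model` is a stand-in function."""
    frames = []
    for action in actions:
        frame = model(frame, action)
        frames.append(frame)
    return frames

def plan(frame, goal_frame, model, n_candidates=64, horizon=5, action_dim=2):
    """Sample random action sequences and keep the one whose final
    predicted frame is closest (pixel-wise) to the goal frame."""
    best_cost, best_seq = np.inf, None
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        final = predict_frames(frame, seq, model)[-1]
        cost = np.mean((final - goal_frame) ** 2)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost
```

The real system replaces the toy model with a learned video-prediction network and uses a more sophisticated sampling optimizer, but the "predict, score, pick the best" loop is the same shape.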
This happens by first allowing the robot free play, moving any object on a surface that it opts to touch. Once the play phase is complete, the robot consolidates what it has learned into a predictive model of its immediate world.
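The play phase above can be sketched as self-supervised data collection: the robot pokes around at random and logs (observation, action, next observation) triples, from which a predictive model is fitted. The functions and the linear least-squares model below are simplified hypothetical stand-ins, not the actual Berkeley pipeline, which learns from camera images.

```python
import numpy as np

rng = np.random.default_rng(1)

def play_step(state, action):
    """Toy stand-in for the real world: the pushed object's position
    moves with the applied action, plus a little noise."""
    return state + action + rng.normal(scale=0.01, size=state.shape)

def collect_play_data(n_steps=1000, action_dim=2):
    """Random free play, logging (observation, action, next observation)
    triples - the only supervision self-supervised learning needs."""
    state = np.zeros(action_dim)
    transitions = []
    for _ in range(n_steps):
        action = rng.uniform(-0.1, 0.1, size=action_dim)
        next_state = play_step(state, action)
        transitions.append((state, action, next_state))
        state = next_state
    return transitions

def fit_predictor(transitions):
    """Fit the simplest possible predictive model, delta = action @ W,
    by least squares over the logged play data."""
    A = np.array([a for _, a, _ in transitions])
    D = np.array([ns - s for s, _, ns in transitions])
    W, *_ = np.linalg.lstsq(A, D, rcond=None)
    return W
```

With enough random play, the fitted model recovers how actions move the object, which is exactly the "predictive model of its immediate world" the article describes.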
The development narrative is taken up by principal researcher Professor Sergey Levine: “In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it.”
The scientist adds: “This can enable intelligent planning of highly flexible skills in complex real-world situations.”
To test the technology, the UC Berkeley team built Vestri, a robot that learns from its own actions. The idea was for the robot to learn tasks the way a baby does: by playing with objects and then imagining how to get the task done. The process of training the Vestri robot is shown in the video below.
The research was presented at the Neural Information Processing Systems (NIPS) conference, held in Long Beach, California, U.S., in December 2017.