
AI learns to run, with a wobble

By Tim Sandle     Jul 31, 2017 in Technology
A new research project from Google’s DeepMind artificial intelligence lab has pushed the boundaries of machine learning further, using artificial intelligence to teach a simulated humanoid to navigate a parkour course.
Can a machine be taught how to walk? Or at least how a human walks? This is not straightforward: although the ‘natural’ process of walking may be instinctive to humans, a machine must account for dozens of variables. Solving this problem in a way that relies upon artificial intelligence is key to the future building of ‘human-like’ robots. One of the biggest stumbling blocks in robotics comes when a robot encounters an unexpected object in its path and needs to navigate around it.
Google’s approach, Extreme Tech reports, is to use machine learning and to ‘reward’ the artificial intelligence when an obstacle is overcome. This approach relies upon ‘reinforcement learning’, and DeepMind has demonstrated that the technique can be successfully applied to locomotion. DeepMind’s goal is to combine techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms.
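The idea of ‘rewarding’ progress can be sketched in a few lines of Python. The function below is a toy illustration of a reward signal, not DeepMind’s actual reward function (the published work combines several terms); the name and signature are invented for this example.

```python
def progress_reward(x_before: float, x_after: float, dt: float, fell: bool = False) -> float:
    """Toy reinforcement-learning reward for a simulated walker.

    Rewards forward velocity along the course and penalises falling over,
    so an agent maximising this signal learns to keep moving ahead.
    """
    if fell:
        return -1.0  # discourage behaviours that end the attempt
    return (x_after - x_before) / dt  # forward velocity
```

For instance, a step that moves the walker forward scores its forward velocity, while a fall scores -1.0 regardless of distance covered. An agent that repeatedly tries actions and keeps those that raise this number will, over many attempts, tend towards stable forward motion.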
For this, a computer simulation was developed. The set-up was complex: a parkour course was established (parkour is a training discipline, developed from military obstacle course training, based on fluid movement). The aim was to make a simulated humanoid travel as far and as fast as possible while overcoming a range of hurdles, from tilting floors to walls.
Over time, it was found that the artificial intelligence learnt how to overcome obstacles, and as new challenges were imposed, the machine was able to traverse the new terrain. This required no additional programming; the machine based its movements on what it had already learned and on how that knowledge needed to be applied. One example was varying the ‘run up’ needed to jump over walls of different heights.
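The trial-and-error learning described here can be illustrated with the simplest textbook form of reinforcement learning, tabular Q-learning, on a toy one-dimensional ‘walk to the goal’ task. This is only a sketch of the general idea; DeepMind’s actual work uses far richer policy-gradient methods on a simulated physics engine, and every name and parameter below is invented for illustration.

```python
import random

def train_walker(track_length=6, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a toy 1-D 'walk to the goal' task.

    States are positions 0..track_length; actions step back (-1) or
    forward (+1). Reward is +1 only on reaching the goal -- a crude
    stand-in for rewarding forward progress through an obstacle course.
    """
    q = {(s, a): 0.0 for s in range(track_length + 1) for a in (-1, 1)}
    for _ in range(episodes):
        state = 0
        while state < track_length:
            # Epsilon-greedy: usually exploit the best known action,
            # occasionally explore at random (the 'trial and error' part).
            if random.random() < epsilon:
                action = random.choice((-1, 1))
            else:
                action = max((-1, 1), key=lambda a: q[(state, a)])
            nxt = min(max(state + action, 0), track_length)
            reward = 1.0 if nxt == track_length else 0.0
            future = 0.0 if nxt == track_length else max(q[(nxt, a)] for a in (-1, 1))
            # Nudge the value estimate towards reward plus discounted future value.
            q[(state, action)] += alpha * (reward + gamma * future - q[(state, action)])
            state = nxt
    return q

random.seed(0)
q = train_walker()
# The learned greedy policy steps forward (+1) from every position.
policy = [max((-1, 1), key=lambda a: q[(s, a)]) for s in range(6)]
```

No step of the loop is told how to reach the goal; the forward-stepping policy emerges purely from acting, observing rewards, and updating value estimates, which is the same principle, at vastly smaller scale, behind the simulated humanoid learning to clear obstacles.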
DeepMind has also examined non-human walkers, including a so-called “ant” walker, which has greater flexibility than a human walker, such as the ability to jump across chasms. These behaviours were likewise learnt by the machine through trial and error.
Overall, the research demonstrates that complex problems can be solved with very little human input. Such findings will be useful for the future development of ‘human-like’ androids and robots. The outcomes of the experiment have been written up in a paper titled “Emergence of Locomotion Behaviours in Rich Environments.”