Teaching AI to see like a human by filling in the blanks

By Tim Sandle     May 19, 2019 in Science
A new type of artificial intelligence has been developed, which is said by its inventors to see like a human. This has been achieved by building on existing forms of AI and filling in the blanks that have previously limited full visual perception.
Researchers based at the University of Texas at Austin have successfully taught an artificial intelligence platform how to perform something that comes naturally to humans, but which no machine has yet managed to achieve: take a few quick glimpses around and use what is 'seen' to infer the whole environment.
This is a skill necessary for the future development of effective search-and-rescue robots, as well as robots required to undertake dangerous missions for civilian or military purposes (the project was partly sponsored by the U.S. Defense Advanced Research Projects Agency). The broader research goal in computer vision is to develop the algorithms and representations that will allow a computer to autonomously analyze visual information.
Explaining the concept further, lead researcher Kristen Grauman, a professor of computer science, states: "We want an agent that's generally equipped to enter environments and be ready for new perception tasks as they arise."
And of the specific artificial intelligence agent that she and her team have constructed, Grauman says: "It behaves in a way that's versatile and able to succeed at different tasks because it has learned useful patterns about the visual world."
In trials, the new artificial intelligence agent takes only a few "glimpses" of its surroundings, representing less than 20 percent of the full 360-degree view. From this, the platform can infer the rest of the environment. What makes the new agent so effective is that it does not simply take pictures in random directions; instead, after each glimpse the system chooses the next shot that it predicts will add the most new information about the whole scene. This level of performance was achieved after just one day of training. The current limitation is that the AI can do this only while stationary; the next phase will be to develop the technology to work with a moving robot.
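To make that glimpse-selection loop concrete, here is a minimal sketch in Python of how such an active observation completion policy might work. Every name in it (CompletionModel, capture_glimpse, the view counts) is an illustrative assumption rather than the authors' actual code; the real system uses learned deep networks where the sketch substitutes random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VIEWS = 12          # discretized look directions around the full 360-degree view
GLIMPSE_BUDGET = 2    # under 20 percent of the view, per the article

class CompletionModel:
    """Stand-in for a learned model that fills in the unseen views."""
    def predict_panorama(self, observed):
        # observed: dict {view_index: feature_vector}
        # A trained model would hallucinate the missing views; here we
        # return random "beliefs" plus a per-view uncertainty score.
        belief = rng.normal(size=(N_VIEWS, 8))
        uncertainty = rng.random(N_VIEWS)
        for i, features in observed.items():
            belief[i] = features      # observed views are known exactly
            uncertainty[i] = 0.0
        return belief, uncertainty

def capture_glimpse(view_index):
    """Stand-in for pointing the camera and extracting image features."""
    return rng.normal(size=8)

model = CompletionModel()
observed = {}
for _ in range(GLIMPSE_BUDGET):
    belief, uncertainty = model.predict_panorama(observed)
    # Key idea from the article: choose the next shot predicted to add
    # the most new information, i.e. the most uncertain unseen view.
    next_view = int(np.argmax(uncertainty))
    observed[next_view] = capture_glimpse(next_view)

final_scene, _ = model.predict_panorama(observed)
print(f"Inferred full scene from {len(observed)}/{N_VIEWS} glimpses")
```

The design choice the sketch highlights is the feedback loop: each glimpse updates the model's belief about the whole scene, and that belief in turn steers where the camera looks next, which is what separates this approach from sampling views at random.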
The new development is outlined in the journal Science Robotics, in a paper titled "Emergence of exploratory look-around behaviors through active observation completion."