
Google publishes new research into how neural networks 'think'

By James Walker     Mar 13, 2018 in Technology
Google has published research that provides new insights into how neural networks work. Although the use of neural networks is rapidly growing in AI, much remains unknown about how they're able to accurately recognise images and interpret speech.
Understanding interpretations
Establishing what happens inside the "brain" of a neural network has been an ongoing research aim for Google over the past few years. The company first described the internal workings of neural networks in a 2015 paper, explaining how the systems are able to create new images and recognise items.
The company has now followed up on its "Inceptionism" paper with a new study into the "Building Blocks of Interpretability." Over the past year, Google has acquired more understanding of the way in which neural networks interpret images. In a blog post, the company said it's now exploring how to understand neural networks in the context of the "bigger picture."
At a high level, neural networks process data by passing it between different layers. Each successive layer builds on the work of the previous one, so more complex interpretations are generated as the processing progresses. In computer vision scenarios, the first layers may recognise the basic shapes and textures of images before later ones start to identify fine details.
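As a toy illustration of that layering idea (the layer names, thresholds and feature values here are invented for the sketch, not drawn from Google's research), each stage can be seen as a transformation that builds a more abstract description from the previous stage's output:

```python
# Toy sketch of layered processing: each "layer" builds a more
# abstract interpretation from the output of the layer before it.
# All values and thresholds are illustrative only.

def layer1_textures(pixels):
    # Early layers respond to simple patterns such as edges.
    return ["edge" if p > 0.5 else "flat" for p in pixels]

def layer2_shapes(textures):
    # Middle layers combine simple patterns into shapes.
    return "curved" if textures.count("edge") >= 2 else "plain"

def layer3_details(shape):
    # Later layers pick out fine details from the shapes below.
    return {"floppy_ear_like": shape == "curved"}

pixels = [0.9, 0.7, 0.1, 0.8]  # fake "image" input
features = layer3_details(layer2_shapes(layer1_textures(pixels)))
print(features)  # {'floppy_ear_like': True}
```

The point of the sketch is only the composition: no single layer understands the image, but each successive one works with richer features than the last.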
Interpreting neural networks
While the general concept of layering is understood, Google has struggled to determine how neural networks actually pass data between the layers. The company has now developed ways to "stand in the middle" of a neural network as it operates, giving it visibility into the network's decisions as it recognises new inputs.
In one example cited by Google, the company explained how a neural network recognises sections of an image and then adjusts its label for the picture accordingly. If the AI has been trained to recognise "floppy ears," it may increase the probability that the label is "Labrador Retriever" or "Beagle" when shown a picture of a dog with floppy ears.
The neural network's training creates a series of detectors. These detectors, such as "floppy ears," are activated when their subject is found inside the input image. The output from each detector alters the overall probability that an image will be assigned a certain label. Google's new software project enables the company to spectate as these detectors are fired.
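The detector idea can be sketched with a toy model (the detector names, weights and labels below are invented for illustration and are not Google's actual numbers): each firing detector nudges the scores of the candidate labels, and a softmax turns the scores into probabilities.

```python
import math

# Toy sketch: detector activations shift per-label scores,
# and a softmax converts scores into probabilities.
# All weights and labels are invented for illustration.

WEIGHTS = {
    "floppy ears": {"Labrador Retriever": 2.0, "Beagle": 1.5, "Tabby Cat": -1.0},
    "whiskers":    {"Labrador Retriever": -0.5, "Beagle": -0.5, "Tabby Cat": 2.0},
}

def label_probabilities(activations):
    labels = ["Labrador Retriever", "Beagle", "Tabby Cat"]
    scores = {label: 0.0 for label in labels}
    for detector, strength in activations.items():
        for label, weight in WEIGHTS[detector].items():
            scores[label] += strength * weight  # each firing detector nudges scores
    total = sum(math.exp(s) for s in scores.values())
    return {label: math.exp(s) / total for label, s in scores.items()}

# A strongly firing "floppy ears" detector raises the dog labels:
probs = label_probabilities({"floppy ears": 1.0, "whiskers": 0.0})
```

With this toy model, "standing in the middle" of the network amounts to inspecting which detectors fired and how much each one contributed to the final label scores.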
"Scratches the surface"
While the research is a step forward in understanding neural networks, Google acknowledged that many unknowns remain. The company said it still "doesn't really know" the low-level details of how neural networks operate. Gaining a more complete understanding will be critical to unlocking their full potential.
"Neural networks are a powerful approach to machine learning, allowing computers to understand images, recognize speech, translate sentences, play Go, and much more. As much as we’re using neural networks in our technology at Google, there’s more to learn about how these systems accomplish these feats," said Google.
"For example, neural networks can learn how to recognize images far more accurately than any program we directly write, but we don’t really know how exactly they decide whether a dog in a picture is a Retriever, a Beagle, or a German Shepherd."
Google has open-sourced its Lucid neural network visualisation library to enable other researchers to investigate their own systems. Engagement with the community could unlock further insights that begin to explain how modern AI mechanisms "think" and learn. The company said its latest work "only scratches the surface" of what it believes could be learned by studying neural networks.