Looking to the human brain to improve artificial intelligence

Posted Jun 18, 2017 by Tim Sandle
How can artificial intelligence be improved if the working model is the human brain and neural network? Moreover, if we do not fully understand how the brain works, will this hamper AI progress? A new insight could help.
Robots being developed at the Salford Institute for Dementia
University of Salford Press Office
The brain makes sense of the world by collecting information from the eyes, processing it via neuronal networks and then recognizing objects and places based on what is stored in memory. This process is not well understood, and the lack of clarity is one of the barriers to replicating it in computer systems, where the aim is to advance artificial intelligence.
A new study from the Salk Institute has looked at neurons in a part of the brain called V2. Visual area V2 is the second major area in the visual cortex, and the first region within the visual association area of the brain. It receives strong feedforward connections from V1 (the primary visual cortex, or visual area one) and sends strong feedback connections back to V1.
Studying these neurons as they respond to visual stimuli has given new insights into brain function. As Professor Tatyana Sharpee explains: “Understanding how the brain recognizes visual objects is important not only for the sake of vision, but also because it provides a window on how the brain works in general.”
She adds: “Most of our brain is composed of a repeated computational unit, called a cortical column. In vision especially we can control inputs to the brain with exquisite precision, which makes it possible to quantitatively analyze how signals are transformed in the brain.”
The researchers note that around one third of the brain is dedicated to visual processing. This enables a person to react to what they see and to translate objects, people and places into thoughts and actions.
This process begins with the detection of light and dark: “pixels” of light are transmitted along the optic nerves. The brain then matches these up with edges in the visual scene, a little like assembling a jigsaw puzzle. Exactly how this happens remains a mystery, although Professor Sharpee's team has made progress.
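The edge-detection step described above can be illustrated with a toy oriented-filter computation. This is a generic sketch, not the study's model; the filter design, image, and function names below are illustrative assumptions:

```python
import numpy as np

def oriented_edge_filter(theta, size=7):
    """An oriented derivative-of-Gaussian filter: responds to edges at angle theta.
    Illustrative only; not the filters used in the Salk study."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    # Rotate the coordinate frame by theta, then take an odd (derivative-like)
    # profile along the rotated axis under a Gaussian envelope
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    g = np.exp(-(xx**2 + yy**2) / 8.0)
    return xr * g

def filter_response(image, theta):
    """Mean squared response of the oriented filter slid over the image (valid convolution)."""
    k = oriented_edge_filter(theta)
    s = k.shape[0]
    h, w = image.shape
    out = np.zeros((h - s + 1, w - s + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + s, j:j + s] * k)
    return np.mean(out**2)

# A vertical edge: dark on the left half, bright on the right half
img = np.zeros((32, 32))
img[:, 16:] = 1.0

# The vertically oriented filter responds far more strongly than the
# orthogonal (horizontal) one
print(filter_response(img, 0.0) > filter_response(img, np.pi / 2))  # prints True
```

The same idea, repeated across many orientations, yields the population of edge responses that later visual areas such as V2 are thought to combine.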
For this, the researchers created algorithms to define how V2 neurons process images. This revealed a three-step process. First, V2 neurons combine edges that share a similar orientation. Second, a neuron's response is suppressed by edges oriented at 90 degrees to its preferred orientation, an effect the team terms “cross-orientation suppression.” Finally, the brain pieces the information from the similarly oriented neurons together into a scene: patterns that repeat fill in a region of space, which the brain interprets as texture.
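The suppression step can be sketched as a simple divisive interaction between orthogonal orientation channels. Everything below (the function, the energy vectors, the suppression constant) is an illustrative assumption, not the paper's actual model:

```python
import numpy as np

def v2_unit_response(edge_energies, preferred, n_orientations=8, suppression=2.0):
    """Sketch of cross-orientation suppression: a V2-like unit is driven by
    edge energy at its preferred orientation and divisively suppressed by
    energy at the orthogonal (90-degree) orientation. Illustrative only."""
    orthogonal = (preferred + n_orientations // 2) % n_orientations
    drive = edge_energies[preferred]
    return drive / (1.0 + suppression * edge_energies[orthogonal])

# A clean contour: energy concentrated at a single orientation
clean = np.zeros(8)
clean[0] = 1.0

# A plaid-like pattern: equal energy at the preferred and orthogonal orientations
plaid = np.zeros(8)
plaid[0] = 1.0
plaid[4] = 1.0

print(v2_unit_response(clean, 0))  # full response: the unit sees only its preferred edges
print(v2_unit_response(plaid, 0))  # reduced response: orthogonal edges suppress the unit
```

Divisive suppression of this kind is a common modeling motif in visual neuroscience; here it serves only to make the verbal description above concrete.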
The mathematical model has been named the Quadratic Convolutional model. The researchers hope it can be applied to artificial intelligence and machine learning systems, such as those found in autonomous vehicles.
The research has been published in Nature Communications, under the title “Cross-orientation suppression in visual area V2.”