Looking to the human brain to improve artificial intelligence

The brain makes sense of the world by collecting information from the eyes, processing it via neuronal networks and then recognizing objects and places based on what is stored in memory. This process is not well understood, and the lack of clarity is one of the barriers to replicating it in computers with the aim of advancing artificial intelligence.

A new study from the Salk Institute has looked at neurons in a part of the brain called V2. Visual area V2 is the second major area in the visual cortex and the first region within the brain’s visual association area. It receives strong feedforward connections from the primary visual cortex (V1), which itself receives input from the thalamus, and it sends strong feedback connections back to V1.

Studying these neurons as they respond to visual stimuli has given new insights into brain function. As Professor Tatyana Sharpee explains: “Understanding how the brain recognizes visual objects is important not only for the sake of vision, but also because it provides a window on how the brain works in general.”

She adds: “Most of our brain is composed of a repeated computational unit, called a cortical column. In vision especially we can control inputs to the brain with exquisite precision, which makes it possible to quantitatively analyze how signals are transformed in the brain.”

The researchers found that about one third of the brain is dedicated to visual processing. This enables a person to react to what they see and to translate objects, people and places into thoughts and actions.

This process begins with variations of light and dark. Here, pixels of light are transmitted along nerves. The brain then matches these up with edges in the visual scene, a little like assembling a jigsaw puzzle. How this happens exactly is a mystery, although Professor Sharpee’s team has made progress.
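
To make that edge-matching step concrete, the sketch below shows how a patch of light and dark pixels can be compared against an oriented edge filter, the kind of first-stage computation the article attributes to the visual cortex. This is an illustrative example only, not code from the study; the filter shape and size, and the oriented_edge_filter and edge_response helpers, are assumptions made for the sketch.

```python
# Minimal sketch (not the study's code): matching patches of light and dark
# against an oriented edge detector, the first-stage step described above.
import numpy as np

def oriented_edge_filter(size=7, theta=0.0):
    """Gabor-like filter that responds to light/dark edges at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate grid so the filter is tuned to the chosen orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * (half / 2) ** 2))  # local window
    carrier = np.sin(2 * np.pi * xr / size)                    # light-to-dark transition
    return envelope * carrier

def edge_response(patch, theta):
    """Dot product of an image patch with an oriented filter: large when the
    patch contains an edge at that orientation."""
    f = oriented_edge_filter(size=patch.shape[0], theta=theta)
    return float(np.sum(patch * f))

# Example: a vertical light/dark boundary drives the vertically tuned filter
# strongly, while the filter rotated by 90 degrees barely responds.
patch = np.zeros((7, 7))
patch[:, 4:] = 1.0
print(edge_response(patch, theta=0.0), edge_response(patch, theta=np.pi / 2))
```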

For this, the researchers created algorithms to define how V2 neurons process images. This was revealed to be a three-fold process. It starts by combining all edges of a similar orientation. Next, responses are suppressed when edges are rotated 90 degrees from that orientation, resulting in what the team has termed “cross-orientation suppression.” Finally, combining the similarly oriented edges, the brain pieces together the information to create a scene. Here, patterns that are repeated lead to the space being filled in, which the brain interprets as texture.
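
As a rough illustration of the three steps just described, the toy model below pools edge responses of one orientation and subtracts a penalty driven by edges rotated 90 degrees, a simple stand-in for cross-orientation suppression. It is not the Quadratic Convolutional model from the paper; the orientation channels, the suppression_weight parameter and the v2_like_response function are assumptions made for this sketch.

```python
# Illustrative toy (not the paper's model): a V2-like unit that pools edges of
# its preferred orientation and is suppressed by edges at 90 degrees to it.
import numpy as np

def v2_like_response(edge_maps, preferred=0, suppression_weight=0.5):
    """
    edge_maps: dict mapping orientation in degrees (0, 45, 90, 135) to a 2D array
               of V1-style edge responses across the image.
    Returns the pooled drive at the preferred orientation minus a quadratic
    penalty from the orthogonal orientation (cross-orientation suppression).
    """
    orthogonal = (preferred + 90) % 180
    # Step 1: combine (pool) all edges with a similar orientation.
    drive = np.sum(edge_maps[preferred] ** 2)
    # Step 2: suppression from edges rotated by 90 degrees.
    suppression = suppression_weight * np.sum(edge_maps[orthogonal] ** 2)
    # Step 3: many similar edges repeated across the map raise the pooled drive,
    # which downstream processing can read out as a uniform texture.
    return max(drive - suppression, 0.0)

# Example: a patch full of horizontal edges drives the horizontal-preferring unit,
# while adding vertical edges suppresses it.
horizontal_only = {0: np.ones((8, 8)), 45: np.zeros((8, 8)),
                   90: np.zeros((8, 8)), 135: np.zeros((8, 8))}
plaid = {0: np.ones((8, 8)), 45: np.zeros((8, 8)),
         90: np.ones((8, 8)), 135: np.zeros((8, 8))}
print(v2_like_response(horizontal_only, preferred=0))  # strong response
print(v2_like_response(plaid, preferred=0))            # suppressed response
```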

The mathematical model has been called the Quadratic Convolutional model. The researchers hope it can be used in artificial intelligence systems and machine learning, such as the systems found in autonomous vehicles.

The research has been published in Nature Communications, under the title “Cross-orientation suppression in visual area V2.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
