An example of a task that humans find simple but robots find challenging is washing the dishes. Obstacles like these are holding back the development of robots for use in homes and business settings. A new development from Cold Spring Harbor Laboratory researchers, drawing on neuroscience, has pointed a way forward.
The work has been led by Professor Anthony Zador, who has spent much of his career examining individual neurons to understand how animal brains work. More recently he has applied some of this thinking to artificial neural networks, which are modelled on the networks of neurons in animal and human brains. Such systems “learn” to perform tasks by considering examples, generally without being programmed with task-specific rules.
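To make that idea concrete, here is a minimal sketch, not code from the study, of a tiny network that learns the XOR function purely from input/output examples; the architecture, learning rate, and other details are illustrative assumptions.

```python
# Illustrative only: a small two-layer network that learns XOR from examples
# via gradient descent, with no task-specific rules programmed in.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs and the desired outputs for XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights -- the network starts out knowing nothing.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge the weights to reduce the error on the examples.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

Nothing in the code says “this is XOR”; the rule emerges only from repeated exposure to the four examples.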
Professor Zador has been seeking to understand why we can develop advanced learning algorithms that enable AI systems to tackle increasingly complex problems, such as beating a human at chess, and yet we cannot create a robot that can wash the dishes.
According to Professor Zador: “The things that we find hard, like abstract thought or chess playing, are actually not the hard thing for machines. The things that we find easy, like interacting with the physical world, that’s what’s hard. The reason that we think it’s easy is that we had half a billion years of evolution that has wired up our circuits so that we do it effortlessly.”
He argues that a different approach to designing artificial neural networks is needed: biological neural networks sculpted by evolution should provide the basis for redesigning how machines learn. This means moving away from architectures that are too generalized. A squirrel is genetically predisposed to jump from tree to tree, whereas a mouse is not; on this basis, the researcher says we should think more about the intended task of a machine and then build the scaffolding that supports its artificial neural network along more task-specific lines.
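One rough way to picture such scaffolding in code, offered here as an illustration rather than the specific proposal in the paper, is to compare a generic fully connected layer, which must learn all structure from data, with a convolutional layer whose built-in weight sharing already assumes that local visual patterns are useful wherever they appear. The layer sizes below are illustrative assumptions.

```python
# Illustrative contrast between a "blank slate" layer and one with built-in
# structure (an inductive bias), in the spirit of wiring in task scaffolding.

image_size = 32 * 32          # a small 32x32 greyscale image, flattened
hidden_units = 1024

# Fully connected layer: every pixel connects to every hidden unit.
# It assumes nothing about images, so all structure must be learned.
dense_params = image_size * hidden_units

# Convolutional layer: small 3x3 filters slid across the image.
# Weight sharing hard-wires the assumption that the same local pattern
# matters wherever it appears -- far fewer parameters to learn.
filters = 16
conv_params = filters * 3 * 3

print(f"fully connected parameters: {dense_params:,}")   # 1,048,576
print(f"convolutional parameters:   {conv_params:,}")    # 144
```

The point of the contrast is that the convolutional layer's structure plays a role loosely analogous to the squirrel's innate wiring: much of the problem is solved before any learning happens.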
The new research has been published in the journal Nature Communications, where the study is titled “A critique of pure learning and what artificial neural networks can learn from animal brains.”