
Op-Ed: Analogies teach computers to think like humans — Sort of

The move to analogies as a teaching method is a major step in “cognitive computing,” which allows computers to learn and even reprogram themselves. Cognitive computing is the big evolutionary step toward true artificial intelligence.
The new research is being carried out at Northwestern University, using a new approach called the Structure-Mapping Engine (SME), which is capable of analogical problem solving, including “moral dilemmas”.
So far, the results are pretty straightforward — computers retrieve memories to find analogous situations — Case A resembles Example C, but not Cases B or D, and so on. This type of very basic but very important learning is roughly kindergarten level for humans. Human brains automatically see relationships, sometimes very complex ones, as a result of this type of learning in early childhood. A huge amount of human thought, in fact, is built on understanding these relational interactions.
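That retrieval step can be sketched in a few lines. This is a toy illustration, not Northwestern’s actual SME code; the cases, relation names, and the Jaccard-style scoring below are all invented for the example.

```python
# Toy sketch of analogical retrieval: find the stored case whose relational
# structure best matches a new "probe" case. Invented example, not SME code.

def relation_signature(case):
    """The set of relation names in a case, ignoring the entities involved."""
    return {relation for relation, _, _ in case}

def structural_similarity(case_a, case_b):
    """Overlap of relation names (Jaccard index): 0.0 = disjoint, 1.0 = identical."""
    a, b = relation_signature(case_a), relation_signature(case_b)
    return len(a & b) / len(a | b)

# Memory of prior cases, each a set of (relation, arg1, arg2) facts.
memory = {
    "Case B": {("inside", "bird", "cage"), ("owns", "person", "bird")},
    "Example C": {("orbits", "planet", "sun"), ("attracts", "sun", "planet")},
    "Case D": {("above", "clock", "door")},
}

# New situation: its entities differ from everything in memory, but its
# relational structure resembles Example C, so retrieval picks that case.
probe = {("orbits", "electron", "nucleus"), ("attracts", "nucleus", "electron")}
best = max(memory, key=lambda name: structural_similarity(probe, memory[name]))
```

Note that the scoring deliberately ignores the entities themselves: the probe shares no objects with Example C, only relations, which is the point of retrieving by structure rather than surface features.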
The extra theoretical dimension, as if the practical implications of this analogy-based approach weren’t enough, is psychology.
From Science Daily:
The theory underlying the model is psychologist Dedre Gentner’s structure-mapping theory of analogy and similarity, which has been used to explain and predict many psychology phenomena. Structure-mapping argues that analogy and similarity involve comparisons between relational representations, which connect entities and ideas, for example, that a clock is above a door or that pressure differences cause water to flow.
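The comparison between relational representations that structure-mapping theory describes can be illustrated with a toy alignment, here using the classic solar-system/atom analogy. The greedy matcher below is a deliberately simplified stand-in for SME’s actual algorithm, and every name in it is invented for the example.

```python
def align(base, target):
    """Greedily pair entities across two cases by matching same-named relations,
    keeping entity correspondences consistent with matches already made."""
    mapping = {}
    for relation, b1, b2 in sorted(base):
        for t_relation, t1, t2 in sorted(target):
            consistent = mapping.get(b1, t1) == t1 and mapping.get(b2, t2) == t2
            if relation == t_relation and consistent:
                mapping[b1], mapping[b2] = t1, t2
                break
    return mapping

solar_system = {("orbits", "planet", "sun"), ("more_massive", "sun", "planet")}
atom = {("orbits", "electron", "nucleus"), ("more_massive", "nucleus", "electron")}

# The alignment maps sun -> nucleus and planet -> electron, and the same
# mapping holds across both relations -- the "structural" part of the theory.
correspondences = align(solar_system, atom)
```

The consistency check is what separates structural analogy from loose resemblance: once “sun” corresponds to “nucleus,” every other relation has to respect that pairing.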
Gentner may be being a bit modest regarding the importance of this type of structure mapping. Consider for a moment how many similarities, entities and ideas may be involved in something as simple as making a piece of toast.
A human being can make a piece of toast exactly the way they like it, with a simple connection of bits of information. A computer needs to define a piece of toast, then construct variables to define the rest of the process according to preferences, which are also analogy-related.
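To make the toast point concrete, here is how explicit a machine’s representation has to be. Every field and constant below is invented for illustration; real appliance logic would be far more involved.

```python
from dataclasses import dataclass

# What a human calls "the way I like it" becomes explicit, named variables.

@dataclass
class ToastPreference:
    browning: int     # 1 (barely warm) to 5 (charcoal)
    butter: bool
    bread_type: str

def toaster_seconds(pref: ToastPreference) -> int:
    """Turn a browning preference into a toasting time (made-up constants)."""
    return 30 + pref.browning * 25

mine = ToastPreference(browning=3, butter=True, bread_type="sourdough")
seconds = toaster_seconds(mine)  # 105 seconds
```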
Preferences make up the core working logic of just about all human decisions. Whether the preferences are rational or not is also an analogy-related issue. Can a computer arrive at an irrational decision? In this case, it could. It’d be able to produce the same logic as irrational human thought, in theory.
It’s not at all impossible that a computer which is also a cordon bleu toast maker and moral sage, rational or otherwise, could be developed — the only real question is how much structuring and mapping is required.
Issues with operational analogy-based computer logic
The use of analogy-based logic is where this new research goes straight off the map. In terms of evaluating situations quickly and efficiently, it’s a big plus. In terms of the risk that the available analogies are inappropriate or useless, there’s a very large hole to be filled somehow.
Human structural mapping isn’t very efficient with new information or unfamiliar situations, for example. The brain will run through its store of useful relationship analogies ASAP, but more learning is likely to be required. Humans learn, willingly or by necessity, almost automatically. A computer would have to develop a cognition strategy, and put it in place to manage the new information. It may even have to figure out how to analyze a situation, not just process information.
Setting a performance bar for this process includes asking whether a structure-mapping engine has enough cognitive capacity to work in real time. Can a computer see a problem starting, recognize it, and fix it? Can it use analogies, or would dull ol’ digital code responses be better? Or both?
Humans often don’t use analogies well. Humans, in fact, are the best possible analogy for the process of misusing analogies. They often try to find comparative relationships that don’t exist, or enforce the wrong analogies in problem solving. They meet something they’ve never encountered before, which has nothing at all to do with any previous experience, and instantly misread it, using analogies. They assume that structural information is correct when it isn’t even remotely workable.
(OK, if all you have is lousy analogies, that’s understandable. That said, it’s also a road map to the wrong conclusions on just about any subject. There is a risk that structure mapping could follow human bad habits in these regards.)
Arguably, one of the best examples of human use of analogies and relationships is the default human response of calling in other humans: finding people who do know how to manage the structures and logic required, or who can work together on problems.
Achieving that level of ancient practicality is asking a lot of any cognitive computing research program at its outset. SME works, to the extent it has been possible to test it. The practical problems of putting structure mapping to work, particularly on complex decisions, involve a much higher degree of difficulty.
Talking about analogies: cognitive computing is Magellan’s voyage in code form. Nobody knew what to expect when Magellan set sail. Cognition, the term for intelligent thought processes, is a much bigger subject than the Pacific Ocean.
Expect the impossible, the unsuspected, and the truly bizarre — and keep your own head working on seeing the relationships between structure mapping and how humans think. This will definitely not get dull.

Written By

Editor-at-Large based in Sydney, Australia.
