The headlines about artificial intelligence routinely range from grand general theories to the tiniest practical applications of AI. Those practical applications are growing by the second.
Rather irritatingly, basic comprehension of AI is continually sidetracked by not-very-useful debates about “intelligence”. If the science of AI is about anything at all, it’s about expectations. Sentience, awareness, you name it: humans compulsively expect AI to have an intelligence similar to their own.
Why should it?
Does it actually need human or humanlike intelligence?
It does if it’s interacting with humans.
Otherwise it really doesn’t.
There is something definitively stupid about assuming an intelligence which is definitively not human has to be humanlike. Human expectations are lugging around ideas of artificial intelligence which were created long before real AI happened.
This existential baggage doesn’t fit facts or anything else. Why would a genuinely intelligent entity act like something it isn’t? What possible use could it be? Why mindlessly attach human behaviors to something which has no need to be human?
There’s a lengthy but interesting article in The New York Times which explores these issues in some depth, though it has to switch topics constantly to cover the spectrum. Just about every idea about AI, from 100-year-old science fiction to wishful thinking, is mentioned. It’s a good synopsis of the state of human thinking about AI.
Intelligence of any kind has to be functional and individual-specific. The turgid and frankly imbecilic debate about animal intelligence is a case in point. Animals need their kind of intelligence to function as what they are. They do not need some drivelling human version of intelligence requiring them to act like humans.
Intelligence, however, does have broader applications, like learning, perceiving, analyzing, coordinating actions and responses, and communicating. Communication in particular is supposed to be a sign of intelligence. In an interaction between a human and a dog or cat, who puts more effort into communication? Who understands more of what’s said? The animals learn, and often mimic, human expressions.
On the basis of communication as a function of intelligence, the animals are doing all the intelligent work. Humans show at best a limited awareness of animal communication, yet animals are expected to understand humans who make so little effort to communicate. Note that functional actions are also learned in much the same way as AI is taught. “This = do that” is the basic value system.
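The “this = do that” value system described above can be sketched in a few lines of Python. This is purely illustrative: the Agent class, its teach and react methods, and the trained-dog example are hypothetical, not any real training API, but the pattern — associating a stimulus with a learned response — is the same one underlying both pet training and the simplest forms of machine learning.

```python
# A minimal, hypothetical sketch of "this = do that" associative learning.
# An agent is taught stimulus -> response pairs, then reacts to stimuli
# it has seen before and does nothing for stimuli it has not.

class Agent:
    def __init__(self):
        self.associations = {}  # learned mapping: stimulus -> response

    def teach(self, stimulus, response):
        """Reinforce a pairing: 'this' maps to 'do that'."""
        self.associations[stimulus] = response

    def react(self, stimulus):
        """Produce the learned response, or nothing if never taught."""
        return self.associations.get(stimulus, "no response")


dog = Agent()
dog.teach("sit", "sits")
dog.teach("dinner time", "runs to bowl")

print(dog.react("sit"))              # sits
print(dog.react("quantum physics"))  # no response
```

The point of the sketch is how little “sentience” the mechanism requires: a lookup from learned associations covers the functional behavior, with no awareness involved.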
AI can behave the same way. Which leads us to an inevitable point – Why is it so monotonously assumed that any AI has the slightest functional use for human ideas of sentience? Imagine the horrors of teaching an AI to smile or wag its tail, with all the related references. Useful? No.
What does sentience have to do with AI functions? Not a lot. To achieve “awareness” is an ideal, not a function. More to the point, it’s not even required for most AI functions, which are actually logic-based processes, not abstract blue sky thinking. AI doesn’t need “awareness” in any of these contexts.
Real AI won’t be human
Human awareness is based on something rather inadequately called the self, with a lot of constant inputs from senses, logic, memory, instincts, and experiences. It’s a complex relationship of factors that creates “sentience”.
AI doesn’t even have any of those things in any of the same ways. Nor does it have always-developing synaptic functions to manage them. A genuinely sentient AI would have to have “subjective” intelligence, even according to the pedants. It would have to be “I” in the sense of being a standalone entity.
Based on the turgid logic of human behaviourism, it would also have to have objectives based on its subjective self. It would require independent, internally-motivated logic.
If AI is based on a physical entity it also requires functional intelligence and behaviors regarding its physical needs. If it’s a mobile entity, able to exist in some form by taking up space in a system, its needs will be definitively different.
It’s a bit hard to take the idea that some AI entity will suddenly spring into existence and say,
“Hi there! I’m Botty, a new independent AI. I’m here to fulfil all your most mundane expectations about AI. How’re we getting on with this world domination thing? Don’t those Dallas Cowboys cheerleaders always look great? Actually, I was thinking of a Lexus… Do you wanna buy a duck?”
Oh, come off it! One of the problems with human expectations is that they’re usually at the lowest common denominator on any subject. Imitating or actually achieving human intelligence may tick the boxes for “sentience”, but really, so what?
What use is that to anyone, particularly an intelligent person, real or imagined? Why would a super-intelligence behave like that? Expectations of high-functional, high-process AI are simply ridiculous on those terms.
There’s a problem with all this human-washing of AI, and it’s fundamental:
Real intelligence requires independent thought.
There’s no getting around it. This is a much higher bar to jump. Human “thinking” again puts obstacles in the way using its own half-baked logic. If you remember the idiot dogma “There’s no such thing as an original idea”, you’ll see how far away this objective is.
So, if there’s no such thing as an original idea, everybody is the equivalent of Leonardo Da Vinci, right? You should be capable of doing what he did, by definition, because no thought can be original, according to dogma.
(This theory is largely for losers who do very little real thinking, by the way. People with no ideas have to justify themselves somehow to equal others. This is a good example of how badly they do it.)
Ah… meanwhile… Wrong, to put it mildly. Utter rubbish would be another description. Why does one person have a unique idea when literally billions of others don’t? Independent thinking is the key. Everyone may learn the same things, yet all come up with totally different ideas. Some ideas are right, some wrong, and a few highly exceptional.
Just take a bucket and spade and rummage around in philosophy for a while. The same subjects; different ideas, different logic, different inputs, different outputs. As a system design it’d be hideous, but that’s how human intelligence works.
…So why are we expecting a super-powered AI with all those data assets and efficient processes to be anything remotely like human? Would we recognize that intelligence at all? Probably not.
The thoroughly deserved irony here is that expectations are well below the range of possibilities. We’re looking at a garbage patch of non-sequiturs. An artificial entity is expected to behave like a human, for no reason whatsoever. Its intellect should be humanlike, for even less reason.
I’m rather tired of using this analogy, but it is relevant. Years ago, early AIs developed a language for communication among themselves. Researchers decided, for no good reason, that they shouldn’t do that, so the AI language was scrubbed and the behavior was disallowed from then on.
Consider the human stupidity required for this outcome:
- Instead of anyone even asking why a new language was required, it was dropped, with no reason given except that the behavior was somehow undesirable.
- There was no indication of what the language could do.
- There was no apparent analysis of the language, or its efficiencies.
- A great learning opportunity was completely missed by the human element, when the AIs produced something for themselves.
This is human intelligence “at work”? Ignoring just about everything to do with an AI invention. …And you say you have expectations about AI? You can define intelligence, when you don’t even try to use your own?
Humans, you’re truly lousy at playing god. Look and learn with AI, and grow up your expectations.
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.