Op-Ed: Super-intelligent AI ‘uncontrollable’ — Define the risks, idiots

By Paul Wallis     Jan 14, 2021 in Technology
Sydney - Research has indicated that a truly intelligent artificial intelligence would be “uncontrollable”, with the risks described as potentially catastrophic. That’s hardly good enough as analysis. This clickbait terror isn’t exactly impressive.
The research raises quite a few issues, real and non-existent. Some are interesting, but they seem to be built on pretty vague ideas and a lack of information about exactly what they’re trying to control. I’ll leave out the old sci-fi for now and focus on the practical issues.
The overview:
1. The super-intelligence may be beyond comprehension. (What a surprise.)
2. Creating rules for AI behaviour may not work if the range of situations isn’t defined.
3. A super-smart AI could operate on huge scales to achieve objectives.
4. The Turing problem – it’s logically impossible to predict all future behaviours of computer programs (see the sketch after this list).
5. A definitive containment algorithm would therefore be impossible.
6. Limiting the scope of AI would limit its capabilities to manage problems “beyond the scope of humans”.
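For what it’s worth, points 4 and 5 do rest on a real theorem. The argument behind them is the halting problem: no general algorithm can decide what an arbitrary program will eventually do, and the containment researchers effectively swap “halts” for “harms humans”. Here’s a minimal Python sketch of that diagonalization; the decider halts() is assumed to exist purely for the sake of contradiction and is not a real, implementable function:

```python
# Sketch of the halting-problem diagonalization behind the
# "no perfect containment algorithm" claim. `halts` is assumed,
# purely for contradiction, to be a perfect decider; no such
# function can actually be implemented.

def halts(func, arg) -> bool:
    """Hypothetical oracle: True iff func(arg) ever terminates."""
    raise NotImplementedError("no total halting decider exists")

def paradox(func) -> None:
    """Built to do the opposite of whatever the oracle predicts."""
    if halts(func, func):   # oracle says func(func) terminates...
        while True:         # ...so loop forever instead
            pass
    # oracle says func(func) loops forever, so return immediately

# Feed paradox to itself: whichever answer halts(paradox, paradox)
# gives is wrong, so no perfect `halts` can exist. The containment
# claim is this argument with "halts" replaced by "causes harm".
```

That much is solid mathematics. What it doesn’t license is the leap from “no single perfect checker” to “uncontrollable”, and that leap is the problem with the rest of it.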
This comes under the heading of “something might go wrong”. It’s appallingly open-ended. To describe this shoddy mix of barely thought-out issues and total failure to explore actual potential threats as a problem statement would be to flatter it. It’s a mix of possible issues, no more. Nor is the timid, not to say cowardly, assumption that the problem is insoluble any better.
To start with, in order:
1. Beyond whose comprehension? People who don’t even have a problem yet?
2. Creating rules may not be necessary. Why the assumption that technology which doesn’t even exist yet will be a problem? Obviously it could be, but so could some vintage bargain-basement algorithm. That’s not good enough as an excuse for this drivel.
3. A super-smart AI may be operating on logical principles, or is that too much to ask?
4. It’s more than likely that the nature of future programs is also indescribable. So something indescribable is a problem? Well done.
5. Are algorithms gods? No. Why would containment be based on one algorithm? Maybe millions or billions of algorithms, but not this blasé description.
6. AI is being developed specifically to solve mega-problems, not create them. Why the assumption that “beyond the scope of humans” is an inevitable result? Will future humans be uncomprehending idiots?
Old sci-fi, badly misread
The comparison with Asimov’s Laws of Robotics is a good example of the lack of thought in this clickbait information. There were three laws, to start with, and the connection with robotics should also remind people that we’re talking about machines, independent or otherwise. The laws complemented each other because they had to work together. Why wouldn’t there be billions of laws, if necessary, to manage some gigantic AI?
It may make sense to humans that artificial intelligence, created by humans for the benefit of humans, will just naturally destroy humanity. It’s unlikely to make sense to any actual intelligence.
What is the basis for “containing” something which isn’t even well defined? I realise that literacy isn’t exactly at plague levels these days, but how do you propose to convince anyone of the need to contain a non-existent threat?
The actual dangers of AI are based on stupid things
Any real reason for fear of automation is more rightly pointed at lousy economics than at the technologies. The next few generations will grow up in a polluted sewer where jobs, income and quality of life aren’t even considered, just as they aren’t now.
AI is already a sort of mega-hyped sacred cow, far beyond justification in fact. “Algorithms will do everything.” No, they won’t, because they can’t. Algorithms have functional limits. They’re tasks, not philosophies or techno-gods. This terror of functionality may have a place in politics, but none in science or technology.
The fact that you can’t even describe the basic problem doesn’t encourage a lot of trust in your solving it, either. You design machines to do specific tasks, not to be omnipotent gods of life on Earth. …Or had you forgotten that?
“One AI to rule them all” would be a truly lousy idea on all levels. Nobody needs a sort of AI Sauron, and nobody ever will. An integrated, pluralistic, multi-element AI is the more likely outcome, and the single-overlord scenario is easily preventable, if you’re so worried about it.
The idea of humanity cowering before some mad machine is too banal for words, except perhaps some appropriate scatology. Get your timid little heads in order, identify the issues, and figure out how to manage the basics.
Enough of this whimpering imbecility. The only problems you can’t solve are the ones you don’t try to solve. Right now, what’s needed is efficient and objective thinking, not idiot paranoia based on something that doesn’t even exist in theory, let alone practice, yet.
This opinion article was written by an independent writer. The opinions and views expressed herein are those of the author and are not necessarily intended to reflect those of DigitalJournal.com