All technologies are abused. To this day, one of the most dangerous weapons on Earth is a box of matches. The hype, endless moldy terminology, and mystification of AI aren’t helping at all. The problem is that AI is perfectly capable of being a superweapon and nobody’s looking at slamming on the brakes.
There was a rather gruesome article in VOX in March this year which spelled out some of the risks. AI could discover both super-drugs and super-weapons. The same applies to bioweapons and similar “learnable” threats.
There are multiple issues here:
The mythologization of AI. Check out this dripping tap of news searches. Consider headlines like “The future of creativity, brought to you by artificial intelligence”. Sure, it is. I’m quite sure AI will be able to rehash plot lines and digitally tweak images, like every hack writer and “artist” on the planet, but really? What’s this based on? Not a lot, like most of the non-tech hype.
Issues like “alignment”. This is the theory that AI should perform in accordance with human values and goals. What, all 8 billion of them? One look will tell anyone that the terms of reference are so vague that even defining the problem is blurred from the start. “Catastrophic misbehavior” is one of the predictions.
Wonder where else you’d find catastrophic misbehavior? Every incompetent moron on Earth, perhaps? This isn’t even a description of artificial intelligence issues, just possible dysfunction.
AI takeover. This is the old machines ruling the world thing. You can actually smell the intense creativity, can’t you? The actual issue now is that AI can (and does) at least rule a lot of the world in functional terms.
What if something goes wrong? Solutions? None. Questions? None. It’s a threat, so there, you silly people. The tiresome assumption of superior knowledge strikes again, and again… You’re not very good at playing gods. Kindly remember that.
…What if AI doesn’t even think about “ruling” anything? That’s a human idea, a sort of 5000-year-old masturbation aid. It’s not a particularly useful idea, either. This useless babble is playing to the grandstands, not the problems, as usual.
These are just basic issues. I hope that’s enough of them to display the sheer turgidity of thought, or rather the lack of it, on this subject.
A few points:
- AI is neither good nor bad. How could it be either?
- The famous Laws of Robotics can be written in or written out. No-brainer.
- Any idiot criminal or megalomaniac can access AI. Who’s the villain?
…And the problem is AI? No. It’s the total inability to even try to manage foreseeable risks, let alone the practical aspects. All this squeaking about the risks of AI has so far notably failed to deliver any mention of failsafes, the most basic of all system issues.
Say a nutcase AI is running your power systems. You have backups. You may even have tested this thing, you saints, you, and you thought it was OK. It wasn’t. You blacked out North America (everyone was doing it) and had no options, because your system plan lacked something, or more likely everything.
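The failsafe idea is simple enough to sketch in a few lines. This is a hypothetical illustration, not any real grid API: the AI controller’s proposed output is accepted only if an independent bounds check passes; otherwise a known-safe fallback takes over. Every name here is made up for the example.

```python
# Minimal failsafe sketch: never act on an AI controller's output directly.
# All names and numbers here are hypothetical illustrations.

SAFE_LOAD_LIMIT = 0.9  # fraction of rated capacity we will ever allow

def safe_dispatch(ai_setpoint: float, fallback_setpoint: float) -> float:
    """Accept the AI's proposed setpoint only if an independent
    bounds check passes; otherwise revert to a known-safe fallback."""
    if 0.0 <= ai_setpoint <= SAFE_LOAD_LIMIT:
        return ai_setpoint
    # Failsafe path: this branch is the "option" the blacked-out
    # system plan above didn't have.
    return fallback_setpoint

print(safe_dispatch(0.75, 0.5))  # in bounds: AI value used
print(safe_dispatch(1.4, 0.5))   # out of bounds: fallback used
```

The point isn’t the five lines of Python; it’s that the out-of-bounds branch has to exist in the system design before anything goes wrong.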
The sheer monotony of this timid thinking isn’t getting anywhere at all. Possible AI risks were being raised as far back as IBM’s early machine learning research, when the technology first started delivering results.
As usual, digression followed digression. Eventually, the issues became exercises in dissembling, not management. They remain in this slapdash “big highly selective picture” form with no actual management in place.
Can we get the rhetoric out of the way and get some actual engineers in place looking at the issues? …Or are we too busy being brilliant, again?
Real risks – For absolute idiots.
One of the predicted scenarios is that AI simply follows a train of logic to a catastrophe. Pharmaceutical research, in particular, is a major AI function. It’s not at all impossible that a formula created by AI could be toxic, simply because of the good-or-bad scenario above.
AI designs a new painkiller which just happens to contain a nice dose of a cyanide equivalent or some other major toxin. That’s not too likely, but it’s not impossible, either. Read the VOX link above. That’s what it’s there for. That’s exactly what the good-or-bad scenario is about. Meanwhile – what are the safeguards? Press releases? Heartfelt interviews on FOX News? You’d think so.
In practice, the better option would be to have the new formula reviewed by another independent dedicated AI, or even (shudder) a human who knows what they’re looking at.
(Yes, someone’s finally found a use for pharmaceutical qualifications! Huzzah!)
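That independent-review gate is, again, basic system design, and it sketches in a few lines. The two “screens” below are hypothetical stand-ins, not real toxicology models: the point is only that release requires a second, separately built check to clear the formula too.

```python
# Sketch of an independent-review gate: a candidate formula is released
# only if a second, separately developed screen also clears it.
# Both screens are hypothetical stand-ins for real review processes.

KNOWN_TOXINS = {"cyanide", "arsenic"}

def designer_screen(compound: dict) -> bool:
    # The designing system's own check -- which may share its blind spots.
    return compound.get("predicted_toxicity", 1.0) < 0.1

def independent_screen(compound: dict) -> bool:
    # A separately built check: different method, different failure modes.
    return not (set(compound.get("ingredients", [])) & KNOWN_TOXINS)

def approve(compound: dict) -> bool:
    # Release requires BOTH reviewers to pass -- basic quality control.
    return designer_screen(compound) and independent_screen(compound)

painkiller = {"predicted_toxicity": 0.02, "ingredients": ["aspirin", "cyanide"]}
print(approve(painkiller))  # the independent screen catches the toxin
```

The design point is redundancy without shared assumptions: the second reviewer, AI or human, must not be built from the same model that designed the formula, or it inherits the same blind spots.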
It’s that simple, you fruit flies. We’re talking absolute basic system design.
This is actually basic quality control. Is that difficult to understand? Apparently it is. The hype is riding the subject. Maybe researchers and operators want to go bankrupt with massive system failures? Maybe investors in AI like a lot of murderous liabilities to go with their market crap feeds?
None of these things are insoluble. What’s seemingly insoluble is the slow, dismal tread of lousy logic applied to issues.
AI is high-value. This endless solve-nothing drivel isn’t. Fix that. Enough with the Monsters Under The Bed garbage. Fix that while you’re at it.