It’s a rare bit of non-panic-driven news about AI. The heavyweights in the sector have got on board with Biden’s AI checks and balances. There’s been a lot of angst about AI’s dangers, and this is at least a welcome bit of practical sanity.
The safeguards include:
- Thorough testing
- AI-generated content notifications
- Focus on user privacy and anti-discrimination
AI is already out-evolving the current clichés. DishBrain alone could wipe the floor with current AI. These checks and balances are obviously just the basics. Quite a lot of emphasis was placed on all parties being on the same page. They jointly approved the White House initiative. That’s good news because it means some degree of standardization will apply to AI on the market.
Standardization makes regulation a lot easier. These “new norms” can greatly simplify protections for users and consumers. There’d be no need for endless tweaks to AI systems if they all have clear parameters. If anything, this is an improvement on the intrusive situations many online users already experience.
But…
You hardly need a degree in psychology to have noticed that a lot of people don’t oppose the no-rules version of AI. “Get rid of all those jobs. Replace actors and writers with software. Be the evil genius of AI and dominate the world,” etc.
You’d have to be incredibly naïve to think that this Ultra-Bozo mindset will just go away. In keeping with corporate traditions, these are the guys who know less than nothing about AI. They think it can actually deliver some sort of Idiots’ Utopia for Cretins in Suits. It can’t, and it won’t.
There’s a very pleasant irony to this situation. This is the beginning of a very different ball game.
The unrestricted development of AI has created an impasse of its own making. Part of that impasse was the recognition of AI’s possible risks: maybe you can develop a super-AI, but so can everyone else.
AI is the best defense against AI. It could take apart the global malware racket with ease. It could run blockchain-style verification across a system, making penetration far harder than it is now.
…And so on. This tech may not be very advanced, but it’s way ahead of existing hacking capabilities. It’s also a lot faster and can handle big data with ease. You can expect AI security to be the next big thing for quite a while.
If you develop a rogue mega-bot, which is what a rogue AI has to be, AI can counter it. So anyone building one is really just creating the global scenario in which rogue AIs get shut down.
Also notable is the fact that a chatbot attached to huge amounts of data is still a chatbot. It’s not a creative genius. It’s not a god on your phone.
It can’t originate anything. It can recycle and remodel existing information. It can manage huge data loads compared with current systems, but that’s about all. It can synthesize combinations of data elements under instruction. It’s only as good as the instructions it gets.
That’s exactly what people mean when they say AI is not “intelligent”. This generation of AI, on its own, is nothing much. A lot of very unimpressed people have been saying so. The safeguards are the groundwork.
Never mind the bozos, full speed ahead.
__________________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.