John Giordani is a cybersecurity expert. In an article published in Forbes today, he describes cybersecurity as a war between machines, and the argument makes perfect sense.
AI has changed the rules of computer science, automating what were once manual actions by both attackers and their victims. This is because AI is an instrument, and while it was made by humans, it is morally agnostic. It can be used for good or ill, depending on who wields it.
AI can be used for surveillance of terrorists or to spy on ordinary citizens. With content filters, AI can be used to bury fake news stories or to manipulate public opinion. Basically, this means that governments and private individuals can use AI for the greater good of humanity or for their own self-interests.
For this reason, Giordani suggests that artificial intelligence could become "a real global security challenge." AI reduces the cost of existing attacks, introduces attacks that have never been seen before, and makes it more difficult to figure out where an attack is coming from.
A cybersecurity war between machines is inevitable
Giordani quotes Russian author Anton Chekhov, “If in Act I you have a pistol hanging on the wall, then it must fire in the last act.”
How does the quote relate to artificial intelligence? You could say that since artificial intelligence exists, it would be foolish not to use it. "And whoever uses it wins. That's why in the end, cybersecurity will be a war between machines," writes Giordani.
We have all kinds of antivirus, firewall, and anti-intrusion systems, and they handle the tasks they were programmed for. But today's cyberattacks use code that is constantly changing, making them impossible to identify at first, because current protection software is not autonomous.
Put simply, the security software in use today relies on pre-existing conditions; it cannot think for itself. This is where AI can be a valuable tool. Defenders can deploy AI programs that monitor online behavior and analyze vast amounts of data on their own.
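The contrast between the two approaches can be illustrated with a toy sketch (the signatures, rates, and threshold below are hypothetical, not any vendor's actual detection logic): a signature check only flags payloads it has seen before, while a simple statistical baseline, the crudest form of behavioral analysis, can flag activity that deviates from normal even when the payload is brand new.

```python
import statistics

# Toy signature database: known-bad byte patterns (hypothetical values).
KNOWN_SIGNATURES = {b"\xde\xad\xbe\xef", b"evil_payload_v1"}

def signature_match(payload: bytes) -> bool:
    """Classic approach: flag only payloads seen before."""
    return payload in KNOWN_SIGNATURES

def behavioral_anomaly(history: list[float], current: float,
                       z_threshold: float = 3.0) -> bool:
    """Flag activity (e.g. requests per minute from one host) that
    deviates sharply from the historical baseline, even when no
    known signature matches."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > z_threshold

# A slightly mutated payload slips past the signature check...
print(signature_match(b"evil_payload_v2"))       # False
# ...but a spike in request volume still stands out from the baseline.
normal_rates = [98.0, 102.0, 101.0, 99.0, 100.0]
print(behavioral_anomaly(normal_rates, 400.0))   # True
```

Real AI-driven defenses learn far richer baselines than a mean and standard deviation, but the principle is the same: judge behavior against what is normal rather than match code against what is known.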
Taking cybersecurity seriously
Last month, Washington, D.C. finally came to the realization that artificial intelligence could present both opportunities and challenges to the country, TechCrunch reports.
On March 15, the Center for a New American Security (CNAS), one of America’s top defense and foreign policy think tanks, announced the creation of a Task Force on Artificial Intelligence and National Security, as part of the organization’s Artificial Intelligence and Global Security Initiative.
The task force is a sign that the federal government is finally waking up to the challenges that AI poses to national security. The CNAS initiative will at least take a closer look at what Silicon Valley has created and, hopefully, begin to understand the implications of AI going forward.
Do You Trust This Computer?
In related news, a new documentary from Chris Paine, the man behind Who Killed the Electric Car?, came out this week. It examines the ethical consequences of artificial intelligence.
The film is dedicated to Stephen Hawking, who was quite vocal about the threat posed by AI, and features a "who's who" of academics, authors, and CEOs involved with artificial intelligence, including Elon Musk, who is not exactly thrilled about where AI could lead us. The film runs an hour and a half and is wide in scope.
It covers everything from robotic surgery and self-driving cars to autonomous weapons and humanoid robots. But it fails to answer its title question adequately. Motherboard points out that the film is clearly a warning about the dangers of AI.
Paine goes into how Cambridge Analytica used AI to manipulate the 2016 election; Tay.ai, the Microsoft chatbot that the internet turned into a Nazi within 24 hours of its release; and the rise of autonomous weapons. And yes, all these known "bad scenarios" are real.
But, and it's a big but: what, if anything, is being done to protect us from the negative impacts of artificial intelligence? Motherboard suggests that profits speak louder than good sense.