For many businesses, artificial intelligence (AI) is one of the most important technological innovations of our time. One such area of application is cybersecurity, ranging from live defence technologies to anticipating threats before attacks happen. This interest is borne out by the projection that the AI in cybersecurity market will reach $38.2 billion by 2026.
Other applications include using AI to identify and prioritize risks and to instantly spot malware on a network, guiding cybersecurity incident response. Doing so involves new forms of human-machine partnership.
The technology is also helping security teams analyze cybercrimes more effectively. This includes IBM, which uses its AI system, Watson, to apply its algorithms to a massive body of security information, what is termed a cognitive security repository.
However, cybersecurity providers are not the only ones utilizing the technology. Cybercriminals are also beginning to embrace AI in creative and unique ways that allow them to go unnoticed by cybersecurity tools.
One example is 'manipulating bots'. This is based on the premise that where AI algorithms make decisions, they can also be manipulated into making the wrong decision. If hackers understand how such models work, they can abuse them.
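The idea can be illustrated with a toy model. The sketch below is purely hypothetical (the weights and feature values are invented, not drawn from any real security product): an attacker who knows a detector's decision function can nudge a sample's features against the model's gradient until it slips under the decision boundary, a crude analogue of gradient-based evasion attacks on ML detectors.

```python
import numpy as np

# Hypothetical linear "malware score" model: score = w . x + b
# (illustrative weights only -- not from any real product)
w = np.array([2.0, -1.0, 3.0])
b = -1.5

def predict(x):
    """Return True if the model flags the sample as malicious."""
    return w @ x + b > 0

# A sample the model currently flags as malicious
x = np.array([1.0, 0.5, 0.8])

# Evasion: perturb each feature against the sign of its weight
# (for a linear model the gradient of the score is just w),
# pushing the score below the detection threshold.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x))      # the original sample is detected
print(predict(x_adv))  # the perturbed sample is not
```

Real detectors are far more complex, but the principle is the same: if the attacker can probe or reconstruct the model's decision surface, small targeted changes to a malicious artifact can flip the model's output without changing the artifact's behaviour.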
This is now the unfortunate reality in the world of cybersecurity. Enterprises may think that deploying the most AI-heavy cybersecurity solutions is their best route to protection against these attackers, but that assumption is flawed.
According to Ric Longenecker, CISO at Open Systems, the mission-driven cybersecurity services provider, those developing new technologies need to increase the pace if they are to stay ahead of the game.
Longenecker observes: “AI and ML have taken the security market by storm over the past 5 years, and now cyber attackers have also begun embracing AI to evade detection and create their own storm.”
Looking at the year ahead, Longenecker predicts: “This year, we may see AI increasingly used to attack the models within security software and those outputs used to enable malware to evade detection.”
Longenecker foresees the challenge this creates: “Although cybersecurity vendors already integrate AI within their platforms, it is important to recognize that this alone is not enough. To truly combat the threat of bad actors exploiting AI, enterprises must ensure their security providers use AI in combination with the human know-how of security experts to implement clear and repeatable processes.”