ChatGPT is a tool from OpenAI that lets a person type natural-language prompts, to which it replies with conversational, if somewhat stilted, responses. The potential of this form of ‘artificial intelligence’ is nonetheless considerable.
Google is launching its Bard AI in response to ChatGPT, and Microsoft is following closely with an AI-powered version of its Bing search engine.
What do these tools mean for the expanding threat landscape? To find out, Digital Journal sought the opinions of two NetSPI representatives.
First is Nabil Hannan, Managing Director at NetSPI. According to Hannan, businesses seeking to adopt the technology need to stand back and consider the implications: “With the likes of ChatGPT, organizations have gotten extremely excited about what’s possible when leveraging AI for identifying and understanding security issues—but there are still limitations. Even though AI can help identify and triage common security bugs faster – which will benefit security teams immensely – the need for human/manual testing will be more critical than ever as AI-based penetration testing can give organizations a false sense of security.”
Hannan also cautions that things can still go wrong and that AI is not perfect; left unplanned for, this could damage a firm’s reputation. He continues: “In many cases, it may not produce the desired response or action because it is only as good as its training model, or the data used to train it. As more AI-based tools emerge, such as Google’s Bard, attackers will also start leveraging AI (more than they already do) to target organizations. Organizations need to build systems with this in mind and have an AI-based “immune system” (or something similar) in place sooner rather than later, that will take AI-based attacks and automatically learn how to protect against them through AI in real-time.”
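The adaptive defense Hannan describes is a design concept rather than any specific NetSPI product. As a rough, hypothetical illustration of the idea (the feature names and data below are invented), an anomaly detector that periodically re-fits on the latest observed traffic captures the basic loop of a system that keeps adjusting its notion of “normal” as attack patterns shift:

```python
# Hypothetical sketch of an "immune system" style defense loop: an anomaly
# detector is periodically re-fitted on the latest observed traffic so its
# baseline of "normal" keeps adapting. Features and data here are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

def collect_request_features(n=500, drift=0.0):
    """Stand-in for real telemetry: e.g. request size, rate, error ratio."""
    return rng.normal(loc=[1.0 + drift, 0.5, 0.1], scale=0.2, size=(n, 3))

detector = IsolationForest(contamination=0.02, random_state=0)
detector.fit(collect_request_features())          # initial baseline of "normal"

for epoch in range(3):                            # periodic re-training loop
    window = collect_request_features(drift=0.1 * epoch)
    flags = detector.predict(window)              # -1 = anomalous, 1 = normal
    print(f"epoch {epoch}: flagged {np.sum(flags == -1)} of {len(window)} requests")
    detector.fit(window)                          # adapt baseline to the new window
```

In practice such a loop would also need safeguards against attackers deliberately poisoning the retraining data, which is itself an emerging area of ML security research.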
The second commentator is Nick Landers, VP of Research at NetSPI.
Landers looks at wider developments, noting: “The news from Google and Microsoft is strong evidence of the larger shift toward commercialized AI. Machine learning (ML) and AI have been heavily used across technical disciplines for the better part of 10 years, and I don’t predict that the adoption of advanced language models will significantly change the AI/ML threat landscape in the short term – any more than it already is. Rather, the popularization of AI/ML as both a casual conversation topic and an accessible tool will prompt some threat actors to ask, “how can I use this for malicious purposes?” – if they haven’t already.”
What does this mean for cybersecurity? Landers’ view is: “The larger security concern has less to do with people using AI/ML for malicious reasons and more to do with people implementing this technology without knowing how to secure it properly.”
He adds: “In many instances, the engineers deploying these models are disregarding years of security best practices in their race to the top. Every adoption of new technology comes with a fresh attack surface and risk. In the vein of leveraging models for malicious content, we’re already starting to see tools to detect generated content – and I’m sure similar features will be implemented by security vendors throughout the year.”
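The detection tools Landers refers to generally look for statistical fingerprints in the text itself. One common, though far from conclusive, signal is how predictable a passage appears to a language model; the sketch below uses GPT-2 perplexity in that way, purely as an illustration and not as a description of any particular vendor’s product:

```python
# Illustrative sketch of perplexity-based generated-text detection:
# machine-generated text tends to look more "predictable" to a language
# model than human prose. The threshold below is arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score text by how well GPT-2 predicts it (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss        # mean cross-entropy per token
    return float(torch.exp(loss))

sample = "Machine learning models are trained on large datasets of text."
score = perplexity(sample)
print(f"perplexity: {score:.1f} -> {'likely generated' if score < 40 else 'likely human'}")
```

Detectors of this kind are easy to fool with light paraphrasing, which is one reason the approach remains an arms race.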
Landers concludes: “In short, AI/ML will become a tool leveraged by both offensive and defensive actors, but defenders have a huge head start at present. A fresh cat-and-mouse game has already begun with models detecting other models, and I’m sure this will continue. I would urge people to focus on defense-in-depth with ML as opposed to the “malicious actors with ChatGPT AI” narrative.”