Tech & Science

AI-powered chatbots present both risk and hope

ChatGPT can generate “semi-reliable” text to complete tasks commonly associated with phishing campaigns and other inauthentic operations.

ChatGPT caused a global sensation when it was released last year for its ability to generate essays, songs, exams and even news articles from brief prompts - Copyright AFP STR

Do AI-driven chatbots present a security risk? Research from SecAlliance suggests that some possess inherent vulnerabilities, raising concerns that chatbots could trigger a rise in cybercrimes built on low-to-moderate-complexity malicious code and phishing text.

As the report explains, the risk is concentrated at this lower end because current GPT chatbots are not yet sophisticated enough to deliver high-level threats. As the technology improves, however, defenders will need to increase their investment in AI-enabled detection and response capabilities.

The report, titled “Security and Innovation in the Age of the Chatbot”, finds that current-generation Large Language Model (LLM)-enabled generative AI tools (such as ChatGPT, BingBot and BardAI) present three distinct areas of concern: phishing campaign support, information operation enablement and malware development.

Since the launch of ChatGPT in November 2022, the generative pre-trained transformer (GPT) model has rapidly grown a 100 million-plus user base. With this growth come fears that the technology will be used to create malicious code, even by people with little to no coding knowledge or skills.

SecAlliance suggests that current-generation LLM-enabled generative AI tools are likely to provide lower-skilled threat actors with the ability to generate low- to moderate-complexity malicious code – without requiring significant programming experience or resources.

Although OpenAI (the company behind ChatGPT) prohibits the use of its tools for purposes that violate its content policy, many of the safeguards it has implemented to prevent misuse have been shown to be easily circumvented.

Nicholas Vidal, Strategic Cyber Threat Intelligence Team Lead at SecAlliance, says: “While current LLM tools present considerable promise and considerable risk, our research shows that their broader security impacts remain muted by limitations in the underlying technology that enables their use. However, their pace of innovation is rapid, and future advancements are likely to expand the scope of possibilities for misuse.”

The SecAlliance report finds that ChatGPT can generate “semi-reliable” text to complete tasks commonly associated with phishing campaigns and other inauthentic-behaviour operations, with motivated users able to circumvent the language model’s content-filtering mechanisms.

This mirrors business concerns. According to a study conducted by BlackBerry, approximately 50 percent of IT decision-makers polled said they expect a successful cyberattack leveraging ChatGPT to be reported within the year. Of the same group, over 80 percent said they planned to invest in AI-driven cybersecurity products within two years.
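As a purely illustrative sketch of what such AI-assisted detection tooling might look like, the short Python example below trains a toy phishing-text classifier with scikit-learn. It is not drawn from the SecAlliance report or the BlackBerry survey; the example messages, labels and model choice are assumptions made for the illustration, and real deployments would rely on far larger, curated datasets.

```python
# Hypothetical sketch of machine-learning-based phishing-text detection.
# Trains a toy TF-IDF + logistic regression classifier on a handful of
# example messages, then scores a new message for phishing-like wording.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (assumed for illustration only).
messages = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your banking details to avoid account closure",
    "Click this link to claim your prize before it expires",
    "Meeting moved to 3pm, see the updated agenda attached",
    "Quarterly report draft attached for your review",
    "Lunch on Thursday? Let me know what works for you",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-like, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

incoming = "Please verify your password now to keep your account active"
score = model.predict_proba([incoming])[0][1]
print(f"Phishing likelihood: {score:.2f}")  # higher scores warrant review
```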

In addition, CyberArk researchers point out that a key hurdle remains for would-be malware developers: they must still, the researchers argue, be skilled enough to validate all possible modulation scenarios in order to produce exploit code capable of being executed.

GPT-4 is expected to be more reliable and capable of handling more nuanced instructions than its earlier-generation counterparts, potentially further reducing the barriers to misuse by malicious actors. Until then, the digital community needs to be mindful of the risks.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
