Generative AI and the Potential for Nefarious Use

Author: Chris McGowan
Date Published: 1 August 2023

Generative AI seemed to explode onto the scene in late 2022. Before then, discussion of the technology was largely confined to research and IT circles, but now everyone (well beyond the IT world) is not only talking about it but using it. This is exciting and problematic at the same time.
Generative artificial intelligence (AI) refers to a class of machine learning (ML) algorithms that can autonomously generate new content, such as images, text or audio, by learning patterns and structures from vast amounts of data. This technology has made significant progress in recent years, resulting in AI models that can create remarkably realistic and convincing outputs.

The introduction of generative AI has brought accelerated advancements in various fields, revolutionizing creative processes, content generation and decision-making systems. While generative AI presents numerous positive applications, like any powerful tool, it can also be used for malicious purposes. Its misuse in the realm of cyberattacks poses a grave concern and brings into focus the inherent risk and far-reaching implications that emerge when generative AI intersects with malicious activities. Organizations must be aware of and ready to combat the possibility of generative AI being used to conduct cyberattacks.

WormGPT

One tool gaining notoriety on underground forums is WormGPT, a generative AI tool built on the GPT-J language model, which was released in 2021. It offers a diverse set of features including unlimited character support, chat memory retention and the ability to handle code formatting efficiently, making it a powerful tool for adversaries to execute sophisticated phishing and business email compromise (BEC) attacks.1

WormGPT presents itself as a black hat alternative to standard GPT models because it is purpose-built for malicious use. Cybercriminals can leverage WormGPT to automate the creation of highly convincing fake emails that are expertly personalized to each recipient, which significantly increases the success rates of their attacks.

The introduction of WormGPT and its unscrupulous purpose highlights an alarming threat posed by generative AI: It enables even inexperienced cybercriminals to launch large-scale attacks without technical expertise. Enterprises need to be aware of these new tools and be particularly alert to new phishing and BEC attacks. One way to help mitigate this increased risk is through updated training programs and/or reviewing and enhancing email verification processes.
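To make the email verification point concrete, the following is a minimal sketch, in Python, of one check a mail-processing script might apply to inbound messages. The header name, verdict list and trusted-domain set are illustrative assumptions, not a complete anti-phishing control.

```python
# A minimal sketch of one possible inbound-email check; the policy choices
# below (verdict list, allow-listed domains) are illustrative assumptions.
from email import message_from_string
from email.message import Message

SUSPICIOUS_RESULTS = ("fail", "softfail", "none", "temperror", "permerror")

def flag_suspicious(raw_email: str) -> list[str]:
    """Return reasons an inbound message deserves extra scrutiny."""
    msg: Message = message_from_string(raw_email)
    reasons = []

    # SPF/DKIM/DMARC verdicts stamped by the receiving gateway, if present.
    auth_results = (msg.get("Authentication-Results", "") or "").lower()
    for mechanism in ("spf", "dkim", "dmarc"):
        for verdict in SUSPICIOUS_RESULTS:
            if f"{mechanism}={verdict}" in auth_results:
                reasons.append(f"{mechanism.upper()} verdict: {verdict}")

    # BEC messages often pair a familiar display name with a look-alike or
    # free-mail address; compare the sender domain against an allow list.
    from_addr = msg.get("From", "") or ""
    if "@" in from_addr:
        domain = from_addr.split("@")[-1].strip(" >").lower()
        if domain not in {"example.com"}:  # replace with your trusted domains
            reasons.append(f"Sender domain not on allow list: {domain}")

    return reasons
```

In practice, checks like these complement, rather than replace, gateway-level SPF, DKIM and DMARC enforcement and ongoing user awareness training.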

PoisonGPT

Another concerning scenario arises from the potential modification of an existing open-source AI model to spread disinformation. Such a manipulated model can be uploaded to public repositories such as Hugging Face,2 which is a prominent open-source community that specializes in developing tools that empower users to build, train and deploy ML models using open-source code and technologies. This practice is known as large language model (LLM) supply chain poisoning.

The success of this technique, known as PoisonGPT, relies on uploading the altered model under a name that impersonates a reputable organization, allowing it to blend in seamlessly and go unnoticed. PoisonGPT underscores the urgent need for heightened vigilance in the face of the evolving cyberthreat landscape.3
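One concrete safeguard against this kind of impersonation is to verify where a model comes from before loading it. The sketch below uses the Hugging Face huggingface_hub and transformers libraries to check the publishing organization and pin an exact commit; the repository ID, trusted-author list and commit hash are illustrative placeholders rather than vetted recommendations.

```python
# A minimal sketch of model-provenance checks before loading an open-source
# LLM; repository ID, author allow list and commit hash are placeholders.
from huggingface_hub import model_info
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "EleutherAI/gpt-j-6B"            # repository you intend to use
TRUSTED_AUTHORS = {"EleutherAI"}           # organizations you have vetted
PINNED_REVISION = "replace-with-a-known-good-commit-hash"

# Confirm the model really comes from the organization you expect before
# downloading any weights; impersonation relies on look-alike names.
info = model_info(REPO_ID)
if info.author not in TRUSTED_AUTHORS:
    raise RuntimeError(f"Unexpected publisher for {REPO_ID}: {info.author}")

# Pin an exact commit so a later, tampered upload is never pulled silently.
tokenizer = AutoTokenizer.from_pretrained(REPO_ID, revision=PINNED_REVISION)
model = AutoModelForCausalLM.from_pretrained(REPO_ID, revision=PINNED_REVISION)
```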

Combating both WormGPT and PoisonGPT requires a robust security program: educating users on the latest cybercriminal tactics, integrating multifactor authentication and implementing strong access control measures all help bolster organizational security against potential malicious applications of this cutting-edge generative AI technology. Adhering to these best practices enables organizations to fortify their defenses.

Polymorphic Malware

Generative AI can also be employed to build malware. Polymorphic malware is a type of malicious software that continuously changes its code to evade detection by antivirus software and other security measures. What makes AI-assisted variants particularly formidable is that a generative model can produce new, functionally equivalent code with each iteration, so no two samples share a static signature. This adaptive behavior makes the malware incredibly challenging to detect and counteract.
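As a harmless, toy illustration of why static signatures struggle here (this is ordinary Python, not malware), consider two functionally identical snippets whose cryptographic fingerprints differ completely:

```python
# A toy illustration of why hash-based signatures fail against code that
# rewrites itself: equivalent behavior, entirely different fingerprints.
import hashlib

# Two functionally equivalent snippets; the second merely inserts a no-op,
# the kind of trivial rewrite a code-generating model can produce endlessly.
variant_a = b"total = 0\nfor n in range(10):\n    total += n\n"
variant_b = b"total = 0\nfor n in range(10):\n    pass\n    total += n\n"

print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
# A signature written for variant A never matches variant B, even though the
# two variants behave identically.
```

Because every AI-generated variant yields a different fingerprint, defenders generally have to lean on behavior-based and anomaly-based detection rather than fixed signatures alone.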

Many other advanced persistent threats (APTs) also rely on AI-driven techniques, magnifying the need for unwavering vigilance when securing IT assets.

Conclusion

Generative AI has the potential to revolutionize various aspects of life in positive ways. However, its misuse for conducting cyberattacks presents significant risk to individuals, organizations and society. By understanding and recognizing the potential threats and taking proactive measures, cybersecurity professionals can harness the benefits of generative AI while safeguarding against malicious exploitation, thereby fostering a secure and trustworthy digital environment.

Endnotes

1 Mahirova, S.; “What Is WormGPT? The New AI Behind the Recent Wave of Cyberattacks,” Dazed, 18 July 2023
2 The Hacker News, “WormGPT: New AI Tool Allows Cybercriminals to Launch Sophisticated Cyber Attacks,” 15 July 2023
3 Shenwai, D.; “Meet PoisonGPT: An AI Method To Introduce A Malicious Model Into An Otherwise-Trusted LLM Supply Chain,” Marktechpost, 14 July 2023

Chris McGowan

Is the principal of information security professional practices on the ISACA® Content Development and Services team. In this role, he leads information security thought leadership initiatives relevant to ISACA’s constituents. McGowan is a highly accomplished US Navy veteran with nearly 23 years of experience spanning multidisciplinary security and cyberoperations.