While mainstream generative AI models have built-in safety barriers, open-source alternatives have no such restrictions. Here’s what that means for cyber crime.

There’s little doubt that open-source is the future of software. According to the 2024 State of Open Source Report, over two-thirds of businesses increased their use of open-source software in the last year.

Generative AI is no exception. The number of developers contributing to open-source projects on GitHub and other platforms is soaring. Organizations are investing billions in generative AI across a vast range of use cases, from customer service chatbots to code generation. Many of them are either building proprietary AI models from the ground up or on the back of open-source projects.

But legitimate businesses aren’t the only ones investing in generative AI. It’s also a veritable goldmine for malicious actors, from rogue states bent on proliferating misinformation among their rivals to cyber criminals developing malicious code or targeted phishing scams.

Tearing down the guard rails

For now, one of the few things holding malicious actors back is the guardrails developers put in place to protect their AI models against misuse. ChatGPT won’t knowingly generate a phishing email, and Midjourney won’t create abusive images. However, these models belong to entirely closed-source ecosystems, where the developers behind them have the power to dictate what they can and cannot be used for.

It took just two months from its public release for ChatGPT to reach 100 million users. Since then, countless thousands of users have tried to break through its guardrails and ‘jailbreak’ it to do whatever they want — with varying degrees of success.

The unstoppable rise of open-source models will render these guardrails obsolete anyway. While performance has typically lagged behind that of closed-source models, there’s no doubt open-source models will improve. The reason is simple — developers can use whichever data they like to train them. On the positive side, this can promote transparency and competition while supporting the democratization of AI — instead of leaving it solely in the hands of big corporations and regulators.

However, without safeguards, generative AI is the next frontier in cyber crime. Rogue AIs like FraudGPT and WormGPT are widely available on dark web markets. Both are based on the open-source large language model (LLM) GPT-J developed by EleutherAI in 2021.

Malicious actors are also using open-source image synthesizers like Stable Diffusion to build specialized models capable of generating abusive content. AI-generated video content is just around the corner. Its capabilities are currently limited only by the availability of high-performance open-source models and the considerable computing power required to run them.

What does this mean for businesses?

It might be tempting to dismiss these issues as external threats that any sufficiently trained team should be adequately equipped to handle. But as more organizations invest in building proprietary generative AI models, they also risk expanding their internal attack surfaces.

One of the biggest sources of threat in model development is the training process itself. For example, if there’s any confidential, copyrighted or incorrect data in the training data set, it might resurface later on in response to a prompt. This could be due to an oversight on the part of the development team or due to a deliberate data poisoning attack by a malicious actor.

Prompt injection attacks are another source of risk, which involves tricking or jailbreaking a model into generating content that goes against the vendor’s terms of use. That’s a risk facing every generative AI model, but the risks are arguably greater in open-source environments lacking sufficient oversight. Once AI tools are open-sourced, the organizations they originate from lose control over the development and use of the technology.

The easiest way to understand the threats posed by unregulated AI is to ask the closed-source ones to misbehave. Under most circumstances, they’ll refuse to cooperate, but as numerous cases have demonstrated, all it typically takes is some creative prompting and trial and error. However, you won’t run into any such restrictions with open-source AI systems developed by organizations like Stability AI, EleutherAI or Hugging Face — or, for that matter, a proprietary system you’re building in-house.

A threat and a vital tool

Ultimately, the threat of open-source AI models lies in just how open they are to misuse. While advancing democratization in model development is itself a noble goal, the threat is only going to evolve and grow and businesses can’t expect to count on regulators to keep up. That’s why AI itself has also become a vital tool in the cybersecurity professional’s arsenal. To understand why, read our guide on AI cybersecurity.

More from Artificial Intelligence

How I got started: AI security executive

3 min read - Artificial intelligence and machine learning are becoming increasingly crucial to cybersecurity systems. Organizations need professionals with a strong background that mixes AI/ML knowledge with cybersecurity skills, bringing on board people like Nicole Carignan, Vice President of Strategic Cyber AI at Darktrace, who has a unique blend of technical and soft skills. Carignan was originally a dance major but was also working for NASA as a hardware IT engineer, which forged her path into AI and cybersecurity.Where did you go to…

ChatGPT 4 can exploit 87% of one-day vulnerabilities: Is it really that impressive?

2 min read - After reading about the recent cybersecurity research by Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang, I had questions. While initially impressed that ChatGPT 4 can exploit the vast majority of one-day vulnerabilities, I started thinking about what the results really mean in the grand scheme of cybersecurity. Most importantly, I wondered how a human cybersecurity professional’s results for the same tasks would compare.To get some answers, I talked with Shanchieh Yang, Director of Research at the Rochester Institute…

How cyber criminals are compromising AI software supply chains

3 min read - With the adoption of artificial intelligence (AI) soaring across industries and use cases, preventing AI-driven software supply chain attacks has never been more important.Recent research by SentinelOne exposed a new ransomware actor, dubbed NullBulge, which targets software supply chains by weaponizing code in open-source repositories like Hugging Face and GitHub. The group, claiming to be a hacktivist organization motivated by an anti-AI cause, specifically targets these resources to poison data sets used in AI model training.No matter whether you use…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today