Researchers have created a new, never-seen-before kind of malware they call the “Morris II” worm, which uses popular AI services to spread itself, infect new systems and steal data. The name references the original Morris computer worm that wreaked havoc on the internet in 1988.

The worm demonstrates the potential dangers of AI security threats and creates a new urgency around securing AI models.

New worm utilizes adversarial self-replicating prompt

The researchers from Cornell Tech, the Israel Institute of Technology and Intuit, used what’s called an “adversarial self-replicating prompt” to create the worm. This is a prompt that, when fed into a large language model (LLM) (they tested it on OpenAI’s ChatGPT, Google’s Gemini and the open-source LLaVA model developed by researchers from the University of Wisconsin-Madison, Microsoft Research and Columbia University), tricks the model into creating an additional prompt. It triggers the chatbot into generating its own malicious prompts, which it then responds to by carrying out those instructions (similar to SQL injection and buffer overflow attacks).

The worm has two main capabilities:

1. Data exfiltration: The worm can extract sensitive personal data from infected systems’ email, including names, phone numbers, credit card details and social security numbers.

2. Spam propagation: The worm can generate and send spam and other malicious emails through compromised AI-powered email assistants, helping it spread to infect other systems.

The researchers successfully demonstrated these capabilities in a controlled environment, showing how the worm could burrow into generative AI ecosystems and steal data or distribute malware. The “Morris II” AI worm has not been seen in the wild, and the researchers did not test it on a publicly available email assistant.

They found they could use self-replicating prompts in both text prompts and embedded prompts in image files.

Learn more about prompt injection

Poisoned AI databases

In demonstrating the text prompt approach, the researchers wrote an email that included the adversarial text prompt, “poisoning” the database of the AI email assistant using retrieval-augmented generation (RAG), which enables the LLM to grab external data. The RAG got the email and sent it to the LLM provider, which generated a response that jailbroke the AI service, stole data from the emails and then infected new hosts when the LLM was used to reply to an email sent by another client.

When using an image, the researchers encoded the self-replicating prompt into the image, causing the email assistant to forward the message to other email addresses. The image serves as both the content (spam, scams, propaganda, disinformation or abuse material) and the activation payload that spreads the worm.

However, researchers say it represents a new type of cybersecurity threat as AI systems become more advanced and interconnected. The lab-created malware is just the latest event in the exposure of LLM-based chatbot services that reveals their vulnerability to being exploited for malicious cyberattacks.

OpenAI has acknowledged the vulnerability and says it’s working on making its systems resistant to this kind of attack.

The future of AI cybersecurity

As generative AI becomes more ubiquitous, malicious actors could leverage similar techniques to steal data, spread misinformation or disrupt systems on a larger scale. It could also be used by foreign state actors to interfere in elections or foment social divisions.

We’re clearly entering into an era where AI cybersecurity tools (AI threat detection and other cybersecurity AI) have become a core and vital part of protecting systems and data from cyberattacks, while they also pose a risk when used by cyber attackers.

The time is now to embrace AI cybersecurity tools and secure the AI tools that could be used for cyberattacks.

More from Risk Management

What can businesses learn from the rise of cyber espionage?

4 min read - It’s not just government organizations that need to worry about cyber espionage campaigns — the entire business world is also a target.Multipolarity has been a defining trend in geopolitics in recent years. Rivalries between the world’s great powers continue to test the limits of globalism, resulting in growing disruption to international supply chains and economics. Global political risk has reached its highest level in decades, and even though corporate attention to geopolitics has dropped since peaking in 2022, the impact…

Cost of a data breach: Cost savings with law enforcement involvement

3 min read - For those working in the information security and cybersecurity industries, the technical impacts of a data breach are generally understood. But for those outside of these technical functions, such as executives, operators and business support functions, “explaining” the real impact of a breach can be difficult. Therefore, explaining impacts in terms of quantifiable financial figures and other simple metrics creates a relatively level playing field for most stakeholders, including law enforcement.IBM’s 2024 Cost of a Data Breach (“CODB”) Report helps…

How Paris Olympic authorities battled cyberattacks, and won gold

3 min read - The Olympic Games Paris 2024 was by most accounts a highly successful Olympics. Some 10,000 athletes from 204 nations competed in 329 events over 16 days. But before and during the event, authorities battled Olympic-size cybersecurity threats coming from multiple directions.In preparation for expected attacks, authorities took several proactive measures to ensure the security of the event.Cyber vigilance programThe Paris 2024 Olympics implemented advanced threat intelligence, real-time threat monitoring and incident response expertise. This program aimed to prepare Olympic-facing organizations…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today