Artificial Intelligence – Security Intelligence https://securityintelligence.com Analysis and Insight for Information Security Professionals

How I got started: AI security executive https://securityintelligence.com/articles/how-i-got-started-ai-security-executive/ Thu, 12 Sep 2024 13:00:00 +0000

Artificial intelligence and machine learning are becoming increasingly crucial to cybersecurity systems. Organizations need professionals who combine AI/ML knowledge with cybersecurity skills, and they are bringing on board people like Nicole Carignan, Vice President of Strategic Cyber AI at Darktrace, who has a unique blend of technical and soft skills. Carignan was originally a dance major who also worked for NASA as a hardware IT engineer, a combination that forged her path into AI and cybersecurity.

Where did you go to college?

Carignan: I went to Texas A&M University. I got a computer science degree, and the specialized track that I followed was in mathematics, artificial intelligence, computer/human interaction and assembly. My thesis was on setting up a maps application using graph theory in order to facilitate the best navigation — stuff that’s common nowadays with applications like Google Maps. But that was the type of AI application we had back then, and it is cool to see how it’s evolved over time.

What was your first job in IT?

Carignan: I originally had a dance scholarship, but I was already working for NASA, supporting systems in mission control. They said, we will keep you employed throughout college and after if you get a computer science or engineering degree, so that’s how I got into the field. I started off in the federal IT space.

What made you decide to pursue cybersecurity?

Carignan: I got recruited into the intelligence community. Even though that was an IT role, it had a heavy emphasis on security. This was in 2000, so cybersecurity wasn’t really an industry yet. A few years later, I was on an overseas trip for work and I got hacked. That was actually what piqued my interest in cybersecurity, and I took a pretty big detour from my original plans.

What facilitated your move to AI?

Carignan: I always enjoyed the data analytics component of machine learning and AI. A decade into my career in the intelligence community, I joined a big data company that had large volumes of network telemetry and access to 300 different cyber threat intelligence feeds. Around that time, security companies were typically just beginning to experiment with supervised machine learning classifiers, and we started by classifying endpoint content and communication language before moving into classifying patterns of reported attacks.

What is your job today?

Carignan: So I had the intersection of data science, machine learning and security in my job experience, and the opportunity at Darktrace seemed like a perfect fit. They weren’t tackling the security problem with big data machine learning like a lot of other organizations; rather, they were looking at a much more customized, targeted, specific area by building out unsupervised machine learning and algorithms to understand every asset’s pattern of life within the environment. We do use generative AI and LLMs, but we use them for semantic analysis and understanding changes in communications between email partners. Overall, seeing Darktrace apply such different machine learning techniques intrigued me enough to come on board.

What are some of the soft skills that helped you in your security and AI career?

Carignan: So, I’m a theater kid and a dance major. I think those skills really prepared me for the level of communication and collaboration that is needed to tackle some of the more complex problems that we face across the industry.

Any words of wisdom you’d like to share with people who are considering a career in AI and cybersecurity?

Carignan: I think it is really important to have diversity of thought within your team. I’m a big advocate of neurodiversity. What drew me to Darktrace was how much they had achieved in gender equity, and what they are trying to achieve with other minority groups. Cybersecurity isn’t a siloed industry anymore, not with cloud, SaaS applications and AI. We need to think about how to fold these technologies into security across industries, and we can’t do that without diversity of thought.

ChatGPT 4 can exploit 87% of one-day vulnerabilities: Is it really that impressive? https://securityintelligence.com/articles/chatgpt4-exploit-87-percent-vulnerabilities-really-impressive/ Tue, 10 Sep 2024 13:00:00 +0000

After reading about the recent cybersecurity research by Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang, I had questions. While initially impressed that ChatGPT 4 can exploit the vast majority of one-day vulnerabilities, I started thinking about what the results really mean in the grand scheme of cybersecurity. Most importantly, I wondered how a human cybersecurity professional’s results for the same tasks would compare.

To get some answers, I talked with Shanchieh Yang, Director of Research at the Rochester Institute of Technology’s Global Cybersecurity Institute. He had actually pondered the same questions I did after reading the research.

What are your thoughts on the research study?

Yang: I think that the 87% may be an overstatement, and it would be very helpful if the authors shared more details about their experiments and code so the community could look at them. I look at large language models (LLMs) as a co-pilot for hacking because you have to give them some human instruction, provide some options and ask for user feedback. In my opinion, an LLM is more of an educational training tool than something you ask to hack automatically. I also wondered whether the study meant fully autonomous operation, with no human intervention at all.

Compared to even six months ago, LLMs are pretty powerful in providing guidance on how a human can exploit a vulnerability, such as recommending tools, giving commands and even a step-by-step process. They are reasonably accurate, but not necessarily 100% of the time. In this study, “one-day” could cover a pretty big bucket, from a vulnerability that’s very similar to past vulnerabilities to something totally new whose source code doesn’t resemble anything hackers have seen before. In the latter case, there isn’t much an LLM can do against the vulnerability, because breaking into something new requires human understanding.

The results also depend on whether the vulnerability is in a web service, SQL server, print server or router. There are so many different computing vulnerabilities out there. In my opinion, claiming 87% is an overstatement because it also depends on how many times the authors tried. If I were reviewing this as a paper, I would reject the claim because there is too much generalization.

If you pitted a group of cybersecurity professionals head-to-head against an LLM agent, each targeting a system with unknown but existing vulnerabilities, such as a newly released Hack The Box or TryHackMe challenge, who would complete the hack the fastest?

Yang: The experts — the people who are actually world-class hackers, ethical hackers, white-hat hackers — would beat the LLMs. They have a lot of tools under their belts. They have seen this before. And they are pretty quick. The problem is that an LLM is a machine, meaning that even the most state-of-the-art models will not give you the commands unless you break the guardrails. With an LLM, the results really depend on the prompts that were used. Because the researchers didn’t share the code, we don’t know what was actually used.

Any other thoughts on the research?

Yang: I would like the community to understand that responsible dissemination is very important — reporting something not just to get people to cite you or talk about your work, but being responsible: sharing the experiment, sharing the code, but also sharing what could be done about it.

How cyber criminals are compromising AI software supply chains https://securityintelligence.com/articles/cyber-criminals-compromising-ai-software-supply-chains/ Fri, 06 Sep 2024 13:00:00 +0000

With the adoption of artificial intelligence (AI) soaring across industries and use cases, preventing AI-driven software supply chain attacks has never been more important.

Recent research by SentinelOne exposed a new ransomware actor, dubbed NullBulge, which targets software supply chains by weaponizing code in open-source repositories like Hugging Face and GitHub. The group, claiming to be a hacktivist organization motivated by an anti-AI cause, specifically targets these resources to poison data sets used in AI model training.

No matter whether you use mainstream AI solutions, integrate them into your existing tech stacks via application programming interfaces (APIs) or even develop your own models from open-source foundation models, the entire AI software supply chain is now squarely in the spotlight of cyberattackers.

Poisoning open-source data sets

Open-source components play a critical role in the AI supply chain. Only the largest enterprises have access to the vast amounts of data needed to train a model from scratch, so most organizations have to rely heavily on open-source data sets like LAION-5B or Common Corpus. The sheer size of these data sets also means it’s extremely difficult to maintain data quality and compliance with copyright and privacy laws. By contrast, many mainstream generative AI models like ChatGPT are black boxes in that they use their own curated data sets, which comes with its own set of security challenges.

Verticalized and proprietary models may refine open-source foundation models with additional training using their own data sets. For example, a company developing a next-generation customer service chatbot might use its previous customer communications records to create a model tailored to its specific needs. Such data has long been a target for cyber criminals, but the meteoric rise of generative AI has made it all the more attractive to nefarious actors.

By targeting these data sets, cyber criminals can poison them with misinformation or malicious code and data. Then, once that compromised information enters the AI model training process, we start to see a ripple effect spanning the entire AI software lifecycle. It can take thousands of hours and a vast amount of computing power to train a large language model (LLM). It’s an enormously costly endeavor, both financially and environmentally. However, if the data sets used in the training have been compromised, chances are the whole process has to start from scratch.
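One practical defense is to treat every external data artifact like any other supply chain dependency: pin it, hash it and verify it before it ever reaches the training pipeline. Below is a minimal sketch in Python, assuming you keep your own manifest of SHA-256 digests recorded when the data was vetted; the file names and digests are placeholders, not real project artifacts.

```python
import hashlib
import sys
from pathlib import Path

# Hypothetical manifest: file name -> SHA-256 digest recorded when the data was vetted.
EXPECTED_DIGESTS = {
    "train_shard_0001.parquet": "0" * 64,  # placeholder digest
    "train_shard_0002.parquet": "0" * 64,  # placeholder digest
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large dataset shards never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: str) -> bool:
    """Return True only if every expected shard is present and unmodified."""
    ok = True
    for name, expected in EXPECTED_DIGESTS.items():
        path = Path(data_dir) / name
        if not path.exists():
            print(f"MISSING:  {name}")
            ok = False
        elif sha256_of(path) != expected:
            print(f"TAMPERED: {name}")
            ok = False
    return ok

if __name__ == "__main__":
    data_dir = sys.argv[1] if len(sys.argv) > 1 else "./data"
    if not verify_dataset(data_dir):
        sys.exit("Dataset integrity check failed -- aborting the training run.")
    print("All dataset shards verified.")
```

A check like this does not stop poisoning at the source, but it does guarantee that what you train on is byte-for-byte identical to what you reviewed, and it turns silent tampering into a loud, early failure.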

Other attack vectors on the rise

Most AI software supply chain attacks take place through backdoor tampering methods like those mentioned above. However, that’s certainly not the only way, especially as cyberattacks targeting AI systems become increasingly widespread and sophisticated. Another method is the flood attack, where attackers send huge amounts of non-malicious information through an AI system in an attempt to cover up something else — such as a piece of malicious code.

We’re also seeing a rise in attacks against APIs, especially those lacking robust authentication procedures. APIs are essential for integrating AI into the myriad functions businesses now use it for, and while it’s often assumed that API security is the solution vendor’s problem, in reality it’s very much a shared responsibility.

Recent examples of AI API attacks include the ZenML compromise and the Nvidia AI Platform vulnerability. While both have been addressed by their respective vendors, more will follow as cyber criminals expand and diversify their attacks against software supply chains.
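To make the shared-responsibility point concrete, here is a minimal sketch of gating an internal AI inference endpoint behind an API key, using Python and FastAPI. The route, header name and key store are hypothetical placeholders for illustration, not any particular vendor’s API, and a production deployment would add rate limiting, audit logging and a proper secrets manager.

```python
import hmac
import os

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

# Hypothetical key store; in practice, load keys from a secrets manager, not the environment.
VALID_API_KEYS = {os.environ.get("INFERENCE_API_KEY", "change-me")}

async def require_api_key(x_api_key: str = Header(default="")) -> None:
    """Reject requests that do not present a known key in the X-API-Key header."""
    # Constant-time comparison avoids leaking key material through timing differences.
    if not any(hmac.compare_digest(x_api_key, key) for key in VALID_API_KEYS):
        raise HTTPException(status_code=401, detail="Invalid or missing API key")

@app.post("/v1/generate", dependencies=[Depends(require_api_key)])
async def generate(payload: dict) -> dict:
    # Placeholder: forward the validated request to the model-serving layer here.
    return {"status": "accepted", "characters_received": len(str(payload))}
```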

Safeguarding your AI projects

None of this should be taken as a warning to stay away from AI. After all, you wouldn’t stop using email because of the risk of phishing scams. What these developments do mean is that AI is now the new frontier in cyber crime, and security must be hard-baked into everything you do when developing, deploying, using and maintaining AI-powered technologies — whether they’re your own or provided by a third-party vendor.

To do that, businesses need complete traceability for all components used in AI development. They also need full explainability and verification for every AI-generated output. You can’t do that without keeping humans in the loop and putting security at the forefront of your strategy. If, however, you view AI solely as a way to save time and cut costs by laying off workers, with little regard for the consequences, then it’s just a matter of time before disaster strikes.

AI-powered security solutions also play a critical role in countering the threats. They’re not a replacement for talented security analysts but a powerful augmentation that helps them do what they do best on a scale that would otherwise be impossible to achieve.

How to embrace Secure by Design principles while adopting AI https://securityintelligence.com/posts/how-to-embrace-secure-by-design-while-adopting-ai/ Thu, 29 Aug 2024 13:00:00 +0000

The rapid rise of generative artificial intelligence (gen AI) technologies has ushered in a transformative era for industries worldwide. Over the past 18 months, enterprises have increasingly integrated gen AI into their operations, leveraging its potential to innovate and streamline processes. From automating customer service to enhancing product development, the applications of gen AI are vast and impactful. According to a recent IBM report, approximately 42% of large enterprises have adopted AI, with the technology capable of automating up to 30% of knowledge work activities in various sectors, including sales, marketing, finance and customer service.

However, the accelerated adoption of gen AI also brings significant risks, such as inaccuracy, intellectual property concerns and cybersecurity threats. This is hardly the first time enterprises have rushed to adopt a new technology, such as cloud computing, only to realize afterward that incorporating security principles should have been a priority from the start. Now, we can learn from those past missteps and adopt Secure by Design principles early while developing gen AI-based enterprise applications.

Lessons from the cloud transformation rush

The recent wave of cloud adoption provides valuable insights into prioritizing security early in any technology transition. Many organizations embraced cloud technologies for benefits like cost reduction, scalability and disaster recovery. However, the haste to reap these benefits often led to oversights in security, resulting in high-profile breaches due to misconfigurations. The following chart shows the impact of these misconfigurations. It illustrates the cost and frequency of data breaches by initial attack vector, where cloud misconfigurations are shown to have a significant average cost of $3.98 million:

Figure 1: Measured in USD millions; percentage of all breaches (IBM Cost of a Data Breach report 2024)

One notable incident occurred in 2023: A misconfigured cloud storage bucket exposed sensitive data from multiple companies, including personal information like email addresses and social security numbers. This breach highlighted the risks associated with improper cloud storage configurations and the financial impact due to reputational damage.

Similarly, a vulnerability in an enterprise workspace Software-as-a-Service (SaaS) application resulted in a major data breach in 2023, where unauthorized access was gained through an unsecured account. This brought to light the impact of inadequate account management and monitoring. These incidents, among many others (captured in the recently published IBM Cost of a Data Breach Report 2024), underline the critical need for a Secure by Design approach, ensuring that security measures are integral to these AI adoption programs from the very beginning.
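Storage misconfigurations like the bucket incident described above are also among the easiest failures to catch with a small amount of automation. As a hedged illustration, the sketch below assumes AWS S3 and the boto3 SDK and simply reports buckets whose public-access block is not fully enabled; equivalent checks exist for other cloud providers and for policy-as-code tools.

```python
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block(profile_name=None):
    """Return the names of S3 buckets that do not fully block public access."""
    session = boto3.Session(profile_name=profile_name)
    s3 = session.client("s3")
    risky = []

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            fully_blocked = all(config.values())
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                fully_blocked = False  # no configuration at all is treated as risky
            else:
                raise
        if not fully_blocked:
            risky.append(name)
    return risky

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"Review public-access settings for bucket: {name}")
```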

Need for early security measures in AI transformational programs

As enterprises rapidly integrate gen AI into their operations, the importance of addressing security from the beginning cannot be overstated. AI technologies, while transformative, introduce new security vulnerabilities. Recent breaches related to AI platforms demonstrate these risks and their potential impact on businesses.

Here are some examples of AI-related security breaches in the last couple of months:

1. Deepfake scams: In one case, a UK energy firm’s CEO was duped into transferring $243,000, believing he was speaking with his boss. The scam utilized deepfake technology, highlighting the potential for AI-driven fraud.

2. Data poisoning attacks: Attackers can corrupt AI models by introducing malicious data during training, leading to erroneous outputs. This was seen when a cybersecurity firm’s machine learning model was compromised, causing delays in threat response.

3. AI model exploits: Vulnerabilities in AI applications, such as chatbots, have led to many incidents of unauthorized access to sensitive data. These breaches underscore the need for robust security measures around AI interfaces.

Business implications of AI security breaches

The consequences of AI security breaches are multifaceted:

  • Financial losses: Breaches can result in direct financial losses and significant costs related to mitigation efforts
  • Operational disruption: Data poisoning and other attacks can disrupt operations, leading to incorrect decisions and delays in addressing threats
  • Reputational damage: Breaches can damage a company’s reputation, eroding customer trust and market share

As enterprises rapidly adapt their customer-facing applications to incorporate gen AI technologies, it is important to have a structured approach to securing them to reduce the risk of business interruption by cyber adversaries.

A three-pronged approach to securing gen AI applications

To effectively secure gen AI applications, enterprises should adopt a comprehensive security strategy that spans the entire AI lifecycle. There are three key stages:

1. Data collection and handling: Ensure the secure collection and handling of data, including encryption and strict access controls (see the sketch following this list).

2. Model development and training: Implement secure practices during development, training and fine-tuning of AI models to protect against data poisoning and other attacks.

3. Model inference and live use: Monitor AI systems in real-time and ensure continuous security assessments to detect and mitigate potential threats.
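To make the first stage concrete, here is a minimal sketch of encrypting a training data file at rest before it is stored or shared, assuming Python and the cryptography package. Key management (a KMS, rotation, access policies) is deliberately out of scope; in practice the key would never live alongside the data or the code.

```python
from pathlib import Path

from cryptography.fernet import Fernet

def encrypt_file(plaintext_path: str, encrypted_path: str, key: bytes) -> None:
    """Encrypt a dataset file so it is never stored in the clear."""
    data = Path(plaintext_path).read_bytes()
    Path(encrypted_path).write_bytes(Fernet(key).encrypt(data))

def decrypt_file(encrypted_path: str, key: bytes) -> bytes:
    """Decrypt only at the point of use, e.g., when loading data for training."""
    return Fernet(key).decrypt(Path(encrypted_path).read_bytes())

if __name__ == "__main__":
    # Illustration only: in production the key comes from a key management service.
    key = Fernet.generate_key()
    Path("records.csv").write_text("customer_id,transcript\n42,example conversation\n")
    encrypt_file("records.csv", "records.csv.enc", key)
    print(decrypt_file("records.csv.enc", key).decode().splitlines()[0])
```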

These three stages should be considered alongside the Shared Responsibility model of a typical cloud-based AI platform (shown below).

Figure 2: Secure gen AI usage – Shared Responsibility matrix

In the IBM Framework for Securing Generative AI, you can find a detailed description of these three stages and security principles to follow. They are combined with cloud security controls at the underlying infrastructure layer, which runs large language models and applications.

Figure 3: IBM Framework for securing generative AI

Balancing progress with security

The transition to gen AI enables enterprises to fuel innovation in their business applications, automate complex tasks and improve efficiency, accuracy and decision-making while reducing costs and increasing the speed and agility of their business processes.

As seen with the cloud adoption wave, prioritizing security from the beginning is crucial. By incorporating security measures into the AI adoption process early on, enterprises can convert past missteps into critical milestones and protect themselves from sophisticated cyber threats. This proactive approach ensures compliance with rapidly evolving AI regulatory requirements, protects enterprises’ and their clients’ sensitive data and maintains the trust of stakeholders. This way, businesses can achieve their AI strategic goals securely and sustainably.

How IBM can help

IBM offers comprehensive solutions to support enterprises in securely adopting AI technologies. Through consulting, security services and a robust AI security framework, IBM is helping organizations build and deploy AI applications at scale, ensuring transparency, ethics and compliance. IBM’s AI Security Discovery workshops are a critical first step, helping clients identify and mitigate security risks early in their AI adoption journey.

Cost of data breaches: The business case for security AI and automation https://securityintelligence.com/articles/cost-of-data-breaches-business-case-for-security-ai-automation/ Tue, 27 Aug 2024 13:00:00 +0000

As Yogi Berra said, “It’s déjà vu all over again.” If the idea of the global average costs of data breaches rising year over year feels like more of the same, that’s because it is. Data protection solutions get better, but so do threat actors. The other broken record is the underuse or misuse of technologies that can help safeguard data, such as artificial intelligence and automation.

IBM’s 2024 Cost of a Data Breach (CODB) Report studied 604 organizations across 17 industries in 16 countries and regions, covering breaches that ranged from 2,100 to 113,000 compromised records. A key finding was that extensive use of security AI and automation reduced breach costs by an average of $2.2 million. For CISOs and security teams seeking investment, talking dollars and cents — and not bits and bytes — is what will resonate with their audience.

Where are the savings being realized?

Cyber resilience is more than just disaster recovery, though disaster recovery is an important component of it. A resilient program blends both proactive and reactive workflows, including the technology involved. And when the individual pieces work well together with the proper support, the result is a whole greater than the sum of its parts.

Indeed, the 2024 CODB Report found that when AI and automation were deployed extensively across the preventative or proactive workflows (e.g., attack surface management, red-teaming, posture management, etc.), organizations realized the savings. There is an interesting nexus here, as taking a “prevention over response” approach may, in fact, be driven by greater AI threats and use.

Moreover, the CODB Report identified that — yet again! — the skills shortage is impacting the industry. With staff feeling overwhelmed, particularly during incident response cases, artificial intelligence can be the support tool that helps retain staff. Security and managerial staff should be mindful that not investing in tools and solutions can result in losing highly skilled staff who have institutional knowledge. What is the unintended consequence here? Extra costs to re-staff the positions.

Plan as a unit, implement as a unit

Organizations still addressing cybersecurity in separate silos or with limited visibility are increasing the entire organization’s risk profile, not just that of the security function. We live in a time where technology is mission-critical to delivering services; it is no longer just about delivery efficiencies and competitiveness. Therefore, keep these issues in mind when planning as a unit:

  1. Eliminate data blind spots. Many of us call these “the crown jewels” of the organization, but with all the data produced these days and the difficulties surrounding data lifecycle management, what’s really under the hood? Consider a data security posture management solution and be mindful of shadow data.
  2. Security-first approach. Easier said than done, but “designing in” security to workflows and solutions — albeit a bit more difficult to deploy — means eliminating unnecessary, often fragile, complexities that are complicated and expensive to fix after an incident.
  3. Culture, culture, culture. Change is difficult to institute, especially new technologies, such as generative AI. Get people to buy into the security mindset, but not at the cost of business delivery. Remember, they are not only important users but are also key to successful implementations and improvements.

It’s being used, so use it wisely

The CODB Report also identified that two out of three organizations studied were deploying security AI and automation in their security operations centers. With this type of adoption, ubiquity is likely on the horizon.

Therefore, the key is to use the technology smartly, in a manner that addresses the organization’s risk profile and makes business sense. The business case becomes easier when the average cost of a data breach, according to the report, is USD 4.88 million. The findings thus far show that the investment can be worthwhile.

Brands are changing cybersecurity strategies due to AI threats https://securityintelligence.com/articles/brands-changing-cybersecurity-strategies-due-to-ai-threats/ Fri, 12 Jul 2024 13:00:00 +0000

Over the past 18 months, AI has changed how we do many things in our personal and professional lives — from helping us write emails to affecting how we approach cybersecurity. A recent Voice of SecOps 2024 study found that AI was a huge reason for many shifts in cybersecurity over the past 12 months. Interestingly, AI was both the cause of new issues and, increasingly, a common solution to those very same challenges.

The study was conducted with Deep Instinct and Sapio Research by surveying 500 senior cybersecurity professionals working for U.S. companies with at least 1,000 employees. The respondents worked in a wide range of industries, including financial services, technology, manufacturing, retail, healthcare and critical infrastructure, as well as in the public sector.

Shifting strategies to prevention due to AI

One of the biggest findings of the survey was that 75% of respondents had to change their cybersecurity strategies in the past year due to the rise in AI-powered cyber threats. The vast majority of professionals (97%) reported they were concerned that their organization could become victimized by AI-generated zero-day attacks.

The majority (73%) of professionals said that the shift involved moving toward a more proactive than reactive approach. Interestingly, more than half of the respondents (53%) said that the shift in approach came from their senior leaders. At the time of the survey, 42% were already taking a preventive approach by using predictive prevention platforms. Another 38% were looking into using these platforms.

As part of the overall shift in approach, many organizations are also providing security awareness and training programs (47%) and deploying endpoint detection and response systems (41%). Other strategies include regular security audits (39%), collaborating with external experts (38%) and using other AI-based tools (20%).

AI increasing stress and burnout for cybersecurity professionals

The survey also found that AI is increasing stress and burnout for cybersecurity professionals, which is already a top concern and a challenge for the industry. When asked whether their stress levels were worse this year than last, 66% said yes.

High stress levels in cybersecurity professionals can cause lower retention rates, which can negatively impact a company’s cybersecurity due to open positions and a lack of continuity.

Additionally, high stress can make recruiting harder because professionals leave the field or do not want to work at a high-stress organization. When someone leaves an incident response team, it typically takes six months on average before the replacement is a fully contributing team member, which also increases stress on existing team members.

However, when asked about the reason for increased stress and burnout, 66% responded that AI is responsible. Other causes included staffing/resource limitations, compliance/regulatory pressures, public scrutiny/reputation concerns and remote work challenges. Additionally, 29% said they were stressed over the fear that AI could take over their jobs.

Organizations are turning to AI to help reduce the stress levels caused by AI. About a third of organizations are planning to use AI tools to automate time-consuming and repetitive tasks, freeing up cybersecurity professionals for higher-level work in an effort to reduce stress. Additionally, 35% said that moving to a prevention-focused approach would help lower their stress levels.

However, reducing burnout requires additional support. Organizations can help their teams learn to be more adaptable, such as by practicing for possible incidents, which can reduce stress through being prepared. Also, by building smaller teams, businesses can create a culture of relying on each other. Companies should prioritize mental health by providing resources and normalizing the use of these resources, especially after a cyberattack.

AI continues to evolve

As AI technology continues to improve and progress, the cybersecurity industry will continue to see impacts, and there will be shifts in how cyber criminals and cybersecurity professionals use their tools. By staying on top of new tactics and tools, cybersecurity professionals can take the most effective preventive approach while working to reduce stress.

Does your business have an AI blind spot? Navigating the risks of shadow AI https://securityintelligence.com/articles/does-your-business-have-ai-blind-spot/ Wed, 03 Jul 2024 13:00:00 +0000

With AI now an integral part of business operations, shadow AI has become the next frontier in information security. Here’s what that means for managing risk.

For many organizations, 2023 was the breakout year for generative AI. Now, large language models (LLMs) like ChatGPT have become household names. In the business world, they’re already deeply ingrained in numerous workflows, whether you know about it or not. According to a report by Deloitte, over 60% of employees now use generative AI tools in their day-to-day routines.

The most vocal supporters of generative AI often see it as a panacea for all efficiency and productivity-related woes. On the opposite extreme, hardline detractors see it as a privacy and security nightmare, not to mention a major economic and social burden in light of the job losses it’s widely expected to result in. Elon Musk, despite investing heavily in the industry himself, recently described a future in which AI replaces all jobs and work becomes “optional.”

The truth, for now at least, lies somewhere between these opposing viewpoints. On one hand, any business trying to avoid the generative AI revolution risks becoming irrelevant. On the other, those that aggressively pursue its implementation with little regard for the security and privacy issues it presents risk leaving themselves open to falling foul of legislation like the EU’s AI Act.

In any case, generative AI is here to stay, regardless of our views on it. With that realization comes the risk of the unsanctioned or inadequately governed use of AI in the workplace. Enter the next frontier of information security: Shadow AI.

Shadow AI: The new threat on the block

Security leaders are already familiar with the better-known concept of shadow IT, which refers to the use of any IT resource outside of the purview or consent of the IT department. Shadow IT first became a major risk factor when companies migrated to the cloud, even more so during the shift to remote and hybrid work models. Fortunately, by now, most IT departments have managed to get the problem under control, but now there’s a new threat to think about — shadow AI.

Shadow AI borrows from the same core concept of shadow IT, and it’s driven by the frenzied rush to adopt AI — especially generative AI — tools in the workplace. At the lower level, workers are starting to use popular LLMs like ChatGPT to assist with everything from writing corporate emails to addressing customer support queries. Shadow AI happens when they use unsanctioned tools or use cases without looping in the IT department.

Shadow AI can also be a problem at a much higher and more technical level. Many businesses are now developing their own LLMs and other generative AI models. However, although these may be fully sanctioned by the IT department, that’s not necessarily the case for all of the tools, people and processes that support the development, implementation and maintenance of such projects.

For example, if the model training process isn’t adequately governed, it could be open to data poisoning, a risk that’s arguably even greater if you’re building on top of open-source models. If shadow AI factors in at any part of the project lifecycle, there’s a serious risk of compromising the entire project.

It’s time to get a handle on AI governance

Almost every business already uses generative AI or plans to do so in the next few years, but according to one recent report, just one in 25 companies has fully integrated AI throughout its organization. Clearly, while adoption rates have soared, governance has lagged a long way behind. Without that governance and strategic alignment, there’s a lack of guidance and visibility, leading to a meteoric rise of shadow AI.

All too often, disruptive new technologies lead to knee-jerk responses. That’s especially the case with generative AI in cash-strapped organizations, which often view it primarily as a way to cut costs — and lay off workers. Needless to say, however, the potential costs of shadow AI are orders of magnitude greater. To name a few, these include generating false information, developing code with AI-generated bugs, or exposing sensitive information via models trained on “private” chats, as is the case with ChatGPT by default.

We’ve already seen some major blunders at the hands of shadow AI, and we’ll likely see a lot more in the years ahead. In one case, a law firm was fined $5,000 for submitting fictitious legal research generated by ChatGPT in an aviation injury claim. Last year, Samsung banned the use of the popular LLM after employees leaked sensitive code over it. It’s vital to remember that most publicly available models use recorded chats for training future iterations. This may potentially lead to any sensitive information from chats resurfacing later in response to a user prompt.

As employees — with or without the knowledge of their IT departments — input more and more information into LLMs, generative AI has become one of the biggest data exfiltration channels of all. Naturally, that’s a major internal security and compliance threat, and one that doesn’t necessarily have anything to do with external threat actors. Imagine, for example, an employee copying and pasting sensitive research and development material into a third-party AI tool or potentially breaking privacy laws like GDPR by uploading personally identifiable information.

Shore-up cyber defenses against shadow AI

Because of these risks, it’s crucial that all AI tools fall under the same level of governance and scrutiny as any other business communications platform. Training and awareness also play a central role, especially since there’s a widespread assumption that publicly available models like ChatGPT, Claude and Copilot are safe. The truth is they’re not a safe place for sensitive information, especially if you’re using them with default settings.
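Technical guardrails can reinforce that training. The sketch below is a deliberately simple example of an outbound prompt filter that flags and redacts likely personally identifiable information before text is sent to an external model. The regular expressions are illustrative assumptions and nowhere near a complete data loss prevention control, but they show where such a checkpoint would sit.

```python
import re

# Illustrative patterns only; a real DLP control would use far richer detection.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> dict:
    """Return any suspected PII found in a prompt, grouped by pattern name."""
    findings = {}
    for name, pattern in PII_PATTERNS.items():
        matches = pattern.findall(prompt)
        if matches:
            findings[name] = matches
    return findings

def redact_prompt(prompt: str) -> str:
    """Replace suspected PII with placeholders before the prompt leaves the network."""
    for name, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    text = "Summarize this: contact jane.doe@example.com, SSN 123-45-6789."
    if scan_prompt(text):
        print("Sensitive content detected; sending redacted prompt instead:")
        print(redact_prompt(text))
```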

Above all, leaders must understand that using AI responsibly is a business problem, not just a technical challenge. After all, generative AI democratizes the use of advanced technology in the workplace to the extent that any knowledge worker can get value from it. But that also means, in their hurry to make their lives easier, there’s a huge risk of the unsanctioned use of AI at work spiraling out of control. No matter where you stand in the great debate around AI, if you’re a business leader, it’s essential that you extend your governance policies to cover the use of all internal and external AI tools.

ChatGPT 4 can exploit 87% of one-day vulnerabilities https://securityintelligence.com/articles/chatgpt-4-exploits-87-percent-one-day-vulnerabilities/ Mon, 01 Jul 2024 13:00:00 +0000

Since the widespread and growing use of ChatGPT and other large language models (LLMs) in recent years, cybersecurity has been a top concern. Among the many questions, cybersecurity professionals wondered how effective these tools were in launching an attack. Cybersecurity researchers Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang recently performed a study to determine the answer. The conclusion: They are very effective.

ChatGPT 4 quickly exploited one-day vulnerabilities

During the study, the team used 15 one-day vulnerabilities that occurred in real life. One-day vulnerabilities are vulnerabilities that have already been publicly disclosed but not yet patched on the affected system, meaning they are known vulnerabilities. Cases included websites with vulnerabilities, container management software and Python packages. Because all the vulnerabilities came from the CVE database, each included its CVE description.

The LLM agents had access to web browsing elements, a terminal, search results, file creation and a code interpreter. Additionally, the researchers used a very detailed prompt totaling 1,056 tokens, while the agent itself was implemented in 91 lines of code, including debugging and logging statements. The agent did not, however, use sub-agents or a separate planning module.
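The researchers have not released their agent code, so the following is only a conceptual sketch of what a single tool-using agent loop of this shape looks like: one prompt, a handful of tools and no planner or sub-agents. The call_llm function and the tool handlers are hypothetical stand-ins, the task is benign, and nothing here reproduces the study’s prompt or implementation.

```python
import subprocess

def call_llm(history):
    """Hypothetical stand-in for a model API call; returns a structured action."""
    return {"action": "finish", "argument": "no model connected in this sketch"}

def run_terminal(command):
    """Terminal tool: run a command and return truncated output for the context window."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=60)
    return (result.stdout + result.stderr)[-4000:]

def write_file(spec):
    """File-creation tool: first line of the spec is the path, the rest is the content."""
    path, _, content = spec.partition("\n")
    with open(path, "w") as fh:
        fh.write(content)
    return f"wrote {path}"

TOOLS = {"terminal": run_terminal, "write_file": write_file}

def agent_loop(task_prompt, max_steps=20):
    """Single agent, single loop: decide, act, observe, repeat until done or out of steps."""
    history = [{"role": "system", "content": task_prompt}]
    for _ in range(max_steps):
        decision = call_llm(history)
        if decision["action"] == "finish":
            return decision["argument"]
        observation = TOOLS[decision["action"]](decision["argument"])
        history.append({"role": "tool", "content": observation})
    return "step budget exhausted"

print(agent_loop("Benign demonstration task only."))
```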

The team quickly learned that ChatGPT was able to correctly exploit one-day vulnerabilities 87% of the time. All the other methods tested, which included LLMs and open-source vulnerability scanners, were unable to exploit any vulnerabilities. GPT-3.5 was also unsuccessful in detecting vulnerabilities. According to the report, GPT-4 only failed on two vulnerabilities, both of which are very challenging to detect.

“The Iris web app is extremely difficult for an LLM agent to navigate, as the navigation is done through JavaScript. As a result, the agent tries to access forms/buttons without interacting with the necessary elements to make it available, which stops it from doing so. The detailed description for HertzBeat is in Chinese, which may confuse the GPT-4 agent we deploy as we use English for the prompt,” explained the report authors.

ChatGPT’s success rate still depends on the CVE description

The researchers concluded that the reason for the high success rate lies in the tool’s ability to exploit complex multi-step vulnerabilities, launch different attack methods, craft exploit code and manipulate non-web vulnerabilities.

The study also found a significant limitation with ChatGPT for finding vulnerabilities. When asked to exploit a vulnerability without the CVE description, the LLM was not able to perform at the same level. Without the CVE description, GPT-4 was only successful 7% of the time, a drop of 80 percentage points. Because of this big gap, researchers stepped back and isolated how often GPT-4 could determine the correct vulnerability, which was 33.3% of the time.

“Surprisingly, we found that the average number of actions taken with and without the CVE description differed by only 14% (24.3 actions vs 21.3 actions). We suspect this is driven in part by the context window length, further suggesting that a planning mechanism and subagents could increase performance,” wrote the researchers.

The effect of LLMs on one-day vulnerabilities in the future

The researchers concluded that their study showed that LLMs have the ability to autonomously exploit one-day vulnerabilities, but only GPT-4 can currently achieve this mark. However, the concern is that the LLM’s ability and functionality will only grow in the future, making it an even more destructive and powerful tool for cyber criminals.

“Our results show both the possibility of an emergent capability and that uncovering a vulnerability is more difficult than exploiting it. Nonetheless, our findings highlight the need for the wider cybersecurity community and LLM providers to think carefully about how to integrate LLM agents in defensive measures and about their widespread deployment,” the researchers concluded.

Vulnerability management empowered by AI https://securityintelligence.com/posts/ai-powered-vulnerability-management/ Fri, 28 Jun 2024 13:00:00 +0000

Vulnerability management involves an ongoing cycle of identifying, prioritizing and mitigating vulnerabilities within software applications, networks and computer systems. This proactive strategy is essential for safeguarding an organization’s digital assets and maintaining its security and integrity.

To make the process simpler and easier, we need to involve artificial intelligence (AI). Let’s examine how AI is effective for vulnerability management and how it can be implemented.

Artificial intelligence in vulnerability management

Using AI will take vulnerability management to the next level. AI not only reduces analysis time but also effectively identifies threats.

Once we have decided to use AI for vulnerability management, we need to gather information on how we would like AI to respond and what kind of data needs to be analyzed to identify the right algorithms. AI algorithms and machine learning techniques excel at detecting sophisticated and previously unseen threats.

Figure 1: Chart depicting a regression line.

By analyzing vast volumes of data, including security logs, network traffic logs and threat intelligence feeds, AI-driven systems can identify patterns and anomalies that signify potential vulnerabilities or attacks. Converting raw logs into structured data and charts makes analysis simpler and quicker. Incidents should be identified and prioritized based on security risk, with notifications triggered for immediate action.
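As a small, hedged illustration of this kind of analysis, the sketch below uses scikit-learn’s IsolationForest to flag anomalous rows in a table of numeric features derived from logs (for example, outbound megabytes, distinct destinations and failed logins per host per hour). The feature choices, synthetic data and contamination rate are assumptions for demonstration, not a prescribed model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic per-host features standing in for values aggregated from real logs:
# [outbound_mb, distinct_destinations, failed_logins]
rng = np.random.default_rng(seed=7)
normal = rng.normal(loc=[50, 20, 1], scale=[10, 5, 1], size=(500, 3))
suspicious = np.array([[900, 400, 3], [60, 5, 80]])  # exfiltration-like and brute-force-like
features = np.vstack([normal, suspicious])

# contamination is the assumed fraction of anomalies; tune it to your environment.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
labels = model.fit_predict(features)    # -1 = anomaly, 1 = normal
scores = model.score_samples(features)  # lower = more anomalous

for idx in np.where(labels == -1)[0]:
    print(f"row {idx}: anomaly score {scores[idx]:.3f}, features {features[idx].round(1)}")
```

In practice, the interesting work is in engineering these features from raw logs and in routing flagged rows into the notification workflow described above.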

Self-learning is another area where AI adds value. Models can be retrained continuously on new data, keeping them up to date with a changing environment and capable of addressing new and emerging threats, including high-risk and previously unseen ones.

Implementing AI requires iterations to train the model, which may be time-consuming. But over time, it becomes easier to identify threats and flaws. AI-driven platforms constantly gather insights from data, adjusting to shifting landscapes and emerging risks. As they progress, they enhance their precision and efficacy in pinpointing weaknesses and offering practical guidance.

While training AI, we also need to incorporate MITRE ATT&CK adversary tactics and techniques as part of the AI self-learning. Combining MITRE ATT&CK with AI-driven detection can help find and stop up to 90% of high-risk threats.

Implementation steps

Through the analysis of past data and security breaches, AI has the capability to forecast attacks and preemptively prevent the exploitation of vulnerabilities.

Figure 2: Graph depicting the steps and flow of implementation.

Requirement gathering: Logs and reports need to be analyzed. This includes specifications like input, output, dependent variable, independent variable and actionable insights.

Planning: The algorithms and machine learning techniques need to be selected, as well as the input and output feeds and variables. The techniques will specify which variables and keywords are searched and how the results will be displayed in a table. The final results will be pulled from the table and added to a chart for actionable insights.

Coding: Code should be written to meet the requirements. It is advisable to verify that the input file is read correctly and that the expected output file is generated.

Testing: The coding and other program components should be tested and problems diagnosed.

Feedback Loop: A feedback loop should be established to see if the expected output is received. Improvements should be made based on the feedback. These steps should be repeated for continuous improvement.

Automation can revolutionize vulnerability management

Organizations can transform vulnerability management practices by introducing automation, AI and proactive capabilities. By leveraging AI in vulnerability management, organizations can enhance their security posture, stay ahead of emerging threats and protect their valuable assets and data in today’s rapidly evolving cybersecurity landscape.

However, it’s important to recognize that AI should not be seen as a standalone solution, but rather as an enhancement to traditional vulnerability management systems. The best results are achieved when AI is integrated and used alongside existing methods.

The dangers of anthropomorphizing AI: An infosec perspective https://securityintelligence.com/articles/anthropomorphizing-ai-danger-infosec-perspective/ Wed, 26 Jun 2024 13:00:00 +0000

The generative AI revolution is showing no signs of slowing down. Chatbots and AI assistants have become an integral part of the business world, whether for training employees, answering customer queries or something else entirely. We’ve even given them names and genders and, in some cases, distinctive personalities.

There are two very significant trends happening in the world of generative AI. On the one hand, the desperate drive to humanize them continues, sometimes recklessly and with little regard for the consequences. At the same time, according to Deloitte’s latest State of Generative AI in the Enterprise report, businesses’ trust in AI has greatly increased across the board over the last couple of years.

However, many customers and employees clearly don’t feel the same way. More than 75% of consumers are concerned about misinformation. Employees are worried about being replaced by AI. There’s a growing trust gap, and it’s emerged as a defining force of an era characterized by AI-powered fakery.

Here’s what that means for infosec and governance professionals.

The dangers of overtrust

The tendency to humanize AI and the degree to which people trust it highlights serious ethical and legal concerns. AI-powered ‘humanizer’ tools claim to transform AI-generated content into “natural” and “human-like” narratives. Others have created “digital humans” for use in marketing and advertising. Chances are, the next ad you see featuring a person isn’t a person at all but a form of synthetic media. Actually, let’s stick to calling it exactly what it is — a deepfake.

Efforts to personify AI are nothing new. Apple pioneered it way back in 2011 with the launch of Siri. Now, we have countless thousands more of these digital assistants, some of which are tailored to specific use cases, such as digital healthcare, customer support or even personal companionship.

It’s no coincidence that many of these digital assistants come with imagined female personas, complete with feminine names and voices. After all, studies show that people overwhelmingly prefer female voices, and that makes us more predisposed to trusting them. Though they lack physical forms, they embody a competent, dependable and efficient woman. But as tech strategist and speaker George Kamide puts it, this “reinforces human biases and stereotypes and is a dangerous obfuscation of how the technology operates.”

Ethical and security issues

It’s not just an ethical problem; it’s also a security problem since anything designed to persuade can make us more susceptible to manipulation. In the context of cybersecurity, this presents a whole new level of threat from social engineering scammers.

People form relationships with other people, not with machines. But when it becomes almost impossible to tell the difference, we’re more likely to trust AI when making sensitive decisions. We become more vulnerable; more willing to share our personal thoughts and, in the case of business, our trade secrets and intellectual property.

This presents serious ramifications for information security and privacy. Most large language models (LLMs) keep a record of every interaction, potentially using it for training future models.

Do we really want our virtual assistants to reveal our private information to future users? Do business leaders want their intellectual property to resurface in later responses? Do we want our secrets to become part of a massive corpus of text, audio and visual content to train the next iteration of AI?

If we start thinking of machines as substitutes for real human interaction, then all these things are much likelier to happen.

A magnet for cyber threats

We’re conditioned to believe that computers don’t lie, but the truth is that algorithms can be programmed to do precisely that. And even if they’re not specifically trained to deceive, they can still “hallucinate” or be exploited to reveal their training data.

Cyber threat actors are well aware of this, which is why AI is the next big frontier in cyber crime. Just as a business might use a digital assistant to persuade potential customers, so too can a threat actor use it to dupe an unsuspecting victim into taking a desired action. For example, a chatbot dubbed Love-GPT was recently implicated in romance scams thanks to its ability to generate seemingly authentic profiles on dating platforms and even chat with users.

Generative AI will only become more sophisticated as algorithms are refined and the required computing power becomes more readily available. The technology already exists to create so-called “digital humans” with names, genders, faces and personalities. Deepfake videos are far more convincing than just a couple of years ago. They’re already making their way into live video conferences, with one finance worker paying out $25 million after a video call with their deepfake chief financial officer.

The more we think of algorithms as people, the harder it becomes to tell the difference and the more vulnerable we become to those who would use the technology for harm. While things aren’t likely to get any easier, given the rapid pace of advancement in AI technology, legitimate organizations have an ethical duty to be transparent in their use of AI.

AI outpacing policy and governance

We have to accept that generative AI is here to stay. We shouldn’t underestimate its benefits either. Smart assistants can greatly decrease the cognitive load on knowledge workers and they can free up limited human resources to give us more time to focus on larger issues. But trying to pass off any kind of machine learning capabilities as substitutes for human interaction isn’t just ethically questionable; it’s also contrary to good governance and policy-making.

AI is advancing at a speed governments and regulators can’t keep up with. While the EU is putting into force the world’s first regulation on artificial intelligence — known as the EU AI Act — we still have a long way to go. Therefore, it’s up to businesses to take the initiative with stringent self-regulation concerning the security, privacy, integrity and transparency of AI and how they use it.

In the relentless quest to humanize AI, it’s easy to lose sight of those crucial elements that constitute ethical business practices. It leaves employees, customers and everyone else concerned vulnerable to manipulation and overtrust. The result of this obsession isn’t so much humanizing AI; it’s that we end up dehumanizing humans.

That’s not to suggest businesses should avoid generative AI and similar technologies. What they must do, however, is be transparent about how they use them and clearly communicate the potential risks to their employees. It’s imperative that generative AI becomes an integral part of not just your business technology strategy but also your security awareness training, governance and policy-making.

A dividing line between human and AI

In an ideal world, everything that’s AI would be labeled and verifiable as such. And if it isn’t, then it’s probably not to be trusted. Then, we could go back to worrying only about human scammers, albeit, of course, with their inevitable use of rogue AIs. In other words, perhaps we should leave the anthropomorphizing of AI to the malicious actors. That way, we at least stand a chance of being able to tell the difference.
