Topics – Security Intelligence https://securityintelligence.com Analysis and Insight for Information Security Professionals Fri, 13 Sep 2024 15:57:20 +0000 en-US hourly 1 https://wordpress.org/?v=6.6.1 https://securityintelligence.com/wp-content/uploads/2016/04/SI_primary_rgb-80x80.png Topics – Security Intelligence https://securityintelligence.com 32 32 What can businesses learn from the rise of cyber espionage? https://securityintelligence.com/articles/what-can-businesses-learn-from-rise-of-cyber-espionage/ Fri, 13 Sep 2024 13:00:00 +0000 https://securityintelligence.com/?p=448122 It’s not just government organizations that need to worry about cyber espionage campaigns — the entire business world is also a target. Multipolarity has been a defining trend in geopolitics in recent years. Rivalries between the world’s great powers continue to test the limits of globalism, resulting in growing disruption to international supply chains and […]

The post What can businesses learn from the rise of cyber espionage? appeared first on Security Intelligence.

]]>

It’s not just government organizations that need to worry about cyber espionage campaigns — the entire business world is also a target.

Multipolarity has been a defining trend in geopolitics in recent years. Rivalries between the world’s great powers continue to test the limits of globalism, resulting in growing disruption to international supply chains and economics. Global political risk has reached its highest level in decades, and even though corporate attention to geopolitics has dropped since peaking in 2022, the impact on global economic stability remains worryingly high.

Adding to this backdrop of geopolitical tension, cyberspace has become the fifth dimension of warfare. Rival nation-states and the organizations loyal to them are increasingly turning to cyber espionage to gain a strategic advantage. However, they’re not only targeting government organizations. They’re also targeting the private sector to disrupt economies and gain unauthorized access to confidential — and highly valuable — information. That means every business is a potential target, regardless of industry.

The real threat of state-sponsored cyber operatives

What makes cyber espionage so concerning is that most campaigns are carried out by state-sponsored attackers for economic, political or even military gain. Unlike rogue individuals and crime syndicates operating off the dark web — usually for financial gain — state-sponsored operatives tend to have access to the financial and human resources needed to launch highly sophisticated attacks against specific targets. And, even if a particular company isn’t likely to be targeted deliberately, that doesn’t mean they’re safe. After all, just like any other dimension of warfare, there’s always a risk of collateral damage.

For businesses, protecting against cyber espionage starts with knowing where the threats are coming from. Long gone are the days when standalone criminals and rogue groups working towards their own agendas are the greatest threat. These days, by far, the greater threat comes from nation-states as well as large enterprises that have capitalized on the opportunities of digital espionage. While the headlines have typically focused on Russia, China and the U.S., the U.K. Government Communications Headquarters (GCHQ) intelligence agency recently estimated that there are now at least 34 nation-states with advanced cyber espionage teams.

Processing the deluge of data

Further complicating matters is rapid technological advancement, particularly in AI, and all the risks and opportunities that come with it. On one hand, AI shows great promise in supporting growth and innovation. On the other, it’s also a source of risk as governments assume the dual responsibilities of fostering innovation while regulating the technology to ensure it remains a force for good.

The combination of AI and increasingly massive amounts of data means business strategy can be decided in hours and days rather than months. And no entity has more data than the governments of the world’s largest states and the organizations aligned with them. Intelligence has taken a very different form, with millions of data points being collected every second. For any entity hoping to make use of this deluge of data, AI has become an absolute necessity. The world of cyber crime and espionage is no different.

Explore AI cybersecurity solutions

AI on the frontlines

The rise of generative AI technologies has propelled AI to the frontlines of cyber warfare. State-sponsored attackers are already using tools like large language models (LLMs) to scale, inform and enhance their attacks, making AI a force multiplier in the broader threat landscape. For example, threat actors can now use tailor-made LLMs to generate malicious code or even inform reconnaissance to gain insights into potential targets.

What makes attacks like these so worrying is their widespread implications. When the world’s largest cloud providers are targeted by state-sponsored cyber espionage campaigns, there’s also a trickle-down effect, potentially involving any business that uses their services. Because of their critical role in software supply chains, state-sponsored attackers with virtually unlimited resources tend to go after the biggest targets.

Striking the right balance of cyber risk

Despite these risks, companies can’t afford to abandon their use of the major cloud vendors. After all, their platforms provide the critical infrastructure that today’s organizations need to scale and innovate. Nonetheless, organizations must proactively protect against these threats by layering on a zero trust architecture, conducting regular security audits and ensuring that all sensitive information is encrypted regardless of where it resides. That means they need to be strategic in choosing their vendors, as well as building security initiatives that align with their specific requirements.

We also need to remember that the biggest players in global software supply chains also have the resources to keep ahead of cyber espionage threats, even if there’s no such thing as being 100% secure. AI has become an undisputable necessity in information security, but it’s also a double-edged sword. Rogue states and cyber criminals are using it to scale their attacks and launch highly convincing social engineering campaigns. However, AI also offers the only way to effectively improve threat detection and response times. Just as you can’t fight in a modern war with sticks and stones, neither can you defend against today’s threats without cutting-edge technology.

Innovation is the key to successful security

In the end, while no business will ever be immune to cyberattacks, it’s important to remember that by far the greatest risk comes with a failure to innovate. As it’s often said, “we’ve always done it this way” are the costliest words in the business world. Even in the case of sophisticated state-sponsored attackers, attempted data breaches are far likelier to be successful when they exploit vulnerabilities in outdated infrastructures and security systems.

To effectively protect against the rising tide of AI-driven cyber espionage, businesses need to continuously monitor, review and update their security systems. Layering on AI has become a necessary part of that process thanks to its ability to augment real-time threat detection and response capabilities. Regardless of one’s opinions about AI, it’s here to stay, and it’s vital for businesses to strike the right balance by strategically incorporating AI as a tool to protect against the next generation of state-sponsored cyber threats.

To learn how IBM X-Force can help you with anything regarding cybersecurity including incident response, threat intelligence, or offensive security services schedule a meeting here.

If you are experiencing cybersecurity issues or an incident, contact X-Force to help: US hotline 1-888-241-9812 | Global hotline (+001) 312-212-8034.

The post What can businesses learn from the rise of cyber espionage? appeared first on Security Intelligence.

]]>
How I got started: AI security executive https://securityintelligence.com/articles/how-i-got-started-ai-security-executive/ Thu, 12 Sep 2024 13:00:00 +0000 https://securityintelligence.com/?p=448118 Artificial intelligence and machine learning are becoming increasingly crucial to cybersecurity systems. Organizations need professionals with a strong background that mixes AI/ML knowledge with cybersecurity skills, bringing on board people like Nicole Carignan, Vice President of Strategic Cyber AI at Darktrace, who has a unique blend of technical and soft skills. Carignan was originally a […]

The post How I got started: AI security executive appeared first on Security Intelligence.

]]>

Artificial intelligence and machine learning are becoming increasingly crucial to cybersecurity systems. Organizations need professionals with a strong background that mixes AI/ML knowledge with cybersecurity skills, bringing on board people like Nicole Carignan, Vice President of Strategic Cyber AI at Darktrace, who has a unique blend of technical and soft skills. Carignan was originally a dance major but was also working for NASA as a hardware IT engineer, which forged her path into AI and cybersecurity.

Where did you go to college?

Carignan: I went to Texas A&M University. I got a computer science degree, and the specialized track that I followed was in mathematics, artificial intelligence, computer/human interaction and assembly. My thesis was on setting up a maps application using graph theory in order to facilitate the best navigation — stuff that’s common nowadays with applications like Google Maps. But that was the type of AI applications we had back then, and it is cool to see how it’s evolved over time.

What was your first job in IT?

Carignan: I originally had a dance scholarship, but I was already working for NASA, supporting systems in mission control. They said, we will keep you employed throughout college and after if you get a computer science or engineering degree, so that’s how I got into the field. I started off in the federal IT space.

What made you decide to pursue cybersecurity?

Carignan: I got recruited into the intelligence community. Even though that was an IT role, it had a heavy emphasis on security. This was in 2000, so cybersecurity wasn’t really an industry yet. A few years later, I was on an overseas trip for work and I got hacked. That was actually what piqued my interest in cybersecurity, and I took a pretty big detour from my original plans.

Explore cybersecurity learning paths

What facilitated your move to AI?

Carignan: I always enjoyed the data analytics component of machine learning and AI. A decade into my career in the intelligence community, I joined a big data company that had large volumes of network telemetry and access to 300 different cyber threat intelligence feeds. Around that time, the typical journey of a security company was the transition into experimentation of supervised machine learning classifiers, and we started with classifying content of endpoints and communication language, moving into classification of patterns of reported attacks.

What is your job today?

Carignan: So I had the cross-section of data science, machine learning and security in my job experience, and the opportunity at Darktrace seemed like a perfect fit. They weren’t tackling the security problem with big data machine learning like a lot of other organizations, but rather they were looking at a much more customized, targeted, specific area by building out unsupervised machine learning and algorithms to understand every asset’s pattern of life within the environment. We do have the use of generative AI and LLMs, but we use that for semantic analysis and understanding changes in communications between email partners. Overall, what I saw Darktrace doing with very different machine learning techniques, I was intrigued to come on board.

What are some of the soft skills that helped you in your security and AI career?

Carignan: So, I’m a theater kid and a dance major. I think those skills really prepared me for the level of communication and collaboration that is needed to tackle some of the more complex problems that we face across the industry.

Any words of wisdom you’d like to share with people who are considering a career in AI and cybersecurity?

Carignan: I think it is really important to have a diversity of thought within your team. I’m a big advocate of neurodiversity. What drew me to Darktrace was how much they had achieved in equity for gender, and that they are trying to achieve with other minority groups. Cybersecurity isn’t a silo industry anymore, not with cloud, SaaS applications, AI. We need to approach enveloping these technologies into security across industries, and we can’t do that without diversity of thought.

The post How I got started: AI security executive appeared first on Security Intelligence.

]]>
ChatGPT 4 can exploit 87% of one-day vulnerabilities: Is it really that impressive? https://securityintelligence.com/articles/chatgpt4-exploit-87-percent-vulnerabilities-really-impressive/ Tue, 10 Sep 2024 13:00:00 +0000 https://securityintelligence.com/?p=448109 After reading about the recent cybersecurity research by Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang, I had questions. While initially impressed that ChatGPT 4 can exploit the vast majority of one-day vulnerabilities, I started thinking about what the results really mean in the grand scheme of cybersecurity. Most importantly, I wondered how a […]

The post ChatGPT 4 can exploit 87% of one-day vulnerabilities: Is it really that impressive? appeared first on Security Intelligence.

]]>

After reading about the recent cybersecurity research by Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang, I had questions. While initially impressed that ChatGPT 4 can exploit the vast majority of one-day vulnerabilities, I started thinking about what the results really mean in the grand scheme of cybersecurity. Most importantly, I wondered how a human cybersecurity professional’s results for the same tasks would compare.

To get some answers, I talked with Shanchieh Yang, Director of Research at the Rochester Institute of Technology’s Global Cybersecurity Institute. He had actually pondered the same questions I did after reading the research.

What are your thoughts on the research study?

Yang: I think that the 87% may be an overstatement, and I think it would be very helpful to the community if the authors shared more details about their experiments and code, as they’d be very helpful for the community to look at it. I look at large language models (LLMs) as a co-pilot for hacking because you have to give them some human instruction, provide some options and ask for user feedback. In my opinion, an LLM is more of an educational training tool instead of asking LRM to hack automatically. I also wondered if the study referred to anonymous, meaning with no human intervention at all.

Compared to even six months ago, LLMs are pretty powerful in providing guidance on how a human can exploit a vulnerability, such as recommending tools, giving commands and even a step-by-step process. They are reasonably accurate but not necessarily 100% of the time. In this study, one-day refers to what could be a pretty big bucket to a vulnerability that’s very similar to past vulnerabilities or totally new malware where the source code is not similar to anything the hackers have seen before. In that case, there isn’t much an LLM can do against the vulnerability because it requires human understanding in trying to break into something new.

The results also depend on whether the vulnerability is a web service, SQL server, print server or router. There are so many different computing vulnerabilities out there. In my opinion, claiming 87% is an overstatement because it also depends on how many times the authors tried. If I’m reviewing this as a paper, I would reject the claim because there is too much generalization.

If you timed a group cybersecurity professional to an LLM agent head-to-head against a target with unknown but existing vulnerabilities, such as a newly released Hack the Box or Try Me Hack, who would complete the hack the fastest?

The experts — the people who are actually world-class hackers, ethical hackers, white hackers — they would beat the LLMs. They have a lot of tools under their belts. They have seen this before. And they are pretty quick. The problem is that an LLM is a machine, meaning that even the most state-of-the-art models will not give you the comments unless you break the guardrail. With an LLM, the results really depend on the prompts that were used. Because the researchers didn’t share the code, we don’t know what was actually used.

Any other thoughts on the research?

Yang: I would like the community to understand that responsible dissemination is very important — reporting something not just to get people to cite you or to talk about your stuff, but be responsible. Sharing the experiment, sharing the code, but also sharing what could be done.

The post ChatGPT 4 can exploit 87% of one-day vulnerabilities: Is it really that impressive? appeared first on Security Intelligence.

]]>
How cyber criminals are compromising AI software supply chains https://securityintelligence.com/articles/cyber-criminals-compromising-ai-software-supply-chains/ Fri, 06 Sep 2024 13:00:00 +0000 https://securityintelligence.com/?p=448102 With the adoption of artificial intelligence (AI) soaring across industries and use cases, preventing AI-driven software supply chain attacks has never been more important. Recent research by SentinelOne exposed a new ransomware actor, dubbed NullBulge, which targets software supply chains by weaponizing code in open-source repositories like Hugging Face and GitHub. The group, claiming to […]

The post How cyber criminals are compromising AI software supply chains appeared first on Security Intelligence.

]]>

With the adoption of artificial intelligence (AI) soaring across industries and use cases, preventing AI-driven software supply chain attacks has never been more important.

Recent research by SentinelOne exposed a new ransomware actor, dubbed NullBulge, which targets software supply chains by weaponizing code in open-source repositories like Hugging Face and GitHub. The group, claiming to be a hacktivist organization motivated by an anti-AI cause, specifically targets these resources to poison data sets used in AI model training.

No matter whether you use mainstream AI solutions, integrate them into your existing tech stacks via application programming interfaces (APIs) or even develop your own models from open-source foundation models, the entire AI software supply chain is now squarely in the spotlight of cyberattackers.

Poisoning open-source data sets

Open-source components play a critical role in the AI supply chain. Only the largest enterprises have access to the vast amounts of data needed to train a model from scratch, so they have to rely heavily on open-source data sets like LAION 5B or Common Corpus. The sheer size of these data sets also means it’s extremely difficult to maintain data quality and compliance with copyright and privacy laws. By contrast, many mainstream generative AI models like ChatGPT are black boxes in that they use their own curated data sets. This comes with its own set of security challenges.

Verticalized and proprietary models may refine open-source foundation models with additional training using their own data sets. For example, a company developing a next-generation customer service chatbot might use its previous customer communications records to create a model tailored to their specific needs. Such data has long been a target for cyber criminals, but the meteoric rise of generative AI has made it all the more attractive to nefarious actors.

By targeting these data sets, cyber criminals can poison them with misinformation or malicious code and data. Then, once that compromised information enters the AI model training process, we start to see a ripple effect spanning the entire AI software lifecycle. It can take thousands of hours and a vast amount of computing power to train a large language model (LLM). It’s an enormously costly endeavor, both financially and environmentally. However, if the data sets used in the training have been compromised, chances are the whole process has to start from scratch.

Explore AI cybersecurity solutions

Other attack vectors on the rise

Most AI software supply chain attacks take place through backdoor tampering methods like those mentioned above. However, that’s certainly not the only way, especially as cyberattacks targeting AI systems become increasingly widespread and sophisticated. Another method is the flood attack, where attackers send huge amounts of non-malicious information through an AI system in an attempt to cover up something else — such as a piece of malicious code.

We’re also seeing a rise in attacks against APIs, especially those lacking robust authentication procedures. APIs are essential for integrating AI into the myriad functions businesses now use it for, and while it’s often assumed that API security is on the solution vendor, in reality, it’s very much a shared responsibility.

Recent examples of AI API attacks include the ZenML compromise or the Nvidia AI Platform vulnerability. While both have been addressed by their respective vendors, more will follow as cyber criminals expand and diversify attacks against software supply chains.

Safeguarding your AI projects

None of this should be taken as a warning to stay away from AI. After all, you wouldn’t stop using email because of the risk of phishing scams. What these developments do mean is that AI is now the new frontier in cyber crime, and security must be hard-baked into everything you do when developing, deploying, using and maintaining AI-powered technologies — whether they’re your own or provided by a third-party vendor.

To do that, businesses need complete traceability for all components used in AI development. They also need full explainability and verification for every AI-generated output. You can’t do that without keeping humans in the loop and putting security at the forefront of your strategy. If, however, you view AI solely as a way to save time and cut costs by laying off workers, with little regard for the consequences, then it’s just a matter of time before disaster strikes.

AI-powered security solutions also play a critical role in countering the threats. They’re not a replacement for talented security analysts but a powerful augmentation that helps them do what they do best on a scale that would otherwise be impossible to achieve.

The post How cyber criminals are compromising AI software supply chains appeared first on Security Intelligence.

]]>
New report shows ongoing gender pay gap in cybersecurity https://securityintelligence.com/articles/new-report-shows-gender-pay-gap-in-cybersecurity/ Thu, 05 Sep 2024 13:00:00 +0000 https://securityintelligence.com/?p=448099 The gender gap in cybersecurity isn’t a new issue. The lack of women in cybersecurity and IT has been making headlines for years — even decades. While progress has been made, there is still significant work to do, especially regarding salary. The recent  ISC2 Cybersecurity Workforce Study highlighted numerous cybersecurity issues regarding women in the […]

The post New report shows ongoing gender pay gap in cybersecurity appeared first on Security Intelligence.

]]>

The gender gap in cybersecurity isn’t a new issue. The lack of women in cybersecurity and IT has been making headlines for years — even decades. While progress has been made, there is still significant work to do, especially regarding salary.

The recent  ISC2 Cybersecurity Workforce Study highlighted numerous cybersecurity issues regarding women in the field. In fact, only 17% of the 14,865 respondents to the survey were women.

Pay gap between men and women

One of the most concerning disparities revealed by the study is a persistent pay gap. The study found that U.S. male cybersecurity professionals are paid higher on average than females of the same level. The results show an average salary of $148,035 for men and $141,066 for women. A pay gap also exists globally, with the average global salary for women being $109,609 and for men $115,003.

ISC2 also found a gender pay disparity among people of color in the U.S. The study found that men of color earned an average of $143,610, and women of color earned $135,630. However, the study wasn’t able to compare salaries for people of color on a global basis.

Lack of women in cybersecurity

The study also showed a gap between the number of men and the number of women who work in cybersecurity. Based on the results, ISC2 found that only 20% to 25% of people working in the cybersecurity field are women. Because the percentage of women under 30 years of age in cybersecurity was 26% compared to 16% among women between 39 and 44, the report created optimism that more younger women are choosing cybersecurity as a career.

Interestingly, teams with women on them seemed to have a higher proportion of women than of men, illustrating that women likely seek out teams and companies that have other women working in cybersecurity. Women reported a higher number of women team members (30%) compared to men (22%).

However, 11% of security teams were found to have no women at all, with only 4% saying that it was an equal split between men and women. The industries with the highest number of no-women security teams included IT services (19%), financial services (13%) and government (11%). Mid-sized organizations with 100 to 999 employees were most likely to have security teams with no women.

However, the report also found several areas of concern regarding women’s experiences working in the cybersecurity field:

  • 29% of women in cybersecurity reported discrimination at work, with 19% of men reporting discrimination
  • 36% of women felt they could not be authentic at work, with 29% of men reporting this sentiment
  • 78% of women felt it was essential for their security team to succeed, compared to 68% of men
  • 66% of women feel that diversity within the security team contributed to the security team’s success, compared to 51% of men

Using hiring initiatives to increase women on security teams

The gaps in cybersecurity — both pay and gender — won’t be resolved without a focused effort by industry and companies. Many companies are seeing results by adopting specific DEI hiring initiatives, such as skills-based hiring, and using job descriptions that refer to DEI programs/goals.

The ISC2 report found that businesses using skills-based hiring have an average of 25.5% women in their workforces compared with 22.2% for businesses using other methods. By including DEI program goals in job descriptions, companies can also increase the number of women on their security teams, with 26.6% for those using these types of job descriptions vs. 22.3% for women at those that do not.

Lack of perspectives hurts cybersecurity teams

Without women on cybersecurity teams, security teams lack the wide range of experience and perspectives needed to reduce security risks. Organizations can improve their security by focusing on increasing the number of women on their team, which also means eliminating the pay gap.

“Broader than cybersecurity, there’s a body of research that says the more perspectives you bring to the table, the better off you will be at problem-solving,” Clar Rosso, CEO of ISC2, told Dark Reading. “In cybersecurity, which is a very complex, growing threat landscape, the more perspectives that we bring to the table to solve problems, the more likely we will be able to impact our cyber defense.”

The post New report shows ongoing gender pay gap in cybersecurity appeared first on Security Intelligence.

]]>
Cost of a data breach: Cost savings with law enforcement involvement https://securityintelligence.com/articles/cost-of-a-data-breach-cost-savings-law-enforcement/ Tue, 03 Sep 2024 13:00:00 +0000 https://securityintelligence.com/?p=448072 For those working in the information security and cybersecurity industries, the technical impacts of a data breach are generally understood. But for those outside of these technical functions, such as executives, operators and business support functions, “explaining” the real impact of a breach can be difficult. Therefore, explaining impacts in terms of quantifiable financial figures […]

The post Cost of a data breach: Cost savings with law enforcement involvement appeared first on Security Intelligence.

]]>

For those working in the information security and cybersecurity industries, the technical impacts of a data breach are generally understood. But for those outside of these technical functions, such as executives, operators and business support functions, “explaining” the real impact of a breach can be difficult. Therefore, explaining impacts in terms of quantifiable financial figures and other simple metrics creates a relatively level playing field for most stakeholders, including law enforcement.

IBM’s 2024 Cost of a Data Breach (“CODB”) Report helps to explain the financial impact when law enforcement is involved in the response. Specifically, the CODB report, which studied over 600 organizations, found that when law enforcement assisted the victim during a ransomware attack the cost of a breach lowered by an average of $1 million, excluding the cost of any ransom paid. That is an increase compared to the 2023 CODB Report when the difference was closer to $470,000.

But law enforcement involvement is not ubiquitous. For example, when an organization faced a ransomware attack only 52% of those surveyed involved law enforcement, but the majority of those (63%) also did not end up paying the ransom. Moreover, the CODB Report found law enforcement support helped reduce the time to identify and contain a breach from 297 days to 281.

So why are nearly half of victims not reaching out to law enforcement? Let us look at a few possibilities.

Read the full report

Awareness, embarrassment, secrecy and trust

Outside of cyberspace, a 911 call to local law enforcement is a pretty reasonable first call when falling victim to a crime. But there is no “911” to dial for a cyberattack, and certainly no menu options for ransomware, data exfiltration or destructive attacks. Even experienced incident responders will likely share experiences where opening questions to the victim are, “Have you contacted law enforcement?” or “Have you reported this IC3?” The first answer is often “no” or “not yet,” while the second is “I see what?” Therefore, the awareness issue is still prevalent.

We must also consider emotional responses, such as embarrassment. Think of the employee who may be thinking, “Was I responsible for this by clicking a wrong link?” Embarrassment leads to reluctance, therefore both organizations and law enforcement must message better to their people and partners that reaching out for help is okay. Moreover, add in another psychological factor: additional threats made by the actor demanding victims not contact law enforcement.

There is the secrecy aspect, especially from a business impact perspective. Decision makers may not yet know the business impact of law enforcement involvement. Will the news go public? Will competitors find out? What privacy assurances are available? All of these are reasonable questions, and likely to be important with the regulatory requirements of reporting cyber crimes.

Trust ties all these factors together, ranging from benign “Can I trust law enforcement?” to explicit “We do not trust law enforcement.” These gaps must be bridged.

Building relationships and the future of reporting

Managing a crisis requires competence, but also trust, so exchange business cards before the incident. The issues identified can be proactively addressed by reaching out to law enforcement partners when you do not need them. Learn the capabilities of your local agencies; request meet-and-greets with those in your state and federal regions.

Remember, there is a little “Customer Service 101” here. When the incident hits, what do you want: the general helpline, or somebody you know and have a bond with?

Moreover, the future of cyber crime reporting is becoming more of a public matter, such as SEC reporting rules. Having relationships in place will be beneficial. They can buy time and serve as extra hands.

The case for involving law enforcement from a cost-savings perspective appears pretty transparent. Therefore, it is more of a cultural issue. Make friends, build two-way trust and establish protocols. These can go a long way to reduce the pain and cost of an attack.

The post Cost of a data breach: Cost savings with law enforcement involvement appeared first on Security Intelligence.

]]>
How to embrace Secure by Design principles while adopting AI https://securityintelligence.com/posts/how-to-embrace-secure-by-design-while-adopting-ai/ Thu, 29 Aug 2024 13:00:00 +0000 https://securityintelligence.com/?p=448059 The rapid rise of generative artificial intelligence (gen AI) technologies has ushered in a transformative era for industries worldwide. Over the past 18 months, enterprises have increasingly integrated gen AI into their operations, leveraging its potential to innovate and streamline processes. From automating customer service to enhancing product development, the applications of gen AI are […]

The post How to embrace Secure by Design principles while adopting AI appeared first on Security Intelligence.

]]>

The rapid rise of generative artificial intelligence (gen AI) technologies has ushered in a transformative era for industries worldwide. Over the past 18 months, enterprises have increasingly integrated gen AI into their operations, leveraging its potential to innovate and streamline processes. From automating customer service to enhancing product development, the applications of gen AI are vast and impactful. According to a recent IBM report, approximately 42% of large enterprises have adopted AI, with the technology capable of automating up to 30% of knowledge work activities in various sectors, including sales, marketing, finance and customer service.

However, the accelerated adoption of gen AI also brings significant risks, such as inaccuracy, intellectual property concerns and cybersecurity threats. Of course, this is only one instance in a series of enterprises adopting new technology, such as cloud computing, only to realize afterward that incorporating security principles should have been a priority from the start. Now, we can learn from those past missteps and adopt Secure by Design principles early while developing gen AI-based enterprise applications.

Lessons from the cloud transformation rush

The recent wave of cloud adoption provides valuable insights into prioritizing security early in any technology transition. Many organizations embraced cloud technologies for benefits like cost reduction, scalability and disaster recovery. However, the haste to reap these benefits often led to oversights in security, resulting in high-profile breaches due to misconfigurations. The following chart shows the impact of these misconfigurations. It illustrates the cost and frequency of data breaches by initial attack vector, where cloud misconfigurations are shown to have a significant average cost of $3.98 million:

Figure 1: Measured in USD millions; percentage of all breaches (IBM Cost of a Data Breach report 2024)

One notable incident occurred in 2023: A misconfigured cloud storage bucket exposed sensitive data from multiple companies, including personal information like email addresses and social security numbers. This breach highlighted the risks associated with improper cloud storage configurations and the financial impact due to reputational damage.

Similarly, a vulnerability in an enterprise workspace Software-as-a-Service (SaaS) application resulted in a major data breach in 2023, where unauthorized access was gained through an unsecured account. This brought to light the impact of inadequate account management and monitoring. These incidents, among many others (captured in the recently published IBM Cost of a Data Breach Report 2024), underline the critical need for a Secure by Design approach, ensuring that security measures are integral to these AI adoption programs from the very beginning.

Need for early security measures in AI transformational programs

As enterprises rapidly integrate gen AI into their operations, the importance of addressing security from the beginning cannot be overstated. AI technologies, while transformative, introduce new security vulnerabilities. Recent breaches related to AI platforms demonstrate these risks and their potential impact on businesses.

Here are some examples of AI-related security breaches in the last couple of months:

1. Deepfake scams: In one case, a UK energy firm’s CEO was duped into transferring $243,000, believing he was speaking with his boss. The scam utilized deepfake technology, highlighting the potential for AI-driven fraud.

2. Data poisoning attacks: Attackers can corrupt AI models by introducing malicious data during training, leading to erroneous outputs. This was seen when a cybersecurity firm’s machine learning model was compromised, causing delays in threat response.

3. AI model exploits: Vulnerabilities in AI applications, such as chatbots, have led to many incidents of unauthorized access to sensitive data. These breaches underscore the need for robust security measures around AI interfaces.

Business implications of AI security breaches

The consequences of AI security breaches are multifaceted:

  • Financial losses: Breaches can result in direct financial losses and significant costs related to mitigation efforts
  • Operational disruption: Data poisoning and other attacks can disrupt operations, leading to incorrect decisions and delays in addressing threats
  • Reputational damage: Breaches can damage a company’s reputation, eroding customer trust and market share

As enterprises rapidly adopt their customer-facing applications to adopt gen AI technologies, it is important to have a structured approach to securing them to reduce the risk of having their businesses interrupted by cyber adversaries.

A three-pronged approach to securing gen AI applications

To effectively secure gen AI applications, enterprises should adopt a comprehensive security strategy that spans the entire AI lifecycle. There are three key stages:

1. Data collection and handling: Ensure the secure collection and handling of data, including encryption and strict access controls.

2. Model development and training: Implement secure practices during development, training and fine-tuning of AI models to protect against data poisoning and other attacks.

3. Model inference and live use: Monitor AI systems in real-time and ensure continuous security assessments to detect and mitigate potential threats.

These three stages should be considered alongside the Shared Responsibility model of a typical cloud-based AI platform (shown below).

Figure 2: Secure gen AI usage – Shared Responsibility matrix

In the IBM Framework for Securing Generative AI, you can find a detailed description of these three stages and security principles to follow. They are combined with cloud security controls at the underlying infrastructure layer, which runs large language models and applications.

Figure 3: IBM Framework for securing generative AI

Balancing progress with security

The transition to gen AI enables enterprises to fuel innovation in their business applications, automate complex tasks and improve efficiency, accuracy and decision-making while reducing costs and increasing the speed and agility of their business processes.

As seen with the cloud adoption wave, prioritizing security from the beginning is crucial. By incorporating security measures into the AI adoption process early on, enterprises can convert past missteps into critical milestones and protect themselves from sophisticated cyber threats. This proactive approach ensures compliance with rapidly evolving AI regulatory requirements, protects enterprises and their client’s sensitive data and maintains the trust of stakeholders. This way, businesses can achieve their AI strategic goals securely and sustainably.

How IBM can help

IBM offers comprehensive solutions to support enterprises in securely adopting AI technologies. Through consulting, security services and a robust AI security framework, IBM is helping organizations build and deploy AI applications at scale, ensuring transparency, ethics and compliance. IBM’s AI Security Discovery workshops are a critical first step, helping clients identify and mitigate security risks early in their AI adoption journey.

For more information, please check out these resources:

The post How to embrace Secure by Design principles while adopting AI appeared first on Security Intelligence.

]]>
Cost of data breaches: The business case for security AI and automation https://securityintelligence.com/articles/cost-of-data-breaches-business-case-for-security-ai-automation/ Tue, 27 Aug 2024 13:00:00 +0000 https://securityintelligence.com/?p=448051 As Yogi Berra said, “It’s déjà vu all over again.” If the idea of the global average costs of data breaches rising year over year feels like more of the same, that’s because it is. Data protection solutions get better, but so do threat actors. The other broken record is the underuse or misuse of […]

The post Cost of data breaches: The business case for security AI and automation appeared first on Security Intelligence.

]]>

As Yogi Berra said, “It’s déjà vu all over again.” If the idea of the global average costs of data breaches rising year over year feels like more of the same, that’s because it is. Data protection solutions get better, but so do threat actors. The other broken record is the underuse or misuse of technologies that can help safeguard data, such as artificial intelligence and automation.

IBM’s 2024 Cost of a Data Breach (CODB) Report studied 604 organizations across 17 industries in 16 countries and regions, and breaches that ranged from 2,100 to 113,000 compromised records, and a key finding was that use of modern technologies, on average, reduced breach costs by $2.2 million. And for CISOs and security teams seeking investment, talking dollars and cents — and not bits and bytes — is what will resonate with your audience.

Where are the savings being realized?

Cyber resilience is more than just disaster recovery; it’s an important component. A resilient program blends both proactive and reactive workflows, including the technology involved. And when the individual pieces work well together with the proper support, the result is a sum larger than its parts.

Indeed, the 2024 CODB Report found that when AI and automation were deployed extensively across the preventative or proactive workflows (e.g., attack surface management, red-teaming, posture management, etc.), organizations realized the savings. There is an interesting nexus here, as taking a “prevention over response” approach may, in fact, be driven by greater AI threats and use.

Moreover, the COBD Report identified that — yet again! — the skills shortage is impacting the industry. With staff feeling overwhelmed, particularly during incident response cases, artificial intelligence can be that support tool to retain staff. Security and managerial staff should be mindful that not investing in tools and solutions can result in losing highly skilled staff who have institutional knowledge. What is the unintended consequence here? Extra costs to re-staff the positions.

Read the full report

Plan as a unit, implement as a unit

For organizations still addressing the cybersecurity issue in separate silos or with limited visibility, they are increasing the entire organization’s risk profile, not just the security function of the business. We live in a time where technology is mission-critical to deliver services, it is no longer about delivery efficiencies and competitiveness. Therefore, keep these issues in mind when planning as a unit:

  1. Eliminate data blind spots. Many of us call these “the crown jewels” of the organization, but with all the data produced these days and the difficulties surrounding data lifecycle management, what’s really under the hood? Consider a data security posture management solution and be mindful of shadow data.
  2. Security-first approach. Easier said than done, but “designing in” security to workflows and solutions — albeit a bit more difficult to deploy — means eliminating unnecessary, often fragile, complexities that are complicated and expensive to fix after an incident.
  3. Culture, culture, culture. Change is difficult to institute, especially new technologies, such as generative AI. Get people to buy into the security mindset, but not at the cost of business delivery. Remember, they are not only important users but are also key to successful implementations and improvements.

It’s being used, so use it wisely

The CODB Report also identified two of three organizations that studied deploying security AI and automation in their security operation centers. With this type of adoption, ubiquity is likely on the horizon.

Therefore, the key is to use the technology smartly, in a manner that addresses the organization’s risk profile and makes business sense. The business case becomes easier when the average cost of a data breach, according to the report, is USD 4.88 million. The findings over the last year thus far show that the investment can be worthwhile.

The post Cost of data breaches: The business case for security AI and automation appeared first on Security Intelligence.

]]>
How Paris Olympic authorities battled cyberattacks, and won gold https://securityintelligence.com/articles/paris-olympic-authorities-battled-cyberattacks-won-gold/ Fri, 23 Aug 2024 13:00:00 +0000 https://securityintelligence.com/?p=448044 The Olympic Games Paris 2024 was by most accounts a highly successful Olympics. Some 10,000 athletes from 204 nations competed in 329 events over 16 days. But before and during the event, authorities battled Olympic-size cybersecurity threats coming from multiple directions. In preparation for expected attacks, authorities took several proactive measures to ensure the security […]

The post How Paris Olympic authorities battled cyberattacks, and won gold appeared first on Security Intelligence.

]]>

The Olympic Games Paris 2024 was by most accounts a highly successful Olympics. Some 10,000 athletes from 204 nations competed in 329 events over 16 days. But before and during the event, authorities battled Olympic-size cybersecurity threats coming from multiple directions.

In preparation for expected attacks, authorities took several proactive measures to ensure the security of the event.

Cyber vigilance program

The Paris 2024 Olympics implemented advanced threat intelligence, real-time threat monitoring and incident response expertise. This program aimed to prepare Olympic-facing organizations for emerging cyber threats by offering a blueprint for cybersecurity strategies.

High alert and incident monitoring

The French Cybersecurity Agency (ANSSI) was on high alert throughout the Olympics, monitoring for attacks that could disrupt critical operations like organizing committees, ticketing, venues and transport.

Extensive use of AI

The Paris Olympics used AI to secure critical information systems, protect sensitive data and raise awareness within the Games ecosystem. Additionally, under France’s Olympics and Paralympics Games Law, a pilot program allowed the use of “algorithmic video surveillance.” Because of Europe’s strong privacy laws, the surveillance did not allow the use of biometric identification or automated data matching. Instead, AI scanned video for scenarios, such as abandoned bags, the presence of weapons, unusual crowd movements and fires.

Collaboration and training

French authorities collaborated with international organizations and conducted extensive training for cybersecurity teams. They focused on understanding threat actor tactics and employed frameworks like MITRE ATT&CK to anticipate and mitigate potential attacks.

Despite the precautions, the Grand Palais, a venue hosting Olympic events, was hit by a ransomware attack. French authorities quickly responded with containment measures, showcasing their preparedness to handle such incidents.

How did the Olympic cybersecurity measures hold up?

Sifting through available facts in the aftermath, the reality of the threats is becoming clearer.

French authorities announced that more than 140 cyberattacks struck the games, but did not disrupt events. ANSSI detected 119 “low-impact” “security events” and 22 incidents where malicious actors successfully gained access to information systems between July 26 and August 11, 2024. Many of these caused system downtime, often through denial-of-service (DoS) attacks.

Other attempted cyberattacks were aimed at Paris, but not directly at the Olympic venue infrastructure. For example, the Grand Palais and some 40 other museums in France were targeted by a ransomware attack in early August, which was thwarted due to rapid response.

Thwarting a wide swath of potential threats

Authorities had to battle not only attacks coming through the global internet but also local threats. The Olympic Games is unique in that it attracts government officials from France and all over the world, then places them in close proximity to large numbers of unvetted international visitors. Spies and data thieves no doubt saw this as a rare opportunity to steal confidential data of high monetary and geopolitical value. A range of techniques enables this kind of data theft, including Wi-Fi hotspot man-in-the-middle attacks and theft of physical devices.

Well before the games, Olympic organizers battled with ticket scams. Researchers at threat intelligence provider QuoIntelligence found that fraudulent websites were selling fake tickets to the Olympics, mainly to Russians unable to buy legitimate tickets because of European sanctions imposed because of Russia’s invasion of Ukraine. Organizers identified 77 fake ticket resale sites.

One of the most prominent threats was the spread of disinformation. Russian groups, such as Storm-1679, widely believed to be a spinoff of Russia’s Internet Research Agency “troll farm,” had been using AI-generated content to create fake news and images, aiming to discredit the International Olympic Committee and instill fear among potential attendees. These campaigns often involve fabricated stories about terrorism and other threats, leveraging AI to enhance their credibility and reach.

In the end, despite enormous efforts by malicious actors, state-sponsored attackers and others, the Games succeeded without major disruption, violence or data theft.

The post How Paris Olympic authorities battled cyberattacks, and won gold appeared first on Security Intelligence.

]]>
Cost of a data breach: The industrial sector https://securityintelligence.com/articles/cost-of-a-data-breach-industrial-sector/ Tue, 20 Aug 2024 13:00:00 +0000 https://securityintelligence.com/?p=448020 Industrial organizations recently received a report card on their performance regarding data breach costs. And there’s plenty of room for improvement. According to the 2024 IBM Cost of a Data Breach (CODB) report, the average total cost of a data breach in the industrial sector was $5.56 million. This reflects an 18% increase for the […]

The post Cost of a data breach: The industrial sector appeared first on Security Intelligence.

]]>

Industrial organizations recently received a report card on their performance regarding data breach costs. And there’s plenty of room for improvement.

According to the 2024 IBM Cost of a Data Breach (CODB) report, the average total cost of a data breach in the industrial sector was $5.56 million. This reflects an 18% increase for the sector compared to 2023.

These figures place the industrial sector in third place for breach costs among the 17 industries studied. On average, data breaches cost industrial organizations 13% more than the $4.88 million global average.

Clearly, the industrial sector is facing strong headwinds when it comes to dealing with data breaches. Let’s take a closer look at some of the challenges tied to the sector, as well as solutions that can help reduce the impact of cyberattacks on industrial organizations.

Highest increase in cost of data breach

The industrial sector experienced the highest data breach cost increase of all industries surveyed in the 2024 COBD report, rising by an average of $830,000 per breach over last year. Organizations in this sector are highly sensitive to operational interruptions since a manufacturing plant shutdown can be devastating. For example, unplanned downtime, perhaps due to ransomware, could cost up to $125,000 per hour.

Part of the problem may be found in the time to identify and contain a data breach at industrial organizations. At 199 days to identify and 73 days to contain, this is above the global average of 194 days to identify and 64 days to contain.

The 2024 COBD report also revealed the root causes of a data breach for industrial organizations, which are:

  • Malicious attack (47%)
  • IT failure (26%)
  • Human error (27%)
Read the report

Regulations for the industrial sector

The industrial sector faces unique regulations that also may contribute to data breach costs. For example, the North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP) applies to the energy sector, requiring stringent cybersecurity measures to protect the bulk power system. This includes asset management, personnel training, incident reporting and recovery plans​. Non-compliance with NERC CIP standards can result in fines of up to $1 million per day per violation, highlighting the critical importance of adhering to these cybersecurity measures​.

Furthermore, the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA) aims to mandate how critical infrastructure organizations will be required to report cyber incidents to the federal government. Within the realm of critical infrastructure, a large part of the industrial sector will be required to adhere to these requirements as well.

Industrial sector cybersecurity needs

The industrial sector requires specialized cybersecurity solutions due to its reliance on operational technology (OT) and industrial control systems (ICS). Also, the interconnected nature of industrial supply chains makes vendor risk management and secure communication channels imperative.

For the industrial sector, hybrid cloud environments are evolving and scaling faster than ever, creating a larger and more complex attack surface. To meet these demands, Security Information and Event Management (SIEM) can help industrial organizations detect and prioritize threats. SIEM provides real-time visibility, enabling the rapid identification and response to potential security incidents.

AI and automation cut data breach costs

The 2024 CODB report also revealed that only 32% of industrial organizations implement extensive use of security AI and automation. Meanwhile, a $1.9 million cost savings was shown with extensive use of security AI and automation versus no security AI and automation.

AI-powered automation can accelerate threat response dramatically and drive down data breach costs considerably. For industrial organizations, this can minimize business risk while reducing damages and service interruptions.

Let’s hope that next year’s CODB report will show a new trend for the industrial sector, one that reveals costs are coming down.

The post Cost of a data breach: The industrial sector appeared first on Security Intelligence.

]]>