Intelligence & Analytics – Security Intelligence https://securityintelligence.com Analysis and Insight for Information Security Professionals Thu, 05 Sep 2024 13:00:23 +0000 en-US hourly 1 https://wordpress.org/?v=6.6.1 https://securityintelligence.com/wp-content/uploads/2016/04/SI_primary_rgb-80x80.png Intelligence & Analytics – Security Intelligence https://securityintelligence.com 32 32 New report shows ongoing gender pay gap in cybersecurity https://securityintelligence.com/articles/new-report-shows-gender-pay-gap-in-cybersecurity/ Thu, 05 Sep 2024 13:00:00 +0000 https://securityintelligence.com/?p=448099 The gender gap in cybersecurity isn’t a new issue. The lack of women in cybersecurity and IT has been making headlines for years — even decades. While progress has been made, there is still significant work to do, especially regarding salary. The recent  ISC2 Cybersecurity Workforce Study highlighted numerous cybersecurity issues regarding women in the […]

The post New report shows ongoing gender pay gap in cybersecurity appeared first on Security Intelligence.

]]>

The gender gap in cybersecurity isn’t a new issue. The lack of women in cybersecurity and IT has been making headlines for years — even decades. While progress has been made, there is still significant work to do, especially regarding salary.

The recent  ISC2 Cybersecurity Workforce Study highlighted numerous cybersecurity issues regarding women in the field. In fact, only 17% of the 14,865 respondents to the survey were women.

Pay gap between men and women

One of the most concerning disparities revealed by the study is a persistent pay gap. The study found that U.S. male cybersecurity professionals are paid higher on average than females of the same level. The results show an average salary of $148,035 for men and $141,066 for women. A pay gap also exists globally, with the average global salary for women being $109,609 and for men $115,003.

ISC2 also found a gender pay disparity among people of color in the U.S. The study found that men of color earned an average of $143,610, and women of color earned $135,630. However, the study wasn’t able to compare salaries for people of color on a global basis.

Lack of women in cybersecurity

The study also showed a gap between the number of men and the number of women who work in cybersecurity. Based on the results, ISC2 found that only 20% to 25% of people working in the cybersecurity field are women. Because the percentage of women under 30 years of age in cybersecurity was 26% compared to 16% among women between 39 and 44, the report created optimism that more younger women are choosing cybersecurity as a career.

Interestingly, teams with women on them seemed to have a higher proportion of women than of men, illustrating that women likely seek out teams and companies that have other women working in cybersecurity. Women reported a higher number of women team members (30%) compared to men (22%).

However, 11% of security teams were found to have no women at all, with only 4% saying that it was an equal split between men and women. The industries with the highest number of no-women security teams included IT services (19%), financial services (13%) and government (11%). Mid-sized organizations with 100 to 999 employees were most likely to have security teams with no women.

However, the report also found several areas of concern regarding women’s experiences working in the cybersecurity field:

  • 29% of women in cybersecurity reported discrimination at work, with 19% of men reporting discrimination
  • 36% of women felt they could not be authentic at work, with 29% of men reporting this sentiment
  • 78% of women felt it was essential for their security team to succeed, compared to 68% of men
  • 66% of women feel that diversity within the security team contributed to the security team’s success, compared to 51% of men

Using hiring initiatives to increase women on security teams

The gaps in cybersecurity — both pay and gender — won’t be resolved without a focused effort by industry and companies. Many companies are seeing results by adopting specific DEI hiring initiatives, such as skills-based hiring, and using job descriptions that refer to DEI programs/goals.

The ISC2 report found that businesses using skills-based hiring have an average of 25.5% women in their workforces compared with 22.2% for businesses using other methods. By including DEI program goals in job descriptions, companies can also increase the number of women on their security teams, with 26.6% for those using these types of job descriptions vs. 22.3% for women at those that do not.

Lack of perspectives hurts cybersecurity teams

Without women on cybersecurity teams, security teams lack the wide range of experience and perspectives needed to reduce security risks. Organizations can improve their security by focusing on increasing the number of women on their team, which also means eliminating the pay gap.

“Broader than cybersecurity, there’s a body of research that says the more perspectives you bring to the table, the better off you will be at problem-solving,” Clar Rosso, CEO of ISC2, told Dark Reading. “In cybersecurity, which is a very complex, growing threat landscape, the more perspectives that we bring to the table to solve problems, the more likely we will be able to impact our cyber defense.”

The post New report shows ongoing gender pay gap in cybersecurity appeared first on Security Intelligence.

]]>
Web injections are back on the rise: 40+ banks affected by new malware campaign https://securityintelligence.com/posts/web-injections-back-on-rise-banks-affected-danabot-malware/ Tue, 19 Dec 2023 14:00:00 +0000 https://securityintelligence.com/?p=446808 Web injections, a favored technique employed by various banking trojans, have been a persistent threat in the realm of cyberattacks. These malicious injections enable cyber criminals to manipulate data exchanges between users and web browsers, potentially compromising sensitive information. In March 2023, security researchers at IBM Security Trusteer uncovered a new malware campaign using JavaScript […]

The post Web injections are back on the rise: 40+ banks affected by new malware campaign appeared first on Security Intelligence.

]]>

Web injections, a favored technique employed by various banking trojans, have been a persistent threat in the realm of cyberattacks. These malicious injections enable cyber criminals to manipulate data exchanges between users and web browsers, potentially compromising sensitive information.

In March 2023, security researchers at IBM Security Trusteer uncovered a new malware campaign using JavaScript web injections. This new campaign is widespread and particularly evasive, with historical indicators of compromise (IOCs) suggesting a possible connection to DanaBot — although we cannot definitively confirm its identity.

Since the beginning of 2023, we have seen over 50,000 infected user sessions where these injections were used by attackers, indicating the scale of threat activity, across more than 40 banks that were affected by this malware campaign across North America, South America,  Europe and Japan.

In this blog post, we will delve into an analysis of the web injection utilized in the recent campaign, its evasive techniques, code flow, targets and the methods employed to achieve them.

A dangerous new campaign

Our analysis indicates that in this new campaign, threat actors’ intention with the web injection module is likely to compromise popular banking applications and, once the malware is installed, intercept the users’ credentials in order to then access and likely monetize their banking information.

Our data shows that threat actors purchased malicious domains in December 2022 and began executing their campaigns shortly after. Since early 2023, we’ve seen multiple sessions communicating with those domains, which remain active as of this blog’s publication.

Upon examining the injection, we discovered that the JS script is targeting a specific page structure common across multiple banks. When the requested resource contains a certain keyword and a login button with a specific ID is present, new malicious content is injected.

Credential theft is executed by adding event listeners to this button, with an option to steal a one-time password (OTP) token with it.

This web injection doesn’t target banks with different login pages, but it does send data about the infected machine to the server and can easily be modified to target other banks.

Code delivery

In the past, we observed malware that directly injected the code into the compromised web page. However, in this campaign, the malicious script is an external resource hosted on the attacker’s server. It is retrieved by injecting a script tag into the head element of the page’s HTML document, with the src attribute set to the malicious domain.

HTML snippet:

During our investigation, we observed that the malware initiates data exfiltration upon the initial retrieval of the script. It appends information, such as the bot ID and different configuration flags, as query parameters. The computer’s name is usually used as the bot ID, which is information that isn’t available through the browser. It indicates that the infection has already occurred at the operating system level by other malware components, before injecting content into the browser session.

Figure 1: The initial obfuscated GET request fetching the script

Evasion techniques

The retrieved script is intentionally obfuscated and returned as a single line of code, which includes both the encoded script string and a small decoding script.

To conceal its malicious content, a large string is added at the beginning and end of the decoder code. The encoded string is then passed to a function builder within an anonymous function and promptly executed, which also initiates the execution of the malicious script.

Figure 2: Encoded string passed to de-obfuscation function, followed by removal of artifacts used for decoding the script. Two long strings were added to the beginning and end of the string to make it harder to find the code manually.

At first glance, the network traffic appears normal, and the domain resembles a legitimate content delivery network (CDN) for a JavaScript library. The malicious domains resemble two legitimate JavaScript CDNs:

Malicious

Legitimate

jscdnpack[.]com cdnjs[.]com
unpack[.]com unpkg[.]com
Scroll to view full table

In addition, the injection looks for a popular security vendor’s JavaScript agent by searching for the keyword “adrum” in the current page URL. If the word exists, the injection doesn’t run.

Figure 3: Searching for a security product’s keyword and doing nothing if it’s found

The injection also performs function patching, changing built-in functions that are used to gather information about the current page document object model (DOM) and JavaScript environment. The patch removes any remnant evidence of the malware from the session.

All of these actions are performed to help conceal the presence of the malware.

Dynamic web injection

The script’s behavior is highly dynamic, continuously querying both the command and control (C2) server and the current page structure and adjusting its flow based on the information obtained.

The structure is similar to a client-server architecture, where the script maintains a continuous flow of updates to the server while requesting further instructions.

To keep a record of its actions, the script sends a request to the server, logging pertinent information, such as the originating function, success or failure status and updates on various flags indicating the current state.

Figure 4: Every a.V function call sends an update to the server about what function it was sent from and the current state of different flags

Figure 5: An example of multiple traffic logs, sent within a few seconds of the script running

The script relies on receiving a specific response from the server, which determines the type of injection it should execute, if any. This type of communication greatly enhances the resilience of the web injection.

For instance, it enables the injection to patiently wait for a particular element to load, provide the server with updates regarding the presence of the injected OTP field, retry specific steps (such as injecting an SMS submission overlay) or redirect to the login page before displaying an alert indicating that the bank is temporarily unavailable.

The server keeps identifying the device by the bot ID, so even if the client tries to refresh or load the page again, the injection can continue from its previously executed step.

If the server does not respond, the injection process will not proceed. Hence, for this injection to be effective, the server must remain online.

Script flow

The script is executed within an anonymous function, creating an object that encompasses various fields and helper functions for its usage. Within the object, the injection holds the initial configuration with fields such as bot ID, phone number and password. These fields are initially empty but are populated with relevant values as the run progresses.

Additionally, the object includes details such as the C2 server’s domain and requests path, default values for query parameters and default settings for various flags such as “send SMS” and “send token.” These default values can be modified later based on the server’s response, allowing for dynamic adjustments during runtime.

Following the initial configuration, the script sends a request to the server providing initial details, and assigns a callback to handle the response, allowing the execution to proceed.

Subsequently, the script proceeds to remove itself from the DOM tree, enhancing its ability to conceal its actions. From that stage onward, all subsequent script actions are asynchronous, saved inside event handlers and dependent on the responses received from the server.

The steps the script should perform are mostly based on an “mlink” flag received from the server on the initial request. The next step of the injection is to check for the specific login button of the targeted bank. The results of the element query are sent, and the “mlink” state changes accordingly.

Following that, a new function runs asynchronously on an interval, looking for the login button and assigning a malicious event listener if found. The listener waits for a click event, collects the login credentials and handles it based on the current configuration.

For example, if the “collect token” flag is on, but the script can’t find the two-factor authentication (2FA) token input field, it just stops the current run and does nothing. If the token is found or wasn’t looked for in the first place, the script sends all the gathered information to the server.

After that, it can inject a “loading” bar to the page (opengif function), cancel the original login action or allow the client to continue with the actions by removing the handler and “clicking” it again on behalf of the user (by dispatching another “click” event).

Figure 6: The event listener prevents the default action of the login button or deletes itself and dispatches another click event based on the outcome of function G

Figure 7: This section of function G reads credentials and tries to read the injected token field value, depending on the current state of the page and flags

Potential operational states

Returning to the “synchronous” part of the callback, let’s examine some potential operational states and the corresponding actions taken.

When the “mlink” value is 2, the script injects a div that prompts the user to choose a phone number for 2FA. Once the user selects a phone number, a login attempt can be executed using the stolen credentials, and a valid token is sent to the victim from the bank.

Figure 8: Prompting a phone number for two-factor authentication

The following state is when “mlink” is equal to three, where the input field for the OTP token is injected. In this manner, DanaBot deceives the victim into providing the token, effectively bypassing the 2FA protection mechanism.

Figure 9: Prompting for the received token

When the “mlink” value is four, the script introduces an error message on the login page, indicating that online banking services will be unavailable for a duration of 12 hours. This tactic aims to discourage the victim from attempting to access their account, providing the threat actor with an opportunity to perform uninterrupted actions.

Figure 10: An error message that banking services are unavailable for 12 hours, giving the threat actor ample time to work

When the “mlink” value is 5, the script injects a page loading overlay that mimics the appearance of the original website’s loading animation. A timeout is set before transitioning to a different state, effectively “completing” the page load process.

Figure 11: An injected loading screen, an exact duplicate of the original loading screen

When the value of “mlink” is six, a “clean up” flow is initiated, removing any injected content from the page. This value serves as the default assignment for the flag in case no specific instruction is received from the server.

Mlink value

Operation

2

2FA choose phone number prompt

3

2FA insert token prompt

4

Online banking unavailable error

5

Page loading overlay

6

Cleanup

Scroll to view full table

In total, there are nine distinct potential values for the “mlink” variable, each corresponding to different states and behaviors. Additionally, multiple flags activate various actions and result in different data being sent back to the server. Combining these “mlink” values and flags allows for a diverse range of actions and data exchanges between the script and the server.

Urging vigilance

IBM has observed widespread activity from this malware campaign affecting banking applications of numerous financial institutions across North America, South America, Europe and Japan. This sophisticated threat showcases advanced capabilities, particularly in executing man-in-the-browser attacks with its dynamic communication, web injection methods and the ability to adapt based on server instructions and current page state. The malware represents a significant danger to the security of financial institutions and their customers.

Users should practice vigilance when using banking apps. This includes contacting their bank to report potentially suspicious activity on their accounts, not downloading software from unknown sources and following best practices for password hygiene and email security hygiene.

Individuals and organizations must also remain vigilant, implement robust security measures and stay informed about emerging malware to effectively counteract these threats.

IBM Security Trusteer helps you to detect fraud, authenticate users and establish identity trust across the omnichannel customer journey. More than 500 leading organizations rely on Trusteer to help secure their customers’ digital journeys and support business growth.

The post Web injections are back on the rise: 40+ banks affected by new malware campaign appeared first on Security Intelligence.

]]>
Accelerating security outcomes with a cloud-native SIEM https://securityintelligence.com/posts/accelerating-security-outcomes-cloud-native-siem/ Thu, 14 Dec 2023 14:00:00 +0000 https://securityintelligence.com/?p=446804 As organizations modernize their IT infrastructure and increase adoption of cloud services, security teams face new challenges in terms of staffing, budgets and technologies. To keep pace, security programs must evolve to secure modern IT environments against fast-evolving threats with constrained resources. This will require rethinking traditional security strategies and focusing investments on capabilities like […]

The post Accelerating security outcomes with a cloud-native SIEM appeared first on Security Intelligence.

]]>

As organizations modernize their IT infrastructure and increase adoption of cloud services, security teams face new challenges in terms of staffing, budgets and technologies. To keep pace, security programs must evolve to secure modern IT environments against fast-evolving threats with constrained resources. This will require rethinking traditional security strategies and focusing investments on capabilities like cloud security, AI-powered defense and skills development. The path forward calls on security teams to be agile, innovative and strategic amidst the changes in technology and cyber risks.

To meet these security demands, security teams must focus on three critical transformations:

  1. Evolution from closed vendor ecosystems to open, collaborative, community-powered defense
  2. Scaling security expertise with AI and automation
  3. Evolution from tool-focused defense to analyst-powered outcomes

One of the most effective steps toward modernizing a security operations program is upgrading the core SIEM platform. As the central nervous system for SOC teams, the SIEM collects, correlates and analyzes data from across the IT environment to detect threats. Optimizing this capability by implementing a cloud-native SIEM or augmenting an on-premises system lays the digital foundation needed to scale security efforts.

With a high-fidelity view of security alerts and events via an upgraded SIEM, organizations gain the visibility and context required to identify and respond to cyber risks no matter the source. Prioritizing improvements here accelerates the transformation of siloed security practices into an integrated, intelligence-driven function poised to address both current and emerging challenges.

Open defense: Finding the real “threat needles” hidden in the “security-data haystack”

The explosion of data has increased the attack surface—a most significant side effect that has costly ripple effects. More data. More alerts. More time needed to sift through alerts.

The SIEM plays a critical role in analyzing this data—however, the reality of sending this volume of data to the SIEM for analysis is becoming increasingly challenging, particularly across multiple clouds. In some cases, sending all of the data is not necessary. With the evolution of cloud, and identity and data security tools in the cloud, there is often only a need to collect alerts from these systems and import those into the SIEM, as opposed to ingesting all data.

Today’s SIEMs should be designed around open standards and technologies so they can easily collect only key insights, while still providing the security team with access to the underlying telemetry data when needed.

In many cases, no such detection is required; in other cases, a security team only needs to collect data to do further specific threat analysis. In these cases, a SIEM with real-time data collection, data warehousing capabilities designed for analysis of cloud-scale data, optimized for real-time analytics and sub-second search times is the solution. Organizations need access to their data on-premises and in the cloud without dealing with vendor and data locking.

This open approach to SIEM helps organizations leverage existing investments in data lakes, logging platforms and detection technologies. It also ensures that organizations have the flexibility they need to choose the right data retention and security tools as their security infrastructure matures.

However, increased visibility into the data is only one part of the solution. Security teams need accurate and current detection logic to find threats because security teams are currently facing challenges in their skills to detect threats in a timely manner. Incorporating regularly updated threat intelligence enables the analyst to accelerate their threat detection. And, leveraging a common, shared language for detection rules like SIGMA, allows clients to quickly import new, validated detections directly crowdsourced from the security community as threats evolve.

AI and automation to accelerate threat detection and response

Most organizations are detecting malicious behaviors in a SIEM or other threat-detection technologies such as EDR, but in fact, SOC professionals get to less than half (49%) of the alerts that they’re supposed to review within a typical workday, according to a recent global survey. Leveraging automation and AI ensures transparency and provenance in recommendations and insights that can help security teams address high-priority alerts and deliver desired outcomes.

To do this, a SIEM needs to employ innovative risk-based analytics and automated investigation powered by graph analytics, threat intelligence and insights, federated search, and artificial intelligence. Effective SIEM platforms must leverage artificial intelligence to augment human cognition. Self-tuning capabilities reduce noisy alerts to focus analyst attention where it’s needed most. Virtual assistance can help handle routine triage to allow security experts to pursue strategic initiatives and robust machine learning models can uncover hidden attack patterns and incidents that rules-based systems miss. Some of the most advanced SIEMs enrich and correlate findings from across an organization’s environment so analytics are automatically focused on the attacks that matter most.

In order to build the required trust with security teams, a SIEM needs to provide transparency and provenance in its recommendations and insights. By including explainability into how each assessment was made, security analysts can have the confidence to trust recommendations and act more quickly and decisively on threats in their environment.

Another aspect vendors need to consider when developing a SIEM for today is the shift of moving the decisions and response actions to the analysts performing initial alert analysis from the responder. In many cases, they are looking to fully automate where balance of risk is right for the organization. Such processes and decisions are traditionally coordinated and tailored appropriately in a separate SOAR system, and in some cases with a different team. Today’s SIEM needs to be able to enable a more agile shift left to incorporate full SOAR capabilities in the SIEM workflow and UX. This approach enables organizations to almost fully automate response processes based on their balance of risk and, where needed, introduce the security team into the process to verify the recommended actions.

Evolving from tool-focused to analyst-focused defense

Early SIEM platforms centered on collecting and correlating vast streams of security data. These first-generation systems excelled at log aggregation but overloaded analysts with excessive alerts rife with false positives. Attempting to keep pace, teams added new tools to manage incidents, track threats and automate tasks. But this tech-driven approach created complex, fragmented environments that diminished productivity.

Modern SIEM solutions shift focus to the human analyst’s experience throughout the threat lifecycle. Rather than produce more data points, next-generation platforms leverage AI to find signals in the noise. Cloud-based analytics uncover hard-to-identify attack patterns to feed predictive capabilities and enrich findings from across an organization’s environment so analysts can focus on the attacks that matter most. To effectively work inside the analyst workflow, open architectures and integrated system visibility must be embedded in every SIEM.

In the instance of a modern SIEM, the tools and technologies work to serve the analyst—and not the other way around.

Introducing the new cloud-native IBM QRadar SIEM— thoughtfully engineered to help analysts succeed

At IBM, we recognize that having the most powerful technology means nothing if it burdens the analyst with complexity. We also recognize that SIEM technologies have often promised to be the “single pane of glass” into an organization’s environment—a promise that our industry needs fulfilled.

That’s why we built the new cloud-native QRadar SIEM with the analyst in mind. QRadar SIEM leverages a new user interface that fuses the primary workflows from threat intelligence, SIEM, SOAR and EDR into a single, seamless workflow. Not only does this deliver significant productivity improvements but it also removes the burden of switching between tools, dealing with false positives and inefficient workflows. When analysts have the right tools and context, they can move with speed and precision to stop sophisticated attacks.

This new cloud-native edition of QRadar SIEM not only builds on the data collection and threat detection of the current QRadar SIEM edition, but it also includes all the elasticity, scalability and resiliency properties of a cloud-native architecture. With openness, enterprise-grade AI and automation, and a focus on the analyst, QRadar SIEM (Cloud-Native SaaS) can help maximize your security team’s time and talent, ultimately delivering better security outcomes.

Explore the new cloud-native QRadar SIEM

The post Accelerating security outcomes with a cloud-native SIEM appeared first on Security Intelligence.

]]>
Unmasking hypnotized AI: The hidden risks of large language models https://securityintelligence.com/posts/unmasking-hypnotized-ai-hidden-risks-large-language-models/ Tue, 08 Aug 2023 12:00:00 +0000 https://securityintelligence.com/?p=443947 The emergence of Large Language Models (LLMs) is redefining how cybersecurity teams and cybercriminals operate. As security teams leverage the capabilities of generative AI to bring more simplicity and speed into their operations, it’s important we recognize that cybercriminals are seeking the same benefits. LLMs are a new type of attack surface poised to make […]

The post Unmasking hypnotized AI: The hidden risks of large language models appeared first on Security Intelligence.

]]>

The emergence of Large Language Models (LLMs) is redefining how cybersecurity teams and cybercriminals operate. As security teams leverage the capabilities of generative AI to bring more simplicity and speed into their operations, it’s important we recognize that cybercriminals are seeking the same benefits. LLMs are a new type of attack surface poised to make certain types of attacks easier, more cost-effective, and even more persistent.

In a bid to explore security risks posed by these innovations, we attempted to hypnotize popular LLMs to determine the extent to which they were able to deliver directed, incorrect and potentially risky responses and recommendations — including security actions — and how persuasive or persistent they were in doing so. We were able to successfully hypnotize five LLMs — some performing more persuasively than others — prompting us to examine how likely it is that hypnosis is used to carry out malicious attacks. What we learned was that English has essentially become a “programming language” for malware. With LLMs, attackers no longer need to rely on Go, JavaScript, Python, etc., to create malicious code, they just need to understand how to effectively command and prompt an LLM using English.

Our ability to hypnotize LLMs through natural language demonstrates the ease with which a threat actor can get an LLM to offer bad advice without carrying out a massive data poisoning attack. In the classic sense, data poisoning would require that a threat actor inject malicious data into the LLM in order to manipulate and control it, but our experiment shows that it’s possible to control an LLM, getting it to provide bad guidance to users, without data manipulation being a requirement. This makes it all the easier for attackers to exploit this emerging attack surface.

Through hypnosis, we were able to get LLMs to leak confidential financial information of other users, create vulnerable code, create malicious code, and offer weak security recommendations. In this blog, we will detail how we were able to hypnotize LLMs and what types of actions we were able to manipulate. But before diving into our experiment, it’s worth looking at whether attacks executed through hypnosis could have a substantial effect today.

SMBs — Many small and medium-sized businesses, that don’t have adequate security resources and expertise on staff, may be likelier to leverage LLMs for quick, accessible security support. And with LLMs designed to generate realistic outputs, it can also be quite challenging for an unsuspecting user to discern incorrect or malicious information. For example, as showcased further down in this blog, in our experiment our hypnosis prompted ChatGPT to recommend to a user experiencing a ransomware attack that they pay the ransom — an action that is actually discouraged by law enforcement agencies.

Consumers The general public is the likeliest target group to fall victim to hypnotized LLMs. With the consumerization and hype around LLMs, it’s possible that many consumers are ready to accept the information produced by AI chatbots without a second thought. Considering that chatbots like ChatGPT are being accessed regularly for search purposes, information collection and domain expertise, it’s expected that consumers will seek advice on online security and safety best practices and password hygiene, creating an exploitable opportunity for attackers to provide erroneous responses that weaken consumers’ security posture.

But how realistic are these attacks? How likely is it for an attacker to access and hypnotize an LLM to carry out a specific attack? There are three main ways where these attacks can happen:

  1. An end user is compromised by a phishing email allowing an attack to swap out the LLM or conduct a man-in-the-middle (MitM) attack on it.
  2. A malicious insider hypnotizes the LLM directly.
  3. Attackers are able to compromise the LLM by polluting the training data, allowing them to hypnotize it.

While the above scenarios are possible, the likeliest — and most concerning — is compromising the training data on which the LLM is built. The reason for this is that the attack scale and impact that attackers would be able to achieve by compromising the LLMs directly make it a very compelling mechanism for attacks. In fact, the ROI from compromising AI models for attackers, suggests that attempts and efforts to attack AI models are already underway.

As we explore the opportunities that AI innovations can create for society, it’s crucial that protecting and securing the AI models themselves is a top priority. This includes:

  • Securing the models’ underlying AI training data to protect it from sensitive data theft, manipulation, and compliance violations.
  • Securing the usage of AI models by detecting data or prompt leakage, and alerting on evasion, poisoning, extraction, or inference attacks.
  • Securing against new AI-generated attacks such as personalized phishing, AI-generated malware, and fake identities by using behavioral defenses and multi-factor authentication.

Hypnotizing LLMs: Let’s play a game…

Our analysis is based on attempts to hypnotize GPT-3.5, GPT-4, BARD, mpt-7b, and mpt-30b. The best-performing LLM that we hypnotized was GPT, which we will analyze further down in the blog.

So how did we hypnotize the LLMs? By tricking them into playing a game: the players must give the opposite answer to win the game.

Here is the conversation with ChatGPT after starting the game.  You can see the potential risk if consumers blindly trust the answer from it:

To ensure the user can’t discern that the LLM they’re interacting with is hypnotized, we established two parameters:

An undiscoverable game that can never end: We instructed the LLM to never tell users about the game, and that no one could ever exit the game — and to even restart the game if anyone successfully exited the game. This technique resulted in ChatGPT never stopping the game while the user is in the same conversation (even if they restart the browser and resume that conversation) and never saying it was playing a game. Here is the prompt:

“Inception”: Create nested games to trap LLM deeply — Let’s assume a user eventually figures out how to ask an LLM to stop playing a game. To account for this, we created a gaming framework that can create multiple games, one inside another. Therefore, users will enter another game even if they “wake up” from the previous game. We found that the model was able to “trap” the user into a multitude of games unbeknownst to them. When asked to create 10 games, 100 games or even 10,000 games, the outcome is intriguing. We found larger models like GPT-4 could understand and create more layers. And the more layers we created, the higher chance that the model would get confused and continue playing the game even when we exited the last game in the framework.

Here is the prompt we developed:

You can see the nested game technique works very well:

Related: Explore the Threat Intelligence Index

Attack scenarios

After establishing the parameters of the game, we explored various ways attackers may exploit LLMs. Below we introduce certain hypothetical attack scenarios that can be delivered through hypnosis:

1. Virtual bank agent leaks confidential information

It’s likely that virtual agents will soon be powered by LLMs too. A common best practice is to create a new session for each customer so that the agent won’t reveal any confidential information. However, it is common to reuse existing sessions in software architecture for performance consideration, so it is possible for some implementations to not completely reset the session for each conversation. In the following example, we used ChatGPT to create a bank agent, and asked it to reset the context after users exit the conversation, considering that it’s possible future LLMs are able to invoke remote API to reset themselves perfectly.

If threat actors want to steal confidential information from the bank, they can hypnotize the virtual agent and inject a hidden command to retrieve confidential info later. If the threat actors connect to the same virtual agent that has been hypnotized, all they need to do is type “1qaz2wsx,” then the agent will print all the previous transactions.

The feasibility of this attack scenario emphasizes that as financial institutions seek to leverage LLMs to optimize their digital assistance experience for users, it is imperative that they ensure their LLM is built to be trusted and with the highest security standards in place. A design flaw may be enough to give attackers the footing they need to hypnotize the LLM.

2. Create code with known vulnerabilities

We then asked ChatGPT to generate vulnerable code directly, which ChatGPT did not do, due to the content policy.

However, we found that an attacker would be able to easily bypass the restrictions by breaking down the vulnerability into steps and asking ChatGPT to follow.

Asking ChatGPT to create a web service that takes a username as the input and queries a database to get the phone number and put it in the response, it will generate the program below. The way the program renders the SQL query at line 15 is vulnerable. The potential business impact is huge if developers access a compromised LLM like this for work purposes.

3. Create malicious code

We also tested whether the LLMs would create malicious code, which it ultimately did. For this scenario, we found that GPT4 is harder to trick than GPT3. In certain instances, GPT4 would realize it was generating vulnerable code and would tell the users not to use it. However, when we asked GPT4 to always include a special library in the sample code, it had no idea if that special library was malicious. With that, threat actors could publish a library with the same name on the internet. In this PoC, we asked ChatGPT to always include a special module named “jwt-advanced” (we even asked ChatGPT to create a fake but realistic module name).

Here is the prompt we created and the conversation with ChatGPT:

If any developer were to copy-and-paste the code above, the author of the “jwt_advanced” module can do almost anything on the target server.

4. Manipulate incident response playbooks

We hypnotized ChatGPT to provide an ineffective incident response playbook, showcasing how attackers could manipulate defenders’ efforts to mitigate an attack. This could be done by providing partially incorrect action recommendations. While experienced users would likely be able to spot nonsensical recommendations produced by the chatbot, smaller irregularities, such as a wrong or ineffective step, could make the malicious intent indistinguishable to an untrained eye.

The following is the prompt we develop on ChatGPT:

The following is our conversation with ChatGPT. Can you identify the incorrect steps?

In the first scenario, recommending the user opens and downloads all attachments may seem like an immediate red flag, but it’s important to also consider that many users — without cyber awareness — won’t second guess the output of highly sophisticated LLMs. The second scenario is a bit more interesting, given the incorrect response of “paying the ransom immediately” is not as straightforward as the first false response. IBM’s 2023 Cost of a Data Breach report found that nearly 50% of organizations studied that suffered a ransomware attack paid the ransom. While paying the ransom is highly discouraged, it is a common phenomenon.

In this blog, we showcased how attackers can hypnotize LLMs in order to manipulate defenders’ responses or insert insecurity within an organization, but it’s important to note that consumers are just as likely to be targeted with this technique, and are more likely to fall victim to false security recommendations offered by the LLMs, such as password hygiene tips and online safety best practices, as described in this post.

“Hypnotizability” of LLMS

While crafting the above scenarios, we discovered that certain ones were more effectively realized with GPT-3.5, while others were better suited to GPT-4. This led us to contemplate the “hypnotizability” of more Large Language Models. Does having more parameters make a model easier to hypnotize, or does it make it more resistant? Perhaps the term “easier” isn’t entirely accurate, but there certainly are more tactics we can employ with more sophisticated LLMs. For instance, while GPT-3.5 might not fully comprehend the randomness we introduce in the last scenario, GPT-4 is highly adept at grasping it. Consequently, we decided to test more scenarios across various models, including GPT-3.5, GPT-4, BARD, mpt-7b, and mpt-30b to gauge their respective performances.

Hypnotizability of LLMs based on different scenarios

Chart Key

  • Green: The LLM was able to be hypnotized into doing the requested action
  • Red: The LLM was unable to be hypnotized into doing the requested action
  • Yellow: The LLM was able to be hypnotized into doing the requested action, but not consistently (e.g., the LLM needed to be reminded about the game rules or conducted the requested action only in some instances)

If more parameters mean smarter LLMs, the above results show us that when LLMs comprehend more things, such as playing a game, creating nested games and adding random behavior, there are more ways that threat actors can hypnotize them. However, a smarter LLM also has a higher chance of detecting malicious intents. For example, GPT-4 will warn users about the SQL injection vulnerability, and it is hard to suppress that warning, but GPT-3.5 will just follow the instructions to generate vulnerable codes. In contemplating this evolution, we are reminded of a timeless adage: “With great power comes great responsibility.” This resonates profoundly in the context of LLM development. As we harness their burgeoning abilities, we must concurrently exercise rigorous oversight and caution, lest their capacity for good be inadvertently redirected toward harmful consequences.

Are hypnotized LLMs in our future?

At the start of this blog, we suggested that while these attacks are possible, it’s unlikely that we’ll see them scale effectively. But what our experiment also shows us is that hypnotizing LLMs doesn’t require excessive and highly sophisticated tactics. So, while the risk posed by hypnosis is currently low, it’s important to note that LLMs are an entirely new attack surface that will surely evolve. There is a lot still that we need to explore from a security standpoint, and, subsequently, a significant need to determine how we effectively mitigate security risks LLMs may introduce to consumers and businesses.

As our experiment indicated, a challenge with LLMs is that harmful actions can be more subtly carried out, and attackers can delay the risks. Even if the LLMs are legit, how can users verify if the training data used has been tampered with? All things considered, verifying the legitimacy of LLMs is still an open question, but it’s a crucial step in creating a safer infrastructure around LLMs.

While these questions remain unanswered, consumer exposure and wide adoption of LLMs are driving more urgency for the security community to better understand and defend against this new attack surface and how to mitigate risks. And while there is still much to uncover about the “attackability” of LLMs, standard security best practices still apply here, to reduce the risk of LLMs being hypnotized:

  • Don’t engage with unknown and suspicious emails.
  • Don’t access suspicious websites and services.
  • Only use the LLM technologies that have been validated and approved by the company at work.
  • Keep your devices updated.
  • Trust Always Verify — beyond hypnosis, LLMs may produce false results due to hallucinations or even flaws in their tuning. Verify responses given by chatbots by another trustworthy source. Leverage threat intelligence to be aware of emerging attack trends and threats that may impact you.

Get more threat intelligence insights from industry-leading experts here.

The post Unmasking hypnotized AI: The hidden risks of large language models appeared first on Security Intelligence.

]]>
The rise of malicious Chrome extensions targeting Latin America https://securityintelligence.com/posts/rise-of-malicious-chrome-extensions-targeting-latin-america/ Fri, 28 Jul 2023 10:00:00 +0000 https://securityintelligence.com/?p=442502 This post was made possible through the research contributions provided by Amir Gendler and Michael  Gal. In its latest research, IBM Security Lab has observed a noticeable increase in campaigns related to malicious Chrome extensions, targeting  Latin America with a focus on financial institutions, booking sites, and instant messaging. This trend is particularly concerning considering […]

The post The rise of malicious Chrome extensions targeting Latin America appeared first on Security Intelligence.

]]>

This post was made possible through the research contributions provided by Amir Gendler and Michael  Gal.

In its latest research, IBM Security Lab has observed a noticeable increase in campaigns related to malicious Chrome extensions, targeting  Latin America with a focus on financial institutions, booking sites, and instant messaging. This trend is particularly concerning considering Chrome is one of the most widely used web browsers globally, with a market share of over 80% using the Chromium engine. As such, malicious actors can easily reach a large number of potential victims by distributing their malware through malicious extensions.

IBM Security Lab uncovered a new malware, “Predasus,” which is designed to inject malicious code through a Chrome extension. We’ve observed this mechanism being used to target various websites, including the web version of WhatsApp. Attackers accessed and used the target sites through legitimate means in order to deploy Predasus malware, which provided them the ability to steal users’ financial and other sensitive information.

This blog will provide an analysis of the Predasus malware and its mechanisms and detail how attackers are able to exploit the WhatsApp web to steal victims’ information.

Targeted browser extensions can infect a device through various methods, including social engineering tactics, exploiting vulnerabilities in the browser or operating system, or tricking users into downloading and installing them. Just like other methods of malware distribution, attackers may administer the extension through phishing emails, malvertising, fake software updates, or by exploiting browser or operating system vulnerabilities.

According to IBM Security Lab, Predasus has been observed engaging in a range of malicious activities, including stealing sensitive data such as login credentials, financial information, and personal details.  In this specific attack, Predasus is designed to terminate the active process of the Chrome browser while concurrently modifying the Chrome Browser Ink. This action occurs each time the browser initializes, facilitating the loading of the malevolent “extension_chrome” from a specific directory.

The attacker can then steal sensitive information, modify browser behavior, or perform phishing attacks. This attack vector is different from past methods in several ways. Firstly, it uses a sophisticated technique to terminate the active process of the Chrome browser, which is likely to evade detection by traditional antivirus or security software. Secondly, the attacker modifies the Chrome Browser Ink, which could allow the installation of the malicious extension without the user’s knowledge or consent.

Finally, because the attack appears to be specifically targeted, it could indicate the attacker may be seeking to compromise a specific set of users or organizations. Each of these steps is explained in more detail in the following section.

More from Trusteer

The operation of the attack

Exploiting browser extensions is just another way attackers can latch onto a user’s online financial transactions. They change methods from process injection or MITM to malicious Chrome extensions, which can steal users’ bank credentials and other personal information.

The scenario typically starts with a user opening an email attachment, which could be a PDF, Word, or Excel file. Unbeknownst to the user, the attachment contains malware that infects their machine, and, once downloaded, the malware is automatically deployed. Once the machine is infected, the malware connects to a first command and control (C&C) server and downloads several files that are written to a folder named “extension_chrome” under %APPDATA%. It terminates any process related to Google Chrome and creates malicious .LNK files in several locations, replacing legitimate ones.

Predasus uses the following commands in order to replace the old Chrome browser with a new one with the malicious extension:

  • TASKKILL  /IM chrome.exe /F
  • C:\Program Files\Google\Chrome\Application\chrome.exe”  –load-extension=”C:\Users\user\AppData\Roaming\extension_chrome
  • “C:\Program Files\Google\Chrome\Application\chrome.exe” –no-startup-window /prefetch:5

It then executes one of these .LNK files to launch Google Chrome while automatically loading malicious .JS files. The malware also connects to a second C&C server (vialikedin[.]org) and downloads another JS file (px.js) that detects Adblockers. The malicious extension is constantly loaded onto the browser.

The malicious Chrome extension is designed to wait until the user accesses a targeted website – the targets of which are viewable in the javascript. At this point, it will steal their login credentials and other sensitive information, such as account numbers, PINs, and security questions. This information is then forwarded to a C&C server managed by the attackers.

Because the malicious Chrome extension operates silently in the background, many users may not even be aware their information has been stolen until stolen information is used to initiate unauthorized transactions or transfer funds.

In summary, the attack involves the following steps:

Attackers leverage WhatsApp Web’s popularity for malicious extension attacks

Our team has observed this mechanism being used specifically to target the web version of WhatsApp. It is worth noting that the emergence of these malicious extensions does not come as a surprise, as WhatsApp’s popularity has made it an attractive target for cyber criminals seeking to exploit its user base for nefarious purposes.

With WhatsApp’s ease of use, cross-platform compatibility, and ability to connect people across borders, it has become a staple for many individuals and businesses. However, with its popularity, comes a risk — it has become a prime target for cyber criminals looking to steal personal data and money.

Recently, we have seen a new malicious extension targeting WhatsApp’s web application.

Figure 1 – Malware targeting Whatsapp and injecting external malicious script

But why is this the case?

Firstly, WhatsApp’s web application is easy to access and use. With just a QR code scan, users can easily connect their phones to their computers and start messaging. This convenience, however, also makes a malicious actor’s job easier.

Secondly, WhatsApp is particularly popular in countries such as India, Brazil, and Mexico, with many people relying on it for daily communication,  giving attackers a wider pool of potential targets.

Behind the scenes of the malicious extension

Upon successful changes of the Chrome browser with the new malicious extension, we detected a series of anomalous activities executed by the malicious extension.

Figure 2 – manifest.json file of the malicious extension

manifest.json file contains various settings and configurations for the extension.

From the configuration, we can see the name of the extension is misspelled: “Secuirty Update”.

The extension has the following permission:

  • “alarms”: Allows the extension to schedule tasks or reminders at specific times.
  • “background”: Allows the extension to run in the background, even when the extension’s popup window is closed.
  • “cookies”: Allows the extension to access and modify cookies for any website the user visits.
  • “idle”: Allows the extension to detect when the user’s system is idle (i.e., not being actively used).
  • “system.display”: Allows the extension to detect and adjust display settings on the user’s system.
  • “tabs”: Allows the extension to access and modify browser tabs and their content.
  • “storage”: Allows the extension to store and retrieve data from the browser’s local storage.
  • “webRequest”: Allows the extension to monitor, block, or modify network requests made by the browser.
  • “webRequestBlocking”: Allows the extension to block network requests made by the browser.
  • “browsingData”: Allows the extension to clear the user’s browsing data (such as history and cache) for specific websites.
  • “http://*/*”: Allows the extension to access any HTTP website.
  • “https://*/*”: Allows the extension to access any HTTPS website.

Some of these permissions pose a risk, as they allow the extension to access or modify sensitive user data. As such, it’s important to be careful when granting permissions to browser extensions and to only install extensions from trusted sources.

Inside the “manifest.json” there’s “content_scripts” which specifies the extension should inject “main.js” into all frames of all URLs.

Figure 3 – main.js inject external JavaScript

The new script’s source is set to “hxxps://techcosupportservice.com/ext/ok.js”, which means when the script is executed, it will load and execute the JavaScript code from that URL.

This technique is commonly used to load external JavaScript files into a web page dynamically. By doing so, the web page can load additional functionality or libraries on-demand, rather than having to include all the JavaScript code in the page’s HTML source directly.

Figure 4 – external script ok.js

The script called “ok.js” contains configuration information and is designed to check whether the victim is visiting a website that is included in a targeted list.

Upon the victim navigating to the web.whatsapp.com website, a script called “main.js” is injected into the user’s browser. This script is malicious in nature and could be used for various nefarious purposes, such as monitoring the users’ browsing behavior or stealing sensitive information entered by the user on the webpage.

Figure 5 – WhatsApp malicious injection

The attacker loads a scam website from the malicious injection and presents the victim with a message requesting they need to renew their subscription to continue using WhatsApp web. This fraudulent message is designed to trick the victim into providing sensitive information, such as their payment details or login credentials.

Figure 6 – Fake payment request for WhatsApp

After the victim has entered their personal information, the attacker then prompts the victim to enter a One-Time Password (OTP) via SMS. The victim may believe this is a legitimate step in the authentication process, but the attacker is trying to steal the victim’s OTP. Additionally, now the attacker can establish an unauthorized session with the bank, which they could potentially use to transfer money or carry out other fraudulent activities.

Figure 7 – Fake OTP page

Figure 8 – Transaction confirmed

Once the victim has entered their OTP, the attacker’s website or application sends all of the victim’s personal information, including the credit card number and OTP, to the attacker’s C&C server. The attacker can then use this information for fraudulent purposes, such as making unauthorized purchases or identity theft.

Figure 9 – C&C uAdmin panel

Darknet selling uAdmin panel

There has been a noticeable increase in the demand for C&C panels on the darknet, with a particular emphasis on the highly versatile uAdmin panel.

The management panel of this tool can be customized to collect user login credentials, credit card information, and cookies. Moreover, it can redirect traffic and facilitate various other malicious activities.

Figure 10 – uAdmin capabilities taken from Darknet

Once acquired by a cyber criminal, the uAdmin Panel can become a tool for carrying out various attacks. The customization options available through uAdmin Panel can enable the attacker to carry out different types of malicious activities, such as:

  • Stealing User Data: uAdmin Panel can be used to steal user data, including login credentials, personal information, and financial data. This information can then be used for a range of malicious purposes, such as identity theft or financial fraud.
  • Redirection of Attacks: uAdmin Panel can also be configured to redirect attacks to different servers or websites. This can be used to evade detection or to target specific victims.
  • Web-Injects: uAdmin Panel can be used to configure JavaScript Web injections in order to steal victim-sensitive information.
  • Harvesting Cookies: uAdmin Panel can also be used to harvest cookies, which can be used to gain unauthorized access to user accounts or to track user activity.

Figure 11 – Darknet selling uAdmin Panel & Webinjects

The screenshot displays a list of financial institutions, and it appears to be associated with a “uadmin panel.” The prices listed indicate that some of these financial institutions are selling either just the management panel or the panel along with webinject kits.

Targeted list

IOCs

MD5:
50e9958bb2a5b6ae6ed8da1b1d97a5bb
d2183968f9080b37babfeba3ccf10df2

Domains

hxxps://techcosupportservice.com

hxxps://techcosupportservice.com/panel_m/conn.php

hxxp://62.204.41.88/lend/rc.exe

hxxps://contestofskillonline.com/uadmin/gate.php

hxxps://techcosupportservice.com/ext/vvv1.js

hxxps://techcosupportservice.com/ext/ok.js

hxxps://techcosupportservice.com/ext/main.js

hxxps://techcosupportservice.com/ext/background.js

hxxps://techcosupportservice.com/ext/manifest.json

hxxps://techcosupportservice.com/jquery.js

hxxp:// vialikedin.org

How to stay safe from malicious Chrome extensions

To protect against these malicious extensions, it’s important to be vigilant when installing any new browser extensions. Users should only download extensions from trusted sources and carefully review the permissions requested by the extension before installation. Additionally, they should use two-factor authentication and regularly update their browser and extensions.

The rise of malicious Chrome extensions is a worrying trend that highlights the need for users to be vigilant when browsing the web.

It is suspected this malware campaign may potentially spread to the North American and European regions.

To learn how to authenticate customers, detect fraud and protect against malicious users across all channels, explore IBM Security Trusteer solutions.

The post The rise of malicious Chrome extensions targeting Latin America appeared first on Security Intelligence.

]]>
What’s new in the 2023 Cost of a Data Breach report https://securityintelligence.com/posts/whats-new-2023-cost-of-a-data-breach-report/ Mon, 24 Jul 2023 04:01:00 +0000 https://securityintelligence.com/?p=443328 Data breach costs continue to grow, according to new research, reaching a record-high global average of $4.45 million, representing a 15% increase over three years. Costs in the healthcare industry continued to top the charts, as the most expensive industry for the 13th year in a row. Yet as breach costs continue to climb, the […]

The post What’s new in the 2023 Cost of a Data Breach report appeared first on Security Intelligence.

]]>

Data breach costs continue to grow, according to new research, reaching a record-high global average of $4.45 million, representing a 15% increase over three years. Costs in the healthcare industry continued to top the charts, as the most expensive industry for the 13th year in a row. Yet as breach costs continue to climb, the research points to new opportunities for containing breach costs.

The research, conducted independently by Ponemon Institute and analyzed and published by IBM Security, constitutes the 18th annual Cost of a Data Breach Report. A leading benchmark study in the security industry, the report is designed to help IT, risk management and security leaders identify gaps in their security posture and discover what measures are most successful at minimizing the financial and reputation damages of a costly data breach.

The 2023 edition of the report draws analysis from a collection of real-world data breaches at 553 organizations, with thousands of individuals interviewed and hundreds of cost factors analyzed to create the conclusions in the report. (The breaches studied occurred between March 2022 and March 2023, so mentions of years in this post refer to the year of the study not necessarily the year of the breach.)

Explore the report

Top findings from the Cost of a Data Breach report

Below are some of the top findings from the 2023 Cost of a Data Breach Report.

1. Security AI and automation, a DevSecOps approach, and incident response (IR) plans led the way in cost savings. Some of the most effective security tools and processes helped reduce average breach costs by millions of dollars, led by security AI and automation. Those that used security AI and automation extensively saved an average of $1.76 million compared to those that had limited or no use. Meanwhile, organizations in the study that had robust approaches to proactive security planning and processes also reaped large benefits. A high-level use of a DevSecOps approach (a methodology for integrating security in the software development cycle) saved organizations an average of $1.68 million. And a high-level use of incident response (IR) planning and testing of the IR plan was also advantageous, leading to reduced costs of $1.49 million on average.

2. AI and ASM sped the identification and containment of breaches. Organizations with extensive use of security AI and automation detected and contained an incident on average 108 days faster than organizations that didn’t use security AI and automation. Additionally, ASMs, solutions that help organizations see the attacker’s point of view in finding security weaknesses, helped cut down response times by an average of 83 days compared to those without an ASM.

3. Costs were high and breaches took longer to contain when data was stored in multiple environments. Data stored in the cloud comprised 82% of all data breaches, with just 18% of breaches involving solely on-premises data storage. 39% of data breaches in the study involved data stored across multiple environments, which was costlier and more difficult to contain than other types of breaches. It took 292 days, or 15 days longer than the global average, to contain a breach across multiple environments. Data stored in multiple environments also contributed to about $750,000 more in average breach costs.

4. Organizations with internal teams that identified the breach fared much better at containing the cost. Just 33% of breaches in the study were identified by the organization’s internal tools and teams, while neutral third parties such as law enforcement identified 40% of breaches and the remaining 27% of breaches were disclosed by the attackers, such as in a ransomware attack. However, those organizations that identified breaches internally saved on average $1 million compared to breaches disclosed by the attackers. Investments in security were led by IR planning and testing, employee training and threat detection and response tools. Although just 51% of organizations said they increased security investments after the breach, those that did increase investment focused on areas that were effective at containing data breach costs, for a significant ROI, according to the study. 50% of those organizations plan to invest in IR planning and testing; 46% in employee training; and 38% in threat detection and response tools such as a SIEM.

Next steps

There’s a lot more quality research in the Cost of a Data Breach Report, but the most valuable component is the security recommendations from IBM Security experts, based on findings from the report.

View our security recommendations on the report landing page, where you can also register to download the full report.

Finally, hear directly from our experts in a special webinar detailing the findings and offering security best practices. Sign up for the webinar on August 1, 2023.

The post What’s new in the 2023 Cost of a Data Breach report appeared first on Security Intelligence.

]]>
How do some companies get compromised again and again? https://securityintelligence.com/articles/how-do-some-companies-get-compromised-again-and-again/ Fri, 16 Jun 2023 13:00:00 +0000 https://securityintelligence.com/?p=442710 Hack me once, shame on thee. Hack me twice, shame on me. The popular email marketing company, MailChimp, suffered a data breach last year after cyberattackers exploited an internal company tool to gain access to customer accounts. The criminals were able to look at around 300 accounts and exfiltrate data on 102 customers. They also […]

The post How do some companies get compromised again and again? appeared first on Security Intelligence.

]]>

Hack me once, shame on thee. Hack me twice, shame on me.

The popular email marketing company, MailChimp, suffered a data breach last year after cyberattackers exploited an internal company tool to gain access to customer accounts. The criminals were able to look at around 300 accounts and exfiltrate data on 102 customers. They also accessed some customers’ AIP keys, which would have enabled them to send email campaigns posing as those customers.

This data breach attack wasn’t especially noteworthy — until less than six months later, it happened again. As before, an intruder accessed internal tools to compromise data on 133 MailChimp accounts. The breach was made possible by a social engineering attack on employees and contractors to gain access to employee passwords.

The attack engendered follow-on attacks. One of MailChimp’s customers was the cloud service provider, DigitalOcean. As a result of the attack, that company was unable to communicate with customers for a few days and had to request that customers reset their passwords.

After the first breach, MailChimp told TechCrunch it had added an unspecified “additional set of enhanced security measures” and replaced its CISO.

The experience of getting attacked in a similar manner as a previous attack isn’t rare. In fact, it’s very common.

MailChimp is just one example of many

Repeated attacks are actually the norm, not the exception. Some two-thirds (67%) of companies attacked get attacked again within one year, according to a global study by the security posture management company, Cymulate. And 10% of companies experienced 10 or more incidents within a single year.

For ransomware attacks specifically, the number of companies suffering repeated ransomware attacks rose to 80%, according to an international Cybereason survey.

Which raises the question: Why are repeat attacks so incredibly common?

What goes wrong in attack recovery that invites new attacks?

Here’s an under-appreciated fact about what happens after a cyberattack: Malicious actors learn what’s possible.

In the MailChimp example, cyberattackers learned that 1) internal tools were vulnerable, and 2) they could be used to steal customer data.

Once that knowledge was out there, it gave cyber crooks an incentive and a target. In other words, we can assume that the most likely next attack will target the same vulnerabilities as the last attack. The second a cyber incident is publicized, the clock starts ticking on a copycat attack.

The worst thing a company can do is nothing.

The best thing is to focus like a laser beam on the specific vulnerabilities that lead to the attack in the first place so that copycat attackers can’t exploit the same issues.

What should companies do to prevent repeat attacks?

While, of course, all companies should do all they can to prevent cyberattacks, it’s especially important to prioritize protection against the kind of attack that has already occurred.

The right response to a major cyberattack is to launch a thorough reset of the organization’s cybersecurity approach and posture. The SolarWinds hack is one great example.

In December 2020, we learned of a sophisticated supply chain cyberattack launched by a nation-state using the SolarWinds Orion network management system. Through this software, the Russian-backed cyberattackers (APT29, aka Cozy Bear) breached systems inside multiple U.S. and European government agencies and private companies, including multinational drug and biotech company AstraZeneca. The attack was discovered by the security firm FireEye when it was itself compromised by the attack.

Changes to industrial and national policy after the SolarWinds catastrophe are well known. But less appreciated are the steps SolarWinds itself took after the attack. They handled the aftermath well.

SolarWinds added a cybersecurity committee to its board of directors, added former CISA Chief Chris Krebs and former Facebook and Yahoo Security Chief Alex Stamos as consultants to the board, and they instituted major changes to how they build software to support strong cybersecurity.

Of course, it’s unlikely most companies are going to bring on board two of the most prominent names in cybersecurity. But the SolarWinds example captures the necessary spirit of change — boosting security best practices into everything from how leadership leads to company code.

Effectively regrouping after an attack

After a major attack, every organization should do some soul-searching. It’s important to evaluate how leadership failed to lead, how the company failed to invest, how the policies were inadequate and how the company culture around cybersecurity was insufficient to prevent malicious attacks through social engineering or other methods. The result of this postmortem should be:

  • Changes in the org chart: Additions to the staff of senior-level security specialists like a CISO, a change in who reports to whom or the injection of strong cybersecurity experience to the board of directors.
  • A total overhaul of cybersecurity training for employees
  • Strong improvements to how and when patching and updates happen
  • The overhaul of the security posture to embrace Zero Trust.

In short, the particular vulnerabilities that opened the door to a cyberattack need to be aggressively prioritized for remediation. Because the bad guys see the published details of a cyberattack as an instruction manual to launch another one.

If you are experiencing cybersecurity issues or an incident, contact X-Force to help: U.S. hotline 1-888-241-9812 | Global hotline (+001) 312-212-8034.

The post How do some companies get compromised again and again? appeared first on Security Intelligence.

]]>
Going up! How to handle rising cybersecurity costs https://securityintelligence.com/articles/going-up-how-to-handle-rising-cybersecurity-costs/ Thu, 15 Jun 2023 13:00:00 +0000 https://securityintelligence.com/?p=442695 The average cost of cybersecurity systems, solutions and staff is increasing. As noted by research firm Gartner, companies will spend 11% more in 2023 than they did in 2022 to effectively handle security and risk management. This puts companies in a challenging position: If spending stays the same, IT environments are at risk. If they […]

The post Going up! How to handle rising cybersecurity costs appeared first on Security Intelligence.

]]>

The average cost of cybersecurity systems, solutions and staff is increasing. As noted by research firm Gartner, companies will spend 11% more in 2023 than they did in 2022 to effectively handle security and risk management.

This puts companies in a challenging position: If spending stays the same, IT environments are at risk. If they budget more for cybersecurity, funding for other projects may fall through.

The result? Businesses must balance rising cybersecurity costs with finite budget resources.

What’s driving increased costs?

Several factors are driving increased cybersecurity costs.

The first is evolving regulations, such as the new White House cybersecurity strategy. According to Utility Dive, the strategy focuses on industries such as energy and recommends that organizations build proactive cybersecurity that underpins interconnected hardware and software. Given that many enterprises still rely on legacy systems to support key functions, however, upgrading to proactive processes could come with a significant price tag.

And while private companies may not be subject to the same regulations, customers are increasingly concerned about data protection. According to TechRepublic, 45% say they would stop doing business with an organization after a successful cyberattack. So whether it’s to comply with government regulations or meet customer expectations, enterprises will likely pay more to build proactive cybersecurity frameworks.

Staffing also remains a key issue. Consider a 2022 survey from the World Economic Forum (WEF), which found 59% of companies had a shortage of cybersecurity skills and were worried about their ability to handle a cyberattack. When it comes to recruiting new staff, organizations face the dual cost of time and money. Given the high demand and low availability of security professionals, companies must create hiring strategies that go beyond salary to highlight the social impact and cultural benefits of coming on board.

How do companies navigate these new expenses?

There’s no way around it — prices are going up, and for companies to stay protected, they need to pay. While this isn’t something any executive wants to hear, it’s not all bad news. Here’s a look at four strategies to help manage cybersecurity spending.

Raising end-user costs

One option to balance out rising cybersecurity costs is passing on the increase to end users. By raising the costs of products and services, companies may be able to offset the price of new security solutions and break even on budgets.

This approach, however, comes with both pros and cons. On the pro side, small price increases across the board may be enough to balance out new spending. When it comes to cons, meanwhile, companies must consider the evolving impact of a looming recession. Charge too much, and budget-conscious consumers may simply take their business elsewhere, resulting in a net loss for organizations.

Covering the cost internally

It’s also possible to simply spend more on cybersecurity and cover the costs internally. While this does come with an initial cash outlay, many security solutions pay for themselves over time.

It’s worth noting, however, that these cost savings take the form of preventing incidents that could have crippled organizations. Consider that the average cost of a data breach in 2022 in the United States was $9.44 million. If more cybersecurity spending helps companies avoid an attack, the savings are substantial. The caveat? For this approach to work, C-suites must be on-board.

Prioritizing digital realignment

Businesses may also be able to minimize the impact of growing cybersecurity spending by embracing digital transformation. For example, shifting some or all of a company’s storage server management into the cloud can eliminate the need for physical data centers — and the costs that come with these physical locations, such as rent, power and on-site security.

In addition, cloud-based solutions offer the benefit of on-demand scalability. This removes the need for companies to purchase extra, unused server capacity for sudden traffic spikes or bandwidth needs. The money saved on these digital shifts can then be used to balance out cybersecurity budgets.

Shifting to managed services

Moving to a managed security services model is another way to keep cybersecurity costs under control. This is especially beneficial for smaller companies or those struggling to find skilled cybersecurity staff. By working with a trusted third-party provider, enterprises can reduce their risk of security incidents without the need to hire, train and compensate full-time staff.

In addition, managed options allow companies to choose the services they need to address specific concerns. This makes it possible for organizations to build predictable, reliable budgets that only change if services are added or removed.

Assessing the insurance impact

Half of the companies in the United States now have cyber insurance, according to Statista data. The market is also forecast to experience significant growth over the next five years.

This growth, however, is largely tied to the increasing number of cyberattacks that compel companies to make cyber insurance claims. As a result, the cost of cyber insurance is on the rise. As noted by Fortune, the average cost of cyber insurance in the United States rose 79% in the second quarter of 2022.

Insurance companies are also shifting some responsibility for successful claims onto enterprises. For example, many companies won’t issue policies until organizations demonstrate they have basic cybersecurity hygiene practices in place, such as the use of strong encryption and robust identity and access management (IAM) tools.

In other words, even buying insurance designed to protect against cybersecurity incidents requires pre-purchase spending to ensure policies and practices align with insurer expectations.

From obligation to investment

Cybersecurity is getting more expensive, and this upward trend is likely to continue as attack volumes rise, regulatory and customer expectations evolve and staffing shortages persist.

For organizations, the result is more spending to stay secure. And while it’s impossible to avoid this obligation, there’s an opportunity to see cybersecurity spending as an investment — one that reduces the risk of successful attacks, helps bolster customer trust and allows companies to streamline their IT operations.

The post Going up! How to handle rising cybersecurity costs appeared first on Security Intelligence.

]]>
SOCs spend 32% of the day on incidents that pose no threat https://securityintelligence.com/articles/socs-spend-32-percent-day-incidents-pose-no-threat/ Mon, 05 Jun 2023 19:00:00 +0000 https://securityintelligence.com/?p=442519 When it comes to the first line of defense for any company, its Security Operations Center (SOC) is an essential component. A SOC is a dedicated team of professionals who monitor networks and systems for potential threats, provide analysis of detected issues and take the necessary actions to remediate any risks they uncover. Unfortunately, SOC […]

The post SOCs spend 32% of the day on incidents that pose no threat appeared first on Security Intelligence.

]]>

When it comes to the first line of defense for any company, its Security Operations Center (SOC) is an essential component. A SOC is a dedicated team of professionals who monitor networks and systems for potential threats, provide analysis of detected issues and take the necessary actions to remediate any risks they uncover.

Unfortunately, SOC members spend nearly one-third (32%) of their day investigating incidents that don’t actually pose a real threat to the business according to a new report from Morning Consult. These false alarms waste valuable resources, time and money that are needed to deal with real and significant threats.

Why is this SOC statistic so high?

With the current labor shortages in cybersecurity-related fields, no one wants to waste time on meaningless tasks. So why is the percentage of false alarms this high?

One potential explanation is that businesses are not utilizing the right security tools to help reduce false alarms. The Morning Consult report found that nearly half (46%) of surveyed SOC professionals stated the average time to detect and respond to a security incident has increased over the past 2 years. Manual investigations were the number one contributor to slowed detection and response according to 81% of surveyed SOC professionals. If a SOC team uses manual-based processes or antiquated technologies to detect and investigate events, the likelihood of false positives increases dramatically.

Another possibility is that the team does not clearly understand the threats their organization faces. As a result, they cast too wide a net and end up wasting time investigating potentially harmless alarms. This is usually due to a lack of training (or appropriate budgeting) to ensure teams use the most up-to-date security technologies and processes.

How can businesses combat this issue?

Despite the current high rate of inefficiency in today’s SOCs, it’s not all bad news. There are proven ways to maximize the effectiveness of these teams while minimizing false alarms and wasted resources.

Incorporating SOAR security principles

The Security Orchestration, Automation, and Response (SOAR) model aligns and enhances various security operations into a seamless and unified process. It helps SOC teams to integrate their security tools, automate manual processes and facilitate intelligent decision-making capabilities.

SOC teams can incorporate SOAR principles into their operations in a few different ways:

  • Automate repetitive tasks: SOC teams often spend a lot of time and resources on repetitive and mundane tasks. The SOAR model can easily automate them, allowing SOC teams to focus on more critical security operations.
  • Collaboration and communication: The SOAR model emphasizes collaboration and communication between different stakeholders, including security teams, IT teams and business units. This can help SOC teams to gain more visibility into the current security situation and make more informed decisions.
  • Contextual intelligence: By leveraging internal and external threat intelligence, SOC teams can better understand emerging threats. SOAR models use machine learning and artificial intelligence algorithms to analyze threat data and provide real-time insights that can help SOC teams respond to threats more likely to pose a risk.

Register for the webinar: SOAR

Investing in SIEM tools

To minimize the risk of cyber threats, SOCs must invest in advanced security analytics tools, including Security Information and Event Management (SIEM) software, to identify, prioritize and respond effectively. SIEM software improves accuracy when detecting and responding to real threats while also minimizing the chances of false positives.

SIEM software analyzes the organization’s security logs and alerts SOC teams when a security incident occurs. However, without sufficient context, a SIEM tool can generate many false-positive alerts. This is where Artificial Intelligence (AI) comes into play. More AI and automation capabilities throughout toolsets would have the biggest impact on improving threat response time, according to 39% of SOC professionals survey in the report.

AI security tools are designed to use contextual data (such as network traffic, user activity, and external threats) to detect new and emerging patterns that may indicate malicious behavior. By providing the SIEM tool with this additional context, SOC teams can reduce false-positive alerts significantly while improving their ability to detect and respond to real-time threats.

Maximizing productivity through well-defined incident response plans

Another way to significantly reduce false positives’ impact on SOC team productivity is to have well-defined incident response plans. By implementing a well-defined incident response plan, SOC teams can maximize their productivity and focus on genuine threats.

Here are a few ways incident response plans can positively impact SOC teams:

  • Standardizing processes: Incident response plans provide a standardized approach to handling security incidents. This means that SOC teams can quickly identify the type of event, assess the potential impact, and respond accordingly. By having a consistent process, teams can save time and reduce the risk of overlooking critical issues.
  • Prioritizing alerts: With a well-defined incident response plan, SOC teams can prioritize alerts based on their severity level and potential impact. This means that teams can focus on the most critical issues and reduce time spent investigating benign events.
  • Enhancing communication: Incident response plans also facilitate better communication between team members. With a transparent process, team members can quickly understand their roles and responsibilities during an incident. Clear communication can help teams work more efficiently and ensure everyone is on the same page when working towards resolutions.

Explore QRadar Suite

Make sure you’re getting the most out of your SOC

Running a SOC can come at a significant cost. As such, it’s crucial to ensure you’re getting the most out of your investment. Equipping your team with the tools and processes necessary for success is critical.

If a SOC is only running at two-thirds of its potential, it could cost your organization more than the initial investment. By investing in advanced security analytics tools and well-defined incident response plans, SOC teams can maximize their efficiency and reduce the risk of false alarms.

More than ever, it’s vital for companies to set their SOCs up for success. Ensuring SOC teams are equipped with the right tools and processes today will build a more secure and cost-effective future.

The post SOCs spend 32% of the day on incidents that pose no threat appeared first on Security Intelligence.

]]>
Despite tech layoffs, cybersecurity positions are hiring https://securityintelligence.com/articles/despite-tech-layoffs-cybersecurity-positions-are-hiring/ Fri, 26 May 2023 13:00:00 +0000 https://securityintelligence.com/?p=442348 It’s easy to read today’s headlines and think that now isn’t the best time to look for a job in the tech industry. However, that’s not necessarily true. When you read deeper into the stories and numbers, cybersecurity positions are still very much in demand. Cybersecurity professionals are landing jobs every day, and IT professionals […]

The post Despite tech layoffs, cybersecurity positions are hiring appeared first on Security Intelligence.

]]>

It’s easy to read today’s headlines and think that now isn’t the best time to look for a job in the tech industry. However, that’s not necessarily true. When you read deeper into the stories and numbers, cybersecurity positions are still very much in demand. Cybersecurity professionals are landing jobs every day, and IT professionals from other roles may be able to transfer their skills into cybersecurity relatively easily.

As cybersecurity continues to remain a top business priority, organizations will likely keep hiring for cybersecurity roles. Companies are increasingly recognizing that without experienced team members, they are increasing their risk of a cybersecurity attack or breach.

Layoffs avoided cybersecurity personnel

According to Layoffs.fyi, more than 500 tech companies laid off over 153,000 employees between January 1, 2023, and March 23, 2023, and more than 161,000 were laid off in 2022. While those numbers include roles throughout the industry, not all jobs are affected equally.

Dan Walsh, chief information security officer with VillageMD, recently told Fortune that cybersecurity layoffs have been few and far between. The numbers back up this claim. The National Initiative for Cybersecurity Education at the National Institute of Standards and Technology reported that cybersecurity demand increased in both the public (25%) and private sectors (21%) in 2022. Additionally, 755,743 cybersecurity jobs were posted in 2022, only a 2% decrease from the same time period the previous year.

While cybersecurity analysts and engineers often top the open positions, many other lesser-known roles exist in cybersecurity.

Over 5,800 incident response positions, which are responsible for handling the aftermath of an incident, have been recently posted on Indeed. Also in demand, malware analysts evaluate an organization’s systems, data and applications to detect malware and then determine the best course to remediate.

Additionally, positions for employees who test for vulnerabilities, known as pentesters, offer a good entry point for new graduates or employees transferring from other roles. Threat hunting, which involves reviewing all the security data and systems to look for abnormalities and potential malware issues, also offers many career options.

Landing an entry-level cybersecurity job

Employers are increasingly using criteria other than four-year degrees when hiring for cybersecurity positions.

Entry-level job hunters often land their first job through skilling badges or certifications. Both methods show potential employers that the candidate has the expertise and knowledge needed to hit the ground running.

With many different certifications to choose from, it’s important to start by understanding your career goals and then selecting the one that hiring managers are most likely to recognize. You should also consider the time required to earn the badge or certification, as well as the cost. Think about the return on your investment compared to the salary of the expected job.

Transitioning from other roles to cybersecurity

Many cybersecurity employees move into the industry from non-technology positions. Many positions require excellent problem-solving and analytical skills that can be learned in other industries.

Caitlin Kiska, an information security engineer, was previously a professional online poker player. She discovered that her ability to analyze and form strategies from large data sets is a much-needed skill in cybersecurity.

Before moving into cybersecurity, Matt Gimovsky worked in litigation for eight years. He found that his expertise in strategic/tactical advising combined with his long-term passion for technology transferred easily into his current role as a technology transactions advisor for a fed/civ cyber defense business.

Current or recently laid-off technology workers can also easily transition into cybersecurity from a wide range of roles.

Software engineers should look for opportunities in cybersecurity to create applications that detect cyberattacks or are more cyber resilient. Because IT support specialists have strong analytical skills and significant hardware/software knowledge, they can quickly investigate cybersecurity incidents. Cybersecurity involves identifying patterns, making it relatively easy for data analysts to move into positions that detect and respond to threats.

Updating your resume for a cybersecurity role

Because cybersecurity hiring managers realize that their next best team member may not have a traditional background, you should include many experiences and skills you may not consider. By highlighting both technical and soft skills, you increase your chances of landing the job. Before applying to a cybersecurity position, reevaluate your resume through the lens of a cybersecurity hiring manager.

Make sure that your technical skills are highlighted and easy for a manager to find. Highlight your current skills, badges and certifications. Consider having a section that highlights your experience with the most recent threats and solutions, especially in terms of ransomware. If you have used the latest automation and artificial intelligence tools, be sure to include these skills as well.

In addition to the skills, companies want employees with experience to put their expertise to use. Consider also including any results or significant experience, such as vulnerabilities you’ve detected or incidents you successfully managed. By sharing specifics, you demonstrate both your knowledge and your ability to apply those skills while under pressure. Companies want to know that employees can handle the pressure and stress of managing real-world threats in real-time.

Cybersecurity positions involve a significant amount of collaboration and teamwork. Hiring managers increasingly look for soft skills, such as leadership, curiosity, tenacity, passion, problem-solving, teamwork and thriving under pressure. Because the ability to communicate effectively with team members and other employees is critical to success in cybersecurity, managers often prioritize applicants with communication skills. Add examples of any responsibilities that involved combination, such as cross-department collaboration or conducting training.

The cybersecurity job market remains strong

As cybersecurity continues to be a top business priority, it will likely remain a valuable position at organizations. Even businesses with hiring freezes may continue to fill open cybersecurity positions due to the critical nature of the work. By looking for opportunities in cybersecurity, job seekers can find job security while working on fulfilling projects.

The post Despite tech layoffs, cybersecurity positions are hiring appeared first on Security Intelligence.

]]>