Data Protection – Security Intelligence
Analysis and Insight for Information Security Professionals

Cost of a data breach: Cost savings with law enforcement involvement
https://securityintelligence.com/articles/cost-of-a-data-breach-cost-savings-law-enforcement/ (Tue, 03 Sep 2024)


For those working in the information security and cybersecurity industries, the technical impacts of a data breach are generally understood. But for those outside of these technical functions, such as executives, operators and business support functions, explaining the real impact of a breach can be difficult. Framing impacts in terms of quantifiable financial figures and other simple metrics creates a relatively level playing field for most stakeholders, including law enforcement.

IBM’s 2024 Cost of a Data Breach (“CODB”) Report helps explain the financial impact of involving law enforcement in the response. Specifically, the CODB Report, which studied over 600 organizations, found that when law enforcement assisted the victim of a ransomware attack, the cost of the breach was lower by an average of $1 million, excluding the cost of any ransom paid. That is an increase over the 2023 CODB Report, when the difference was closer to $470,000.

But law enforcement involvement is not ubiquitous. Of the organizations surveyed that faced a ransomware attack, only 52% involved law enforcement, and the majority of those (63%) also did not end up paying the ransom. Moreover, the CODB Report found law enforcement support helped reduce the time to identify and contain a breach from 297 days to 281.

So why are nearly half of victims not reaching out to law enforcement? Let us look at a few possibilities.


Awareness, embarrassment, secrecy and trust

Outside of cyberspace, a 911 call to local law enforcement is a pretty reasonable first call when falling victim to a crime. But there is no “911” to dial for a cyberattack, and certainly no menu options for ransomware, data exfiltration or destructive attacks. Even experienced incident responders will likely share experiences where the opening questions to the victim are, “Have you contacted law enforcement?” or “Have you reported this to IC3 (the FBI’s Internet Crime Complaint Center)?” The first answer is often “no” or “not yet,” while the second is “I see what?” The awareness issue, therefore, is still prevalent.

We must also consider emotional responses, such as embarrassment. Think of the employee who may be wondering, “Was I responsible for this by clicking a wrong link?” Embarrassment leads to reluctance, so both organizations and law enforcement must communicate clearly to their people and partners that reaching out for help is okay. Then add in another psychological factor: threat actors often demand that victims not contact law enforcement.

There is also the secrecy aspect, especially from a business impact perspective. Decision makers may not yet know the business impact of law enforcement involvement. Will the news go public? Will competitors find out? What privacy assurances are available? All of these are reasonable questions, and they will only grow in importance as regulatory requirements for reporting cyber crimes expand.

Trust ties all of these factors together, ranging from the benign “Can I trust law enforcement?” to the explicit “We do not trust law enforcement.” These gaps must be bridged.

Building relationships and the future of reporting

Managing a crisis requires competence, but also trust, so exchange business cards before the incident. The issues identified above can be proactively addressed by reaching out to law enforcement partners before you need them. Learn the capabilities of your local agencies, and request meet-and-greets with those in your state and federal regions.

Remember, there is a little “Customer Service 101” at work here. When an incident hits, whom do you want on the phone: the general helpline, or somebody you know and have a bond with?

Moreover, cyber crime reporting is increasingly becoming a public matter, as the SEC’s disclosure rules show. Having relationships in place will be beneficial: they can buy time and serve as extra hands.

From a cost-savings perspective, the case for involving law enforcement appears clear-cut; what remains is largely a cultural issue. Make friends, build two-way trust and establish protocols. These steps can go a long way toward reducing the pain and cost of an attack.

Cost of data breaches: The business case for security AI and automation
https://securityintelligence.com/articles/cost-of-data-breaches-business-case-for-security-ai-automation/ (Tue, 27 Aug 2024)


As Yogi Berra said, “It’s déjà vu all over again.” If the idea of the global average costs of data breaches rising year over year feels like more of the same, that’s because it is. Data protection solutions get better, but so do threat actors. The other broken record is the underuse or misuse of technologies that can help safeguard data, such as artificial intelligence and automation.

IBM’s 2024 Cost of a Data Breach (CODB) Report studied 604 organizations across 17 industries in 16 countries and regions, covering breaches that ranged from 2,100 to 113,000 compromised records. A key finding was that extensive use of security AI and automation reduced breach costs by an average of $2.2 million. And for CISOs and security teams seeking investment, talking dollars and cents — and not bits and bytes — is what will resonate with the audience.

Where are the savings being realized?

Cyber resilience is more than just disaster recovery, though disaster recovery is an important component of it. A resilient program blends both proactive and reactive workflows, including the technology involved. And when the individual pieces work well together with the proper support, the result is a sum greater than its parts.

Indeed, the 2024 CODB Report found that when AI and automation were deployed extensively across preventative or proactive workflows (e.g., attack surface management, red-teaming, posture management), organizations realized the savings. There is an interesting nexus here, as taking a “prevention over response” approach may, in fact, be driven by greater AI threats and use.

Moreover, the CODB Report identified that — yet again! — the skills shortage is impacting the industry. With staff feeling overwhelmed, particularly during incident response cases, artificial intelligence can be the support tool that helps retain staff. Security and managerial staff should be mindful that not investing in tools and solutions can mean losing highly skilled staff who hold institutional knowledge. The unintended consequence? Extra costs to re-staff the positions.


Plan as a unit, implement as a unit

Organizations still addressing cybersecurity in separate silos or with limited visibility are increasing the entire organization’s risk profile, not just that of the security function. We live in a time when technology is mission-critical to delivering services; it is no longer just about delivery efficiencies and competitiveness. Therefore, keep these issues in mind when planning as a unit:

  1. Eliminate data blind spots. Many of us call these “the crown jewels” of the organization, but with all the data produced these days and the difficulties surrounding data lifecycle management, what’s really under the hood? Consider a data security posture management solution and be mindful of shadow data.
  2. Security-first approach. Easier said than done, but “designing in” security to workflows and solutions — albeit a bit more difficult to deploy — means eliminating unnecessary, often fragile, complexities that are complicated and expensive to fix after an incident.
  3. Culture, culture, culture. Change is difficult to institute, especially new technologies, such as generative AI. Get people to buy into the security mindset, but not at the cost of business delivery. Remember, they are not only important users but are also key to successful implementations and improvements.

It’s being used, so use it wisely

The CODB Report also found that two out of three organizations studied had deployed security AI and automation in their security operations centers. With this level of adoption, ubiquity is likely on the horizon.

Therefore, the key is to use the technology smartly, in a manner that addresses the organization’s risk profile and makes business sense. The business case becomes easier when the average cost of a data breach, according to the report, is $4.88 million. This year’s findings show that the investment can be worthwhile.

Cost of a data breach: The industrial sector
https://securityintelligence.com/articles/cost-of-a-data-breach-industrial-sector/ (Tue, 20 Aug 2024)


Industrial organizations recently received a report card on their performance regarding data breach costs. And there’s plenty of room for improvement.

According to the 2024 IBM Cost of a Data Breach (CODB) report, the average total cost of a data breach in the industrial sector was $5.56 million. This reflects an 18% increase for the sector compared to 2023.

These figures place the industrial sector in third place for breach costs among the 17 industries studied. On average, data breaches cost industrial organizations 13% more than the $4.88 million global average.

Clearly, the industrial sector is facing strong headwinds when it comes to dealing with data breaches. Let’s take a closer look at some of the challenges tied to the sector, as well as solutions that can help reduce the impact of cyberattacks on industrial organizations.

Highest increase in cost of data breach

The industrial sector experienced the highest data breach cost increase of all industries surveyed in the 2024 CODB report, rising by an average of $830,000 per breach over last year. Organizations in this sector are highly sensitive to operational interruptions, since a manufacturing plant shutdown can be devastating. For example, unplanned downtime, perhaps due to ransomware, could cost up to $125,000 per hour.

Part of the problem may be found in the time to identify and contain a data breach at industrial organizations. At 199 days to identify and 73 days to contain, the sector sits above the global averages of 194 days to identify and 64 days to contain.

The 2024 CODB report also revealed the root causes of data breaches at industrial organizations:

  • Malicious attack (47%)
  • IT failure (26%)
  • Human error (27%)

Regulations for the industrial sector

The industrial sector faces unique regulations that may also contribute to data breach costs. For example, the North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP) standards apply to the energy sector, requiring stringent cybersecurity measures to protect the bulk power system. These include asset management, personnel training, incident reporting and recovery plans. Non-compliance with NERC CIP standards can result in fines of up to $1 million per day per violation, highlighting the critical importance of adhering to these cybersecurity measures.

Furthermore, the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA) mandates how critical infrastructure organizations will be required to report cyber incidents to the federal government. Because a large part of the industrial sector falls within critical infrastructure, it will be required to adhere to these requirements as well.

Industrial sector cybersecurity needs

The industrial sector requires specialized cybersecurity solutions due to its reliance on operational technology (OT) and industrial control systems (ICS). Also, the interconnected nature of industrial supply chains makes vendor risk management and secure communication channels imperative.

For the industrial sector, hybrid cloud environments are evolving and scaling faster than ever, creating a larger and more complex attack surface. To meet these demands, Security Information and Event Management (SIEM) can help industrial organizations detect and prioritize threats. SIEM provides real-time visibility, enabling the rapid identification and response to potential security incidents.
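To make the SIEM idea concrete, below is a minimal sketch, in Python, of the kind of correlation rule a SIEM evaluates in real time: alerting when an asset sees repeated failed logins inside a short window. The event format, field names, threshold and window are hypothetical placeholders rather than any particular product’s API; a production SIEM applies far richer, vendor-specific rule languages.

    from collections import deque
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=5)   # correlation window (assumed)
    THRESHOLD = 5                   # failed logins before alerting (assumed)

    recent_failures = {}            # asset -> timestamps of recent failures

    def ingest(event):
        """Correlate one log event; alert on brute-force-like patterns."""
        if event["type"] != "auth_failure":
            return
        ts = datetime.fromisoformat(event["timestamp"])
        q = recent_failures.setdefault(event["asset"], deque())
        q.append(ts)
        while q and ts - q[0] > WINDOW:   # drop failures outside the window
            q.popleft()
        if len(q) >= THRESHOLD:
            print(f"ALERT: {len(q)} failed logins on {event['asset']} in {WINDOW}")

    # Example event, as it might arrive from an ICS jump host:
    ingest({"type": "auth_failure", "asset": "plc-gateway-01",
            "timestamp": "2024-08-20T13:00:00"})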

AI and automation cut data breach costs

The 2024 CODB report also revealed that only 32% of industrial organizations make extensive use of security AI and automation. Meanwhile, extensive use of security AI and automation delivered a $1.9 million cost savings compared with no use at all.

AI-powered automation can accelerate threat response dramatically and drive down data breach costs considerably. For industrial organizations, this can minimize business risk while reducing damages and service interruptions.

Let’s hope that next year’s CODB report will show a new trend for the industrial sector, one that reveals costs are coming down.

Cost of a data breach 2024: Financial industry
https://securityintelligence.com/articles/cost-of-a-data-breach-2024-financial-industry/ (Tue, 13 Aug 2024)


According to the IBM Cost of a Data Breach 2024 report, the average global breach cost has reached $4.88 million — a significant increase over last year’s $4.45 million and the biggest jump since the pandemic.

For financial industry enterprises, costs are even higher. Companies now spend $6.08 million dealing with data breaches, which is 22% higher than the global average.

Here’s what financial organizations need to know about this year’s Cost of a Data Breach report.

2024 at a glance: Time-consuming and costly

Financial firms had the second highest breach cost of any industry; only healthcare attacks were more expensive. Both healthcare and finance saw the same costs for large-scale breaches: When 50 million records or more were compromised, average costs skyrocketed to $375 million.

Malicious attacks remained the top root cause of breaches in finance, at 51%, while IT failures and human error each accounted for roughly one-fourth of breaches, coming in at 25% and 24%, respectively.

In terms of detection time, financial industry organizations took an average of 168 days to identify and 51 days to contain a breach. While this is lower than the global average of 194 days to identify and 64 days to contain, it’s still a significant period of time.

Consider that 168 days works out to just under six months. That’s six months of attackers infiltrating systems, carrying out reconnaissance and compromising accounts.


Tracking data breach trends over time

Simply put, costs are going up.

In 2021, the average cost of a data breach for financial firms was $5.72 million. By 2022, it reached $5.97 million, and it remained roughly stable at $5.9 million for 2023. This year saw a 3% jump in average breach costs, plus a $40 million bump in the cost of breaches involving 50 million or more records.

But it’s not all bad news. Detection times are nine days shorter, and containment times are five days faster. In addition, 2024 saw a significant reduction in human error. As noted above, 24% of breach root causes this year were tied to accidental activity. In 2023, meanwhile, this number was 33%.

Where financial firms are investing in security — and how it can help

To help reduce the risk of data breaches, finance firms are spending more on incident response (IR) and identity and access management (IAM). Reduced costs make the impact clear: Companies with IR teams and robust security testing save $248,000 per year on average, while those with IAM solutions save up to $223,000 each year.

The biggest success stories for financial IT investment, however, are AI and automation. According to study data, firms that use AI and automation save an average of $1.9 million compared to those that don’t.

It’s worth noting, however, that just 24% of generative AI initiatives are secured. As a result, it’s critical for financial firms to develop security frameworks for these tools or run the risk of AI becoming an additional threat vector.

The role of regulation in financial security

Both investment and intelligent security management are critical for finance firms, given the scrutiny they face from regulatory agencies and the large number of compliance regulations they need to navigate.

For example, while firms are familiar with anti-money laundering (AML) rules under the Bank Secrecy Act (BSA) and the segregation of duties required by the Sarbanes-Oxley Act, they may encounter challenges with more regional regulations such as the CCPA, GDPR and the LGPD. Under GDPR, for example, financial organizations can face fines of up to 2% of annual worldwide turnover for less severe violations, and up to 4% for more serious ones.

Put simply? The costs of a data breach for financial firms go beyond detection, removal and remediation. Delays in finding and eliminating threats can lead to additional regulatory costs that may outpace initial expenses.

As the Cost of a Data Breach 2024 report shows, however, robust investment in IR, IAM and AI can help companies shore up defenses and keep costs down.

Cost of a data breach: The healthcare industry
https://securityintelligence.com/articles/cost-of-a-data-breach-healthcare-industry/ (Tue, 06 Aug 2024)


Cyberattacks grow every year in sophistication and frequency, and the cost of data breaches continues to rise with them. A new report by IBM and the Ponemon Institute, the 2024 Cost of a Data Breach Study, details the financial impacts of attacks across multiple industries.

The global average cost of a data breach reached an all-time high of $4.45 million in 2023, which is a 15% increase over the past three years. This increase was mainly driven by rising expenses associated with lost business and post-breach response actions, according to the report. The United States exceeded all other nations in the highest average cost per breach at $9.48 million.

As in past years, the healthcare industry suffered the highest average breach costs at $10.93 million, followed by the financial sector at $5.9 million. Healthcare data breaches typically last 213 days before discovery, well above the cross-industry average of 194 days.

Recent years have also shown a troubling new trend: the rise of very large breaches involving millions of records.

Unique challenges, significantly higher costs

Over the past decade, healthcare has consistently been one of the most expensive industries for data breaches, with costs significantly higher than the global average. But the costs have grown across industries. In 2014, for example, the average total cost of breaches was $3.5 million.

Regulations governing data handling in healthcare, including HIPAA (Health Insurance Portability and Accountability Act), HITECH Act (Health Information Technology for Economic and Clinical Health Act) and even GDPR (General Data Protection Regulation), also contribute to the industry’s high average cost of data breaches.

The study also addressed the ongoing challenge of breaches involving stolen credentials, which took the longest to resolve at an average of 292 days. Only one-third of breaches were detected by internal security staff.

The report contained a particularly useful new finding: Organizations making extensive use of security AI and automation enjoyed an average cost reduction of $1.76 million compared to those without such technologies. AI and automation also reduced the breach lifecycle by an incredible 108 days on average, according to the report.


How healthcare can strengthen its cyber profile

The report suggests other ways to potentially reduce the cost of data breaches. Involving law enforcement in ransomware attacks, for example, reduced the average cost by nearly $1 million. Counterintuitively, perhaps, the report found that organizations that paid ransoms did not see significant cost savings compared to those that did not pay.

In addition, storage matters. Data storage environments affect breach costs and containment times. Breaches involving data stored across multiple environments incurred higher costs and took longer to contain, for example.

The report also advised incident response planning and testing, as well as the integration of AI threat detection and response systems, and it urged the development of security frameworks specifically for AI initiatives. This includes securing training data, monitoring for malicious inputs and using AI security solutions.

Embracing a multi-pronged approach

Remediation for breaches in the healthcare industry should involve a range of strategies, including:

  • Incident response planning and testing
  • Employee training
  • Deployment of AI and automation in cybersecurity
  • A risk mitigation strategy covering the location, use and encryption of data
  • Identity and access management
  • Embracing DevSecOps to build security into applications, tools and platforms across on-premises and cloud environments

Data breaches in the healthcare industry typically involve data stored across multiple environments, including public cloud, private cloud and on-site servers. This multi-environment storage approach reflects the complexity and diverse data storage needs of healthcare organizations but adds to the challenge of securing this data. In the face of these complex needs, investing in managed security services can help healthcare organizations get the most out of their cybersecurity.

Learn how to protect your most sensitive healthcare data with identity solutions from IBM.

Surging data breach disruption drives costs to record highs
https://securityintelligence.com/posts/whats-new-2024-cost-of-a-data-breach-report/ (Tue, 30 Jul 2024)


Security teams are getting better at detecting and responding to breach incursions, but attackers are inflicting greater pain on organizations’ bottom lines. IBM’s recent Cost of a Data Breach Report 2024 found the global average breach hit a record $4.88 million. That’s a 10% increase from 2023 and the largest spike since the pandemic.

While the study notes that organizations, on average, improved their time to identify and contain breaches, rising business costs drove the global average breach cost higher. Among the largest contributors were lost business costs, expenses from post-breach customer support (such as setting up help desks and credit monitoring services) and paying regulatory fines. Some 70% of the 604 organizations studied reported that their operations were either significantly or moderately disrupted.

The new research, conducted independently by Ponemon Institute and analyzed by IBM, studied breached organizations from 16 countries and regions and across 17 industries. It also included interviews with 3,556 security and business professionals from the breached organizations. In its 19th year, the Cost of a Data Breach Report provides actionable insights and up-to-date research, making it a critical benchmark for the industry.

While the report’s findings suggest some damages from a breach are unavoidable, they also highlight several risk areas that security teams can and should address. For instance, the findings underscore the growing importance of security AI and automation technologies for mitigating breach impacts and lowering costs associated with those breaches.

Below are those takeaways and several others from the Cost of a Data Breach Report 2024.

AI and automation in security most effective at reducing average costs

More organizations are adopting AI and automation in their security operations, up 10% from the 2023 report. And most promising, the use of AI in prevention workflows had the highest impact in the study, reducing the average cost of a breach by $2.2 million, compared to organizations that didn’t deploy AI in prevention.

Two out of three organizations in the study deployed AI and automation technologies across their security operations centers. This factor may also have contributed to the overall decrease in average response times – those using AI and automation saw their time to identify and contain a breach lowered by nearly 100 days on average.

Only 20% of organizations said they are using gen AI security tools, yet those that did saw a positive impact, with gen AI security tools shown to mitigate the average cost of a breach by more than $167,000.


Security staffing shortages led to higher breach costs and more security investment

Staffing shortages in security departments continued to grow, with 53% of organizations facing a high-level skills shortage, up 26% from 2023. The industry-wide skills shortage could be expensive for organizations. Those with severe staffing shortages experienced breach costs that were $1.76 million higher on average than those with low-level or no security staffing issues.

These staffing shortages may be contributing to the increasing use of security AI and automation, which has been shown to reduce data breach costs. At the same time, the shortages may ease somewhat, as businesses reported that they intend to increase security investments as a result of the breach. Planned investments include threat detection and response tools such as SIEM, SOAR and EDR, according to the report, as well as identity and access management and data protection tools.

These additional investments could pay off in mitigating future breach costs. More organizations in 2024 identified the breach with their own security teams and tools (42%) compared to last year (33%), and those organizations had lower-than-average breach costs, nearly $1 million lower on average than breaches that were disclosed by the attacker, such as in an extortion attack.

Cloud and data security issues remained prominent

Forty percent of breaches involved data stored across multiple environments, including public cloud, private cloud and on-premises. These multi-environment breaches cost more than $5 million on average and took the longest to identify and contain (283 days), highlighting the challenge of tracking and safeguarding data, including shadow data and data in AI workloads, which can be unencrypted.

The types of data records stolen in these breaches underscored the growing importance of protecting an organization’s most sensitive data, including customer personally identifiable information (PII), employee PII and intellectual property (IP). Costs associated with customer PII and employee PII records were the highest on average.

Customer PII was involved in more breaches than any other type of record (46% of breaches). However, IP may grow even more accessible as gen AI initiatives bring this data out in the open. With critical data becoming more dynamic and available across environments, businesses will need to assess the specific risks of each data type and their applicable security and access controls.

What else is new in the 2024 Cost of a Data Breach Report

Each year poses new data security challenges as threats and technologies emerge, and this report evolved to reflect these changes. New research conducted for the first time this year in the 2024 Cost of a Data Breach Report included:

  • Organizations experiencing long-term operational disruption, and the time it takes to restore data, systems or services to their pre-breach state
  • To what extent organizations are using AI and automation in each of four areas of security operations: prevention, detection, investigation and response
  • How long it took organizations to report the breach if they were mandated to do so
  • Whether organizations that involved law enforcement following a ransomware attack paid the ransom

Of course, the report continues to showcase the top costliest geographies and industries, the initial causes of data breaches and their costs, and much more. Importantly, the report continues to provide recommendations from IBM experts, addressing the report findings, to help organizations understand the risks and how to mitigate the impacts and potential costs of a data breach.

Download a copy of the 2024 Cost of a Data Breach Report, and sign up for the Cost of a Data Breach webinar on Tuesday, August 13, 2024, at 11:00 a.m. ET.

Overheard at RSA Conference 2024: Top trends cybersecurity experts are talking about
https://securityintelligence.com/articles/overheard-at-rsa/ (Tue, 14 May 2024)


At a brunch roundtable, one of the many informal events held during the RSA Conference 2024 (RSAC), the conversation turned to the most popular trends and themes at this year’s events. There was no disagreement in what people presenting sessions or companies on the Expo show floor were talking about: RSAC 2024 is all about artificial intelligence (or as one CISO said, “It’s not RSAC; it’s RSAI”).

The chatter around AI shouldn’t have been a surprise to anyone who attended RSAC in 2023. Generative AI as we know it today was only a few months old then. Everyone wanted to talk about it, but no one was quite sure of the impact it would have on cybersecurity.

A year later, there are still a lot of questions, but the profession has embraced AI into its tools and solutions. It was by far the most popular topic across the educational sessions and in demonstrations and presentations across the Expo. But it wasn’t the only issue that cybersecurity professionals were contemplating. Here are some of the most popular topics that people at RSAC were talking about.

AI isn’t just generative AI

There were over 100 sessions that dealt with AI at the conference. Many conference attendees were most interested in the double-edged sword of generative AI: how to use it as a tool to detect and prevent cyberattacks and how cyber criminals use the technology to launch attacks. AI’s role in misinformation campaigns and developing deepfakes has many people worried about a significant shift in the way threat actors use social engineering. This worry only compounds with the concern that security awareness training won’t be able to keep up.

The term “shadow AI” was mentioned a number of times, often by CISOs who expressed concern that the risks faced through shadow IT and shadow cloud behaviors are beginning to repeat themselves in the use of unauthorized AI. Right now, much of shadow AI involves employees who use tools like ChatGPT as research resources and trust the information they receive as absolute truth. But as employees become more sophisticated in using AI tools and as generative AI shows itself to be a potential security risk, CISOs want to see AI policies and approved tools adopted across their organizations sooner rather than later.

However, one of the issues that cybersecurity experts were quick to point out is the need to separate generative AI from other types of AI. Because of the overwhelming presence of AI throughout the conference, the technology has a feeling of newness to it, as if it were something introduced only in the past year. Many of the panel discussions covered machine learning and large language models and how to build on the predictive benefits these technologies bring to cybersecurity tools. AI isn’t new, one CISO said; it’s been around in some form for decades. The hope is that the AI hype of this year settles down by RSAC 2025 and that there will be more positive discussions around building better predictive models with AI or more defined uses of the tool.

Data governance and AI

One topic that seemed to come up almost as much as AI was data governance. Some of the conversations were around AI’s role in data governance, but cybersecurity professionals also spoke of the need to know their data and build out policies that will meet ever-evolving compliance standards. Data governance was commonly mentioned along with the SEC cybersecurity disclosure rules and other government regulations. As one cybersecurity executive pointed out, the struggle with data governance comes down to the biases of three different groups within a company: the engineers who create the data, the C-suite team who use the data, and the CISO who controls the data and the security around it. There is no agreement on what constitutes metadata, and until there is governance that reconciles all three points of view, true data governance will be difficult, if not impossible, to achieve—and that hurts overall security efforts.

The absence of zero trust

In 2023, zero trust was far and away the most discussed topic at RSAC. While everyone wanted to talk about generative AI last year, it was often centered around zero trust architecture and principles. This year, zero trust was pushed into the RSAC dustbin. Oh, it was still there: eight sessions had a focus on zero trust and it was highlighted in more than a few company displays. But it has moved beyond its initial buzz, which one CISO suggested wasn’t that surprising.

Applying zero trust principles is time-consuming, and because it has been a couple of years since the White House released its cybersecurity executive order, many companies are already well into their zero trust journey. It may be because it is no longer the “it” buzz term, or it may be because there isn’t the demand for more information, but the glow around zero trust has officially dimmed.

Budgets, or lack thereof

At the brunch roundtable mentioned earlier, one of the CISOs said they expected to hear a lot about security budgets, or, more to the point, the lack of them. Funding for security came up frequently, as many security professionals weren’t afraid to say they were struggling to balance budget cuts against the rising costs of cyber incidents.

IT and security departments need to do a better job of learning the language of business executives and explaining how and why cybersecurity fits into the corporate model and overall business operations. But if cuts to security budgets continue, with layoffs of experienced security personnel and the inability to get the tools needed to keep up with the latest threats, especially around AI security models, companies will get hit with cyberattacks, and the costs will be greater than the budget savings.

It’s clear from this year’s RSAC that we’re just at the tip of the iceberg when it comes to AI advancements—and the hype around it doesn’t appear to be going anywhere anytime soon. But what security concern, emerging tech or new marketing buzzword will be top of mind for attendees at next year’s RSAC?

3 Strategies to overcome data security challenges in 2024
https://securityintelligence.com/articles/overcome-data-security-challenges-2024/ (Wed, 27 Mar 2024)


There are over 17 billion internet-connected devices in the world — and experts expect that number will surge to almost 30 billion by 2030.

This rapidly growing digital ecosystem makes it increasingly challenging to protect people’s privacy. Attackers only need to be right once to seize databases of personally identifiable information (PII), including payment card information, addresses, phone numbers and Social Security numbers.

In addition to the ever-present cybersecurity threats, data security teams must consider the growing list of data compliance laws and regulations.

Here are three strategies to tackle data security challenges in 2024.

1. Assure privacy

Security teams must make privacy paramount to reduce the risks of identity theft and fraud. Access should match need: physicians require access to comprehensive patient health data, for example, while billing clerks need to see only insurance numbers and addresses.

Here are a few ways to assure data privacy:

  • Encrypt your personal and customer data to make it unreadable to unauthorized users (see the sketch after this list)
  • Conduct regular security audits to identify weak points and reduce the risk of data breaches
  • Adopt a zero trust security model to minimize the chance of unauthorized access.
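As a minimal illustration of the first point, the sketch below encrypts a customer record with a symmetric key so it is unreadable without that key. It uses the open-source Python cryptography package; the record contents are hypothetical, and real deployments would store the key in a dedicated key management system rather than generate it inline.

    from cryptography.fernet import Fernet

    # Generate a key once and keep it in a key management system,
    # never alongside the data it protects.
    key = Fernet.generate_key()
    f = Fernet(key)

    record = b'{"name": "Jane Doe", "card": "4111-1111-1111-1111"}'
    token = f.encrypt(record)   # ciphertext: unreadable without the key
    print(token)

    # Only holders of the key can recover the plaintext.
    print(f.decrypt(token))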

IBM Guardium Insights helps sharpen your focus on genuine privacy risks. This platform offers risk-based alerts and analytics, allowing you to detect and respond to data threats. With this vigilant approach, you can keep all stakeholders informed as you defend customer privacy.

IBM Guardium Data Encryption helps you encode your sensitive information and provides granular control to select individuals in your organization.


2. Address vulnerabilities

Cybersecurity is not a set-and-forget task — you must strive for constant improvements. Unfortunately, many organizations have security technologies that don’t communicate well with each other.

Ram Parasuraman, an executive director at IBM, asserts that “the longer these silos exist, the longer it’s going to take us to respond to those attacks.”

Here are a few ways to address vulnerabilities:

  • Install antivirus programs and firewalls and conduct regular scans to find and eliminate issues in the earliest stages.
  • Release new software updates to patch vulnerabilities before threat actors find issues. You may need to modify in-house programs to adapt them for use with new technologies.
  • Establish clear incident response protocols to make it easy for anyone to report security incidents, minimizing downtime and mitigating damages.

The IBM Guardium Vulnerability Assessment tool helps in identifying potential issues, such as missing updates, weak passwords and configuration errors. With a proactive approach to assessing your security posture, you can maintain a stronger defense against emerging cyber threats.

3. Improve productivity

Your data security and privacy rules should help improve your business, not hinder daily operations or progress. It’s important to consider how you can improve data security without negatively impacting productivity.

Here are some ideas:

  • Encourage cross-functional collaboration between departments to build security measures into workflows
  • Provide easy-to-use encryption tools like a password manager and a Virtual Private Network (VPN) to allow people to remotely access systems without compromising the company’s data security
  • Use automated security tools such as a threat detection system to reduce the burden of manual intervention.

IBM Guardium solutions can help your cybersecurity teams monitor user activity and respond to threats in near real-time. The automated and centralized controls ensure teams waste less time looking into problems and can get the insights they need to make informed decisions.

A centralized approach secures your data

As the cloud matures and scales rapidly, we must realize that effective data security isn’t a sprint but a marathon. Security teams must embrace this ongoing process throughout the data lifecycle.

While there’s no one-size-fits-all approach to data security, your organization can take control of these challenges by bringing all your data security and compliance tools together. This centralized approach improves visibility to give security teams control over their data across the enterprise and cloud.

Want to discover three more data security challenges and how to overcome them? Read the “Overcoming data security challenges in a hybrid multi-cloud world” ebook for a more in-depth view.

How data residency impacts security and compliance
https://securityintelligence.com/articles/data-residency-security-compliance/ (Tue, 12 Mar 2024)


Every piece of your organization’s data is stored in a physical location. Even data stored in a cloud environment lives in a physical location on the virtual server. However, the data may not be in the location you expect, especially if your company uses multiple cloud providers. The data you are trying to protect may be stored literally across the world from where you sit right now or even in multiple locations at the same time. And if you don’t know where your organization’s data is stored, it may not be as secure as you think.

Why data residency matters

The location of your data, referred to as data residency, can make a difference in how you protect it. Not knowing your data’s residency makes it challenging, if not impossible, to reduce your organization’s risk, because you cannot apply the appropriate protections, whether encryption or process controls.

Here are two reasons you need to know your data’s residency:

  • Security: Data in specific locations, such as multi-cloud data, requires additional security precautions. The 2023 IBM Cost of a Data Breach Report found that 39% of breached data was stored across multiple types of environments. If you are not aware your data is in a high-risk location, you are unnecessarily putting your customers, employees and organization at risk.
  • Compliance: Some data requires specific compliance regulations. If you do not know the data’s physical location, you either must pay higher costs to meet the requirements for all data or risk not meeting compliance for some data.

The role of the cloud in data residency

With a physical on-premises data center, organizations can only store a certain amount of data before it becomes necessary to purchase additional equipment and acquire more space, often at a significant cost. Storing data in the cloud is typically less expensive, which allows organizations to afford to store a much higher volume of data.

IT organizations are increasingly using a wide range of options for storing the ever-greater volume of data their companies are collecting and storing. Many use multiple cloud providers, and the data and services used to manage and analyze data are now across private, public or hybrid clouds.

The relationship between data residency and data sovereignty

Many organizations confuse data residency and data sovereignty, which are two different things. Data sovereignty determines which country or region controls the data in terms of legal and regulatory mandates. In most cases, data residency determines data sovereignty, which then dictates the data privacy regulations that must be followed.

Organizations delivering hosted services online are at even greater risk. The organization is responsible for following all compliance regulations in all the regions where customers are located. To meet compliance regulations, you must know the location where all your customers’ specific data is stored. Otherwise, you are at risk of large fines and damage to your reputation if you don’t meet a location’s regulations.

The first step to understanding your data residency is to determine the type of storage for each data set, such as private cloud, CSP or on-premises. By creating a map for all data, you can begin to get a picture of your data residency. Next, determine the physical location of every cloud service provider’s data center and research where your data is located. Once you have determined the residency, you can research the sovereignty to understand the regulations that need to be followed.
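A minimal sketch of that first mapping step, written in Python, is shown below. The data sets, storage types, regions and regulations are hypothetical placeholders; a real inventory would be fed by discovery tooling and your cloud providers’ documentation rather than a hand-written table.

    # Hypothetical inventory: data set -> (storage type, physical region)
    data_map = {
        "customer_orders":  ("public cloud (CSP A)", "eu-west"),
        "employee_records": ("on-premises",          "us-east"),
        "support_tickets":  ("private cloud",        "ap-south"),
    }

    # Sovereignty rules known to apply per region (simplified examples).
    region_rules = {"eu-west": ["GDPR"], "us-east": ["state privacy laws"]}

    for name, (storage, region) in data_map.items():
        rules = region_rules.get(region)
        if rules is None:
            print(f"{name}: in {region} ({storage}) - research applicable regulations")
        else:
            print(f"{name}: in {region} ({storage}) - subject to {', '.join(rules)}")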

Keep far-flung data secure

Understanding data residency is a critical but often overlooked step. Because the volume and location of data have quickly ballooned, initially, getting a handle on data residency may be time-consuming. However, once data residency and data sovereignty are integrated into your best practices, staying on top of the security and compliance regulations becomes much easier.

To learn more about tackling data residency concerns in your growing cloud environments, check out the on-demand webinar where IBM Security experts will discuss how to keep track of your data no matter where it’s stored.

From federation to fabric: IAM’s evolution
https://securityintelligence.com/posts/identity-and-access-management-evolution/ (Tue, 05 Mar 2024)


In the modern day, we’ve come to expect that our various applications can share our identity information with one another. Most of our core systems federate seamlessly and bi-directionally. This means that you can quite easily register and log in to a given service with the user account from another service or even invert that process (technically possible, not always advisable). But what is the next step in our evolution towards greater interoperability between our applications, services and systems?

Identity and access management: A long evolution

Identity and access management (IAM) has evolved into a sprawling field of separate but interrelated processes. 

Even before the recent pandemic, both the users of our tech stacks and the servers that host their applications were becoming more and more dispersed and scattered. The pandemic only served to hyper-accelerate that trend. 

As Gartner’s cybersecurity chief of research, Mary Ruddy, stated recently, “Digital security is reliant on identity whether we want it to be or not. In a world where users can be anywhere and applications are increasingly distributed across datacenters in the multi-cloud… identity and access is the control plane.”

Add to this the fact that most cybersecurity functions score about 2.5 on Gartner’s five-point maturity scale and we see the usual tech dynamic of convenience forging ahead as security struggles to keep pace. 

To see how these patches of user databases and applications can be stitched together into a united whole and allow for risk and context-based access control across the board, we will explore how identity and access interoperability have evolved from federation standards and protocols until now and how this is evolving forward into a cohesive identity fabric. 

It’s time to learn from the past, evaluate the present and, of course, prepare for the future of IAM.

Past: A history of federation

Dropping into the timeline around the year 1995 lands us in a time when the green shoots of identity interoperability were just starting to show.  

Twelve years and several threads of directory (or user database) research and development culminated around this time with the emergence of the Lightweight Directory Access Protocol (LDAP) version 3. This standard became the basis for the Netscape Directory Server in 1996, OpenLDAP in 1998 and the now ubiquitous Microsoft Active Directory in 2000.

The standard was initially optimized for read rather than write operations and was designed to allow client apps with very limited computing power (less than 16MB of RAM and a 100 MHz CPU) to query and authenticate users quickly. By achieving this low-overhead functionality, LDAP quickly became the de facto authentication protocol for internet services.
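To make the query-and-authenticate pattern concrete, here is a minimal sketch using the open-source Python ldap3 library. The server address, DNs and credentials are hypothetical; the flow shown, binding with a service account, searching for the user’s entry, then re-binding as that user to verify the password, is the classic LDAP authentication pattern.

    from ldap3 import Server, Connection, ALL

    server = Server("ldap.example.org", get_info=ALL)  # hypothetical host

    # Bind with a service account, then look up the user's entry.
    conn = Connection(server, user="cn=svc,dc=example,dc=org",
                      password="service-secret", auto_bind=True)
    conn.search("dc=example,dc=org", "(uid=jdoe)", attributes=["cn", "mail"])
    user_dn = conn.entries[0].entry_dn

    # Authenticate: re-bind as the user with the supplied password.
    user_conn = Connection(server, user=user_dn, password="users-password")
    print("authenticated" if user_conn.bind() else "invalid credentials")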

Inside the integrated Microsoft (MS) estate, Active Directory authenticated credentials against an LDAP directory and granted access to the operating system (OS) and any applications to which a user was entitled. 

Outside the MS estate, single sign-on had to be achieved by reverse proxy servers that authenticated users (usually via LDAP) in a holding pen before redirecting them into the various systems to which they were entitled. Under the hood, this approach tended to combine LDAP, 302 HTTP redirects, and identity information injected into HTTP headers, with cookies used as session tokens. This Web Access Management (WAM) paradigm was effective but somewhat crude and varied greatly from app to app. 
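Below is a sketch of that reverse-proxy gatekeeping pattern in Python, using the open-source Flask framework purely for illustration. The cookie name, session store and auth server URL are hypothetical, and a real WAM deployment would validate sessions against a shared store and proxy the request onward to the backend application.

    from flask import Flask, request, redirect

    app = Flask(__name__)
    AUTH_SERVER = "https://auth.example.org/login"  # hypothetical holding pen
    SESSIONS = {"abc123": "jdoe"}                   # stand-in for a session store

    @app.route("/app/<path:page>")
    def gatekeeper(page):
        token = request.cookies.get("WAM_SESSION")  # cookie used as session token
        if token not in SESSIONS:
            # Unauthenticated: 302 redirect to the auth server's holding pen,
            # which authenticates (usually via LDAP) and redirects back.
            return redirect(f"{AUTH_SERVER}?target=/app/{page}", code=302)
        # Authenticated: inject identity into an HTTP header for the backend.
        user = SESSIONS[token]
        return f"serving {page} for {user}", 200, {"X-Remote-User": user}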

Now that a relatively universal authentication protocol was established, the lack of a standardized way of landing users post-authentication into applications, along with their user, session or account attributes, became evident. In addition, session tokens based on cookies were only viable intra-domain, not inter-domain. Authorization was even clunkier, with specific endpoints/URLs within applications needing to be HTTP-redirected to the auth server, which, in turn, would check against LDAP attributes before allowing the user to see a page or take an action.

SAML 2.0: A circle of trust

By the mid-2000s, threads of research and development (R&D) were coming to fruition, with WS-Federation, Liberty Alliance’s ID-FF 1.1 and the Organization for the Advancement of Structured Information Standards (OASIS) Security Assertion Markup Language (SAML) 1.1 being the standout candidates. The latter two, along with Shibboleth, converged, and OASIS ratified SAML 2.0 in March 2005.

The concept was to create a circle of trust between a user, a directory, and an application. Administrators on both the application and directory sides could exchange signing certificates to create trust between their two systems.

In an identity-provider-initiated flow, directories can redirect authenticated users into an application from an application launchpad. However, in a service-provider-initiated flow, users can attempt to log in to applications and (typically) be recognized by their email domain and redirected to their home directory to be authenticated there before being redirected back to the app. 

In both cases, users land into an application with a SAML assertion, a piece of XML data that encapsulates their identity data, any other custom fields or attributes like account balance or shopping cart contents, and the x.509 signing certificate mentioned above. 
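To show the shape of that payload, here is a sketch that parses a stripped-down assertion with Python’s standard library. The subject and attribute values are hypothetical, and a real assertion would also carry the issuer, validity conditions and the XML signature with the x.509 certificate mentioned above.

    import xml.etree.ElementTree as ET

    NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}  # standard SAML 2.0 namespace

    # A trimmed-down assertion for illustration only.
    assertion_xml = """
    <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
      <saml:Subject><saml:NameID>jane.doe@example.com</saml:NameID></saml:Subject>
      <saml:AttributeStatement>
        <saml:Attribute Name="role">
          <saml:AttributeValue>manager</saml:AttributeValue>
        </saml:Attribute>
      </saml:AttributeStatement>
    </saml:Assertion>
    """

    root = ET.fromstring(assertion_xml)
    name_id = root.find(".//saml:NameID", NS).text
    attrs = {a.get("Name"): a.find("saml:AttributeValue", NS).text
             for a in root.findall(".//saml:Attribute", NS)}
    print(name_id, attrs)  # jane.doe@example.com {'role': 'manager'}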

SAML authorization is most commonly performed by landing a user into an application with roles already defined on the application side, such as standard, manager, developer or administrator. This typically means a user’s allowed/disallowed pages or actions are tied to their role type. 

In SAML 2.0, we finally had an identity federation technology, a standardized way for users from one directory to access multiple applications and (best of all) across different network domains. 

In identity federation, one system plays the role of a directory or user database, and the other system plays the role of the application being accessed, even if both systems are commonly thought of as apps. 

Below are diagrams showing how two of the most widely used enterprise systems that support SAML could federate one way or the other. In one, Salesforce acts as the identity provider (directory or user database) for accessing Azure, and in the other scenario, the roles are reversed. The point is to illustrate how the federation uses combinations of LDAP and SAML to allow users to access a service with their accounts from another service.

Scenario 1

[Diagram: Salesforce, acting as the identity provider, authenticates the user and sends a SAML assertion to Azure]

Key:

  1. The user chooses an option to sign in to Azure with their Salesforce account.
  2. Azure redirects the user to Salesforce for authentication.
  3. The user’s credentials are authenticated via LDAP against Salesforce’s directory.
  4. Salesforce sends a signed SAML assertion containing the user’s data to Azure to log them in.

Scenario 2

 

Key:

  1. The user chooses an option to sign in to Salesforce with their Azure account.
  2. Salesforce redirects the user to Azure for authentication.
  3. The user’s credentials are authenticated via LDAP against Azure’s directory.
  4. Azure sends a signed SAML assertion containing the user’s data to Salesforce to log them in.

The consumer computing revolution

Beyond the enterprise, the release of iOS in 2007 and Android in 2008 saw an explosion in consumer computing. 

Consider this statistic: in 2010, 37 percent of households owned a computer, but by 2014, 37 percent of individuals owned a smartphone. Across the two mobile operating systems in 2012 alone, roughly 1.3 million new apps were shipped, with about 35 billion app downloads distributed across them.

Client-side applications became extremely lightweight, mere viewing and input panes, with the vast majority of the logic, data and computing residing on the server and delivered over the internet.

The number of application programming interfaces (APIs) mushroomed to cater to a population that increasingly expected apps and services to share data with one another, and in particular to let users sign up for one service with their account from another.

R&D into an open identity standard for consumer computing had been underway at Twitter and Google since around 2006 to 2007. During these conversations, experts realized that a similar need existed for an open standard for API access delegation: how could one application grant another a limited amount of access without sharing credentials (which would, in any case, grant total access)?

As Eran Hammer-Lahav explains in his guide to OAuth, “Many luxury cars today come with a valet key. It is a special key you give the parking attendant and, unlike your regular key, will not allow the car to drive more than a mile or two… Regardless of what restrictions the valet key imposes, the idea is very clever. You give someone limited access to your car with a special key while using your regular key to unlock everything.”

How does OAuth work?

OAuth was the framework that emerged to solve this problem. It allows users to share data without sharing passwords.

Let’s take a look at what happens on the backend when a photo printing service allows you to share your pictures from an online storage platform instead of requiring you to upload them from your local machine.

Below is an attempt to explain an OAuth authorization flow as simply as possible, in nine steps; formal terms for the various parties involved are bracketed. In this process, a user shares images from their Dropbox account with Photobox, an online photograph printing and delivery service. As in the SAML relationships described earlier, admins from both platforms must establish a backend trust, based here on a client ID and client secret (rather than an X.509 certificate as in SAML); these can be thought of as Photobox's username and password with Dropbox. The scenario assumes a third-party authorization service (often an IAM platform), though many websites and services implement their own.

  1. A user opts to share data from one service (the data holder) with another service (the data requester). The data requester contacts the data holder with its client ID and client secret.
  2. The data-holding service redirects the request to an authorization service.
  3. The authorization service contacts the user's browser to have them log in and/or provide consent to share data with the data requester as required.
  4. The user logs in and/or consents to share data, often specifying what data can or cannot be shared (scopes).
  5. The authorizer redirects back to the data requester with an authorization token.
  6. The data requester contacts the authorizer on the backend (not via the user's browser) with the authorization token plus its client ID and client secret.
  7. The authorizer responds with an access token specifying the scope of what may or may not be accessed.
  8. The data requester sends the access token to the data holder.
  9. The data holder responds to the data requester with the scoped content.
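To ground steps 6 through 9, here is a hedged Python sketch of the backend exchange using the standard OAuth authorization code grant; all endpoints, paths and client credentials are hypothetical.

```python
# Steps 6-7: the backend exchange of the authorization token for an access
# token, then step 8: presenting it to the data holder. All endpoints and
# client credentials are hypothetical.
import requests

token_resp = requests.post(
    "https://auth.dataholder.example.com/oauth/token",
    data={
        "grant_type": "authorization_code",
        "code": "<authorization-token-from-step-5>",
        "redirect_uri": "https://requester.example.com/callback",
        "client_id": "photobox-client-id",
        "client_secret": "photobox-client-secret",
    },
    timeout=10,
)
access_token = token_resp.json()["access_token"]  # scoped per the user's consent

# Step 8: the data requester calls the data holder's API with the access token.
photos = requests.get(
    "https://api.dataholder.example.com/v1/photos",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
print(photos.status_code)  # step 9: the scoped content comes back
```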

SAML authorized users "in advance" by landing them in applications with a specified role, and the application defined what each role could or couldn't do. OAuth allows for much more fine-grained authorization on a per-page or per-action basis. This reflects a shift from role-based access control toward a more resource-based mentality that emphasizes the thing being accessed over who is doing the accessing.

Registration and authentication

But what about registering and authenticating users? Most people think of OpenID Connect (OIDC) as an extension of OAuth that is optimized for authentication instead of authorization. The OAuth project itself, incidentally, appears less keen on this characterization:

“OAuth is not an OpenID extension and at the specification level, shares only a few things with OpenID — some common authors and the fact both are open specification in the realm of authentication and access control.”

While they are used for different purposes — OAuth to authorize, OIDC to authenticate — the fact is that an OIDC flow is an OAuth flow with the addition of identity tokens to the authorization and access tokens.

Let’s look at the flow behind the scenes in a scenario like the one below, where you can register or log in to Airbnb with your Apple ID.

 

  1. The user opts to log in to Airbnb with Apple ID.
  2. Airbnb sends a request to the Apple ID service containing Airbnb’s client ID and client secret configured by both platform admins. 
  3. The user authenticates against Apple ID’s directory.
  4. Apple ID sends an encoded identity JSON Web Token (JWT) to Airbnb that contains the user’s information. Airbnb can decode Apple’s identity token by using a public key. The user’s session is created.

Unlike the OAuth flow described earlier, the resource server/data holder and the authentication service here are one and the same organization: Apple ID both holds the data and authorizes its sharing. Alternatively, a third-party IAM platform could be implemented to query an OpenID provider and authenticate against it.
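As a rough illustration of step 4, the sketch below plays both roles: it signs an identity token as a stand-in provider and then verifies it as the relying party would, using the PyJWT library. The issuer, audience and claims are hypothetical, and a real relying party would fetch the provider's public key from its published JWKS endpoint rather than generate one.

```python
# A stand-in for both sides of step 4: mint a signed identity token as the
# provider would, then verify it as the relying party does. All values are
# hypothetical; real relying parties fetch the provider's public key from
# its JWKS endpoint instead of generating a keypair.
import jwt  # pip install "pyjwt[crypto]"
from cryptography.hazmat.primitives.asymmetric import rsa

provider_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

id_token = jwt.encode(
    {
        "iss": "https://appleid.example.com",  # who issued the token
        "aud": "airbnb-client-id",             # which client the token is for
        "sub": "001234.abcdef",                # stable, unique user identifier
        "email": "jdoe@example.com",
    },
    provider_key,
    algorithm="RS256",
)

claims = jwt.decode(
    id_token,
    provider_key.public_key(),     # decoded with the provider's public key
    algorithms=["RS256"],
    audience="airbnb-client-id",   # rejected if the token is for someone else
    issuer="https://appleid.example.com",
)
print(claims["sub"], claims["email"])  # the session can now be created
```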

The JSON Web Token

The emergence of the JSON Web Token (JWT) around 2013 was a crucial element in the evolution of identity federation and modern authentication. Essentially a JSON data format with added security features, it provides a secure and standardized way to sign, encrypt, decrypt and transmit identity data across domains.

JWTs consist of three parts:

  1. Header: Contains fields for the type (JWT) and the cryptographic algorithm used in the signature in section three (commonly HS256 or RS256). If the services have opted to encrypt as well as sign the JWT, the encryption algorithm is also specified here.
  2. Payload: Contains the actual user information being transmitted, in key: value pairs.
  3. Signature: The header and payload are signed with the algorithm specified in the header, so that recipients can verify the token's integrity and authenticity.


A sample JWT, encoded and decoded, has a header specifying the token type and signing algorithm, a payload carrying a unique ID, a name and an admin flag, and finally a signature section.
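As a hedged reconstruction of such a sample, the Python sketch below (using the PyJWT library) mints a token with exactly that kind of payload and unpacks its three dot-separated sections; the secret and claim values are hypothetical.

```python
# Mint a token with a header, payload and signature, then unpack the three
# dot-separated sections. The HMAC secret and claim values are hypothetical.
import base64
import json
import jwt  # pip install pyjwt

token = jwt.encode(
    {"sub": "1234567890", "name": "John Doe", "admin": True},
    "shared-secret",
    algorithm="HS256",
)

header_b64, payload_b64, signature_b64 = token.split(".")

def decode_segment(segment: str) -> dict:
    # JWT segments are base64url-encoded with padding stripped; restore it.
    return json.loads(base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4)))

print(decode_segment(header_b64))   # {'alg': 'HS256', 'typ': 'JWT'}
print(decode_segment(payload_b64))  # {'sub': '1234567890', 'name': 'John Doe', 'admin': True}
# signature_b64 is the HMAC-SHA256 of "<header>.<payload>" under the secret;
# jwt.decode(token, "shared-secret", algorithms=["HS256"]) verifies it.
```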

It’s worth noting that while OAuth implementations may issue authorization and/or access tokens in XML, simple JSON, or JWT formats, OpenID Connect mandates the use of JWTs for identity tokens to ensure the authenticity and integrity of personally identifiable information.

This wraps up the main identity federation and access protocols and frameworks. In most cases, it's useful to think in terms of a user who wants to 'come from' some directory and 'go to' some application. The terms used by the different protocols vary, but they map reasonably well like this:

| Generic | Security Assertion Markup Language (SAML) | OpenID Connect (OIDC) | OAuth |
| --- | --- | --- | --- |
| User | Principal/Subject | End-User | User |
| Directory / Identity Source / Registry | Identity Provider (IdP) | OpenID Provider (OP) | Service Provider |
| Application | Service Provider (SP) | Relying Party (RP) | Consumer |

 

System for Cross-Domain Identity Management (SCIM)

Outside of access management, one more crucial IAM protocol is worth mentioning. The System for Cross-Domain Identity Management (SCIM) is the most common protocol for identity lifecycle management. It is used to execute the remote creation (provisioning), updating and deletion of users and groups from within an identity platform, and it is also extremely useful for letting developers build out self-service user journeys such as address, phone or payment updates and password resets. Essentially a REST API optimized for identity governance, it has become a relatively universal standard, with most large cloud platforms now exposing SCIM endpoints that accept HTTP POST and PUT requests.

Figure: Typical remote user-create SCIM API call
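In the spirit of that figure, here is a hedged sketch of a typical SCIM 2.0 user-create call, expressed in Python with the requests library. The endpoint and bearer token are hypothetical; the schema URN and attribute names follow RFC 7643.

```python
# A typical SCIM 2.0 user-create call. The endpoint and bearer token are
# hypothetical; the schema URN and attribute names follow RFC 7643.
import requests

resp = requests.post(
    "https://idp.example.com/scim/v2/Users",
    headers={
        "Authorization": "Bearer <api-token>",
        "Content-Type": "application/scim+json",
    },
    json={
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "jdoe@example.com",
        "name": {"givenName": "Jane", "familyName": "Doe"},
        "emails": [{"value": "jdoe@example.com", "primary": True}],
        "active": True,
    },
    timeout=10,
)
print(resp.status_code)       # 201 Created on success
print(resp.json().get("id"))  # the server-assigned SCIM resource ID
```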

Present day: The state of identity and access management

The long march from LDAP to SAML, OAuth, OIDC and SCIM has seen profound evolution and interoperability in IAM. These protocols have done much to allow systems to lean on one another to authenticate users, authorize the sharing of resources, or agree on standardized ways to lift and shift user data.

As IBM’s Bob Kalka likes to say, “Identity and access is an amorphous blob that touches on everything.” There are several separate but related processes that IAM admins, engineers and architects must be concerned with. The tooling developed by vendors has grown up to service these processes. Let’s look at the main ones:

  1. Orchestrate user journeys across applications, directories, and third-party services (like identity proofers) from the user interface (UI) backward down the stack. The web redirect is still one of the most basic units of work, as users get bounced around between systems to execute user journeys that call on multiple systems. This usually demands that IAM engineers understand front-end web/mobile development and vice versa. 

  2. Consume identities from or sync and provision (CRUD — create, read, update, delete) identities into any number of identity sources of varying kinds.

  3. Control the provisioning, updating, and deletion of your joiners, movers, and leavers on the application side.

  4. Authenticate users into any number of target applications of varying kinds. Things are easier when applications have been built to modern federation specifications like SAML or OpenID Connect. These can then receive identity and account data from directories in a standardized way. However, many organizations do not have the resources to invest in modernizing the applications that do not support these modern protocols. Landing users into these applications securely while populating them with their identity or other account information as necessary (session creation) can be especially challenging.

  5. Perform adaptive or context-based access control across the estate. Access policies can be based on static conditional rules related to location, device, user/group attributes, or the pages or actions being accessed. Access management is increasingly leveraging machine-learning algorithms that profile usage patterns and raise a session's risk score when significant divergence from those patterns is detected. Once these 'ifs' are defined, admins can define 'thens' that range from allowing the session, to requiring multi-factor authentication (MFA) or additional MFA, to blocking it entirely, depending on the riskiness of the user's session (a minimal sketch follows this list).

  6. Integrate IAM with the organization's Security Operations (SecOps). Most cybersecurity organizations scored around the halfway mark on a recent Gartner five-point maturity scale for IAM. SecOps and IAM are indeed quite distinct specializations, but the low level of interoperability is surprising. At the very least, it should be taken for granted that your security information and event management (SIEM) platform is consuming IAM logs. This convergence of disciplines is dubbed identity threat detection and response (ITDR).

  7. Control access to privileged systems like server operating systems and root accounts of cloud service providers. These privileged access management (PAM) systems should, at a minimum, vault credentials to these systems. More advanced practices include approval requests, session recording, or credential heartbeats to detect whether credentials have been altered.
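Here is the minimal sketch promised in point 5: a toy policy engine combining static conditional rules with a behavioural risk score. The rules, thresholds and actions are all hypothetical.

```python
# A toy adaptive-access policy: static 'ifs' plus a behavioural risk score
# that escalates the 'thens'. Rules, thresholds and actions are hypothetical.
from dataclasses import dataclass

ALLOWED_COUNTRIES = {"GB", "US", "IE"}  # hypothetical location policy

@dataclass
class Session:
    country: str
    managed_device: bool
    risk_score: float  # e.g., from an ML model profiling usage patterns

def decide(session: Session) -> str:
    # Static conditional rules: location and device posture.
    if session.country not in ALLOWED_COUNTRIES:
        return "block"
    if not session.managed_device:
        return "mfa"
    # Behavioural rules: the response escalates with the risk score.
    if session.risk_score > 0.8:
        return "block"
    if session.risk_score > 0.4:
        return "mfa"
    return "allow"

print(decide(Session(country="GB", managed_device=True, risk_score=0.2)))  # allow
```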

This is the point at which IAM stands today: a proliferation of tools, processes, and integrations. To add to that complexity, most organizations’ IAM terrains are fragmented, at least along workforce and consumer lines. There is just as often further fragmentation on a per-business unit, per-product offering, or per-budget basis.

Where can our efforts to further unify this control plane lead us?

Looking ahead: The identity fabric

Gartner refers to an identity fabric as “a system of systems composed of a blend of modular IAM tools.”

As a discipline, IAM is at a point reminiscent of SecOps circa 2016. At that time, there were several distinct but interrelated subdisciplines within the Security Operations Centre (SOC): detection, investigation and response were perhaps the three main process specializations, as well as product categories, while endpoint detection and response, threat intelligence and threat hunting were (and are) swim lanes unto themselves. It was in this context that the need for orchestration processes and security orchestration, automation and response (SOAR) tooling emerged to stitch all of this together.

Given the security ramifications at stake, the evolution toward greater cohesion in IAM must be maintained. This more unified approach is what underpins the identity fabric mentality.

If the identity fabric is a composable set of tools, orchestration is the stitching that weaves them together. It is important to think of orchestration as both a work process and a tool.

Therefore, an identity fabric constitutes whichever of the seven work processes an organization needs to carry out its use cases, with orchestration as the thread that binds them. This is how the "centralized control and decentralized enablement" discussed by Gartner is achieved.

IBM tooling across the 7 IAM work processes

IBM’s mission within the IAM space is to allow organizations to connect any user to any resource.

We have, for some time, had the greatest breadth of IAM tools under one roof. We were also the first to offer a single platform that supports both runtime (access management) and administrative (identity governance) workloads. That product, Verify SaaS, also has the distinction of still being the only platform equally optimized for both workforce and consumer workloads.

That we have tooling across all seven process categories is a unique differentiator; that a single platform straddles five of those seven processes is rarer still.

Examining the seven work processes, here is a brief holistic outline of the toolbox:

1. Orchestration

Our new orchestration engine is now available as part of Verify SaaS. It allows you to easily build user journey UIs and use cases in a low-code/no-code environment. On the back end, you can orchestrate directories and applications of all kinds and easily integrate with third-party fraud, risk or identity-proofing tools.

2. Directory integration and federation

IBM's on-premises directory is the first on the market to support containerized deployments. Virtual Directory functionality allows the consumption of identities from heterogeneous identity sources to present target systems with a single authentication interface. Directory Integrator boasts an unrivaled number of connectors and parsers to read identity records from systems or databases and write them into other required directories.

3. Identity governance

IBM offers powerful and customizable identity governance platforms in SaaS or software form, as well as out-of-the-box connectors for all the major enterprise applications, along with host adaptors for provisioning into infrastructure operating systems. Additional modules are available for entitlement discovery, separation of duty analysis, compliance reporting, and role mining and optimization.

4. Modern authentication

IBM offers runtime access management platforms available as SaaS or software. Both support SAML and OpenID Connect. The software platform’s heritage is in web access management, so the base module is a reverse proxy server for pre-federation target apps. 

The IBM Application Gateway (IAG) is a special gem in our IAM toolbox. A novel combination of old and new technologies, it allows you to serve a lightweight reverse proxy out of a container. Users authenticate in via OIDC and are passed on to the target application via the reverse proxy. It can front an application that doesn't support federation, and it can also enforce access policies within your custom application based on URL paths, hostnames and HTTP methods. Available at no extra cost with any Verify Access or Verify SaaS entitlement, it is now also available as a standalone component. The Application Gateway allows you to modernize how your custom app is consumed without needing to invest in modernizing the app itself.

5. Adaptive access

Trusteer is IBM's fraud detection solution. It ingests over 200 data criteria to risk-score user behaviour, including time, typing and mouse patterns, browser and OS information, and virtual machine (VM) detection. It can be deployed standalone within your front-end applications, and Verify Access and Verify SaaS can also leverage Trusteer's machine learning algorithm to risk-score a user session at authentication time.

6. Identity threat detection and response

In addition to the Verify products’ native threat detection capabilities, they can easily integrate with the IBM X-Force threat intelligence platform and other third-party risk services. This data can be leveraged to immediately reject common or compromised credentials or requests from known malicious IP addresses. 

7. Privileged access management

To round out the IAM toolbox, Verify Privilege provides credential vaulting and heartbeat, session launchers, and session recording for mission-critical operating systems, databases and other infrastructure.

Embracing cohesive IAM solutions

In the spirit of composability, IBM offers virtually every kind of IAM tool you could need, along with the orchestration engine that can stitch your identity estate into a cohesive fabric. They are all designed to interoperate with other directories, applications, access managers, or identity governors you may currently have deployed. The unique proposition is that we can provide what is missing, whatever that may be.

Where identity and access have traditionally been a layer of abstraction within applications or operating systems, the identity fabric paradigm is about decoupling identity and access from applications, directories and operating systems. The aspiration is for identity to graduate to a layer that floats above systems rather than remain a layer embedded within them.

Leaving tooling and technologies aside for the final word: implementing the available tooling that facilitates an identity fabric will not automatically make it a reality. Today, a solution architect is almost as likely as not to believe that each solution requires its own directory or access manager, much as most solutions must be underpinned by their own databases. In that context, is it any surprise that IAM processes are so siloed and fragmented?

Contact your in-country technical specialist to book a free identity fabric workshop and discuss how you can evolve your IAM environment into a cohesive security control plane.

Explore IBM IAM solutions

The post From federation to fabric: IAM’s evolution appeared first on Security Intelligence.

]]>