New cybersecurity sheets from CISA and NSA: An overview
https://securityintelligence.com/articles/cisa-nsa-cybersecurity-information-sheets/
Security Intelligence, 15 May 2024


The Cybersecurity and Infrastructure Security Agency (CISA) and National Security Agency (NSA) have recently released new Cybersecurity Information (CSI) sheets that provide guidance to organizations on how to effectively secure their cloud environments.

This new release includes a total of five CSI sheets, covering various aspects of cloud security such as threat mitigation, identity and access management, network security and more. Here’s our overview of the new CSI sheets, what they address and the key takeaways from each.

Implementing cloud identity and access management

The “Use Secure Cloud Identity and Access Management Practices” CSI sheet was created to help identify and address the unique security challenges presented in cloud environments. With most modern businesses quickly adopting more cloud-based solutions to help them scale, the virtual attack surface they create needs adequate protection.

The document goes on to explain that one of the major risks of expanding into the cloud comes from malicious cyber actors who actively exploit weaknesses in third-party platform access controls. These weaknesses stem primarily from misconfigured user access restrictions or role definitions, as well as from well-executed social engineering campaigns.

Many of the identified risks can be successfully mitigated through the use of Identity and Access Management (IAM) solutions designed to monitor and control cloud access more strictly. In addition, CISA and the NSA recommend properly implemented multifactor authentication, which is particularly effective at improving phishing resistance, as well as careful management of public key infrastructure certificates.

Another important point is the use of encrypted channels when users access cloud resources. Organizations should mandate Transport Layer Security (TLS) 1.2 or higher and, whenever possible, configure software and firmware to use the Commercial National Security Algorithm (CNSA) Suite 2.0.
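As a minimal illustration (a sketch, not an example from the CSI sheet itself), a Python client can enforce the TLS 1.2 floor with the standard library's `ssl` module:

```python
import ssl

def make_strict_client_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses anything below TLS 1.2."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # Reject TLS 1.0/1.1 (and older) handshakes outright.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = make_strict_client_context()
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

Passing this context to `urllib.request.urlopen` or `http.client` makes every connection honor the minimum-version policy without per-call configuration.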

Hardening cloud key management processes

The “Use Secure Cloud Key Management Practices” sheet was released to reinforce the important role that cryptographic operations play in cloud environments. These operations keep communications secure and provide the right levels of encryption for data both in motion and at rest.

The sheet outlines the various key management options available to cloud customers, including Cloud Service Provider (CSP)-managed encryption keys and third-party Key Management Solutions (KMS).

Having a dedicated hardware security module (HSM) is another important component of applying adequate key management processes, as it provides a secure and tamper-resistant environment for storing and processing cryptographic keys.

However, organizations will want to weigh the benefits and risks associated with having shared, partitioned and dedicated HSMs in place since a shared responsibility model will need to be applied to both the organization and the third parties they’re working with.

Utilizing network segmentation and encryption

The “Implement Network Segmentation and Encryption in Cloud Environments” sheet was designed to highlight the ongoing shift from perimeter-based security approaches to more granular, identity-based network security. To do this safely, CISA and the NSA recommend using end-to-end encryption and micro-segmentation to isolate networks and harden them against fast-scaling cyberattacks.

Currently, the NSA-approved CNSA Suite algorithms and NIST-recommended algorithms are considered the gold standard for encrypting data in transit. They are recommended repeatedly throughout the sheets, which also advise relying on private rather than public connectivity whenever possible when connecting to cloud services.

Because of how aggressive many modern-day cyberattacks are, implementing network segmentation is highly recommended. This helps to contain breaches that would otherwise move laterally across connected databases or critical systems. There are now many cloud-native options to help organizations implement segmentation and accurately control traffic flows across the network.
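The micro-segmentation idea can be sketched as a default-deny flow policy (the segment names and ports below are hypothetical, not drawn from the sheet):

```python
# Default-deny policy: only explicitly allowed (source, destination, port)
# flows are permitted between workload segments.
ALLOW_RULES = {
    ("web", "app", 8443),
    ("app", "db", 5432),
}

def is_flow_allowed(src: str, dst: str, port: int) -> bool:
    return (src, dst, port) in ALLOW_RULES

print(is_flow_allowed("web", "app", 8443))  # True
print(is_flow_allowed("web", "db", 5432))   # False: the web tier may not reach the database directly
```

Cloud-native tools express the same default-deny principle declaratively; the point is that lateral movement fails unless a rule explicitly permits the flow.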

Securing data in the cloud

The “Secure Data in the Cloud” sheet goes into detail about the classification of cloud storage types, including “File,” “Object” and “Block” storage. It explains that each storage type calls for different measures to secure it properly.

Regardless of the encryption being used for each type of data, it is strongly advised to reduce the use of public networks when accessing cloud services. These are constant sources of security vulnerabilities, as public networks have very limited security in place and are often used by malicious sources to monitor traffic and find weaknesses in device security.

This sheet also stresses the implementation of role-based access control (RBAC) and attribute-based access control (ABAC) as an effective way to manage specific data access. These controls allow very granular access permissions while also encouraging organizations to eliminate overly permissive cloud access policies.
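A minimal sketch of how RBAC and ABAC checks compose (the role names, clearance levels and classification attributes here are invented for illustration):

```python
# Hypothetical roles and permissions for illustration only.
ROLE_PERMISSIONS = {
    "analyst": {"read"},
    "engineer": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

def is_allowed(role, action, user_attrs, resource_attrs):
    # RBAC check: the role must grant the requested action at all.
    if action not in ROLE_PERMISSIONS.get(role, set()):
        return False
    # ABAC check: the user's clearance must cover the data classification.
    return user_attrs.get("clearance", 0) >= resource_attrs.get("classification", 0)

print(is_allowed("analyst", "read", {"clearance": 2}, {"classification": 1}))   # True
print(is_allowed("analyst", "write", {"clearance": 2}, {"classification": 1}))  # False
```

Layering the attribute check on top of the role check is what lets organizations tighten access without multiplying roles.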

A big part of maximizing security in the cloud is reviewing and understanding the procedures and policies of cloud service providers, specifically how they apply to data storage and retention.

Businesses can work with their CSPs to implement solutions like “soft deletion,” which is the practice of marking data as deleted without actually removing it from the server. This allows for recovery when needed but still protects it from being accessed by unauthorized users.
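The soft-deletion pattern can be sketched in a few lines; this toy store is an illustration of the concept, not any CSP's actual API:

```python
from datetime import datetime, timezone

class SoftDeleteStore:
    """Toy key-value store illustrating soft deletion: deleted records are
    flagged and hidden from ordinary reads, but remain recoverable."""

    def __init__(self):
        self._records = {}

    def put(self, key, value):
        self._records[key] = {"value": value, "deleted_at": None}

    def delete(self, key):
        # Mark as deleted instead of erasing the data.
        self._records[key]["deleted_at"] = datetime.now(timezone.utc)

    def get(self, key):
        rec = self._records.get(key)
        if rec is None or rec["deleted_at"] is not None:
            return None  # hidden once soft-deleted
        return rec["value"]

    def restore(self, key):
        self._records[key]["deleted_at"] = None

store = SoftDeleteStore()
store.put("mission.log", "telemetry data")
store.delete("mission.log")
print(store.get("mission.log"))  # None: hidden from ordinary access
store.restore("mission.log")
print(store.get("mission.log"))  # telemetry data
```

Real CSP implementations typically pair the deletion flag with a retention window, after which the data is purged for good.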

Mitigating risk from managed service providers

The final sheet, “Mitigate Risks from Managed Service Providers in Cloud Environments,” is designed to raise awareness that managed service providers (MSPs) are regular targets of malicious actors backed by nation-states.

There are also many misunderstandings about compliance with regulation standards when organizations choose to partner with cloud service providers. Companies need to have a clear understanding of shared responsibility principles and make sure their partnerships place a high priority on data security.

The sheet explains that organizations should have pre-established auditing mechanisms in place that include cloud-native data logging and monitoring. These help organizations better understand, control and secure the actions their MSPs are taking on behalf of the organization.

Embrace proactive cloud security

For years, CISA and the NSA have stressed that companies should take charge of cybersecurity readiness when working with MSPs in the cloud. By following the guidance of these CSIs, organizations can make sure they’re applying the latest best practices that will minimize their attack surface and improve their ability to successfully recover from cloud security breaches.

Easy configuration fixes can protect your server from attack
https://securityintelligence.com/articles/easy-configuration-fixes-can-protect-your-server/
Security Intelligence, 23 Jun 2023


In March 2023, data on more than 56,000 people — including Social Security numbers and other personal information — was stolen in the D.C. Health Benefit Exchange Authority breach. The online health insurance marketplace hack exposed the personal details of Congress members, their families, staff and tens of thousands of other Washington-area residents.

It appears the D.C. breach was due to “human error,” according to a recent report. A computer server was misconfigured to allow access to data without proper authentication. Implementing authentication would have been easy to accomplish; instead, a door was left wide open for attackers.

Poorly configured web servers are all too common. In fact, a recent study from a firm that indexes internet-facing devices reported that over 8,000 servers hosting sensitive information are not properly configured.

Easy to identify data exposure

A recent Censys report stated that “data exposures via misconfiguration remain a serious problem. We found over 8,000 servers on the internet hosting potentially sensitive information, including possible credentials, database backups and configuration files.” As per the report, these vulnerabilities were easy to identify, as they would be for even inexperienced threat actors.

Meanwhile, print management software developer PaperCut recently warned customers to update their software immediately. PaperCut’s printing management software is used by companies, government entities and educational institutions; according to its website, PaperCut serves hundreds of millions of users around the globe.

In a recent vulnerability bulletin, PaperCut said, “We have evidence to suggest that unpatched servers are being exploited in the wild.” Other reports describe poorly managed Linux servers and poorly secured internet-exposed Microsoft SQL (MS-SQL) servers becoming entry points for malware.

Other findings in the Censys report include:

  • Over 1,000 hosts with over 2,000 SQL database files were exposed with no authentication requirements on the HTTP services themselves
  • More than 18,000 CSV files were publicly exposed on just 147 hosts
  • Over 5,000 hosts had more than 5,000 exposed files and directories whose names indicate they are related to backups.
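Exposures like those above can often be spotted with even crude filename heuristics. A sketch (the suffix list is illustrative, not Censys’s actual methodology):

```python
# Illustrative suffixes that suggest database dumps, backups or config files.
SENSITIVE_SUFFIXES = (".sql", ".csv", ".bak", ".env", ".config")

def flag_sensitive(paths):
    """Return the paths whose names hint at sensitive content."""
    return [p for p in paths if p.lower().endswith(SENSITIVE_SUFFIXES)]

exposed = ["index.html", "users.SQL", "backup.bak", "logo.png", "prod.env"]
print(flag_sensitive(exposed))  # ['users.SQL', 'backup.bak', 'prod.env']
```

Running a check like this against your own public-facing directory listings takes minutes; attackers run the equivalent at internet scale.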

Based on its findings, Censys states that vulnerable hosts aren’t only servers with outdated and exploitable software. Vulnerabilities can arise from various sources, including errors in judgment, misconfigurations and rushed work. The firm says a quick and easy solution today may prevent a severe data breach tomorrow.

“The often unglamorous work of asset, vulnerability and patch management is critical for helping reduce an organization’s attack surface. The security issues we’ve explored in this report aren’t a result of zero days or other advanced exploits, but rather misconfiguration and exposure issues that are likely a result of simple mistakes or configuration errors,” Censys noted.

Fixing servers that lack authentication

If a computer server was misconfigured to allow access to data without proper authentication, the following steps can be taken to fix server issues:

  1. Shut down the server: The first step is to immediately shut down the server to prevent or halt unauthorized access to the data.
  2. Investigate the scope of the issue: Once the server is shut down, evaluate the extent of the problem by examining log files, system configuration files and other relevant data to determine the extent of unauthorized access, if any.
  3. Identify the root cause of the problem: Examine the server configuration files, software settings and security policies. Determine whether the misconfiguration was due to a human error, software flaw or something else.
  4. Correct the misconfiguration: Once the root cause has been identified, correct the misconfiguration by updating the server configuration files, software settings or security policies. This may involve reconfiguring access controls, updating software or installing security patches.
  5. Test the fix: After correcting the misconfiguration, test the fix by attempting to access the data without proper authentication. Verify that the fix has been successful and that the data is now secure.
  6. Monitor the server: After the fix has been implemented and tested, monitor the server to ensure that it is functioning properly and that no further security issues arise.
  7. Review security policies and procedures: Lastly, review security policies and procedures to ensure they are adequate to prevent similar security issues in the future. You may need to provide additional training to employees, review access controls or implement new security technologies.
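Step 5 above, testing the fix, can be scripted. A minimal sketch using only the standard library (the status-code interpretation is generic, not specific to any server product):

```python
from urllib import request, error

def auth_enforced(status) -> bool:
    """A 401/403 for an unauthenticated request means the fix is holding;
    any success status means data is still served without credentials."""
    return status in (401, 403)

def probe(url: str) -> bool:
    """Issue one request with no credentials and interpret the response."""
    try:
        with request.urlopen(request.Request(url), timeout=10) as resp:
            return auth_enforced(resp.status)
    except error.HTTPError as e:
        return auth_enforced(e.code)
```

A probe like this belongs in scheduled monitoring, not just one-off verification, so a future misconfiguration is caught before an attacker finds it.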

How to secure your server

Securing web servers is required to reduce the risk of unauthorized access and data breaches. Here are some steps you can take to enhance the security of your web server:

  1. Keep server software up to date: Make sure to install the latest security patches and updates for your web server software, as well as any related software components (such as databases and scripting languages).
  2. Use strong authentication: Require strong passwords and two-factor authentication for all user accounts. Use SSH keys instead of passwords for remote access.
  3. Limit access: Limit access to the server to only those who need it. Use firewalls and other access control mechanisms to block unauthorized access.
  4. Secure file and directory permissions: Make sure that sensitive files and directories are only accessible to authorized users. Set file permissions to “read-only” for non-essential files and directories.
  5. Use encryption: Use SSL/TLS encryption for all communication between clients and the server, and encrypt sensitive data stored on the server.
  6. Monitor server logs: Regularly monitor server logs to detect suspicious activity. Use intrusion detection systems (IDS) and other security tools to identify and respond to potential threats.
  7. Back up regularly: Regularly back up your server’s data and configuration files and store backups in a secure location.
  8. Implement security policies: Establish and enforce security policies and procedures for your organization. Educate employees and users about best practices for web server security.
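Step 4 in the list above, file and directory permissions, can be spot-checked with a short script. A sketch for POSIX systems:

```python
import os
import stat

def world_accessible(path: str) -> bool:
    """Flag files that any user on the system can read or write (POSIX)."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IROTH | stat.S_IWOTH))

def audit(paths):
    """Return the subset of paths with overly broad permissions."""
    return [p for p in paths if world_accessible(p)]
```

For example, a web server’s private key flagged by `world_accessible` should be tightened with `chmod 600` before anything else on the checklist.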

Don’t leave the door open

There certainly are a number of highly sophisticated cyber intruders out there. But many data breaches are the result of simply leaving the front door unlocked: human error exposing large amounts of data on a server for lack of simple security measures such as authentication, authorization or filtering. The good news is that attainable fixes can substantially improve server security.

Cybersecurity in the next-generation space age, pt. 4: New space future development and challenges
https://securityintelligence.com/posts/cybersecurity-next-generation-space-age-pt-4-future-development-challenges/
Security Intelligence, 3 Mar 2023


View Part 1, Introduction to New Space, Part 2, Cybersecurity Threats in New Space, and Part 3, Securing the New Space, in this series.

The previous three parts of this series established that the technological evolution of New Space ventures has expanded the threats targeting space system components, and that these threats can be countered by various cybersecurity measures.

The New Space era has also brought about a significant shift in the industry, and this wave of innovation is reshaping the future of space exploration.

5G and 6G technologies

The New Space age is crucial in the development of groundbreaking technologies.

Many experiments aim to make 5G the adopted standard for space. The U.S. Department of Defense (DoD) said it has conducted experiments with satellite direct-to-phone connections.

In addition, SatixFy, a satellite communication provider, announced that it has successfully demonstrated 5G backhaul communications through one of its satellites in low Earth orbit.

The Third Generation Partnership Project (3GPP), the international organization responsible for defining technical specifications for mobile wireless networks, developed a set of standards for space-based 5G.

Space-based platforms for 5G will provide high throughput and lower latency.

The aim is to effectively serve use cases such as edge computing, IoT service continuity and fixed backhaul to underserved remote locations.

Even though 5G has not yet reached maturity, the sixth generation of mobile communication networks is under standardization and development.

The 6G era will further enhance the capabilities of existing technologies, with the goal of providing ultra-fast, virtually unlimited communications.

This major development of space-based 5G and 6G is now a race between ventures, and even between nations.

Optical communications

Optical communication in space provides much higher bandwidth — an increase of 10 to 100 times — which allows NASA and other space agencies to carry more mission data.

Today, with NASA’s mission on Mars, a call-and-response exchange with the Perseverance rover can take up to 45 minutes.

Using laser beams to transmit data will allow optical communication links to carry more data and will revolutionize deep space communications.

In addition, optical communication networks provide ideal channels for the exchange of quantum photons.

Optical communications will bring significant benefits over radio frequency communications for space missions, including cheaper launches thanks to the smaller size, lower weight and reduced power requirements of optical equipment.

Cloud computing

The New Space era has been marked by many space ventures adopting cloud services and cloud-based technologies to take advantage of capabilities like ground stations as a service.

Cloud technology makes space mission phases (e.g., design, testing, operations, analysis) simpler and more accessible. Data processing is also significantly faster with cloud capabilities: insights generated by NASA’s Perseverance rover were processed in the cloud faster than ever before.

In October 2022, Sierra Space, a leading commercial space company, and IBM announced collaboration on the next generation of space technology and software platforms.

IBM will support Sierra Space through its journey of building a seamless technology platform in space, which both envision as a first-of-its-kind, comprehensive platform to effectively service the cloud in space and drive mission operations — all while supporting the development of new applications for commerce, research, tourism and more.

“IBM is committed to re-invigorating approaches to science and innovation to meet today’s biggest challenges. Collaborating with a leader like Sierra Space will support the growth of a more robust space economy in low-Earth orbit and beyond,” said Naeem Altaf, CTO of IBM Space Tech.

Edge computing

Edge computing is one of the most important technologies that will speed up the development of the space industry in the New Space age.

Edge computing is about placing computer workloads (both hardware and software) as close as possible to the edge — where the data is being created and where actions are occurring. Edge computing environments give customers faster response times, greater data privacy and reduced data transfer costs.

Edge computing technology benefits include performance, availability and data security.

In edge computing architectures, analytics data potentially never leaves the physical area where it is gathered and is used within the local edge. Only the edge nodes need to be secured, which makes the environment easier to manage and monitor and keeps the data more secure.

The best example of edge computing usage is the deployment of IBM Edge computing in the International Space Station (ISS).

DNA sequencing with the MinION device allows for the identification of microbes onboard the ISS. While DNA sequencing has become common onboard the ISS, data processing still requires downlinking the data to Earth, delaying the time to results.

IBM developed the “Edge Computing in Space” solution, which eliminates the need to move the massive data produced on the ISS by the DNA sequencing project. By running containerized analytical code right where the data is produced, leveraging the local compute available on the ISS, it reduces the time to results to less than a week.

Conclusion

The New Space age is creating a new space economy and ecosystem, and no one wants to be left behind.

This New Space age evolves with many breakthrough technologies and creates opportunities to provide better protection from security threats in space systems.

Nowadays, space cybersecurity has become a necessity with space systems subject to more cyberattacks. There is an urgency to secure space systems and future space missions. Space is going to grow exponentially and so will its challenges. Let’s get ready to secure the next frontier.

Cybersecurity in the next-generation space age, pt. 2: Cybersecurity threats in new space
https://securityintelligence.com/posts/cybersecurity-in-the-next-generation-space-age-pt-2-threats/
Security Intelligence, 17 Feb 2023


View Part 1 in this series, Introduction to New Space.

The growth of the New Space economy, the innovation in technologies and the emergence of various private firms have contributed to the development of the space industry.

Despite this growth, there has also been an expansion of the cyberattack surface of space systems.

Attacks are becoming more and more sophisticated and affect several components of the space system’s architecture.

Threat actors’ methodology

Every space system architecture is composed of three main components that are responsible for different functions: ground segment, space segment and communications. Each component can be hacked by an adversary.

Most attacks and vulnerabilities are related to communication links, such as radio frequency links or the ground system in general.

From an attacker’s perspective, the following model, Lockheed Martin’s cyber kill chain, identifies the phases an adversary may complete to achieve their objective.

Lockheed Martin cyber kill chain

First, the adversary performs reconnaissance to obtain information on the target, for example through open-source intelligence (OSINT) or email harvesting.

Second, in the weaponization phase, the adversary pairs an exploit with a deliverable payload, such as an exploit carrying a backdoor.

The third phase, delivery, determines how the weaponized payload reaches the target (via email, for example).

In the fourth phase, the adversary exploits the target’s system to execute code.

In the fifth phase, the adversary installs malware or other tooling, such as Mimikatz.

Next comes the command and control (C2) phase, which allows the attacker to direct the compromised target from a remote controller, such as Cobalt Strike or Empire.

And finally, the adversary carries out actions on objectives, such as ransomware deployment or data exfiltration.
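The seven phases above can be captured in a small data structure. A sketch of how a defender might rank observed indicators by how early they sit in the chain (the “earliest phase” heuristic is our illustration, not part of the Lockheed Martin model itself):

```python
from enum import IntEnum

class KillChainPhase(IntEnum):
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

def earliest_phase(observed):
    """The earlier in the chain a defender can break the attack, the better."""
    return min(observed)

seen = [KillChainPhase.COMMAND_AND_CONTROL, KillChainPhase.DELIVERY]
print(earliest_phase(seen).name)  # DELIVERY
```

Mapping alerts onto phases this way helps prioritize: blocking a delivery channel disrupts every later stage of the same intrusion.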

Let’s now take a closer look at cyber threats to space systems.

Ground segment threats

Ground stations and terminals have a role in data collection. As a result, they are under threat of cyber espionage from state and non-state actors.

Most cyberattacks against the ground segment exploit web vulnerabilities or lure ground station personnel into downloading malware or Trojan horses onto ground station computers.

Breaking into the ground station network can give the attacker access to the satellite itself. Once inside, attackers can perform denial of service (DoS) attacks or hijack industrial control systems (ICS) to control and damage the satellite.

A report published by the NASA Office of Inspector General revealed that in April 2018 threat actors breached the agency’s network and stole approximately 500 megabytes of data related to Mars missions.

The point of entry was a Raspberry Pi device that was connected to the IT network of the NASA Jet Propulsion Laboratory (JPL) without authorization or going through the proper security review.

Ground segment systems are facing many cyber threats and various attack vectors that can be leveraged for compromising these systems.

Threats to COTS

Commercial off-the-shelf (COTS) products are ready-made hardware or software that can be purchased and designed to be easily installed and interoperate with the existing system. Nowadays, space COTS components support New Space technology development with their qualification for small satellite missions, like CubeSats missions.

Unfortunately, COTS software, which is used in space applications, is very risky and presents a tempting point of attack for threat actors.

These components are well known and widely available, and we can find public information related to their security, including configurations, vulnerabilities, software versions and more.

This information is shared among the cyber adversary community.

As a result, COTS components are targets of different attacks, like system modification, DoS and data breach attacks.

Unauthorized access

Unauthorized access involves compromising the physical or logical security measures protecting ground segment assets. Such an attack can lead to the theft of sensitive data that could be used, for example, against a mission operation.

Data manipulation attacks

These attacks are intended to steal controlled information or to destroy the integrity of different data types. A typical use case is to corrupt data and send wrong commands to the command and data handling (C&DH) on the spacecraft to compromise the mission.

Supply chain attacks

The space field is extremely sensitive to cyber-enabled supply chain attacks. The commercialization of the space supply chain in the New Space era increases the risk of being targeted by cyber threats.

A supply chain attack seeks to harm an organization by targeting the less secure elements of the chain, resulting in unauthorized access to data and systems or the leaking of software and tools. At this stage, the adversary can take advantage of these weaknesses to, for example, plant a backdoor in the embedded systems of supply chain microelectronics devices.

Computer network exploitation

Computer network exploitation (CNE) is a breach of the network that the ground segment is connected to. CNE refers to attackers’ aptitude to attack and exploit vulnerable assets to steal data or gather intelligence about targets to figure out how they work and how they are configured. It’s about spying and reconnaissance.

Cloud platform attacks

The New Space era is marked by the expansion of cloud infrastructure use, with various space ventures leveraging cloud service providers’ infrastructure. With cloud technologies, space missions can be designed, tested, executed and analyzed easily and affordably.

However, cloud service providers regularly suffer outages or disruptions across their networks due to cyberattacks. These attacks can take the form of cloud abuse (accessing cloud storage data by hacking a virtual machine), Distributed Denial of Service (DDoS) attacks on publicly exposed cloud services, and insider threats (e.g., data exfiltration and credential theft).

Space segment threats

Like ground systems, the space segment is also a recognized cyberattack target.

Space vehicle compromises generally originate from breached ground stations or network components, where threat actors can break into the network.

Satellites are targets of man-in-the-middle (MitM), zero-day and ransomware attacks.

ROSAT, the US-German satellite, is one example of such an attack.

Threats to COTS

As explained earlier, COTS are reliable solutions for space ventures.

COTS hardware, such as plastic encapsulated microcircuits (PEMs), is used onboard small satellites (SmallSats) such as CubeSats.

Many CVE-listed vulnerabilities relate to COTS components and can be exploited by adversaries.

Once in orbit, satellite maintenance becomes an increasingly complex operation, and with the shorter product life cycles of COTS parts, hardware obsolescence becomes a major concern for satellite cybersecurity.

Threats to GN&C

Guidance, navigation and control (GN&C) is a system that includes the components that are responsible for satellite position determination and the components used by the Attitude and Orbit Control System (AOCS), also known as the Attitude Determination and Control System (ADCS).

GN&C is used to avoid collisions with space objects and to keep the satellite from falling into the Earth’s atmosphere. In some cases, this system is necessary for maintaining the satellite in an adequate position while it communicates with the ground station.

As such, attackers will attempt to compromise the GN&C system to create false navigation data and impede the satellite’s capability to navigate.

In addition, software that is used in GN&C systems may contain some vulnerabilities that can be exploited by adversaries to penetrate the system, compromise the integrity of sensors (onboard satellite) data, and cause a navigation system outage.

Threats to SDR

Software defined radio (SDR) is the component that allows the satellite to communicate with the ground station, both for transmitting and receiving signals.

It’s responsible for receiving a radio wave signal from the ground station and converting it into a communication stream, and vice versa, processes known as demodulating and modulating a signal.

The SDR technology offers on-orbit configurability and reduces the mass and size of the communication system.

An adversary can send malformed packets to the SDR component to trigger a buffer overflow and gain unauthorized access.

In addition, most SDR architectures used by NASA JPL and other space agencies include a POSIX operating system at the kernel level. Critical vulnerabilities in POSIX operating systems can allow an attacker to execute arbitrary commands and gain unauthorized access.

Once an attacker gains SDR access, they can modify the legitimate frequencies and settings used for communication with the ground station, making the satellite communicate on a different frequency than expected.
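As an illustration of the input-validation failure behind such buffer overflows, here is a minimal sketch of a defensive frame parser. The frame format (a 2-byte big-endian length followed by the payload) and the size limit are invented for this example; the point is that the parser rejects malformed frames rather than trusting their declared length field.

```python
import struct

MAX_PAYLOAD = 1024  # hypothetical limit for this sketch

def parse_frame(frame: bytes) -> bytes:
    """Parse a hypothetical [length:2][payload] telemetry frame.

    Refuses to trust the attacker-controlled length field, the class
    of bug behind many buffer-overflow exploits against parsers.
    """
    if len(frame) < 2:
        raise ValueError("frame too short for header")
    (length,) = struct.unpack(">H", frame[:2])
    if length > MAX_PAYLOAD:
        raise ValueError("declared length exceeds maximum")
    if len(frame) - 2 != length:
        raise ValueError("declared length does not match payload size")
    return frame[2:]
```

A malformed packet that claims a 65,535-byte payload while carrying only a few bytes is rejected at the header check instead of being copied into an undersized buffer.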

The SDR component is also part of the ground station architecture.

Threats to EPS

An electrical power system (EPS) is a critical component for the success of a space vehicle operation. Without power, the satellite can’t run any function; it can neither fly nor communicate with ground stations. CubeSats in particular, given their low-power nature, are susceptible to attacks that target the EPS.

Adversaries can use DoS attacks here as well. The attacker’s goal is not to flood the CubeSat or the communication channel, but to clog the CubeSat’s command queue with useless processes. These unnecessary process executions consume the CubeSat’s limited power.
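One mitigation for this queue-clogging pattern can be sketched as a bounded command queue. The class and cap below are hypothetical, but they show how rejecting excess commands limits how much work, and therefore power, a flood of junk commands can schedule.

```python
from collections import deque

class CommandQueue:
    """Minimal sketch of a bounded command queue for a power-constrained
    CubeSat: commands beyond the cap are dropped rather than queued,
    so a flood of junk commands cannot schedule unbounded work."""

    def __init__(self, max_pending: int = 8):
        self.max_pending = max_pending
        self.pending = deque()
        self.dropped = 0

    def submit(self, command) -> bool:
        # Reject instead of queueing once the cap is reached.
        if len(self.pending) >= self.max_pending:
            self.dropped += 1
            return False
        self.pending.append(command)
        return True
```

Real flight software would also authenticate commands before queueing them; the bound alone only limits the blast radius of a flood.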

Communications threats

The largest number of space cybersecurity incidents has been related to communication attacks.

The biggest weakness that makes satellites exploitable is their use of long-range telemetry for communication with ground stations.

Additionally, satellite uplinks and downlinks are often transmitted without encryption and can be easily accessed. (9)

Jamming

Jamming is the method of disrupting or interfering with the communication between the ground segment and space segment. It overpowers legitimate signals with a stronger noise signal that drowns out the regular frequency.

In March 2022, SpaceX’s Starlink satellite service, which had been deployed to provide internet access in Ukraine, was the target of a jamming attack.

Attackers can easily buy a jammer on e-commerce websites; most jammers in use are SDR-based.

Spoofing

Spoofing is an attack that manipulates the data communication between the satellite and the ground station, compromising its integrity. Spoofing is a more sophisticated interference method than jamming: the attacker tricks the system by transmitting a false signal that appears to be authentic.

One of the most popular spoofing attacks is against GNSS satellites (GPS systems, for example), which provide positioning and timing data to GNSS receivers (a smartphone, for example) to determine their locations.
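One simple receiver-side defense is a plausibility check on incoming position fixes. The sketch below flags a fix whose implied speed is physically implausible for the receiver; the speed threshold and the flat-earth distance approximation are simplifying assumptions for illustration only.

```python
import math

def implausible_jump(prev, curr, dt_s, max_speed_mps=350.0):
    """Flag a GNSS fix whose implied speed exceeds what the receiver
    could plausibly achieve, a simple spoofing heuristic.

    Positions are (lat, lon) in degrees; distances use a flat-earth
    approximation that is adequate for short hops between fixes.
    """
    lat1, lon1 = prev
    lat2, lon2 = curr
    meters_per_deg = 111_320.0  # approx. meters per degree of latitude
    dx = (lon2 - lon1) * meters_per_deg * math.cos(math.radians(lat1))
    dy = (lat2 - lat1) * meters_per_deg
    speed = math.hypot(dx, dy) / dt_s
    return speed > max_speed_mps
```

A fix that teleports the receiver a degree of latitude in one second trips the check, while normal movement does not; production anti-spoofing would combine this with signal-level checks.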

Eavesdropping

Eavesdropping is a form of man-in-the-middle attack: an attacker listens to and intercepts data and communications exchanged between the ground station and the satellite, and vice versa.

Satellites broadcast radio frequency (RF) signals back to earth to be received by ground stations. Most of the time, the data sent over these signals isn’t encrypted or uses a weak encryption cipher.

In this case, an attacker with the right equipment can intercept this exchanged data without needing to be close to the ground receiver.

Hijacking

Hijacking is gaining unauthorized control of a satellite to transmit the attacker’s signal. This signal can override or alter the legitimate transmitted data.

Hijack attacks are very common against media broadcasts. In 2013, the emergency alert systems of TV stations in Montana and Michigan were hacked, and the attackers broadcast a report of a zombie invasion.

Satellite TCP hijacking is an example of a communication hijack attack: the attacker hijacks the TCP session, obtains all the session details, and then masquerades as an authorized component to communicate.
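Message authentication is the standard countermeasure to this kind of masquerading. A minimal sketch using an HMAC tag, with a placeholder shared key and an invented command format, shows how the satellite can reject commands that did not come from the legitimate ground station.

```python
import hashlib
import hmac

SECRET = b"shared-ground-station-key"  # placeholder for illustration

def sign(command: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the satellite can verify the
    command really came from the ground station, not a hijacker."""
    return command + hmac.new(SECRET, command, hashlib.sha256).digest()

def verify(frame: bytes) -> bytes:
    """Strip and check the 32-byte tag; raise on any mismatch."""
    command, tag = frame[:-32], frame[-32:]
    expected = hmac.new(SECRET, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: possible hijack attempt")
    return command
```

HMAC alone proves origin and integrity but not freshness; a real link would also include sequence numbers or timestamps to stop replayed commands.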

Conclusion

Each component of the space system design can be targeted by a cyberattack. Cyber threats to space systems are real and must be taken seriously. However, what measures can be taken to enhance the security of the space system? We’ll find out in Part 3 of this series.

The post Cybersecurity in the next-generation space age, pt. 2: Cybersecurity threats in new space appeared first on Security Intelligence.

]]>
Beyond shadow IT: Expert advice on how to secure the next great threat surface https://securityintelligence.com/articles/secure-shadow-it-tiktok-secengineer/ Tue, 23 Aug 2022 13:00:00 +0000 https://securityintelligence.com/?p=437821 You’ve heard all about shadow IT, but there’s another shadow lurking on your systems: Internet of Things (IoT) devices. These smart devices are the IoT in shadow IoT, and they could be maliciously or unintentionally exposing information. Threat actors can use that to access your systems and sensitive data, and wreak havoc upon your company. […]

The post Beyond shadow IT: Expert advice on how to secure the next great threat surface appeared first on Security Intelligence.

]]>

You’ve heard all about shadow IT, but there’s another shadow lurking on your systems: Internet of Things (IoT) devices.

These smart devices are the IoT in shadow IoT, and they could be maliciously or unintentionally exposing information. Threat actors can use that to access your systems and sensitive data, and wreak havoc upon your company.

A refresher on shadow IT: shadow IT comes from all of the applications and devices your employees use without your knowledge or permission to get their jobs done and handle their work data. Some examples of shadow IT include departments purchasing and installing their own software, users making unauthorized changes to their endpoints and employees using cloud services that aren’t company standard.

Add a few IoT devices into the mix, and your environment is suddenly and obviously more vulnerable. However, what’s not as obvious is that the shadow IoT phenomenon can include things like multicolored light bulbs, coffee makers and Bluetooth speakers.

These devices pose new security risks for the enterprise, as IoT is typically not as secure as it should be. In 2021, 12.2 billion devices were connected to the internet worldwide, with growth expected to reach 14.4 billion active connections in 2022. If you think none of those devices are shadow devices on your network, think again. According to Infoblox, 35% of U.S., UK and German companies have more than 5,000 shadow devices connected to their network on any given day.

Putting IoT to the test

TikTok personality and security engineer Jose Padilla (@secengineer) knows how to see which devices might be at risk. His frequent TikTok posts test different IoT devices to determine just how risky they are and examine what kind of network traffic the devices are outputting.

“The Mirai botnet was created almost entirely by IoT devices,” he said. “That’s what inspired me to start looking more into what these IoT devices are doing on my network. Of course, I want to use smart things. They’re very convenient. I obviously love technology. But as a security engineer, I always have to second guess these kinds of things.”

Padilla has tested almost two dozen devices and explains that he takes each through a rigorous process that requires at least three or four hours of sifting through logs to establish patterns to see if anything stands out.

A lightbulb moment for IT staff

What surprised Padilla most from his testing is the security issues arising from something as simple as a smart lightbulb. You can watch his video for more detail, but we won’t name the product here.

“It’s such a well-known brand; a premium IoT brand,” he said. “I expected it to go completely smooth and be boring, and it definitely wasn’t boring.”

Padilla explained that the traffic generated from the smart bulb would raise serious red flags for any security team.

Here are the highlights of what he discovered:

  • Network traffic was “very, very noisy”
  • Used Discovery Protocol to basically look at everything on his network
  • Communicated with his Google Home services despite having turned that feature off
  • Local LAN traffic was encrypted (this is not common with many smart devices)
  • Traffic sent over the internet was not encrypted (also not at all common and a security risk).

What concerned Padilla the most was a vulnerability that, if exploited, could unleash significant damage.

“One of those things that I found was the authentication sessions,” he said. “The authentication sessions are the connection between the company’s cloud servers and the bulb’s smart hub itself. So if you wanted control from the cloud, this is the connection that’s going to do it.”

Risk spreads to other devices

Notably, he had this feature turned off in his tests, but the hub was still connecting to the cloud. All relevant tokens — the single sign-on token, the session token and the authentication token — were transmitting data in the clear.
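A crude version of the check that catches this kind of leak, scanning an unencrypted capture for token-like strings, can be sketched as follows. The regex patterns and the capture format are invented for illustration; real analysis would use protocol-aware parsing of the actual traffic.

```python
import re

# Hypothetical patterns; real detection would parse the protocols
# in the capture rather than grep raw bytes.
TOKEN_PATTERNS = [
    re.compile(rb"(?i)authorization:\s*bearer\s+[\w.-]+"),
    re.compile(rb"(?i)session[_-]?token=[\w-]+"),
]

def find_cleartext_tokens(capture: bytes):
    """Scan a raw (unencrypted) capture for token-like strings,
    a quick way to confirm a device is leaking credentials in the clear."""
    hits = []
    for pattern in TOKEN_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(capture))
    return hits
```

If the traffic were properly encrypted, a scan like this would find nothing, which is exactly the point: finding matches in captured traffic is itself the red flag.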

According to Padilla, a similar bug or vulnerability was found in another of the manufacturer’s products a few years ago, a smart air filter, but was quickly fixed.

“There’s no excuse for IoT devices to send traffic over the Internet unencrypted,” he said. “It’s just opening up more risk. It’s another threat vector, whether it will be easy to exploit or not.”

Another link in the attack chain

While most of the attacks that could be potentially launched against the lightbulb are benign, there are proofs of concept that should raise eyebrows.

“We’ve seen that some light bulbs can have a faster flicker rate, and one potential attack could produce a strobe light effect,” he said. “It could be harmful to anyone that’s photosensitive. But those are more minor in comparison to some of the other attacks, vulnerabilities and proofs of concept I’ve seen for this lightbulb.”

Padilla explains that security testers were able to upload malicious firmware to the light bulb, and it was not difficult for them to control the light bulb and force an unsuspecting user to connect to a bad bulb. The attack chain would go from the bulb to their phone to the hub.

“The proof of concept demonstrates the kill chain that can happen from just controlling one device,” he said. “It’s not just turning a light on or off; it can go from there to either running code on your phone or the hub and it can get your network to trust those two devices. From there, the sky’s the limit.”

Defending against shadow IoT

Preventing threats resulting from shadow IoT is never easy. After all, shadow IT and shadow IoT are so named because IT teams are in the dark. But, like everything in cybersecurity, good cyber hygiene goes a long way. If your organization is already deploying network segmentation, vulnerability scans, pen tests and patch management, you’re many steps ahead.

“The thing I can advise for organizations wanting to use smart devices is the same thing I suggest for home users: put it on an isolated network and don’t allow it to talk to your main network,” he said. “Treat it as a completely untrusted zone. If the shadow IoT devices are on an isolated network, there should be a safe disconnect.”

It should also come as no surprise that the most basic of security basics should be prioritized.

“The same importance should be applied to patch management,” he said, adding that scanning devices for vulnerabilities via vulnerability assessments and pen tests are also critical.

Finally, for the best protection against shadow IoT, Padilla suggests companies apply principles that align with zero trust.

Whether it’s shadow IT, shadow IoT or other common threats, users will only have access to the resources they need and only to the devices they should have access to. And shouldn’t that be the table stakes for security policy today anyway?

The post Beyond shadow IT: Expert advice on how to secure the next great threat surface appeared first on Security Intelligence.

]]>
Remote work makes it more important than ever to trust zero trust https://securityintelligence.com/articles/remote-work-zero-trust/ Thu, 21 Apr 2022 13:00:00 +0000 https://securityintelligence.com/?p=435595 The remote work era makes the zero trust model critical for most businesses. The time has come to use it. But first, let’s understand what it really is and why the hybrid and remote work trend makes it all but mandatory. What is zero trust? Zero trust is not a product or a service, but […]

The post Remote work makes it more important than ever to trust zero trust appeared first on Security Intelligence.

]]>

The remote work era makes the zero trust model critical for most businesses. The time has come to use it. But first, let’s understand what it really is and why the hybrid and remote work trend makes it all but mandatory.

What is zero trust?

Zero trust is not a product or a service, but an idea or a strategy. Instead of relying on a perimeter (for example, a firewall), every user, device and app must be verified for every instance of access.

Other ideas connected with this idea include strong user identity, machine identification, network segmentation, policy compliance and others.

A student at the U.K.’s University of Stirling named Stephen Paul Marsh coined “zero trust” in his doctoral thesis in 1994. Later, the concept was briefly called de-perimeterization and perimeterless network architecture. In the end, the phrase zero trust became the most widely accepted term. Industry guidelines like Forrester’s Zero Trust eXtended (ZTX), Gartner’s CARTA and NIST SP 800-207 further refined ideas and definitions around it.

Why remote and hybrid work demands zero trust

When the pandemic began, employees started working from home in the millions. It didn’t take long for threat actors to realize that the best way to break in was to enter through remote workers’ virtual private network (VPN) connections.

Each work-from-home employee, hybrid worker and digital nomad represents an expansion of the attack surface and new openings for attackers. An organization might be looking at dozens, hundreds or thousands of such employees. So, the attack surface becomes too large for older security models.

How to think about zero trust

Zero trust replaces an outdated idea. That idea? The assumption that everything ‘inside’ is trustworthy by default and that only outsiders pose threats. First, the solution was firewalls to create a perimeter. Then, VPN enabled remote employees to ‘tunnel’ into the perimeter.

This perimeter-centric view is outdated for many reasons. The rise of arbitrary mobile and wearable devices, cloud computing and the Internet of Things has eroded it, and now, above all, the hybrid and remote work trend has, too. Zero trust also accepts that threats often start inside the walls, and that cyberattacks are becoming more high-tech all the time. (There’s still a place for firewalls in zero trust networks — just not for perimeter security.)

At a high level, zero trust best practices start with several elements: the identification of critical assets, the establishment of strong identity systems for users, devices and apps, and the use of micro-segmentation. First, you create micro-perimeters on the networks and restricted access zones inside data centers and cloud environments. These control which people, devices and applications have permitted access to each segment, zone and resource. Beyond access restrictions, the hunt for intrusions and malware takes place through ongoing encrypted traffic inspection and analysis.

Process or policy?

The zero trust methodology enforces what used to exist in policies. In the past, company policies might say that only employees should access company resources. These employees had to use approved devices and apps. Policies might also call for employees to avoid rummaging through data beyond their purview.

Policies are great. The trouble is that this only guarantees security to the extent that people follow those policies.

Zero trust puts all-day, everyday enforcement of those policies into practice. The right people access the right resources using the right devices and applications, because only they have permission to do so. By default, every person, device and app is blocked from accessing every part of the network and everything on those parts until the person, device and app are all authorized.
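That default-deny logic reduces to a few lines. In this sketch the allow-list is a static set of (user, device, app) tuples for illustration; a real deployment would instead consult an identity provider, a device-posture service and a policy engine on every request.

```python
# Hypothetical allow-list for illustration only.
ALLOWED = {
    ("alice", "laptop-7", "crm"),
    ("alice", "laptop-7", "mail"),
    ("bob", "desktop-2", "mail"),
}

def authorize(user: str, device: str, app: str) -> bool:
    """Default deny: access is granted only when the user, device AND
    app are all explicitly authorized together for the resource."""
    return (user, device, app) in ALLOWED
```

Note that a valid user on an unapproved device, or a valid device running an unapproved app, is denied: each element of the triple must be authorized.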

Attackers are stymied at every turn in a zero trust network. If they trick or work around user authentication, their device will be denied access. Zero trust also narrows employee behavior: if one staff member decides to use an insecure app, that app won’t be allowed, even if they’re an authorized user on an authorized device.

The zero trust network architecture also helps with compliance auditing. It allows for improved visibility into user activity, device access and location, credential privileges, application states and other key factors. It also provides more data on which specific network resources have and have not been breached. Both of these are important for success.

Outsourced or in-house?

A zero trust network architecture represents a pretty radical departure from perimeter security. The decision over which parts to outsource and which to keep in-house depends on whether staff has experience with the elements of zero trust. It also depends on how well you’ve staffed in general.

It’s reasonable to outsource many parts of the transition. Then, after learning more, bring some parts in-house, depending on what makes sense for your needs. But even if you’re inclined to keep security work in-house, you might want to consider outsourcing to help with the transition.

The human element

Express the move to zero trust as part of the wider conversation about the new workplace. As we continue to adapt to remote and hybrid work, employees should be included as partners in this transition. Zero trust security is part of that.

Zero trust will impact all employees in multiple ways, including inconvenience in their workday and a learning curve up front. That’s why it’s super important to express the benefits, the link to hybrid and remote work and the impracticality of sticking with yesterday’s perimeter security mindset.

For many organizations — especially those fully embracing remote and hybrid work — zero trust is no longer an option. It’s time to trust it.

The post Remote work makes it more important than ever to trust zero trust appeared first on Security Intelligence.

]]>
Where everything old is new again: Operational technology and ghosts of malware past https://securityintelligence.com/posts/operational-technology-ghost-malware-past/ Wed, 13 Apr 2022 10:00:00 +0000 https://securityintelligence.com/?p=435740 This post was written with contributions from IBM Security X-Force’s Michael Worley. Operational technology (OT) — the networks that control industrial control system processes — face a more complex challenge than their IT counterparts when it comes to updating operating systems and software to avoid known vulnerabilities. In some cases, implementation of a patch could […]

The post Where everything old is new again: Operational technology and ghosts of malware past appeared first on Security Intelligence.

]]>

This post was written with contributions from IBM Security X-Force’s Michael Worley.

Operational technology (OT) — the networks that control industrial control system processes — faces a more complex challenge than its IT counterparts when it comes to updating operating systems and software to avoid known vulnerabilities. In some cases, implementing a patch could lead to hours or days of costly downtime. In other cases, full mitigation would require net new purchases of potentially millions of dollars’ worth of machinery to replace already functional systems simply because they are timeworn.

It’s no secret OT systems face this conundrum — and it’s become increasingly obvious cyber criminals are aware of this weakness, too. While there’s no shortage of recent headlines decrying the vulnerability of these systems to the more sophisticated malware commonly used by threat actors today, those conversations have overlooked another potential — yet equally serious — threat to OT: older malware still floating in the ether.

This is malware for which most systems have been patched and protected against, immunizing large swaths of networks and effectively dropping the older malware from the radar of IT teams (and headlines). Two examples of this kind of older malware include Conficker and WannaCry.

While occurrences of these malware types plaguing OT environments are relatively rare, they do occur — and often leave organizations combating a threat that was largely forgotten.

WannaCry: The scourge of 2017… and beyond

The WannaCry ransomware outbreak was a watershed for cybersecurity professionals in 2017 — a moment in time many in this industry will never forget. The fast-spreading worm that leveraged the EternalBlue exploit ended up affecting more than 200,000 devices in over 150 countries. From X-Force’s perspective, WannaCry is the ransomware type they have most commonly seen at organizations with OT networks since 2018 — and, occasionally, WannaCry will even migrate into OT portions of the network itself.

One example of WannaCry infecting an OT network is Taiwan Semiconductor Manufacturing Company (TSMC) in 2018. Despite having robust network segmentation and cybersecurity practices in place, human error led to a vendor installing a software update on the OT portion of the network using a machine unknowingly infected with WannaCry ransomware. Because the laptop used for the software installation had been patched and was using an up-to-date operating system, it was not susceptible to the ransomware — but the OT network, on the other hand, was very susceptible.

The WannaCry ransomware spread quickly across TSMC’s network and infected several systems, since the OT network included multiple unpatched Windows 7 systems. The ransomware affected sensitive semiconductor fabrication equipment, automated material handling systems, and human-machine interfaces. It also caused days of downtime estimated to cost the company $170 million. CC Wei, the CEO of the company, said in a statement, “We are surprised and shocked. We have installed tens of thousands of tools before, and this is the first time this happened.” As a result of the incident, the company implemented new automated processes that would be less likely than human error to miss a critical security step.

WannaCry continues to affect organizations with OT networks, although — thankfully — X-Force observes such incidents much less frequently today than they did in 2018 and 2019, as many organizations are able to apply patches or identify workarounds to more effectively insulate networks from WannaCry.

Enter Conficker: Continuing to emerge in 2021

An old worm — even older than WannaCry — that X-Force has observed on OT networks in 2021, however, is Conficker. This worm emerged in late 2008 as threat actors quickly leveraged newly released vulnerabilities in Microsoft Windows XP and Windows 2000 operating systems. Conficker seeks to steal and leverage passwords and hijack devices running Windows to run as a botnet. Because the malware is a worm, it spreads automatically, without human intervention, and has continued to spread worldwide for well over a decade.

Conficker — sometimes with different names and variants — is still present in some systems today, including in OT environments. As with WannaCry, the presence of legacy technologies and obsolete operating systems — including Windows XP, Windows Server 2003, and proprietary protocols that are not updated or patched as often as their IT network counterparts — make these environments especially vulnerable to Conficker. In addition, many legacy systems have limited memory and processing power, further constraining administrators’ ability to insulate them from infections such as Conficker or WannaCry, as the system will not even support a simple antivirus software installation.

The Conficker worm is particularly effective against Windows XP machines, especially unpatched versions, which are common in OT environments. The fast-spreading nature of the Conficker worm can be a challenge for network engineers — once infected, every Windows machine connected to the network could be impacted in as little as one hour. Since many OT environments are built on 20- to 30-year-old designs, partially modified for connectivity and ease of access, they provide the ideal environment for even the simplest malware, Conficker included.

In the Conficker infections X-Force has observed, the worm is able to affect human-machine interfaces (HMIs), whose transmitted network traffic initially alerted security staff to the infection. X-Force malware reverse engineering of the Conficker worm indicates that it exploits the MS08-067 vulnerability to initially infect the host. Fortunately, in some cases Conficker malware — even when present in OT environments — has not led to operational damage or product quality degradation. Of course, this may not be the case for all network architectures on which Conficker malware may appear.

Defending OT networks from old malware: Lessons from the trenches

Even though many OT environments are running obsolete software and network topographies, there are measures organizations can take to defend against older malware strains such as WannaCry and Conficker. Often, the highest priority in an OT environment is maximizing uptime, leaving little room for maintenance, re-design, updates and their associated downtime. Yet even within these confines, there are many measures organizations can take to decrease the opportunities for old malware to get onto, spread within, and negatively affect their network.

Some of these include:

1. Network segmentation: Micro-segment the networks within an OT environment. If different lines do not need to communicate with each other, there is no need to create and maintain a large network subnet for all systems. Improve reliability of systems by segregating those in smaller subnets and restricting traffic at boundaries. In addition, an industrial demilitarized zone (iDMZ) is your best ally for compartmentalization and network segmentation. Avoid dynamic host configuration protocol (DHCP) as much as possible; should you be required to use it, subnet it to the lowest possible net mask. Configure virtual local area networks (VLANs) if possible.
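The "lowest possible net mask" idea above can be computed directly. This sketch uses Python's standard ipaddress module to split a flat OT subnet into the smallest segments that still fit the required hosts; the sizing logic is a simplification that accounts only for the network and broadcast addresses, ignoring per-segment overhead such as gateway addresses.

```python
import ipaddress

def micro_segments(block: str, hosts_per_segment: int):
    """Split one large OT subnet into the smallest subnets that still
    fit the required number of hosts per segment."""
    net = ipaddress.ip_network(block)
    # Each segment needs hosts + network + broadcast addresses.
    host_bits = (hosts_per_segment + 2 - 1).bit_length()
    new_prefix = 32 - host_bits
    return list(net.subnets(new_prefix=new_prefix))
```

For example, a flat 10.0.0.0/24 line network with at most six devices per production cell splits into thirty-two /29 segments, each of which can then get its own boundary rules.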

2. Know what you have: Systems older than 20 years probably do not have a good electronic record in a configuration management database (CMDB) and may be missing or have outdated network drawings. Reverse engineering this information during an incident is not productive, and ensuring assets and network information is maintained accurately can go a long way. Be aware of the IPs, MACs, operating systems, and software licenses in your asset inventory. Get to know your environment up to the revision date of your software. Make clear which users are allowed to log on to machines based on specific roles; if possible, link users to a machine’s serial number.

3. Harden legacy systems to maintain a secure configuration: Remove all unused users and revoke all unnecessary administrative privileges, remove all unused software, disable all unused ports (running a packet capture can help), and prohibit using these assets for personal use. Insecure configuration of endpoints can leave open vulnerabilities for exploitation by adversaries or self-propagating malware. Identify unused and unwanted applications and delete them to reduce the attack surface. Avoid proprietary protocols as much as possible, unless they are constantly updated; check for and use better, newer protocols that are standardized.

4. Continuous Vulnerability Management: A vulnerability management program allows organizations to reduce the likelihood of vulnerability exploitation and unauthorized network access by a malicious actor and is necessary to make informed vulnerability treatment decisions based on risk appetite and regulatory compliance requirements. All necessary security and safety relevant patches must be applied as soon as feasible. If it is not possible to patch the system, ensure other compensating security controls are implemented to reduce the risk. Identify the lowest demand times in a day or week and commit to having downtime and maintenance windows for patching and updating. Routinely check for advisories on ICS-CERT and note whether your vendors are impacted.

5. Reduce SMB Attack Surface: Both WannaCry and Conficker are known to exploit SMB. Server Message Block (SMB) is a network communication protocol used to provide shared access to services on a network, such as file shares and printers. Because of its prevalence in information technology environments, adversaries commonly use this protocol to move laterally within a compromised environment, interact with remote systems, deploy malware, and transfer files. Moreover, SMB can provide a convenient way to bypass Multi-Factor Authentication (MFA) and remotely execute code. To reduce the attack surface and the overall risk associated with SMB-based lateral movement, consider the following hardening measures:

  • Configure Windows firewall to DENY all inbound SMB communications to workstations. This control will disable inbound connections on TCP ports 139 and 445.
  • Audit server SMB requirements and explicitly DENY SMB inbound on servers that do not require the protocol as part of their functionality.
  • Consider disabling legacy versions of the SMB protocol and migrating business applications to SMB v3.1. This activity requires careful planning and risk evaluation due to its potential impact on business operations.
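A quick audit of the first two bullets can be scripted. The (port, action) rule format below is invented for this sketch; a real audit would parse the firewall's own rule export rather than a hand-built list.

```python
SMB_PORTS = {139, 445}

def audit_inbound_rules(rules):
    """Report SMB ports that are not explicitly denied inbound.

    `rules` is a simplified list of (port, action) pairs; any SMB port
    without an explicit DENY is flagged for follow-up.
    """
    denied = {port for port, action in rules if action == "DENY"}
    return sorted(SMB_PORTS - denied)
```

An empty result means both NetBIOS (139) and SMB (445) inbound are explicitly denied; anything else lists the ports that still need a deny rule.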

6. Avoid the use of Portable Media: Uncontrolled portable media significantly increase the risks to the legacy OT environments, as OT systems may not have the latest security patches to defend against newer attack methodologies. Uncontrolled and unsecured allowance of portable media can expose an OT network to exploits and unplanned outages and downtime.

  • Have a security policy for secure use of portable media in OT environments.
  • Ideally, strictly prohibit use of USB flash drives. Should there be an absolute necessity of using one, designate a single USB stick for any maintenance and re-format it every time you use it.
  • Implement processes and technical controls that adequately support the security policy requirements. Controls may include, but are not limited to the following:
    • Every use of the device is documented in the logbook
    • The devices are scanned on designated quarantine PCs to ensure robust AV scan before using on OT endpoints. Ensure that anti-malware software is configured to automatically scan portable media
    • Control the number of portable media devices approved to be used in the environment
    • Disable autorun and autoplay auto-execute functionality for removable media.
  • Consider implementing Secure Media Exchange solutions such as Honeywell SMX or OPSWAT MetaDefender.

7. Rehearse Disaster Recovery (DR) and Incident Response (IR) scenarios regularly: DR plans should be documented, reliable backups should be available, and OT personnel must have an understanding and intimate knowledge of how the system should be recovered. IR and DR exercises should be conducted regularly to build the muscle memory needed for reliable recovery. Educate your team about imminent security threats and make them part of the security process. As part of any plan, have a direct line with your organization’s CSIRT: your best play is always a fast response and a transparent environment, so be organized and report everything.

8. Employ network monitoring solutions: Firewalls, Access Control Lists (ACLs) and Intrusion Prevention Systems (IPS) can assist in keeping a close eye on traffic traversing your network. Check for new nodes or machines communicating with suspicious assets. If you employ an intrusion detection system (IDS), keep your signatures up to date; even for old malware, new signatures appear every day.
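As a simplified illustration of that kind of monitoring, the sketch below (all IP addresses and the `review_flows` helper are hypothetical) compares observed traffic against a baseline of known nodes and a list of suspicious assets:

```python
# Hypothetical baseline of assets seen during normal operation, and a list
# of destinations considered suspicious (addresses are illustrative).
BASELINE_NODES = {"10.0.0.5", "10.0.0.7"}
SUSPICIOUS_ASSETS = {"203.0.113.66"}

def review_flows(flows):
    """Given (source, destination) pairs from a network tap, return alerts
    for previously unseen nodes and for traffic to suspicious assets."""
    alerts = []
    for src, dst in flows:
        if src not in BASELINE_NODES:
            alerts.append(("new-node", src))
        if dst in SUSPICIOUS_ASSETS:
            alerts.append(("suspicious-destination", src, dst))
    return alerts
```

Real IDS/IPS products work from packet signatures and behavioral models rather than simple set membership, but the deny-anything-unexpected posture is the same.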

While it isn’t common for an OT network to be infected with older malware like WannaCry or Conficker, documented cases do indeed exist, and they can leave costly destruction and even safety consequences in their wake.

To learn how X-Force can keep your network safer, download the X-Force for OT solution brief.

Read the 2022 X-Force Threat Intelligence Index Report to understand the latest OT Threats

The post Where everything old is new again: Operational technology and ghosts of malware past appeared first on Security Intelligence.

]]>
Low-code is easy, but is it secure? https://securityintelligence.com/articles/low-code-easy-secure/ Mon, 28 Mar 2022 13:00:00 +0000 https://securityintelligence.com/?p=435504 Low-code and no-code solutions are awesome. Why? With limited or no programming experience, you can quickly create software using a visual dashboard. This amounts to huge time and money savings. But with all this software out there, security experts worry about the risks. The global low-code platform market revenue was valued at nearly $13 billion […]

The post Low-code is easy, but is it secure? appeared first on Security Intelligence.

]]>

Low-code and no-code solutions are awesome. Why? With limited or no programming experience, you can quickly create software using a visual dashboard. This amounts to huge time and money savings. But with all this software out there, security experts worry about the risks.

The global low-code platform market revenue was valued at nearly $13 billion in 2020. The market is forecast to reach over $47 billion in 2025 and $65 billion in 2027 with a CAGR of 26.1%. Very few, if any, markets can expect to see such robust growth.

What is low- or no-code software? What’s driving the explosive growth in this sector? And what are the security risks?

What is low-code development?

Low-code platforms enable those with limited programming skills to become citizen developers. People can use intuitive graphical interfaces to create applications faster than conventional coding methods. This means non-technical staff can contribute.

At a recent VentureBeat Low-Code/No-Code Summit, brands of all sizes shared how they use low-code to improve and accelerate business processes. For example, no-code solutions can streamline application creation, enable real-time data analysis and automate manual, time-consuming workloads.

The low-code popularity boom

It doesn’t take a master coder to understand why many companies choose to adopt low-code development. One survey showed that 41% of organizations are using a low- or no-code platform. Within these companies, professional IT staff make up 69% of low-code users, which means nearly a third of low-code users are non-IT team members busily creating software.

During 2020 and 2021, IT leaders were pressed to slash development times. The increased demand for custom software led to the emergence of non-IT citizen developers. As a result, the low-code market expanded rapidly and will continue to grow by leaps and bounds. Gartner estimates that by 2024, low-code tools will be behind more than 65% of application development.

Starbucks embraced low-code

It’s not only bootstrapped businesses that need low-code solutions. On the contrary, many of the biggest brands have pivoted to less technical solutions to meet their needs.

Starbucks chief digital and analytics officer Jonathan Francis says that he saw efficiency gains from low-code tools as the demand for remote solutions stretched IT to the limit. Low- and no-code platforms enabled Starbucks to digest a backlog of development tasks that normally would have taken far longer to finish.

“We need opportunities to scale quickly … You’ll never find enough data scientists,” Francis said. “We’re all competing for the same resources — we have limited budgets. So you have to start thinking about local solutions.”

Who’s guarding the gate?

While all this freewheeling app development may be great for innovation and productivity, the security officer is thinking, “If every Sally, Sam and Joe can conjure up apps across the enterprise, how am I going to secure it all?” Good question.

The good news is that security is built into many low-code platforms. Traditional application development doesn’t always take security into account. Or, someone puts it in place later. But with secure low-code platforms, governance and control are built-in before your people start tinkering. This means IT maintains and sets centralized control over access, automation and data assets.

Setting low-code rules

No matter how good the low-code tool is, there’s still a chance that employees will be tempted to create applications beyond the security radar. For this reason, built-in permissions go a long way in maintaining good governance.

It all begins with proper training for anyone who will dabble in low- or no-code projects. They need to understand that only approved low-code platforms are okay to use. Plus, educate and alert your people to the need for testing. At the end of the day, who gets access to what should be firmly established.
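Those governance rules can be expressed as simple policy checks. The following hypothetical Python sketch (the platform names, roles and dataset labels are invented) shows how an approved-platform list and role-based data access might be enforced before a citizen-developer project goes ahead:

```python
# Hypothetical governance policy: approved platforms and the datasets
# each role may touch (all names are invented for illustration).
APPROVED_PLATFORMS = {"acme-lowcode", "contoso-studio"}
ROLE_DATA_ACCESS = {
    "citizen-developer": {"marketing-data"},
    "it-developer": {"marketing-data", "customer-data"},
}

def may_build(platform: str, role: str, datasets: set) -> bool:
    """A project is allowed only on an approved platform, and only over
    datasets the builder's role is entitled to access."""
    if platform not in APPROVED_PLATFORMS:
        return False
    return datasets <= ROLE_DATA_ACCESS.get(role, set())
```

For example, a citizen developer building on an approved platform against marketing data passes, while the same project on an unapproved tool, or reaching for customer data, is rejected.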

Now, let’s look at some other specific ways to manage low code security risks.

Play in the sandbox

If you put all your approved development resources in a sandbox, then citizen developers can play nice and avoid risk exposure. From there, clearly establish and manage data access and sharing.

Many low-code platforms provide this type of control at the virtual data layer. Some low-code platforms even come with regulation compliance built-in.

Runtime environment management

The runtime environment is where a certain program or application executes. It’s the hardware and software that supports the running of a certain codebase in real-time.

You can configure this to reveal data exposure and poorly applied security controls. These measures can help avoid business logic failure, such as posting sensitive data to a public location.
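As a rough sketch of such a check, the hypothetical code below (the manifest format and field names are invented for illustration) scans an application's exported configuration for sensitive fields shared to a public location:

```python
def find_exposures(manifest: dict) -> list:
    """Flag fields marked sensitive that are shared to a public location,
    the kind of business logic failure described above. The manifest
    format here is invented for illustration."""
    findings = []
    for field in manifest.get("fields", []):
        if field.get("sensitive") and field.get("visibility") == "public":
            findings.append(field["name"])
    return findings
```

A real runtime check would read the platform's actual export format and cover storage locations, sharing links and connector scopes as well.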

Other ways to harden low-code environments

Other ways to strengthen low-code environments include:

  • Static code analysis: Perform static analysis on any low-code platform-generated code and test for common errors.
  • Audit proprietary libraries and partners: Ask vendors about their security standards and examine proprietary libraries for potential risks. Does the vendor have a way to verify their security?
  • Secure the API layer: Test API connections regularly with an API scanner.
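To make the static-analysis bullet concrete, here is a minimal, hypothetical example of one common check: flagging hardcoded credentials in generated code. The pattern is deliberately simplistic; real scanners use far richer rule sets:

```python
import re

# One common error a static scan can catch: hardcoded credentials.
SECRET_PATTERN = re.compile(
    r"(password|api_key|secret)\s*=\s*[\"'][^\"']+[\"']", re.IGNORECASE)

def scan_source(source: str) -> list:
    """Return the line numbers that appear to contain hardcoded secrets."""
    return [number for number, line in enumerate(source.splitlines(), start=1)
            if SECRET_PATTERN.search(line)]
```

Running this over every file a low-code platform generates, as part of the build pipeline, turns the recommendation above into an automatic gate.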

Trust no one, secure everything

Placed in the hands of non-IT staff, low-code tools are used to create even more applications. This further supports the notion of a perimeter-less architecture. We are in the midst of a boom of applications, APIs, devices, users and environments. This makes securing your network more challenging than ever.

Low-code is only part of a larger, more complex security conundrum. As a response, many organizations are adopting a zero trust approach.

A zero trust security model ensures data and resources are closed off by default. Access is granted on a least-privilege basis. Zero trust requires each and every connection to be verified according to your policies. Zero trust tools then authenticate and authorize every device, network flow and connection using AI-assisted contextual analysis from as many data sources as possible.
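A deny-by-default policy check like the one described above might be sketched as follows (the request shape, policy table and `evaluate_connection` helper are hypothetical simplifications of what real zero trust tooling does):

```python
def evaluate_connection(request: dict, policies: dict) -> bool:
    """Deny by default: a connection is granted only if an explicit policy
    allows this principal this action on this resource, and the device
    passes its posture check."""
    key = (request["principal"], request["resource"])
    allowed_actions = policies.get(key, set())  # no policy means no access
    return (request["action"] in allowed_actions
            and request.get("device_compliant", False))
```

Note that an absent policy entry or an unverified device both fall through to denial, which is the defining property of the model.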

Low-code can quickly reshape the technical prowess of any organization. It democratizes development, accelerates innovation and boosts productivity. But to fully leverage the advantages of low-code, it must be secure.


]]>
IAM secures the new, perimeter-less reality https://securityintelligence.com/articles/iam-secures-perimeterless/ Wed, 23 Mar 2022 13:00:00 +0000 https://securityintelligence.com/?p=435478 Necessity may be the mother of invention, and it also drives change. To remain competitive in 2021, companies had to transform rapidly. Today, many of us work from home. Remote and hybrid work models have become the new normal. But what about security? In one recent survey, 70% of office workers admitted to using their […]

The post IAM secures the new, perimeter-less reality appeared first on Security Intelligence.

]]>

Necessity may be the mother of invention, and it also drives change. To remain competitive in 2021, companies had to transform rapidly. Today, many of us work from home. Remote and hybrid work models have become the new normal. But what about security?

In one recent survey, 70% of office workers admitted to using their work devices for personal tasks, while 69% used personal laptops or printers for work. Also, 30% of remote workers let someone else use their work device. Plus, cyber attack rates have gone through the roof. The average person may not think much about security, but they expect it. It all sounds like a busy security officer’s nightmare.

How can you possibly secure your perimeter when so many employees and users engage in risky behavior outside your firewall? The answer is to make identity the new perimeter. And thanks to identity and access management (IAM), this new, fluid perimeter can be secured.

The rush to secure identity

The IAM market is projected to grow from $13.41 billion in 2021 to $34.52 billion in 2028 at a CAGR of 14.5%. Why so much interest?

According to the 2021 IBM Cost of a Data Breach report, compromised credentials continue to be the most common initial attack vector. So, we need better credentials protection. Also, regulatory and organizational pressures continue to mount in a call to secure corporate assets. IAM solutions satisfy both these needs. There are other powerful incentives driving the rush to adopt identity and access strategies, too.

IAM secures the perimeter-less architecture

Protecting apps and digital assets in the remote context requires strict data access management. As device and connection types grow in number, security gets more complex and cumbersome. However, people can still enforce rules according to the who, what, where and when surrounding access to sensitive data.

Zero trust models, which include least privilege access, verify each and every connection and endpoint. This means the system grants every request for access the least amount of privilege. Zero trust ensures that resources are restricted by default, even for connections inside the perimeter.

IAM has become a centerpiece of this new vision. To meet current threats, security teams need to set a perimeter against each and every request for access, no matter where they come from. This is key for distributed teams who work worldwide with employees, partners and freelancers. And as team members change roles, access privileges must be granted or removed.

IAM software relies on machine learning and artificial intelligence to analyze key parameters, such as user, device, browser type and behavior, enabling it to rapidly spot anomalies. You can also define adjustable risk scores to match the evolving access terrain. The result is a real-time, accurate and contextual authentication process across your entire ecosystem.
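As a toy illustration of adjustable risk scoring (the signal names, weights and threshold are invented, and real IAM products use far more sophisticated models), consider:

```python
# Hypothetical, adjustable weights for contextual risk signals, and the
# score above which a request should be challenged with extra factors.
WEIGHTS = {"new_device": 40, "unusual_location": 30,
           "odd_hours": 15, "new_browser": 15}
STEP_UP_THRESHOLD = 50

def risk_score(signals: dict) -> int:
    """Sum the weights of the signals present in this access attempt."""
    return sum(WEIGHTS.get(name, 0) for name, present in signals.items() if present)

def decide(signals: dict) -> str:
    """Allow low-risk attempts; demand step-up authentication otherwise."""
    return "step-up-auth" if risk_score(signals) >= STEP_UP_THRESHOLD else "allow"
```

Tuning the weights and threshold is how the "adjustable risk scores" mentioned above get matched to an organization's evolving access terrain.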

More benefits of IAM

Savvy business and IT leaders rapidly see other benefits that IAM models bring to a company’s performance. For starters, instead of badgering users (and wasting time) about non-authorized device use, people can access networks regardless of location, time or device.

For more complex environments with multiple applications, you can grant access via single sign-on and multifactor authentication capability. This simplifies web and mobile experiences, increases productivity and reduces the drain on IT resources. From there, automated access management can streamline the on- and off-boarding processes critical for remote teams.

Consider the boutique asset management firm that built a cloud-based wealth management platform for its employees, associates and clients. Accessible through a wide range of devices, an IAM-based portal gave the firm’s stakeholders access to a full suite of apps and tools that connect through an API gateway. The company’s website, Salesforce CRM, portfolio analysis software, custom-built in-house solutions and third-party offerings (such as Zoom) were all united to conserve resources, improve user experience and streamline performance.

Can you simplify compliance, too?

In 2020, governments passed over 280 bills or resolutions dealing with cybersecurity. Meanwhile, the General Data Protection Regulation’s Privacy by Design principle requires data protection to be built in from the outset. Here, IAM fits the bill perfectly. After all, it builds strong identity and access security into the system.

Keeping up with constant updates to regulations can be painstaking. So it’s comforting to know that a major compliance concern is secure access. Who has access to what data is a top worry as well. IAM goes a long way to satisfy both internal and external compliance mandates.

Let the right ones in

Human beings aren’t the only ones requesting network access. The digital space has exploded with the number of apps, APIs and internet of things devices that come knocking on your network door. IAM includes these connections as well with their own set of permissions and protocols.

An ideal IAM solution caters to all clients, partners, employees and contractors. It also responds to the ever-growing requests of non-human connections. IAM is not just a defense, but a better way to manage the workplace.

Consider the customer journey. From lead to prospect to customer, each interaction must be cultivated to account for user preferences and privacy while providing a great experience. Here, IAM tools can work double-shift to provide access authentication and assemble user profiles that enhance security and user experience.

Whether it’s an employee, partner or customer, every person has one identity no matter the device or platform. This can include access from apps, social media, websites and any other endpoint. This not only makes for a more holistic user experience, but it can also help thwart social engineering-type attacks.

Be perimeter-less, be secure

While it might be tempting to fall back on rigid, complex authentication processes, this approach does more harm than good in the long run. One might argue that a static solution saves money, but does it really? It cannot address the myriad attacks that continue to surface. And if you consider the business and compliance benefits, a non-IAM solution may lock you out of other ways to improve outcomes.

Today’s digital landscape was thrust upon us before its time. To meet new challenges and seize opportunities, you must clearly define, and skillfully manage, identity.


]]>
Threat modeling approaches: On premises or third party? https://securityintelligence.com/posts/threat-modeling-approach-options/ Thu, 17 Mar 2022 20:30:00 +0000 https://securityintelligence.com/?p=435436 What’s the difference between on-premises and cloud security threat modeling approaches? Both can help protect against cloud threats and have distinct benefits and risks. The latest tech developments are happening here in the cross-section of cybersecurity and cloud security. More and more treasured data is being kept and used to make data-driven decisions. So, defending […]

The post Threat modeling approaches: On premises or third party? appeared first on Security Intelligence.

]]>

What’s the difference between on-premises and cloud security threat modeling approaches? Both can help protect against cloud threats and have distinct benefits and risks.

The latest tech developments are happening here in the cross-section of cybersecurity and cloud security. More and more valuable data is being stored in the cloud and used to make data-driven decisions. So, defending data against internal threats, malware vulnerabilities and unwanted external access is paramount. Advanced cloud security approaches such as threat modeling in the cloud and other software-as-a-service-based solutions can help. They allow your organization to recognize and circumvent threats to key software and data center components of your IT infrastructure.

Two main hosting options

There are two main options for hosting: on-premises servers or in the cloud with a third-party cloud service provider (CSP) accessed through application programming interfaces (APIs). Some security concerns arise in general for the cloud computing environment, and cybersecurity leaders need to consider these when applying threat models there. For example, you’ll have to think about multitenancy and secure data transmission. Data is no longer maintained in your own data center systems but at the CSP, so the attack surface increases and you have less control over your threat modeling. Securing data and functions with cryptographic key management techniques involves both the CSP and the cloud tenants. The threat model should therefore judge threats by taking into account the two-party involvement in cryptographic key exchange and storage, which can introduce problems of its own.

Identity and access management (IAM) also plays an essential role in securing access to public cloud resources. It offers a way for user access provisioning and de-provisioning to specific resources. In addition, IAM with role-based access control can mitigate high risks, such as sharing credentials, with the help of defensive best practices in the cloud.

Which threat modeling approach is right for you?

So, you can see the differences between an on-premise and a CSP. Which path is the correct one for you? This depends entirely on your needs and the design architecture of your enterprise. Take into account your deployment model, cost, control, security and compliance needs.

Any study of information system security must reflect the threats and vulnerabilities that may imperil the enterprise environment. Threats exploit vulnerabilities in the system to put resources or data at risk. Data owners need to use the correct tools to mitigate known vulnerabilities and reduce exposure to a specific threat or class of threats. Using a threat-based approach in public clouds is paramount for finding out which threats can be thwarted and which continue to exist.

STRIDE threat modeling

A popular approach is called the STRIDE threat modeling methodology. It can be employed for both on-premises and cloud environments.

STRIDE is used to classify the objectives of attacks in both environments. Data owners can apply it at the design level of systems to address spoofing, tampering, repudiation, information disclosure, denial of service and elevation of privilege threats. Public cloud infrastructure faces similar threats to the on-premises data center network.

Thus, using the STRIDE threat model exposes threats that exist both on-premises and in the cloud. However, the use of the public cloud adds unique threats to the customer enterprise. It introduces lack of control, less visibility into resources and operations and undeveloped compliance requirements.

Threat modeling is just as important for the cloud as it is for on-premise infrastructure. Under the shared responsibility model, your enterprise is still responsible for the data and content within the CSP environment. To limit the exposure of your data, you should reduce the risk with on-premise data center cybersecurity best practices and controls.
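The six STRIDE categories map directly to the security property each one violates, which makes the model easy to express in code. The sketch below (the example threats passed to `classify` are illustrative, not a formal catalog) groups hypothetical cloud threats by STRIDE category:

```python
# The six STRIDE categories, mapped to the security property each violates.
STRIDE = {
    "spoofing": "authentication",
    "tampering": "integrity",
    "repudiation": "non-repudiation",
    "information disclosure": "confidentiality",
    "denial of service": "availability",
    "elevation of privilege": "authorization",
}

def classify(threats):
    """Group (threat name, STRIDE category) pairs by category, so gaps in
    coverage (empty categories) stand out during a design review."""
    grouped = {category: [] for category in STRIDE}
    for name, category in threats:
        grouped[category].append(name)
    return grouped
```

Empty categories in the output are as informative as populated ones: they show which classes of threat the review has not yet considered.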

Threat modeling: An ongoing process

Putting a cloud security-based threat model in place is an ongoing process. Any threat model process document should be a live document you can modify as needed. This is even more important when using cloud hosting. After all, cloud computing provides rapid elasticity, scalability, on-demand access and other features like broad network access.

A cloud computing environment may introduce more threats beyond STRIDE. Any enterprise working on securing their apps and resources needs to consider these, as well. A threat model should include a methodology that trusts CSPs in their respective areas of accountability and reflects known or modified threats.

Organizations migrating their computer systems from a more traditional on-premises network to a cloud-based model must consider the different classes of threats. Any computer network and infrastructure face different threats when data is in transit, at rest and in use. They must also consider the impacts resulting from the cloud’s unique traits.

The Cloud Security Alliance, the European Union Agency for Cybersecurity and other groups have developed formal lists of threats to the cloud. These include:

  • Data breach risks
  • Insufficient due diligence
  • Unauthorized use of instances (e.g., vCPU, vMem) to execute tasks
  • Compromised virtual machines/devices used to execute attacks against other machines
  • Distributed denial of service attacks
  • Potential vulnerabilities in CSP code/resources infrastructure environment
  • Potential problems in virtualization security (improperly implemented isolation leading to guest-hopping attacks, virtual machine sprawl or VM escape)
  • User access management
  • Data access controls in cloud environments.

Securing your cloud data

Assessing both cloud and on-premises security is a key step when moving some or all of your computing applications or network to the cloud. The network/infrastructure security team should apply threat modeling, then classify and apply mitigation approaches tailored to your unique case and needs.

In addition, you can use threat models for the cloud to help identify monitoring, logging and alerting needs in an efficient way with reduced cost. In the future, you might want to apply the threat model and add a monitoring and logging architecture that can be deployed in the existing cloud computing environment with greater security of data and resources. That’s why it’s important to make sure your IT teams thoroughly understand the security features that influence the differences between on-premises and cloud environments.


]]>