Host Security

Rodolfo Santos Flaborea
Apr 25, 2024 · 16 min read


Endpoints are machines with different roles in an organization's work. They are where users manipulate and store data and prepare it for transmission to other endpoints across the network. One possible way to divide endpoints is as follows:

  • Network appliances provide LAN functionalities;
  • Servers offer services by responding to requests;
  • Workstations provide valuable applications for users to execute their tasks;
  • Appliances are industry-standard hardware that provides specific functionalities;
  • Kiosks are closed-environment endpoints that execute specialized tasks;
  • Mobile endpoints encompass portable computational solutions, like cell phones and laptops.

Regardless of their different functions and roles, those hosts share fundamental parameters that security teams must address to maintain a properly controlled environment for working with and storing data. Such parameters involve software/application, OS, hardware, and general endpoint security solutions. The present post aims to discuss them.

Application Security

Endpoints provide the environment where practical applications are run. So, a fundamental part of host security is application security, which mainly relies on secure software development life cycle (SDLC) practices. A previous post discussed this topic in detail, but there are some points to consider:

  • Input and output validation is often the most critical aspect developers must consider;
  • Code-signing is essential to validate software for consumers properly;
  • Cloud-based applications must adopt secure cookies, i.e., cookies with the "secure" attribute enabled, which can therefore only be transmitted through encrypted channels (e.g., HTTPS); a minimal sketch follows this list.
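As a minimal sketch of what those attributes look like in practice, here is how a session cookie can be built with Python's standard http.cookies module (the cookie name and value are placeholders; a real web framework would set the same flags on its response object):

    from http.cookies import SimpleCookie

    # Build a session cookie with the attributes that make it "secure": it will
    # only travel over HTTPS and is hidden from client-side scripts.
    cookie = SimpleCookie()
    cookie["session_id"] = "placeholder-token"
    cookie["session_id"]["secure"] = True        # only send over encrypted channels
    cookie["session_id"]["httponly"] = True      # not readable by JavaScript
    cookie["session_id"]["samesite"] = "Strict"  # resist cross-site request forgery

    # Print the Set-Cookie header a web application would send to the browser.
    print(cookie.output())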

That post, however, doesn't cover additional aspects of a secure SDLC, such as code analyzers, stress testing, and a more detailed view of sandboxing.

Code Analyzers

Developers must run tests throughout the SDLC to obtain good code quality, including its security aspects. Given their laborious and time-consuming nature, manual reviews are viable only in a few cases. So, automation is the preferred methodology, especially in agile development processes like DevOps and DevSecOps. Automated tools encompass static and dynamic analyzers.

Static analyzers directly access the source code, looking for possible inconsistencies and vulnerabilities; this analysis happens outside the runtime. They may be a compiler's component, making checks early in the SDLC while reducing tool complexity and increasing reliability. Since these tools access the source code, static testing is also called white box testing.
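To make the idea concrete, here is a toy static-analysis sketch in Python using the standard ast module. The rule set is invented for illustration; real static analyzers apply far richer rules plus data-flow analysis:

    import ast

    # Walk the abstract syntax tree of some source code (no execution involved)
    # and flag calls that are frequent sources of vulnerabilities.
    RISKY_CALLS = {"eval", "exec", "system"}  # illustrative rule set only

    def scan(source: str, filename: str = "<input>") -> list[str]:
        findings = []
        for node in ast.walk(ast.parse(source, filename)):
            if isinstance(node, ast.Call):
                name = getattr(node.func, "id", getattr(node.func, "attr", ""))
                if name in RISKY_CALLS:
                    findings.append(f"{filename}:{node.lineno}: suspicious call to {name}()")
        return findings

    sample = "import os\nuser = input()\nos.system('ping ' + user)\n"
    print("\n".join(scan(sample)))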

Dynamic analyzers, on the other hand, verify how the program behaves under anomalous conditions, including malformed and random inputs, packets, and files. Such conditions are produced through a technique called fuzzing: a fuzzer's generator creates these anomalous parameters and injects them into the evaluated application while observing how it reacts (a minimal application-fuzzing sketch follows the list below). Fuzzing comes in different types, such as:

  • Application fuzzing: produces random application inputs through the CLI, the UI, and URLs;
  • Protocol fuzzing: forges malformed packets and requests and observes their effects on the program;
  • File format fuzzing: presents repeatedly modified or corrupted files, registers the application's reaction, and associates that response with the file format that caused it for further analysis.
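Below is a minimal application-fuzzing sketch in Python. The target program is hypothetical, and real fuzzers (coverage-guided ones, for instance) are far more sophisticated; this only illustrates the feed-random-input-and-watch-for-crashes loop:

    import random
    import string
    import subprocess

    # Hypothetical program under test; point this only at software you are
    # authorized to test.
    TARGET = ["python3", "parse_record.py"]

    def random_input(max_len: int = 200) -> str:
        return "".join(random.choice(string.printable)
                       for _ in range(random.randint(0, max_len)))

    for i in range(100):
        payload = random_input()
        try:
            result = subprocess.run(TARGET, input=payload, capture_output=True,
                                    text=True, timeout=5)
        except subprocess.TimeoutExpired:
            print(f"case {i}: hang on input {payload!r}")
            continue
        if result.returncode != 0:   # crash or unhandled error
            print(f"case {i}: exit={result.returncode} input={payload!r}")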

Since dynamic testing doesn’t involve direct access to source code, its other name is black box testing.

Stress Testing

Stress testing encompasses applying conditions beyond the software's operational limits. These conditions arise when specific resources are limited or depleted, such as memory or disk space, or when requests or inputs become excessive.

This type of test verifies the software's robustness and availability in situations such as a denial-of-service attack or atypically high demand for a web service (e.g., e-commerce seasonal events).
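As a rough illustration (the URL is hypothetical, and this should only be pointed at systems you own or are authorized to test), a crude stress test can be as simple as firing many concurrent requests and watching errors and latency climb:

    import concurrent.futures
    import time
    import urllib.request

    URL = "http://localhost:8080/health"   # hypothetical service under test

    def hit(_):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(URL, timeout=2) as resp:
                return resp.status, time.perf_counter() - start
        except Exception as exc:           # connection refused, timeout, etc.
            return type(exc).__name__, time.perf_counter() - start

    with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
        results = list(pool.map(hit, range(1000)))

    errors = sum(1 for status, _ in results if status != 200)
    print(f"requests: {len(results)}, errors: {errors}, "
          f"max latency: {max(t for _, t in results):.2f}s")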

Sandboxing

A previous post has already discussed sandboxing in the context of software development. Quickly reviewing: a sandbox is a computing environment isolated from the host OS, so that, in principle, processes running inside it cannot affect the host.

It's common practice for each developer on a team to work in their own sandbox so that no one overwrites code written by another team member. Given its flexibility, a sandbox may also be used in different ways, such as for demos, testing, and staging. This flexibility makes sandboxes a controlled environment for numerous testing opportunities from the very beginning of the SDLC.

OS Security

The OS is software that acts as an intermediary between applications and hardware resources. Therefore, OSs are pivotal in managing applications and hardware while providing conditions for user-system interaction. Given this central role, organizations must address and establish controls for proper OS security.

Patch Management

Like any other program, an OS is expected to have vulnerabilities. When vendors learn of them, they release patches (hotfixes, service packs, and updates) that close those gaps. Vendors usually release patches periodically; companies must have adequate patch management to implement them correctly.

Patch management doesn't only involve constantly monitoring for updates and installing them when available. Before installation, each update must have its integrity checked by comparing a hash computed by the consumer with the one published by the vendor (e.g., an MD5 or, preferably, SHA-256 hash).
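As a sketch of that integrity check (the patch filename and the published digest below are placeholders), something like this could run before an update is scheduled for installation:

    import hashlib

    PATCH_FILE = "patch-2024-04.bin"   # hypothetical downloaded update
    VENDOR_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

    def file_sha256(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
                digest.update(chunk)
        return digest.hexdigest()

    computed = file_sha256(PATCH_FILE)
    if computed == VENDOR_SHA256:
        print("Integrity check passed - safe to schedule installation.")
    else:
        print(f"MISMATCH: expected {VENDOR_SHA256}, got {computed} - do not install.")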

Automated updates are adequate for most environments. However, manual procedures and testing are fundamental for mission-critical systems like web servers, and those processes must also confirm that the updated system is running correctly.

In large corporate environments, the updating process may rely on an infrastructure similar to that used for software development, providing security assessment, testing, and deployment of an update.

Least Functionality and Closing Unnecessary Ports and Services

The National Institute of Standards and Technology (NIST) introduces the concept of least functionality in its configuration management controls (control CM-7 of NIST Special Publication 800-53). It states that each system in an organization must be configured to provide only the services necessary for its role while avoiding any non-essential service. This concept is closely related to least privilege: both limit a specific parameter (access rights for least privilege, services offered for least functionality) to reduce possible avenues of attack. Moreover, least functionality is a method of system hardening, i.e., limiting the attack vectors a malicious actor could use to access a system.

Controlling open ports is one of the most common ways to establish least functionality. Services use specific ports to communicate over the network, and an unnecessarily open port is a potential pathway for infiltration. So, after defining the machine's role within the organization, all non-essential ports must remain closed. It's a decision-making process balancing the company's needs against security demands, and administrators must be careful not to leave ports open merely for flexibility's sake.
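A quick way to audit this on a machine you administer is a simple connect scan of the well-known range; the port list below is just a sample, and every port reported open should map to a service the host's role actually requires:

    import socket

    HOST = "127.0.0.1"   # audit the local machine
    PORTS = [21, 22, 23, 25, 53, 80, 110, 143, 443, 445]

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            # connect_ex() returns 0 when the connection succeeds (port open)
            state = "open" if sock.connect_ex((HOST, port)) == 0 else "closed/filtered"
            print(f"tcp/{port:<4} {state}")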

The table of well-known ports (0–1023) is an essential resource for port management. Security+ candidates must remember these for the test.

Well-known ports for different services and their secure counterparts. This isn’t an exhaustive table but enumerates the most frequent services. Extracted from: https://www.stationx.net/common-ports-cheat-sheet/.

Secure Configurations

Much of what has been discussed regarding OS security concerns elements of secure OS configurations: patch management and system hardening. In addition, secure OS configurations also encompass proper account and access management, resilient or failure-resistant hardware, correct use of PKI key storage, and file-system-level encryption (e.g., EFS on Windows NTFS volumes).

Trusted OS

Trusted OSs are versions of OSs built with security in mind. Their features include controlling access through account segmentation, managing those accounts through least privilege, and enforcing security at the kernel level.

Hardware Security

Companies and individuals implicitly trust hardware and its manufacturers. The increasing sophistication of attacks (e.g., rootkits), though, raises doubts about this relationship, with malware able to infiltrate the lower levels of a system (e.g., the kernel and the boot sector).

This scenario has highlighted the importance of proper hardware security, especially solutions that rely on cryptoprocessors: special-purpose hardware for cryptographic operations supporting system confidentiality, integrity, and authentication. Software-based solutions are a valid alternative or complement but, as discussed below, are less reliable and perform worse than their hardware-based counterparts.

Full Disk Encryption and Self Encrypting Drives

Full Disk Encryption (FDE) (aka whole-disk encryption) encrypts the entire drive containing the OS (and any data on it), including its boot sector. The contents are decrypted after the system boots and encrypted again when it shuts down; decryption is only possible with the cryptographic key. FDE can be software- or hardware-based.

Microsoft's BitLocker is a well-known FDE implementation and may work with software-based or hardware-based key protection. With Windows 11, Microsoft made a Trusted Platform Module (TPM) a hardware requirement; TPMs are cryptoprocessors capable of generating and storing cryptographic keys, including those BitLocker uses to encrypt and decrypt the disk.

BitLocker uses the AES algorithm for its cryptographic operations. Extracted from: https://www.maketecheasier.com/set-bitlocker-encryption-aes-256/.
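Purely as a conceptual illustration of the AES encrypt/decrypt cycle (this is not how BitLocker itself is implemented, and it assumes the third-party cryptography package), the sketch below protects a blob of data with an AES key; in real FDE, the equivalent key never leaves the TPM or the drive:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

    key = AESGCM.generate_key(bit_length=256)   # in FDE this key is protected by the TPM/drive
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)
    secret = b"contents of payroll.xlsx ..."
    ciphertext = aesgcm.encrypt(nonce, secret, None)

    # Without the key, the ciphertext is unreadable; with it, the original
    # data is restored exactly.
    assert aesgcm.decrypt(nonce, ciphertext, None) == secret
    print(f"plaintext {len(secret)} bytes -> ciphertext {len(ciphertext)} bytes")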

Self-encrypting drives (SEDs) follow the same FDE principle of total encryption, implemented in the drive's own hardware. The Trusted Computing Group's Opal standard specifies how SEDs' cryptographic storage works: all stored data is automatically encrypted, so it cannot be accessed without authenticating to the drive.

Windows also provides more fine-grained encryption with the Encrypting File System (EFS). Through this functionality, users may encrypt individual files, directories, or even the contents of a whole volume. Active Directory (AD) Group Policies may be used to implement EFS.

Trusted Platform Module and Hardware Security Module

As mentioned above, TPMs are cryptoprocessors embedded in the motherboard that provide quick and reliable cryptographic functions. These include key generation, storage, and sharing; pseudo-random number generation; calculation, collection, and storage of system integrity measurements (hashes); and boot and system attestation (platform authentication).

To understand TPMs, one must grasp some key concepts and TPMs’ components:

  • Endorsement Key (EK): a 2048-bit asymmetric (RSA) key pair generated during the chip's manufacturing. Its central purpose is to identify the TPM to which it belongs, so the EK is immutable. Moreover, the EK's public key is contained within the TPM's certificate, which is signed by the manufacturer. Because it anchors the TPM's authenticity and is tied to the device's identity, the EK cannot be used for signing; otherwise, it could create privacy risks;
  • Storage Root Key (SRK): a 2048-bit key created to protect all the keys that encrypt the TPM's stored data. Generating a key and encrypting it under the SRK is called wrapping; the TPM only decrypts wrapped keys while they are in use. Key use can also be sealed, i.e., bound to the TPM and to the OS in a given state, defined by specific system integrity measurements: use is only allowed if the current system hashes match those taken when the key was created;
  • Attestation Identity Key (AIK): an RSA key the TPM generates to sign the system's integrity measurements and provide them to an attestation challenger. The AIK may be backed by an AIK certificate (an attestation certificate authority validates the TPM's authenticity by checking the EK certificate and issues the AIK certificate) or tied to the endorsement hierarchy (the AIK is wrapped with the EK; if the TPM is legitimate, it can decrypt the AIK and use it to sign data);
  • Sealed storage: TPMs store data and grant access to it only when the system is in a given state, similar to the process for sealed keys, using integrity measurements and user authentication;
  • Attestation: a system's measurements are stored in the TPM's Platform Configuration Registers (PCRs). When a remote challenger requests the system's authentication, the TPM provides those measurements signed with its AIK.

A TPM chip. Extracted from: https://bristeeritech.com/it-security-blog/what-is-a-tpm-chip/.
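A conceptual sketch of sealing follows; it is not a real TPM API. Measurements are folded into a register with an extend operation, and a secret (e.g., a disk-encryption key) is released only if the current register value matches the one recorded at sealing time:

    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # PCR extend: new value = H(old value || H(measurement))
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    def measure_boot(components: list[bytes]) -> bytes:
        pcr = b"\x00" * 32                    # register starts zeroed at power-on
        for component in components:
            pcr = extend(pcr, component)
        return pcr

    good_boot = [b"firmware v1.2", b"bootloader v3", b"kernel 6.1"]
    sealed_pcr = measure_boot(good_boot)      # value recorded when the key was sealed

    # A later boot with a tampered bootloader yields a different register value,
    # so a real TPM would refuse to unseal the disk-encryption key.
    tampered = [b"firmware v1.2", b"bootloader v3 (modified)", b"kernel 6.1"]
    print("unseal allowed:", measure_boot(tampered) == sealed_pcr)   # False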

Hardware Security Modules (HSMs), like TPMs, are cryptoprocessors, but they are not bound to a motherboard. Beyond that mobility, HSMs are mainly used for portable key storage and for cryptographic acceleration in large-scale corporate environments.

An HSM. Extracted from: https://www.sefira.cz/en/hsm-hardware-security-module/.

TPMs and HSMs generally provide cryptographic functionality with performance, scalability, and reliability almost on par with hardware that performs no cryptography at all, along with faster setup and better portability. Authentication happens during power-up, either through a pre-boot program or with a BIOS password; in the latter case, the credentials are used as the key that decrypts the drive-encryption key. Following this logic, the most effective use of FDE solutions like BitLocker pairs them with embedded disk-encryption hardware like TPMs.

Additionally, both technologies can protect keys better than software-based solutions since they are tamper-resistant, block direct key access, and work at a very low computational level, which makes malware infiltration harder.

On the other hand, the IT and cybersecurity communities point out possible issues with these hardware-based solutions:

  • Manufacturers and vendors have excessive power to restrict how their software and hardware are used. Interoperability measures and standards are fundamental so that solutions from different vendors can work together;
  • The only entities with direct access to the EK are the TPM and its manufacturer. Since the EK is tied to a system and its user's identity, this raises significant privacy concerns;
  • Supply-chain attacks against TPM and HSM manufacturing may compromise their keys, with a significant security impact on every system that uses them. Moreover, key management in general is a potential single point of failure: losing the keys means losing an entire system and its data.

Boot Integrity

The boot integrity process generally consists of a sequence of checks that begins with a trusted ROM bootloader. It checks the validity of the next bootloader, which then validates the following OS components. This sequential process, called a chain of trust, continues until the whole OS is loaded. Secure Boot, a UEFI feature that Microsoft requires for Windows, is an example of boot integrity: it only boots software trusted by the manufacturer.
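Conceptually, the chain of trust looks something like the toy model below. Real Secure Boot verifies vendor signatures with public keys rather than fixed hashes, and the stage names and images here are invented:

    import hashlib

    # Each stage carries the expected digest of the next stage and refuses to
    # hand over control if the image found at boot no longer matches it.
    boot_chain = [
        ("UEFI bootloader", b"bootloader-image-v3", hashlib.sha256(b"bootloader-image-v3").hexdigest()),
        ("OS loader",       b"os-loader-image-v10", hashlib.sha256(b"os-loader-image-v10").hexdigest()),
        ("Kernel",          b"kernel-image-v10",    hashlib.sha256(b"kernel-image-v10").hexdigest()),
    ]

    for name, image, expected in boot_chain:
        if hashlib.sha256(image).hexdigest() != expected:
            print(f"{name}: integrity check FAILED - halting boot")
            break
        print(f"{name}: verified, handing off to the next stage")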

With Windows 8, Microsoft also introduced Measured Boot, which records integrity measurements of each system component for more detailed validation. A previous post discussed the Core Root of Trust for Measurement (CRTM) as the starting point for collecting system measurements and transmitting them to the PCRs, ensuring boot integrity and providing data for boot and system attestation; Measured Boot relies on the CRTM for these checks.

Boot Attestation

Boot attestation is a process initiated by a challenger entity to validate a system's authenticity through its integrity measurements. As described above and in a former post, the TPM discloses information such as AIK-signed PCR measurements, and the challenger then decides whether to trust the target system.

Hardware Root of Trust

FDE, SEDs, TPMs, and HSMs are all hardware security technologies that rely heavily on cryptoprocessor robustness and performance. At the core of these solutions is the hardware root of trust, the central security feature that makes them more reliable than their software-based counterparts: tamper resistance and isolation of keys from direct access by the OS, other users, and potential attackers.

It's crucial to note, though, that supply-chain attacks don't only compromise the secured hardware's keys. They may also compromise the cryptoprocessor's root of trust, invalidating the entire chain of trust established by the manufacturer and used for boot integrity and platform attestation.

General Endpoint Security

Endpoint security solutions consist of hardware or software-based technologies that aim to establish an endpoint’s central security aspects (confidentiality, integrity, and availability).

Host-Based Firewalls, HIDS, and HIPS

Host-based firewalls (aka software firewalls) follow the same principles as network-based ones: they monitor and control connections between the endpoint and the network. They can be stateful, i.e., track the state of each connection individually, or stateless, meaning they filter traffic according to pre-determined rules.
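The stateless, rule-based filtering just described can be pictured with a toy model like this (the rules are illustrative, not how a production firewall engine is built):

    # Each packet is checked against an ordered rule list; the first match wins,
    # and a default-deny rule sits at the end.
    RULES = [
        # (action,  protocol, destination port)
        ("allow", "tcp", 443),   # HTTPS
        ("allow", "tcp", 53),    # DNS over TCP
        ("allow", "udp", 53),    # DNS
        ("deny",  "any", None),  # default deny everything else
    ]

    def filter_packet(protocol: str, dst_port: int) -> str:
        for action, rule_proto, rule_port in RULES:
            if rule_proto in ("any", protocol) and rule_port in (None, dst_port):
                return action
        return "deny"

    print(filter_packet("tcp", 443))   # allow
    print(filter_packet("tcp", 23))    # deny (telnet not permitted)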

Some fundamental differences, though, highlight the limitations of a host-based firewall. First, given its location, a host-based firewall's inspection may seem disruptive to users, who may in turn deactivate it for convenience. The problem with this behavior is that malicious connections the user doesn't recognize may then pass through unnoticed, opening possible attack vectors.

Secondly, this type of firewall often checks only inbound connections by default. This limitation is especially problematic when dealing with malware that communicates with a command-and-control (aka C2) server to report status, exfiltrate information, and request commands.

Host-based Intrusion Detection Systems (HIDS) monitor system data, looking for signs of a possible intrusion. They may collect and consolidate information such as system logs, port accesses, and program commands while calculating and storing hashes from different parts of the system, looking for inconsistencies that may indicate an attack. An example of this last function is the integrity-check tool of Linux's Advanced Intrusion Detection Environment (AIDE), which periodically collects system hashes and compares them with previous ones.
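A minimal AIDE-style check, for illustration only (the watched directory and baseline filename are arbitrary choices, and AIDE itself tracks far more attributes than content hashes):

    import hashlib
    import json
    import pathlib

    WATCHED_DIR = pathlib.Path("/etc")        # may need root; unreadable files are skipped
    BASELINE = pathlib.Path("baseline.json")

    def snapshot(root: pathlib.Path) -> dict[str, str]:
        hashes = {}
        for path in sorted(root.rglob("*")):
            if path.is_file():
                try:
                    hashes[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
                except OSError:
                    continue
        return hashes

    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(snapshot(WATCHED_DIR)))
        print("Baseline created on the freshly installed system.")
    else:
        old, new = json.loads(BASELINE.read_text()), snapshot(WATCHED_DIR)
        for path in sorted(set(old) | set(new)):
            if old.get(path) != new.get(path):
                print(f"changed, added, or removed: {path}")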

HIDS may come as single applications or clients that relay system information to an IDS server.

Host-based Intrusion Prevention Systems (HIPS) are similar to HIDS but can block a detected attack instead of merely reporting it.

It's important to note that both HIDS and HIPS may produce false positives. Since they use integrity measurements as attack indicators, any change in the monitored hashes (e.g., registry keys) may be flagged as malicious, including changes caused by normal updates. Their proximity to the protected system therefore provides detailed information while also being a source of false positives.

It's good practice to run HIDS and HIPS integrity checks when a system is first installed. This generates a clean baseline of hashes that the HIDS/HIPS can compare with future hashes once the system starts interacting with its environment, including the Internet.

Finally, as with host-based firewalls, HIDS and HIPS effectiveness depends on the user's posture: users may disable them should they prove inconvenient. User education is paramount.

Antimalware et al.

This topic includes different countermeasures against malware, such as antivirus, antimalware, antispyware, and pop-up blockers.

Antivirus is a program dedicated to detecting and blocking viruses. It scans the essential parts of a system (memory, boot sector, hard drive, and removable media), looking for byte sequences (signatures) that match viruses already discovered. The effectiveness of an antivirus program relies heavily on periodic updates to the vendor's signature database.
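The signature-matching idea can be sketched as follows; the signatures and sample contents are invented for illustration, and real engines ship databases of millions of signatures plus the heuristics discussed next:

    # Toy signature scanner, not a real antivirus engine.
    SIGNATURES = {
        "Demo.Trojan.A": bytes.fromhex("deadbeef4b1d"),   # hypothetical byte pattern
        "Demo.Worm.B":   b"__inject_payload__",           # hypothetical byte pattern
    }

    def scan(data: bytes) -> list[str]:
        return [name for name, pattern in SIGNATURES.items() if pattern in data]

    sample = b"MZ\x90\x00 ... __inject_payload__ ..."     # pretend executable contents
    hits = scan(sample)
    print("Detections:", hits if hits else "none (or an unknown/polymorphic variant)")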

Unknown viruses are a significant gap for signature-based antivirus since, by definition, they aren't in any database yet. A similar issue arises with polymorphic viruses, a subtype that changes its code to adapt and conceal itself. An alternative is to incorporate heuristic detection into the antivirus: much like static and dynamic code analyzers, it looks for suspicious instructions that match known malicious patterns or tests the program's behavior in a controlled environment.

Aside from antivirus, there are more focused programs worth noting:

  • Antispyware programs execute system scans, looking for spyware signatures and preventing their action.
  • Pop-up blockers, as the name indicates, focus on preventing the constant opening of pop-up windows, especially ads in web browsers. While this is convenient, it can disrupt legitimate program installers that rely on pop-ups.
  • Finally, antispam programs, which may be part of a broader antivirus solution, are commonly installed on email servers and clients. They implement heuristic filtering, i.e., they scan incoming emails and decide whether they are spam through a pre-defined rule set typically based on words and their frequencies in the email text (a naive sketch of such scoring follows this list). Fine-tuning an antispam filter is fundamental to avoid false positives and may include an allow list of authorized addresses (such as family).
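A naive sketch of that kind of rule-based scoring (the word weights, threshold, and addresses are invented for illustration):

    SPAM_WEIGHTS = {"winner": 3, "free": 2, "prize": 3, "urgent": 2, "click": 1}
    ALLOW_LIST = {"mom@example.com"}   # trusted senders are never filtered
    THRESHOLD = 5

    def classify(sender: str, body: str) -> str:
        if sender.lower() in ALLOW_LIST:
            return "inbox (allow-listed sender)"
        score = sum(SPAM_WEIGHTS.get(word, 0) for word in body.lower().split())
        return "spam" if score >= THRESHOLD else "inbox"

    print(classify("promo@deals.example", "URGENT winner! Click for your FREE prize"))  # spam
    print(classify("mom@example.com", "Free for dinner tonight?"))                      # inbox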

With malware sophistication increasing, a more holistic approach became necessary. Antimalware programs implement this concept with broader functionality and scope: they aim to prevent an attack and to provide protection and mitigation during and after it, collecting and analyzing context-based and behavior-based data (e.g., IoC/big-data analysis and sandbox testing). Due to this complexity, antimalware is often delivered as a complex enterprise solution.

Cisco’s Advanced Malware Protection (AMP) is an excellent example of an antimalware solution. Extracted from: https://www.pxosys.com/advanced-malware-protection-with-cisco-amp-everywhere/.

Applications Allow and Deny Lists

A system may limit which applications can run on it. The implementation can take two forms: allow lists or block/deny lists. The former pre-determines which applications are permitted and blocks every program not on the list. The latter explicitly lists unauthorized applications; any program outside this list may run.

From a cybersecurity standpoint, allow lists, while more restrictive and potentially less convenient, are sounder controls than deny lists, which may inadvertently let a malicious or vulnerable application run.
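A sketch of allow-list enforcement, not a real application-control product; the approved digest below is a placeholder (it happens to be the SHA-256 of empty data):

    import hashlib

    # A program may launch only if the SHA-256 digest of its executable appears
    # in the approved set. A deny list would invert the logic: anything not
    # explicitly listed would be allowed to run.
    APPROVED_HASHES = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def may_execute(executable_bytes: bytes) -> bool:
        return hashlib.sha256(executable_bytes).hexdigest() in APPROVED_HASHES

    print(may_execute(b""))                      # True: digest is on the allow list
    print(may_execute(b"unknown-or-tampered"))   # False: blocked by default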

Data Execution Prevention

Data Execution Prevention (DEP) is a technology that prevents malicious or unauthorized code from executing on a system. It may be hardware- or software-based, each type with a different approach:

  • Hardware-based DEP marks memory regions as non-executable so that code cannot run from them;
  • Software-based DEP, on the other hand, raises an exception whenever malicious code attempts to run; this approach is particularly aimed at programs that exploit weaknesses in exception handling.

Both types of DEP may work together. Software-based DEP can also operate without its hardware-based counterpart, although it is more limited (e.g., legitimate old code may trigger it).

Data Loss Prevention

Data Loss Prevention (DLP) software scans documents in a system, preventing sensitive data exfiltration, intentional or not. It also monitors and establishes rules for user-data interaction. As an endpoint solution, DLP mainly targets data in use, while data in motion requires network solutions, and data at rest demands storage solutions.

Ideally, proper DLP solutions can scan any file regardless of bandwidth and traffic volume (i.e., they are scalable) and perform those activities with as little lag as possible, protecting while preserving the user’s convenience (i.e., they have high performance).

Generally, large enterprise DLP technologies work with central management and control servers that efficiently enforce DLP policies on multiple systems simultaneously.

DLP programs focus especially on email and thumb drives, often the primary avenues through which data leaks. They can block users from sending sensitive documents by email or copying them to removable media, or enforce encryption of data sent through those channels.
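A toy sketch of that kind of content inspection (the patterns are simplified examples; commercial DLP relies on much richer detection, including document fingerprinting):

    import re

    # Before an email or a copy to removable media is allowed, scan the outgoing
    # text for patterns that look like sensitive data.
    PATTERNS = {
        "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "US SSN":              re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "internal label":      re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    }

    def check_outgoing(text: str) -> list[str]:
        return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

    draft = "Hi, card 4111 1111 1111 1111 should cover it. (CONFIDENTIAL)"
    hits = check_outgoing(draft)
    print("block transfer:" if hits else "allow:", hits)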

Removable Media Controls

Removable media can be both a data exfiltration path and an attack vector for infecting a system. One might consider entirely prohibiting thumb drives; nevertheless, their convenience has made them widely adopted in numerous industries, from medicine (patient data) to aviation (pilots uploading flight plans to aircraft computers).

Consequently, together with DLP solutions, a system may implement scans that validate the trustworthiness of a connected thumb drive, granting access only on a positive result. This can be enforced by an antivirus solution and through AD Group Policy Objects (GPOs).

Endpoint Detection and Response

Today, there is demand for programs that integrate different security solutions into one package, reducing complexity and facilitating implementation and management. Endpoint Detection and Response (EDR) fulfills this demand with a layered approach: by aggregating multiple solutions, it seeks to prevent, respond to, and recover from security incidents:

  • FDE
  • DLP
  • Antimalware and antispyware
  • Allow and deny lists
  • Firewalls, HIDS, and HIPS
  • Forensics

Web Application Firewall

Web Application Firewalls (WAFs) are deployed in front of web applications and APIs on servers, preventing attacks such as SQL injection and XSS. Such protection is possible because WAFs perform deep packet inspection, i.e., they examine packet contents up through the application layer, inspecting the information in requests and responses. WAFs may be signature-based or behavior-based.
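As a toy illustration of signature-based WAF inspection (the rules below are simplified and would never be sufficient in production):

    import re

    # Inspect the content of an HTTP request for patterns typical of SQL
    # injection and XSS before it reaches the web application.
    RULES = {
        "SQL injection": re.compile(r"('|%27)\s*(or|and)\s+.*(=|like)|union\s+select",
                                    re.IGNORECASE),
        "XSS":           re.compile(r"<\s*script|javascript\s*:", re.IGNORECASE),
    }

    def inspect(query_string: str) -> list[str]:
        return [name for name, rule in RULES.items() if rule.search(query_string)]

    print(inspect("id=42"))                                        # []
    print(inspect("id=1' OR '1'='1"))                              # ['SQL injection']
    print(inspect("q=<script>alert(document.cookie)</script>"))    # ['XSS']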
