Sharks in the Moat


by Phil Martin


  The principle of the weakest link is not far removed from the game show of the same name. It states that the weakest component will dictate the resiliency of your software to an attack. It does no good to strengthen an already strong component if a weaker component is not first addressed. Examples of a component might be code, a service or an interface.

  This principle embodies the well-known statement that ‘a chain is only as strong as its weakest link’.

  With any system, the weakest link is most often represented by a single point of failure. However, in some systems the weakest link is actually composed of multiple weak components that combine to create an even weaker point. The best way to address this principle is to implement a defense in depth approach.

  However, when selecting the weakest link to improve upon, we must apply some common sense. Let’s suppose a thief is considering robbing either a bank or a convenience store. Which one would he go after? The bank has a lot more money but will also have a greater level of security. The convenience store will have less money, but it is a lot easier to escape from without being chased. The thief ultimately decides to go after the weaker target in spite of the smaller payout.

  Likewise, we must think like an attacker when choosing which weakest link control to strengthen. Our HR system will contain a lot of juicy information that allows an attacker to steal identities, but access will almost certainly be restricted to the intranet. A publicly accessible site, on the other hand, can be hacked across the Internet, but the treasure it holds is less likely to be of great value. The attacker will probably choose the public site, however, and that is where we should most likely focus our energies.

  Chapter 29: Leveraging Existing Components

  The leveraging existing components principle encourages reuse as a way to keep the attack surface of an application to a minimum and to avoid the unnecessary risk of introducing new vulnerabilities. Unfortunately, it can often be at odds with the least common mechanism principle, which encourages separation of functions to reduce cross-privilege access.

  By far the best example of this principle is when implementing a service-oriented architecture design, or SOA design. SOA services are designed from the ground up to be reusable by remaining loosely coupled, highly cohesive and implementing the least common mechanism principle. It is interesting to note that by ensuring that cross-privilege access is not a concern, the services become even more reusable, bringing two apparently competing principles into alignment with one another.

  We’ve already gone over the danger of trying to implement home-grown encryption algorithms. It is always better to select an open design that has been vetted multiple times by many smart people. Enough said.

  This principle also points to the need for a scalable architecture that is implemented in layers. For example, breaking an overall solution into presentation, business and data access tiers allows for both logical and physical scaling. Additionally, implementing a data access layer, or DAL, that is invoked for all data access is a great way to enforce the leverage of an existing component. By following this principle, we can achieve three things:

  1) The attack surface stays minimized because we are not introducing new code.

  2) New vulnerabilities are not being introduced since no new code is being written.

  3) Productivity is increased because developers are not spending time reinventing the wheel.
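  The DAL idea above can be sketched in a few lines of code. This is a minimal illustration only; the class and method names are hypothetical, not taken from any particular framework.

```python
# Sketch of a shared data access layer (DAL). All tiers reuse this one
# vetted component instead of writing their own data-access code, so no
# new code (and no new vulnerabilities) is introduced for data access.

class DataAccessLayer:
    """Single, reusable component through which ALL data access flows."""

    def __init__(self):
        self._store = {}  # stand-in for a real database connection

    def save(self, key, value):
        # One place to enforce validation, logging, and access checks.
        if not isinstance(key, str) or not key:
            raise ValueError("key must be a non-empty string")
        self._store[key] = value

    def load(self, key):
        return self._store.get(key)

# Both the presentation and business tiers call the same DAL instance.
dal = DataAccessLayer()
dal.save("user:42", {"name": "Alice"})
print(dal.load("user:42"))
```

  Because every read and write funnels through one component, controls such as input validation or audit logging only need to be implemented (and reviewed) once.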

  Chapter 30: The Attack Surface

  The attack surface of a software system represents the number of points which could expose potential vulnerabilities. As software complexity increases, the attack surface by definition also increases. However, it is possible to decrease the attack surface without decreasing complexity by minimizing the functionality an attacker has access to. In other words, an incredibly complex system with a single API exposed to the public Internet has a very small attack surface. The complexity of the system behind that lone API still increases the attack surface, but because it is well-abstracted away from public access, it is a much smaller increase than if the system were exposed through a large number of APIs.

  The attack surface is represented by the number of entry and exit points that might be vulnerable to an attack. During the requirements phase, misuse cases and a subject-object matrix are used to determine these points. The output of an attack surface evaluation is a list of features that an attacker may try to exploit. This list is prioritized based on the impact severity, and potential controls are identified. For example, the following is a list of features that might need to be examined:

  Open ports

  Service end points

  Open sockets

  Active web pages

  Access control lists

  Symbolic links
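  Prioritizing such a feature list by impact severity can be sketched very simply. The feature names and severity scores below are made-up examples, not values from the book.

```python
# Illustrative sketch: sorting attack-surface features by impact
# severity so the highest-impact items are addressed first.

features = [
    {"feature": "open port 23 (telnet)",  "severity": 9},
    {"feature": "service endpoint /api",  "severity": 6},
    {"feature": "symbolic link /tmp/app", "severity": 4},
    {"feature": "active web page /admin", "severity": 8},
]

# Highest-impact features come first; controls are identified for each.
prioritized = sorted(features, key=lambda f: f["severity"], reverse=True)
for f in prioritized:
    print(f["severity"], f["feature"])
```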

  A relative attack surface quotient, or RASQ, describes the relative ‘attack ability’ of a given software or system against a baseline. RASQ, developed by Microsoft in the early 2000s, produces a unitless number that is calculated by assigning a severity value to each opportunity for attack and summing all of the severities. In this way we focus on improving the actual security of a product instead of simply trying to battle code-level bugs or system vulnerabilities.

  Each attack point is assigned a value based on its relative severity, called an attack bias. Related attack points are gathered together in groups called a root vector, with the sum of all attack points representing the attack surface value.
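  The calculation just described can be sketched as follows. The root vector names and attack bias values are purely hypothetical; Microsoft's published RASQ work defines its own vectors and biases.

```python
# Hypothetical RASQ-style calculation: each attack point carries an
# attack bias (its relative severity), related points are grouped into
# root vectors, and the unitless attack surface value is the total sum.

root_vectors = {
    "open sockets":           [1.0, 1.0, 1.0],  # one bias per socket
    "weak ACLs":              [0.9, 0.9],
    "enabled guest accounts": [0.7],
}

attack_surface = sum(bias for vector in root_vectors.values()
                     for bias in vector)
print(round(attack_surface, 2))  # compare against a prior release's value
```

  The number on its own means little; its value comes from comparing it against the same calculation run on a previous release of the same product.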

  A RASQ calculation has three primary components – what target an attacker would like to reach, what communication channels and protocols an attacker will use to reach the target, and what access rights are associated with the target.

  First, let’s discuss the target. Any system will have some juicy targets, along with various components that an attacker can leverage to reach the target – these components are called enablers. Now, a target can be either a process or data. Processes are active blocks of code currently running in memory, while data is represented by static resources such as files, directories or registry keys. If an attacker is targeting a process, he wants to either interrupt or take over that process. If the target is data, the attacker will attempt to destroy, alter or steal the target. An attacker could leverage a process as an enabler to target a file. For example, let’s suppose there is an encrypted file that I would love to steal, but I don’t have the encryption key, so stealing the file is useless. However, I know about a process that can decrypt the file and read its contents. If I can take over that process, then I could use it to read the protected file. In this case the process is an enabler, and the target is the content of the encrypted data file.

  In this example, I would need to somehow communicate with the process. There are two aspects of communication I would need to consider – the channel and the protocol. The channel controls how I am able to send commands to the process and can be of two types – message passing or shared memory. If I were to communicate with the enabler process over the network using a TCP socket, then I would be using a message passing channel. On the other hand, I could choose to open a text file, place some commands into the file and close it, knowing that the enabler process will execute whatever commands I placed into that file. This is an example of using a shared memory channel to communicate with the enabler process.

  For either one of those channels to work, though, I would need to have intimate knowledge of the protocol the enabler process uses. If I choose to use the shared memory channel, I would need to know the format of the file, the possible commands I could insert, and the sequence of data the process expects to find inside of the file. If I choose instead to use a message passing channel, then I will still need to know the protocol, such as the order of bits and bytes and possible values the process expects to receive.
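  The shared memory channel described above can be illustrated with a toy sketch. The "protocol" here – one command per line, with DECRYPT as the only recognized command – is entirely made up for illustration.

```python
# Toy contrast of channel vs. protocol: the channel is HOW commands
# reach the enabler process (here, a shared file-like stream); the
# protocol is WHAT format and commands the process expects to find.

import io

def run_commands(stream):
    """Enabler process: reads commands using its expected protocol."""
    results = []
    for line in stream:
        cmd = line.strip()
        if cmd == "DECRYPT":           # the only command in our protocol
            results.append("secret contents")
    return results

# Shared-memory style channel: the attacker places commands where the
# enabler process will read them, using the protocol it expects.
shared_file = io.StringIO("DECRYPT\n")
print(run_commands(shared_file))
```

  A message passing channel would differ only in transport – the same protocol knowledge would be needed, but the commands would arrive over something like a TCP socket instead of a shared file.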

  So, we’ve covered two of the three components needed to calculate a RASQ value – what the attacker wishes to reach, and the channels and protocols an attacker could use. Now we need to discuss the final component, which is the access rights associated with each resource target or enabler. The greater the access rights required, the more protected a resource is. The more protected a resource is, the harder it is for an attacker to reach it.

  From these three components, a RASQ value is computed. But what can we do with this value? The RASQ value says nothing about how secure a system is compared to other systems or any kind of standard benchmark. Its value is found only when comparing it to previously computed values for the exact same system, software or product. RASQ can only tell us if we are increasing or decreasing the attack surface for a specific software package. But make no mistake – this is pretty huge. The goal is to decrease the surface area with each progressive release, and RASQ is really the only reliable mechanism we have to tell us if we are being successful.

  The information presented in this book is not enough for you to run out and start using RASQ – that would require a significant amount of time and is beyond the scope of what we want to accomplish in here. But there will come a time when you will find the need to be able to calculate relative attack surface areas, and at that time you will be ready to dive head-first into RASQ.

  Chapter 31: OWASP

  The Open Web Application Security Project, or OWASP, is a global organization that is focused on application security, especially web security. While it does not focus on any one technology, it does provide two different types of projects – development and documentation. The development projects are geared to providing tools while documentation projects offer guidance on security best practices for applications.

  One of the most popular OWASP publications is the OWASP Top 10, which lists the top 10 security risks along with the appropriate protection mechanisms for each. The current list is shown in Figure 35. We’re going to quickly cover the most popular and helpful guides that OWASP publishes.

  Figure 35: OWASP Top 10 Web Application Security Risks

  The OWASP Development Guide provides end-to-end guidance on designing, developing and deploying secure web applications and web services. Targeted for architects, developers, consultants and auditors, it covers the various security controls that can be built into software.

  The OWASP Code Review Guide, with a target audience of architects and developers, shows how to detect web application vulnerabilities in code, and what safeguards should be used to address each. The guide requires that each reviewer be familiar with the following four components:

  Code, or the programming language.

  Context, or a working knowledge of the software.

  Audience, or familiarity with the end-users.

  Importance, or the impact to the business if the software is not available.

  This guide is crucial to efficiency, as the cost of conducting code reviews is much less than the cost of having to address issues after testing has discovered them.

  The OWASP Testing Guide is, not surprisingly, focused on providing the procedures and tools necessary to validate software assurance. The target audiences are developers, testers and security specialists.

  There are other projects still in-progress that are worth mentioning and that you should keep an eye on:

  Application Security Desk Reference, or ASDR

  Enterprise Security Application Programming Interface, or ESAPI

  Software Assurance Maturity Model, or SAMM

  Chapter 32: Controls

  A control can be best described as any process or technology that mitigates a risk. There are five types of controls that we will cover – detective, preventive, deterrent, corrective, and compensating.

  A preventive control stops attempts to violate a security policy; examples include access control, encryption and authentication.

  A detective control warns us of attempted or successful violations of a security policy; examples include an audit trail, an intrusion detection method and the use of checksums.

  A corrective control remediates or reverses an impact after it has been felt. An example is a backup restoration process, which will recover a system that has been so damaged it is no longer usable in its current state.

  A compensating control makes up for a weakness in another control.

  A deterrent control provides warnings that can deter a potential compromise. Examples might be a warning sign that cameras are monitoring the premises, or login warning banners.

  Let’s use an example to cover all five of the control categories. Let’s say we need to protect a system from evil hackers coming in over the Internet. We put a firewall in place as a preventive control to try and stop unwanted traffic from getting into our network. We let the user know during login that we are recording their activity as a deterrent control to keep them from misbehaving. On the network we have an intrusion detection system, or IDS, that will act as a detective control by looking for hackers trying to carry out a brute-force login attack against the system. In case we don’t catch the attacker and they compromise a system, we use a backup and restore process as a corrective control to bring the system back to a usable state. And finally, we add session timeouts as a compensating control so that if credentials are compromised, the damage is limited to 20 minutes.
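  The session-timeout compensating control from this example can be sketched in a few lines. The 20-minute limit comes from the example above; the function name and time values are illustrative.

```python
# Minimal sketch of a session-timeout compensating control: if stolen
# credentials are used, the window of damage is bounded by the timeout.

from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=20)

def session_is_valid(last_activity, now):
    """Expire the session once it has been idle longer than the timeout."""
    return now - last_activity <= SESSION_TIMEOUT

start = datetime(2024, 1, 1, 12, 0)
print(session_is_valid(start, now=start + timedelta(minutes=5)))   # True
print(session_is_valid(start, now=start + timedelta(minutes=30)))  # False
```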

  Figure 36 shows the relationships between controls and their effects.

  Figure 36: Control Types and Effect

  PCI DSS has some special requirements when it comes to security controls and requires the following for any compensating control:

  It must fulfill the original requirement.

  It must provide a similar level of defense as the original control.

  It must be a part of defense-in-depth and not compete with other controls.

  It must mitigate the same level of additional risk encountered by not using the original control.

  Figure 37 shows how PCI DSS expects compensating controls to be documented. Beyond understanding the various types of controls and when to use each, there are 5 additional activities you should grasp in order to ensure your software remains secure. These include monitoring, incident management, problem management, change management, and the backup/recovery/archiving function.

  The worksheet pairs each piece of required information with an explanation of what must be provided:

  Constraints – List constraints precluding compliance with the original requirements.

  Objective – Define the objective of the original control; identify the objective met by the compensating control.

  Identified Risk – Identify any additional risk posed by the lack of the original control.

  Definition of Compensating Controls – Define the compensating controls and explain how they address the objectives of the original control and the increased risk, if any.

  Validation of Compensating Controls – Define how the compensating controls were validated and tested.

  Maintenance – Define processes and controls in place to maintain compensating controls.

  Figure 37: PCI DSS Compensating Controls Worksheet

  Chapter 33: Open Systems Interconnection Reference Model

  In the 1980s, when the concept of a global network was still taking shape, there were many competing networking standards. ISO attempted to consolidate all the standards by creating a single protocol set to replace them. Unfortunately, it never quite caught on. But the model of this protocol set, the Open Systems Interconnection model, or OSI, was adopted by the entire industry and is still used today to describe how network communication takes place. Figure 38 lists the 7 layers and associated protocols.

  Protocol

  A network protocol is a set of rules that systems use to communicate across a network. Network communication models are vertically stacked layers, and each layer has its own unique protocol that no other layer understands. Here’s what happens: you start at the top layer and give it some data. That layer wraps the data with its protocol – just think of the protocol as being a wrapper that encapsulates the data – and hands the protocol-wrapped data to the next layer beneath the first. The next layer doesn’t understand what was passed to it – it is just data. So, it wraps that ‘data’ up into its own protocol and passes it to the layer beneath it. And so forth, until we get to the bottom of the stack. The bottom layer knows how to ship the entire package off to some other computer, where the entire process is reversed until we wind up with the data originally given to the top layer. This is a severe simplification, but at a 10,000-foot level it works.
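  This wrap-and-unwrap process can be illustrated with a toy sketch. Real protocols add binary headers rather than text tags, and only four layer names are shown here for brevity.

```python
# Toy illustration of protocol encapsulation: each layer wraps the data
# it receives, and the receiving stack unwraps in the reverse order.

layers = ["application", "transport", "network", "link"]

def send(data):
    for layer in layers:              # top of the stack downward
        data = f"<{layer}>{data}</{layer}>"
    return data                       # what actually goes on the wire

def receive(frame):
    for layer in reversed(layers):    # bottom of the stack upward
        prefix, suffix = f"<{layer}>", f"</{layer}>"
        assert frame.startswith(prefix) and frame.endswith(suffix)
        frame = frame[len(prefix):-len(suffix)]
    return frame

wire = send("hello")
print(receive(wire))  # the original data is recovered unchanged
```

  Each layer only ever sees an opaque payload from the layer above it, which is exactly why the stacked-wrapper picture holds up.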

  Application Layer

  Layer 7 – the Application layer – is the top layer that an actual application talks to. An ‘actual application’ might be a browser, a Windows application, a web server, a smartphone app – anything that needs to send data across a network. Layer 7 accepts a chunk of data and wraps it in a high-level networking protocol such as:

  LPD (Line Printer Daemon)

 
