Sharks in the Moat

by Phil Martin

Log management

  Audit trails

  Interconnectivity

  Encoding

  Hashing

  Encryption

  Load balancing, replication or failover approaches

  Secure configuration

  The integrity of the development environment can be assured by using code repositories, access control, and version control.

  A disclaimer is a legal maneuver to disallow holding an entity liable for some condition or action that happens at a future time. For example, when you buy a used car, it is often advertised ‘as-is’, and when you purchase it the ‘as-is’ clause prevents you from holding the seller responsible if it turns out to be a lemon. A disclaimer will almost always protect the seller. When it comes to the acquirer/supplier relationship, you must always be on the lookout for explicit or implied disclaimers. The contract language should explicitly state that the entire relationship is contained within the contract and all external clauses or requirements are superseded by the contract. If a contract is not in place when work starts, you may very well be held to legal language contained within a pre-existing disclaimer. Caveat emptor, ya know!

  Even though we may have crafted the perfect contract with a supplier, we must always remember the oft-stated phrase: You can never outsource your own risk. Even if the contract states that a supplier is responsible for any vulnerability found in their software, do you think your customers will really care what the contract says when their identities are stolen because of a data breach in your own environment? Your company’s reputation will take the hit, not the supplier’s. In the court of public opinion, facts seldom count – it only matters who yells the loudest. And with the modern world of Facebook, you can be virtually guaranteed the consumer can yell louder than anyone else.

  Chapter 52: Step 3 - Development and Testing

  While testing is crucial to ensure software can deliver on the promised quality attributes and security controls, it also functions as another important security gate. From the moment development starts until the final product is delivered to the production environment, there is always a chance that malicious code could be injected or purposefully written into the product by internal team members. In addition to code reviews, testing is our best hope of uncovering such vulnerabilities. When we introduce a supply chain that incorporates external parties, the risk of this attack vector increases exponentially.

  The acquirer must require suppliers to describe and provide evidence for all security controls embedded into software. Additionally, each supplier must be able to demonstrate that they have remained compliant with all assurance cases previously documented during the requirements phase. The acquirer can validate compliance using penetration testing, regression testing and certification & accreditation activities. During these activities, we are not simply looking for the absence of defects, but rather positive proof that security controls exist and function correctly.

  Chapter 53: Step 4 - Acceptance

  Before accepting software from a supplier, three activities should be carried out as shown in Figure 168 – checking for anti-tampering resistance, ensuring that authenticity and anti-counterfeiting controls are in place, and verifying all supplier claims.

  Figure 168: Acceptance Steps

  First, it is crucial that software cannot be tampered with along the entire supply chain route. If tampering does occur, it must be detectable and reversible. We can achieve detection of tampering by hashing the code, which, when combined with a digital certificate, results in code signing. On receipt of the software, we simply compute the hash and compare it to the original – any difference lets us know the software has been tampered with.
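
  To make this concrete, here is a minimal sketch in Python of the hash comparison just described. The package name and expected digest are hypothetical – in practice the supplier would publish the digest through a trusted, out-of-band channel.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical names: the supplier ships 'vendor-app-2.4.1.zip' and publishes
# the expected digest out-of-band (e.g., on a signed release page).
expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
actual = sha256_of("vendor-app-2.4.1.zip")

if actual != expected:
    raise RuntimeError("Digest mismatch - the package may have been tampered with in transit.")
print("Digest matches; proceed to authenticity checks.")
```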

  Next, we must assess the authenticity of the software by examining the digital certificate we just mentioned. This assures us that the code originated from the supplier and was not hijacked by a malicious party.
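
  As a sketch of what that examination might look like programmatically, the snippet below uses the third-party ‘cryptography’ package to verify a detached signature against the supplier’s certificate. It assumes an RSA key with PKCS#1 v1.5 padding and SHA-256, and the file names are invented; real code-signing formats such as Authenticode wrap this in considerably more structure, and a full check would also validate the certificate chain up to a trusted root.

```python
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical file names for the supplier's certificate, the package,
# and a detached signature over the package bytes.
with open("supplier-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("vendor-app-2.4.1.zip", "rb") as f:
    package = f.read()
with open("vendor-app-2.4.1.sig", "rb") as f:
    signature = f.read()

try:
    # Raises InvalidSignature if the package was not signed by this certificate's key.
    cert.public_key().verify(signature, package, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid - the package originated with the certificate holder.")
except InvalidSignature:
    raise SystemExit("Signature invalid - authenticity cannot be established.")
```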

  Lastly, we cannot simply trust that a supplier has implemented all security controls that they have promised – we must verify them. This starts by determining if there are any known vulnerabilities in the software and addressing those that are not acceptable. Black box testing can then be carried out if the source code is not available. While we could attempt to reverse engineer the original source code, this activity will almost always be restricted in the licensing agreement. Before carrying out black box testing on software prior to a purchase, be sure the supplier has been notified and has provided legal approval. Otherwise, you might find yourself on the receiving end of a lawsuit claiming you are trying to illegally reverse engineer their code. In fact, it is a good idea to have a third party carry out this black box testing to remove any bias.

  When verifying supplier claims, we must actively test the software against the claims. For example, if the supplier claims to have ‘strong’ authentication measures, find out what they mean by ‘strong’ and verify it. You might find out that their version of ‘strong’ is actually ‘weak’ and therefore unacceptable.
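
  A hypothetical acceptance test makes the idea concrete: if the supplier claims weak passwords are rejected, probe their account-creation endpoint and check. Everything below – the URL, the payload shape, the expected status code – is invented for illustration.

```python
import requests

WEAK_PASSWORDS = ["password", "12345678", "qwerty123", "letmein!"]

def test_weak_passwords_rejected():
    """Verify the supplier's 'strong' authentication claim by attempting weak passwords."""
    for pw in WEAK_PASSWORDS:
        resp = requests.post(
            "https://vendor.example.com/api/users",   # hypothetical endpoint
            json={"username": "probe-user", "password": pw},
            timeout=10,
        )
        # A genuinely 'strong' policy should refuse these outright.
        assert resp.status_code == 400, f"Weak password accepted: {pw!r}"
```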

  Chapter 54: Step 5 - Delivery

  Once software has been deemed acceptable, we don’t just simply accept it and go on our merry way. What if it turns out to be unstable once it encounters heavy use? What if the supplier goes out of business and is no longer around to provide patches and updates?

  When dealing with the purchase of a software product that does not include the source code, we can protect ourselves from the supplier going out of business by setting up a code escrow. With this approach, the supplier gives a copy of the source code to a mutually agreed upon third party, who holds the source code and will not release it to the acquirer unless the supplier is no longer a valid entity, or reneges on their contractual obligations. Figure 169 shows how the relationship works.

  Figure 169: Code Escrow

  Code escrow can be viewed as a form of transferring risk through insurance that protects against the disappearance of a supplier. Note that this relationship can also benefit the supplier if unmodifiable source code is part of the license agreement. In this scenario, the acquirer is given the source code so that its security can be verified - but modifications to the source code are not allowed. The escrow holding the unmodified code can be used at any time to compare against the acquirer’s version to ensure modifications have not been carried out. We can also escrow object code along with source code if desired; this is actually recommended if multiple versions of the product are to be escrowed.

  However, code escrow is not guaranteed to deliver on its promises unless the acquirer carries out the proper validation of three capabilities as shown in Figure 170 – code retrieval, compilation and versioning.

  Figure 170: Code Escrow Validation

  When verifying code retrieval, we ensure that the process to obtain a copy of the code from the escrow party works. This includes testing that we possess the required credentials to retrieve the code as well as ensuring that spoofing of our credentials is properly protected against. Additionally, we need to ensure that change control mechanisms are in place to protect the integrity of the escrowed contents.

  Just because we can retrieve whatever code is escrowed does not mean that we can actually compile it into a version that can be executed. Some source files might be missing, or perhaps external libraries that must be linked were never placed in escrow. The code will no doubt require a specially configured environment in which to run, and we must verify that we can retrieve, compile and deploy the code into a running state before claiming success.
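
  In practice this verification is worth scripting so it can be repeated on a schedule. The sketch below shows the shape of such a drill; the repository URL and make targets are hypothetical and depend entirely on the product’s toolchain.

```python
import subprocess

# Each step fails fast (check=True raises CalledProcessError on a non-zero exit).
STEPS = [
    ["git", "clone", "https://escrow.example.com/vendor-app.git", "escrow-check"],
    ["make", "-C", "escrow-check", "build"],       # exposes missing sources or unlinked libraries
    ["make", "-C", "escrow-check", "smoke-test"],  # proves the build actually reaches a running state
]

for step in STEPS:
    print("Running:", " ".join(step))
    subprocess.run(step, check=True)

print("Escrow drill passed: code retrieved, compiled and deployed to a running state.")
```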

  It is a rare product that does not change over time – at least I have never encountered one. This is not a problem as long as proper version management is carried out. It is possible that the version of code in escrow does not match the version you are running in production, and if you ever have to rely on escrowed code you are going to be hurting badly. Validate that escrow is updated each time a version or patch is released.
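
  A simple automated comparison can catch a stale escrow before it matters. The sketch below assumes both the escrow agent and your production environment expose a small version manifest; the URLs and field names are invented.

```python
import json
import urllib.request

def fetch(url: str) -> dict:
    """Download and parse a small JSON manifest."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

escrowed = fetch("https://escrow.example.com/vendor-app/manifest.json")   # hypothetical
deployed = fetch("https://intranet.example.com/vendor-app/version.json")  # hypothetical

if escrowed["version"] != deployed["version"]:
    raise RuntimeError(
        f"Escrow is stale: it holds {escrowed['version']} but production runs {deployed['version']}"
    )
print("Escrow matches production:", deployed["version"])
```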

  Code escrow agreements are usually executed as part of an agreement when developing custom software. But in some cases, a developer may create software and hold it in escrow until such a time as a fund-raising goal is reached, at which time the source code is released into the public domain. This is called the ransom model, and the software is sometimes called ransomware. However, due to the proliferation of ransom malware, that term has been taken over to refer to an attacker installing malware that encrypts digital files until a ransom is paid, at which time the attacker gives the victim the decryption key.

  The last topic we want to cover regarding handover of software within the supply chain regards export and foreign trade issues. Both the acquirer and supplier are responsible for ensuring that regulatory requirements are met before the software crosses international borders. While the supplier is normally responsible for export licenses, there are some cases in which the acquirer is required to submit an application. However, the supplier is responsible for letting the acquirer know of any needs well before we get to the point of handing software over. In a situation where this information is not communicated in a timely fashion, a timeframe must absolutely be set up in which to complete the entire transaction.

  For each software product, several key pieces of information must be identified. The export control classification number, or ECCN, identifies the export category under which the product will fall. The export list numbers need to be identified, as well as the commodity code classification, which is useful for foreign trade statistics. The country of origin will need to be noted as well.
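
  One way to keep this information from getting lost is to capture it as a simple record per product. The Python sketch below is ours, not any regulator’s format; the field names and sample values are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class ExportProfile:
    """Export and foreign trade data tracked for a single software product."""
    product: str
    eccn: str                                  # Export Control Classification Number
    export_list_numbers: list[str] = field(default_factory=list)
    commodity_code: str = ""                   # commodity classification, used for trade statistics
    country_of_origin: str = ""

profile = ExportProfile(
    product="vendor-app 2.4.1",
    eccn="5D002",                              # illustrative value only
    export_list_numbers=["5D002.c.1"],
    commodity_code="8523.49",
    country_of_origin="US",
)
print(profile)
```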

  The World Customs Organization, or WCO, oversees international import and export activities, and has developed a framework of standards called SAFE. This framework addresses two different strategies, called the pillars of the SAFE framework – customs-to-customs, which increases cooperation between countries, and customs-to-business, which loops businesses into the process. SAFE has the following goals:

  Establish standards that provide supply chain security.

  Enable integrated supply chain management for all modes of transport.

  Optimize the role, functions and capabilities of Customs.

  Promote the seamless movement of goods through secure international trade supply chains.

  Chapter 55: Step 6 - Deployment

  Once software has been accepted and delivered, it is time to deploy the product. In the real world, deployment has probably been an on-going activity into the development and test environments for quite a while, and more than likely – if you’re smart – you have already executed dry-runs of the production deployment. At this point, however, it is time for the real thing. An operational readiness review, or ORR, is a gating activity that determines if the software is truly ready to be deployed. This includes validating three things as shown in Figure 171 – a secure configuration, perimeter security controls, and systems-of-systems security.

  Figure 171: Operational Readiness Review Components

  By this point we should have already ensured that the software is secure by design. If we have failed in this area, there really is little point in continuing, and we should be dragging the whiteboards back out and starting over! Assuming we did indeed implement security by design, we need to cover the other two D’s – secure by default and secure in deployment. Together these are called a secure configuration. Suppliers must provide the proper settings that will result in a secure configuration, along with the associated risk if the settings are not followed. When a product is secure by default, it means that we can install the software as delivered from the supplier with no additional changes needed to secure the product. When we are secure in deployment, a secure configuration is maintained by applying relevant patches, and the environment is continuously monitored and audited for malicious users, content and attacks. To ensure a quick and automated patching process, the supplier should be aligned with the Security Content Automation Protocol, or SCAP.
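
  SCAP-based tooling automates this kind of check at scale with standardized content, but the underlying idea is simple enough to sketch: compare the deployed settings against the supplier’s documented secure baseline. The setting names and file below are hypothetical.

```python
import json

# A toy secure baseline, standing in for the supplier's documented settings.
SECURE_BASELINE = {
    "tls_min_version": "1.2",
    "debug_mode": False,
    "default_admin_password_changed": True,
    "audit_logging": True,
}

with open("deployed-config.json") as f:   # hypothetical export of the live configuration
    deployed = json.load(f)

# Any setting that differs from the baseline is configuration drift.
drift = {k: deployed.get(k) for k, v in SECURE_BASELINE.items() if deployed.get(k) != v}
if drift:
    raise RuntimeError(f"Configuration drift from the secure baseline: {drift}")
print("Deployed configuration matches the secure baseline.")
```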

  Next, perimeter defense controls must be installed and properly configured. Not only should this capability be implemented in the final production environment, but it should have already been rolled out as part of a secure supply chain. Otherwise, we risk losing integrity when our software is tampered with. Perimeter controls include firewalls, secure communication protocols such as TLS, and proper session management. With environments moving to the cloud, maintaining a proper perimeter defense is becoming more difficult.
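
  As a small illustration of one such control, the sketch below uses Python’s standard ssl module to require TLS 1.2 or newer and to verify the peer’s certificate and hostname. The host name is a placeholder.

```python
import socket
import ssl

# create_default_context() verifies certificates and hostnames by default.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions

with socket.create_connection(("service.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="service.example.com") as tls:
        print("Negotiated:", tls.version(), "with cipher", tls.cipher()[0])
```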

  The last thing an ORR will validate is that the software is securely integrated with the system-of-systems, or SoS. SoS simply refers to the entire ecosystem of external components and applications that the incoming software will touch. Even if our new product is the most secure product ever imagined, as long as there is a linkage to an unsecure system we could be in trouble. In general, risk will always increase with the number of external linkage points, even if other products are deemed to be secure as well.

  All suppliers that participate in an SoS must prove that their system has undergone an attack surface analysis using threat modeling, secure coding and security testing. It is crucial that the acquirer execute integrated systems testing and not rely on testing the new component or product alone. Just as combining two services together can result in vulnerabilities that neither has when standing alone, combining two software products together can easily produce additional risks that were never envisioned. For example, a new linkage to our system may provide a new path for unvalidated input to get into our system. Software that connects directly to an external database instead of going through a service layer is notorious for generating this type of risk. Another common emerging risk is that the communications channels between systems are left unsecure due to incompatibility, performance or configuration issues.
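
  The service-layer idea is easy to illustrate: external systems call a narrow function that validates input at the boundary and uses a parameterized query, rather than reaching into the database directly. The table, columns and ID format below are invented.

```python
import re
import sqlite3
from typing import Optional

ACCOUNT_ID = re.compile(r"^[A-Z]{2}\d{6}$")   # whatever format the domain actually defines

def get_account(conn: sqlite3.Connection, account_id: str) -> Optional[tuple]:
    """Service-layer lookup: unvalidated partner input never reaches the query."""
    if not ACCOUNT_ID.fullmatch(account_id):
        raise ValueError(f"Malformed account id: {account_id!r}")
    # Parameterized query - the driver handles quoting, not string concatenation.
    return conn.execute(
        "SELECT id, owner, balance FROM accounts WHERE id = ?", (account_id,)
    ).fetchone()
```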

  Chapter 56: Step 7 - Operations and Monitoring

  So far, we have focused primarily on the infrastructure role as it pertains to software developed in-house. There are some nuances to be aware of when dealing with software that is sourced externally, whether it is software purchased off the shelf or custom software being created by contractors. Let’s examine both the similarities and differences.

  Let’s suppose that we have finally deployed our product, and we can sit back and take a break before the next project launches. If you believe that, then you are pretty green to the software development world, aren’t you? The first few weeks after a successful rollout are often the most hectic and busy time period for the development team as they track down insidious bugs related to load or threading issues that never popped up during testing. If logging was never properly implemented, this quickly becomes a nightmare as the product refuses to stay functional and no one knows why. Beyond that, the infrastructure team remains busy applying patches as quickly as the development team can generate them. We must also ensure that only the right people have access to the environment and handle outages as they pop up.

  In short, there are six distinct activities that go on post-deployment as shown in Figure 172 – ensuring run-time integrity, applying patches and upgrades, ensuring proper access termination, extending the software, continuous monitoring, and incident management.

  Figure 172: Post-Deployment Activities

  When ensuring run-time integrity, we are effectively configuring the system to check the access rights of software in real-time to ensure malicious code cannot execute. There are two primary methods for doing this – code signing and TPM. Code signing, as we have already discussed, involves the use of hashing and digital signatures to ensure code has not been tampered with and is authentic. At run-time, this is treated slightly differently. For example, .Net Code Access Security, or CAS, looks inside of an assembly and decides the level of access to OS-level functionality a library can have. For this to work properly the environment must be properly configured and maintained with that same level of security through each subsequent patch and update. The second method for ensuring run-time integrity is to use the trusted platform module, or TPM. We have already discussed the TPM at length, but at run-time it is able to verify both hardware and software as long as the code has been signed. If code signing is not used, the TPM will not be very effective.

  Patches and upgrades must be religiously kept up-to-date. If you recall, one of the valid responses to encountering a risk is to avoid it, or simply stop doing whatever activity is causing the risk. Clearly, that is not an option with our shiny new software, and we therefore have no choice but to mitigate the new risk by patching the software with a hotfix. When dealing with in-house software, this process is not really that difficult, but when using a supply chain, the problem doubles in complexity with each link in the chain. All of the previously-discussed techniques come into play at this point to ensure proper patch management – repository access, provenance points, hash sums and code signing, testing before acceptance, etc. How well the processes after deployment work depends entirely on how well the processes before deployment worked.

  One of the most overlooked post-deployment processes is that of access rights termination. As employees change roles or leave the company, or suppliers are added and removed from the supply chain, accounts and rights tend to hang around unless there is a very well-defined process to remove such things. By far the biggest risk due to a failure to remove access is represented by disgruntled current and former employees. When software completes the handover process, only the receiving party’s personnel should be allowed to access or modify the code.
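
  A periodic reconciliation sweep is one way to keep this process honest: compare the accounts that actually exist against the current roster of authorized personnel, and flag the orphans. The data below are placeholders for whatever directory and HR systems are in use.

```python
def find_orphaned_accounts(active_accounts: set[str], authorized_people: set[str]) -> set[str]:
    """Accounts with no matching authorized person should be disabled and reviewed."""
    return active_accounts - authorized_people

# Placeholder data standing in for a directory export and an HR roster.
accounts = {"alice", "bob", "contractor-acme", "former-employee-7"}
roster = {"alice", "bob"}

for account in sorted(find_orphaned_accounts(accounts, roster)):
    print("Disable and review:", account)
```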

  Beyond the need to react to bugs and discovered vulnerabilities, software must also be enhanced as new features or capabilities are requested. This may require updates to the software’s own codebase, or perhaps extending it to connect to a new external capability. In either case, we call this a custom code extension. It is tempting to fast-track such activities without generating or updating the threat model, going through the proper security code reviews and applying proper regression testing. Resist this temptation! A proper chain of custody must be maintained through this process.

 
