Sharks in the Moat


by Phil Martin


  We have already discussed in detail how encryption underpins public key infrastructure, or PKI – that is, the combination of CAs, RAs and public/private keys. We talked about how encryption enables digital signatures and digital envelopes – the pairing of an encrypted message with the secret key needed to read it, itself encrypted under the recipient’s public key. And we discussed the value of hashing. Now let’s see how encryption is used in other applications.
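
  To make the digital envelope concrete, here is a minimal sketch in Python using the third-party cryptography package. The message is encrypted with a one-time symmetric key, and that key is in turn encrypted with the recipient’s public key; the message text and key sizes are illustrative only.

    # Digital envelope sketch: encrypt the message with a fresh symmetric
    # key, then encrypt that key with the recipient's RSA public key.
    # Requires the third-party 'cryptography' package.
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # The recipient's key pair (in practice it comes from their certificate).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # 1. Encrypt the message with a one-time symmetric key.
    secret_key = Fernet.generate_key()
    ciphertext = Fernet(secret_key).encrypt(b"Meet at the moat at noon.")

    # 2. Encrypt the symmetric key with the recipient's public key.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(secret_key, oaep)

    # The envelope is the pair (ciphertext, wrapped_key). Only the holder
    # of the private key can unwrap the symmetric key and read the message.
    recovered = private_key.decrypt(wrapped_key, oaep)
    assert Fernet(recovered).decrypt(ciphertext) == b"Meet at the moat at noon."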

  Transport Layer Security, or TLS

  Transport layer security, or TLS, is the successor to SSL. TLS provides encryption services for Internet traffic and is most often associated with communication between a browser and a web server. It operates in three phases:

  1) The browser and server negotiate to choose the algorithms to use – asymmetric (public key), symmetric and hash functions.

  2) The browser and server use PKI to exchange the shared secret key.

  3) The remainder of the conversation is carried out using the shared secret key.

  TLS is used for the ubiquitous HTTPS, or hypertext transfer protocol secure.
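
  As a quick illustration of those three phases, here is a minimal sketch using Python’s standard ssl and socket modules. The library carries out the negotiation and key exchange internally during the handshake; the host name is a placeholder.

    # TLS sketch using only the Python standard library. The ssl module
    # performs phases 1 and 2 (negotiation and key exchange) during the
    # handshake; everything afterward travels under the shared secret key.
    import socket
    import ssl

    context = ssl.create_default_context()  # safe defaults; verifies the server certificate

    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print("Protocol:", tls.version())     # e.g. 'TLSv1.3'
            print("Cipher suite:", tls.cipher())  # the negotiated algorithms
            # Phase 3: application data protected by the shared secret key.
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
            print(tls.recv(256))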

  IP Security, or IPSec

  Whereas TLS works above the OSI transport layer, IP Security, or IPSec, lives completely within the network layer where IP lives. IPSec is used to secure IP communication between two endpoints, effectively creating a VPN that can run in two modes – transport mode and tunnel mode. The two modes differ in the amount of data encrypted.

  Transport mode only encrypts the IP payload, which is then called the encapsulating security payload, or ESP. The header is left as clear text. This provides confidentiality because the content is encrypted but does not provide non-repudiation.

  Tunnel mode encrypts both the payload and the header and adds an authentication header, or AH, to provide non-repudiation as well as confidentiality.

  In either mode, a security association, or SA, is established along with the IPSec session. The SA dictates various configurations such as encryption algorithms, security keys, and more. The SA is sent along with the encrypted payload so that both parties are able to communicate.

  Note that so far IPSec has only used symmetric encryption. Just as we use asymmetric encryption to provide key management for PKI, IPSec has a corresponding standard way of managing keys called the Internet security association and key management protocol, or ISAKMP.
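
  The following toy model – emphatically not a real IPSec implementation – may help visualize the two modes. Fernet stands in for the symmetric cipher an SA would negotiate for the ESP, an HMAC stands in for the AH, and every address and key is invented.

    # Toy model of IPSec's two modes -- NOT a real IPSec implementation.
    # Requires the third-party 'cryptography' package for Fernet.
    import hashlib
    import hmac

    from cryptography.fernet import Fernet

    cipher = Fernet(Fernet.generate_key())      # stand-in for the ESP cipher
    ah_key = b"integrity-key-agreed-in-the-SA"  # stand-in for the AH key

    ip_header = b"src=10.0.0.1;dst=10.0.0.2;"
    payload = b"application data"

    # Transport mode: only the payload is encrypted; the IP header stays in
    # clear text, so an observer can still see who is talking to whom.
    transport_packet = ip_header + cipher.encrypt(payload)

    # Tunnel mode: the original header AND payload are encrypted together,
    # wrapped in a new outer header, and covered by an authentication header.
    protected = cipher.encrypt(ip_header + payload)
    auth_header = hmac.new(ah_key, protected, hashlib.sha256).digest()
    tunnel_packet = b"src=gw-a;dst=gw-b;" + auth_header + protected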

  Secure Shell, or SSH

  Secure shell, or SSH, is a client-server program that allows remote login across the Internet. It uses strong cryptography and digital certificates, and is a far more secure replacement for the insecure Telnet. SSH is implemented at the application layer of the OSI model.
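
  Here is a minimal sketch of an SSH client session using the third-party paramiko library; the host name, user name and key path are placeholders.

    # SSH sketch using the third-party 'paramiko' library.
    import os
    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()  # verify the server against known_hosts

    # Key-based login -- credentials never cross the wire in clear text,
    # unlike Telnet.
    client.connect("bastion.example.com", username="deploy",
                   key_filename=os.path.expanduser("~/.ssh/id_ed25519"))

    stdin, stdout, stderr = client.exec_command("uptime")
    print(stdout.read().decode())
    client.close()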

  Secure Multipurpose Internet Mail Extensions, or S/MIME

  Secure multipurpose internet mail extensions, or S/MIME, is a secure email protocol that authenticates the identity of both the sender and receiver, and ensures integrity and confidentiality of the message, including any attachments.
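
  As a hedged sketch, signing a message for S/MIME can be done with the third-party cryptography package; the certificate and key file names below are placeholders, and a full solution would also encrypt the message for confidentiality.

    # S/MIME detached-signature sketch using the 'cryptography' package.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.serialization import pkcs7

    # Placeholder file names -- substitute the sender's real credentials.
    with open("sender_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("sender_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)

    message = b"The message body, including any attachments."

    # The recipient can verify both the sender's identity and the
    # integrity of the message from this signature.
    signed = (
        pkcs7.PKCS7SignatureBuilder()
        .set_data(message)
        .add_signer(cert, key, hashes.SHA256())
        .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
    )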

  Virtualization

  Before we discuss how great cloud computing is, we need to discuss one of the core concepts behind it. Virtualization allows multiple operating systems, called guests, to live on the same physical server at the same time – all running simultaneously without knowing the others exist. As far as each operating system is concerned, it has the complete use of a stand-alone physical computer – called the host – when in reality it is sharing CPUs, memory and storage space with other OSs. “How is this unholy magic carried out?”, you ask. By using a software layer called a hypervisor. The hypervisor is the only layer that can see the entire physical computer, and it carves it up into chunks so that each virtual server is none the wiser. See Figure 111 to get a better understanding of the various layers.

  Figure 111: Virtualization

  The hypervisor does add a little bit of overhead because it is software running on a computer, after all. But that loss is more than made up for by allowing the various OSs to use hardware resources while other OSs are idle. It turns out that rarely do the OSs all want the same resources at the same time, so the net benefit is that we can run more OSs on a single beefy computer than we could on many smaller computers.

  But that is not the major win here – because each OS runs within a well-defined window of memory, it is relatively easy to take a snapshot of the entire OS – memory, persisted storage, application state, you name it – and store it in a single file called an image. That means three things:

  1) We can run different OSs or versions of an OS at the same time on the same host.

  2) We can easily back up a virtual machine by taking a snapshot, or image.

  3) We can easily clone that virtual machine by spinning up multiple copies, all running at the same time.

  And that third point is where we get the biggest win – we can increase or decrease the number of running OSs on the fly. As load increases, we simply increase the number of virtual servers to handle requests. When load goes back to normal levels, we shut down those extra virtual machines. And we can do all of this programmatically so that no one has to be watching. That directly translates into cost savings. Each running virtual machine uses hardware and bandwidth, so we only pay for the extra capabilities when we need them. This is not one of those ‘sounds good on paper’ things – Microsoft’s Azure, Amazon’s AWS and Google Cloud are wildly successful examples of virtual cloud computing.
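
  As a sketch of what ‘programmatically’ might look like, here is a hedged example using AWS’s boto3 SDK; the group name and the sizing rule are invented for illustration.

    # Programmatic scaling sketch using AWS's boto3 SDK.
    # The group name and thresholds are illustrative only.
    import boto3

    autoscaling = boto3.client("autoscaling")

    def scale_for_load(group_name: str, requests_per_minute: float) -> None:
        """Choose a capacity for the current load and apply it."""
        # Invented rule: one extra server per 500 requests/minute,
        # bounded between 2 and 10 running virtual machines.
        desired = min(10, max(2, 2 + int(requests_per_minute // 500)))
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=group_name,
            DesiredCapacity=desired,
            HonorCooldown=True,  # avoid thrashing while a change settles
        )

    scale_for_load("web-tier-asg", requests_per_minute=2600.0)  # -> 7 servers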

  Now, don’t think that this only happens in the public cloud – many large companies create their own internal virtual capabilities as well to save money. Of course, there are some disadvantages from a security viewpoint. The physical host is now another single point of failure, and a compromise at the hypervisor level could allow an attacker access to all hosted virtual machines. As a result, special care must be taken to secure remote access to the host. Additionally, if the hypervisor runs into performance issues, all hosted OSs suffer as well. One last problem is quite interesting – since multiple virtual machines are all running in the same physical space, it is possible for one to reach over and access memory belonging to another, which would not be the case if they were hosted on different physical computers. The hypervisor must be hardened, tested and continuously patched to ensure this is not possible.

  To mitigate the risks just mentioned, strong physical and logical access controls to the host and its management console must be applied. Virtual machines connect to the network just like any other server, even though they sometimes share the same network card, and should be segregated in the network just like a physical machine would be. Finally, any changes to a host must go through stringent change management processes.

  Cloud Computing

  The concept of cloud computing has been around since the 1960s, but really came into its own when the Internet became a full-fledged force in the early 2000s. The idea behind cloud computing is that processing and data are somewhere in “the cloud” as opposed to being in a known location. However, the cloud does not have to be accessible across the Internet – many companies host their own cloud that is restricted to an intranet – only systems and people within the company’s own network can get to it.

  Cloud computing has five essential characteristics:

  1) It provides on-demand self-service by provisioning computing capabilities without any type of human interaction.

  2) It provides broad network access and can be used with diverse client platforms.

  3) Computer resources are pooled and reusable so that multiple tenants can use them simultaneously. A tenant can be anything from a single user to an entire company.

  4) Resources can rapidly scale up or down, called elasticity, in response to real-time business needs. In most cases this happens automatically without any reconfiguration needed.

  5) Customers are charged per use, so they only pay for what they use. This is called a measured service.

  Cloud Deployment Models

  There are four types of cloud deployment models, as shown in Figure 112.

  A private cloud is entirely hosted inside of a company’s intranet and is not accessible externally. Employee-only applications, such as an HR website, are hosted in a private cloud.

  Figure 112: Cloud Computing Deployment Models

  If you take a private cloud and allow a select few other companies to access it, it becomes a community cloud. Private networks between multiple companies are examples of this model.

  If an application is hosted across the Internet and is publicly accessible, it is in the public cloud. This represents the majority of SaaS applications.

  The last model, a hybrid model, is achieved when a private cloud connects across the public Internet into another application. This is the model normally chosen when companies want to host their custom applications in the public cloud but need to maintain a constant connection between employees and the application.

  Classes of Service

  Figure 113: Classic Architecture vs. Cloud Computing

  The cloud model comprises three service models, each having a corresponding cousin in classic computer architecture, as shown in Figure 113.

  Infrastructure as a Service, or IaaS, provides the customer with a ready-made network, storage and servers, ready for the operating systems to be installed and configured.

  Platform as a Service, or PaaS, takes it one step further and manages the operating systems, middleware and other run-time components. PaaS is ready for a custom application to be deployed.

  Software as a Service, or SaaS, is essentially an application that someone hosts and maintains. The customer simply manages user accounts, and employees log in and use the application.

  Over time, new classes of service have evolved using the ‘as a Service’ model. These are shown in Figure 114.

  Figure 114: 'as a Service' Offerings

  Security as a Service, or SecaaS, provides a way to outsource security processes. For example, a cloud service provider, or CSP, can provide managed services such as antivirus scanning and email security. Or, the CSP can host CPU and memory-intensive processes on hardware managed in the cloud. This has the advantage of reducing the need for the customer to apply patches or updates to those systems, as the CSP will take care of it.

  When a company offers Disaster Recovery as a Service, or DRaaS, it takes on the responsibility of hosting and maintaining a disaster recovery solution in the cloud. In addition to backup equipment, the CSP will usually offer services for a business continuity plan, or BCP. The benefits include the following:

  The cost is much lower than that of an in-house DR capability. Since DR is not a core business function, the ROI can often be considerable.

  Although it is hosted in the cloud, the servers must be physically located somewhere, and if those backup servers are not in the same general area as the company’s primary servers, then a disaster is less likely to affect both.

  Identity as a Service, or IDaaS, has two different interpretations:

  The management of identities used by the company internally is hosted in the cloud, but the company still implements its own identity and access management (IAM) solution.

  The IAM itself is hosted in the cloud. This is called a federated identity.

  Data Storage and Data Analytics as a Service, or big data, is delivered when the storage and analysis of huge amounts of data is performed in the cloud. The primary advantage of big data is that it delivers an almost unlimited amount of storage capacity so that any amount of data can be mined for patterns.

  Cloud access security brokers, or CASBs, provide an easy and comprehensive way to secure the path between a company and hosted cloud services. CASBs provide the following services:

  Authentication

  Authorization

  Single Sign-On (SSO)

  Tokenization (see the sketch after this list)

  Logging

  Notification and alerts

  Malware detection and prevention
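
  To make one of those services concrete, below is a toy sketch of tokenization: the broker swaps a sensitive value for a meaningless token before it leaves the company, keeping the real value in a vault only it can read. A real CASB would use a hardened, persistent store; everything here is illustrative.

    # Toy tokenization sketch -- a real broker would use a hardened,
    # persistent vault rather than an in-memory dictionary.
    import secrets

    class TokenVault:
        def __init__(self) -> None:
            self._vault: dict[str, str] = {}

        def tokenize(self, sensitive: str) -> str:
            token = "tok_" + secrets.token_hex(16)  # random, reveals nothing
            self._vault[token] = sensitive
            return token  # safe to hand to the cloud service

        def detokenize(self, token: str) -> str:
            return self._vault[token]  # only the broker can reverse this

    vault = TokenVault()
    token = vault.tokenize("4111-1111-1111-1111")
    assert vault.detokenize(token) == "4111-1111-1111-1111"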

  Information as a Service, or IaaS – not to be confused with Infrastructure as a Service – builds on big data and takes it one step further. Whereas big data provides the processing power to sift through data and answer a question, IaaS only requires you to ask the question – it takes care of the analysis itself.

  Integration Platform as a Service, or IPaaS, comes into play when a hybrid cloud model is used. Because systems in a hybrid model are accessed across company boundaries and into the public cloud, connecting systems and applications together while maintaining a cohesive IT approach can be daunting. IPaaS works by providing a virtual environment on which to host all of these systems.

  Computer forensics can be a tricky proposition unless you have the right tools, which are often very expensive, and the experience needed to analyze and store evidence that will hold up in court. Forensics as a Service, or FRaaS, provides those tools and optionally the needed expertise.

  Advantages and Disadvantages of Cloud Computing

  Some have compared the advent of cloud computing to the introduction of the personal computer or even the Internet. However, there is one big difference – personal computers and the Internet took decades to develop, but cloud computing has popped up and made its way into everyday use over the course of just a few years. Let’s discuss a few of the reasons why that is so.

  First of all, by using cloud-based resources that can scale up or down at a moment’s notice, we have a virtually unlimited resource pool to draw from whenever we need to. Add to that the ability to pay for only what we use, and the value proposition goes through the roof.

  Secondly, companies operate on two types of expenditures – capital and operational. Capital expenditures are not favored for a variety of reasons, but that is how money spent on hardware and software is categorized. On the other hand, if we take that same money and pay for cloud-hosted solutions, then we can claim it is an operational expenditure since we are not actually purchasing anything. Not only that, but we can ‘dip our toes in the water’ and try out new capabilities without having to spend huge amounts of money. Add to that the ability to quickly implement new solutions, and we have the makings of a major win-win.

  Next, because we can scale up at any time, our applications become that much more performant, responsive and scalable basically for free. All of those adjectives – performant, responsive, scalable and, most of all, free – are things IT managers love to hear.

  Another advantage is the ease with which we can upgrade software versions and apply patches. Without going into a lot of explanation, virtualization and virtual images are behind that.

  And finally, cloud services such as Amazon’s AWS or Microsoft’s Azure are famously redundant with fail-over data centers located around the globe. This takes resiliency to a whole new level.

  Unfortunately, all of this high praise does come at a cost in terms of increased risk. Because cloud services intentionally hide the complexity of hosting, we also have to deal with a lack of transparency on the CSP’s side. If we were to host data in our own data center, the data owner would have full access to and knowledge about that data. When we store this data in the cloud, we rarely have any knowledge of where the data is stored or in what manner. As a result, certain types of data and processes should not be stored in the cloud regardless of the economics, due to increased security risks.

  Another factor to consider when dealing with global cloud providers is that our data may now cross jurisdictional boundaries without us even knowing it. That could get us in real trouble if regulatory offices hear about it and decide to enforce some rather stiff penalties.

  One last negative note about security and CSPs: the availability of audit logs will almost certainly be a challenge to overcome, and the actual level of security controls being implemented will more than likely be completely invisible to the customer.

  Figure 115: Cloud Computing Risk Map

  If we take all of the above advantages and disadvantages together, along with both the service and deployment models, we can come up with a two-dimensional matrix to help us map and describe the risk/benefit discussion. This is shown in Figure 115.

  To help with the selection of a CSP, there are a number of frameworks available for us to use that are built specifically for cloud providers, such as the CSA Cloud Control Matrix and the Jericho Forum Self-Assessment Scheme.

  Cloud Computing Security

  When dealing with cloud service providers, the bulk of liability will lie with the organization that consumes the service provider’s cloud. The only real way an organization has to change this is through a service level agreement, or SLA. This allows the organization to enforce governance, regulations, compliance and privacy, sometimes abbreviated GRC+P.

  Another challenge when moving to the cloud is in gathering cyberforensics evidence. Beyond struggling to pierce the rather opaque veil most providers will put between themselves and customers, the elastic nature of cloud scalability means that the resources being used right now may not be available 3 minutes from now, let alone when forensics is gathering evidence 2 weeks after a breach.

  Beyond the aspects we just discussed, let’s cover some of the most damaging threats to cloud computing.

  Data Disclosure, Loss and/or Remanence

  Keeping our information confidential is the biggest risk we will encounter when moving to the cloud. For example, when a company runs its own data center, the data owner and data custodian are both employees. When we move to the cloud, however, the data owner becomes one tenant among many, and the data custodian is the service provider. It then becomes very important to verify and validate that the data protection and access controls the service provider claims are in place do indeed work and are active.

 
