by Phil Martin
Such a capability is called code access security, or CAS, and allows the operating system to decide what access individual code blocks will have at run-time based on how the server has been configured. Most importantly, it allows the OS to block code from untrusted sources from gaining privileged access. This can only be carried out if the code is generated by a type-safe language. Beyond type safety, CAS involves three other concepts we will cover next – security actions, syntax security, and secure class libraries.
Security Actions
When code is loaded at run-time, there are three types of security actions available – request, demand and override. The controller granting access based on security permissions may be either the OS or the language framework executing the code, such as the .NET CLR. In either case the controller is referred to as the runtime, since it grants access at run-time.
A request security action informs the runtime of all permissions the code requires to successfully execute. The runtime will make access decisions on whether to actually allow the code to execute, but the code cannot influence the runtime - it simply reports what it needs.
A demand security action is used by the code to assert the required permissions and to help protect resources from callers.
An override security action is used to override the default security behavior.
Syntax Security
Code access security (CAS) can be implemented in the code itself in two ways – declarative or imperative.
Declarative security syntax uses attributes at the member, class, or assembly level. In effect, the code to be controlled is decorated with an attribute that is interpreted at run-time. Because of this run-time evaluation, the attributes can actually be extracted into a set of configuration files that can be managed by non-developers. In this case, developers can focus on writing code, while a different team worries about what access rights are required to run specific blocks of code. Figure 91 shows an example of applying declarative security to a C# member.
Figure 91: Declarative Code Access Security
A second method of applying CAS is an imperative approach, where we write code that directly invokes the security model. Figure 92 provides a C# example of imperative security syntax, where we create a new instance of the permission object in code and use it to prevent further execution if the current context does not have sufficient access. Imperative syntax is useful when the set of permissions is not known until run-time, while a declarative approach requires us to know the possible permissions before deployment. The downside to an imperative approach is that developers are now in control of the decisions, and that capability cannot be externalized as a declarative approach allows. As a side-effect, imperative syntax does not allow the code’s security behavior to vary with the environment to which it is deployed, since all access decisions are being made in code. Additionally, imperative syntax does not allow request security actions to be exposed to the runtime. On the bright side, the imperative method allows us to get as granular as we would like, even protecting multiple sections of code differently within a single function.
Figure 92: Imperative Code Access Security
Figure 93 summarizes the pros and cons of each approach.
Approach                                  Declarative                       Imperative
Must know permissions before deployment   No                                Yes
Can externalize configuration             Yes                               No
Developer must implement                  No                                Yes
Granularity                               Assembly, Class and Member only   Any
Supported security actions                Request, Demand and Override      Demand and Override
Figure 93: Declarative vs. Imperative Security Syntax
Secure Class Libraries
When we use CAS to protect code contained in a class library that can be used by other processes, we have created a secure class library. The runtime will ensure that code calling a secure class has the necessary permissions. The biggest win we achieve by using secure classes is that when malicious code takes over some other code that normally can invoke a secure class, the malicious code will not be able to invoke the secure class since it does not have the necessary permissions.
As an example, consider Figure 94. Suppose we have ‘SecureClassA’, which requires the ‘CanDoSensitiveStuff’ permission before the runtime will allow it to be invoked. Then along comes ‘ProcessB’, which wants to invoke ‘SecureClassA’ and just happens to have the ‘CanDoSensitiveStuff’ permission. The runtime looks at everything, decides it is perfectly happy, and allows the code to continue. Now, a malicious process called ‘EvilProcessC’ somehow manages to take over ‘ProcessB’ and tries to invoke ‘SecureClassA’ directly. The runtime, even though it does not know that ‘EvilProcessC’ is in fact evil, will not allow this action because ‘EvilProcessC’ does not have the ‘CanDoSensitiveStuff’ permission.
Figure 94: CAS in Action
Memory Management
There are a number of memory-related topics that should be taken into account when coding defensively.
Locality of Reference
The principle of locality, also called the locality of reference, simply states that data locations referenced over time by a single process tend to be located next to each other. While a computer does this purposefully to make the reuse of recent instructions and data more efficient, it also makes it easier for an attacker to carry out a successful buffer overflow attack. Since he can predict what data will be overwritten by malicious code when the buffer is exceeded, the attacker has a better chance of getting his own instructions executed. An attacker will use four main types of locality of reference when deciding the memory addresses to target – temporal, spatial, branch and equidistant.
Temporal locality, or time-based locality, is a fancy way of saying that the most recently used memory locations are likely to be referenced again in the near future. Spatial locality, or space-based locality, implies that memory addresses next to recently used addresses are likely to be referenced next. Branch locality refers to the tendency of execution to follow a small number of predictable paths, which processors exploit through branch predictors, such as conditional branch prediction, to determine which memory addresses will be referenced next. Equidistant locality is part spatial and part branch – simple functions can predict future memory addresses because accesses tend to occur at regular strides from the locations recently accessed.
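The predictability an attacker exploits can be sketched in a few lines. The example below is illustrative Python only – real locality plays out in hardware caches and address spaces that Python hides – but it records the flat addresses a row-by-row scan of a 4x8 array touches. A repeated address would be temporal locality; the relentless +1 pattern here is spatial locality at work:

```python
def access_trace(rows, cols):
    """Simulate a row-major scan of a rows x cols array, returning
    the flat 'address' (index) touched at each step."""
    trace = []
    for r in range(rows):
        for c in range(cols):
            trace.append(r * cols + c)  # flat address of element [r][c]
    return trace

trace = access_trace(4, 8)

# Spatial locality: every access lands exactly one past the previous
# address, so the next address is trivially predictable.
sequential = sum(1 for a, b in zip(trace, trace[1:]) if b == a + 1)
print(sequential, "of", len(trace) - 1, "accesses hit the next address")
```

An attacker overflowing a buffer benefits from exactly this predictability: adjacent addresses are very likely to hold the data used next.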
Dangling Pointers
In programming, memory pointers are a core capability that must be handled correctly. When properly used, some amount of memory is allocated and data or an object is placed into that memory. The code that will use the data or object uses a pointer to reference this memory location, so essentially a pointer is simply a memory address. This works great until the allocated memory is released or reused for some other purpose, resulting in a dangling pointer – a pointer that thinks it points to something useful in memory when in reality the referenced data or object has long since been erased or replaced. Another scenario occurs when a pointer is used before memory is allocated for it, resulting in a wild pointer.
Both cases are dangerous, as they result in erratic behavior that could crash the current process or allow an attacker to overwrite memory that should be inaccessible. With both types of pointers, an attacker could get his own data or code executed by a process. In terms of coding, we need to be sure that when memory is deallocated, we also clear every variable that references the now-meaningless memory address.
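In a managed language a true dangling pointer cannot occur, but Python’s weakref module lets us sketch the idea: a reference that outlives the object it points to. (This sketch relies on CPython’s reference counting to reclaim the object immediately on `del`; the class name `Buffer` is invented for illustration.)

```python
import weakref

class Buffer:
    """Stands in for a block of allocated memory."""
    def __init__(self, data):
        self.data = data

buf = Buffer("something useful")
handle = weakref.ref(buf)      # a reference that does NOT keep buf alive

alive = handle()               # the target still exists...
print(alive.data)              # prints "something useful"

alive = None
del buf                        # ...until the memory is released
# 'handle' is now the moral equivalent of a dangling pointer: it still
# exists, but its target is gone. Unlike C/C++, Python lets us detect it.
print(handle())                # prints None
```

In C or C++ the equivalent pointer would still hold the old address, and dereferencing it would silently read whatever now occupies that memory.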
Garbage Collection
Another aspect of pointers that can quickly become problematic occurs when a pointer is discarded without first deallocating the memory it references. Consider the following pseudo-code:
var pointer = alloc(1000);
pointer = null; //We just lost 1,000 bytes!
In this case we explicitly allocate 1,000 bytes and then set the pointer to NULL, meaning that we simply forget where the allocated memory is. The memory manager still thinks the memory is in use and will therefore not let any process use it. Pretty much the easiest way to reclaim the lost memory is to restart the operating system. That is why the mantra “When in doubt, reboot!” always holds true with Windows. This scenario is a perfect example of a memory leak, in which memory is permanently lost. Imagine if the code we just described were in a tight loop that executed 1,000 times – we would have lost about 1MB of memory in a fraction of a second. This type of behavior is a huge problem with the C and C++ languages.
It is not a problem for .NET or Java, though, because they have a background process called a garbage collector that automatically releases memory once it is no longer referenced. Instead, our code would appear as the following:
var pointer = new BigObject(); //The garbage collector allocates the memory for us
pointer = null; //No worries – the garbage collector will release the memory eventually
In this case, even though we explicitly forget the allocated memory location, the garbage collector remembers it and will eventually deallocate it for us. Essentially, a garbage collector hides the complexity of memory management so that we cannot shoot ourselves in the foot. Note the word ‘eventually’ – in order not to consume resources all the time, a garbage collector periodically sweeps through its list of allocated memory and releases the unused or unreferenced allocations. Therefore, with managed languages that have garbage collection, huge chunks of memory that are ‘released’ by code may not actually be freed until later – anywhere from a few milliseconds to many seconds in the future.
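That ‘eventually’ can be observed directly. CPython, for instance, frees most objects immediately through reference counting, but objects caught in a reference cycle must wait for the periodic collector – a sketch (the `Node` class is invented for illustration):

```python
import gc

class Node:
    """Two Nodes that point at each other form a reference cycle."""
    def __init__(self):
        self.partner = None

def make_cycle():
    a, b = Node(), Node()
    a.partner, b.partner = b, a
    # a and b go out of scope here, but the cycle keeps their
    # reference counts above zero – only the collector can free them

gc.collect()                 # start from a clean slate
make_cycle()
unreachable = gc.collect()   # force the 'eventual' sweep right now
print(unreachable)           # at least 2 unreachable objects were freed
```

Until that sweep runs, the two Node objects sit in memory exactly like the ‘released’ chunks described above.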
To address the latency in garbage collection, a different approach called automatic reference counting, or ARC, is sometimes employed. Here the runtime keeps track of the number of references to each object. When the count reaches zero, meaning that no pointer references the memory location any longer, the memory is immediately released.
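CPython happens to use reference counting internally, so we can watch the bookkeeping an ARC scheme performs (note that `sys.getrefcount` counts its own temporary argument as one extra reference):

```python
import sys

data = [1, 2, 3]
base = sys.getrefcount(data)        # 'data' plus the temporary argument

alias = data                        # a second reference to the same list
with_alias = sys.getrefcount(data)  # exactly one higher than base

alias = None                        # drop the extra reference again
after_drop = sys.getrefcount(data)  # back to base; at zero, memory is
                                    # released immediately
print(base, with_alias, after_drop)
```

The moment the count hits zero the object is freed – no waiting for a background sweep, which is exactly the latency advantage ARC offers.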
If an attacker figures out that a specific process has a memory leak, he can target whatever behavior causes the leak and exacerbate it until so much memory has been lost that either the process or the computer can no longer function. This is a form of DoS attack and is called a fast death because the computer suddenly stops working. An alternative is a slow death, in which the attacker leaks just enough memory to cause a managed garbage collector to kick into overdrive – this starves all processes of CPU cycles and slows everything to a crawl.
Type Safety
When a language is strongly typed, variables can only be created if the primitive type or class is well-defined ahead of time. For example, JavaScript allows the following code:
var someUnknownObject = new Object();
someUnknownObject = 12.300;
someUnknownObject = 'Hey!';
someUnknownObject = false;
This is an example of NOT being strongly typed, and can result in some very strange behavior, as well as unintentional side-effects. Compare this to C#:
double someNumber = 12.300;
someNumber = "Hey!"; //compile-time error – cannot assign a string to a double
In C#, a variable’s type is fixed once it is declared, so the compiler rejects the second assignment outright – C# is a strongly typed language.
A related concept is type safety, which dictates that memory allocated as a specific type can only be referenced as that type. For example, the following C++ code compiles without complaint:
MyClass1 *pointer1 = new MyClass1();
MyClass2 *pointer2 = (MyClass2 *)pointer1; //the cast silences the compiler
The compiler assumes you know what you are doing, resulting in unpredictable behavior – possibly a crash – when pointer2 is used at run-time. The following C# code would throw a compiler error because C# is type safe:
MyClass1 pointer1 = new MyClass1();
MyClass2 pointer2 = (MyClass2)pointer1; //compiler error thrown here
The compiler refuses to compile the code because there is no relationship between MyClass1 and MyClass2 that allows such a type cast. Without getting too far into object-oriented coding, if MyClass2 were the base class for MyClass1, then the compiler would be perfectly fine with it.
Some languages – such as C# and Java – offer a feature called parametric polymorphism, or generics. It allows us to write code that processes data without knowing until run-time what type the data will be. For example, in C# we can use a generic list:
void DoIt<T>(List<T> list)
{
    foreach (var item in list)
    {
        //Do something with item
    }
}
To maintain type safety, the compiler will run through all code that references ‘DoIt()’ to make sure the function as-written does not violate whatever ‘T’ will eventually be. This allows a language to be more expressive while still enforcing type safety.
Why do we care about type safety? Because without this capability it becomes far too easy to execute buffer overflow attacks. Languages supporting type safety such as C# and Java are far safer than C or C++.
Encryption
The greatest vulnerability when dealing with cryptography is that it is simply not used when it should be. If we store sensitive data in clear text, the work factor for an attack drops to nearly zero. So, the first task is to identify sensitive data and actually encrypt it. The second task is to ensure that the encryption keys are safely stored. After all, if you lock your front door and then leave the key under the doormat, you might as well not have bothered locking the door, as the doormat is the first place a thief will look. In our world, the equivalent is storing the encryption key along with the data it encrypts, and this applies equally to live data and backups.
As we have stated before, there is never a reason to create a custom encryption algorithm, and you should instead use vetted and stable APIs to provide cryptographic services. Care should be taken not to use APIs that implement compromised or weak algorithms. For example, many native libraries for modern languages provide the MD5 hashing algorithm, which has been shown to be much less secure than SHA-2. Yet, when faced with a choice, developers will often choose the algorithm that is easiest to understand and get working.
The functions that carry out encryption and decryption services for an application should be explicitly protected to make sure an attacker cannot easily access that functionality. Implementing the principle of least privilege is a good way to kick start that secure access.
When discussing cryptographic issues that require prevention and mitigation steps to be taken, we can break down the problems into five areas – data at rest, algorithms, agility, key management, and access control.
A huge number of threats to data at rest can be mitigated by following four simple rules:
Encrypt all sensitive data.
Use salting when hashing sensitive values such as passwords to increase the work factor for brute-force attacks.
Do not allow sensitive data – encrypted or not – to cross from safe zones to unsafe zones. The zones are determined by using threat modeling.
If feasible, separate sensitive data from non-sensitive data using naming conventions and strong types. This makes it simpler to identify code blocks that use unencrypted data when it should be encrypted.
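The salting rule above can be sketched with Python’s standard library. `hashlib.pbkdf2_hmac` applies both a per-secret salt and a configurable iteration count, which is exactly what drives up an attacker’s work factor. (The function name `hash_secret` and the parameter choices here are illustrative; real iteration counts should follow current guidance such as NIST SP 800-132.)

```python
import hashlib
import os

def hash_secret(secret, salt=None, iterations=100_000):
    """Salted, deliberately slow hash of a secret (illustrative sketch)."""
    if salt is None:
        salt = os.urandom(16)          # unique random salt per secret
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode("utf-8"),
                                 salt, iterations)
    return salt, digest

# The same secret hashed twice yields different digests, so a
# precomputed (rainbow) table is useless against the stored values.
s1, d1 = hash_secret("hunter2")
s2, d2 = hash_secret("hunter2")
print(d1 != d2)          # True – different salts, different digests

# Verification simply re-uses the stored salt.
_, d3 = hash_secret("hunter2", salt=s1)
print(d3 == d1)          # True
```

Note that the salt is stored alongside the digest; its job is not secrecy but uniqueness, forcing the attacker to brute-force each record individually.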
When we use the term appropriate algorithm usage, we mean four things:
The encryption algorithm is not custom.
The encryption algorithm is a standard and has not been proven weak. For example, both AES and DES are standards, but DES has been proven to be weak. Keep in mind that we could select a secure algorithm but choose inputs that render it insecure – for example, AES paired with an undersized key or a weak mode of operation. Alternatively, if we simply make sure that our selected algorithm and inputs align with FIPS-197, then we are golden.
Older cryptography APIs are not being used, such as CryptoAPI, which has been supplanted by Cryptography API: Next Generation, or CNG.
The software’s design is such that algorithms can be quickly swapped if needed. Given how severe the consequences are if a broken algorithm is used, any application must be able to swap in a different algorithm swiftly if the current one is broken. Otherwise, we will not be able to deliver the ‘A’ in CIA, because the application must be taken down while it is retrofitted.
This last bullet point is called agility, which is our third problem area. Cryptographic agility is the ability to quickly swap out cryptographic algorithms at any time we choose. There is a substantial precedent for requiring this capability as Figure 95 shows.
Type of Algorithm             Banned Algorithm                                             Acceptable or Recommended Algorithm
Symmetric                     DES, DESX, RC2, SKIPJACK, SEAL, CYLINK_MEK, RC4 (<128 bit)   3DES (2 or 3 key), RC4 (>=128 bit), AES
Asymmetric                    RSA (<2048 bit), Diffie-Hellman (<2048 bit)                  RSA (>=2048 bit), Diffie-Hellman (>=2048 bit), ECC (>=256 bit)
Hash (including HMAC usage)   SHA-0, SHA-1, MD2, MD4, MD5                                  SHA-2 (includes SHA-256, SHA-384, SHA-512)
Figure 95: Banned and Acceptable Algorithms
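Cryptographic agility as described above can be sketched as a configuration-driven lookup table: the algorithm lives in configuration, not in code, so retiring a broken algorithm is a one-line change. (This is a minimal illustration; the names `APPROVED` and `make_hasher` are invented here, and a real system would cover encryption as well as hashing.)

```python
import hashlib

# Only vetted algorithms are exposed; banned ones (MD5, SHA-1, ...)
# simply are not in the table.
APPROVED = {
    "sha256": hashlib.sha256,
    "sha384": hashlib.sha384,
    "sha512": hashlib.sha512,
}

def make_hasher(config):
    """Look the algorithm up by name so it can be swapped via
    configuration rather than a code change."""
    name = config["hash_algorithm"]
    if name not in APPROVED:
        raise ValueError(name + " is not an approved algorithm")
    return APPROVED[name]

config = {"hash_algorithm": "sha256"}         # e.g. loaded from a file
digest = make_hasher(config)(b"some data").hexdigest()
print(len(digest))                            # 64 hex characters

# If SHA-256 were ever broken, only the configuration changes:
config["hash_algorithm"] = "sha512"
digest = make_hasher(config)(b"some data").hexdigest()
print(len(digest))                            # 128 hex characters
```

The same pattern also gives us a natural enforcement point for the banned list in Figure 95: an algorithm absent from the table simply cannot be selected.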