LINUX ANALYSIS
Now that you have looked at the Linux file system, you can start doing some real investigation of user activity and tracking malicious activity. You have some theory under your belt, so let’s take a look at a real file system to see how that theory translates into usage. We will use both SMART and an open source set of command-line tools (The Coroner’s Toolkit, TCT), which when used together can perform a fairly comprehensive internal investigation and can act as a cross-validation. The open source tools require an image that is dd-compatible, meaning that it is a noncompressed, byte-for-byte copy of the partition. As with any investigation, you should always use some sort of case management tool such as SMART, and hash, hash, hash.
Finding File System Signatures
You may run into situations in which the disk appears to have been cleared of the partition table in an effort to destroy evidence. If you look at the disk with an editor and see that good information is still on the disk, chances are you can recover the data by finding the partition signatures and reconstructing the disk. Some tools can do this for you, but to complete the task in a forensically sound manner, you should use a tool like SMART to find the signatures and reconstruct the disk yourself. Remember that in ext2/ext3, everything is based on the superblock, so if you can find that, you can reconstruct the whole file system.
You can use several good utilities to find the partition information, such as findsuper and PartitionMagic. For more information, check out the Linux Documentation Project and its “Linux Ext2fs Undeletion mini-HOWTO” (www.tldp.org/HOWTO/Ext2fs-Undeletion.html), keeping in mind that they are not meant to be forensic tutorials.
The signature that you are looking for is 0xef53. Realize that you will get a bunch of false positives to go along with the true superblock. Also realize that backup copies of the superblock are stored all over the file system, so if the primary one is gone, you can still re-create it from a backup.
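Because the signature is stored little-endian on disk, a raw scan actually looks for the byte pair 53 ef, and only hits that sit 56 bytes into a plausible superblock (the primary one starts 1,024 bytes into the partition) are worth chasing. Here is a minimal Python sketch of such a scan, run against a synthetic image rather than real evidence; offsets and layout follow the ext2 on-disk format:

```python
import struct

EXT_MAGIC = 0xEF53   # ext2/ext3 superblock signature
MAGIC_OFFSET = 56    # s_magic lives 56 bytes into the superblock
SB_OFFSET = 1024     # the primary superblock starts 1,024 bytes into the partition

def find_signature_offsets(image_bytes):
    """Return every byte offset where the little-endian magic appears."""
    needle = struct.pack("<H", EXT_MAGIC)  # b'\x53\xef'
    hits, pos = [], image_bytes.find(needle)
    while pos != -1:
        hits.append(pos)
        pos = image_bytes.find(needle, pos + 1)
    return hits

# Synthetic "partition": zeros, with the magic planted where a real primary
# superblock would hold it, plus a stray occurrence acting as a false positive.
image = bytearray(64 * 1024)
image[SB_OFFSET + MAGIC_OFFSET:SB_OFFSET + MAGIC_OFFSET + 2] = struct.pack("<H", EXT_MAGIC)
image[30000:30002] = struct.pack("<H", EXT_MAGIC)  # random bytes, not a superblock

for off in find_signature_offsets(bytes(image)):
    # A hit is only a superblock candidate if backing up 56 bytes lands on a
    # plausible superblock start; everything else is a false positive.
    print(off, "-> possible superblock starting at", off - MAGIC_OFFSET)
```

On a real image you would then test each candidate for internal consistency (block counts, mount times, and so on) before trusting it.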
Locating and Recovering Deleted Files
The first thing a suspect will do when she thinks she is in trouble is delete incriminating evidence. Searching for and recovering deleted files is a vital part of a Linux investigation. The first place to start is the inode table, which acts like a lookup table for the file system. When a file is deleted, its inode is marked as deleted and made available for overwriting. If you can get to the inode before it is overwritten, you can reconstruct the file. If you want to use open source tools, the TASK toolkit has several command-line tools that can help. The tool fls can find files that have been deleted and help you get their inodes. Once you have the inodes, you use icat to pull the file out of the inode, if it still exists.
Recovering Files with SMART
SMART has great tools for performing analysis on ext2/ext3 file systems. It supports searching through unallocated space and finding deleted files. To find deleted files, mount the image read-only in SMART. Then right-click the image and choose Filesystem | SMART | Study. This will analyze the file system and find the deleted files.
After the Study has completed, choose Filesystem | SMART and look at the file list, as shown next. Right-click the Filter tab and use the drop-down box to add a filter for active or deleted files. Select Deleted and click the Apply button.
You can right-click the individual files to view them as text or graphics, to hash them, and to create reports for them. You can also export the file out so you can perform more analysis on it if you want. If you do this, make sure you hash when you export and hash during your analysis to show that the file hasn’t been changed.
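The hash-on-export, hash-on-analysis habit is easy to script. The sketch below uses SHA-256 via Python's hashlib; the file names and contents are made up purely for the demonstration:

```python
import hashlib, os, shutil, tempfile

def sha256_of(path):
    """Hash a file in chunks so large evidence files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

workdir = tempfile.mkdtemp()

# Stand-in for a file exported from the image.
recovered = os.path.join(workdir, "recovered_file")
with open(recovered, "wb") as f:
    f.write(b"evidence bytes recovered from the image")

hash_at_export = sha256_of(recovered)  # record this in your case notes

# Later, before relying on any analysis results, re-hash the working copy
# and confirm nothing has touched it since export.
working_copy = os.path.join(workdir, "working_copy")
shutil.copy2(recovered, working_copy)
hash_at_analysis = sha256_of(working_copy)

print("unchanged:", hash_at_export == hash_at_analysis)
```

Recording both values in your notes gives you a simple, defensible way to show the exported file was never altered.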
Guidelines for Analyzing Deleted Files and Partitions
Working with deleted files can be a very tricky thing. Even if you do find information that you think is important, such as that shown in the preceding section, how do you know whether it actually ties the suspect to a crime? Make sure that you are thorough when performing this type of investigative work. Find and investigate all of the deleted files, and be prepared to justify every one of them in court. All the conventional rules for forensic investigation apply.
Differences Among Linux Distributions
It’s difficult these days to throw a rock and not hit two different Linux distributions. While all are based on the same Linux core, the supporting programs and file locations can vary widely. Once you determine what version of Linux the computer was running, you can adjust your investigation accordingly. The first place to look is in the /etc directory. Typically, you’ll find a file such as redhat-release, debian_version, or redhat-version in this directory, which will indicate both the distribution and version of the install. Another file to check is /etc/issue, as most distributions place their information in this logon banner. Realize that these files are nonauthoritative and act only as markers that point you in the right direction, not as definitive answers.
If the answer cannot be found in one of these files, try looking in /var/log/dmesg or /var/log/messages. These are the startup logs for the OS, and the distribution will typically announce itself in them. In addition, these files are not as obvious as those in /etc, and less sophisticated users will not be able to sanitize them easily. Once you know the distribution, you can make some inferences about the type of user you are dealing with.
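A check like this is easy to automate once the image's /etc is accessible read-only. The sketch below looks for a few common release files (the list is illustrative, not exhaustive) and is demonstrated against a mocked-up directory rather than a real mount:

```python
import os, tempfile

# Release files that commonly identify a distribution. They are
# nonauthoritative, so treat any hit as a lead rather than proof.
RELEASE_FILES = ["redhat-release", "debian_version", "SuSE-release",
                 "lsb-release", "issue"]

def identify_distribution(etc_dir):
    """Collect the contents of any known release files under etc_dir."""
    findings = {}
    for name in RELEASE_FILES:
        path = os.path.join(etc_dir, name)
        if os.path.isfile(path):
            with open(path) as f:
                findings[name] = f.read().strip()
    return findings

# Demonstration against a mocked /etc standing in for a mounted image.
etc = tempfile.mkdtemp()
with open(os.path.join(etc, "redhat-release"), "w") as f:
    f.write("Fedora Core release 4 (Stentz)")

print(identify_distribution(etc))
```

Cross-check whatever this reports against /var/log/dmesg or /var/log/messages before writing it into your case notes.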
Ubuntu Linux Ubuntu Linux has come on strong in the past few years and has gained a lot of ground in the distribution battle, with several reporting agencies stating that it’s now the most popular distribution. A certification process exists for third-party utilities, and new releases of the operating system occur every six months. Ubuntu uses the Advanced Packaging Tool (APT) and the graphical front-end Synaptic to install new applications.
Red Hat/Fedora Linux This is the most storied Linux distribution and typically the choice of those who are new to Linux. New programs are installed using the binary RPM (Red Hat Package Manager) distribution system, and a record of every program installed on the machine appears in the RPM database. Mandrake Linux is very similar in form and function to Red Hat.
Gentoo Linux This distribution has made a lot of waves lately. The interesting thing about Gentoo is that everything is compiled on the machine on which it is being installed; nothing is done from a binary package. This can lead to epic installation times. The package manager for Gentoo is Portage, driven by the emerge command, and it acts similarly to the BSD (Berkeley Software Distribution) ports system.
SUSE Linux SUSE recently was acquired by Novell, and the company is in the process of porting all of its networking products to it. This means you may end up doing investigation on SUSE if you ever have to investigate a Novell network. The package manager for SUSE is YaST, which has capabilities similar to Red Hat’s RPM system.
Debian Linux Debian has traditionally been more of a developer’s distribution. It has one of the best package management tools, APT, and the default installation is geared toward development tools and testing.
The Importance of Determining the Distribution
Each distribution has its own way of tracking and auditing user activity and system events. If you don’t correctly identify which distribution is used early on, you will end up in the best case chasing your tail for unnecessary hours, and in the worst case missing evidence that can turn the outcome of a case. Spend the time to identify what you are dealing with and document your findings accordingly.
Tracking User Activity
The command interpreters for Linux are much more advanced than those used by Windows. From a forensics standpoint, this is a very good thing, because the two most popular interpreters, BASH and tcsh, leave audit trails that outline every command the user has run. Each shell also maintains several unique files that contain information we want to examine.
BASH
BASH is what you will most commonly run into on Linux systems. On the Linux platform, it is arguably the most advanced of the shells, with elaborate scripts and startup files that can do everything from giving you the weather at the command prompt to color-coding your prompt based on machine name. The following table takes a look at the files that BASH uses and the purpose of each file. They can be found in the user’s home directory.
Tcsh
In our experience, tcsh is the shell of choice for people who learned on a platform other than Linux. The semantics are about the same between BASH and tcsh, with even the filenames being similar.
Investigation Using Shells
Now that you have a frame of reference for the files we will be looking at, let’s look at the common ways that suspects try to subvert this system. The most common way is simple deletion. If you go to the user’s home directory and these files aren’t stored there, that should be a red flag. Either the suspect is using a shell from the 1980s, or he deleted the files. Time to fire up the deleted file recovery tool of choice and go to work. You will also commonly see that the suspect has linked the history file to the /dev/null special file. This is the kernel’s version of a black hole: things go in and nothing comes out. If you find this, check whether you can recover the file as it existed before the link was made, but you may end up just trying to find another audit trail to follow.
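Both red flags (a missing history file and one linked to /dev/null) can be checked programmatically. Here is a small Python sketch, demonstrated against a temporary directory standing in for the suspect's home; the wording of the verdicts is our own:

```python
import os, tempfile

def audit_history_file(home, name=".bash_history"):
    """Flag missing or /dev/null-redirected shell history files."""
    path = os.path.join(home, name)
    if not os.path.lexists(path):
        return "missing (possible deletion; try deleted-file recovery)"
    if os.path.islink(path) and os.path.realpath(path) == "/dev/null":
        return "symlinked to /dev/null (history deliberately discarded)"
    return "present (%d bytes)" % os.path.getsize(path)

home = tempfile.mkdtemp()
print(audit_history_file(home))   # nothing there yet: a red flag by itself

# Simulate the classic trick of pointing history at the kernel's black hole.
os.symlink("/dev/null", os.path.join(home, ".bash_history"))
print(audit_history_file(home))
```

The same check applies to tcsh's .history file; just pass a different name argument.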
Printer Activity
Determining what was printed can be extremely useful in espionage and IP cases. Linux has very stout printer auditing, derived from the older UNIX LPR daemons. Newer distributions use the CUPS daemon instead of LPR. LPR keeps a log in the /var/log directory named lpr.log and keeps the printer spools in /var/spool. Look at these files to determine who printed what and when. If you determine that CUPS is installed, look in the /var/log/cups directory for the log files. In addition, if you can’t find the logs, check the /etc directory for the configuration files to see whether the logging has been moved.
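If CUPS logging is enabled, its page_log records who printed what and when, one line per page. The field order assumed below (printer, user, job ID, bracketed timestamp, page number, copies) matches the classic page_log layout, but it can vary by CUPS version and configuration, so verify against the actual system's cupsd.conf before relying on it; the sample lines are fabricated:

```python
import re

# Classic CUPS page_log layout: printer user job-id [timestamp] page copies.
PAGE_LOG_RE = re.compile(
    r"^(?P<printer>\S+) (?P<user>\S+) (?P<job>\d+) "
    r"\[(?P<when>[^\]]+)\] (?P<page>\d+) (?P<copies>\d+)"
)

def parse_page_log(lines):
    """Turn matching page_log lines into dictionaries; skip anything else."""
    out = []
    for line in lines:
        m = PAGE_LOG_RE.match(line)
        if m:
            out.append(m.groupdict())
    return out

sample = [
    "lp0 jdoe 42 [12/Mar/2005:09:14:02 -0600] 1 1",
    "lp0 jdoe 42 [12/Mar/2005:09:14:05 -0600] 2 1",
]
for rec in parse_page_log(sample):
    print(rec["user"], "printed page", rec["page"],
          "of job", rec["job"], "on", rec["printer"])
```

Each parsed record gives you a user, a job, and a timestamp: exactly the who/what/when triplet you need for timeline reconstruction.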
Finding Printed Documents
It’s important that you locate both what was printed and who printed it. This can also serve as a tool for timeline reconstruction and as proof that a file was deleted. If you see that a file was printed and you can no longer find it in the file system, you now have a place to start when searching through deleted space.
Mounting an Image
Using the mount command and the loop kernel module, you can mount drive images in Linux. This will allow you to work with the image as a live file system. To do this, run a command like the following (the image name and mount point shown here are examples):
[root@dhcppc3 mnt]# mount -r -o loop image.dd /mnt/analysis
After this command is run, you can access the drive just as you would any other disk and run your favorite searching tools. The -r option requires some explanation: it forces the OS to mount the image as read-only. Never mount a forensic image where you can write to it. And always hash before you mount it and after you unmount it to show that you made no modifications to the data while it was mounted.
Searching Unallocated Space with Lazarus
The Coroner’s Toolkit (TCT) has a very useful tool called lazarus that attempts to recreate files from unallocated and deleted space. Be warned that this is not a fast process by any means. Simple extraction can take from hours to days, depending on the size, and the investigative time involved is also a consideration. Typically, you will want to run a tool such as unrm before you use lazarus, as unrm will extract only the deleted portions and significantly reduce the work lazarus has to do. Lazarus will then attempt to break the data into file types and blocks, and if you specify the -h option, it will actually create HTML that you can use to navigate around. Let’s look at an example:
[root@dhcppc3 blocks]# unrm ../image.dd > output
[root@dhcppc3 test]# lazarus -h ../output
Running these commands creates a set of HTML map files, along with several subdirectories containing the data files referenced in the HTML. If you take a look at the menu that lazarus creates, you’ll see that it attempts to classify and color-code the recovered data by file type.
Analyzing the Swap Space
When the OS runs out of space in RAM, it will use the swap partition to store data temporarily. This can be a source of evidence and should always be examined when you are performing a Linux investigation. Pull the image of the swap partition in the same manner used to pull the other drive images. Once you have done that, you can treat it like a binary file. Remember that the Linux swap structure consists only of data blocks that represent blocks in memory that have been swapped out. Anything that can be stored in memory can end up in the swap file, including passwords, text files that were opened using editors, pictures, and so on. The downside to this structure is that you will very rarely find an entire file. Most of the time, since the blocks are not going to be allocated sequentially and nothing is tying the blocks together like an inode, you won’t be able to pull an entire file. When you find information in the swap, make sure that you always explain what it is and how it fits into the context, and you should be OK.
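Because swap is just raw blocks, a strings(1)-style sweep for runs of printable characters is often the most productive first pass. Here is a small Python equivalent, run against a fabricated fragment rather than a real swap image:

```python
import re

def extract_strings(blob, min_len=6):
    """Pull runs of printable ASCII out of raw swap data, like strings(1)."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    return [m.group().decode("ascii") for m in pattern.finditer(blob)]

# Synthetic swap fragment: junk bytes surrounding remnants of swapped-out
# pages (the "evidence" here is invented for the demonstration).
swap = (b"\x00\x07\xff" + b"user typed: secretpassword" +
        b"\x13\x00" + b"/home/suspect/plans.txt" + b"\xfe")

for s in extract_strings(swap):
    print(s)
```

Raising min_len cuts down on false positives at the cost of missing short fragments; since swap blocks are unordered, treat every hit as an isolated fragment and explain its context rather than assuming it belongs to a whole file.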
Searching and Recovery
The ext2/ext3/ext4 file systems and Linux in general offer suspects a multitude of ways to hide their tracks. Searching through an image and recovering the evidence is a very time-consuming and meticulous process. Always approach it methodically, in a manner consistent with your documented procedures. Good search techniques can also really speed up your investigation. You will commonly find yourself going through gigabytes of unallocated or deleted space with little to no structure. This is the type of situation in which false positives can become your worst enemy. Take the time to learn how to search effectively, because it will save you a ton of time in the long run.
CHAPTER 8
MACINTOSH ANALYSIS
Whether you’re a forensic examiner who wants to use your Mac to perform forensic data analysis or an examiner who needs to create and/or analyze an image of a Macintosh computer’s disk, this chapter is for you.
After nearly 20 years in the industry, I recently attended a computer forensics conference where the majority of presentations were running from Macs. I don’t think there’s been a day that I’ve gone out into the world without seeing an iPod or an iPhone in a long time.
Although today’s Macintosh computers sport the “Intel Inside” sticker, Macs continue their tradition of being different. From a forensic analysis standpoint, there are still more similarities than differences between an Intel Mac and an Intel PC, but the two primary differences related to conducting meaningful forensics are the partitioning scheme and the file system.
Today’s Macs use the GUID Partition Table (GPT) to describe the layout of the boot volume. GPT is more extensible than earlier partitioning schemes and resolves several of the limitations inherent in the preceding implementation.
In addition to the new partitioning scheme, Apple has extensively enhanced the Hierarchical File System (HFS) volume format as well. Fortunately, Apple has a long history of doing things right. When the company introduced the newer HFS+ volume specification, it also introduced a “wrapper” that would allow older utilities to understand and respect newer file systems. When an HFS+ volume is presented to an older system that doesn’t “understand” the HFS+ volume format, the user sees a file named “Where did all my files go?” The same kind of wrapper is used to describe a “protective MBR” so that GPT-unaware programs see a single, unknown partition that occupies the entire disk.
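The protective MBR is easy to recognize: sector 0 still carries the 55 AA boot signature, and one of the four 16-byte partition entries starting at offset 446 has type 0xEE ("GPT protective"). Here is a small sketch of that check, run against a synthetic sector rather than a real disk:

```python
def has_protective_mbr(sector0):
    """Check sector 0 for the 0xEE partition type a GPT disk advertises."""
    if len(sector0) < 512 or sector0[510:512] != b"\x55\xaa":
        return False  # not a valid MBR at all
    for i in range(4):  # four primary partition entries at offset 446
        entry = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
        if entry[4] == 0xEE:  # byte 4 of an entry is the partition type
            return True
    return False

# Synthetic sector 0 of a GPT disk: one protective entry, valid signature.
mbr = bytearray(512)
mbr[510:512] = b"\x55\xaa"
mbr[446 + 4] = 0xEE
print(has_protective_mbr(bytes(mbr)))
```

A GPT-unaware tool reading this sector sees one unknown partition spanning the disk, which is exactly the behavior the wrapper is designed to produce.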
You can conduct a forensic examination of a Macintosh in a number of ways. As for the “best” way, that’s a matter of opinion. From an analysis standpoint, you can look at Mac data using a Mac or any other platform. You’ll learn about the pros and cons to each approach in this chapter.
One of the more valuable resources for a deeper understanding of the Mac OS is the Apple Developer Community. Despite its many differences when compared to other operating systems, the Mac OS is just another OS, and HFS is just another volume format when it comes down to it. The mantra “It’s all just 1’s and 0’s” is a good chant to remember if you are tasked with examining a Macintosh for the first time.
THE EVOLUTION OF THE MAC OS
As with Microsoft Windows, Apple’s operating system and file systems have undergone a relentless and driving evolution. Each major version of the Mac OS (7, 8, 9, and X) has been more complex than its predecessors and has introduced many changes in the fundamental behavior of the operating system.
Mac OS X exemplifies this with the many differences between versions in the current major release cycle. The recent focus on trustworthy computing, security concerns, and other initiatives, together with the continued integration of the OS and the Internet, has precipitated a paradigm shift not only in consumer awareness but in fundamental OS design as well.
Know Your Operating System
Even a slight difference in the OS version, patch level, or update versus a “fresh install” can have a profound effect on the way your computer works, where and what data is stored, and what format is used for that data. Try this simple exercise at home:
1. Running Microsoft Windows XP, press CTRL-ALT-DEL to open the Task Manager window.
2. Click the Processes tab.
3. Start at the first item and ask yourself what it is and what it does. Also ask yourself how it may change the default behavior of the system and when it may not operate as intended. Repeat this process for each and every item in the list.
Hacking Exposed Page 17