We are lucky in our lab that our workstations are upgraded on a regular basis, so once a machine is in use we don't often make many changes to it.
The most important bits of my current spec are as follows:
- Supermicro X7DWA-N motherboard fitted into a Supermicro CSE-733TQ-645 chassis
- Two Intel Xeon X5482 processors
- 16GB of DDR2-800 RAM
- Western Digital 300GB VelociRaptor 10,000 rpm hard drive for the OS
- 3 x 1TB Samsung HE103UJ Spinpoint F1 hard drives in a RAID 0 array
- Microsoft Windows XP 64-bit
- 256MB ATI FirePro V3700 GPU
At one time our primary forensic software Encase would max out the processor when carrying out pretty much any process. With the advent of dual multi-core processors we aim to max out one core (which on my box shows as 13% CPU utilisation in Task Manager). As processors get faster and faster I have noticed that often the CPU core is not maxing out. Something else is slowing us down!

We store our Encase evidence files on the RAID 0 array (and, before anyone posts a comment about the lack of data resilience, the way our lab is set up means all the data on my RAID 0 array is mirrored elsewhere). We do this for speed and capacity. When Encase (and most other forensic utilities for that matter) is processing, it has a voracious appetite for data - just look at the Read bytes value in Task Manager. The multi-core processors allow us to run other forensic programs (FTK, NetAnalysis, HstEx etc.) alongside Encase; we can even run other instances of Encase, and because we can - we do. All of these programs compete to read data from the same storage - the RAID 0 array in my case, and wherever you keep your evidence files in yours - and once that storage is maxed out, everything slows down. It follows then that performance can be increased by having faster data storage.
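As a side note on that 13% figure, here is a minimal sketch of the arithmetic, assuming the two quad-core X5482s present eight logical cores to Windows (the X5482 has no Hyper-Threading):

```python
# Rough arithmetic behind the "13%" figure in Task Manager.
# Assumes two quad-core Xeon X5482s with no Hyper-Threading,
# i.e. eight logical cores visible to Windows.

logical_cores = 2 * 4                     # two quad-core packages
one_core_share = 100 / logical_cores      # share of total CPU that one saturated core represents
print(f"One maxed-out core shows as ~{one_core_share:.1f}% total CPU")  # ~12.5%, rounds to ~13%
```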
One way to achieve this would be faster hard drives. We use SATA hard drives for capacity reasons and, to an extent, cost. SAS hard drives are faster but don't provide the capacity. So, as things stood, three hard drives in a RAID 0 array was the best that could be done. I decided to see how I could make some improvement.
Currently the three hard drives (and the OS drive) connect to the Intel ESB2 RAID controller integrated on the motherboard. Conventional wisdom would have it that adding a fourth hard drive to the RAID 0 array would speed things up.
HD Tach reports an average sequential read speed of around 200 MB/s for a three-drive array using the default stripe size (128 KB), formatted NTFS with the default cluster size.
Adding a fourth drive slowed the sequential read speed to around 180 MB/s.
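A quick back-of-the-envelope check, using only the HD Tach averages above, shows why this result points at the controller rather than the drives (the per-drive figure here is derived, not measured):

```python
# Back-of-the-envelope check of how the RAID 0 array scales, using only the
# HD Tach averages quoted above. The per-drive figure is derived, not measured.

three_drive_read = 200.0   # MB/s, measured with three drives
four_drive_read = 180.0    # MB/s, measured with four drives

per_drive_effective = three_drive_read / 3      # ~67 MB/s per drive through the ESB2
ideal_four_drive = per_drive_effective * 4      # what linear scaling would give

print(f"Effective per-drive read: {per_drive_effective:.0f} MB/s")
print(f"Ideal four-drive figure: {ideal_four_drive:.0f} MB/s, measured: {four_drive_read:.0f} MB/s")
# Going backwards rather than forwards suggests the controller, not the drives, is the ceiling.
```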
I tested a variety of stripe sizes and aligned the partitions, but came to the conclusion that the Intel ESB2 controller just does not scale up to four drives very well. The arrays were created via the utility accessed from the controller BIOS during boot-up. This utility is very basic and does not allow much configuration. Intel also provides a Windows utility called Intel Matrix Storage Console. When running this utility I found that by default Volume Write-Back Cache was disabled; enabling it made a significant improvement.
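For anyone wanting to check their own alignment: a partition start is aligned when its byte offset is a whole multiple of the stripe size. A minimal sketch of that check follows - the sector offsets are just illustrative values, with 3072 being the offset I use on the Areca array described below:

```python
# A partition start is "aligned" when its byte offset is a whole multiple of the
# array's stripe size. The offsets below are examples, not values read from my array.

SECTOR_BYTES = 512

def is_aligned(start_sector: int, stripe_kb: int) -> bool:
    """Return True if the partition start lands exactly on a stripe boundary."""
    return (start_sector * SECTOR_BYTES) % (stripe_kb * 1024) == 0

print(is_aligned(63, 128))    # False - the old Windows XP default of sector 63 misaligns
print(is_aligned(3072, 64))   # True  - 3072 sectors = 1536 KB, a whole multiple of 64 KB
```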
Conventional wisdom has it that a hardware RAID controller would improve performance over the Intel ESB2, and in my testing this seems to be the case. I have used an Areca 1212 PCI-E RAID card and achieved a sequential read speed of over 600 MB/s.
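HD Tach is the proper tool for measuring this, but a rough sanity check of sequential read is simply to time large reads from a big file on the array. The sketch below is only indicative - the file path is a placeholder, and the Windows file cache will inflate the figure if the file has been read recently:

```python
# A very rough sequential-read check - nothing like as thorough as HD Tach.
# Reads an existing large file in big chunks and reports the average rate.
# TEST_FILE is a placeholder; point it at a large file on the array.

import time

TEST_FILE = r"D:\testdata\large_file.bin"   # placeholder path
CHUNK = 8 * 1024 * 1024                     # 8 MB reads

start = time.time()
total = 0
with open(TEST_FILE, "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)

elapsed = time.time() - start
print(f"Read {total / 1e6:.0f} MB at {total / 1e6 / elapsed:.0f} MB/s")
```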
This array has four 1TB SATA hard drives with a 64 KB stripe, is partition aligned at 3072 sectors and has one NTFS volume with the default cluster size. Using SyncBack to write to the array from our file server across a copper Gigabit Ethernet network produces some pretty impressive network utilisation stats.
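Those network stats look impressive mainly because the array is no longer the bottleneck: a copper gigabit link tops out well below what the Areca array can sustain, so the link itself saturates. The arithmetic:

```python
# Raw capacity of a gigabit link, before Ethernet/TCP overhead is taken off.
link_bits_per_s = 1_000_000_000
raw_mb_per_s = link_bits_per_s / 8 / 1e6
print(f"Gigabit Ethernet ceiling: ~{raw_mb_per_s:.0f} MB/s")  # ~125 MB/s, far below the array's 600 MB/s
```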