Tuesday, 15 May 2012

Old Servers never die – unfortunately

But you can bet your last penny that at some stage you will have to image them.  That is the problem I faced one wet weekend recently when I was required to image an HP behemoth resplendent with two sizable RAID 5 arrays and two USB 1 ports.  All drive bays and ports were in use, so I could not insert a new drive into the box to image to, and I didn't fancy imaging all the elderly SCSI RAID hard drives separately.  I was permitted to shut down the server and decided to boot the box to a forensic Linux distro that had suitable HP RAID controller drivers.

The problem I faced was USB 1.  Obviously I needed to output my images somewhere, and an external USB hard drive was an option.  But the maths didn't add up – the maximum bandwidth of a USB 1 port is 12 megabits per second (Mbps), which equates to 1.5 megabytes per second (MB/s), which equates to 5.4 gigabytes per hour.  There were not going to be enough hours in the weekend to image both arrays on the server.
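That back-of-the-envelope sum can be checked in one line (a sketch using awk, which ships on any Linux boot disc; rounded as in the post, taking 1 GB as 1000 MB):

```shell
# USB 1.1 full speed is 12 megabits per second; divide by 8 bits per byte
# for MB/s, then scale up to gigabytes per hour.
awk 'BEGIN {
    mbps = 12                  # USB 1 bandwidth in megabits per second
    mb_s = mbps / 8            # megabytes per second
    gb_h = mb_s * 3600 / 1000  # gigabytes per hour (1 GB taken as 1000 MB)
    printf "%.1f MB/s, %.1f GB/h\n", mb_s, gb_h
}'
```

At 5.4 GB/h, a pair of sizable RAID 5 arrays simply will not fit into a weekend.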

What I did next I thought might be worth sharing with you.  I used dd to create a source image, netcat to pipe it to an onsite laptop across a network, and ewfacquirestream to capture the dd image, hash it and write it into EnCase evidence files.  It can be carried out entirely at the command line.  Crucially, I achieved an imaging speed of about 25 MB/s, which is 1.46 gigabytes a minute or nearly 88 gigabytes an hour, using gigabit network interface cards.  In testing I have achieved 39 gigabytes an hour using 10/100 NICs.

Method to image computers across a network

  1. I connected my onsite laptop and the server via Cat5E cables to a Netgear GS105 5 port gigabit switch.  I attached a 2TB external hard drive to my onsite laptop and booted both the server and my laptop to a DEFT 7 forensic Linux distro.
  2. Configure Ethernet settings on both machines, using gigabit NICs (10/100/1000) if available
    • Launch a terminal and at the prompt type sudo su
    • At the prompt type ifconfig to identify the network cards
    • At the prompt type ifconfig eth0 192.168.0.100 on the onsite laptop and ifconfig eth0 192.168.0.101 on the machine to be imaged (these commands assume that you are plugged into eth0 – if there is more than one NIC on the computer to be imaged it might be eth1 or higher)
    • Test the connection by typing at the prompt ping -c 5 192.168.0.100 or ping -c 5 192.168.0.101 as appropriate
  3. On on-site laptop
    • Connect collection hard disk drive
    • Launch terminal and at prompt type sudo su
    • At the prompt type fdisk -l to identify the storage drive
    • Create a folder to mount the storage drive to by typing mkdir /mnt/(name of your folder)
    • Next mount the storage drive to your folder by typing mount /dev/(sdb2 or whatever) /mnt/(name of your folder)
    • Now we create a netcat listener and pipe it to ewfacquirestream – at the prompt type, but don't press enter just yet, nc -l 1234 | ewfacquirestream -c none -b 4096 -C case_number -D description -w -E evidence_number -e 'Richard Drinkwater' -t /mnt/(name of your folder)/(name of your evidence files)
      [relevant switches: -c compression type: none, fast or best; -b number of sectors to read at once: 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384 or 32768; change the words in italics to suit, and use single quote marks ('-- --') to group more than one word]
  4. On machine to be imaged
    • At the prompt type sudo su
    • At the prompt type fdisk -l to identify the drive to be imaged
    • Next we prepare to dd the drive to be imaged and pipe it to netcat – at the prompt type dd if=/dev/sdb conv=noerror,sync bs=4096 | nc 192.168.0.100 1234 but don't press enter (if you are imaging a server with an HP RAID card the command might look something like dd if=/dev/cciss/c0d0 bs=4096 conv=noerror,sync | nc 192.168.0.100 1234)
  5. Start the imaging process
    • First press enter within the terminal on the onsite laptop to start the netcat listener
    • Then press enter within the terminal on the machine to be imaged to start dd
  6. When the acquisition completes, ewfacquirestream outputs an MD5 hash calculated over the acquired data to the terminal. Either photograph this value or copy and paste it into a text file on your collection hard disk drive.
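Pulling steps 3–5 together, the method boils down to one command on each side.  The device names, IP address and switches below are the examples used above; the target path is a placeholder and everything will need adjusting to your own kit:

```shell
# On the onsite laptop (receiver) -- start this FIRST.
# Listen on TCP port 1234 and stream whatever arrives into EWF (EnCase)
# evidence files, hashing as it goes.
nc -l 1234 | ewfacquirestream -c none -b 4096 \
    -C case_number -D description -w -E evidence_number \
    -e 'Richard Drinkwater' -t /mnt/collection/evidence

# On the machine to be imaged (sender) -- start this SECOND.
# Read the RAID block device raw and pipe it to the listener.
dd if=/dev/cciss/c0d0 bs=4096 conv=noerror,sync | nc 192.168.0.100 1234
```

The order matters: if dd is started before the listener exists, the connection is refused and the acquisition never starts.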

 

Notes re imaging speed

In testing where both NICs are gigabit, speeds of over 40 MB/s (144 GB/h) can be achieved. With 10/100 NICs up to 11 MB/s (39.6 GB/h) can be expected. Compression and block size do affect imaging speed, and if you have time it may be worth fine-tuning these settings. The settings shown in this post are probably a good starting point. To fine-tune, run the imaging process with the settings in this post. If after 5 minutes or so you are getting poor speeds, stop the process and try adjusting the compression setting on the onsite laptop (i.e. change from none to fast). Doubling or halving the block size on both source and receiver machines can sometimes make a difference too.
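When fine-tuning it helps to see the transfer rate as you go rather than timing five-minute runs by hand.  This is not part of the method above, but GNU dd on Linux prints its byte count, elapsed time and average rate to its own terminal when sent SIGUSR1, so with the pipeline running you can poll it from a second terminal on the machine being imaged (assumes pkill is on the boot disc, which it is on most Linux distros):

```shell
# In a second terminal on the machine being imaged: every 60 seconds ask
# the running dd for a progress report.  The figures (records copied,
# bytes, seconds, average rate) appear in dd's own terminal, not this one.
# The loop ends by itself when dd exits and pkill finds nothing to signal.
while pkill -USR1 -x dd; do
    sleep 60
done
```

Comparing the reported rate after changing compression or block size shows quickly whether a tweak helped.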

4 comments:

Anonymous said...

Excellent post

Just a bit of fine tuning...
Would it be possible to run the stream through a compression tool like gzip before piping it over the network then decompressing it at the other end before storing it.
It may seem like a lot of trouble but if the bottleneck is the network transfer rates then piping less data over it may speed things up even more. I suspect that processing speeds to compress/decompress will easily keep up (unless the server is really old)

Paul

Anonymous said...

You can use dd & gzip and just pipe them both through SSH.

dd if=/dev/sda | gzip | ssh user@backup.remotehost.com dd of=/backup/drive.img.gz
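Paul's suggestion can also be sketched without ssh, as a gzip stage on each side of the netcat pipe from the post (a hypothetical variant, not from the original method – the target path is a placeholder, and whether it helps depends on whether the CPU or the network is the bottleneck):

```shell
# Sender: compress the raw image before it hits the wire.
# gzip -1 trades compression ratio for speed, which suits a live pipe.
dd if=/dev/sda bs=4096 conv=noerror,sync | gzip -1 | nc 192.168.0.100 1234

# Receiver: decompress before handing the stream to ewfacquirestream,
# so the evidence files still contain (and hash) the raw dd image.
nc -l 1234 | gunzip | ewfacquirestream -c none -b 4096 -t /mnt/collection/evidence
```

Decompressing before ewfacquirestream matters: the MD5 it reports must be over the raw device data, not the gzip stream.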

gui said...

Very good.

How did you identify the distro which had the driver for the HP controllers?



