Friday, 6 August 2010

USN Change Journal

This post includes

  • a new method to recover USN Change Journal artefacts from unallocated
  • some background information
  • some commentary on benefiting from the existing work of Lance Mueller and Seth Nazzaro

Background
The examination of USN Change Journals is nothing new; Lance Mueller blogged about it as long ago as September 2008. My interest was piqued more recently when Harlan Carvey discussed the Python script written by Seth Nazzaro to parse these journals.

The update sequence number (USN) change journal provides a persistent log of all changes made to files on a volume. As files, directories, and other NTFS objects are added, deleted, and modified, NTFS enters records into the change journal (one journal is maintained per volume). Each record indicates the type of change and the object changed, and new records are appended to the end of the stream. Programs can consult the USN change journal to determine all the modifications made to a set of files.

The part of the USN change journal that matters to us is the $USNJRNL•$J file found in the $Extend folder at the root of applicable NTFS volumes. This file is a sparse file, which means that only non-zero data is allocated and written to disk; from a practical point of view the relevance of this will become obvious in the next section of this post. The capability to create and maintain USN change journals exists in all versions of NTFS from version 3.0 onwards, so they can exist in all Windows versions from Windows 2000 onwards. However, the functionality was only turned on by default in Vista and subsequent Windows versions.

You might be thinking by now: why, from an evidential perspective, does the USN Change Journal matter? A good question, and in many cases with data in a live file system the journal entries might not assist. However, they may be relevant in establishing the latest event to occur to a file. Each event is recorded as a reason code within the corresponding journal record. These reason codes are detailed in Lance's post and by Microsoft here. Where I think the journal entries may be more useful is in establishing some information about a file that no longer exists in the live file system but is referenced elsewhere.
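The way several reason codes amalgamate into one record is easier to follow with the flag values in front of you. A minimal sketch, with the flag values taken from Microsoft's USN_RECORD documentation (the helper name and layout are mine, not taken from either script discussed here):

```python
# Decode a USN change journal "reason" bitmask into human-readable flags.
# Flag values are from Microsoft's USN_RECORD documentation; the helper
# itself is illustrative only.
USN_REASONS = {
    0x00000001: "DATA_OVERWRITE",
    0x00000002: "DATA_EXTEND",
    0x00000004: "DATA_TRUNCATION",
    0x00000010: "NAMED_DATA_OVERWRITE",
    0x00000020: "NAMED_DATA_EXTEND",
    0x00000040: "NAMED_DATA_TRUNCATION",
    0x00000100: "FILE_CREATE",
    0x00000200: "FILE_DELETE",
    0x00000400: "EA_CHANGE",
    0x00000800: "SECURITY_CHANGE",
    0x00001000: "RENAME_OLD_NAME",
    0x00002000: "RENAME_NEW_NAME",
    0x00004000: "INDEXABLE_CHANGE",
    0x00008000: "BASIC_INFO_CHANGE",
    0x00010000: "HARD_LINK_CHANGE",
    0x00020000: "COMPRESSION_CHANGE",
    0x00040000: "ENCRYPTION_CHANGE",
    0x00080000: "OBJECT_ID_CHANGE",
    0x00100000: "REPARSE_POINT_CHANGE",
    0x00200000: "STREAM_CHANGE",
    0x80000000: "CLOSE",
}

def decode_reasons(mask):
    """Return the individual reason flags set in a record's reason field."""
    return [name for bit, name in sorted(USN_REASONS.items()) if mask & bit]
```

For example, a reason field of 0x80000102 decodes to DATA_EXTEND, FILE_CREATE and CLOSE - a file created, written to, and closed.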

Lance Mueller's Enscript
Lance's script is designed to parse a live $USNJRNL•$J file and output the parsed records into a CSV file. Like Seth Nazzaro, I found that when I tried to run the EnScript, EnCase hung. This turned out not to be a problem with the script but with how EnCase presents sparse files. My $USNJRNL•$J file was recorded as being over 6GB in size, yet only the last 32MB (or thereabouts) contained any data; the preceding data was a lot of zeroes (00 00 00 00, ad infinitum). Because the file is sparse, the zeroed-out portion is not actually stored on disk - it is just presented virtually. However, it appears the script needed to parse through almost 6GB of zeroes before it got to the juicy bits, which gave the appearance of the script hanging (or resulted in EnCase running out of memory). The solution was simple: copy out the non-zero data into a file entitled $USNJRNL•$J, create a folder named $Extend and place the extracted file into it, then drag the folder into a new EnCase case as single files. Rerun the script, which will then process the entries almost instantaneously.
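If you would rather script the copy-out stage, a rough sketch of skipping the sparse zeroes is below. The file names are illustrative, and you would still want to sanity-check that the first non-zero byte really is the start of a record:

```python
# Copy out only the non-zero tail of an exported $UsnJrnl $J file so the
# parsing scripts do not have to wade through gigabytes of sparse zeroes.
# Paths are illustrative; adjust to your own case.
CHUNK = 1024 * 1024

def strip_leading_zeroes(src_path, dst_path):
    """Write everything from the first non-zero byte of src to dst.
    Returns the offset of the first non-zero byte, or None if the
    file contains no non-zero data at all."""
    with open(src_path, "rb") as src:
        offset = 0
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                return None  # file was empty or entirely zeroes
            stripped = chunk.lstrip(b"\x00")
            if stripped:
                # First non-zero byte falls inside this chunk.
                start = offset + (len(chunk) - len(stripped))
                break
            offset += len(chunk)
        src.seek(start)
        with open(dst_path, "wb") as dst:
            while True:
                chunk = src.read(CHUNK)
                if not chunk:
                    break
                dst.write(chunk)
    return start
```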

Seth Nazzaro's Python Script
Seth wrote his script because he had difficulty running the EnScript, possibly for the reasons described above. I have described how to run the script in my earlier Python post. The script is useful for validating results ascertained by other means, and particularly for the comprehensive way it parses the reason codes (many record entries contain more than one reason code, and the way they amalgamate can be a bit confusing). The script also outputs to a CSV file.

Recovering USN Change Journal Records from unallocated
Regular readers will know that I am particularly keen on recovering file system and other data from unallocated, and I am pleased to see I am not alone. In many cases - for example where an OS installation has been carried out over the top of the OS where your evidence was created - we have no choice but to recover evidence from unallocated.

It is possible to locate large numbers of deleted USN Change Journal Records in unallocated clusters. There is a clear structure to them.
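That structure is Microsoft's documented USN_RECORD version 2 layout: a record length, version numbers, file and parent file reference numbers, the USN, a FILETIME timestamp, the reason code, and the filename in UTF-16. Incidentally, the landmark used in the Blade profile below (\x00\x3c\x00 at offset 57) corresponds to the high byte of FileNameLength followed by a FileNameOffset of 60 (0x3C), the size of the fixed-length portion of a version 2 record. As a rough illustration of the layout (a sketch based on the documented structure, not a replacement for the scripts above):

```python
import struct

# Parse a single version 2 USN record from a bytes buffer. Field layout
# follows Microsoft's USN_RECORD_V2 structure; this is an illustrative
# sketch only.
def parse_usn_record(buf, pos=0):
    record_length, major, minor = struct.unpack_from("<IHH", buf, pos)
    (file_ref, parent_ref, usn, timestamp, reason, source_info,
     security_id, attributes, name_length, name_offset) = struct.unpack_from(
        "<QQQQIIIIHH", buf, pos + 8)
    name = buf[pos + name_offset: pos + name_offset + name_length]
    return {
        "length": record_length,
        "version": (major, minor),
        "file_ref": file_ref,
        "parent_ref": parent_ref,
        "usn": usn,
        "timestamp": timestamp,   # 64-bit Windows FILETIME
        "reason": reason,
        "filename": name.decode("utf-16-le"),
    }
```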





To carve these from unallocated I use my file carver of choice, Blade. I have created a Blade data recovery profile which recovered a very large number of records from my test data.

Profile Description: $UsnJrnl·$J records
ModifiedDate: 2010-07-14 08:32:57
Author: Richard Drinkwater
Version: 1.7.10195
Category: NTFS artefacts
Extension: bin
SectorBoundary: False
HeaderSignature: ..\x00\x00\x02\x00\x00\x00
HeaderIgnoreCase: False
HasLandmark: True
LandmarkSignature: \x00\x3c\x00
LandmarkIgnoreCase: False
LandmarkLocation: Static: Byte Offset
LandmarkOffset: 57
HasFooter: False
Reverse: False
FooterIgnoreCase: False
FooterSignature: \x00
BytesToEOF: 1
MaxByteLength: 1024
MinByteLength: 64
HasLengthMarker: True
UseNextFileSigAsEof: False
LengthMarkerRelativeOffset: 0
LengthMarkerSize: UInt32

You may wish to be more discriminating and carve only the records relating to, say, avi and lnk files. A small change to the Landmark Signature achieves this.

LandmarkSignature: \x00\x3c\x00[^\x2E]+\x2E\x00[al]\x00[vn]\x00[ik]
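To see what that landmark actually matches, here is the same expression exercised in Python against the UTF-16 filename bytes that follow offset 57 of a record (an illustrative re-creation, not Blade itself). Note that the three character classes match independently, so besides .avi and .lnk this pattern will also hit combinations such as .ani or .lvk, which need weeding out at review:

```python
import re

# The Blade landmark signature as a byte regex: the high byte of
# FileNameLength, FileNameOffset 0x003C, then a UTF-16LE filename
# ending in an extension drawn from [al][vn][ik].
LANDMARK = re.compile(rb"\x00\x3c\x00[^\x2E]+\x2E\x00[al]\x00[vn]\x00[ik]")

def landmark_hits(name):
    """Simulate the bytes from offset 57 of a record and test the landmark."""
    data = b"\x00\x3c\x00" + name.encode("utf-16-le")
    return LANDMARK.search(data) is not None
```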

The next step is to process the recovered records. Given that we already have two separate scripts to do this, all we have to do is present the recovered records to the scripts in a form they recognise. This is achieved by concatenating the recovered records contained within the Blade output folders.


This can be done at the command prompt, folder by folder: >copy *.bin $USNJRNL•$J. However, if you have recovered a very large number of records and have a considerable number of Blade output folders this can be a bit tedious. To assist with this, John Douglas over at QCC wrote me a neat program to automate the concatenation within the Blade output folders (email me if you would like a copy). John's program, Concat, creates a concatenated file within each output folder in one go. Once you have the concatenated $USNJRNL•$J files you can run either script against them. Please note the folder structure the EnScript requires, as referred to above.
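For anyone who wants to roll their own in the meantime, the same folder-by-folder concatenation can be sketched in a few lines of Python. This is an illustrative stand-in, not John's Concat; the output filename substitutes a full stop for the ADS colon:

```python
import os

# Walk a Blade output tree and, in each folder containing carved .bin
# records, concatenate them into a single journal file for the parsing
# scripts. Illustrative only; the default output name is an assumption.
def concat_blade_output(root, out_name="$UsnJrnl.$J"):
    made = []
    for dirpath, _dirnames, filenames in os.walk(root):
        bins = sorted(f for f in filenames if f.lower().endswith(".bin"))
        if not bins:
            continue
        out_path = os.path.join(dirpath, out_name)
        with open(out_path, "wb") as out:
            for f in bins:
                with open(os.path.join(dirpath, f), "rb") as part:
                    out.write(part.read())
        made.append(out_path)
    return made
```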

Carving individual records in this fashion will result (at least in my test cases) in the recovery of a lot of records - possibly hundreds of thousands - and there will be considerable duplication. Excel 2007 or later will assist with de-duplicating the scripts' output.
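If Excel is not to hand, exact duplicate rows can also be dropped with a short script (illustrative; it assumes the CSV output of either script and keeps first-seen order):

```python
import csv

# Drop exact duplicate rows from a parsed-records CSV file while
# preserving the order in which rows were first seen. Paths are
# illustrative; returns the number of unique rows written.
def dedupe_csv(src_path, dst_path):
    seen = set()
    with open(src_path, newline="") as src, \
         open(dst_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            key = tuple(row)
            if key not in seen:
                seen.add(key)
                writer.writerow(row)
    return len(seen)
```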

Given the potentially large number of records that are recoverable, I found it sensible to:

  • run a restricted Blade recovery profile for just the file types you are interested in (e.g. avi and lnk)
  • run John Douglas's concat.exe across Blade's output
  • in Windows 7, use the search facility to locate each concatenated $USNJRNL•$J file
  • copy them all into one folder, allowing Windows to rename the duplicates
  • at the command prompt use a for loop to process them along the lines of
    >for /L %a in (1,1,40) do python UsnJrnl-24NOV09.py -f "UsnJrnl$J (%a)" output%a -t
    or
    >for /L %a in (1,1,40) do python UsnJrnl-24NOV09.py -f "UsnJrnl$J (%a)" -s >> output.tsv
  • or drag the concatenated files back into Encase as single files and process further with Lance's script.

References
http://www.opensource.apple.com/source/ntfs/ntfs-64/kext/ntfs_usnjrnl.h?txt
http://www.microsoft.com/msj/0999/journal/journal.aspx
http://technet.microsoft.com/en-us/library/cc976808.aspx
http://wapedia.mobi/en/USN_Journal
http://code.google.com/p/parser-usnjrnl/
http://www.forensickb.com/2008/09/enscript-to-parse-usnjrnl.html


3 comments:

Anonymous said...

Brilliant Richard, just brilliant.

Manfred said...

Hi,

can you send me a copy of the concatenation program? This would save me a little work :). Thank you very much.


sincerely

Markus

Anonymous said...

check out the website http://www.tzworks.net/prototypes.php and download/look at the tool called 'jp' for parsing the change log journal on a live system.