2019-08-15

2019 Unofficial Defcon DFIR CTF Writeup - Linux Forensics


When completing this portion of the CTF I relied heavily upon Autopsy 4.12, using the CTF as an opportunity to practice and trial a different toolset/approach. In general I was impressed, but I’m not an Autopsy user day to day and as such I was fumbling a fair bit. For the Linux portion of the challenge, in hindsight, I think mounting the image within a Linux distribution would make more sense. For that reason, in this writeup I have addressed how to solve the questions using SIFT.

You can mount the image under SIFT using the ewfmount command:
sudo ewfmount /mnt/hgfs/Cases-ssd/Evidence/Adam\ Ferrante\ -\ Laptop-Deadbox/Horcrux/Horcrux.E01 /mnt/ewf
The new file object is named ewf1:




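A quick listing of the mount point confirms the exposed object (a minimal check, assuming the same paths as above):
ls -l /mnt/ewf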
This presents the E01 as a raw file which can then be mounted with a loopback device. First, we need to establish where the partition of interest is located; to achieve this I use ‘mmls’:
mmls /mnt/ewf/ewf1
This provides the below output:


We can see that “Units are in 512-byte sectors” and that the start offset of the Linux partition is sector 75560960. Multiplying the two gives ‘38687211520’, which is the byte offset we will use for mounting the partition.
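The shell can do this arithmetic for us, which avoids transcription slips:
echo $((75560960 * 512))
This prints 38687211520.

First I created a directory to use as a mount point, with: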
mkdir /mnt/linux_mount
Then using the mount command, we can mount the partition read only:
mount -o ro,loop,offset=38687211520 -t ext4 /mnt/ewf/ewf1 /mnt/linux_mount
Or not…


The error above can result from a number of things; in this case it is because the filesystem is dirty. When mounting, the driver attempts to replay the journal to rectify this, which cannot be done with a read-only mount. We can overcome the issue by passing the ‘norecovery’ option when mounting:
mount -o ro,norecovery,loop,offset=38687211520 -t ext4 /mnt/ewf/ewf1 /mnt/linux_mount
This worked without error, and if we 'ls' the new mount point we can see that we now have access to the mounted filesystem:



Now we have the filesystem mounted... on with the questions!

red star - 10 pts

Question

What distribution of Linux is being used on this machine?

Answer

There are various ways to determine the distribution in use within a Linux install/image. When looking at a dead image, checking the contents of /etc/issue, /etc/*-version or /etc/*_version is the quickest and easiest.

A simple cat of /mnt/linux_mount/etc/*version or /mnt/linux_mount/etc/issue provides the following:


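On distributions that ship it, /etc/os-release is another quick and reliable source (a minimal alternative, using the same mount point):
cat /mnt/linux_mount/etc/os-release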
Throughout this process be careful to ensure you are targeting the mounted filesystem and not your analysis system's filesystem. I will normally navigate to the root of the target and work from there as my working directory, in this case using ‘cd /mnt/linux_mount’. Thereafter, a path within the OS under analysis is equivalent to the same path relative to the mount point, and we can precede any path with '.' to indicate that it starts from the current directory, e.g.:

cat ./var/log/apache2/access.log
Because I am working from the root of the mounted drive, the command above will target '/mnt/linux_mount/var/log/apache2/access.log' on my analysis system.

Going back to the screenshot and the output of our commands, it looks like we are dealing with Kali.

flag<Kali>

abc123 - 10 pts

Question

What is the MD5 hash of the apache access.log?

Answer

By default the Apache access log is located at /var/log/apache2/access.log. So, with a working directory of the root of the mounted filesystem, we can use:
md5sum ./var/log/apache2/access.log
Which provides the following:

 

flag<d41d8cd98f00b204e9800998ecf8427e>

Radiohead - No Surprises - 10 pts

Question

It is believed that a credential dumping tool was downloaded? What is the file name of the download?

Answer

As a first step in familiarising myself with the image I reviewed the ‘/etc/passwd’ file to see which users were active and worthy of further investigation. With regard to this specific question I was particularly interested to see which users had home directories, as possible locations for downloaded files.

As is the default in Kali, ‘root’ is the only user account, with its home directory at ‘/root’. A quick way of producing an easy-to-review list of home directories in use is the following command:
cat ./etc/passwd | cut -d':' -f 6 | sort | uniq
Which results in this output:


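A variation that keeps the username and login shell alongside each home directory can help spot interactive accounts (a sketch over the same file):
awk -F: '{print $1, $6, $7}' ./etc/passwd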
Based on the above we will start with the ‘root’ user, and it makes sense to check the contents of the associated ‘Downloads’ folder:
ls -al ./root/Downloads/
Which results in the below:


Only one file, and its filename sure does look like that of a well-known credential dumping tool.

flag<mimikatz_trunk.zip>

super duper secret - 15 pts

Question

There was a super secret file created, what is the absolute path?

Answer

Within the bash history for ‘root’ (/root/.bash_history) we see that someone piped the output of a cat command into ‘/root/Desktop/SuperSecretFile.txt’:


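A targeted grep surfaces the entry without scrolling the whole history (a sketch, run from the root of the mount):
grep -n 'SuperSecretFile' ./root/.bash_history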
flag</root/Desktop/SuperSecretFile.txt>

this is a hard one - 15 pts

Question

What program used didyouthinkwedmakeiteasy.jpg during execution?

Answer

Still in the bash history we see ‘binwalk’ being used over didyouthinkwedmakeiteasy.jpg:


flag<binwalk>

overachiever - 15 pts

Question

What is the third goal from the checklist Karen created?

Answer

There is a file on the desktop for ‘root’ called ‘Checklist’; reviewing its content we see that it has three items:


flag<Profit>

attack helicopter - 20 pts

Question

How many times was apache run?

Answer

We earlier reviewed the Apache access log to calculate its hash, and the eagle-eyed forensicators among us may have noticed that the hash was ‘d41d8cd98f00b204e9800998ecf8427e’, which is the MD5 of a zero-byte file.

Reviewing the log directory, we find the same to be true of the other logs:


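One quick way to confirm this is to look for any non-empty file in the directory (a sketch; no output means every log is zero bytes):
find ./var/log/apache2 -type f -size +0c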
Most notable is the error.log, as this log is populated with entries when Apache starts. Assuming the log hasn’t been tampered with, its being empty is an indication that Apache has not run.

flag<0>

oh no some1 call ic3 - 25 pts

Question

It is believed this machine was used to attack another, what file proves this?

Answer

While spelunking through the image to get my bearings, I happened upon a screenshot within the home directory for ‘root’. This file is located at ‘/root/irZLAohL.jpeg’ and is reproduced below:



Notably this image is a screenshot of a Windows host, probably captured during a malicious remote access session. The filename was accepted as the flag. Also notable is that the screenshot includes an open notepad window containing a flag which we previously found in the Triage Memory questions. It’s all starting to fall into place…

flag<irZLAohL.jpeg>

scripters prevail - 25 pts

Question

Within the Documents file path, it is believed that Karen was taunting a fellow computer expert through a bash script. Who was Karen taunting?

Answer

When reviewing bash history we saw various references to bash scripts:


Reviewing that excerpt we see that the user navigated to ‘Documents’, made a directory called ‘myfirsthack’, entered that directory and then created, modified, chmod’d and executed two scripts (hellworld.sh and firstscript). They then copied firstscript to firstscript_fixed and executed it. Let’s see what they contain:


Not so interesting, and:


Nope. Third time’s a charm?



Here we have a reference to a ‘Young’; let’s give that a go!

flag<Young>

the who - 30 pts

Question

A user su'd to root at 11:26 multiple times. Who was it?

Answer

su events are recorded in the auth log at ‘/var/log/auth.log’; we can quickly parse this for events at that time using the following:
cat ./var/log/auth.log | grep 11:26
This gives us the following output:


And we can see that user ‘postgres’ has multiple entries that minute stating “Successful su for postgres by root”.

flag<postgres>

/ - 30 pts
Question
Based on the bash history, what is the current working directory?

Answer
Within the bash history, reviewing the last cd to an absolute path, we see that the user changed directory to /root. Thereafter we can trace the subsequent cd commands to see what impact each would have on the current working directory.

Command                                          Resultant working directory
cd /root                                         /root
cd ../root                                       /root
cd ../root/Documents/myfirsthack/../../Desktop/  /root/Desktop
cd ../Documents/myfirsthack/                     /root/Documents/myfirsthack

So we see the final state is '/root/Documents/myfirsthack'.
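Because the same directory tree exists in the mounted image, the sequence can be replayed against the mount point to double-check the resolution (a sketch; strip the /mnt/linux_mount prefix from the result to get the in-image path):

cd /mnt/linux_mount/root
cd ../root
cd ../root/Documents/myfirsthack/../../Desktop/
cd ../Documents/myfirsthack/
pwd

Here pwd returns '/mnt/linux_mount/root/Documents/myfirsthack', agreeing with the table above.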

flag</root/Documents/myfirsthack>

2019-08-14

2019 Unofficial Defcon DFIR CTF Writeup - Memory Forensics

For the majority of this section I used Volatility 2.6 under Windows Subsystem for Linux (WSL). As an aside, I commonly use Volatility in one of two ways. Most often I run a number of common commands up front, then other less common commands as I progress; in each case I redirect the output of the command(s) to txt files which I can then manually review or cat/grep etc., reducing the processing time that would arise from re-running commands, e.g.:
vol.py -f [path_to_memory] --profile=[profile] pslist >> [media-id]-pslist.txt
vol.py -f [path_to_memory] --profile=[profile] psscan >> [media-id]-psscan.txt
vol.py -f [path_to_memory] --profile=[profile] netscan >> [media-id]-netscan.txt
This approach means I can reuse the output if it is later required in analysis. During CTFs and similar I quite often use quick and dirty commands piped to grep to narrow in on answers quickly, where I don’t think I will rely upon the output later. In such cases, knowing the expected output of each plugin in advance means I can either do away with the headers in output tables or include them via an OR in grep, e.g.:
vol.py -f [path_to_memory] --profile=[profile] pslist | grep -i 'offset\|notepad'
This requires knowledge of a unique string which can be found in the header of the output table for each plugin. In this case I know the pslist header contains ‘Offset’ and I am interested in the ‘notepad’ entry.

Grepping the output of volatility plugins is something memory forensics ninja Alissa Torres (@sibertor) covers in her SANS FOR526 class and it’s really sped up my analysis. There will be a mix of both techniques in the examples that follow.

get your volatility on – 5pts

Question

What is the SHA1 hash of triage.mem?


Answer

No fancy tools needed here; a simple sha1sum, in this case under WSL, gives us the answer.

sha1sum [path_to_file]


flag<c95e8cc8c946f95a109ea8e47a6800de10a27abd>

pr0file - 10 pts

Question

What profile is the most appropriate for this machine? (ex: Win10x86_14393)

Answer

The first step in most volatility analysis is to use the ‘imageinfo’ plugin:
vol.py -f [path_to_memory] imageinfo

Reviewing the output, we can see that the plugin presents a few possible profiles; we also review the service pack level to confirm that we require an SP1 profile.



Combining that information, it is possible that a number of other profiles (e.g. Win2008R2SP1x64 or the other kernel variant profiles) would be correct, and it would likely have been possible to use any of them to confirm the exact OS by pulling registry hives from RAM. But in this case, and based on prior experience, I went with ‘Win7SP1x64’ and it was correct.
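Where imageinfo leaves ambiguity, the ‘kdbgscan’ plugin can also help pin down the exact profile by scanning for KDBG signatures (a quick sketch):
vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem kdbgscan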

flag<Win7SP1x64>

hey, write this down - 12 pts

Question

What was the process ID of notepad.exe?

Answer

The 'pslist' command, known profile and a pipe to grep can get us this quickly:
vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 pslist | grep -i 'offset\|notepad'

The Process ID (PID) column shows that the PID of notepad.exe is 3032.

flag<3032>

wscript can haz children - 14 pts

Question

Name the child processes of wscript.exe.

Answer

The 'pstree' command shows the relationship between parent and child processes. Where lots of processes are children of a single parent it can get a bit confusing, and an alternative, explicitly looking up the PID of the parent and then seeing which processes have it as their Parent Process ID (PPID) using 'pslist' (or 'psscan'), is the better approach, as sketched below.

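That lookup would go something like the following (a sketch; PPID is the fourth column of pslist output, and [wscript_pid] is a placeholder for the PID identified for wscript.exe in the first command):

vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 pslist | grep -i 'offset\|wscript'
vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 pslist | awk -v ppid=[wscript_pid] '$4 == ppid'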
In this case 'pstree', the known profile and a pipe to grep with context (-C) is a nice shortcut:

vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 pstree | grep wscript -C2

Executing this command results in the following output:


We can see the child process is UWkpjFjDzM.exe.

flag<UWkpjFjDzM.exe>

tcpip settings - 18 pts

Question

What was the IP address of the machine at the time the RAM dump was created?

Answer

There are a couple of quick ways to skin this cat, but my preference is to use netscan output, as it is commonly required later in analysis. I piped this to a text file with:

vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 netscan >> netscan.txt
Which provided the following output:


Per the above, there are multiple established connections which detail an IPv4 address of 10.0.0.101.

flag<10.0.0.101>

intel - 18 pts

Question

Based on the answer regarding to the infected PID, can you determine what the IP of the attacker was?

Answer

The infected process was the child process spawned from ‘wscript.exe’: ‘UWkpjFjDzM.exe’, PID 3496.
We can re-review the netscan output written to netscan.txt with:
cat netscan.txt | grep -i 'offset\|UWkpjFjDzM'

Which outputs as below:



That process is associated with an established connection to '10.0.0.106'.

flag<10.0.0.106>

i <3 windows dependencies - 20 pts

Question

What process name is VCRUNTIME140.dll associated with?

Answer

If you want to know something about loaded dlls, the 'dlllist' plugin is a good place to start. The output is verbose because we cannot narrow it to a single process for this question; it follows a repeating format, with a header detailing process information followed by a list of the associated dlls. In this case I used the following command to pass through every line which contains details of a process, plus any instance of ‘VCRUNTIME140’:

vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 dlllist | grep -i 'pid\|VCRUNTIME140'
Therefore, for each occurrence of VCRUNTIME140 we can review the preceding process line in the output and conclude that this was the associated process.

To my surprise there were 5 instances associated with different processes.

While I didn’t do this at the time, a tidier approach would be to use:
vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 dlllist | grep -i 'pid\|VCRUNTIME140' | grep -i VCRUNTIME140 -B1
This results in the following output:



Technically any of these would be a correct answer for “What process name is VCRUNTIME140.dll associated with?”, but OfficeClickToR.exe stood out as unique so I went with that first. From memory, I think I tried them all when that didn’t work, before realising that I had to drop the extension…

flag<OfficeClickToR>

mal-ware-are-you - 20 pts

Question

What is the md5 hash value the potential malware on the system?

Answer

As mentioned earlier, the potential malware is ‘UWkpjFjDzM.exe’, PID 3496. We can dump this process to the current directory and hash it with a one-liner, because we know how the 'procdump' command names dumped processes: we specify the output location and the filename will be ‘executable.[pid].exe’.

vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 procdump -p 3496 -D . && md5sum executable.3496.exe

My AV was unimpressed, but we managed to hash the file before it was quarantined:



flag<690ea20bc3bdfb328e23005d9a80c290>

lm-get bobs hash - 24 pts

Question

What is the LM hash of bobs account?

Answer

There is a good guide to the required process here.

To answer this question, we need to use the hashdump plugin. However, this plugin needs to be provided with the virtual address of two hives, SAM and System. We retrieve this information with the hivelist plugin:

vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 hivelist

Resulting in the following output:


SYSTEM is at: 0xfffff8a000024010
SAM is at: 0xfffff8a000e66010

We can then use the hashdump command to dump the hashes:

vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 hashdump -y 0xfffff8a000024010 -s 0xfffff8a000e66010



The format of the resultant output is:

<Username>:<User ID>:<LM hash>:<NT hash>:<Comment>:<Home Dir>:

As such we are interested in the LM component, so ‘aad3b435b51404eeaad3b435b51404ee’, which happens to be the LM hash of a blank password.

flag<aad3b435b51404eeaad3b435b51404ee>

vad the impaler - 25 pts

Question

What protections does the VAD node at 0xfffffa800577ba10 have?

Answer

Information on VAD nodes can be returned using the ‘vadinfo’ command. Running it on its own results in a lot of output, which is easily piped to a txt file for subsequent review or narrowed down with a grep with context. The first relevant line will be the one containing the VAD node address, and the next 10 lines will be more than enough to answer our question:

vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 vadinfo | grep '0xfffffa800577ba10' -A 10

This results in the following output:



And we can see the protection is ‘PAGE_READONLY’

flag<PAGE_READONLY>

more vads?! - 25 pts

Question

What protections did the VAD starting at 0x00000000033c0000 and ending at 0x00000000033dffff have?

Answer

This time we are seeking the same information, but based upon a start and end location. I actually just used the same command, substituting in the start address; however, this approach returned multiple results. It was easy enough to distinguish the one I was looking for from the mess, but in short I hadn’t noticed that the question referred to a historical VAD, hence “What protections did the VAD”.

A cleaner way to find exactly the right answer is as follows:

vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 vadinfo | grep '0x00000000033c0000' -A 3 | grep '0x00000000033dffff' -A 3



flag<PAGE_NOACCESS>

vacation bible school - 25 pts

Question

There was a VBS script run on the machine. What is the name of the script? (submit without file extension)

Answer

I expect there are a number of ways to answer this one, and I tried a few possibilities which didn’t get me to the answer. Ultimately the ‘cmdline’ plugin solved it for me, though it may not be the most elegant approach. If a VBS script had been executed via the command line then I would expect there to be evidence here.

As it was, a search for vbs entries within this output identified that the process wscript.exe (PID 5116) had been executed with the command line detailed below:



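For reference, a grep along these lines will surface the entry together with its process header (a sketch):
vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 cmdline | grep -i 'vbs' -B3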
flag<vhjReUDEuumrX>

thx microsoft - 25 pts

Question

An application was run at 2019-03-07 23:06:58 UTC, what is the name of the program? (Include extension)

Answer

The shimcache is one of many handy ways to evidence process execution and there is a volatility plugin to parse it from memory, the following query immediately gave the process executed at that time:

vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 shimcache | grep '23:06:58'

Resulting in:


So we see that Skype executed at the time in question.

flag<Skype.exe>

lightbulb moment - 35 pts

Question

What was written in notepad.exe in the time of the memory dump?

Answer

There is a ‘notepad’ plugin for Volatility, however it only supports XP/2003, so we have to do this manually. Fortunately, this being a common challenge, there are a few handy guides out there, including the one located here.

Earlier, in ‘hey, write this down’, we identified that the PID associated with notepad.exe is 3032, so we can dump the process memory with:

vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 memdump -D dump -p 3032

We can then run strings over the dumped memory and, as a first Hail Mary, grep the output for any string containing ‘flag<’, just in case the challenge author has been kind.
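In practice that looks something like the following (a sketch; memdump writes the process memory out as 3032.dmp in the specified directory):
strings -e l dump/3032.dmp | grep 'flag<'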

And they have:



Note that the command used, per the guide linked above, is strings with the ‘-e l’ flag to select 16-bit little-endian encoding, as this is how notepad stores content.

flag<REDBULL_IS_LIFE>

8675309 - 35 pts

Question

What is the shortname of the file at file record 59045?

Answer

The ‘mftparser’ plugin is very useful and I had already run it while looking to solve some of the other challenges. Due to the volume of information returned, the run time, and how often the output gets revisited, I piped it to a text file.
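The commands were along these lines (a sketch):
vol.py -f Adam\ Ferrante\ -\ Triage-Memory.mem --profile=Win7SP1x64 mftparser > mftparser.txt
grep '59045' mftparser.txt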

A search of the file for the string ‘59045’ had 2 results, one of which was the relevant one.



In the screenshot we can quickly see that the short filename associated with this record was ‘EMPLOY~1.XLS’.

flag<EMPLOY~1.XLS>

whats-a-metasploit? - 50 pts

Question

This box was exploited and is running meterpreter. What PID was infected?

Answer

This was a bit of a gimme. Earlier, in ‘wscript can haz children’, we identified a malicious process, and then in ‘intel’ we used netscan to see what it was communicating with. I noted at the time that it was communicating on port ‘4444’, which will be known to many as the default port for Metasploit.

The PID associated with this process was ‘3496’ and lo and behold, it was accepted as the correct answer.

flag<3496>

2019 Unofficial Defcon DFIR CTF Writeup - DFA Crypto Challenge


Question


"On the homepage you will notice the Champlain College Digital Forensics Association's Logo. Can you decipher the hidden message?"



Full disclosure: I wasn’t a fan of this challenge and furthermore I would not have solved it without talking to the question author.

It became apparent that it was likely a multi-stage challenge with the flag string encoded in multiple ways, made more complicated by the lack of feedback on whether an intermediate step had been solved correctly.

After trying all sorts of encoding methods and some guessed possible keys I ultimately reached out to the author of the question and asked: 
“is the string 'poqdckhn', (with additional work) all you need for the crypto challenge. Or does something else need to be derived from the image/file to use in conjunction?”
They confirmed that there were three steps and when I asked if I would know that the intermediate step had been correctly solved, I was informed that I would not. But I was assured:
“When the challenge was created, we thought of some common ciphers that we were taught in the classroom.” 
So I wasn’t to expect anything too exotic/complicated.

Answer
The logo contains an obvious string of hexadecimal characters:

70 6F 71 64 63 6B 68 6E

These all fall within the ASCII alphabet range and correspond to:

poqdckhn

The kicker here is that you have to ROT13 the string (at least they didn’t use a less common rotation), resulting in:

cbdqpxua
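Both steps are quick at the command line (a sketch; xxd reverses the hex and tr implements ROT13):
echo '70 6F 71 64 63 6B 68 6E' | xxd -r -p
echo 'poqdckhn' | tr 'A-Za-z' 'N-ZA-Mn-za-m'
The first command returns 'poqdckhn' and the second 'cbdqpxua'.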

I tried various cipher methods but ultimately determined that a Vigenère cipher was correct. Earlier in the process I had undertaken an exercise to find possible keys, compiling the following list:
  • champlain
  • ccdfa (Champlain College Digital Forensics Association)
  • lcia (Leahy Center for Digital Investigation)
  • audeamus (champlain college motto)
  • beaver (Champlain College mascot)

Throughout the process I used the fantastic tool CyberChef to allow me to quickly try different variations. My eventual recipe was as below:


Imagine my surprise (read: moderate rage) when I found the flag had been under my nose all along. Using the key ‘champlain’, or specifically ‘champlai’ due to the string length, resulted in the answer ‘audeamus’. Given the flag was Latin, to be honest I’m not sure I would have realised it was correct had I not previously researched the motto as a possible key.

2019 Unofficial DEFCON DFIR CTF Writeups


The CTF

First a shout out to the Champlain College Digital Forensics Association (@champdfa) for putting together an awesome CTF and to David Cowen for making it public. For those who aren’t aware David has authored and run a number of awesome CTFs over the last few years, including an Unofficial DEFCON DFIR CTF released during the week of DEFCON. Each one of them has been great fun and an awesome learning experience.

This year, due to other commitments, he was hard pressed to design one from scratch. Fortunately CCDFA came to the rescue, and David hosted the CTF based upon a dataset and questions which they had previously designed. Details of the CTF can be found here.

I’ve never put together a CTF write-up before, but I have often benefited from those written by others. It's great as a learning tool and to help understand other people’s processes when solving these types of challenges. So here goes nothing.

If of interest to anyone, I had no access to my usual commercial tools during this CTF and as such the majority was solved using the following (some I have personal/home use licenses for):

  • FTK Imager (4.1.1.1) - Until I noticed I was out of date and a bug was impeding progress!
  • FTK Imager (4.2.1.4) - Much better
  • Autopsy (4.12)
  • Eric Zimmerman's tools (including KAPE)
  • Volatility 2.6
  • 7Zip
  • 010 Editor 8.0.1
  • Arsenal Image Mounter (3.0.64)
  • Passware Kit Standard (2019.3.2)
I'll be releasing the write-up as a single post per section of the CTF; these are:
  1. DFA Crypto Challenge
  2. Deadbox Forensics
  3. Linux Forensics
  4. Memory Forensics
  5. Triage VM Questions



2019-01-11

Testing of SRUM on Windows Server 2019 (continued)

After my unsuccessful attempts to test SRUM in Windows Server 2019 earlier in the week I followed up with Dave Cowen who confirmed the name of the install media he had used, and went about installing Server 2019 from the same media. Specifically this was:

en_windows_server_2019_x64_dvd_4cb967d8.iso - A876D230944ABE3BF2B5C2B40DA6C4A3

Lo and behold, when I checked for the presence of a SRUM directory...


The Windows version information associated with this install is as follows:


Putting aside the strangeness that SRUM doesn't appear to be enabled by default in certain circumstances, let's look at how it compares to SRUM within Windows 10.

Noted differences between Windows 10 SRUM and Server 2019 SRUM

As per the methodology outlined in my previous post, I extracted the SRUDB.dat from the following systems: 
  • Fresh install of Server 2019
  • Fresh install of Windows 10
  • Used install of Windows 10
  • Used install of Windows 8
I parsed out a list of tables and their associated fields for each of the SRUDB.dat files I had and compared the tables and their content. A table outlining what tables were present within the SRUDB associated with each of the examined OS samples is provided below:


Notable observations were as follows:

  • The Server 2019 install had four new tables which had not been seen in previous iterations of the OS (or not in my testing):
    • {17F4D97B-F26A-5E79-3A82-90040A47D13D}
    • {841A7317-3805-518B-C2EA-AD224CB4AF84}
    • {DC3D3B50-BB90-5066-FA4E-A5F90DD8B677}
    • {EEE2F477-0659-5C47-EF03-6D6BEFD441B3}
  • The Application Resource usage data table {D10CA2FE-6FCF-4F6D-848E-B2E99266FA89} and Network Connectivity data table {DD6636C4-8929-4683-974E-22C046A43763} remain.
  • The fields present in these tables have not changed.
  • In my testing the Network Usage {973F5D5C-1D90-4944-BE8E-24B94231A174}, Energy Usage {FEE4E14F-02A9-4550-B5CE-5FA2DA202E37} and Energy Usage Long Term {FEE4E14F-02A9-4550-B5CE-5FA2DA202E37}LT tables were absent.
  • In my test the Push Notification Data {D10CA2FE-6FCF-4F6D-848E-B2E99266FA86} table was also absent; however, I note that it was also absent from a fresh install of Windows 10 and may need push notifications to be enabled, or to have occurred, before the table is created and populated.
I have had limited time to perform testing of the new tables so include for reference their field headings, as this may shed some light on the function of the tables:

{17F4D97B-F26A-5E79-3A82-90040A47D13D}
AutoIncId
TimeStamp
AppId
UserId
Total
Used

{841A7317-3805-518B-C2EA-AD224CB4AF84}
AutoIncId
TimeStamp
AppId
UserId
SizeInBytes

{DC3D3B50-BB90-5066-FA4E-A5F90DD8B677}
AutoIncId
TimeStamp
AppId
UserId
ProcessorTime

{EEE2F477-0659-5C47-EF03-6D6BEFD441B3}
AutoIncId
TimeStamp
AppId
UserId
BytesInBound
BytesOutBound
BytesTotal

Parsing SRUM

I performed some limited testing of parsing useful data from SRUM on Server 2019 and am pleased to report that, where tables have remained consistent, my previous go-to tool, Mark Baggett's srum-dump, still parses this data successfully.

While it does display errors per the below, it will proceed and extract what it can from the common tables:


Unfortunately the only two tables which fall into this category are the Application Resource usage data table {D10CA2FE-6FCF-4F6D-848E-B2E99266FA89} and the Network Connectivity data table {DD6636C4-8929-4683-974E-22C046A43763}.

If I have time in the next couple of weeks I will look into these new tables in an effort to derive how they are populated. I'm also keen to establish what caused SRUM to be disabled on some of the installs I used for testing but not others.

2019-01-08

Some testing of SRUM on Windows Server 2019

This post is a response to David Cowen’s ‘Sunday Funday' challenge as detailed over at ‘Hacking Exposed - Computer Forensics Blog’.

The question posed by David was as follows:
Server 2019 got SRUM, what if any differences are there between SRUM on Windows 10 and SRUM on Server 2019?
To be up front, don't read this post looking for amazing details on the technical differences in the implementation of SRUM between Windows 10 and Server 2019; my conclusion is going to disappoint.

Methodology

My approach to answering this question was to export the SRUDB from a Windows 10 system and a Windows Server 2019 system, document the schema within each database, and then explore any differences.

The SRUM database (SRUDB.dat) is commonly located at 'C:\WINDOWS\system32\SRU\SRUDB.dat' within systems where SRUM is available. It is an Extensible Storage Engine (ESE) Database and as such can be parsed with various tools.

I chose to use NirSoft ESEDatabaseView as an easy way to parse out the contents of each table into a csv so the headings and contained data can be reviewed. There are various great tools designed to parse the SRUDB however in this case I was specifically looking for potential new tables or fields which these may miss.

The approach employed was to extract the SRUDB from the target system to another location then to use the below command: 

ESEDatabaseView.exe /table C:\Users\[removed]\Desktop\SRUM\SRUDB.dat * /scomma "C:\Users\[removed]\Desktop\SRUM\*.csv"

This command would parse the content of every table (due to the specified table name of *) into individual CSVs named after each table. The results are detailed in the sections that follow.

Windows 10 SRUDB Schema

The Windows 10 system analysed is a heavy use system which has been installed for some time, OS details as below:


When the SRUDB.dat file was reviewed in ESEDatabaseView, the table list looked as follows:



The tables were as follows:

{5C8CF1C7-7257-4F13-B223-970EF5939312}
{7ACBBAA3-D029-4BE4-9A7A-0885927F1D8F}
{973F5D5C-1D90-4944-BE8E-24B94231A174}
{D10CA2FE-6FCF-4F6D-848E-B2E99266FA86}
{D10CA2FE-6FCF-4F6D-848E-B2E99266FA89}
{DD6636C4-8929-4683-974E-22C046A43763}
{FEE4E14F-02A9-4550-B5CE-5FA2DA202E37}
{FEE4E14F-02A9-4550-B5CE-5FA2DA202E37}LT
MSysLocales
MSysObjects
MSysObjectsShadow
MSysObjids
SruDbCheckpointTable
SruDbIdMapTable

I then proceeded to make a really pretty table of the field names associated with each table. It looked a little something like this:


Which I think we can all agree presents very well as a table within a blog. Ultimately the content isn't that interesting, but any difference to what we find in Server 2019 will be.

Windows Server 2019 SRUDB Schema

The Windows Server 2019 system analysed is a fresh install of a virtual machine using the evaluation ISO. Following the issues during the rollout of Server 2019 and associated versions of Windows 10, as detailed here, Microsoft pulled the download links, so I had to hunt to locate this one.

OS details as below:


This system was allowed to run for a short while, various applications were executed and it was rebooted/shutdown and powered on a number of times.

Despite all this, when I went to go and extract the SRUDB.dat I had an interesting finding...


So at this moment in time, the answer I submit to David's question is that there are some significant differences between SRUM on Windows 10 and SRUM on Server 2019, most notably that in my testing there is no SRUM in Windows Server 2019.

Unfortunately, having watched David's recent Forensic Lunch Test Kitchen, I know full well that his testing, recorded on video for all to see, shows a Windows Server 2019 install with SRUM. All that remains now is to try and figure out what, if any, differences there are between our test environments and whether they cause this anomalous behavior.


***UPDATED 2019-01-11***

This behaviour has now been confirmed by a colleague who was also looking into it; the ISO names and MD5s we were using:

17763.1.180914-1434.rs5_release_SERVER_EVAL_X64FRE_EN-US.ISO - E62A59B24BD6534BBE0C516F0731E634

17763.1.180914-1434.rs5_release_SERVERESSENTIALS_OEM_X64FRE_en-us.iso - B0F033EA706D1606404FF43DAD13D398

Notably, looking at the registry on these same systems, we find the normal SRUM keys, however they are not populated with RecordSets:


Above we see that the SRUM key exists but where we would expect to see RecordSets with the temporary data, there are none. This location normally contains temporary data before it is pushed to the SRUDB.dat.

2019-01-07

Updated feature: Exchange Online mailbox audit to add mail reads by default

Exciting news in the world of Office 365 Business Email Compromise investigations. Following on from their recent commitment to improve logging of account activity within Office 365, Microsoft have announced that Exchange Online will audit mail reads/accesses by default for owners, admins and delegates under the MailItemsAccessed action.

I was notified as part of the weekly 'Office 365 changes' roundup sent to Office365 administrators, the text of the update reads:

Updated feature: Exchange Online mailbox audit to add mail reads by default
MC171679
Prevent or Fix Issues
Published On : 4 January 2019
To ensure that you have access to critical audit data to investigate security incidents in your organization, we’re making some updates to Exchange mailbox auditing. After this change takes place, Exchange Online will audit mail reads/accesses by default for owners, admins and delegates under the MailItemsAccessed action.
This message is associated with Microsoft 365 Roadmap ID: 32224.
How does this affect me?
The MailItemsAccessed action offers comprehensive forensic coverage of mailbox accesses, including sync operations. In February 2019, audit logs will start generating MailItemsAccessed audit records to log user access of mail items. If you are on the default configuration, the MailItemsAccessed action will be added to Get-mailbox configurations, under the fields AuditAdmin, AuditDelegate and AuditOwner. Once the feature is rolled out to you, you will see the MailItemsAccessed action added and start to audit reads. 
This new MailItemsAccessed action is going to replace the MessageBind action; MessageBind will no longer be a valid action to configure, instead an error message will suggest turning on the MailItemsAccessed action. This change will not remove the MessageBind action from mailboxes which have already have added it to their configurations. 
Initially, these audit records will not flow into the Unified Audit Log and will only be available from the Mailbox Audit Log. 
We’ll begin rolling this change out in early February, 2019. If you are on the default audit configuration, you will see the MailItemsAccessed action added once the feature is rolled out to you and you start to audit reads. 
What do I need to do to prepare for this change?
There is no action you need to take to derive the security benefits of having mail read audit data. The MailItemsAccessed action will be updated in your Get-Mailbox action audit configurations automatically under AuditAdmin, AuditDelegate and AuditOwner. 
If you have set these configurations before, you will need to update them now to audit the two new mailbox actions. Please click Additional Information for details on how to do this. 
If you do not want to audit these new actions in your mailboxes and you do not want your mailbox action audit configurations to change in the future as we continue to update the defaults, you can set AuditAdmin, AuditDelegate and AuditOwner to your desired configuration. Even if your desired configuration is exactly the same as the current default configuration, so long as you set the AuditAdmin, AuditDelegate and AuditOwner configurations on your mailbox, you will preclude yourself from further updates to these audit configurations. Please click Additional Information for details on how to do this.
If your organization has turned off mailbox auditing, then you will not audit mail read actions.
This is good news for investigating the scope of account compromise, of course it should be noted that there are a number of other concerns, and indeed other ways that messages can be downloaded/accessed, once an account has been compromised.

Once my O365 test account has been updated with the change I plan to do some testing of this additional logging and will document any findings here.

Relevant reading:

2019-01-04

Available Artifacts - Evidence of Execution Updated

Since my original post a couple of months ago there have been new discoveries, additional suggestions and some error corrections. These things combined warranted an update to the spreadsheet and original post. 

I want to take the opportunity to thank the following people who have directly or indirectly contributed to the update:

  • Maxim Suhanov (@errno_fail) for his great work on Syscache.hve
  • David Cowen (@HECFBlog) for the work put into his Test Kitchen Series and investigation of Syscache.hve and what OSs it is available within
  • Phill Moore (@phillmoore) for correcting entries as they relate to the availability of SRUM
  • Hadar Yudovich (@hadar0x) for his suggestion of Application Experience Program Telemetry
  • Matt (@mattnotmax) for his suggestion of CCM_RecentlyUsedApps
  • Eric Zimmerman (@EricRZimmerman) for his suggestion of further useful tools (yet to be written up!)
  • proneer for their comment with multiple suggestions

I have updated the original blog post, and spreadsheet with corrections, and to include the following artifacts:
  • CCM_RecentlyUsedApps
  • Application Experience Program Telemetry
  • IconCache.db
  • Windows Error Reporting (WER)
  • Syscache.hve

The post is still barebones with a bit of additional write-up work to do, and the extra artifacts in the spreadsheet have added a lot more 'TBC' cells, but I hope to complete more of it over time.

2019-01-03

A little play with the Syscache hive

**UPDATED 2019-01-04**

This post is a response to David Cowen’s ‘Sunday Funday' challenge as detailed over at ‘Hacking Exposed - Computer Forensics Blog’.

The question posed by David was as follows:
What processes update the Syscache.hve file on Windows Server 2008 R2?
There are some significant caveats to this post:

  1. I started looking on Thursday evening so the research is rushed and unverified.
  2. December was a manic month, followed by family focused down time during the holidays. Screen time was minimised and therefore, despite Maxim Suhanov writing up what appears to be a great post, and David dealing with Syscache.hve in a number of test kitchens and posts, I haven't actually read up on the prior work.
  3. I'm also about 90% sure I misinterpreted the question...

What process(es) update the Syscache.hve file?

In my initial read of David's question I thought he was asking for the specific mechanism/processes which are responsible for updating the Syscache.hve file directly.

In search of this answer I installed a fresh copy of Windows Server 2008 R2 into a VM and attempted to use Process Explorer, Process Hacker and Process Monitor to see what was touching Syscache.hve. I proceeded to run a few executables in an effort to identify what was updating the hive. This spelunking didn't immediately provide the answers I hoped for, however, and I'm not sure why.

I then proceeded to pull a copy of the hive and review it's content with a view to finding relevant key paths and names to search within the above tools. This still didn't provide the answers I was looking for.

<2019-01-04 EDIT>

Following my later research and an awesome additional post from Maxim here, I realised the reason I wasn't seeing what I expected in Process Monitor was that I was running executables which were not causing the syscache.hve to be updated.

Furthermore, I didn't have an understanding of how the hive was mounted, and therefore what I needed to look for in the path. Per Maxim's post, the filter string '\REGISTRY\A' is what is required, but I'm not sure I actually had any events, as other search strings I used should have been fruitful.

In any event, the screenshot below shows the result of running .bat and .cmd files from the desktop in subsequent testing:



This evidences the fact that the svchost.exe process is responsible for the actions performed against the syscache.hve.

However, this approach initially failed me for the reasons outlined above, and as such I moved on to some alternative testing methods, evidently too hastily; the results are documented below.

</2019-01-04 EDIT>

I proceeded to grab a RAM dump and see where references to the relevant paths and 'Syscache.hve' appeared, to try and tie these to the address space of specific processes. The process followed was to take the memory dump, run strings across it with the -o and -nobanner switches (to output the offset of each hit and suppress the banner), and redirect this to a file, as below:
strings64.exe -nobanner -o C:\Users\[removed]\Desktop\memdump.mem >> stringsout.txt
The output of this can then be fed into volatility's 'strings' module to produce a listing of all strings paired with their corresponding processes and virtual addresses. An alternative approach would be to reduce the list of relevant strings before having volatility do the leg work, but in this case I wasn't sure what I was looking for, so I went the long way round.
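That pairing step looks something like the following (a sketch; -s points the strings plugin at the file produced above, the output filename is illustrative, and the profile is an assumption for this Server 2008 R2 VM):
vol.py -f memdump.mem --profile=Win2008R2SP1x64 strings -s stringsout.txt > strings-mapped.txt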

I then proceeded to grep the resultant file for terms associated with the hive in question, one notable example being as below:
grep -i syscache stringsout.txt
This resulted in plenty of false positives due to my previous spelunking; however, it notably identified the presence of multiple relevant strings within discache.sys.




At this point I had also asked some fellow forensicator friends for suggestions and the venerable Charlotte Hammond (@gh0stp0p) had an active case indexed and was able to run a quick search for some hints. She identified that the string 'syscache' appeared within discache.sys and aeevts.dll.mui on the system she was looking at. She then proceeded to analyse the discache.sys binary and confirmed it was littered with references to the structures associated with the hive in question.

It was about now that I got around to reading Maxim Suhanov's post and found that this wasn't news at all; he had already identified the same. In his words:
This library is pushing collected data to the “discache.sys” driver using the NtSetInformationFile() routine (the FileInformationClass argument is set to 52, which means “FileAttributeCacheInformation” in Windows 7).  
The driver receives a file handle and two strings (named “AeFileID” and “AeProgramID”). The “AeFileID” string contains an SHA-1 hash value for a file in question. Then, this data (along with some additional metadata populated by the driver) is written to the “Syscache.hve” hive located in the “System Volume Information” directory.
By now I was pretty confident I had misread the question but thought that it would be worth documenting the approach I used and similar results as those of Maxim.

Execution of what types of processes cause the Syscache.hve file to be updated?

In an effort to identify what processes did or did not cause the Syscache.hve file to be updated I used Dave's summary associated with each of his relevant Test Kitchen episodes as a starting point, specifically these were:

Win 7
  • Programs executed from the Desktop whether from the command line or GUI were not being inserted into the Syscache.hve
  • Programs executed from a temp directory made on the Desktop were being recorded in the Syscache.hve
  • The syscache hive seems to record at least exe, dll, bat and cmd files executed
  • There are some sysinternals programs that are not being captured at all; these may not need any shimming

Server 2008 R2
  • The syscache hive on server 2008 r2 includes executions from the Desktop, unlike Windows 7
  • The syscache hive on server 2008 r2 does not appear to be catching bat files like Windows 7, but does catch any executables the bat file calls

Based upon this I set about testing whether GUI and CLI execution of .exe, .bat, .cmd and .dll files located in the root of C:\, on the desktop, or within a subdirectory of the desktop would cause the Syscache hive to be updated.

By my maths this was 24 distinct tests and, noting that I am lazy efficient, I chose to rely upon whether the modified time of the hive changed between tests and whether that change was consistent with the time of the actions performed. This is hardly the most scientific proof; however, in my testing on a fresh install, where I avoided any unnecessary process execution, I did not see the hive change outside of my tests.

The procedure was inspired by David's common approach of using TSK on a local system during testing. First I navigated to the directory where I had the TSK binaries:

cd C:\Users\[removed]\Desktop\sleuthkit-4.6.4-win32\bin

I then used a series of fls commands to identify the ID of the Syscache.hve:

fls \\.\c:

This provides me a file listing of the root of c:\, where we can see the SVI folder is 59600.



fls \\.\c: 59600

This provides me a file listing of the SVI folder:






This confirmed that the ID I was interested in was 59649 and I could use the following command to provide the output as below:

istat \\.\c: 59649


I would then perform each of the tests and use this same istat command to check whether the File Modified timestamp had changed. The results of this testing were more than a little surprising compared to the Test Kitchen results, and are outlined in the following table:


High level observations:

  • At no point in my testing did my deliberate running of an executable (either from the command line or GUI) cause the syscache.hve to be modified. This was clearly contrary to the behavior evidenced in the Test Kitchen videos, but exporting the syscache.hve and reviewing the data inside appeared to corroborate this observation (which was initially based upon the modified time of the hive).
  • GUI execution of a batch file from the desktop causes the syscache.hve to be modified
  • GUI execution of the same, unmodified batch file from the desktop a second time does not cause the syscache.hve to be modified
  • GUI execution of the same batch file from the desktop but modified causes the syscache.hve to be modified
  • GUI execution of the same batch file from the desktop but with a modified name causes the syscache.hve to be modified
  • CLI execution of another batch file from the desktop did not cause the syscache.hve to be modified.
  • My approach for running dlls wasn't suitable so I need to rethink this...

These are certainly not definitive results; the majority of tests were only performed once and will need corroboration. It certainly indicates that there may be more variables at play: either an error in testing (on my part), or my precise version of Windows being different and that difference being significant.

For information the version used was the Windows Server 2008 R2 Evaluation with no further updates installed and the version specifics were as follows: