2019-01-11

Testing of SRUM on Windows Server 2019 (continued)

After my unsuccessful attempts to test SRUM in Windows Server 2019 earlier in the week, I followed up with Dave Cowen, who confirmed the name of the install media he had used, and went about installing Server 2019 from the same media. Specifically, this was:

en_windows_server_2019_x64_dvd_4cb967d8.iso - A876D230944ABE3BF2B5C2B40DA6C4A3

Lo and behold, when I checked for the presence of a SRUM directory...


The Windows version information associated with this install is as follows:


Putting aside the strangeness that SRUM doesn't appear to be enabled by default in certain circumstances, let's look at how it compares to SRUM within Windows 10.

Noted differences between Windows 10 SRUM and Server 2019 SRUM

As per the methodology outlined in my previous post, I extracted the SRUDB.dat from the following systems: 
  • Fresh install of Server 2019
  • Fresh install of Windows 10
  • Used install of Windows 10
  • Used install of Windows 8
I parsed out a list of tables and their associated fields for each of the SRUDB.dat files I had and compared them. A table outlining which tables were present within the SRUDB for each of the examined OS samples is provided below:


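For anyone wanting to reproduce the presence comparison above, a rough sketch is below. It assumes each SRUDB.dat's tables have already been exported to per-table CSVs with ESEDatabaseView (per the methodology in my previous post), one folder per OS sample; the folder paths are purely illustrative.

from pathlib import Path

# Each folder holds the per-table CSVs exported from one SRUDB.dat
samples = {
    "Server 2019 (fresh)": r"C:\SRUM\server2019",
    "Windows 10 (fresh)":  r"C:\SRUM\win10_fresh",
    "Windows 10 (used)":   r"C:\SRUM\win10_used",
    "Windows 8 (used)":    r"C:\SRUM\win8_used",
}

tables = {name: {p.stem for p in Path(folder).glob("*.csv")} for name, folder in samples.items()}

# Presence matrix: one row per table, one column per OS sample
for table in sorted(set().union(*tables.values())):
    flags = ["Y" if table in tables[name] else "-" for name in samples]
    print(f"{table:55} " + "  ".join(flags))
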
Notable observations were as follows:

  • The Server 2019 install had four new tables which had not been seen in previous iterations of the OS (or not in my testing):
    • {17F4D97B-F26A-5E79-3A82-90040A47D13D}
    • {841A7317-3805-518B-C2EA-AD224CB4AF84}
    • {DC3D3B50-BB90-5066-FA4E-A5F90DD8B677}
    • {EEE2F477-0659-5C47-EF03-6D6BEFD441B3}
  • The Application Resource usage data table {D10CA2FE-6FCF-4F6D-848E-B2E99266FA89} and the Network Connectivity data table {DD6636C4-8929-4683-974E-22C046A43763} remain.
  • The fields present in these tables have not changed.
  • In my testing the Network Usage {973F5D5C-1D90-4944-BE8E-24B94231A174}, Energy Usage {FEE4E14F-02A9-4550-B5CE-5FA2DA202E37} and Energy Usage Long Term {FEE4E14F-02A9-4550-B5CE-5FA2DA202E37}LT tables were absent.
  • In my test the Push Notification Data {D10CA2FE-6FCF-4F6D-848E-B2E99266FA86} table was also absent; however, I note that it was also absent from a fresh install of Windows 10, and push notifications may need to be enabled, or to occur, before the table is created and populated.
I have had limited time to perform testing of the new tables, so I include their field headings for reference, as these may shed some light on the function of the tables:

{17F4D97B-F26A-5E79-3A82-90040A47D13D}
AutoIncId
TimeStamp
AppId
UserId
Total
Used

{841A7317-3805-518B-C2EA-AD224CB4AF84}
AutoIncId
TimeStamp
AppId
UserId
SizeInBytes

{DC3D3B50-BB90-5066-FA4E-A5F90DD8B677}
AutoIncId
TimeStamp
AppId
UserId
ProcessorTime

{EEE2F477-0659-5C47-EF03-6D6BEFD441B3}
AutoIncId
TimeStamp
AppId
UserId
BytesInBound
BytesOutBound
BytesTotal
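
For anyone who prefers to pull these field headings straight from the database rather than via CSV exports, a rough sketch using the pyesedb bindings for libesedb is below. Treat it as illustrative only: the database path is an assumption and property names can vary slightly between pyesedb releases.

import pyesedb

db = pyesedb.file()
db.open(r"C:\SRUM\server2019\SRUDB.dat")  # illustrative path to an exported SRUDB.dat

# Print every table name followed by its column (field) names
for i in range(db.get_number_of_tables()):
    table = db.get_table(i)
    print(table.name)
    for j in range(table.get_number_of_columns()):
        print("   ", table.get_column(j).name)

db.close()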

Parsing SRUM

I performed some limited testing of parsing useful data from SRUM on Server 2019 and I am pleased to report that, where tables have remained consistent, my previous go-to tool, Mark Baggett's srum-dump, still parses this data successfully.

While it does display errors per the below, it will proceed and extract what it can from the common tables:


Unfortunately, the only two tables which fall into this category are the Application Resource usage data table {D10CA2FE-6FCF-4F6D-848E-B2E99266FA89} and the Network Connectivity data table {DD6636C4-8929-4683-974E-22C046A43763}.

If I have time in the next couple of weeks I will look into these new tables in an effort to derive how they are populated. I'm also keen to try and establish what caused SRUM to be disabled on some of the installs I used for testing but not others.

2019-01-08

Some testing of SRUM on Windows Server 2019

This post is a response to David Cowen’s ‘Sunday Funday' challenge as detailed over at ‘Hacking Exposed - Computer Forensics Blog’.

The question posed by David was as follows:
Server 2019 got SRUM, what if any differences are there between SRUM on Windows 10 and SRUM on Server 2019?
To be up front, don't read this post looking for amazing details on the technical differences in the implementation of SRUM between Windows 10 and Server 2019; my conclusion is going to disappoint.

Methodology

My approach to answering this question was to export the SRUDB from a Windows 10 system and a Windows Server 2019 system, document the schema within each database and then explore any differences.

The SRUM database (SRUDB.dat) is commonly located at 'C:\WINDOWS\system32\SRU\SRUDB.dat' within systems where SRUM is available. It is an Extensible Storage Engine (ESE) Database and as such can be parsed with various tools.

I chose to use NirSoft ESEDatabaseView as an easy way to parse out the contents of each table into a CSV so the headings and contained data could be reviewed. There are various great tools designed to parse the SRUDB; however, in this case I was specifically looking for potential new tables or fields which these may miss.

The approach employed was to extract the SRUDB from the target system to another location and then to use the below command:

ESEDatabaseView.exe /table C:\Users\[removed]\Desktop\SRUM\SRUDB.dat * /scomma "C:\Users\[removed]\Desktop\SRUM\*.csv"

This command parses the content of every table (due to the specified table name of *) and outputs the content of each into an individual CSV named after that table. The results are detailed in the sections that follow.
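
As a follow-up, the field headings can then be collated from those per-table CSVs with something like the below. This is only a sketch: the export folder path is illustrative and the CSV encoding may need adjusting depending on ESEDatabaseView's settings.

import csv
from pathlib import Path

export_dir = Path(r"C:\SRUM\exports")  # wherever the /scomma output was written

for csv_path in sorted(export_dir.glob("*.csv")):
    with open(csv_path, newline="", encoding="utf-8-sig") as f:  # encoding may differ
        header = next(csv.reader(f), [])
    print(csv_path.stem)        # table name (taken from the CSV filename)
    for field in header:
        print("   ", field)     # field headings for that table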

Windows 10 SRUDB Schema

The Windows 10 system analysed is a heavy-use system which has been installed for some time; OS details as below:


When the SRUDB.dat file was reviewed in ESEDatabaseView, the table list looked as follows:



The tables were as follows:

{5C8CF1C7-7257-4F13-B223-970EF5939312}
{7ACBBAA3-D029-4BE4-9A7A-0885927F1D8F}
{973F5D5C-1D90-4944-BE8E-24B94231A174}
{D10CA2FE-6FCF-4F6D-848E-B2E99266FA86}
{D10CA2FE-6FCF-4F6D-848E-B2E99266FA89}
{DD6636C4-8929-4683-974E-22C046A43763}
{FEE4E14F-02A9-4550-B5CE-5FA2DA202E37}
{FEE4E14F-02A9-4550-B5CE-5FA2DA202E37}LT
MSysLocales
MSysObjects
MSysObjectsShadow
MSysObjids
SruDbCheckpointTable
SruDbIdMapTable

I then proceeded to make a really pretty table containing the field names associated with each table. It looked a little something like this:


Which I think we can all agree presents very well as a table within a blog. Ultimately the content isn't that interesting, but any difference to what we find in Server 2019 will be.

Windows Server 2019 SRUDB Schema

The Windows Server 2019 system analysed is a fresh install in a virtual machine using the evaluation ISO. Following the issues during the rollout of Server 2019 and the associated versions of Windows 10, as detailed here, Microsoft pulled the download links, so I had to hunt around to locate this one.

OS details as below:


This system was allowed to run for a short while; various applications were executed and it was rebooted/shut down and powered on a number of times.

Despite all this, when I went to extract the SRUDB.dat I had an interesting finding...


So at this moment in time, the answer I submit to David's question is that there are some significant differences between SRUM on Windows 10 and SRUM on Server 2019, most notably that in my testing there is no SRUM in Windows Server 2019.

Unfortunately, having watched David's recent Forensic Lunch Test Kitchen, I know full well that his testing, recorded on video for all to see, shows a Windows Server 2019 install with SRUM. All that remains now is to try and figure out what, if any, differences there are between our test environments and whether they cause this anomalous behavior.


***UPDATED 2019-01-11***

This behaviour has now been confirmed by a colleague who was also looking into it; the ISO names and MD5s we were using were:

17763.1.180914-1434.rs5_release_SERVER_EVAL_X64FRE_EN-US.ISO - E62A59B24BD6534BBE0C516F0731E634

17763.1.180914-1434.rs5_release_SERVERESSENTIALS_OEM_X64FRE_en-us.iso - B0F033EA706D1606404FF43DAD13D398

Notably, looking at the registry in these same systems, we find the normal SRUM keys; however, they are not populated with RecordSets:


Above we see that the SRUM key exists, but where we would expect to see RecordSets with the temporary data, there are none. This location normally holds temporary data before it is pushed to the SRUDB.dat.
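
As a quick way of checking this on a live box, a small sketch along the following lines could be used. It assumes the standard SRUM registry location of HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SRUM\Extensions; verify the path against your own build.

import winreg

SRUM_EXT = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\SRUM\Extensions"  # assumed location

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SRUM_EXT) as ext:
    for i in range(winreg.QueryInfoKey(ext)[0]):
        guid = winreg.EnumKey(ext, i)
        with winreg.OpenKey(ext, guid) as provider:
            num_subkeys, num_values, _ = winreg.QueryInfoKey(provider)
        # On the affected Server 2019 installs these keys were present but held none of
        # the temporary data normally cached here before being flushed to SRUDB.dat
        print(f"{guid}: {num_subkeys} subkeys, {num_values} values")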

2019-01-07

Updated feature: Exchange Online mailbox audit to add mail reads by default

Exciting news in the world of Office 365 Business Email Compromise investigations. Following on from their recent commitment to improve logging of account activity within Office 365, Microsoft have announced that Exchange Online will audit mail reads/accesses by default for owners, admins and delegates under the MailItemsAccessed action.

I was notified as part of the weekly 'Office 365 changes' roundup sent to Office 365 administrators; the text of the update reads:

Updated feature: Exchange Online mailbox audit to add mail reads by default
MC171679
Prevent or Fix Issues
Published On : 4 January 2019
To ensure that you have access to critical audit data to investigate security incidents in your organization, we’re making some updates to Exchange mailbox auditing. After this change takes place, Exchange Online will audit mail reads/accesses by default for owners, admins and delegates under the MailItemsAccessed action.
This message is associated with Microsoft 365 Roadmap ID: 32224.
How does this affect me?
The MailItemsAccessed action offers comprehensive forensic coverage of mailbox accesses, including sync operations. In February 2019, audit logs will start generating MailItemsAccessed audit records to log user access of mail items. If you are on the default configuration, the MailItemsAccessed action will be added to Get-mailbox configurations, under the fields AuditAdmin, AuditDelegate and AuditOwner. Once the feature is rolled out to you, you will see the MailItemsAccessed action added and start to audit reads. 
This new MailItemsAccessed action is going to replace the MessageBind action; MessageBind will no longer be a valid action to configure, instead an error message will suggest turning on the MailItemsAccessed action. This change will not remove the MessageBind action from mailboxes which have already have added it to their configurations. 
Initially, these audit records will not flow into the Unified Audit Log and will only be available from the Mailbox Audit Log. 
We’ll begin rolling this change out in early February, 2019. If you are on the default audit configuration, you will see the MailItemsAccessed action added once the feature is rolled out to you and you start to audit reads. 
What do I need to do to prepare for this change?
There is no action you need to take to derive the security benefits of having mail read audit data. The MailItemsAccessed action will be updated in your Get-Mailbox action audit configurations automatically under AuditAdmin, AuditDelegate and AuditOwner. 
If you have set these configurations before, you will need to update them now to audit the two new mailbox actions. Please click Additional Information for details on how to do this. 
If you do not want to audit these new actions in your mailboxes and you do not want your mailbox action audit configurations to change in the future as we continue to update the defaults, you can set AuditAdmin, AuditDelegate and AuditOwner to your desired configuration. Even if your desired configuration is exactly the same as the current default configuration, so long as you set the AuditAdmin, AuditDelegate and AuditOwner configurations on your mailbox, you will preclude yourself from further updates to these audit configurations. Please click Additional Information for details on how to do this.
If your organization has turned off mailbox auditing, then you will not audit mail read actions.
This is good news for investigating the scope of account compromise. Of course, it should be noted that there are a number of other concerns, and indeed other ways that messages can be downloaded/accessed, once an account has been compromised.

Once my O365 test account has been updated with the change I plan to do some testing of this additional logging and will document any findings here.

Relevant reading:

2019-01-04

Available Artifacts - Evidence of Execution Updated

Since my original post a couple of months ago there have been new discoveries, additional suggestions and some error corrections. These things combined warranted an update to the spreadsheet and original post. 

I want to take the opportunity to thank the following people, who have directly or indirectly contributed to the update:

  • Maxim Suhanov (@errno_fail) for his great work on Syscache.hve
  • David Cowen (@HECFBlog) for the work put into his Test Kitchen Series and investigation of Syscache.hve and what OSs it is available within
  • Phill Moore (@phillmoore) for correcting entries as they relate to the availability of SRUM
  • Hadar Yudovich (@hadar0x) for his suggestion of Application Experience Program Telemetry
  • Matt (@mattnotmax) for his suggestion of CCM_RecentlyUsedApps
  • Eric Zimmerman (@EricRZimmerman) for his suggestion of further useful tools (yet to be written up!)
  • proneer for their comment with multiple suggestions

I have updated the original blog post and spreadsheet with corrections, and to include the following artifacts:
  • CCM_RecentlyUsedApps
  • Application Experience Program Telemetry
  • IconCache.db
  • Windows Error Reporting (WER)
  • Syscache.hve

The post is still bare bones, with a bit of additional write-up work to do, and the extra artifacts in the spreadsheet have added a lot more 'TBC' cells, but I hope to get more of it complete over time.

2019-01-03

A little play with the Syscache hive

**UPDATED 2019-01-04**

This post is a response to David Cowen’s ‘Sunday Funday' challenge as detailed over at ‘Hacking Exposed - Computer Forensics Blog’.

The question posed by David was as follows:
What processes update the Syscache.hve file on Windows Server 2008 R2?
There are some significant caveats to this post:

  1. I started looking on Thursday evening so the research is rushed and unverified.
  2. December was a manic month, followed by family-focused downtime during the holidays. Screen time was minimised and therefore, despite Maxim Suhanov writing up what appears to be a great post, and David dealing with Syscache.hve in a number of test kitchens and posts, I haven't actually read up on the prior work.
  3. I'm also about 90% sure I misinterpreted the question...

What process(es) update the Syscache.hve file?

In my initial read of David's question I thought he was asking for the specific mechanism/processes which are responsible for updating the Syscache.hve file directly.

In search of this answer I installed a fresh copy of Windows Server 2008 R2 into a VM and attempted to use Process Explorer, Process Hacker and Process Monitor to see what was touching Syscache.hve. I proceeded to run a few executables in an effort to identify what was updating the Syscache.hve. This spelunking didn't immediately provide the answers I hoped for, however, and I'm not sure why.

I then proceeded to pull a copy of the hive and review its content with a view to finding relevant key paths and names to search for within the above tools. This still didn't provide the answers I was looking for.

<2019-01-04 EDIT>

Following my later research and an awesome additional post from Maxim here, I realised the reason I wasn't seeing what I expected in Process Monitor was that I was running executables which were not causing the syscache.hve to be updated.

Furthermore, I didn't have an understanding of how the hive was mounted, and so didn't know what I needed to look for in the path. Per Maxim's post, a filter string of '\REGISTRY\A' is what is required, but I'm not sure I actually had any relevant events, as the other search strings I used should have been fruitful.

In any event, the screenshot below shows the result of running .bat and .cmd files from the desktop in subsequent testing:



This evidences the fact that the svchost.exe process is responsible for the actions performed against the syscache.hve.

However, this approach initially failed me for the reasons outlined above, and as such I moved on to some alternative testing methods, evidently too hastily, but the results are documented below.

</2019-01-04 EDIT>

I proceeded to grab a RAM dump and to see where references to the paths and 'Syscache.hve' appear, to try and tie this to the address space of specific processes. The process followed was to take the memory dump, run strings across it with the -o and -nobanner switches (to output the offset of each hit and prevent a banner from being printed) and redirect the output to a file, as below:
strings64.exe -nobanner -o C:\Users\[removed]\Desktop\memdump.mem >> stringsout.txt
The output of this could then be fed into Volatility's 'strings' module to produce a listing of all strings paired with their corresponding process and virtual addresses. An alternative approach would be to reduce the list to relevant strings before having Volatility do the leg work, but in this case I wasn't sure what I was looking for, so I went the long way round.
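
For reference, a minimal sketch of that 'reduce first' approach: filter the strings output down to lines containing terms of interest, preserving the offsets so the reduced file can still be fed to Volatility. The keyword list and filenames are illustrative.

keywords = ("syscache", "\\registry\\a")  # illustrative search terms

with open("stringsout.txt", errors="ignore") as src, \
     open("stringsout_reduced.txt", "w") as dst:
    for line in src:
        if any(keyword in line.lower() for keyword in keywords):
            dst.write(line)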

I then proceeded to grep the resultant file for terms associated with the hive in question, one notable example being as below:
grep -i syscache stringsout.txt
This resulted in plenty of false positives due to my previous spelunking; however, it notably identified the presence of multiple relevant strings within discache.sys.


At this point I had also asked some fellow forensicator friends for suggestions and the venerable Charlotte Hammond (@gh0stp0p) had an active case indexed and was able to run a quick search for some hints. She identified that the string 'syscache' appeared within discache.sys and aeevts.dll.mui on the system she was looking at. She then proceeded to analyse the discache.sys binary and confirmed it was littered with references to the structures associated with the hive in question.

It was about now that I got around to reading Maxim Suhanov's post and found that this wasn't news at all; he had already identified the same. In his words:
This library is pushing collected data to the “discache.sys” driver using the NtSetInformationFile() routine (the FileInformationClass argument is set to 52, which means “FileAttributeCacheInformation” in Windows 7).  
The driver receives a file handle and two strings (named “AeFileID” and “AeProgramID”). The “AeFileID” string contains an SHA-1 hash value for a file in question. Then, this data (along with some additional metadata populated by the driver) is written to the “Syscache.hve” hive located in the “System Volume Information” directory.
By now I was pretty confident I had misread the question, but thought it would be worth documenting the approach I used, which produced similar results to those of Maxim.

Execution of what types of processes cause the Syscache.hve file to be updated?

In an effort to identify what processes did or did not cause the Syscache.hve file to be updated I used Dave's summary associated with each of his relevant Test Kitchen episodes as a starting point, specifically these were:

Win 7
  • Programs executed from the Desktop whether from the command line or GUI were not being inserted into the Syscache.hve
  • Programs executed from a temp directory made on the Desktop were being recorded in the Syscache.hve
  • The syscache hive seems to record at least exe, dll, bat and cmd files executed
  • There are some sysinternals programs that are not being captured at all, these may not need any shimming

Server 2008 R2
  • The syscache hive on server 2008 r2 includes executions from the Desktop, unlike Windows 7
  • The syscache hive on server 2008 r2 does not appear to be catching bat files like Windows 7 but does catch any executables the bat file calls

Based upon this, I set about testing whether GUI and CLI execution of .exe, .bat, .cmd and .dll files located in the root of C:\, the desktop or a subdirectory of the desktop would cause the Syscache hive to be updated.

By my maths this was 24 distinct tests and, noting that I am lazy efficient, I chose to rely upon whether the modified time of the hive changed between tests and whether that change was consistent with the time of the actions which had been performed. This is hardly the most scientific proof; however, in my testing on a fresh install, where I avoided any unnecessary process execution, I did not see the hive change outside of my tests.

The procedure was inspired by David's common approach of using TSK on a local system during testing. First I navigated to the directory where I had the TSK binaries:

cd C:\Users\[removed]\Desktop\sleuthkit-4.6.4-win32\bin

I then used a series of fls commands to identify the ID of the Syscache.hve:

fls \\.\c:

This provides me with a file listing of the root of C:\, where we can see the SVI folder is 59600.



fls \\.\c: 59600

This provides me with a file listing of the SVI folder:


This confirmed that the ID I was interested in was 59649 and I could use the following command to provide the output as below:

istat \\.\c: 59649


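To save re-typing the command between tests, the check can be wrapped up roughly as follows. This is only a sketch: it assumes istat is on the PATH, that the inode of interest is 59649 as above, and that the relevant line of istat's output begins with 'File Modified' (which may differ between TSK versions).

import subprocess

def syscache_modified_time(inode="59649"):
    # Run istat against the live C: volume and pull out the File Modified line
    output = subprocess.run(["istat", r"\\.\c:", inode],
                            capture_output=True, text=True).stdout
    for line in output.splitlines():
        if line.strip().startswith("File Modified"):
            return line.strip()
    return None

before = syscache_modified_time()
input("Perform the test action, then press Enter...")
after = syscache_modified_time()
print("Hive modified during test" if before != after else "No change observed")
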
I would then perform each of the tests and use this same istat command to check whether the File Modified timestamp had changed. The results of this testing were more than a little surprising when compared to the Test Kitchen results and are outlined in the following table:


High level observations:

  • At no point in my testing did my deliberate running of an executable (either from the command line or GUI) cause the syscache.hve to be modified. This was clearly contrary to the behavior evidenced in the Test Kitchen videos, but exporting the syscache.hve and reviewing the data inside appeared to corroborate this observation (which was initially based upon the modified time of the hive).
  • GUI execution of a batch file from the desktop caused the syscache.hve to be modified
  • A repeat GUI execution of the same, unchanged batch file from the desktop did not cause the syscache.hve to be modified
  • GUI execution of the same batch file from the desktop after its contents were modified caused the syscache.hve to be modified
  • GUI execution of the same batch file from the desktop after its name was modified caused the syscache.hve to be modified
  • CLI execution of another batch file from the desktop did not cause the syscache.hve to be modified.
  • My approach for running DLLs wasn't suitable, so I need to rethink this...

These are certainly not definitive results; the majority of tests were only performed once and will need corroboration. It does indicate that there may be more variables at play, whether an error in my testing or my precise version of Windows being different and that being significant.

For information, the version used was the Windows Server 2008 R2 Evaluation with no further updates installed; the version specifics were as follows:


2019-01-02

Converting sparse VMDK files to flat VMDK files

Certain tasks are performed with the kind of infrequency that ensures you have to spend 10 minutes googling how to do it, every time you need to do it. In my case the conversion of sparse, compressed VMDKs is one such task so I thought I would document it here for my benefit (as well as others).

A number of forensic tools will handle a flat VMDK but refuse a sparse/compressed one, and as such this can be an issue when receiving VMDKs provided by clients. There are a few ways to perform this conversion; however, as I have VirtualBox installed on most of my PCs, I tend to use the associated VBoxManage utility.

The command below will read a source VMDK and create an output VMDK which is not sparse/compressed:

VBoxManage.exe clonehd [source VMDK] [output VMDK] --format VMDK

Given the default install path of VirtualBox, the full command below will work:

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonehd [source VMDK] [output VMDK] --format VMDK