2018-07-30

Windows Plug and Play Cleanup

This post is another entry in David Cowen's Sunday Funday challenge series at HECFBlog. This week he asked the following question:

Windows 10 keep changing and with it its behavior. In Windows 8.1 and early versions of Windows 10 there was a task to delete plug and play devices that haven't been plugged in for 30 days. In more recent versions of Windows 10 this appears to be disabled. For this challenge please document what versions of Windows 10 has the task enabled and if it survives being upgraded. 

So with that question in mind I fired up 'a few' Windows 10 VMs, and started poking about:


Plug and Play Cleanup

The 'Plug and Play Cleanup' scheduled task is responsible for clearing legacy versions of drivers. It would appear (based upon reports online) that it also picks up drivers which have not been used in 30 days, despite its description stating that "the most current version of each driver package will be kept". As such, removable devices which have not been connected for 30 days may have their drivers removed. 

The scheduled task itself is located at ‘C:\Windows\System32\Tasks\Microsoft\Windows\Plug and Play\Plug and Play Cleanup’, and its content is displayed below:


The task references 'pnpclean.dll', which is responsible for performing the cleanup activity. Additionally, we see that the ‘UseUnifiedSchedulingEngine’ field is set to ‘TRUE’, which specifies that the generic task scheduling engine is used to manage the task. The ‘Period’ and ‘Deadline’ values of 'P1M' and 'P2M' within ‘MaintenanceSettings’ instruct Task Scheduler to execute the task once every month during regular Automatic maintenance and, if it fails for 2 consecutive months, to start attempting the task during emergency Automatic maintenance. Further information on ‘maintenancesettingstype’ is available here.
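For reference, the relevant portion of the task XML looks broadly like the fragment below. This is a hand-reconstructed sketch of the elements discussed above rather than a verbatim copy, so element ordering and the omitted ClassId are illustrative only:

<Settings>
  <UseUnifiedSchedulingEngine>true</UseUnifiedSchedulingEngine>
  <MaintenanceSettings>
    <Period>P1M</Period>      <!-- attempt monthly during regular Automatic maintenance -->
    <Deadline>P2M</Deadline>  <!-- escalate to emergency maintenance after two missed months -->
  </MaintenanceSettings>
</Settings>
<Actions Context="LocalSystem">
  <ComHandler>
    <ClassId>...</ClassId>    <!-- COM handler implemented in pnpclean.dll -->
  </ComHandler>
</Actions>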

Different Versions of Windows

To answer the question of which Windows update did away with this scheduled task we first look at the major releases, installing the OS clean from media which has each of the respective updates applied. These clean installs are then reviewed for the presence of the scheduled task. I additionally examined each task (compared by hash) and performed the same review of the 'pnpclean.dll’ within each version.

The table below details the result of this analysis:


Based upon a fresh install, the last major release of Windows 10 to come with the ‘Plug and Play Cleanup’ scheduled task in place was Windows 10 1607 (Anniversary Update). The DLL 'pnpclean.dll’ has been updated in each release (at least enough to cause different MD5 hashes). The scheduled task on the other hand is consistent between all three releases where it is observed (1507, 1511 and 1607).

One other interesting finding during research of this topic was that the PnPCleanup task has historically been associated with some issues. One example relates to AWS hosted servers, specifically Windows Server 2012 R2 AMIs made available before 10 September 2014. In these AMIs the task would sometimes identify the EC2 network device as inactive following a reboot and remove its driver from the system. This would cause the instance to lose network connectivity after a reboot, which is apparently a problem in cloud hosted servers... Further details here and here.

Persistence after Update

Regarding persistence after update, a significant number of changes are made to scheduled tasks between the major Windows updates, and as such my working assumption was that updating 1607 to a later version would result in the scheduled task being deleted. This was tested by taking a fresh install of 1607, connecting it to the network and allowing it to download and install all available updates. Following the update (and multiple restarts) the system was at 1803, yet when reviewing the Task Scheduler we can see that a ‘Plug and Play Cleanup’ task is still present. The ‘pnpclean.dll’ has however been updated; it was replaced with a new version that matches the one found in the fresh 1803 install (confirmed via hash match).

To test whether the task was actually being executed, I manually initiated Automatic Maintenance in each version of Windows under testing, then immediately (read: almost immediately) executed the following command on the command line:

schtasks | find "Queued"

In all cases running it before initiating Maintenance returned 0 results. Running it after initiating maintenance had the following results:


Most notably, on the system we examined, which was installed as 1607 and upgraded to 1803 via Windows Update, we saw the following:


When we then go on to review the task within Scheduled Tasks we can see that it has now been updated with a 'Last Run Time':
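For the command-line inclined, the task's status and 'Last Run Time' can also be confirmed without the GUI by querying the task verbosely (the task path below is as observed on my test systems):

schtasks /query /tn "\Microsoft\Windows\Plug and Play\Plug and Play Cleanup" /v /fo LIST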



In conclusion, the last major Windows OS release which included the 'Plug and Play Cleanup' scheduled task was Windows 10 1607. Installing from media which contains Windows 10 1703 will not cause the task to be created; however, updating from 1607 (or prior) to a more recent version will leave the scheduled task in place, and it will still be activated during system maintenance.

Some interesting resources relating to this feature can be found here:


2018-07-27

Methods to identify historical Time Zone configuration associated with a Windows PC

David Cowen at HECFBlog recently posed the following question:
On a Windows 10 system what are the different ways you could determine what timezones a user was in prior to the whatever timezone is stored in the registry?
In this post we will explore some of the ways you can determine historical time zone information associated with a device, specifically the time zone a Windows 10 system was configured with prior to its current configuration.

Windows Event Logs

When looking to confirm whether or when a system event has occurred, the Windows event logs tend to be a good place to start. Manual modification of the date/time will result in an event within the Windows Security log; specifically, ‘Event ID 4616: The system time was changed’ events are generated:

Event ID 4616 in the Windows Security Log

This event will detail the time before and after the change as well as the account which was responsible for the change. A review of most Windows 10 systems will identify a number of these log entries associated with the NTP service. 

All very useful information… However, the question posed relates to a change in system time zone and NOT a change in system time. Unfortunately, this event is not generated when the system time zone is modified, whether automatically or by the user.

Notably however, modification of the time zone on a system will generate a log entry, specifically Event ID 1 within the System Log. There are many events which will cause this log entry to be created but we are interested in those which detail “Change Reason: System time adjusted to the new time zone”. An example is displayed below:

Event ID 1 in the Windows System Log

The keen-eyed reader will note that this event also displays an OldTime and NewTime value, however the keener-eyed reader will note that these are recorded in UTC and as the time zone change does not have any impact on the time itself they both contain the same value. Ace!

But we aren’t completely giving up on event logs just yet. A further entry, Event ID 6013 within the System log does provide us insight into the time zone configuration of a system. In normal usage this event occurs every 24 hours and records the system uptime:

Event ID 6013 in the Windows System Log

On the face of it this doesn’t look too helpful, but if we select the details tab we can see that the event data does in fact include the system time zone at the time the event is recorded.

Details tab for Event ID 6013 in the Windows System Log

On this basis, if we have event log visibility for an adequate time window and the time zone of the system remains the same for at least 24 hours then we are guaranteed to be able to answer the posed question by combining the information from Event 1 and Event 6013 within the System log. On my system the uptime value is updated at 1200hrs UTC daily, and although I have not performed further testing I would hypothesise that this is likely to be the same for other systems. Additionally, this event appears to also be populated at system startup with an initial value (commonly detailing an uptime of between 9-12 seconds), thus adding to the likelihood that you will have one available within your in-scope window. For reference, in this post the ‘in-scope window’ refers to the time between the old time zone configuration being applied and the current time zone configuration being applied, during which the time zone configured is unknown.
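As a quick sketch, the relevant entries can be pulled with PowerShell on a live system (Get-WinEvent also accepts a -Path parameter for exported .evtx files); note that matching on message text, as below, is locale-dependent:

# Event ID 1 (Kernel-General) entries generated by a time zone change
Get-WinEvent -FilterHashtable @{LogName='System'; Id=1} |
    Where-Object { $_.Message -match 'new time zone' } |
    Select-Object TimeCreated, Message

# Event ID 6013 entries, whose details include the configured time zone
Get-WinEvent -FilterHashtable @{LogName='System'; Id=6013} |
    Select-Object TimeCreated, Message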

In the absence of useful logs, if for instance event logs have been cleared or rotated, or if we don’t have any useful Event 6013s, there are other ways in which we may be able to confirm the previous time zone configuration.

Windows Registry

The time zone information associated with a system is recorded within the registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation 
While we are specifically not interested in the values in the active registry (because these have been changed), one notable piece of information we can derive from analysing the registry in its current state is the Last Write Time associated with this registry key. This key is updated if the time zone is changed, and as such the Last Write time will also be updated, potentially indicating when the time zone was last changed. This can be used to corroborate findings from event logs or replace them where event logs are not available.
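The current values can be reviewed with a one-liner such as the following. Note that PowerShell does not natively expose the key's Last Write time; that is instead visible in most registry viewers and forensic tools:

# Review the active time zone configuration (values of interest shown)
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\TimeZoneInformation' |
    Select-Object TimeZoneKeyName, Bias, ActiveTimeBias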

Volume Shadow Copy

Noting that we are interested in historical data from the registry we can look to Volume Shadow Copies to see if these can help.

If we have a volume shadow copy that was created during the in-scope timeframe then we can determine the time zone of the system at the time the shadow copy was created by extracting and analysing the registry from the VSC. We can determine what time zone the system was in and the last modified time of the key at that time to determine how long the unknown configuration was in place for.

Additionally, while we may be lacking in logs in the active system, there may be useful copies of Windows Event Logs captured within Volume Shadow Copies, these can be analysed as detailed previously.
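On a live system, one way to enumerate the available shadow copies and expose one for this kind of review is via vssadmin and a symbolic link; the shadow copy number below is illustrative, and the trailing backslash on the device path is required:

vssadmin list shadows
mklink /d C:\vsc1 \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\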

By way of a worked example, on my personal system I was able to identify a Volume Shadow Copy dated 2018-07-19 00:09:14. Examining the TimeZoneInformation key from within the SYSTEM hive in the VSC, I identified that the TimeZoneKeyName was ‘GMT Standard Time’, which is consistent with its current state (don't forget to account for the other Bias values). The key was last modified on 2018-07-05 13:07:56 UTC. A review of the System log recovered from the VSC identified an Event ID 1 at 2018-07-05 13:07:56 UTC which indicated that the time had changed with “Change Reason: System time adjusted to the new time zone”, and I was able to confirm that there was no similar log event in the preceding 24 hours. Analysis of the Event 6013 log entries confirmed that for a number of days prior to this event, system uptime had been recorded and that the system time zone listed in the details of each log entry was ‘600 AUS Eastern Standard Time’.

Volume shadow copies can be a great resource when looking for historical data on a system, particularly for recovering old copies of the registry or otherwise overwritten log files. But what if there wasn’t a VSC generated at a convenient time? All hope is not lost. One other way we may get access to historical registry information could be the Hibernation File.

Hibernation File

A copy of the registry is resident in RAM while a system is online and causing a system to hibernate will result in RAM being written to disk as hiberfil.sys. If the system last hibernated during the in-scope period (while configured with the previous time zone) then analysing the hibernation file may prove fruitful.

It should be noted that Volatility has limited profile support for recent versions of Windows 10, as such it may not be possible to use the hibernation file in all instances. However, if you are dealing with a supported version of Windows 10, Hiberfil.sys can be converted to a regular image using the following command:

vol.py imagecopy -f hiberfil.sys -O hiberfil.img

Then the below command can be used to search memory resident hives for the TimeZoneInformation key: 

vol.py -f hiberfil.img --profile=[PROFILE] printkey -K "SYSTEM\CurrentControlSet\Control\TimeZoneInformation"

I have not personally used this method in anger to review TimeZoneInformation, but I have used it successfully to examine other keys in the past. The chances of a hibernation file existing from the perfect time may be slim, and if none of the above have worked then we might be getting a bit desperate. One option if all else fails is to examine the system for any third party application logs which might be able to shed light on the situation.

Third Party Application Logs

Windows systems are commonly littered with various log files associated with Windows activity and that of third party applications. Additionally, while it is an undisputed fact that the 11th commandment is “Thou shalt record your application logs in UTC”, many applications do not conform to this standard. This may be the bane of many an investigation, but in this case such logs can be useful.
One trick to try and determine the time zone of a system at a particular point in time is to find a logfile which was created during the in-scope period. Simply searching for files with a ‘.log’ extension and a created date prior to the last time zone change is often effective. Two different log types can be of use here:
  • Logs which record the time zone of the system along with timestamps (duh)
  • Logs which record timestamps in local time

The latter is more common than the former unfortunately, but they can often still be useful. A couple of examples are shown below:

Kaspersky:
Extract from Kaspersky Log File

The associated log file had a created date of 2017-06-14 17:27:58 (UTC), per the filesystem metadata.

Garmin:

Extract from Garmin Log File

The associated log file had a created date of 2017-08-17 08:15:34 (UTC).

Log files will sometimes have an opening entry detailing that logging has commenced; alternatively, we may have to rely upon the time of the first logged event. In both these examples we see that the filesystem recorded a file created time in UTC which was one hour behind the time recorded in the log file's first entries; this is consistent with the system having been configured to UTC+1.

This method is of course not foolproof, and as such it is best used in combination with other techniques. If it really is the only available evidence, then it would be advisable to test the logging behaviour of the specific application(s) which produced the logs before relying upon these findings too heavily.
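For what it's worth, a minimal sketch for surfacing candidate log files created during the in-scope window might look like the following (the search root and cut-off date are illustrative):

# Find .log files created before the last recorded time zone change
Get-ChildItem -Path C:\ -Filter *.log -Recurse -Force -ErrorAction SilentlyContinue |
    Where-Object { $_.CreationTimeUtc -lt ([datetime]'2018-07-05T13:07:56Z').ToUniversalTime() } |
    Select-Object FullName, CreationTimeUtc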

If logs don’t float your boat then there are alternative artefacts we can use. Forensicators love a bit of USB device analysis and in this instance the behaviour of setupapi.dev.log can be helpful.

setupapi.dev.log

The Setupapi.dev.log records timestamps in local system time. Therefore, if the user connected a USB device for the first time during the period of the unknown time zone configuration we can determine what the local system time was at that time. 

We can then analyse the registry keys associated with USB device activity as their last modified time will be recorded in UTC. The difference between these two timestamps will be the UTC offset in use at the time that the device was connected. 
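The log itself lives at C:\Windows\INF\setupapi.dev.log on Windows 7 and later, and can be searched for a device serial number with something as simple as the following (the serial number shown is hypothetical):

findstr /i "0123456789ABCDEF" C:\Windows\INF\setupapi.dev.log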

In the example below, we have used USBDeview to view a listing of USB devices and selected one which was last interacted with during the in-scope period:

USBDeview displaying details of one USB Mass Storage Device


A search of the setupapi.dev.log for the device serial number identifies log entries associated with its installation.

Extract from setupapi.dev.log


Notably the UTC timestamp derived from registry artefacts states 21:28:54 while the local time recorded in the log is 22:28:56 indicating that a time zone offset of UTC+1 was in use at the time.
On the subject of USB devices… exFAT to the rescue.

exFAT Volumes

In the (granted, unlikely) event that the system was used to write to an exFAT volume during the in-scope period, and if you have access to that same exFAT volume, then you are in luck! When writing files to an exFAT volume, Windows 10 will populate the timestamps of a file with the date/time as they appear on the system at the time, and will populate the time zone offset field with the offset in use on the computer, stored as a 7-bit signed integer representing 15-minute increments from UTC. Further information can be found in my recent blog post on the topic.

While the likelihood of having a useful exFAT volume interacted with during the period is pretty slim other activities are more commonplace, particularly on business issued equipment. The use of an email client to reply to email messages can give away clues as to the time zone of a system at a particular time.

Replied Email Analysis

By default, when an email message is replied to in Outlook, the child message is appended to the reply. The appended message will be displayed along with metadata such as the ‘Sent’ time; notably, the time displayed will be that seen on the local system, with any time zone adjustment applied.

If we examine a mailstore on the in-scope system and can pair up a reply in ‘Sent Items’ with the original message as received, we can compare the time listed for the original message as shown in the reply against the UTC timestamp stored within the mailstore, thus deriving the time zone offset in use on the system at the time that the reply was sent.

Below we see a sent item, this is a reply to another message and was sent during the time that the unknown time zone was set:

Email message reply


This message shows a sent time for the original email as being 09 July 2018 at 1647hrs. The original email message as it exists within the mailbox, viewed in Kernel OST Viewer and with the timestamps displayed as UTC is shown below:

Original Message (as replied to)


From this we can derive that the system was configured with a time zone offset of UTC+4 at the time the reply was sent.

This behaviour is true of other email applications besides just Outlook, but it is important to remember that mailbox synchronisation across multiple devices (with different time zone configurations) could cause confusion. Extended MAPI properties may be used to rule out messages which are not good candidates for this type of analysis, and mailboxes configured with POP3 will not have this issue.

Other suggestions

While the above suggestions did get progressively more obscure and less reliable, I had several other even more questionable ideas with varying degrees of usefulness. These included:
  • Calendar analysis – If ‘Use local time zones for each event’ is enabled maybe something can be derived from calendar events created during the time period.
  • Tracking cookies – I’m sure there will be some tracking cookies which might be helpful here.
  • Screenshots – Screenshots, if saved, can potentially capture the time the system is displaying, while also having metadata indicating when the screenshot was taken.
  • Read the user’s emails – Maybe they sent an email saying what time zone they were in…

Summary

In summary, I would look to the following evidence sources in this order:
1. Windows Event Logs (System Event IDs 1 and 6013)
2. Windows Registry
3. Volume Shadow Copies (for historical copies of the registry)
4. Hibernation File (to dump historical copy of the registry)
5. setupapi.dev.log vs. registry entries
6. Third Party Application Logs
7. exFAT volumes
8. Email Reply Analysis
9. Other miscellaneous suggestions

2018-07-12

Investigating Office365 Account Compromise without the Activities API

With the recent demise of the Office 365 Activities API, David Cowen at HECFBlog has chosen to focus his recent Sunday Funday Challenge on the remaining evidence sources available when investigating instances of Office365 account compromise. David posed the following question:
“Explain in a compromise of a Office365 account what you could review in the following circumstances.
  • Scenario a: only default logging in a E3 plan
  • Scenario b: Full mailbox auditing turned on
You are attempting in both scenarios to understand the scope of the attacker's access.”
The first point to note is that a compromise of Office 365 (while commonly referred to as Business Email Compromise (BEC)) is not necessarily limited to email accounts. Depending on how an organisation employs Office 365 they may host a wealth of information besides just email and attachments in O365, much of which could be valuable to an attacker. In the case of the in-scope E3 plan, each compromised user account could potentially expose:
  • Exchange — Email messages, attachments and Calendars (Mailbox size up to 100GB)
  • OneDrive — 1TB per user, unless increased by admins to up to 25TB.
  • SharePoint — Whatever sites that user has access to.
  • Skype — Messages, call and video call history data
  • Microsoft Teams — Messages, call and video call history data as well as data within integrated apps.
  • Yammer — Whatever it is people actually do on Yammer. Are you prepared for a full compromise of your organisation's memes, reaction gifs and cat pictures?

All of that before you concern yourself with the likelihood of credential reuse, passwords which may be stored within O365 (Within documents and emails) for other services, delegated access to other mailboxes and MDM functionality.

A Short(er) Answer

David has chosen to focus on an E3 Office 365 instance, with and without additional logging functionality enabled. Some evidence sources available in these two circumstances will be as follows.

Scenario a: only default logging in a E3 plan
Below is a non-comprehensive list of evidence sources which may be available to an examiner to assist in understanding the scale/scope of an O365 compromise:
  • Unified Audit Log, via Audit Log Search in the Security & Compliance Centre and accessible using the 'Search-UnifiedAuditLog' cmdlet. This will need to be enabled if not already, and appears to provide limited retrospective visibility if enabled after the fact.
  • Mailbox Content
  • Read Tracking 
  • Message Tracking Logs
  • Mailbox Rule information
  • Proxy Logs/ DNS Logs/ Endpoint AV Logs / SIEM
  • Office 365 Management Activity API
  • Azure Active Directory reports and Reporting Audit API (With Azure AD P1/P2)

Scenario b: Full mailbox auditing turned on
By default, Auditing is not enabled, nor are the more granular Mailbox Auditing and SharePoint Site Collection Audit options. However, if we assume that 'audit log search' has been enabled as well as the optional logging associated with enabling 'mailbox auditing' and that audit has been configured for all SharePoint site collections then the following additional evidence sources become available.
  • Unified Audit Log, includes events recorded as a result of enabling 'mailbox auditing'.
  • SharePoint Audit log reports

It should be noted that simply enabling mailbox audit logging for all mailboxes is not enough to capture all useful events. By default, only the 'UpdateFolderPermissions' action is logged, with additional events requiring configuration; these include Create, HardDelete, MailboxLogin, Move, MoveToDeletedItems, SoftDelete and Update events.

SharePoint audit logging is pretty granular and, in my experience, rarely enabled. However, if correctly configured a record of user actions including document access, modification and deletion actions can be generated.

These evidence sources, their usefulness and some suggested methodologies to leverage them are outlined in the following sections. In a number of cases I have listed links for suggested additional reading as many of these topics have been well documented by Microsoft or others before me.

Unified Audit Log

The Unified Audit Log (UAL) is currently the single best source of evidence (when available) for Office 365 account compromise investigations. If enabled, user and admin activity can be searched via the Security & Compliance Center or using the ‘Search-UnifiedAuditLog’ cmdlet. Logged activity from your tenant is recorded in the audit log and retained for 90 days. It should be noted that some latency occurs between events occurring and appearing in logs, in some cases (and for some event types) Microsoft detail that this can be up to 24 hours. I have had mixed results in testing whether events prior to enabling auditing become searchable if auditing is enabled after the fact and I plan to perform additional testing and update this post with the results.

By default, Audit log Search is not enabled and attempts to access or use the Audit Log Search functionality within the Security & Compliance Centre will be met with various errors and warnings:


Likewise, attempts to use the ‘Search-UnifiedAuditLog’ cmdlet will fail.

UAL search functionality can be enabled with the following PowerShell command: 

Set-AdminAuditLogConfig -UnifiedAuditLogIngestionEnabled $true
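The current state can be confirmed before and after with the following check:

Get-AdminAuditLogConfig | Format-List UnifiedAuditLogIngestionEnabled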

Per Microsoft’s 'Search the audit log in the Office 365 Security & Compliance Center' support article, by default the UAL will contain records of:
  • "User activity in SharePoint Online and OneDrive for Business
  • User activity in Exchange Online (Exchange mailbox audit logging)
  • Admin activity in SharePoint Online
  • Admin activity in Azure Active Directory (the directory service for Office 365)
  • Admin activity in Exchange Online (Exchange admin audit logging)
  • User and admin activity in Sway
  • eDiscovery activities in the Office 365 Security & Compliance Center
  • User and admin activity in Power BI for Office 365
  • User and admin activity in Microsoft Teams
  • User and admin activity in Yammer
  • User and admin activity in Microsoft Stream"

The same article includes the following important note:
“Important: Mailbox audit logging must be turned on for each user mailbox before user activity in Exchange Online will be logged. For more information, see Enable mailbox auditing in Office 365.”
Additionally, if the Audit Log Search functionality is enabled after the fact, it will take some hours to become available and thereafter up to 24 hours for some events to be populated.

If enabled, the Audit Log can be searched and exported in one of two ways. Firstly, it is accessible via the Security & Compliance Center by navigating to Search & Investigation -> Audit log search. This will present you with a screen as below which can be used to perform searches and export results:



Alternatively, the ‘Search-UnifiedAuditLog’ cmdlet can be used to perform searches and output targeted results. Some useful commands are provided below:

Dump ALL available Audit Data within a date range:

Search-UnifiedAuditLog -StartDate [YYYY-MM-DD] -EndDate [YYYY-MM-DD] | Export-csv "E:\Cases\InvestigationXYZ\AllAuditData.csv"

Unsurprisingly, on even medium-sized tenants, or where the date range is too large, this fails; specifically, the maximum number of records which can be retrieved during a particular session is capped at 50,000. The Microsoft blog post 'Retrieving Office 365 Audit Data using PowerShell' addresses this issue and provides a script which can assist. In any event, more targeted searches are advisable:

Dump ALL available Audit Data within a date range for a particular user:

Search-UnifiedAuditLog -StartDate [YYYY-MM-DD] -EndDate [YYYY-MM-DD] -UserIds [USER,USER,USER] | Export-csv "E:\Cases\InvestigationXYZ\UserActivity.csv"

Review failed login attempts for all users:

Search-UnifiedAuditLog -StartDate [YYYY-MM-DD] -EndDate [YYYY-MM-DD] -Operations UserLoginFailed -SessionCommand ReturnLargeSet -ResultSize 5000 | Export-csv "E:\Cases\InvestigationXYZ\FailedLogins.csv"

Find all log entries associated with a known malicious IP(s) during a specific date range:

Search-UnifiedAuditLog -IPAddresses [IPAddress] -StartDate [YYYY-MM-DD] -EndDate [YYYY-MM-DD] -ResultSize 5000 | Export-csv "E:\Cases\InvestigationXYZ\BadIPActivity.csv"

And for a list of IPs:

Search-UnifiedAuditLog -IPAddresses [IPaddress1],[IPaddress2] -StartDate [YYYY-MM-DD] -EndDate [YYYY-MM-DD] -ResultSize 5000 | Export-csv "E:\Cases\InvestigationXYZ\BadIPsActivity.csv"

Particular record types can be targeted using the '-RecordType' attribute and a list of attributes is provided in the MS documentation, here.

As previously mentioned, having Mailbox Auditing enabled will cause additional events to be logged in the UAL. To enable Mailbox Auditing for all mailboxes the following command can be run:

Get-Mailbox -ResultSize Unlimited -Filter {RecipientTypeDetails -eq "UserMailbox"} | Set-Mailbox -AuditEnabled $true

Separately from enabling UAL Search, additional logging detail can be captured by enabling ‘Mailbox Auditing’. The resulting events will be accessible using the UAL Search but it should be noted that if auditing is not enabled prior to an incident then visibility cannot be added after the fact. I have only performed limited testing of this over a short period so would be interested to hear if anyone has contrary experience.

By default, enabling mailbox auditing will only record 'UpdateFolderPermissions' events so additional configuration is required to ensure that other owner actions are captured. The below command will enable all available owner actions for all mailboxes:

Get-Mailbox -ResultSize Unlimited -Filter {RecipientTypeDetails -eq "UserMailbox"} | Set-Mailbox -AuditOwner @{Add="MailboxLogin","HardDelete","SoftDelete","FolderBind","Update","Move","MoveToDeletedItems","SendAs","SendOnBehalf","Create"}

Be aware that enabling all of these auditing settings for an entire tenant will flood the UAL with audit events and can cause a lot of noise (and potentially a performance impact for searches). More details on enabling Mailbox Auditing are available here. The audit data is also apparently stored in such a way that it contributes to each user's mailbox storage allocation, and this can cause issues if the recorded activity becomes too large.

To review the audit status of a particular account the following command can be used: 

Get-Mailbox -Identity [target mailbox] | fl name,*audit*

Once enabled, the same UAL queries above will return more detailed results of user activity within mailboxes, most notably MailboxLogin events. In addition to the ‘Search-UnifiedAuditLog’ cmdlet there is also a 'Search-MailboxAuditLog' cmdlet which can be employed; documentation for it can be found here.
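By way of a sketch (the mailbox identity, date range and output path are hypothetical), a targeted mailbox audit log search might look like the following:

Search-MailboxAuditLog -Identity john.doe -LogonTypes Owner,Delegate,Admin -ShowDetails -StartDate 2018-06-01 -EndDate 2018-07-01 | Export-csv "E:\Cases\InvestigationXYZ\MailboxAudit.csv"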

Some further useful resources on UAL and Mailbox Auditing are as follows:
While traditional logs (enabled or otherwise) are always a go-to source of evidence there are a number of other valuable sources which can help to identify compromised accounts and understand the scope and impact of an account compromise.

Mailbox content

The content of mailboxes is important in BEC cases for several reasons; being able to search, access and review it can help answer the following questions:
  • Who currently has a known malicious email message in their inbox?
  • Who may have read a malicious email message?
  • What was the provenance, payload and content of a malicious email message?
  • What items have been sent from a compromised account?
  • What might the possible exposure be?

The content of a mailbox, or all mailboxes, as they exist at the time of analysis can be determined through the use of a number of PowerShell cmdlets. It should be noted that mailbox content searches/reviews can be targeted or tenant wide and the usefulness of these investigative methods will often be contingent on the number of users in a tenant.

Please also note that the use of these cmdlets to identify and delete malicious messages is not without risk. Typos, unnecessarily broad queries and other unforeseen complications can become CV-generating moments when you delete data you shouldn't have.

Hunting for Known Malicious Messages
It is quite common in the early phases of incident investigation for IT/IR staff to be provided with a copy (often forwarded) or a description of a phishing message. It can be desirable to capture a forensically sound copy of such a message for analysis.
Per its Microsoft documentation, "you can use the Search-Mailbox cmdlet to search messages in a specified mailbox and perform any of the following tasks:
  • Copy messages to a specified target mailbox.
  • Delete messages from the source mailbox. You have to be assigned the Mailbox Import Export management role to delete messages.
  • Perform single item recovery to recover items from a user's Recoverable Items folder.
  • Clean up the Recoverable Items folder for a mailbox when it has reached the Recoverable Items hard quota."

"Note: By default, Search-Mailbox is available only in the Mailbox Search or Mailbox Import Export roles, and these roles aren't assigned to any role groups. To use this cmdlet, you need to add one or both of the roles to a role group (for example, the Organization Management role group). Only the Mailbox Import Export role gives you access to the DeleteContent parameter."

Additionally, as Office 365 E3 Subscriptions (or Exchange Plan 2) come with eDiscovery functionality, we can also leverage the Discovery Search Mailbox and eDiscovery functionality to assist in collating the messages we wish to analyse. The below examples show queries which can be used to identify and copy samples of malicious messages:

Get-Mailbox | Search-Mailbox -SearchQuery "Subject:phish" -TargetMailbox "Discovery Search Mailbox" -TargetFolder "IncidentXYZ" -LogLevel Full

This command searches all mailboxes for email messages containing the string "phish" in their subject and copies them to the Discovery Search Mailbox, within a folder called 'IncidentXYZ'; if the folder does not exist it will be created. Setting the ‘-LogLevel’ parameter to Full will cause a CSV of results to be generated and delivered to the target mailbox; this can be extremely useful and is recommended.

Alternatively, we can export to any other mailbox as below:

Get-Mailbox | Search-Mailbox -SearchQuery "Subject:phish" -TargetMailbox "anyone@yourdomain.com" -TargetFolder "IncidentXYZ" -LogLevel Full

Note however that the TargetMailbox will be excluded from any search, so be sure it doesn't contain any respondent data or that data will be missed.

In both cases it can be preferable to get a feel for the number of matching results prior to executing a copy command. This can be achieved with the '-EstimateResultOnly' parameter, which will perform the search but not copy any messages; example below:

Get-Mailbox | Search-Mailbox -SearchQuery "Subject:phish" -EstimateResultOnly

In these examples we have relied upon a known string within the subject however there are a number of search criteria which can be used as alternatives or in combination, e.g.:

from:hax0r@baddomain.cf
attachment:trojan*
sent:"last week" 

The last of these, and indeed any other date query, can accept a date (YYYY-MM-DD), date range (YYYY-MM-DD..YYYY-MM-DD) or date interval (e.g. today, yesterday, this week, this month). A fuller list of queryable attributes and descriptions is available here.

Also associated with eDiscovery functionality and again requiring the Mailbox Search role are the New-ComplianceSearch, Get-ComplianceSearch and Start-ComplianceSearch cmdlets.

Performing a Compliance Search will allow you to group messages for export, preservation or deletion. As with all of these queries, it is important to be specific enough to capture only the messages of interest; source email addresses and unique strings from subjects (particularly when combined with tight date ranges) make for good queries.

The example command below will capture all messages from hax0r@baddomain.cf containing the string 'phish' in the subject and name the search ‘IncidentXYZ-PhishingMessages’.

New-ComplianceSearch -Name "IncidentXYZ-PhishingMessages" -ExchangeLocation All -ContentMatchQuery '(From:hax0r@baddomain.cf) AND (Subject:"*phish*")'

While we are here, we are also able to remove identified malicious messages using New-ComplianceSearchAction, assuming we have already used New-ComplianceSearch to identify the malicious messages, as in the above example:

New-ComplianceSearchAction -SearchName "IncidentXYZ-PhishingMessages" -Purge -PurgeType SoftDelete

This methodology is covered in greater detail here.

Extracting a sample of malicious message(s)
An alternative method for searching for known malicious messages is to use the Security & Compliance Center, as detailed here. The Security & Compliance Center is probably the easiest way to extract sample messages as you can perform searches, review results, then download individual email messages as .eml or groups of messages to a .pst, all from the comfort of a GUI.

Analysis of such samples helps in understanding the provenance, payload and content of a malicious email message. The same methodology can be used to export email messages sent from compromised accounts by malicious actors, or to perform wholesale exports of mailbox(es) for legal review when trying to assess the impact associated with a mailbox compromise.

Read Tracking

If enabled (it is unfortunately disabled by default), Read Tracking via the 'Get-MessageTrackingReport' cmdlet can be used to determine whether email messages within the organisation have been read. Commonly, attackers will compromise one account and then use it to phish other accounts for credentials, so being able to quickly determine how many users received and read these messages can be helpful.

You can check whether Read Tracking is enabled with the following PowerShell command: 

Get-OrganizationConfig | Select ReadTrackingEnabled

If it is enabled then you are in luck and you can follow the below guides, or associated scripts to confirm who has read a particular email message:


Read tracking can be enabled with the below command:

Set-OrganizationConfig -ReadTrackingEnabled $true

However, it should be noted that this will not have a retroactive effect; it needs to have been enabled before the notable messages were sent.

Using the methods detailed above, we can identify, extract and, if required, delete malicious messages from the mailboxes of users. However, content searches are rarely the most efficient method to answer some of these questions. Commonly we just require details of all recipients of a particular malicious email, or details of all accounts which have interacted with known malicious email addresses, or perhaps a list of all addresses which were contacted from a compromised account during the known window of compromise. In these instances, Message Tracking Logs can be very helpful.

Message Tracking Logs

Message tracking logs provide a record of messages which have been transmitted into, out of and within a tenant and as such can be invaluable in instances of BEC.
An example command is detailed below:

Get-MessageTrace -StartDate [START_DATE] -EndDate [END_DATE] -PageSize 5000 | Where {$_.Subject -like "*phish*"} | ft -Wrap

This command searches Message Trace Logs for messages sent between two dates, with the string "phish" in the subject. While it increases the page size to 5000 (the maximum) from a default of 1000 this still may not be adequate in a large tenant or in a wide date range. In these cases a script may be the best solution and one such script can be found here.

Note that the StartDate can't be more than 30 days before the date the command is run. Also, the results can be truncated when long lists of recipients are associated with a single message, and any mailing lists will have to be manually enumerated to confirm which users would be expected to receive an email message addressed to a group or shared mailbox.

It can also be useful to target messages originating from known bad email addresses or domains, e.g.:

Get-MessageTrace -SenderAddress *@baddomain.cf | ft -Wrap

There are a number of other uses for the Message logs, and different ways queries can be used. The Microsoft Documentation associated with the cmdlet provides a number of examples and details the different query constraints which can be used.

Additionally, message trace information can also be sought within the Exchange Admin Console by navigating to EAC -> mail flow -> message trace.

Mailbox Rule information

A common technique in cases of Business Email Compromise is the use of rules by attackers to cover their tracks. The presence of maliciously added rules can often act as a quick and effective indicator to identify accounts which have been compromised.

Attackers will commonly employ sets of rules which I have come to refer to as "folder and forward" rules. They will set mailboxes to forward emails (either wholesale or subject dependent) to an external email address so they don't need to monitor the account constantly. They also often use rules to hide their malicious messages by employing foldering rules. These rules will cause any message with a specific subject (i.e. the subject they use in their fraudulent or phishing emails) to be marked as read and sent directly to a folder such as Junk, RSS or any other folder the user isn't likely to review as soon as the message is received.

In some organisations rules of this type are rare and therefore stand out immediately but YMMV, particularly in organisations where the use of these types of rules is widespread.

The following PowerShell command can help to identify this activity:

foreach ($user in (Get-Mailbox -ResultSize Unlimited).UserPrincipalName) {Get-InboxRule -Mailbox $user | Select-Object MailboxOwnerID,Name,Description,Enabled,RedirectTo,MoveToFolder,ForwardTo | Export-CSV E:\Cases\InvestigationXYZ\AllUserRules.csv -NoTypeInformation -Append}

This command will iterate through the list of all users, returning details of the rules they have configured, and will produce a CSV of the results for analysis.

In addition to the use of rules, an attacker can employ the ForwardingSmtpAddress feature to forward all email messages received at a compromised account to another address. This can be identified with the below command.

Get-Mailbox -ResultSize unlimited | where { $_.ForwardingSmtpAddress -ne $NULL }

This command will produce a listing of all accounts where the ForwardingSmtpAddress is not blank. It isn't uncommon for the above command to return no results, as the use of ForwardingSmtpAddress is not widespread in my experience. But blank results can make some examiners uneasy, so an alternative approach is to use the below command:

Get-Mailbox -resultSize unlimited | select UserPrincipalName,ForwardingSmtpAddress,DeliverToMailboxAndForward

This command will produce a listing of all accounts, including details of the forwarding address (if enabled) and DeliverToMailboxAndForward status.
We can pipe these commands to CSV for later review (recommended) as follows:

Get-Mailbox -resultSize unlimited | select UserPrincipalName,ForwardingSmtpAddress,DeliverToMailboxAndForward | Export-csv E:\Cases\InvestigationXYZ\FullForwarding.csv -NoTypeInformation

While all of the above commands are currently written to search a full tenant, they can be modified to target specific mailboxes or groups/lists of users if required.

Proxy Logs/ DNS Logs/ Endpoint AV Logs

I raise these evidence sources as a reminder not to be blinkered by the Office 365 component of a compromise. A significant number of users will never use Office 365 off premises and as such evidence of accessing phishing links, of malware infection and other user activity may be in more traditional locations.

Commonly it is desirable to understand which users have not just received a phishing email but also followed malicious links. While the use of DNS logs, Proxy and Firewall logs may not provide 100% coverage they can be an invaluable source of evidence in identifying at least some of the impacted users.

Office 365 Management Activity API

While Unified Audit Log Search may not be enabled on a tenant, much of the same data is still accessible (albeit with less granularity where mailbox auditing is not enabled) via use of the Office 365 Management Activity API.

This API is worthy of a separate post on its own and this post is long enough as it is, so I won't go into full detail here, but in the meantime some useful resources are provided below:


Additionally, while the above resources would assist in writing your own tool/application to query the API for useful information some of the hard work has already been done for you and some example tools and scripts which make use of the API are as follows:

AdminDroid Office 365 Reporter is one third-party tool I have used in the past (a client had it deployed); it leverages the API and allows user activity reports, along with many other useful reports, to be pulled.

Likewise, if the organisation in question has a SIEM with O365 integration it is likely already using the Office 365 Management Activity API to pull data out of the audit logs, and this data may be retained longer than the 90-day limit. In such cases the organisation's own SIEM may be the best source to query.

Azure Active Directory reporting audit API

Azure AD has an 'audit logs activity report' and 'sign-ins activity report', as well as 'Risky sign-ins' and 'Users flagged for risk' functionality available. These reports and metrics however require an Azure Active Directory premium subscription (P1 or P2).

If at least one user has a license for Azure AD Premium then the sign-ins activity report within the Azure AD Portal can be used to provide information regarding sign-ins. These reports are accessible via the GUI and can be downloaded as CSV. Further details are provided here.

While I have used the Azure AD GUI to pull logs I haven't played with the associated API yet and need to perform some testing. I am particularly interested to determine whether adding an Azure AD P1 subscription to one user post-incident will allow for historical visibility. My limited testing of the GUI suggests that logs from prior to an Azure AD subscription being added are available once the subscription starts, but I have read contrary reports in a number of places.

A PowerShell script called 'Pull Azure AD Sign In Reports' has been put together by Microsoft employee Tim Springston and will pull these reports if the appropriate subscription is available.

Other Sources

No doubt other evidence sources will exist depending on the type of incident which occurs. One notable complication will be if an administrative account is compromised as there may be concerns that unauthorised admin actions have been performed.

Besides various queries which can be used to search for evidence of admin credential abuse (e.g. looking for recently added and modified accounts etc) it is also possible to use the Search-AdminAuditLog cmdlet and Admin Audit reports within the Security & Compliance Center to investigate such concerns.

Additionally, if evidence of unauthorised SharePoint access is identified or suspected, then having audit settings enabled for the associated site collection will be invaluable. Details of SharePoint audit settings are available here. The associated events will be populated in the UAL and an example command is as follows:

Search-UnifiedAuditLog -StartDate [YYYY-MM-DD] -EndDate [YYYY-MM-DD] -RecordType SharePointFileOperation -Operations FileAccessed -SessionId "SharepointInvestigation" -SessionCommand ReturnNextPreviewPage

This command will return FileAccessed events during a specified date range for all sites where this event is recorded.

--

Hopefully this post is useful to those engaged in investigating instances of Office 365 account compromise. No doubt there are other scenarios and other evidence sources that I won't have thought of, but that's what the comments section is for. I will keep this post updated as I continue to test some areas further.

2018-07-01

exFAT Timestamps: exFAT Primer and My Methodology

Following my quick and dirty post on exFAT timestamp behavior I wanted to follow up with a fuller post (or posts) detailing my methodology and observations. I started looking into the workings and different implementations of exFAT because it had been raised by David Cowen in one of his recent ‘Sunday Funday' challenges, but everywhere I looked there was something interesting requiring further analysis. To that end I will be following up with a few posts detailing how different operating systems handle exFAT and how different tools (forensic and non-forensic) interpret the filesystem.

The original question posed by David was:
ExFAT is documented to have a timezone field to document which timezone a timestamp was populated with. However most tools just see it as FAT and ignore it. For this challenge document for the following operating systems how they populate ExFAT timestamps and which utility will properly show the correct values.
Operating systems:
  • Windows 7
  • Windows 10
  • OSX High Sierra
  • Ubuntu Linux 16.04
At first this seemed like a pretty basic challenge, but once I started poking around I realised the myriad of issues and inconsistencies in how different operating systems implement exFAT.

This post will serve as a brief introduction to exFAT, focusing principally on how it handles the recording of MAC times. I will also detail my testing methodology for those who are interested before I expand on results in some follow up posts.

exFAT Primer

I won’t go into full detail of the history, functionality and workings of exFAT, principally because I would all too quickly expose my ignorance and, more importantly, because a much better job than I could hope to match has already been done by Robert Shullich and Jeff Hamm, whose various resources I came to rely upon when researching exFAT.

Key resources/ references were:
Robert Shullich’s GCFA Gold Paper ‘Reverse Engineering the Microsoft Extended FAT File System (exFAT)’ (2009)
Jeff Hamm’s paper ‘Extended FAT File System’ (2009)
Jeff Hamm’s post on the SANS Digital Forensics and Incident Response Blog entitled ‘exFAT File System Time Zone Concerns’ (2010)
Jeff and Robert's presentation/talk entitled ‘exFAT (Extended FAT) File System - Revealed and Dissected' (2010)

I highly recommend the above resources if you need a crash course in exFAT.

As this research focused on the time zone field used in exFAT directory records and how exFAT timestamps are populated by different operating systems, I have ignored other significant portions of the filesystem in this primer.

Within exFAT, file metadata is stored within Directory Entries. The Directory Entry associated with any particular file will comprise at least three records: a Directory Entry Record, a Stream Extension and a Filename Extension. Records are 32 bytes long, with the first byte being a Type Code which identifies the type of record. Type Codes are addressed in more detail later.

Due to the details of the testing performed later, the overwhelming majority of Directory Entries reviewed in this post will contain only three records; however, it should be noted that it is common for a Directory Entry to contain more. This occurs when a filename exceeds a certain length: the maximum filename length which a single Filename Extension Record can support is 15 characters, with longer filenames requiring additional Filename Extension Records. Examples of both are seen in the screenshot below:


This screenshot is associated with an exFAT volume named ‘WinFormat’; at the root of the volume there is one directory, ‘System Volume Information’ (highlighted in red), and four files, ‘16_04.txt’, ‘18_04.txt’, ‘Win10.txt’ and ‘Win7.txt’, the last of which is highlighted in green.

Focussing first on the root entry, we have three Directory Entry Records: a 'Volume Label Directory Entry' (starting 0x83), an 'Allocation Bitmap Directory Entry' (starting 0x81) and an 'Up-Case Table Directory Entry' (starting 0x82). These are covered in more detail in the recommended reading; however, it is the Directory Entries associated with files, containing our file metadata, which we are most interested in. Incidentally, 0x83 is the Type Code associated with a named Volume Label Directory Entry; if the volume were unnamed, the Type Code would be 0x03.

This screenshot shows the two entries associated with a directory (red) and a file (green) respectively, the latter serving as an example of the above-detailed use of additional records when a filename is too long. We can see that there are three records associated with the red Directory Entry: a Directory Entry Record (0x85), a Stream Extension (0xC0) and a Filename Extension (0xC1), while our Directory Entry highlighted in green is the same but contains two Filename Extensions (two consecutive records with the 0xC1 Type Code).

In the following screenshot we dive into the Directory Entry associated with the same file. I couldn’t find any good worked examples for manual decoding so made this one and hope it is useful. 

A brief description of the components is as follows:

Type Code – Details the Record Type. When dealing with active (not deleted) Directory Entry Records, we are interested in 0x85, 0xC0 and 0xC1. We will only deal with these three as we explore Timestamp and Timezone behaviour.

Notable Type Codes you are likely to encounter are:

Type Code  Definition
0x85  Directory Entry Record
0x83  Volume Name Record, Master Entry (Named Volume)
0x03  Volume Name Record, Master Entry (Unnamed Volume)
0x82  Up-Case Table Logical Location and Size
0x81  Bitmap Logical Location and Size
0xC0  Directory Entry Record, Stream Extension
0xC1  Directory Entry Record, Filename Extension
0x05  Deleted File Name Record
0x40  Deleted File Name Record, Stream Extension
0x41  Deleted File Name Record, Filename Extension


Number of Secondary Entries – The number of secondary records associated with the entry. As mentioned previously, my test files generally have short filenames, and so their Directory Entries comprise a Directory Entry Record, a Stream Extension and a Filename Extension; as such they have a value of 2 for the Number of Secondary Entries.

Checksum – Checksum of Record Entry, beyond the scope of this analysis.

Flags – Metadata Flags (Read Only, Hidden, System File and Archive)

Created/ Last Modified / Last Accessed – Now we are talking… These are 32-bit MSDOS timestamps with the associated limitation of having a granularity of 2 seconds. This limitation is accounted for by the use of additional fields to permit greater granularity as detailed later.
A detailed explanation of the 32-bit Windows Time/Date Format can be found here, but to be honest most analysis tools and hex editors are perfectly capable of parsing it.
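That said, if you do want to script the decoding yourself, a minimal PowerShell (5.0+) sketch, assuming the exFAT layout of the date in the high word and the time in the low word, is as follows:

# Decode a 32-bit MSDOS-style exFAT timestamp (date in high word, time in low word)
function Convert-DosTimestamp {
    param([uint32]$Value)
    $date = ($Value -shr 16) -band 0xFFFF
    $time = $Value -band 0xFFFF
    [datetime]::new(
        1980 + (($date -shr 9) -band 0x7F),   # years since 1980
        (($date -shr 5) -band 0x0F),          # month
        ($date -band 0x1F),                   # day
        (($time -shr 11) -band 0x1F),         # hours
        (($time -shr 5) -band 0x3F),          # minutes
        (($time -band 0x1F) * 2)              # seconds, at 2-second granularity
    )
}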

Creation/ Last Modified centisecond offset – Additional fields associated with the Creation and Last Modified timestamps allow an operating system to record a value between 0 and 199 to denote the number of centiseconds which should be added to the recorded MSDOS timestamp. Note, no such field exists for the ‘Last Accessed’ value.

Created Time Zone Code / Modified Time Zone Code / Accessed Time Zone Code – 
The Time Zone Code is a one-byte value. The most significant bit denotes whether the application/OS which last updated the timestamp supported and made use of the Time Zone Code, and therefore whether the time zone offset should be applied during subsequent interpretation. The required offset is then stored as a 7-bit signed integer; the integer itself represents 15-minute increments from UTC, allowing for the accommodation of time zones which do not fall on the hour. In case you were wondering, examples include Eucla, Western Australia (UTC+8:45), Chatham Islands, New Zealand (UTC+12:45) and Nepal (UTC+5:45).

For ease of reference I have transcribed some common (read: “not Eucla, WA”) timezones and their corresponding Time Zone code.

Hex TimeZone Hex TimeZone
0xB8 UTC+14 0x80 UTC
0xB4 UTC+13 0xFC UTC-1
0xB0 UTC+12 0xF8 UTC-2
0xAC UTC+11 0xF4 UTC-3
0xA8 UTC+10 0xF0 UTC-4
0xA4 UTC+9 0xEC UTC-5
0xA0 UTC+8 0xE8 UTC-6
0x9C UTC+7 0xE4 UTC-7
0x98 UTC+6 0xE0 UTC-8
0x94 UTC+5 0xDC UTC-9
0x90 UTC+4 0xD8 UTC-10
0x8C UTC+3 0xD4 UTC-11
0x88 UTC+2 0xD0 UTC-12
0x84 UTC+1
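As a companion to the table, the following sketch decodes an arbitrary Time Zone Code; the sign extension of the 7-bit value is the key step:

# Decode an exFAT one-byte Time Zone Code into a UTC offset string
function Convert-ExFatTimeZoneCode {
    param([byte]$Code)
    if (-not ($Code -band 0x80)) { return 'Offset not recorded (high bit clear)' }
    $raw = $Code -band 0x7F                 # lower 7 bits: signed count of 15-minute increments
    if ($raw -ge 0x40) { $raw -= 0x80 }     # sign-extend the 7-bit value
    $offset = [TimeSpan]::FromMinutes($raw * 15)
    $sign = if ($raw -lt 0) { '-' } else { '+' }
    'UTC{0}{1:hh\:mm}' -f $sign, $offset.Duration()
}

Convert-ExFatTimeZoneCode 0xB8   # UTC+14:00
Convert-ExFatTimeZoneCode 0x84   # UTC+01:00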

Length – Filename Length (Unicode Characters)

Filename Hash – a hash of the filename, used for expediting searches.

Valid Data Length – Logical File Size (Bytes)

First Cluster Address – Address of the first cluster...

Data Length – Logical File Size (Bytes)

Filename – Unicode string of filename

On the basis of the above, it is clear that if we are concerned with having a correct understanding of the time/date artefacts associated with analysed evidence then there are 8 fields we are interested in: the Created, Modified and Accessed timestamps, each of their corresponding Time Zone Codes, and the 0-199 'centisecond offset' fields associated with the Created and Modified timestamps.

OS Population of Time Stamp and Time Zone fields

The first half of the puzzle is understanding how different operating systems employ the different fields. David posed the question in relation to Windows 7, Windows 10, OSX High Sierra and Ubuntu Linux 16.04; however, as I am a glutton for punishment, I have chosen to extend the scope to Windows 8.1 and Ubuntu 18.04. I will note here that I strongly considered playing with XP (with added exFAT support) and Vista, as I know they throw a further spanner in the works with their partial implementations of Microsoft’s own filesystem functionality… but maybe another day.

Before I detail the testing performed I want to highlight some limitations (deliberate and otherwise):
  1. I am only considering removable media, which is to say, I am considering the impact on an analyst who receives a piece of removable media with no indication as to the configuration of system(s) which have interacted with it. 
  2. I have had to employ virtual machines for some of the OS testing. My assumption is that USB passthrough within VMware Workstation works at the physical layer and that a mass storage device used in this way will operate in the same manner as a directly connected device. But it IS an assumption.
  3. I’m a human being. Many of the operations were deliberately performed manually as opposed to using scripts. As such, the time/date values which I recorded in my notes are expected to be off by a few seconds. This could cause errors in my interpretation of the results of any testing, but it was considered throughout the process.
Test Environment
The following systems were used for testing:
  • Windows 10 – Custom Build PC (Windows 10.0.17134.112)
  • Windows 8.1 – VMware Workstation 14 hosted VM using USB passthrough (Version 6.3.9600)
  • Windows 7 – VMware Workstation 14 hosted VM using USB passthrough (Version 6.1.7601)
  • Ubuntu 18.04 – VMware Workstation 14 hosted VM using USB passthrough (fresh install required addition of the exfat-fuse driver)
  • Ubuntu 16.04 – VMware Workstation 14 hosted VM using USB passthrough (SIFT Workstation)
  • OSX High Sierra – MacBook Pro installed with OSX 10.13.3
Tests Performed
Broadly, my testing procedure was as follows:

Volume creation tests
Connect a physical USB device to each of the operating systems and format the drive as exFAT, capturing an image of the drive after each format. This was to facilitate examination of differences (if any) in how the tested operating systems generated the exFAT volume.
File Write Tests (A)
  1. Format a USB device using Windows 10 to employ the latest ‘official’ implementation of exFAT.
  2. Create directories associated with each tested OS on the USB flash drive
  3. Present the flash drive to each test system and then manually perform the following steps:
    1. Create file on USB device (noting time)
    2. Create file on desktop of test system (noting time)
    3. Copy file from desktop to USB device (noting time)
    4. Change Timezone (noting change details)
    5. Create file on USB device (noting time)
    6. Copy file from desktop to USB device (noting time)
    7. Revert timezone - London (UTC+1)
    8. Cut/Paste (Move) file to USB device (noting time)
The aim of this test was to establish how the different operating systems employed and modified (or didn’t) the different timestamps during these operations.

During this testing phase all actions were performed manually where possible, e.g. using right click > "new file" etc. to simulate user behaviour, and copy/paste within the GUI to move files. Ubuntu does not support this method of creating a file (to my knowledge), and I didn't want to rely upon any other significant applications lest they have an impact on how the OS operates, so in those instances the ‘touch’ command was used on the command line to generate the test files.

File Write Tests (B)
1. Format a USB device using Windows 10 to employ the latest ‘official’ implementation of exFAT.
2. Present the flash drive to each test system and then employ scripting methods to create a textfile on the root of the drive named per the tested OS and containing a string representation of the date/time at which the action was performed.
a. E.g. for Ubuntu: date > 16_04.txt

File Write Tests (C)
Replicated File Write Tests (A) in a fully automated fashion where unexplained anomalies had been observed. Only performed for a limited number of Operating Systems and specific tests. 

The results of all tests were then written to a spreadsheet where the date/time as recorded in my notes could be compared to the relevant fields as they appear on disk. I then furnished the same spreadsheet with details of how each tool I analysed displayed the timestamps, and as I am sure any reader will agree, this made things immediately clear:


While the spreadsheet of doom is fairly daunting, it does allow for the easy identification of patterns and anomalies. The core conclusion at this point was that there were more anomalies than consistencies, and that there was a lot more work to do.

I will be exploring the different operating systems one by one in follow up posts and will go into how the relevant fields are (or are not) populated.