Tuesday, January 31, 2017

ED without the R Lab Setup

Hello readers! Unfortunately, in the latter half of 2016 I was not able to write as much as I intended due to personal matters, but this year my goal is to write at least one blog post per month.

For my first blog post of 2017, I wanted to show how to set up an endpoint detection lab using Sysmon, Windows auditing, and the free version of Splunk. There are many endpoint detection and response products such as Carbon Black, CrowdStrike's Falcon Host, Tanium Trace, Endgame, etc. If you have used any of these tools, then you know the value they provide from an incident response, detection, and threat hunting perspective. Unfortunately, these tools come at a cost. Thankfully, the great team at Sysinternals released Sysmon, which provides visibility on par with the commercial tools I mentioned.

Laptop or desktop capable of running 3+ virtual machines
Virtualization software (VirtualBox, VMware, etc.)
Windows 7, 8.1, or 10 Pro or Enterprise ISOs and licenses (x64 or x86)
Windows Server 2008R2+ (optional)
*Note that the Pro and Enterprise versions are needed for the Windows auditing policy.
Set up a free Splunk account to download the installer and the add-ons.

Splunk Add-Ons
TA for Microsoft Windows: this will provide the extracted fields for Windows event logs

Add-on for Sysmon

Splunk Universal Forwarder for Windows


Google Chrome

For licensing, if you are a working professional you may have access to all of these through job perks; if not, have a chat with your manager :) . For students, look into Microsoft Imagine, formerly known as DreamSpark; depending on your school's subscription level, you may get access to all of the software for free or at a discount.

Splunk Instance
Your first VM will host the Splunk instance. Assign at least 2 cores, 4GB of RAM, and 120GB of drive space; if you can do more, by all means increase. I recommend a 64-bit install of Windows for this VM, along with applying the latest patches. Install Splunk per the video, then install Google Chrome and set it as the default web browser. If all goes well, Chrome will open to http://localhost:8000 and you will be presented with the Splunk interface. Yay!

Installing Add-Ons
You may be wondering what these add-ons are for. Short answer: they contain the field extraction syntax for the logs we will be collecting from our second VM, so that we can search on relevant fields like EventCode (Event ID), Process (Process Name), etc.

There are two ways to install add-ons: you can download the compressed packages using the links I posted and install them through the Splunk interface, or use the Splunk apps button you see in the screenshot. Both require a Splunk account. Which one is easier? The second option. Search for the add-on names as I have posted them; when you click on the install button you will be prompted for your Splunk account credentials. Enter them and the add-ons will be automagically installed. If you run into issues or want to do it manually, here is a guide from Splunk.

After you have finished installing the add-ons, set the networking type of the Splunk VM to host-only.
Make note of the IP address that is assigned. Create a snapshot.

The second VM can be Windows 7, 8.1, or 10 (x86 or x64), or even Windows Server 2008R2+; it depends on your testing/analysis goals. Patching is optional.
You can set up this VM with a single core, 2-4GB of memory, and 60-80GB of drive space.

To configure the Windows auditing policy you can use the Local Security Policy MMC console or import my settings with auditpol. If you choose the latter, you can download the file from here. The file is in CSV format, exported with the auditpol utility from my test system; it can be opened in Notepad++ or Excel for review. To import the settings, save the file to a location inside the 2nd VM, something like C:\test\audit.csv, then open an elevated command prompt and type:

                                          auditpol /restore /file:C:\test\audit.csv 

If all goes well, it should look like the screenshot below.
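If you want a second check beyond the screenshot, auditpol can also dump the effective policy back out; a quick sanity check (the categories shown will vary with your Windows version):

```
REM Dump the full effective audit policy to confirm the import worked
auditpol /get /category:*

REM Or spot-check a single category, e.g. process tracking
auditpol /get /category:"Detailed Tracking"
```

If a subcategory you imported still shows "No Auditing", re-run the restore from an elevated prompt.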
Next, transfer or download Sysmon to your 2nd VM. Change into the directory where sysmon.exe or sysmon64.exe resides and execute the following (use sysmon64 if the VM is 64-bit):

sysmon -accepteula -i -h md5,sha256 -n -l

This installs Sysmon as a service/driver and logs the MD5 and SHA256 hashes of launched executables, network connections, and module loads of processes. This casts a wide net of events, but since this is a lab setup, the goal is to get familiar with what things look like and then improve our config.
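To confirm the service is running and events are actually flowing before moving on, a quick smoke test (assuming the default service name Sysmon, or Sysmon64 on a 64-bit install):

```
REM Confirm the Sysmon service is running
sc query sysmon

REM Dump the active Sysmon configuration
sysmon -c

REM Pull the 5 most recent Sysmon events
wevtutil qe Microsoft-Windows-Sysmon/Operational /c:5 /rd:true /f:text
```

You should see process create (Event ID 1) entries almost immediately on a live system.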

I like to increase the log size to at least 1GB for the Sysmon, Application, Security, and System event logs so that I don't roll over events when I am not using snapshots. To do this, type the following command:

wevtutil sl Microsoft-Windows-Sysmon/Operational /ms:1073741824

Replace Microsoft-Windows-Sysmon/Operational with Application, Security, and System, executing the command once for each log.
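If you'd rather not retype the command four times, the runs can be collapsed into a single loop; a sketch for an elevated command prompt:

```
REM One-liner for an elevated prompt; use %%G instead of %G inside a .bat file
for %G in (Application Security System Microsoft-Windows-Sysmon/Operational) do wevtutil sl "%G" /ms:1073741824
```

1073741824 bytes is 1GB; adjust /ms: upward if you want a longer retention window.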

Set the networking type of the 2nd VM to host-only. Make note of the IP address that is assigned.

Are We Done Yet???
Almost :) . The last piece is installing the Splunk forwarder. Use the x86 or x64 MSI installer and choose the customize options.

Click next on the following screens and leave any settings as is; do not check any of the boxes on this screen. We will configure these manually later.

When you get to this screen, fill in the IP address of your Splunk instance, then click next and finish the install.

Open Windows Explorer and browse to
C:\Program Files\SplunkUniversalForwarder\etc\system\local

Edit the inputs.conf file:

host = W701-64   <<<< this is my hostname, yours will be different
———paste the text below after the above————
[WinEventLog://Application]
disabled = false
renderXml = true

[WinEventLog://Security]
disabled = 0
renderXml = true

[WinEventLog://System]
disabled = 0
renderXml = true

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true

Save the inputs.conf file and restart the VM or the Splunk Forwarder service. To verify that the 2nd VM is communicating with the Splunk instance, open Task Manager, click on the Performance tab, then click on the Resource Monitor button.

In Resource Monitor, click on the Network tab, look for/filter on the splunkd.exe process, and look at the Network Activity and TCP Connections sections. There should be activity under the Send column of the Network Activity section. Once validated, create a VM snapshot.
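If you prefer a command-line check over Resource Monitor, the forwarder's own CLI can report its forwarding state; a sketch assuming the default install path (it will prompt for the admin credentials you set during install):

```
cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk list forward-server
```

Your Splunk instance's IP and port should appear under the active forwards list; if it shows as configured but not active, check host-only networking and that port 9997 is open on the Splunk VM.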

You can also check in your Splunk instance using the following query (please see the screenshot); replace host with the hostname of your 2nd VM.
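My exact query is in the screenshot; as a stand-in, something along these lines will confirm events are arriving (W701-64 is the hostname from my inputs.conf example; use your 2nd VM's hostname):

```
host=W701-64 | stats count by sourcetype
```

You should see the Windows event log and Sysmon sourcetypes with non-zero counts once the forwarder is talking.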

Why Bother?
Well, if you like experimenting and thinking like an attacker but also as a defender, this setup allows you to test different scenarios. For example, if I had such logging enabled in my enterprise, how would I detect someone using OneDrive via the net use command? Is that common? Or what does copying from the volume shadow copies look like using the @GMT method?

I will discuss these scenarios and others in upcoming posts, until then happy hunting!

If you have any questions, please leave a comment and I'll try to answer as best as I can.

Monday, July 11, 2016

A word on certs and RFC...

A friend of mine recently asked me, what info-sec certs would be worthwhile to pursue?
It's a bit of a loaded question because I feel it depends on 

1.    The career interests of the individual
2.    Knowledge level of the individual
3.    Time investment
4.    Use of the certification

To expand on point number one, info-sec has many areas, such as defense, offense, and vulnerability management, to name a few. So it depends on what is interesting to an individual. As an example, let's go with offense. There are a couple of certifications from EC-Council, SANS, and Offensive Security: respectively, Certified Ethical Hacker, GCIH, GPEN, and OSCP.

This follows into point number two: some of these certifications (not just offensive ones) and their accompanying courses require varying levels of technical knowledge in order to understand the material.

Point number three: as mentioned previously, due to the varying levels of technical knowledge needed, an individual must be ready to make time to understand unfamiliar concepts. Read the course material and, if taking a class with an instructor, prepare to take notes. An individual should not only focus on the course materials but also augment them with additional resources such as books and blogs applicable to the subject. Some of the courses and training material will highlight tools, but an individual should avoid focusing only on learning command-line switches or buttons to press without understanding the concepts behind the tool. A quick analogy: you may be able to buy the tools you need to change the oil on your car, but without understanding why you need to change your oil and which components are involved, you may use those tools incorrectly.

How will an individual use the certification? Is it simply to have an alphabet of letters following their name? Or for career advancement, to pass HR filters? Is there personal fulfillment in achieving a certification?

So I open a request for comments to those who may read my blog. I look forward to a discussion. What do you feel about info-sec certs?

**********Update 1****************
Thanks to Harlan for commenting! Harlan brings up a good point in regards to accountability. Another reason for pursuing a cert would be management objectives set for an individual. But how does management hold an individual accountable for what was learned through a course and the accompanying cert?

IMHO, management, particularly in info-sec, should to some extent be proficient at a technical level in the info-sec area they are managing. This would allow them to assess the material and its benefits, and then assess ways to hold an individual accountable. But I know this sounds like a perfect-world scenario for those who have management that is not technical but only managerial.

The next question to readers would be, how would you help non-technical management understand the value of the training/cert an individual is seeking and how would you help them assess you afterwards? 

Monday, April 25, 2016

Book Review: Windows Registry Forensics 2E

I had the opportunity to read Harlan Carvey's second edition of Windows Registry Forensics through a Kindle purchase and a hard copy that Harlan was awesome enough to give away when I asked for his feedback on my first blog post. (Thanks Harlan!) Given that a short review is available on Amazon, I wanted to expand on that review further.

In the first chapter Harlan lays out:
            -The structure of the registry and nomenclature
            -Analysis concepts and examples of how the operating system (Windows) uses the registry
            -Location of registry hives (SYSTEM, SOFTWARE, NTUSER, USRCLASS, etc.)
            -Registry redirection and virtualization (32-bit vs 64-bit OS architecture impact)

The takeaway from this chapter is for the analyst to understand how the registry is structured and where things are, as these bits of information will be useful in later chapters. Harlan also sprinkles in some anecdotes (see the tip, note, and warning sections).

Chapter two gives a brief overview of tools such as Microsoft's regedit, MiTeC Windows Registry Recovery, Registry Explorer, AutoRuns, Mandiant's ShimCache parser, UserAssist (Didier Stevens), and RegRipper. Deleted keys and values are also discussed, along with regslack, a Perl tool by Jolanta Thomassen. The main takeaway I got from this chapter is that there are many tools out there that can help you view or parse the registry, but the analyst must choose which tool to use based on the analysis at hand.

In chapter three the concept of artifact categories is discussed, along with analysis of the Security, SAM, SYSTEM, SOFTWARE, and AmCache hives.

In this section/hive we can find the audit policy settings of a computer. How is this useful? Well, it helps in identifying why certain event IDs are not seen if you retrieved a copy of the event logs, and it is a great place to start and review before attempting to run tools that parse specific event IDs from the event logs.

Another key from this hive is the last write time of the Policy\Secrets key, which could indicate possible use of gsecdump (a credential theft tool). Granted, this needs further context from other time-stamped sources of information, but it could provide a pivot point to use when hunting for this tool on endpoints.
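Since the book dedicates a chapter to RegRipper, it's worth noting (my aside, not the book's text) that both of these Security hive checks have plugins; a sketch against an exported hive, assuming the plugin name hasn't changed across RegRipper versions and with a hypothetical hive path:

```
REM Pull the audit policy from an exported Security hive with RegRipper
rip.exe -r F:\hives\Security -p auditpol
```

Comparing this output against the event IDs you expect to see is a fast first step before parsing the event logs themselves.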

In this hive we can find:
              -Local user information details
                           Account creation, last login
              -Local group memberships
                           Users are presented in SID format
How is this useful? As in the case of finding the audit policy, we can use this information to profile a system. For example, what user accounts are members of the local Administrators group? It also helps in finding suspicious, non-standard local user accounts.

The anecdotes in this section are quite useful.
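For the SAM hive, RegRipper's samparse plugin pulls the user and group details described above; a quick sketch against an exported hive (the path is hypothetical):

```
REM Parse local users and group memberships from an exported SAM hive
rip.exe -r F:\hives\SAM -p samparse
```

The output resolves the SIDs into readable account entries, which makes spotting a non-standard local admin much quicker.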

Some key takeaways from this section/hive:
     -System name
          Make sure you are analyzing the right system
     -Prefetch settings
          Is it enabled?
     -ShimCache (AppCompat)
          The application compatibility feature in Windows keeps track of executables that require shimming for compatibility reasons and stores the information in the registry.
     -Windows services
          Information such as file paths and service names is recorded in this hive, and attention to non-standard paths can help in identifying suspicious services.
     -Legacy_* keys (Windows XP)
          Can be used to correlate when a service starts and stops, in conjunction with other keys that provide service info.

Other useful information is found in this section and I’ll stop here so that future readers of the book can explore.

In this section/hive we can find the following takeaways:
     -User profiles
          What user profiles are on the system (local and domain)
     -Windows version
          Are we dealing with Windows XP, 7, 8, or 10?
     -Installed software
          The Uninstall key
     -Run key
          Used by legitimate and malicious programs as a persistence mechanism. The executable will start at system startup. (Be mindful of registry redirection.)
     -Image File Execution Options
          You can add a "Debugger" value to another program; attackers like to use this for the sticky keys method of persistence.

Again I will stop here and allow future readers to explore.
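As a quick live-system illustration of the Run key takeaway above (the book's focus is offline hive analysis; this is the same data viewed on a running box):

```
REM List machine-wide autostart entries; scrutinize the executable paths
reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Run"

REM On 64-bit Windows, check the redirected 32-bit view too
reg query "HKLM\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Run"
```

Entries pointing at user-writable paths such as %TEMP% or %APPDATA% are worth a closer look.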

An overview is provided of the AmCache artifact, and I will leave it to future readers to explore :)

Chapter four discusses the NTUSER and USRCLASS hives. Key takeaways:
     -UserAssist keys
          Keep track of items that are double-clicked through Explorer.
     -Program Compatibility Assistant
          Similar to ShimCache
     -Run keys
          Note that these executables are launched when a user logs in, unlike those found under the SOFTWARE registry hive.

-File Access
Several keys are discussed that provide information on files accessed by the user profile, as well as keywords used in searches through Windows Explorer.

-File Association
Useful when finding artifacts of a suspicious extension type; this key can provide information on how the system is configured to handle the extension type. i.e., is .mp4 handled by Windows Media Player or VLC?
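On a live system the same question can be answered from the command line; .mp4 is used here just as an example extension:

```
REM Map the extension to its ProgID...
assoc .mp4

REM ...then map the ProgID to the handler command line
REM (substitute whatever ProgID assoc returned)
ftype WMP11.AssocFile.MP4
```

For offline analysis you would pull the equivalent data from the user's Classes keys in the hives the book covers.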


I may be wrong in my understanding, so please excuse this explanation if it is off. This artifact has to do with Windows Explorer, window size, and usability. The information is stored in the NTUSER hive (XP) and USRCLASS (Win 7+) and can help in identifying folder traversal, Control Panel applet access, and FTP artifacts.

Chapter five discusses RegRipper, explaining what it is and how to use it. Note that throughout the previous chapters several RegRipper plugins are discussed, which hopefully will allow you to use the tool based on your analysis needs.

I really enjoyed the book and definitely recommend it to anyone in the DFIR field as a must-read, especially if you have seen presentations at conferences discussing ShimCache, shellbags, and stories of how incident responders were able to figure out what an attacker did on a system. Pay close attention to setting analysis goals and to chapters three and four, as these will help in that regard. This book will assist in strengthening your DFIR skills, and some of this information, once understood, can help you start hunting for evil in your environment.

TL;DR… from my Amazon review

“The book provides a detailed discussion on the structure of the registry, its keys, and their relevancy to digital forensics & incident response (DFIR). The author also focuses on presenting examples and use cases on how the reader can leverage information in the registry as part of an analysis. A discussion of tools is given; the tools presented are free, and some are open source, which you can modify to fit your needs if you understand the programming language they are written in. The author dedicates a chapter to RegRipper, a tool that he wrote to parse registry hives, and it serves as a mini manual. After reading the previous chapters, hopefully the reader will understand the flexibility of the tool and how one can expand its functionality. Overall the author does a great job in presenting the information; although short (191 pages), the content is targeted at what can bring value to the reader/analyst. I recommend it to all who work in the DFIR field or are starting to.”

Friday, April 22, 2016

BSides NOLA and Threat Hunting

Last weekend my wife and I had a chance to head to BSides in New Orleans. There was a mixture of presentations, but the ones I liked most were around hunting and what to look for on endpoints. Devon Kerr presented artifacts mapped to Mandiant's attack lifecycle, while Wesley Riley from RSA presented on what artifacts to collect if your budget is slim. Michael Gough's presentation was also along these lines but focused on Windows audit logging and which event IDs are of value, plus a new tool called LOG-MD, which helps audit systems by comparing the audit policy to the recommended settings found on Michael's website malwarearcheology.com.
What really struck a chord with me were the artifacts that were mentioned, for example:
-Scheduled tasks
-Auto start entry points
-Registry Hives

Among others... these are artifacts that most in the DFIR field recognize. Why are they mentioned repeatedly?
Because THEY WORK. They are useful if you are trying to find suspicious or malicious behavior on an endpoint. But for these artifacts to be relevant, it's not about running a tool, whether commercial or open source, and analyzing the output, or trusting the logic of said tool to flag maliciousness. No, it is the analyst's responsibility to understand the artifacts. Through this understanding the analyst will choose the artifact sources relevant to the investigation they are performing.
As Wesley mentioned in his presentation, the best tool in a cyber security organization is a highly motivated analyst. That is why, if you are starting out in this field, it is important to read and reread the material that is available on some of these artifacts and on DFIR in general. Tool-specific training will only get you so far.
I will be posting learning materials in a separate page that I have found useful for DFIR work.

Keep on the good fight! Until next time...

Monday, April 4, 2016

The Curious Case Of The Chan Pelana Device

Last year, around July, a buddy of mine sent me an email regarding a curious message he saw after unlocking his PC (Win7) in the morning. The PC had been left on overnight. He wasn't sure what this Chan Pelana device was or how it got installed on his system, and a quick Google search did not turn up any meaningful results.

So I collected the event logs (Application, System, and Security) and registry hives (SYSTEM, SOFTWARE, SECURITY, NTUSER, USRCLASS, etc.) from the system, among other files, to further help out. My analysis goals were:
  • Identify what Chan Pelana was
  • Identify the approximate time of when it was installed
  • Identify whether this was a malicious event or not

Why start with these sources?
Well, the System event log contains events related to plug-and-play activity, services, and other system-related activity. The SYSTEM registry hive contains configuration information for services and drivers, to name a few. As the registry hives contain time stamps, I might find one related to driver installation activity.
To create the micro timeline I used Harlan Carvey's wevtx batch script and regtime.exe to get the time stamps from the SYSTEM registry hive.

F:\analysis\user-pc\> wevtx logs\*system ntfs\events
regtime -r registry\SYSTEM -m HKLM/SYSTEM_ >> ntfs\events
parse -f ntfs\events -o > ntfs\event_tln.txt

As my friend's time zone is EST and the tools output in UTC, I kept that in mind as I looked through the output. The excerpt below is from the micro timeline. I pivoted on events that occurred during the overnight time frame he gave, which would put these events at around 11:00 PM EST.
Fri Jul 24 03:04:33 2015 Z
  EVTX     Server            - Microsoft-Windows-DriverFrameworks-UserMode/10002;user-pc.ent.acme.corp,S-1-5-18,WpdMtpDriver,{AAAE762B-A6A2-4C45-B5D8-9A83AFB6BB70},1.9.0,true
  REG                        - M... HKLM/SYSTEM_ControlSet001/Control/Class/{EEC5AD98-8080-425F-922A-DABF3DE3F69A}
  EVTX     Server            - Microsoft-Windows-DriverFrameworks-UserMode/10000;user-pc.ent.acme.corp,S-1-5-18,USB\VID_04E8&PID_6860\03157DF3EB6F073F,1.9.0

Fri Jul 24 03:04:34 2015 Z
  EVTX     Server            - Microsoft-Windows-UserPnp/20003;user-pc.ent.acme.corp,S-1-5-18,WUDFRd,system32\DRIVERS\WUDFRd.sys,USB\VID_04E8&PID_6860\03157DF3EB6F073F,true,true,0
  REG                        - M... HKLM/SYSTEM_ControlSet001/Enum/USB/VID_04E8&PID_6860/03157df3eb6f073f/Device Parameters/WpdMtpDriver
  EVTX     Server            - Microsoft-Windows-UserPnp/20003;user-pc.ent.acme.corp,S-1-5-18,WinUsb,system32\DRIVERS\WinUsb.sys,USB\VID_04E8&PID_6860\03157DF3EB6F073F,false,true,0

  EVTX     Server            - Microsoft-Windows-DriverFrameworks-UserMode/10100;user-pc.ent.acme.corp,S-1-5-18,0

The following information stood out…
WpdMtpDriver - MTP device: cell phone, camera, MP3 player
03157DF3EB6F073F - possible serial number
HKLM/SYSTEM_ControlSet001/Enum/USB/VID_04E8&PID_6860/03157df3eb6f073f/Device Parameters/WpdMtpDriver - registry key being modified

It seemed I was possibly dealing with a USB device that uses the Media Transfer Protocol. To further validate this I looked up a presentation by Nicole Ibrahim on MTP devices. (link) Page 15 of the PDF slide deck explains the process that happens when an MTP device is inserted, and the plug-and-play events seen in the System event log seem to match. On page 23, the subkey that I saw in my micro timeline was a child of one that was modified in Nicole's testing. (CurrentControlSet\Enum\USB\)
I then loaded the SYSTEM registry hive into Eric Zimmerman's Registry Explorer and navigated to the key ControlSet001/Enum/USB/VID_04E8&PID_6860/03157df3eb6f073f to see what other information I could find.

The DeviceDesc value contains a string data value of “SM-G920W8” and the FriendlyName value contains a string data value of Chan Pelana. Aha! A match to what my friend had seen. A quick Google search for the “SM-G920W8” string turned up a hit for a Samsung cell phone model for the Canadian market.
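On a live system (as opposed to an exported hive in Registry Explorer), the same values can be pulled from the command line; a sketch that searches the USB enumeration tree for friendly names:

```
REM Recursively search the USB device tree for FriendlyName values
reg query "HKLM\SYSTEM\CurrentControlSet\Enum\USB" /s /f "FriendlyName"
```

Each hit sits under a VID_xxxx&PID_xxxx\<serial> key, so the vendor/product IDs and possible serial number come along for free.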

So let’s recap on the analysis goals
  • Identify what Chan Pelana was
    • A Canadian Samsung cell phone.
  • Identify the approximate time of when it was installed
    • Without the setupapi.dev.log, my next best sources were the Windows event log and registry timestamps; based on these, the install occurred at approximately Fri Jul 24 03:04:34 2015 UTC
  • Identify whether this was a malicious event or not
    • I contacted my friend with my findings, and as it turned out, they had visitors from Canada, and one of them had plugged their phone into my friend's computer to charge during the night. So that ruled out the malicious component.

And that concludes the curious case of Chan Pelana.

Hello World....

Over the past couple of years I have been soaking in all the information I could get regarding digital forensics and its specific application to incident response, as well as malware analysis. I have also been working in the field for the past couple of years, and while I have sat on the sidelines in regards to blogging, I thought it best to start sharing my experiences where I can. The purpose of this blog is to share analysis techniques, links, and tools. I'm not sure about being an expert, as in the DFIR field there is a lot to learn and things change at times, but maybe one day I'll reach that level; for the time being I consider myself an experienced learner. I hope that what I share can help someone else, no matter what their experience level is. Enough rambling... let's start :)