Installing Elastic EDR & SIEM
A few months ago I released a couple of blog posts on how to create enterprise monitoring at home with ELK and Zeek. This post is a continuation of that series… sort of. A lot has changed since those posts, mainly updates to the ELK stack and the release of a number of free EDR tools, such as OpenEDR from Comodo and Elastic EDR. So I thought now would be a good time to see what’s changed with Elastic and try out their new EDR. In this post, I’m going to show how to install Elastic SIEM and Elastic EDR from scratch.
Network Design
Below is a very simple network diagram for this post. ELK is running on an Ubuntu 20.04 server hosted on ESXi. Zeek is also running on an Ubuntu 20.04 server, and a port on my switch is being mirrored to a port on my ESXi server. You can read more about Zeek and port mirroring in my previous blog here.
Also running on ESXi is a Windows 10 machine, where we will install the Elastic EDR agent. I also have a Windows AD machine. This host does not feature in this post but will be used in future posts where I perform additional testing with the Elastic EDR.
Installing ELK Basics
I’m going to breeze through this section, as I’ve covered it before, and there are tons of guides out there already on how to get a basic ELK setup working. In my case, I’m using the newest release of ELK which is 7.10.
For my ELK setup, I’m using a single Ubuntu Server 20.04 virtual machine running on ESXi. This server will run Elasticsearch and Kibana.
First, install curl and apt-transport-https.
apt-get install curl apt-transport-https
Next, add the Elastic repositories to your APT sources list.
curl -s https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-7.x.list
Then update.
apt-get update
Now install Elasticsearch
apt-get install elasticsearch
Once Elasticsearch is installed, we need to make a couple of changes to its configuration file, located at /etc/elasticsearch/elasticsearch.yml. You will need root privileges to edit this file.
First, we need to change the network.host value. By default it is set to localhost; change it to the IP address of the host you installed Elasticsearch onto.
network.host: <elasticsearch_ip>
Next, within the same file, we need to change two node name values. In my case, I’ve just set both values to node-1.
node.name: <node_name>
cluster.initial_master_nodes: ["<node_name>"]
Now you should be ready to start Elasticsearch and check that it’s started correctly.
service elasticsearch start
service elasticsearch status
You can also check that Elasticsearch is accessible from other hosts by running:
curl http://<elasticsearch_ip>:9200
The output should look similar to the one below. (Disclosure: this pic is stolen from my previous ELK post, hence why the details don’t match my new deployment.)
Installing Kibana
Run the command below to install Kibana.
apt-get install kibana
Once the installation is complete, edit your /etc/kibana/kibana.yml and specify the IP address of the host running Kibana.
server.host: "Your_IP"
In the same file, also specify the address of your Elasticsearch instance. (We’ll switch this to HTTPS later, once SSL is configured.)
elasticsearch.hosts: ["http://Your_Elasticsearch_IP:9200"]
Start Kibana and check its status.
service kibana start
service kibana status
Installing Beats
Filebeat is used to ship data from devices to Elasticsearch. There are a number of different Filebeat modules for different products that send logs and data to Elasticsearch in the required format. In this example, we’re using the module for Zeek, but Elastic has greatly expanded its support for additional products in recent months, including AWS, CrowdStrike, Zscaler and more.
For now, we’re just going to install Filebeat on our host running Zeek, we’ll worry about configuring it later.
apt-get install filebeat
Configuring X-Pack
As it stands, the only functionality we have within our ELK deployment is log ingestion and visualisation. We can ingest logs into Elasticsearch and manipulate the data with Kibana visualisations, but the core functionality of a SIEM is missing: we’re unable to build detections or use cases. This feature is not available “out of the box”, and in order to use it, we must first configure security between all of our different nodes. X-Pack is the Elastic package responsible for all Elastic Security functionality.
One key requirement is to configure SSL connections between each node. There are a number of ways to do this; we are going to use X-Pack for this as well.
Firstly, on the host with Elasticsearch installed, we need to create a YAML file, /usr/share/elasticsearch/instances.yml, which is going to contain the different nodes/instances that we want to secure with SSL. In my case, I only have Elasticsearch, Kibana and Zeek.
instances:
  - name: "elasticsearch"
    ip:
      - "192.168.1.232"
  - name: "kibana"
    ip:
      - "192.168.1.232"
  - name: "zeek"
    ip:
      - "192.168.1.234"
Next, we’re going to use Elastic’s certutil tool to generate certificates for our instances. This will also generate a Certificate Authority.
/usr/share/elasticsearch/bin/elasticsearch-certutil cert ca --pem --in instances.yml --out certs.zip
This will create a .crt and .key file for each of our instances, as well as a ca.crt file.
You can unzip the different certificates with unzip.
unzip /usr/share/elasticsearch/certs.zip -d /usr/share/elasticsearch/
Now we have our certificates, we can configure each instance.
Configuring Elasticsearch SSL
Firstly, we need to create a folder on our Elasticsearch host to store the certificates in.
mkdir /etc/elasticsearch/certs/ca -p
Next, we need to copy our unzipped certificates to their relevant folders and set the correct permissions.
cp ca/ca.crt /etc/elasticsearch/certs/ca
cp elasticsearch/elasticsearch.crt /etc/elasticsearch/certs
cp elasticsearch/elasticsearch.key /etc/elasticsearch/certs
chown -R elasticsearch: /etc/elasticsearch/certs
chmod -R 770 /etc/elasticsearch/certs
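These same copy-and-permission steps are repeated below for Kibana and Filebeat, so if you prefer, they can be wrapped in a small helper function. This is just a sketch: it assumes you run it from the directory where certs.zip was unzipped, and the service user names in the example calls are the package defaults.

```shell
# deploy_certs <instance> <owner> <target_dir>
# Copies <instance>.crt/.key and ca.crt from the unzipped certutil output
# into <target_dir>/certs and locks the permissions down.
deploy_certs() {
  local name="$1" owner="$2" target="$3"
  mkdir -p "${target}/certs/ca"
  cp "ca/ca.crt" "${target}/certs/ca/"
  cp "${name}/${name}.crt" "${name}/${name}.key" "${target}/certs/"
  chown -R "${owner}:" "${target}/certs"
  chmod -R 770 "${target}/certs"
}

# Example usage (run from /usr/share/elasticsearch):
# deploy_certs elasticsearch elasticsearch /etc/elasticsearch
# deploy_certs kibana kibana /etc/kibana
# deploy_certs zeek root /etc/filebeat
```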
Next, we need to add our SSL configuration to the /etc/elasticsearch/elasticsearch.yml file.
# Transport layer
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/elasticsearch.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/elasticsearch.crt
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca/ca.crt" ]
# HTTP layer
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.verification_mode: certificate
xpack.security.http.ssl.key: /etc/elasticsearch/certs/elasticsearch.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/elasticsearch.crt
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca/ca.crt" ]
Now restart Elasticsearch.
service elasticsearch restart
Configuring Kibana SSL
Now we’re going to repeat the process but this time for Kibana. The configuration is slightly different for Kibana.
Again, move your certificates to the correct folder and set the correct permissions.
Note: The following steps are assuming you are running a standalone ELK configuration. If you are running a distributed deployment, you’ll need to move certificates to the appropriate hosts.
mkdir /etc/kibana/certs/ca -p
cp ca/ca.crt /etc/kibana/certs/ca
cp kibana/kibana.crt /etc/kibana/certs
cp kibana/kibana.key /etc/kibana/certs
chown -R kibana: /etc/kibana/certs
chmod -R 770 /etc/kibana/certs
Next, in your /etc/kibana/kibana.yml file, add the settings for SSL between Elasticsearch and Kibana.
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["https://192.168.1.232:9200"]
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/ca/ca.crt"]
elasticsearch.ssl.certificate: "/etc/kibana/certs/kibana.crt"
elasticsearch.ssl.key: "/etc/kibana/certs/kibana.key"
And then in the same file, add the configuration between Kibana and your browser.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
server.ssl.enabled: true
server.ssl.certificate: "/etc/kibana/certs/kibana.crt"
server.ssl.key: "/etc/kibana/certs/kibana.key"
Then, restart Kibana.
service kibana restart
Configuring Beats (Zeek) SSL
For the next step, we need to configure SSL for our host running Zeek and Filebeat. If you’re not running Zeek or any other product that uses a Filebeat module (Suricata, Windows Event Log, etc.), then you can skip this step.
First copy your certificates over to your host running Zeek, then create the cert directories with the correct permissions. You’ll need to copy both the Zeek certificates and the CA certificate.
mkdir /etc/filebeat/certs/ca -p
cp ca/ca.crt /etc/filebeat/certs/ca
cp zeek/zeek.crt /etc/filebeat/certs
cp zeek/zeek.key /etc/filebeat/certs
chmod 770 -R /etc/filebeat/certs
Next, we need to add our changes to /etc/filebeat/filebeat.yml. First, our Elasticsearch output settings.
# Elastic Output
output.elasticsearch.hosts: ['192.168.1.232:9200']
output.elasticsearch.protocol: https
output.elasticsearch.ssl.certificate: "/etc/filebeat/certs/zeek.crt"
output.elasticsearch.ssl.key: "/etc/filebeat/certs/zeek.key"
output.elasticsearch.ssl.certificate_authorities: ["/etc/filebeat/certs/ca/ca.crt"]
And then our Kibana config settings.
# Kibana Host
setup.kibana:
  host: "https://192.168.1.232:5601"
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca/ca.crt"]
  ssl.certificate: "/etc/filebeat/certs/zeek.crt"
  ssl.key: "/etc/filebeat/certs/zeek.key"
Then restart Filebeat.
service filebeat restart
Now you can check that Filebeat is able to contact Elasticsearch by running the command below. Everything should return “ok”.
filebeat test output
Adding Authentication
We also need to add authentication to Elastic. This is pretty easy to do. First, enable X-Pack by editing your /etc/elasticsearch/elasticsearch.yml
# X-Pack Setting
xpack.security.enabled: true
Restart Elasticsearch so the setting takes effect.
service elasticsearch restart
Next, we need to generate passwords for all of our built-in Elastic roles and users. There’s a tool included with Elasticsearch to do this. Run the command below to generate these passwords and save them somewhere safe (e.g. in a password manager). We’ll be using the elastic password for the next part.
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
Next, add the newly created elastic credentials to your Filebeat config file, /etc/filebeat/filebeat.yml
# Elastic Credentials
output.elasticsearch.username: "elastic"
output.elasticsearch.password: "Your_Elastic_Pass_Here"
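For reference, after both rounds of edits the Elasticsearch output section of /etc/filebeat/filebeat.yml should now contain the equivalent of the following (shown here in nested form; the IP and password placeholder are this lab’s examples):

```yaml
# /etc/filebeat/filebeat.yml -- complete Elasticsearch output section
output.elasticsearch:
  hosts: ['192.168.1.232:9200']
  protocol: https
  username: "elastic"
  password: "Your_Elastic_Pass_Here"
  ssl.certificate: "/etc/filebeat/certs/zeek.crt"
  ssl.key: "/etc/filebeat/certs/zeek.key"
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca/ca.crt"]
```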
Restart Filebeat.
service filebeat restart
Next, do the same for your Kibana config file, /etc/kibana/kibana.yml. Also enable X-Pack here as well.
# Elastic Credentials
xpack.security.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "Your_Elastic_Pass_Here"
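At this point, your /etc/kibana/kibana.yml should contain something like the following in total (the IP is this lab’s example; adjust to your own):

```yaml
# /etc/kibana/kibana.yml -- security-related settings so far
server.host: "192.168.1.232"
server.ssl.enabled: true
server.ssl.certificate: "/etc/kibana/certs/kibana.crt"
server.ssl.key: "/etc/kibana/certs/kibana.key"
elasticsearch.hosts: ["https://192.168.1.232:9200"]
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/ca/ca.crt"]
elasticsearch.ssl.certificate: "/etc/kibana/certs/kibana.crt"
elasticsearch.ssl.key: "/etc/kibana/certs/kibana.key"
xpack.security.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "Your_Elastic_Pass_Here"
```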
Restart Kibana.
service kibana restart
You should now see a login box when you navigate to your Kibana IP address.
Adding data to Elasticsearch
So we now have everything set up; the next thing we need to do is start ingesting data into Elasticsearch. In my case, I’m going to be using Zeek to tap my network traffic. I’ve already covered how to do this in my previous two blogs, so I’m not going to cover it again here.
To see how to install Zeek, read part 1 here.
And to see how to configure the Filebeat Zeek module, read from “Configuring Zeek” in my part 2 blog here.
Enabling Detections
Once you have completed all the above steps and have data ingesting into Elastic, you will probably notice that you are still unable to create detections. There is one final step to complete. Edit your Kibana config file, /etc/kibana/kibana.yml, and add an xpack.encryptedSavedObjects.encryptionKey. This can be any string of at least 32 characters.
# X-Pack Key
xpack.encryptedSavedObjects.encryptionKey: "something_at_least_32_characters"
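If you’d rather use a random value than something hand-typed, one simple way (just an example) is to hex-encode 16 random bytes, which yields exactly 32 characters:

```shell
# 16 random bytes -> 32 hex characters, suitable for the encryption key
openssl rand -hex 16
```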
You should now be able to view the built-in detection rules and create your own. We’ll look at how to create rules in another future post.
Installing Elastic EDR Agent
Now we’re ready to install Elastic EDR. First, navigate to the “Fleet” dashboard by clicking on the link under the Management tab located on the side menu.
From the Fleet management menu, click “Add agent”. It’s likely that you’ll be asked to add an integration policy before you can install agents; just follow the wizard and keep the defaults.
We’re going to use the “Enroll in Fleet” option to install the EDR.
First, download the Elastic Agent onto your Windows/Linux Host.
Once you have the agent downloaded, keep the default policy selected under the Agent policy.
Before moving on to Step 3, there is another step to complete first.
If we were to install the agent onto our Windows host now, we would receive an error that our self-signed certificate is from an untrusted source, and our agent install would fail.
So what we need to do is to add our CA certificate as a trusted certificate on our Windows host.
Install Certificates
First search for “Local Security Policy” in your Windows search bar. Then go to Security Settings > Public Key Policies > Certificate Path Validation Settings
Then, check Define these policy settings, and also check the Allow user trusted root CAs to be used to validate certificates and Allow users to trust peer trust certificates options if they’re not already selected.
Finally, check Third-Party Root CAs and Enterprise Root CAs.
Click Apply and OK.
Next, search for certmgr.msc in the Windows search bar.
Then go to Trusted Root Certification Authorities > Certificates and right-click anywhere in the whitespace and select All Tasks > Import
Then simply follow the wizard and select the ca.crt we created earlier.
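Alternatively, if you prefer the command line, the same import can be done from an elevated PowerShell prompt with the built-in Import-Certificate cmdlet (the file path below is just an example; point it at wherever you copied ca.crt):

```powershell
# Import our self-signed CA into the machine's Trusted Root store
Import-Certificate -FilePath "C:\Users\You\Downloads\ca.crt" -CertStoreLocation Cert:\LocalMachine\Root
```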
Install the Agent
Now we’re ready to install the Elastic Agent. Copy the install command from the Fleet dashboard and run it in PowerShell on the Windows host where you downloaded the agent.
If all has gone right, you should see the agent has been successfully enrolled via the fleet dashboard.
We’re not done yet, however; we need to check that data is being ingested correctly into Elasticsearch from our agent. You can do this by navigating to the Data Streams tab, which should be populated with endpoint data. If there is no data here, check your Fleet settings by clicking the settings cog in the top right corner, and ensure that your Elasticsearch output is set to the correct IP address and not to localhost.
Finally, we need to enable detection rules for our endpoint agent. Go to the Detections dashboard in the Kibana Security app and enable any endpoint rules you want.
Now you can test that it works: perform some badness on your Windows host and see what happens! In my case, Elastic EDR successfully detected and prevented both the Mimikatz files on disk and the process execution of Mimikatz.
We can also see this in our Kibana Security Detections Dashboard.
And we can also dive a little deeper with a nice little process tree graphic and an overview of the event. Very cool!
Conclusion
And so that brings this post to an end; I hope you’ve found it useful. So far, with the limited testing I have done, I have found Elastic’s EDR to be impressive. It has the look and feel of an enterprise-grade tool, which, considering it’s open source, is amazing. I plan to do some more thorough testing in the future to see how it performs, but as it stands it certainly seems to be the leading EDR in the open-source world. It would be my go-to recommendation for small to medium businesses that may not have the budget for other, more expensive tools. The question is, can it give the big paid tools a run for their money?
Resources
https://www.elastic.co/guide/index.html