Automating Zeek & Suricata IDS installs with Ansible, Part 2

Let’s learn you some Ansi-bubbles!

Let’s start with this. I learn by doing. Rarely do I sit down and read 300 pages of documentation and then set off to accomplish something. The impact that has on this little series is that we’re going to do something with a goal in mind: build some IDS sensors with Ansible so the process is repeatable. If you just want to use this to learn how to set up a few things on one system, that’s cool, but the intent here is to get a fleet of IDS sensors working as quickly as possible, with minimal variation. Just keep that in mind, dear reader, if I’m skipping over some details in favor of just getting to the meat of it.

Minimal concepts you need to know to make Ansible do a thing:

  • Control Node (Ansible Master)
  • Inventory File
  • Playbook

The control node is where you actually have Ansible installed, and where you connect to your managed nodes (IDS boxes in this case). “apt install ansible -y” and you’re done. Oh yeah, I’m an Ubuntu guy, so you’ll have to translate into RHEL-based commands on your own.

The inventory file is just a list of the machines you want to manage, with some variables like how to connect to said machines. Warning: I am not an Ansible expert. We’re here to learn by doing. Some of these things are not best practice.

The playbook is your set of instructions, your recipe, your MOP, etc. This is what makes the magic happen so you don’t have to spend a bunch of time in vi replacing the same strings in the same files, box by box, over and over and over and over and over.

Let’s look at an inventory file in more detail!

[lab-ids]
172.29.249.46
172.29.120.230

[lab-ids:vars]
ansible_connection=ssh
ansible_user=ansible
ansible_ssh_pass=migrant6vermouth3DOCK8charmer
ansible_become_pass=migrant6vermouth3DOCK8charmer

First we define a group for our hosts, lab-ids in this case. Then we put some hosts in the group (172.29.249.46 and 172.29.120.230). Next we assign some useful variables tied to the group, that allow Ansible to talk to the machines. This example is telling Ansible to use SSH to connect to the hosts in the group, use “ansible” as the username when connecting via SSH, use the password “migrant6vermouth3DOCK8charmer” for the “ansible” account, and finally use the same password to sudo your commands.
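Like I said, some of this isn’t best practice, and storing plaintext passwords in the inventory file is the big offender. A cleaner sketch, assuming you’ve already copied an SSH public key out to the “ansible” account on each host (the key path below is just an example, adjust to taste):

```ini
[lab-ids]
172.29.249.46
172.29.120.230

[lab-ids:vars]
ansible_connection=ssh
ansible_user=ansible
; Authenticate with an SSH key instead of a stored password
ansible_ssh_private_key_file=~/.ssh/id_ed25519
```

Then pass -K (--ask-become-pass) when you run ansible-playbook so the sudo password gets prompted for interactively instead of sitting in a file.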

The playbook gets a little more complicated. Ansible has a [metric|imperial] boatload of built-in modules. It has matured and grown substantially since I first started playing with it 7+ years ago.

---
- hosts: "*" 
  become: yes
  tasks:
    - name: Update all packages to their latest version
      apt:
        name: "*"
        state: latest
    - name: Remove useless packages from the cache
      apt:
        autoclean: yes
    - name: Remove dependencies that are no longer required
      apt:
        autoremove: yes

The above example says “do this to all hosts”, which we’ll control by referencing the hosts file when we execute the playbook. Next we say “become” a privileged user via sudo. Then we define the tasks we want to do — in this case they are all various apt commands we use to update our systems and then do a little housekeeping of apt itself.

The “name” variable is what is shown when you run the task in a playbook, so make it somewhat descriptive. The next line says use the “apt” module, which is pretty much a core function of Ansible at this point. From there we define some parameters to run against apt. The first is saying “change the state of all packages to latest”, in other words please run “apt update && apt upgrade -y” for me. The second is running “apt autoclean”, and the third is “apt autoremove”. I’m sure there’s some yum equivalent given that Ansible is owned by Red Hat (itself owned by IBM these days)…
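Speaking of that yum equivalent: on a RHEL-family box, the same housekeeping might look roughly like this using the dnf module. This is a sketch I haven’t run myself, so treat the details as unverified:

```yaml
---
- hosts: "*"
  become: yes
  tasks:
    - name: Update all packages to their latest version
      dnf:
        name: "*"
        state: latest
    - name: Remove dependencies that are no longer required
      dnf:
        autoremove: yes
```

As far as I know there’s no autoclean equivalent in the dnf module, so the housekeeping list comes out a bit shorter.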

Now to the fun part – let’s make Ansible do something!

ansible-playbook -i /opt/ansible_ids/hosts /opt/ansible_ids/playbooks/apt_updates.yml 

Not super exciting, is it? Run ansible-playbook, specify the hosts file we have above, specify the playbook YAML file that you want to run, and please go forth, nice computer, and do the thing for me (enter).
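A couple of standard ansible-playbook flags are worth knowing before you point this at a whole fleet (the paths here match the example above):

```shell
# Dry run: report what would change without actually changing anything
ansible-playbook -i /opt/ansible_ids/hosts /opt/ansible_ids/playbooks/apt_updates.yml --check

# Limit the run to a single host from the inventory first
ansible-playbook -i /opt/ansible_ids/hosts /opt/ansible_ids/playbooks/apt_updates.yml --limit 172.29.249.46
```

Dry run against one box, then the real thing against the group. Saves you from discovering a typo on all of your sensors at once.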

Stay tuned for Part 3!

Automating Zeek & Suricata IDS installs with Ansible, Part 1

Some time ago, I pitched a concept for a high performance Network IDS (NIDS) server that could be deployed on relatively cheap and physically small hardware (e.g. Intel NUC) at our various locations with their own local Internet egress circuit(s). The CIO asked me why I didn’t just use Security Onion. Well, Mr. CIO, that’s because Security Onion is an entire platform that largely duplicates the rather expensive commercial data lake and SIEM we already have. I don’t need to waste hardware resources running an Elasticsearch cluster, NIDS, etc. when I already have Splunk and Splunk Enterprise Security in the cloud.

With that in mind, I’m going to write up a couple of articles here to outline exactly what I did, why I did it, and how I did it.

Part 1 – Design and Purpose
The core concept here is fairly simple. In the traditional SOC visibility triad, you need a SIEM and two primary types of data being sent to it – Endpoint and Network. Nearly every fancy security tool out there is filling in one of these areas, or living directly adjacent to one. The reason for these two legs of the triad is that knowing 1) what your endpoints are doing and 2) what your network is doing gives you the foundational observation of what the fuck is going on in userland. By the way, if for some reason you can only do one of these, go for endpoint and go for broke.

Look! Something useful from Gartner! I don’t know about the triangle within a triangle, though. Triangle Man, Triangle Man. Triangle Man hates Particle Man.

While getting amazing visibility out of an Open Source EDR tool is a little on the fringes still, there are two very mature options for Open Source NDR that work well together: Zeek and Suricata.

Zeek is a packet metadata generator that breaks down your network traffic stream into consumable information instead of, well, a packet flow. Do you want to dig through a bunch of packets to understand the bit by bit nuances of a client making a DNS query to a server? If so, you’re probably a network dude, and that’s cool. I used to be one. But these days, I just want the damn data. I don’t necessarily care about the latency and all that jazz. Enter Zeek.

Suricata, on the other hand, is a traditional Intrusion Detection System that can also do some packet metadata work, but its power lies in its intrusion detection rulesets, which find sketchy activity happening on your network and generate specific alerts.

Ultimately we want to take these two great tastes that taste great together and run them on a server, then send a bunch of network data copies at that server. Then we take the data Zeek and Suricata generate in the form of log files, and send them to Splunkity Splunk for indexing and search capabilities. You can then use a variety of tools and apps in the Splunkiverse to analyze that data and give you magical context about what’s happening.

When we have those logs in a place where we can search and visualize them, it makes it a lot easier to see, understand and communicate potential problems like security policy non-compliance. For example, we are a Cisco Umbrella (fka OpenDNS) shop. If all of my DNS traffic, by policy, is supposed to get directed to Umbrella, then why the hell do I still have 64k queries to Cloudflare DNS and 204k queries to Google DNS in the past 24 hours?!? Something is done broke, people, and now I know it.

Sample of upstream DNS dashboard from the Zeek App for Hunting in Splunk.
Suricata alert data in the Stamus Networks App for Splunk

In Conclusion…
I’m not entirely sure how I’m going to structure the rest of this yet, but here are the big ticket items we’ll cover:

  • Ansible basics: hosts, playbooks, templates
  • Ansible Playbooks for:
    • Zeek and Suricata installation
    • Customizations to Zeek and Suricata configurations
    • Keeping the system clean (logs are great, but they fill up your disk real quick)
    • Installing and configuring Splunk Universal Forwarder

Catching Up

I have been pretty much killing myself with work for the past few years. My health has been seriously impacted from the stress of this job (and COVID), so it’s time to make some changes.

Relevant to this little website is making an effort to document some of what I do in my “spare” time as far as technology and InfoSec goes. At this point in my career, I do a lot of Minimum Viable Product type of activities, versus trying to make things perfect. That said, you’ll probably find better, more efficient ways to do what I document here. That’s totally cool, and I hope you take it upon yourself to document those improvements yourself and share the wealth with the collective Interwebz!

Bad form, HW group

TL;DR: This is frustrating. We’re in the year 2020. TLS has been around for 25 years, we have great free services like Let’s Encrypt, yet people are still choosing not to deploy TLS in their applications.

I have a small fleet of HW group’s STE2 temp/humidity sensors. They work wonderfully for how I use them, which is collecting the data via SNMP for historical graphing, and sending email alerts to OpsGenie when there is a temperature excursion.

I decided to look at their freemium cloud-based sensor management tool called SensDesk for use at home, since it could offer a lot of what I’m doing with it all in one place instead of spread across a couple different tools. It’s free up to a certain number of devices, so hey, let’s give it a go!

Problem #1: When you go to their site and click on “sign up”, you’re sitting at an HTTP page. No TLS. Yet you can manually change the URL to https://, and it still works. WTF, really? The first time around, I didn’t catch it, so I signed up and sent my credentials in clear text to a hosted website in Czechia. Great. Thankfully I use random passwords!

Problem #1.5: Sign up, redirect to the login page, and IT’S STILL NOT ENCRYPTED. Switch the URL to https:// and we’re set. For now.

Problem #2: I go to my STE2 device to start sending data to SensDesk, and lo and behold, it requires my SensDesk login and is going to send the data in clear text to their service. JFC, how hard is it to just send data over HTTPS?

At this point, I’m pissed. I send an email to these folks: (paraphrasing) Bro, Y U NO TLS? They’re halfway around the world, so I’m not expecting a response until the next day. It would be awesome customer service if they responded, but it’s a small outfit in CZ so whatever.

Problem #3: Their email response: (paraphrasing) if you don’t want your shit to be fucked, then buy the new version of the hardware and pay for a license so you can run it on prem.

Come on! That’s your response? We’re too lazy to mandate HTTPS on our website, so buy our software? We couldn’t figure out an HTTPS client library on the old hardware, so upgrade?

Welcome to the inherent nightmare of old school electrical controls people moving into the world of IoT/IIoT.

BSides MSP 2019

I attended the BSides MSP conference last week for the first time. It was an interesting conference, and a few talks really stood out.

First was Tim Medin (from Red Siege) and his session “Purple Yourself”. As a Purple Teamer myself, I thought it was great to hear some affirmations that yes, you have to be able to think like a Red Teamer in order to be an effective Blue Teamer. You can check out the presentation yourself, but I think this is a great bit:

Red only exists to make blue better, anything else is a waste of time, money, and resources.

The second noteworthy talk was Dan McInerney’s “Physical Security, Try Not to Get Covered in Poo!”. I learned a metric shit ton (no pun intended) about physical security breaches in this one. Lots of tools on my wishlist now! Fun fact: The co-presenter could not make it for some reason.

Third, Yossi Appleboum (from Sepio Systems) talking about rogue device mitigation. In a nutshell, there is exactly one (1) product on the market that can detect Layer 1 exploits like most of the stuff Hak5 and the NSA make. I did not realize how invisible these tools really are. Fun fact: Yossi and crew were the ones who went on record after the StupidMicro supply chain exploit fiasco stating they have seen similar things.

Last but not least was Jason Blanchard from BHIS presenting “How to Persuade Humans by the Ways Humans Like to be Persuaded”. Jason’s background in film production provided some enlightening bits about how to present to people. Having done presentation training for a sales job in the past, I think this was far more relevant.

There were a few other really good talks, but I’m sick of writing!