Let’s learn you some Ansi-bubbles!
Let’s start with this. I learn by doing. Rarely do I sit down and read 300 pages of documentation and then set off to accomplish something. The impact on this little series is that we’re going to do something with a goal in mind: build some IDS sensors with Ansible so it’s repeatable. If you just want to use this to learn how to set up a few things on one system, that’s cool, but the intent here is to get a fleet of IDS sensors working as quickly as possible, with minimal variation. Just keep that in mind, dear reader, if I’m skipping over some details in favor of just getting to the meat of it.
Minimal concepts you need to know to make Ansible do a thing:
- Control Node (Ansible Master)
- Inventory File
- Playbook
The control node is where you actually have Ansible installed, and where you connect to your managed nodes (IDS boxes in this case). “apt install ansible -y” and you’re done. Oh yeah, I’m an Ubuntu guy, so you’ll have to translate into RHEL-based commands on your own.
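For reference, the whole install on Ubuntu is only a couple of commands. The dnf line below is my best guess at the RHEL-family equivalent, so treat it as an assumption rather than something I’ve verified:

sudo apt update && sudo apt install -y ansible
ansible --version

On RHEL-ish systems it should be something like “sudo dnf install -y ansible-core”, but again, translate as needed.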
The inventory file is just a list of the machines you want to manage, with some variables like how to connect to said machines. Warning: I am not an Ansible expert. We’re here to learn by doing. Some of these things are not best practice.
The playbook is your set of instructions, your recipe, your MOP, etc. This is what makes the magic happen so you don’t have to spend a bunch of time in vi replacing the same strings in the same files, box by box, over and over and over and over and over.
Let’s look at an inventory file in more detail!
[lab-ids]
172.29.249.46
172.29.120.230
[lab-ids:vars]
ansible_connection=ssh
ansible_user=ansible
ansible_ssh_pass=migrant6vermouth3DOCK8charmer
ansible_sudo_pass=migrant6vermouth3DOCK8charmer
First we define a group for our hosts, lab-ids in this case. Then we put some hosts in the group (172.29.249.46 and 172.29.120.230). Next we assign some useful variables to the group that tell Ansible how to talk to the machines. This example tells Ansible to use SSH to connect to the hosts in the group, use “ansible” as the username when connecting, use the password “migrant6vermouth3DOCK8charmer” for the “ansible” account, and finally use that same password to sudo your commands.
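Before we touch a playbook, it’s worth a quick sanity check that Ansible can actually reach the hosts in the inventory. A minimal test, assuming the inventory lives at /opt/ansible_ids/hosts (the same path we’ll use later) and that sshpass is installed on the control node since we’re using password auth:

ansible -i /opt/ansible_ids/hosts lab-ids -m ping

If the plumbing is right, each host should come back with a green “pong”.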
The playbook gets a little more complicated. Ansible has a [metric|imperial] boatload of built-in actions, and it has matured and grown substantially since I first started playing with it 7+ years ago.
---
- hosts: "*"
  become: yes
  tasks:
    - name: Update all packages to their latest version
      apt:
        name: "*"
        state: latest
    - name: Remove useless packages from the cache
      apt:
        autoclean: yes
    - name: Remove dependencies that are no longer required
      apt:
        autoremove: yes
The above example says “do this to all hosts”, which we’ll control by referencing the hosts file when we execute the playbook. Next we say “become” a privileged user via sudo. Then we define the tasks we want to do — in this case they are all various apt commands we use to update our systems and then do a little housekeeping of apt itself.
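Side note: if you’d rather be explicit, you can target the group name from the inventory instead of the wildcard. Swapping the first play line for the one below does the same thing in our case, since that group is all the inventory contains:

- hosts: lab-ids
  become: yes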
The “name” variable is what is shown when you run the task in a playbook, so make it somewhat descriptive. The next line says use the “apt” module, which is pretty much a core function of Ansible at this point. From there we define some variables to run against apt. The first is saying “change the state of all packages to latest”, in other words please run “apt upgrade -y” for me. The second is running “apt autoclean”, and the third is “apt autoremove”. I’m sure there’s some yum equivalent given that Ansible is owned by Red Hat (which is itself owned by IBM these days)…
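For the curious, here’s a rough sketch of what the same tasks might look like on a RHEL-family box using the dnf module. I haven’t run this myself, so treat it as an assumption, not gospel:

    - name: Update all packages to their latest version
      dnf:
        name: "*"
        state: latest
    - name: Remove dependencies that are no longer required
      dnf:
        autoremove: yes

There’s no autoclean option on the dnf module as far as I know; it manages its cache a bit differently.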
Now to the fun part – let’s make Ansible do something!
ansible-playbook -i /opt/ansible_ids/hosts /opt/ansible_ids/playbooks/apt_updates.yml
Not super exciting, is it? Run ansible-playbook, specify the hosts file we have above, specify the playbook YAML file you want to run, and please go forth, nice computer, and do the thing for me (enter).
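A couple of standard ansible-playbook flags I lean on before letting a playbook loose on a whole fleet:

ansible-playbook -i /opt/ansible_ids/hosts /opt/ansible_ids/playbooks/apt_updates.yml --check
ansible-playbook -i /opt/ansible_ids/hosts /opt/ansible_ids/playbooks/apt_updates.yml --limit 172.29.249.46

The first is a dry run that reports what would change without actually changing it, and the second limits the run to a single host so you can test on one sensor before hitting the rest.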
Stay tuned for Part 2!