Simple CI with Chef

So I needed to work out a way to deploy a script I wrote recently across a whole host of systems, and it turns out the only real option here is Chef, so I had to dive into it and read a bunch of documentation. I also had to try a bunch of things, and ended up standing up my own Chef server in the lab to test against. Several hours of clicking and clacking later I have my task worked out, so here it is.

First we need to create a new cookbook and drop a pretty simple default recipe in; all it does is make sure git is installed, then clone a repo to /opt/nhlapi.
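Something along these lines is all the recipe needs (a minimal sketch; the repository URL and branch are placeholders for whatever you are deploying):

# cookbooks/nhlapi/recipes/default.rb

# make sure git is present before we try to clone anything
package 'git'

# clone the repo to /opt/nhlapi and keep it in sync with the remote
git '/opt/nhlapi' do
  repository 'https://github.com/example/nhlapi.git'  # placeholder URL
  revision 'master'
  action :sync
end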

Once we have the recipe we need a role whose run list tells the node to apply it.
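A role file along these lines does the job (a sketch; the name matches the repo-update.json used below, and the description is my own):

{
  "name": "repo-update",
  "description": "Keep the nhlapi repo cloned and up to date",
  "json_class": "Chef::Role",
  "chef_type": "role",
  "run_list": [
    "recipe[nhlapi]"
  ]
}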

Create the role with # knife role from file repo-update.json (or whatever you named the file you created the role from).

Now all that is left is to assign the role to the node, so run # knife node edit itsj-cheftest.itscum.local and add the role to the run list of the node we want.
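knife opens the node object in your editor; the relevant bit is the run list, which should end up looking something like this (trimmed to the relevant key):

{
  "name": "itsj-cheftest.itscum.local",
  "run_list": [
    "role[repo-update]"
  ]
}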

That is enough to get it working; you can kick back and watch it with # while :; do knife status 'role:repo-update' --run-list; sleep 120; done and expect to see it run within about 30 minutes based on the interval and splay values. Speaking of which, interval is pretty self-explanatory, but splay not so much; splay adds a random delay so a bunch of nodes don't all run at once and overwhelm a system they might be checking into or otherwise digitally assaulting.
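Both knobs live in the client config on each node; a sketch with example values (in seconds, and the values are illustrative, not necessarily mine):

# /etc/chef/client.rb
interval 1800  # run chef-client every 30 minutes
splay 300      # plus a random delay of up to 5 minutes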

Simple Icinga2 Plugin

I’ve seen bits and pieces of the process of creating an Icinga2 (or Nagios) plugin, so here are my notes dumped straight from my brain.

First and foremost we need a script to call from Icinga; in this case I created a very simple Python script that just gets the version of LibreNMS running on my monitoring system.
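A sketch of what such a script can look like; the LibreNMS API endpoint and token handling here are assumptions, and the important part is the Nagios-style exit codes (0 = OK, 3 = UNKNOWN):

#!/usr/bin/env python
# check_lnms_ver.py - report the LibreNMS version as a Nagios-style check
import argparse
import sys

import requests

API_TOKEN = 'REPLACE_WITH_API_TOKEN'  # placeholder, create one in LibreNMS

def main():
    parser = argparse.ArgumentParser(description='Check LibreNMS version')
    parser.add_argument('-H', dest='host', required=True,
                        help='address of the LibreNMS server')
    args = parser.parse_args()

    try:
        r = requests.get('http://%s/api/v0/system' % args.host,
                         headers={'X-Auth-Token': API_TOKEN}, timeout=10)
        r.raise_for_status()
        version = r.json()['system'][0]['local_ver']
    except Exception as exc:
        print('UNKNOWN - could not fetch version: %s' % exc)
        sys.exit(3)  # UNKNOWN

    print('OK - LibreNMS version %s' % version)
    sys.exit(0)  # OK

if __name__ == '__main__':
    main()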

This is a pretty simple script; you could call it with ./check_lnms_ver.py -H 192.168.1.100 to see how it works. With the script working, the next portion is done on the command line. First, create the directory that will later be referenced as CustomPluginDir:

# mkdir -p /opt/monitoring/plugins

Now we need to tell Icinga2 about the directory; this is done in a few different places.

In /etc/icinga2/constants.conf add the following:

const CustomPluginDir = "/opt/monitoring/plugins"

and in /etc/icinga2/conf.d/commands.conf we add the following block:
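Something like this (a sketch; the command name lnms_version and the lnms_address custom variable are my own placeholders):

object CheckCommand "lnms_version" {
  command = [ CustomPluginDir + "/check_lnms_ver.py" ]
  arguments = {
    "-H" = "$lnms_address$"
  }
}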

The block above defines the custom command, points it at the script we created first, and passes the correct flags. Now it's time to add the check into the hosts.conf file, so place the following block into /etc/icinga2/conf.d/hosts.conf:
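Again a sketch; the host name and address are placeholders and should match however your monitoring host is already defined:

object Service "librenms-version" {
  host_name = "librenms"
  check_command = "lnms_version"
  vars.lnms_address = "192.168.1.100"
}

Icinga2 only reads its configuration at startup, so reload it after the edits with # systemctl reload icinga2.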

And with that we wait for the next polling cycle and should see something like the screenshot below

This is a highly simplistic example, but figuring it out was necessary for me because I had to port some existing code from Ruby to Python, so I wanted to know exactly how a plugin was created, what values it returned, and how it all fits together.

Homelab: Synology failure post-mortem

I take my homelab very seriously; it's modeled after several production environments I have worked on over the years. What follows is my recap of the events over a few weeks leading up to the total failure of my central storage system, my beloved Synology DS1515 hosting 5.5TB of redundant network storage. The first signs of problems cropped up on May 31st and culminated over the last week of June.


Low tech Salt deployment

So I have been tearing down and rebuilding a lot of crap in the lab lately (Kubernetes clusters, ELK stack, etc.) and I have constantly had to re-add Salt to the VMs because salt-cloud doesn't yet play nice with Xen. After about the third time of manually installing epel-release and salt-minion and then changing the config, I got tired of it and wrote perhaps the worst script ever to remotely do all that work for me, and possibly be used later when I finally get salt-cloud working with Xen.
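The script amounts to looping those same steps over SSH; a rough sketch of the idea (the master hostname and the CentOS/yum assumptions are mine):

#!/bin/bash
# bootstrap salt-minion on each host passed as an argument
for host in "$@"; do
    ssh root@"$host" '
        yum -y install epel-release &&
        yum -y install salt-minion &&
        sed -i "s/^#master:.*/master: salt.lab.local/" /etc/salt/minion &&
        systemctl enable --now salt-minion
    '
done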

Granted, this still relies on me manually running ssh-copy-id first so I don't have to keep typing in passwords, but it's still a lot fewer commands. Maybe if I get the time I will add in some logic to auto-accept the key in Salt so that I don't have to do that manually either.
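For the record, the manual step that logic would replace is a single command on the master (the minion ID here is a placeholder):

# salt-key -y -a newvm.lab.local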

Salt States for the Homelab

Over the past year or so I have been playing around with SaltStack to automate as much as I possibly can in my lab, from updates to base VM configuration to making lab-wide configuration changes (such as setting up SNMP for monitoring). Here is my collection of states I currently use to carry out that baseline setup. They are all called from within my top.sls, so at highstate they are all applied, which makes things suck just a little less when running updates and helps prevent typos from making things take longer than necessary.
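For context, the top.sls tying them together looks something along these lines (a sketch; the state names match the files below):

# top.sls
base:
  '*':
    - user
    - snmp
    - repo
    - packages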

user.sls
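A sketch of a baseline user state (the username and key location are placeholders for my own):

# user.sls - make sure the lab admin account exists with a key in place
labadmin:
  user.present:
    - shell: /bin/bash
    - home: /home/labadmin

labadmin_ssh_key:
  ssh_auth.present:
    - user: labadmin
    - source: salt://user/files/labadmin.pub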

snmp.sls
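A sketch of the SNMP state; the package and service names assume a CentOS box, and the config file is served from the master:

# snmp.sls - install, configure, and run the SNMP daemon
net-snmp:
  pkg.installed

/etc/snmp/snmpd.conf:
  file.managed:
    - source: salt://snmp/files/snmpd.conf

snmpd:
  service.running:
    - enable: True
    - watch:
      - file: /etc/snmp/snmpd.conf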

repo.sls
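A sketch of the repo state; on CentOS this mostly amounts to making sure EPEL is available:

# repo.sls - make sure the extra repos are present
epel-release:
  pkg.installed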

packages.sls
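And a sketch of the baseline package set (the list itself is a placeholder for whatever your lab needs):

# packages.sls - baseline tools every VM should have
baseline-packages:
  pkg.installed:
    - pkgs:
      - vim-enhanced
      - htop
      - wget
      - bind-utils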


And finally my favorite of all: a working curl from within a state to hit an API target and kick off discovery. In this case it's a discovery within EM7, but it can easily be modified as necessary.
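The shape of it is a cmd.run state wrapping the curl call; a sketch, with the EM7 host, credentials, payload, and endpoint all being placeholders:

# discovery.sls - poke the EM7 API to start a discovery session
kick_off_discovery:
  cmd.run:
    - name: >
        curl -sk -u em7admin:CHANGEME
        -H "Content-Type: application/json"
        -X POST -d '{"name": "lab-discovery"}'
        https://em7.lab.local/api/discovery_session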

