Unconventional Zabbix: Part 1 — Extended External Monitoring

Zabbix 2.2 will be the first version to allow web scenario templating. But in the days of 1.6, 1.8, and 2.0, I had (and still have) serious shortcomings in web monitoring to overcome. I had hundreds of URLs which needed monitoring, with a large portion of them being nearly identical, or at least having identical test URL responses. I had perhaps 5 basic URL types, and one type alone accounted for over 80% of the sample pool.

Using Zabbix’s native URL monitoring, I would have spent hours, perhaps days, entering each URL, each grep string, and each potential response code, just to create the items. After that I’d still have to create triggers (though this part would be largely copy/clone-able). There had to be another way that would save time and preserve my sanity. Enter custom external monitoring.

Zabbix allows customized external monitoring with one big proviso from the documentation: “Do not overuse external checks! It can decrease performance of the Zabbix system a lot.” I did some digging into the reasoning behind this warning and found a work-around for the aforementioned decrease in performance.

In ordinary Zabbix checks, after a database call for parameters, the call to the remote machine is initiated from within the already-running Zabbix binary; it’s just a network call. For an external check, Zabbix must spawn a shell and wait for the exit code of that process before it can free up resources and return the value. Imagine what happens if you have hundreds of these per minute! Zabbix could, and likely would, get held up. The simplest answer is to have Zabbix call a wrapper script as the external script. The wrapper does little or no processing: it passes the args to the actual script and runs it with an appended ampersand (&) to background the “actual” process, giving a near-immediate return. The “actual” script then sends its values back to a Zabbix trapper item using zabbix_sender. By using this method, I am able to process hundreds of new values per minute in 100% external-script setups.
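To make this concrete, here is a minimal sketch of the two pieces in Python. Everything named here is a placeholder of mine (the script paths, the Zabbix server name, and a simple HTTP response-code check standing in for the “actual” work); a two-line shell script with a trailing & accomplishes the same wrapper job. First, the wrapper that Zabbix invokes as the external check:

#!/usr/bin/env python
# wrapper.py -- hypothetical sketch: Zabbix calls this as the external check.
# It backgrounds the real worker and exits immediately, freeing the poller.
import subprocess
import sys

devnull = open('/dev/null', 'w')
subprocess.Popen(['/usr/local/bin/checkurl.py'] + sys.argv[1:],
                 stdout=devnull, stderr=devnull, close_fds=True)
print(0)  # this becomes the wrapper item's value -- effectively always zero

Then the worker, which does the slow part and traps the result back in (the receiving item key must be defined as a trapper-type item on the host):

#!/usr/bin/env python
# checkurl.py -- hypothetical worker: runs the slow test, then sends the
# result to a Zabbix trapper item via zabbix_sender.
import subprocess
import sys
import urllib2  # Python 2; use urllib.request on Python 3

host, key, url = sys.argv[1], sys.argv[2], sys.argv[3]
try:
    code = urllib2.urlopen(url, timeout=30).getcode()  # HTTP response code
except Exception:
    code = 0  # unreachable or failed -- let a trigger catch the zero
subprocess.call(['zabbix_sender', '-z', 'zabbix.example.com',
                 '-s', host, '-k', key, '-o', str(code)])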

The only downside to this setup that I can ascertain is that I am tracking a Zabbix item whose only purpose is to initiate the external script (the wrapper). I technically don’t need a trigger watching this item, as it will almost always return zero (the exit code of the backgrounded shell). This logic can be extended to a number of other clever tests, such as watching SSL certificate expiration and validating SSL certificate chains. I will share these in a different post, as this post is more about the methodology than the implementation.

Introducing zoop

Drumroll please…

Introducing… zoooooooop!

I got sick of hard-coding calls to the Python Zabbix API module (https://github.com/gescheit/scripts), so I wrote zoop: Zabbix Object-Oriented Python.

With zoop, I have made (and will continue to add) classes, or objects if you will, of Zabbix API calls. Need to create a new item?

from zoop import *

api = zoop(url='http://www.example.com/zabbix', username='zabbixusername', password='zabbixpassword')
host = api.host()
host.get(name='zabbix.example.com')


The .get() method will fill the object with information from the API, if the host exists (it will search by host, name, or hostid).
Once filled, the host object behaves like a Python dict, so you can read or set values like this: host["hostid"]

item = api.item()
item["hostid"] = host["hostid"]
item["key_"] = 'myitemkey'
item["other item values"] = etc.

There is a .get() method for items as well, e.g. item.get(key_='my_item_key', hostid=host["hostid"]), which will fill the item object with values from the API.
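To finish the job and actually create the item on the server, there should be a corresponding create call wrapping the Zabbix item.create API method; the exact method name here is an assumption on my part, so check the zoop source:

item.create()  # hypothetical: sends the values set above via the item.create API call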

Please feel free to browse the source. There is a LOT that could be done here to extend it, make it better, and encompass more API calls. This is just for starters!

zoop is available at https://github.com/untergeek/zoop

The Zabbix Grab-Bag

I finally created a repository on GitHub for all of my Zabbix scripts: https://github.com/untergeek/zabbix-grab-bag

This is the culmination of a dream that started a few years ago. I wanted a way to share my scripts in a way that others would be able to both use and improve them. GitHub is the chosen vessel.

Rather than making this a true project, I envision it as more of a “grab-bag” of projects/scripts/templates from myself and others. And you should be able to license your own scripts however you want, too.

So check it out! Contribute! Let’s make Zabbix even more awesome!

ls-zbxstatsd – Part 1: Wrangling a Zabbix key from a statsd key string

I have just forked zbx-statsd into ls-zbxstatsd on GitHub.

The reason for this is that zbx-statsd was not compatible with the format coming from Logstash’s statsd output plugin.

Statsd format is simply “key:value|[type]”.
In Logstash, “key” is different, and the format becomes “namespace.sender.’whatever you named it in the statsd output plugin’:value|[type]”. Things get more complicated when you need to split an already period-delimited “key” and figure out which part is which. What if the “sender,” which is the Zabbix host you want the metrics stored under, is a period-delimited FQDN?

This was too much to handle, so I added a delimiter: double semicolons. With this, the format sent from Logstash now looks like “namespace.sender;;.’whatever you named it in the statsd output plugin’:value|[type]”. This is much easier to split.

For now, I strip the namespace altogether. I don’t need it, and while it might be useful later, I couldn’t think of a reason to keep it, so my script expects the default “logstash” and strips that out. If you’re using this script right now, don’t change the default namespace (or be prepared to edit the code). Now I’m left with “sender;;.’whatever you named it in the statsd output plugin’:value|[type]”, where:

  • sender = zabbix host
  • ‘whatever you named it in the statsd output plugin’ = item key

With the double semicolons I can easily separate the zabbix host name from the zabbix key, even if there are many periods in each.
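Here is a minimal sketch of that split in Python. The function and variable names are mine for illustration, not taken from the actual ls-zbxstatsd code:

# Hypothetical sketch of the key-splitting logic described above.
def split_metric(line):
    # line looks like: "logstash.web01.example.com;;.nginx.requests:42|c"
    key_part, _, rest = line.partition(':')
    value, _, mtype = rest.partition('|')
    if key_part.startswith('logstash.'):
        key_part = key_part[len('logstash.'):]  # strip the default namespace
    # the ;;. delimiter makes this split unambiguous, no matter how many
    # periods the sender FQDN or the item key contain
    sender, _, item_key = key_part.partition(';;.')
    return sender, item_key, value, mtype

# split_metric('logstash.web01.example.com;;.nginx.requests:42|c')
# returns ('web01.example.com', 'nginx.requests', '42', 'c')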

With that resolved, it was time for stage two: automatic item creation.

Time to geek blog about Logstash and Zabbix again

It’s been so long since I did any kind of geek blogging that I figure it’s time I lived up to my name again.

I’ve taken to running Logstash and Elasticsearch as a centralized logging engine.  I’ve been doing so for over a year now.  The cluster I created and managed for my company was indexing over 16,000 records per second during the busiest time of day.  Hopefully I’ll be able to put some useful data here for my fellow Logstash users.

I also did a lot of work with Zabbix for my company, plus consulted for another company on the side.  I’ll have to put some of those details here for anyone interested.