Update: This page is for the now deprecated Logstash 1.1.x and older. Look for the updated version of this here: https://untergeek.com/2013/09/11/getting-apache-to-output-json-for-logstash-1-2-x/
Last time we looked at ways to improve logstash/elasticsearch with elasticsearch templates. Today we’ll save ourselves a lot of grok parsing pain with apache’s custom log feature.
Disclaimer: This only works with versions of logstash supporting the UDP input. You can adapt this to send or log in another way, if you like, e.g. send the json to a file and have logstash tail it.
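If you do go the file route, a minimal sketch might look like the following. The log path and the reuse of the ls_apache_json format name (defined just below) are my assumptions for illustration, not part of the setup this post actually uses:

CustomLog /usr/local/apache2/logs/access_json.log ls_apache_json

and on the logstash side, a file input in place of the udp input shown later:

input {
  file {
    path => "/usr/local/apache2/logs/access_json.log"
    type => "apache"
    format => "json_event"
  }
}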
Let’s look first and explain afterward. If you are using an Include line in your apache config (e.g. Include conf.d/*.conf), all you need to do is put this in a standalone file or in a vhost definition. On a single-host apache I create a logstash.conf and put this in it:
LogFormat "{ "@vips":["vip.example.com","customer.example.net"], "@source":"file://host.example.com//usr/local/apache2/logs/access_log", "@source_host": "host.example.com", "@source_path": "/usr/local/apache2/logs/access_log", "@tags":["Application","Customer"], "@message": "%h %l %u %t \"%r\" %>s %b", "@fields": { "timestamp": "%{%Y-%m-%dT%H:%M:%S%z}t", "clientip": "%a", "duration": %D, "status": %>s, "request": "%U%q", "urlpath": "%U", "urlquery": "%q", "method": "%m", "bytes": %B } }" ls_apache_json CustomLog "|/usr/local/bin/udpclient.pl 127.0.0.1 57080" ls_apache_json
Some of this should look straightforward, but let me point to some pitfalls I had to dig myself out of.
- bytes: in the @message, I use %b, but in @fields, I use %B. The reason is summed up nicely on http://httpd.apache.org/docs/2.2/mod/mod_log_config.html :
%B Size of response in bytes, excluding HTTP headers.
%b Size of response in bytes, excluding HTTP headers. In CLF format, i.e. a '-' rather than a 0 when no bytes are sent.

In other words, since I'm trying to send an integer value, if I don't choose %B, I may send a '-' (dash/hyphen) when there is no value to send, which would cause the field to be categorized as a string. Jordan says that sending the value as a JSON integer (i.e. no quotes) should make it into ES as an integer, though this may yet require a mapping/template; a sketch of what that might look like follows.
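Something along these lines in an elasticsearch index template would pin those numeric fields down explicitly. This is only a sketch, not the template from the previous post; the index pattern, the "apache" type name, and the choice of fields are assumptions based on the @fields layout above:

{
  "template": "logstash-*",
  "mappings": {
    "apache": {
      "properties": {
        "@fields": {
          "properties": {
            "bytes":    { "type": "long" },
            "duration": { "type": "long" },
            "status":   { "type": "integer" }
          }
        }
      }
    }
  }
}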
- @message: This is the apache common format. You could easily substitute in the same fields as go in the apache combined format. Or, you could leave the common format as @message and simply add the fields for user-agent and referrer if you want to collect those:
"referer": \"%{Referer}i\", "useragent": \"%{User-agent}i\"
- timestamp: Jordan has had no problems with passing @timestamp directly, but I have had nothing but problems. Perhaps I can get a solution linked here at some point, but in the meantime I simply emit the timestamp in ISO8601 and then use the date and mutate filters in logstash.conf:
input {
  udp {
    port => 57080
    type => "apache"
    buffer_size => 8192
    format => "json_event"
  }
}

filter {
  date {
    type => "apache"
    timestamp => "ISO8601"
  }
  mutate {
    type => "apache"
    remove => [ "timestamp" ]
  }
}
What comes out is ready (except for the date munging) for feeding into elasticsearch, and even has an @message field for searching. This method also makes it trivial to add extra fields (get them from http://httpd.apache.org/docs/2.2/mod/mod_log_config.html) without doing anything extra or having to re-work your patterns for grok. As I mentioned previously, I keep the common log format for @message, then add the other fields (like duration, user-agent and referer) as needed. An apachectl restart is all it takes to get the new values into elasticsearch.
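As an example of how little it takes, logging the canonical server name would just mean dropping one more pair into the @fields block alongside the others; %v comes straight from mod_log_config, though this particular field is my illustration rather than something the setup above already collects:

\"vhost\": \"%v\",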
And for the sake of a complete solution, the udpclient.pl script:
#!/usr/bin/perl
#udpclient.pl
use IO::Socket::INET;
my $host = $ARGV[0];
my $port = $ARGV[1];
# flush after every write
$| = 1;
my ($socket,$logdata);
# We call IO::Socket::INET->new() to create the UDP Socket
# and bind with the PeerAddr.
$socket = new IO::Socket::INET (
    PeerAddr => "$host:$port",
    Proto    => 'udp'
) or die "ERROR in Socket Creation : $!\n";
while ($logdata = <STDIN>) {
    $socket->send($logdata);
}
$socket->close();
I also tend to think that one of the best things about this solution is that it does not interfere with your current logging setup in any way. It simply captures an extra copy of each request, pre-formatted, and sends it over local udp to logstash, and from there to whatever output(s) you have defined. A minimal output stanza is sketched below.
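For completeness, the sort of output stanza I have in mind looks like this. It is only a sketch for logstash 1.1.x; the localhost elasticsearch node and the stdout output for debugging are assumptions rather than part of the setup above:

output {
  # echo events to the console while testing
  stdout { debug => true }
  # ship events to a local elasticsearch node
  elasticsearch { host => "localhost" }
}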