My last post was about sending pre-formatted JSON to logstash to avoid unnecessary grok parsing. In this post I will show how to do the same thing from rsyslog.
And again, this comes with a disclaimer. My exact model here depends on a version of logstash recent enough to have the udp input. You could do tcp here, but that’s not my example.
Prerequisites: rsyslog version 6+ (I used version 6.4.2, with more recent JSON patching). UPDATE: I just tested with 7.2.5 with success. The problem with earlier versions in the v7 branch was addressed in 7.2.2:
> bugfix: garbled message if field name was used with jsonf property option
> The length for the field name was invalidly computed, resulting in either truncated field names or including extra random data. If the random data contained NULs, the rest of the message became unreadable.
For the record: 6.4.2 works, 6.6 does not, 7.2.5 does. These are the limits of my testing, so far.
rsyslog does what apache does (if you tell it to): it escapes quotes and other special characters so you can send legitimate JSON. What I did was create a template (including an @message field, to mimic what is normally logged) and then send everything to a local logstash agent over a UDP port.
## rsyslogd.conf
$ModLoad immark.so
$ModLoad imuxsock.so
$ModLoad imklog.so
$ModLoad imudp
# You only need $UDPServerRun if you want your syslog to be a centralized server.
$UDPServerRun 514
$AllowedSender UDP, 127.0.0.1, 172.19.42.0/24, [::1]/128
$template ls_json,"{%timestamp:::date-rfc3339,jsonf:@timestamp%,%source:::jsonf:@source_host%,"@source":"syslog://%fromhost-ip:::json%","@message":"%timestamp% %app-name%:%msg:::json%","@fields":{%syslogfacility-text:::jsonf:facility%,%syslogseverity-text:::jsonf:severity%,%app-name:::jsonf:program%,%procid:::jsonf:processid%}}"
*.* @localhost:55514;ls_json
The last line sends everything (every facility and severity) to localhost:55514 over UDP and formats it with the ls_json template defined above.
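Before pointing a full logstash agent at that port, you can sanity-check what the template is actually emitting with a throwaway UDP listener. Here's a minimal sketch: the port matches the config above, but run it instead of (not alongside) logstash, since only one process can bind the port.

```
#!/usr/bin/env python
# Throwaway UDP listener to eyeball what rsyslog emits on port 55514.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 55514))       # same port as the *.* @localhost:55514 line

while True:
    data, addr = sock.recvfrom(8192)  # read up to 8 KB per datagram
    print(data.decode("utf-8", "replace"))
```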
Here’s the logstash agent config, which is listening on 55514:
## logstash.conf
input {
  udp {
    port => 55514
    type => "syslog"
    buffer_size => 8192
    format => "json_event"
  }
}
I don’t need any date magic here, as @timestamp comes through correctly (I still don’t know why it’s flaky for apache). These events are ready for consumption, no filters necessary!
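If you want to confirm that the udp input really does accept these events as-is, you can push a hand-built json_event at it yourself. This is only a sketch: the values below are made up, and the field layout simply mirrors the ls_json template above.

```
#!/usr/bin/env python
# Send one hand-built json_event to the logstash udp input on port 55514.
# Field values are made up; the layout mirrors the ls_json template.
import json
import socket

event = {
    "@source": "syslog://127.0.0.1",
    "@source_host": "blackbox",
    "@timestamp": "2012-09-29T17:30:00.975141-05:00",
    "@message": "Sep 29 17:30:00 test: hello from the test script",
    "@fields": {
        "facility": "user",
        "severity": "info",
        "program": "test",
        "processid": "12345",
    },
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(json.dumps(event).encode("utf-8"), ("127.0.0.1", 55514))
```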
You could also send the JSON out to a file. This is what an example line looks like:
## JSON output
{"@source":"syslog://127.0.0.1","@type":"syslog","@tags":[],"@fields":{"facility":"cron","severity":"info","program":"","processid":"10522"},"@timestamp":"2012-09-29T17:30:00.975141-05:00","@source_host":"blackbox","@message":"Sep 29 17:30:00 : (root) CMD (/usr/libexec/atrun)"}
If you have an existing, native syslog config, you can keep it as-is and just add the lines above to it (renaming it to rsyslogd.conf or something). rsyslogd will continue to write to the same files in /var/log/*whatever* and will also send JSON to port 55514. Again, the idea here is minimal invasiveness: allow the logging to continue the way it has been, but also forward it along to a centralized server.