My last post was about sending pre-formatted JSON to logstash to avoid unnecessary grok parsing. In this post I will show how to do the same thing from rsyslog.
And again, this comes with a disclaimer: my setup depends on a version of logstash recent enough to have the udp input. You could use tcp instead, but that's not what I show here.
Prerequisites: rsyslog version 6+ (I used version 6.4.2, which includes the more recent JSON patching). UPDATE: I have since tested 7.2.5 with success. The problems with earlier versions in the v7 branch were addressed in 7.2.2:
bugfix: garbled message if field name was used with jsonf property option
The length for the field name was invalidly computed, resulting in either
truncated field names or including extra random data. If the random data
contained NULs, the rest of the message became unreadable.
For the record: 6.4.2 works, 6.6 does not, 7.2.5 does. These are the limits of my testing, so far.
rsyslog does what apache does (if you tell it to): it escapes quotes and other special characters so you can emit legitimate JSON. What I did was create a template (including an @message field, to mimic what is normally logged) and then send everything to a local logstash agent over a UDP port.
# You only need $UDPServerRun if you want your syslog to be a centralized server.
$AllowedSender UDP, 127.0.0.1, 172.19.42.0/24, [::1]/128
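For illustration, a template and forward rule in rsyslog's legacy config syntax might look something like the following. This is a sketch, not the canonical config: the template name, the exact field set, and the forward rule are my assumptions, with only the port matching the setup described here.

```
# Hypothetical template: build a JSON event with an @message field,
# using the "json" property option to escape quotes and control
# characters so the output stays legitimate JSON.
$template lsjson,"{\"@source_host\":\"%hostname:::json%\",\"@message\":\"%timestamp% %app-name%: %msg:::json%\",\"@fields\":{\"severity\":\"%syslogseverity-text:::json%\",\"facility\":\"%syslogfacility-text:::json%\",\"program\":\"%app-name:::json%\"}}"

# Forward everything over UDP (single @; @@ would be TCP) to the
# local logstash agent on port 55514, formatted with the template.
*.* @127.0.0.1:55514;lsjson
```

The single `@` in the forward rule is what selects UDP transport; the template name after the semicolon overrides rsyslog's default forwarding format.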
If you have an existing, native syslog config, you can keep it as-is and just add the lines above to it (and rename it to rsyslogd.conf or something). rsyslogd will continue to write out to your same files in /var/log/*whatever* and also send in json format to port 55514. Again, the idea here is minimal invasiveness: allow the logging to continue the way it has been, but also forward it along to a centralized server.
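On the logstash side, an input along these lines should pick up the events. This is a sketch only; whether you use the json codec shown here or an older format option depends on your logstash version.

```
# Minimal logstash udp input (sketch). Port 55514 matches the rsyslog
# forwarding target; the json codec parses the pre-formatted events
# directly, so no grok filter is needed.
input {
  udp {
    port  => 55514
    codec => "json"
  }
}
```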