Fast development of Grok / Logstash extractions and fields

I had the fun time of trying to write grok rules in a particular way as part of a complicated pipeline. I got tired of pushing the rules and restarting Logstash; there had to be a better way!

This is what I ended up doing on my development system:

wget https://artifacts.elastic.co/downloads/logstash/logstash-6.3.1.rpm
yum localinstall logstash-6.3.1.rpm 
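
Once installed, a quick sanity check that the binary is in place (the RPM installs under /usr/share/logstash):

/usr/share/logstash/bin/logstash --version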

Create your pipeline in: /etc/logstash/conf.d/
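
With the three example files below in place, the directory ends up looking like this:

ls /etc/logstash/conf.d/
000-file-in.conf  25-filter.conf  999-output.conf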

Create the following example files:

/tmp/input.txt:

2018-07-16T01:53:28.716258+00:00 acme-host1 sshd[12522]: Disconnected from 8.8.8.8 port 37972
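
One way to create it, using the sample line above:

cat > /tmp/input.txt <<'EOF'
2018-07-16T01:53:28.716258+00:00 acme-host1 sshd[12522]: Disconnected from 8.8.8.8 port 37972
EOF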

000-file-in.conf:

input {
    file {
        path => [ "/tmp/input.txt" ]
        start_position => "beginning"
        type => "test"
        add_field => { "sourcetype" => "test" }
        # Don't persist read positions, so the file is re-read from the
        # beginning every time the pipeline (re)starts
        sincedb_path => "/dev/null"
    }
}

25-filter.conf:

filter {
    if [type] == "test" {
        grok {
            # Name the timestamp capture so the date filter below can use it,
            # and overwrite "message" with just the log payload
            match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{SYSLOGHOST:logsource} %{SYSLOGPROG}?: %{GREEDYDATA:message}" }
            overwrite => [ "message" ]
            add_tag => [ "p25vls" ]
        }

        date {
            locale => "en"
            # The sample line carries an ISO8601 timestamp
            match => [ "timestamp", "ISO8601" ]
            timezone => "UTC"
        }
    }
}

999-output.conf:

output {
    stdout { codec => rubydebug }
}
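
Before starting it up, you can have Logstash validate the configuration and exit (-t is short for --config.test_and_exit):

/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/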

Run:

/usr/share/logstash/bin/logstash -r -f /etc/logstash/conf.d/
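
The -r flag is what makes the iteration loop work: it is short for --config.reload.automatic, which tells Logstash to watch the config files and reload the pipeline whenever they change. The equivalent long form:

/usr/share/logstash/bin/logstash --config.reload.automatic -f /etc/logstash/conf.d/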

Give it a minute to start up, because, well, Java.
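
Once it is up, each line of /tmp/input.txt should be printed as a rubydebug event. Roughly what you should see for the sample line (abridged and slightly simplified; "host" here is a hypothetical machine name, and the exact fields and ordering will vary):

{
     "timestamp" => "2018-07-16T01:53:28.716258+00:00",
     "logsource" => "acme-host1",
       "program" => "sshd",
           "pid" => "12522",
       "message" => "Disconnected from 8.8.8.8 port 37972",
    "sourcetype" => "test",
          "type" => "test",
          "tags" => [ "p25vls" ],
    "@timestamp" => 2018-07-16T01:53:28.716Z,
          "path" => "/tmp/input.txt",
          "host" => "devbox"
}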

Now, in a second window, modify your pipeline (25-filter.conf, for example) and save it.

You should see Logstash reprocess the data from /tmp/input.txt.
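
You can also feed in fresh test data without touching the configs, since the file input tails /tmp/input.txt; for example, append another line (this one is just a made-up second sample):

echo '2018-07-16T01:53:29.000000+00:00 acme-host1 sshd[12523]: Disconnected from 8.8.4.4 port 37973' >> /tmp/input.txt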

Happy iterative development :-)