In my previous blog posts, I explained how to use Fluentd to parse and ship logs to a centralized logging server (EFK stack), how to parse multi-line logs for Rails applications, and gave an overview of some of the most commonly used Fluentd plugins for parsing logs. You can find those posts using the links below:

In this post, I will focus on another issue that can be solved with Fluentd: parsing mixed-format log lines. Solving it improves the quality of the logs and makes it much easier to build visualizations on top of the parsed fields.


The Problem

Software applications generate logs in several different formats, such as plain-text logs, JSON logs, key-value pairs, and mixed formats that combine plain text and key-value pairs on the same line, as shown below.
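As a made-up illustration (the exact line in the original application may differ), a mixed-format log line could look like this: a plain-text prefix carrying the event description and time, followed by key-value pairs:

```
Completed request at 2021-03-15 10:22:31 method=GET path=/api/users format=json status=200 duration=12.3
```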

By default, Fluentd will treat a log line like the one above as one long text field, and you will not be able to filter the logs in Kibana based on the method or the format of the response. It is possible to extract the keys from the log line and add them to the event record using the Regex parser or a key-value parser. However, these solutions have limitations: they only work if you know exactly which keys will be present in the log line, or if the lines contain only key-value pairs (no plain text).
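For reference, this is roughly what a filter using Fluentd's built-in `kv` parser looks like (the tag `app.**` and the field name `message` are assumptions about the pipeline, not taken from the original post):

```
<filter app.**>
  @type parser
  key_name message
  reserve_data true
  <parse>
    @type kv
  </parse>
</filter>
```

This works well when the whole line consists of `key=value` pairs, but it breaks down as soon as the line starts with a plain-text prefix, which is exactly the case described next.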

In my case, the keys in the log lines were dynamic, depending on the request or the event. Therefore, I could not use the Regex parser, and I could not use the key-value parser either, since the log lines contain a plain-text prefix with the event time 😢.
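To make the problem concrete, here is a minimal Ruby sketch (the sample line and variable names are hypothetical, not the post's actual solution) showing how dynamic `key=value` pairs can be pulled out of a line even when it starts with a plain-text prefix:

```ruby
# Hypothetical example line: plain-text prefix followed by dynamic key-value pairs.
line = 'Completed request at 2021-03-15 10:22:31 method=GET status=200 duration=12.3'

# Scan for key=value tokens; the plain-text prefix is simply ignored,
# and no fixed list of keys is required.
pairs = line.scan(/(\w+)=(\S+)/).to_h
# pairs => {"method"=>"GET", "status"=>"200", "duration"=>"12.3"}
```

The point is that a pattern-scan approach tolerates both the unknown key set and the plain-text prefix, which neither the Regex parser (fixed keys) nor the plain key-value parser (no prefix allowed) handles on its own.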

#fluentd #ruby-on-rails #programming #rails #ruby

Centralized Logging: Parse Key-Value Logs With FluentD