
Use the uptime of DLT messages in Elasticsearch

March 22, 2019, written by Raphael Geissler, raphael.geissler@systemticks.de

In my last blog post, Analyze DLT files fast and easy with Elastic Stack, we saw how to import DLT messages into Elasticsearch and search them with Kibana. Today we show how to change the configuration so that the original DLT timestamp, rather than the import time, is used for filtering and searching in Kibana.

Extract DLT timestamps from messages

DLT messages carry two timestamps: the absolute time of the device and the uptime of the device. Developers usually work with the so-called 'ticks', the uptime value. We will parse both and store them as separate fields in the Elasticsearch index. To do this, we have to filter the messages with Logstash.

Add the following filter section to your Logstash configuration:

filter {
    grok {
        patterns_dir => ["./patterns"]
        match => { "message" => "%{NUMBER}%{SPACE}%{DLT_TIMESTAMP:rtimestamp}%{SPACE}%{NUMBER:ticks}%{GREEDYDATA}" }
    }

    # Strip to milliseconds
    # by removing the last digit
    mutate {
        gsub => ["ticks", "\d{1}$", ""]
    }

    # Convert ticks to a date
    date {
        match => [ "ticks", "UNIX_MS" ]
    }
}

We are using three filter plugins to turn 'ticks' into the timestamp: grok, mutate, and date.


Grok parses arbitrary text and structures it. Grok is a great way to parse unstructured log data into something structured and queryable. [1]

We use grok to parse the DLT message and extract the raw timestamps. For this we need the following grok regex patterns. Put them into a file inside a patterns folder in the Logstash bin folder.


BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))

YEAR (?>\d\d){1,2}
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
SECOND (?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)

MICROS (?:[0-9][0-9][0-9][0-9][0-9][0-9])

# Assumed composite pattern for the time column of a DLT viewer export,
# e.g. 2019/03/22 18:45:01.123456 -- adjust it if your export format differs
DLT_TIMESTAMP %{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}

The following match entry parses each message and stores the DLT timestamp in the field rtimestamp and the uptime in the field ticks:

match => { "message" => "%{NUMBER}%{SPACE}%{DLT_TIMESTAMP:rtimestamp}%{SPACE}%{NUMBER:ticks}%{GREEDYDATA}" }
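To get a feeling for what the grok match does, here is a rough Python equivalent. The sample line and the exact timestamp layout are assumptions, not taken from a real export; check a line of your own DLT file and adapt the regex accordingly.

```python
import re

# Approximate Python counterpart of the grok match above:
# index, absolute time, ticks, rest of the message.
DLT_LINE = re.compile(
    r"^(?P<index>\d+)\s+"
    r"(?P<rtimestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<ticks>\d+)\s+"
    r"(?P<rest>.*)$"
)

# Hypothetical exported DLT line, for illustration only
sample = "12 2019/03/22 18:45:01.123456 5477 ECU1 APP1 CTX1 example payload"
m = DLT_LINE.match(sample)
print(m.group("rtimestamp"))  # absolute device time
print(m.group("ticks"))       # uptime in tenths of a millisecond
```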


DLT stores the uptime in tenths of a millisecond, but Elasticsearch only supports millisecond resolution for dates. So we have to remove the last digit from the ticks field with:

gsub => ["ticks", "\d{1}$", ""]
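A quick sketch in Python shows why removing the last digit works: it is the same as integer division by ten, which converts tenths of a millisecond into whole milliseconds. The ticks value here is made up for illustration.

```python
import re

ticks = "5477"                          # example uptime in 0.1 ms units
millis = re.sub(r"\d{1}$", "", ticks)   # same substitution as the gsub above
print(millis)                           # "547"
print(int(ticks) // 10)                 # 547 -- integer division by ten, same value
```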


Finally we convert the stripped ticks value into a date with

match => [ "ticks", "UNIX_MS" ]

which interprets it as a Unix timestamp in milliseconds.
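In Python terms, this step does roughly the following. Note that because ticks is an uptime, the resulting dates land shortly after the Unix epoch in 1970, which is why the Kibana time picker has to be adjusted later on.

```python
from datetime import datetime, timedelta, timezone

# Interpret a millisecond value as an offset from the Unix epoch,
# as the date filter does with UNIX_MS.
millis = 547  # example ticks value after stripping the last digit
ts = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(milliseconds=millis)
print(ts.isoformat())  # 1970-01-01T00:00:00.547000+00:00
```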

Reimport the log file

To reimport the logs you should first delete the old index from the first blog post. Go to the Dev Tools section in Kibana and execute the command DELETE events.

[Screenshot: dlt es blog 05]
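If you prefer the terminal over Dev Tools, the same index delete can be issued with curl. This assumes Elasticsearch is running on the default localhost:9200.

```shell
# Delete the 'events' index, equivalent to DELETE events in Dev Tools
curl -X DELETE "http://localhost:9200/events"
```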

Afterwards you have to stop Logstash with CTRL-C and restart it with the same command:

~/dev/tools/logstash-6.5.2/bin/logstash -f ../config/logstash.conf

Then you can switch back to the Discover section. To see your events again, you have to set the date range with the time picker as shown in the following picture.

[Screenshot: dlt es blog 06]

Now you should see the logs with the ticks value as the timestamp.

[Screenshot: dlt es blog 07]


In this blog post we have seen how to modify the Logstash configuration so that the DLT ticks timestamp ends up in Elasticsearch.

Do not hesitate to contact me via mail at raphael.geissler@systemticks.de if you have questions or need further support.