
Analyze DLT files fast and easy with Elastic Stack

March 13, 2019, written by Raphael Geissler, raphael.geissler@systemticks.de

DLT (Diagnostic Log and Trace) is still the standard tool for logging messages in the automotive world. It was initially used to analyze small Classic AUTOSAR controllers in addition to hardware debugging. With the upcoming Adaptive AUTOSAR and GENIVI based systems, DLT is used more and more to analyze bigger controllers that run multiple applications on Linux, and even the communication between many controllers.


Why is the DLT Viewer not the right solution for analyzing huge DLT log files?

Tracing such distributed application stacks leads to tons of log messages, which are difficult to handle with the DLT Viewer - the current default solution in the Adaptive AUTOSAR world. The DLT Viewer is designed for collecting data and filtering messages, but when it comes to tons of messages, searching and filtering get slow. Hence - from my perspective - there is a need for other solutions that are designed for searching and filtering large amounts of data.

Why the Elastic Stack?

In this blog post we want to investigate how Elasticsearch and Kibana can be used for analyzing such complex and distributed embedded systems.

Elasticsearch is a realtime, distributed search and analytics engine that is horizontally scalable and capable of solving a wide variety of use cases. [1]

Elasticsearch is designed for searching huge amounts of log data. It can handle both unstructured and structured data, which means you can use it for well-defined metrics such as system CPU load values as well as for free-form developer log messages.

With Kibana - a web frontend - it also provides a powerful query DSL that helps you search data in a fast and easy manner.

What does the high-level architecture look like?

In this article we want to show how to import DLT messages into Elasticsearch and how to run a simple query with Kibana. To import the DLT messages into Elasticsearch we combine the tools dlt-convert and logstash. The overall architecture looks like this:

-----------------
|    Kibana     |       Visualization
-----------------
        |
-----------------
| Elasticsearch |       Document database
-----------------
        |
-----------------
|   logstash    |       Converts the ASCII log stream
-----------------       into database documents
        |
-----------------
|  dlt-convert  |       Converts the DLT file to ASCII
-----------------

How do we install the Elastic Stack and the DLT tooling?

As already mentioned, we need the following tools to analyze DLT messages with the Elastic Stack:

  • logstash to filter and convert the DLT messages into the right format

  • Elasticsearch to store and index the DLT messages

  • Kibana to search the DLT messages

Additionally, we use dlt-convert to convert the DLT messages from binary format into ASCII format. That’s why we also have to compile the DLT daemon, which contains dlt-convert.

We tested this installation with an Ubuntu 18.04 VirtualBox image and Elastic Stack 6.5.2.

Manual Installation

Create folder structure:

mkdir -p ~/dev/tools
mkdir -p ~/dev/config/logstash
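
Elasticsearch and logstash 6.x also need a Java runtime (Java 8 works for both). If none is installed yet, you can install OpenJDK 8 on Ubuntu 18.04 with:

sudo apt-get -y install openjdk-8-jre-headless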

Install logstash:

cd ~/dev/tools
wget --progress=bar:force https://artifacts.elastic.co/downloads/logstash/logstash-6.5.2.tar.gz
tar xfz logstash-6.5.2.tar.gz

Install Elasticsearch:

cd ~/dev/tools
wget --progress=bar:force https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.5.2.tar.gz
tar xfz elasticsearch-6.5.2.tar.gz

Install Kibana:

cd ~/dev/tools
wget --progress=bar:force https://artifacts.elastic.co/downloads/kibana/kibana-6.5.2-linux-x86_64.tar.gz
tar xfz kibana-6.5.2-linux-x86_64.tar.gz

Build dlt-daemon (we need dlt-convert):

sudo apt-get -y install cmake zlib1g-dev pkg-config libdbus-1-dev
cd ~/dev/tools/
git clone https://github.com/GENIVI/dlt-daemon.git
cd ~/dev/tools/dlt-daemon
mkdir build
cd build
cmake ..
make -j4
sudo make install
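
make install places the DLT libraries under /usr/local/lib. If the DLT tools later complain that they cannot find the shared library libdlt, refresh the linker cache:

sudo ldconfig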

Importing DLT logs

Create an example DLT file

Now everything should be installed. To import DLT messages we first need a DLT file that can be converted. If you don’t have one at hand, you can simply generate one with this bash script (we name it generate-dlt.sh):

#!/bin/bash

while true; do
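  # Forward one "Hello, world!" line to the local dlt-daemon every 100 ms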
  echo "Hello, world!" | dlt-adaptor-stdin
  sleep 0.1
done

Put the above content into the file generate-dlt.sh and save it.

To generate the DLT file we need a running 'dlt-daemon'. Start it with:

dlt-daemon &

It is now running in the background.

Now we can start generating DLT messages with our bash script. Make it executable and run it with the following commands:

chmod 755 generate-dlt.sh
./generate-dlt.sh

To receive the messages we need another handy DLT tool: dlt-receive. Use this command to receive the DLT stream from the dlt-daemon and write the output to a file (let it run for a few seconds, then stop both dlt-receive and the generator script with Ctrl+C):

dlt-receive -o example.dlt localhost

The last step is to convert the binary DLT file into ASCII format with the following command:

dlt-convert -a example.dlt > dlt-ascii.txt
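
To verify the conversion, you can peek at the first lines of the result and count how many messages were written:

head -n 3 dlt-ascii.txt
wc -l dlt-ascii.txt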

Transforming DLT logs with logstash

logstash is part of the Elastic Stack. It transforms different types of log messages into the right format and forwards the transformed messages to Elasticsearch. How the transformation has to be done is configured in the logstash configuration file. A simple one looks like this:

input {
    # File(s) that should be imported
    file {
        # Use the correct path to your DLT file here - the path must be absolute
        path => "/home/ubuntu/dlt-files/dlt-ascii.txt"
        # Read the existing file from the start; by default the file input
        # only tails lines appended after logstash has started
        start_position => "beginning"
        # Forget the read position, so a restart re-imports the whole file
        sincedb_path => "/dev/null"
    }
}
output {
    # Receiving application - here Elasticsearch
    elasticsearch {
      # hostname/ip and port
      hosts => ["localhost:9200"]
      # index in which the messages are stored
      index => "events"
    }
}

The inline comments explain the config parameters. Please make sure that the given path is the right one in your setup. Copy and paste the config content into a file named logstash.conf and store it in the folder ~/dev/tools/logstash-6.5.2/config.
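
With this configuration, each line of the ASCII file becomes one JSON document in the events index. Here is a minimal sketch of such a document, using logstash’s default fields (the host name and timestamp are made up for illustration):

{
  "@timestamp": "2019-03-13T10:15:30.123Z",
  "@version": "1",
  "host": "ubuntu-vm",
  "path": "/home/ubuntu/dlt-files/dlt-ascii.txt",
  "message": "<one ASCII DLT log line>"
}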

Before starting the import, we have to start Elasticsearch with the following command:

~/dev/tools/elasticsearch-6.5.2/bin/elasticsearch
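
Elasticsearch needs a moment to start up. You can check that it is reachable by requesting its info endpoint (the exact values will differ on your machine):

curl http://localhost:9200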

When Elasticsearch is running correctly, you can trigger the import with the following command:

~/dev/tools/logstash-6.5.2/bin/logstash -f ~/dev/tools/logstash-6.5.2/config/logstash.conf

Now the import should start. Depending on the file size this can take some time.
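
You can verify that documents are actually arriving by querying Elasticsearch’s count API for the events index:

curl 'http://localhost:9200/events/_count?pretty'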

Searching DLT logs

Now we can start Kibana with following command:

~/dev/tools/kibana-6.5.2-linux-x86_64/bin/kibana

Open a browser and enter the following URL:

http://localhost:5601

Click on Discover, enter the following index pattern and click on Next step:

events*

Select @timestamp as the time filter field and click on Create index pattern.


When the index pattern has been created, click on Discover again. Now you should be able to find your DLT events in Kibana!
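
For example, to narrow the result list down to our generated messages, you can type a Lucene query into the search bar; a simple sketch, assuming the log line ends up in logstash’s default message field (which it does with the configuration above):

message:"Hello, world!"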


Be careful: the timestamp Kibana uses for searching and filtering is the import time, not the time the DLT messages were recorded! I will show you how to change that in my next blog post!

Summary

In this blog post we have seen how to use the power of the Elastic Stack to analyze automotive devices that log via DLT.

Do not hesitate to contact me via mail at raphael.geissler@systemticks.de if you have questions or if you need further support.


1. Learning Elastic Stack 6.0, ISBN 978-1-78728-186-8