Trace, Analyze, Validate - We care about quality in distributed systems.
System Architecture, Visualization and Validation are the core areas we address, all for better quality and future-proof systems.
Analysis Tool Development
Measurement and analysis tooling to gain insight and understanding of the inner workings of your system.
We’d love to hear from you! → firstname.lastname@example.org
Understanding distributed software systems
Distributed software systems are composed of multiple parts, built with multiple technologies, that communicate with each other. And they are becoming more and more common. Because of their complex structure, they are hard to understand and to analyze in case of errors or when trying to extend them. Developers and teams across industries face this problem. It usually leads to ramping up huge testing infrastructures and large testing teams.
We believe there is a more elegant and cost-effective way to support the development of complex software systems.
Find out how we can be a reliable partner to you when it comes to system understanding and analysis!
What can we do for your role?
You get relevant metrics to project quality trends and you will be able to identify hot spots in the integrated system.
You are responsible for the success of the project. That’s why you need to make sure delivery is within time and budget, and that your customer’s requirements are satisfied.
What can protect you from problems in the software that are only found late in the project? Especially if the development team is already working on new things?
You might be thinking about ramping up the team again, requesting weekend work, or starting to compromise by implementing workarounds. None of these solutions really feels good, and all of them drain budget and motivation.
With us, you will learn about hot spots as soon as possible through metrics and quality projections concerning the integrated system. Hence, you will be able to take reasonable and effective countermeasures instead of blindly ramping up the team. Your team will be able to fix these problems more quickly thanks to better analysis tooling.
You get continuous monitoring of system quality based on automated integration tests.
Your job is to ensure the quality of the integrated software system. It is crucial for you to detect quality regressions early on. And - you need a reliable overview of how overall quality develops.
Issues that are only visible in the integrated system are often hard to detect, because they only occur sporadically or under certain conditions. Usually, these types of issues are found when the system is in a close-to-ready state and the project is in the "bug-fixing" phase. That is when large testing teams are doing heavy manual testing. In earlier project phases it is very hard to guess if the desired system quality will be reached in time.
The standard tools of choice are ticket systems with well categorized and pre-qualified tickets, issue reports and quality assessments of the development process. What is missing is an actual look into the software from a system integration perspective.
We will provide you with a way to continuously monitor relevant metrics and detect trends and regressions as soon as possible - based on the running system. You will be able to monitor integration test coverage and regressions in a dedicated dashboard.
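As a minimal illustration of trend-based regression detection on such a quality metric, here is a Python sketch; the metric (pass rate per build), window size and tolerance are hypothetical, not part of any specific toolchain.

```python
# Sketch: flag a regression when the recent mean of a quality metric
# (e.g. integration test pass rate per nightly build) drops noticeably
# below the earlier baseline. Names and thresholds are made up.

def detect_regression(history, window=3, tolerance=0.02):
    """True if the mean of the last `window` values is more than
    `tolerance` below the mean of all values before them."""
    if len(history) < 2 * window:
        return False
    recent = history[-window:]
    baseline = history[:-window]
    recent_mean = sum(recent) / len(recent)
    baseline_mean = sum(baseline) / len(baseline)
    return recent_mean < baseline_mean - tolerance

pass_rates = [0.97, 0.96, 0.97, 0.98, 0.93, 0.91, 0.90]
print(detect_regression(pass_rates))  # True: the late drop is flagged
```

A real dashboard would of course track many such metrics per commit; the point is only that a small, scripted rule already catches trends that manual review tends to miss.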
You will be able to identify structural problems in the integrated system.
You need to make sure that the software system can be developed as required. Architectural design needs to be kept up to date and reflected in the actual development.
Dynamic requirements make your work challenging. Your design might become outdated very quickly, technologies or 3rd party interfaces might change. Even the most flexible architecture will reach its limits.
Best practices are Model Driven Design, code generation, static source code visualization and dependency checking. As beneficial as all of these may be, they don’t indicate if the actual software adheres to your design when it is executed. You need to make a real effort to find out if your design is incomplete or the implementation is faulty.
Our promise is to help you set up a system that allows for instant feedback on structural problems on each commit. We will set you up with tools to look into the live system from an architectural point of view and compare it with the design. Automated verification of the architecture becomes possible with system integration tests.
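The core of such an architectural check can be sketched in a few lines: compare the communication observed at run time against the dependencies the design allows. The component names and the edge representation below are made up for illustration.

```python
# Sketch: verify that run-time communication stays within the designed
# architecture. Component names and edges are hypothetical examples.

allowed = {            # design: who may call whom
    ("ui", "service"),
    ("service", "db"),
}

observed_calls = [     # extracted from a trace of the running system
    ("ui", "service"),
    ("service", "db"),
    ("ui", "db"),      # this call is not in the design!
]

violations = [call for call in observed_calls if call not in allowed]
print(violations)  # [('ui', 'db')]
```

Run as part of a system integration test, a check like this turns an architecture diagram from documentation into an enforced property.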
You can support bug-fixing with more detailed and faster issue analysis.
You need to verify that the software does what it is supposed to do. You are doing everything to make sure there are no bugs in the software that prevent it from being rolled out. But you go one step further: in case you find an issue, you go out of your way to support the root cause analysis and eventually the fix of the issue. Therefore, you need enough information on the system behavior while an issue occurs.
However, in large distributed systems it can be hard to get this information and pinpoint the responsible component. In case of manual integration testing, sometimes the only material available is a log file with heterogeneous developer logs. If you are lucky, you have a hint on the conditions under which the issue occurs.
One thing you can do to pre-qualify the issue is to analyze the log file, which you can script to spare some effort the next time. You could also consult the architecture documentation or the component owners themselves.
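Such a pre-qualification script can be very small. The sketch below counts errors per component in a heterogeneous log so the issue can be routed to the right owner; the log format and component names are assumptions for illustration.

```python
# Sketch of scripted log pre-qualification: count ERROR lines per
# component. The log format ("[component] LEVEL message") is an
# assumption; adapt the pattern to your actual logs.

import re
from collections import Counter

log_lines = [
    "2024-05-01 12:00:01 [netstack] ERROR timeout waiting for ACK",
    "2024-05-01 12:00:02 [ui] INFO screen rendered",
    "2024-05-01 12:00:03 [netstack] ERROR timeout waiting for ACK",
]

pattern = re.compile(r"\[(?P<component>\w+)\]\s+ERROR")
errors = Counter(
    m.group("component")
    for line in log_lines
    if (m := pattern.search(line))
)
print(errors.most_common(1))  # [('netstack', 2)]
```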
We will help you set up your environment in a way that gives you insight into how all interfaces are called (parameters and sequences) and lets you compare that to the specification. You will have dedicated tooling and a simple workflow that can even be automated. In many cases it won’t be necessary to bother the component owners, because the analysis tooling gives you more up-to-date information and you can approach the responsible people with more informed analysis results.
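The comparison of a recorded call sequence against a specification can be sketched as follows; the interface names and the spec format are hypothetical examples, not our actual tooling.

```python
# Sketch: check that the calls required by a specification appear in the
# recorded trace in the specified relative order (other calls may be
# interleaved). Interface names and spec format are made up.

spec = ["open", "read", "close"]   # required relative order

def conforms(recorded, spec):
    """True if the spec calls occur in `recorded` in the given order."""
    it = iter(recorded)
    return all(call in it for call in spec)

trace_ok = ["open", "read", "read", "close"]
trace_bad = ["read", "open", "close"]
print(conforms(trace_ok, spec))   # True
print(conforms(trace_bad, spec))  # False
```

The iterator trick works because each `in` check consumes the trace up to the matching call, so later spec entries can only match later trace entries.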
You will get quicker feedback on systematic issues within a commit and you will be able to resolve issues much faster.
As a programmer, you take pride in writing the software and you want it to be flawless. If you are working on larger software systems, two of your main tasks are to extend the system according to requirements and to fix bugs. For extending the system, it is crucial to fully understand the system and the implications of your changes. So you need fast feedback if you commit code that does not comply with the specification or causes problems in the integrated system. If bugs are found, you need to know quickly if they are rooted in code that you can or cannot change.
There are a few things that you can do to make your life a little easier. For understanding the system, there should be documents like a requirements specification, an architectural documentation and so on. While you are coding, you can use a debugger to understand the inner workings of the application, at least for the parts that you have debug information for. You might also add some traces in the form of logging output for a later analysis. There are countless very valuable tools to support either step.
Still, the effects that your changes have on the complete integrated system are a blind spot. Here is where systemticks comes in. With our support, you will get visual insight into how the system components interact, which makes it easy to understand the run-time behavior. You will be notified on every commit if your code changes break the system. You will have much less effort in analyzing and root-causing issues because they will be pre-qualified by the test department and root-caused down to your component. When you are reproducing and analyzing an issue, you will have insight into the way your interfaces are called - parameters and call sequence.
Our solution is to focus on the collection of run-time data about the system’s components and their communication to analyze, understand and validate it and to ensure quality and scalability.
We suggest a step-wise approach to allow for progress tracking and making adjustments along the way.
#1 Measure - Where are we now?
We will assess your project landscape regarding technologies, requirements and documentation. This will include document review and interviews.
An in-depth review of your current situation is vital for the success of the efforts in making your system analyzable and understandable. It also enables you to track the progress of the implementation.
As a tangible result, we will have a maturity assessment and a prioritized list of actions. These will support a strategic implementation of a well analyzable system.
#2 Plan - How do we make the system analyzable?
This step involves workshops, design work and prototyping as required. We will identify the detailed information needs of each project stakeholder and plan the technical integration of the analysis toolchain into the project landscape.
A structured approach like this enables you to depict what is valuable information and what is noise. As it involves all the stakeholders, it makes sure that we have a common goal, that nobody will be irritated during project execution and everyone is aware of their important contribution to the successful implementation.
We will be ready for the next phase when we have defined a list of important metrics and system test cases. They will be part of a larger definition of reporting requirements. Also, we will have laid out the data analysis architecture.
#3 Execute - Create customized and effective toolchain
We will apply our experience from tool development, system design, integration, analysis and validation to create an effective analysis toolchain. We will guide and - if desired - execute the implementation.
You will get reliable analysis results at the time they are most relevant and valuable. You might think that your team would be capable of executing this step on its own, and this is what happens regularly in large software projects. However, if there is no full dedication to the topic, the implementation often ends half-way. Due to priority shifts and lack of focus, the result is often not as valuable as it could be.
We will ensure the deliberate implementation with the result of a running and reliable toolchain.
We are …
a team of quality-minded individuals. We’ve collected heaps of experience in a wide range of software projects, most of them automotive-related.
I strongly believe that in complex software projects, spending sufficient time on (re-)designing manageable service interfaces is decisive for whether a project succeeds or not.
Service interfaces determine how a software system is tailored, technically and organizationally. They have an impact on the selection of technologies like programming languages, protocols and middleware.
Their implementations might have a negative impact on the runtime behaviour if the interface provider and its clients do not share the same idea of how the interface is used dynamically.
Features typically cut across interface boundaries, and thus shortcomings and errors are difficult to localize.
Mastering your interfaces means mastering your projects.
Therefore, within systemticks I spend my time on developing tooling and methodologies for working out robust, expressive and traceable interfaces.
I like the elegance of a strict test-first approach. It does not guarantee bulletproof code, but it gives me a very good feeling about the code and the confidence to change it. Thinking the code from the caller’s end lets interfaces emerge almost naturally.
I like to apply this way of thinking not only to the code as a base layer but to the architecture and also to the user interface.
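At the code level, that loop can be illustrated in a few lines; the function and its test below are a made-up example, written test-first in the sense that the assertion drove the function’s interface.

```python
# Hypothetical test-first example: the test was conceived before
# parse_version existed, and calling it from the caller's end shaped
# its signature and return type.

def parse_version(text):
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    major, minor, patch = text.split(".")
    return int(major), int(minor), int(patch)

def test_parse_version():
    # Written first: the desired call site defines the interface.
    assert parse_version("1.4.2") == (1, 4, 2)

test_parse_version()
print("all tests passed")
```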
As I am a curious person, I don’t have the one and only programming language that I master. Rather, I have come across a lot of different languages and tools in different phases of my professional past.
My focus area within systemticks is the visualization of system behavior to gain as much insight as possible.
I am all about Software Architecture with a focus on the big picture.
A healthy system for me is functionally complete on the outside but also well designed on the inside. Clean architecture, modularity, separation of concerns - these are amongst the principles I like to see reflected in the systems I work on.
Autonomous teams and strong control over the relevant aspects are two necessities. They seem contradictory at first, but in my opinion they can be combined to reach real system quality.
Within systemticks, I am the one who will be on direct line to the project’s system architect.
Please reach out to any of us directly or to email@example.com!