Getting Started

The Kadeck Team says: Thank you!

Kadeck is the result of experienced developers, operators, architects and project managers from the IT field who were looking for an easier and better way to implement, communicate and operate data streaming applications and projects. It is software that puts the technology of data streaming into the hands of creators and pioneers as a powerful new instrument and helps them implement their ideas, whether in their daily work or when creating new products. This is where our slogan "Making data streaming accessible" was born.

A lot of work and thought has gone into Kadeck to support the entire application lifecycle of Apache Kafka and Kinesis applications, from idea to production. Our goal is to support you in your work. This effort continues as you read these lines... a new, improved Kadeck version might even be ready for download!

This guide is meant to give you an overview of Kadeck and its possibilities.

First steps

Kadeck comes in two flavors: Kadeck Teams and Kadeck Desktop. Kadeck Teams is the ideal choice for multiple people, teams or departments that want to work hand in hand. A free version is available that supports teams of up to five people. For users who need to change their work environment frequently (e.g. because of different clients) or who simply want to experiment and bring their own product vision to life, Kadeck Desktop is the right choice.

This article applies to both variants.


After starting Kadeck you will land in the Connection View. One connection corresponds to one Apache Kafka cluster. Kadeck supports management of multiple Apache Kafka clusters at the same time, e.g. one Apache Kafka cluster for DEV, one for TEST and one for PROD. 

Kadeck Desktop provides the additional feature of starting and controlling a local Apache Kafka cluster directly from Kadeck. This is useful when an Apache Kafka cluster is needed quickly for local testing.

Create a connection in Kadeck

Create a connection to an Apache Kafka cluster

To connect to an external Apache Kafka cluster, create a connection using the "Add connection" button.

You will be prompted to specify the type of your connection. For a connection to an Apache Kafka cluster, select the appropriate option.

Next, give your connection a name. This name is used throughout Kadeck so that you always know which cluster you are connected to.

Enter the appropriate connection details for your Apache Kafka cluster. With "Test connection" you can always check your entries and establish a test connection. Note that this process may take a little longer depending on the server and timeouts.

For a complete guide to the connection settings, read our help document on this in our Support Center.

Establish connection

Once you have added a connection, you can connect to the Apache Kafka cluster by clicking on "Connect". On Kadeck Teams, you can also directly go to the Data Catalog, as all connections are always connected.

Data Catalog

The more data streaming applications you develop, the more topics you will create. This can quickly come at the expense of clarity and transparency.

To help you and your team stay on top of things, we've developed the Data Catalog. In Kadeck Teams, topics are even displayed across all created connections: so it doesn't matter if you forget which cluster contains the data or topics you are looking for. 

Note that with Kadeck Teams you can specify which users or teams have access to which topics via the rights management in the administration area. This can even be synchronized with LDAP (e.g. Active Directory) or other directory services.

To organize your topics, add labels (e.g. "logs", "finance", ...) and assign users as data owners to topics.

Click on a topic to open the Topic Overview. The Topic Overview allows you to view more information about the topic, the number of records, the topic's documentation and configuration. 

Click on "Browse data" on the top to jump to the Data Browser. The Data Browser shows you all data of the topic and is your window into your real-time data streams.

Data Browser

This is the most powerful view in Kadeck - this is where you will spend most of your time.

When you're building data streaming applications, the data is at the center. From initial conception, to development, to testing, to production: having quick insight into the data you need to process at any time is key. The Data Browser takes this into account: here, the focus is on the data and data exploration.

Not only can you look at data and jump to different times or offsets. The Data Browser allows you to analyze the data or create views for different stakeholders (e.g. a view for erroneous data for operations or special snippets of data by type for business). 

You can not only filter by simple regular expressions (using the search icon), but also define filters for attributes or even write complex filters in JavaScript (Quick Processor).

Before we get started, here is an overview of the structure of the Data Browser. The Data Browser is divided into four sections:

Topic list

On the left side is the topic list. Click on a topic in the list to display its data, process data, export data, forward data or create test data. Views are also displayed indented underneath a topic.
Initially, there are no views, so nothing is displayed there. You will create more and more views per topic over time. Views are a powerful feature in Kadeck. You can learn more about views later in this article.

By right clicking on an existing topic, you can delete the topic or just the content of the topic.

Click on a topic in the topic list to get started or create a new topic using the "Create topic" button.


Tools

In the upper area you will find the tools section. You can add a description for the topic and save views (more about this in a bit). The tools are:
  1. Info
    Description of the view/topic for your colleagues.
  2. Flow
    See producers & consumers of your data stream on a flowchart.
  3. Time Distribution
    See the time distribution of data that is currently displayed to you - ideal for detecting gaps and thus possible errors in pipelines.
  4. Quick Processor
    A powerful tool for complex filters and data transformations. Can be used e.g. for data corrections.

Record list

In the lower area of the window, Kadeck displays all the records. The "Display" module can be used to set the partition, the number of records, or specific time windows (or offsets). In the free version of Kadeck, the number of records displayed at any given time (after applying filters, search windows and other parameters) is limited to 1,000 in total. Hence, you can use search, filters and the parameters described above to narrow down your search and find any record even in larger data streams.

To retrieve data with new parameters, click the "Play" icon in the upper right corner. This will retrieve the data with the desired criteria from the topic once. If you click on the "Play" icon with the dotted circle, the live mode will be started - this way you will see in real-time when new data arrives in your topic.

Below the "Display" and "Filter" modules, you will find the table that shows the queried data. Click on the toggle next to "View" (the icon with three columns in the table's top bar) to switch between JSON and column view for structured data. In the Columns View mode, the individual attributes of an object are displayed as individual columns and can thus be sorted and read more easily.

The checkboxes next to a record allow you to select single (or all) records. You can export selected records as CSV, for example to pass them on to colleagues who don't work with Kafka. The columns for the CSV can be configured (more about this in the Detail View).
Selected records can also be copied to another (or the same) data stream: this is perfect for testing or for correcting and re-processing data using the Quick Processor.

By clicking on a column header in the record list, you can sort records. You can also add new columns for individual attributes using the Detail View. Sometimes, instead of the Columns View mode, which shows all attributes as columns, you want to see only a few relevant columns to maintain an overview. In this case it makes sense not to use the Columns View mode, but to add individual attributes as columns via the Detail View.

By right-clicking on the column header you can hide a column or create a filter.

Detail View

As soon as you have clicked on a record in the Record list, the Detail View opens. This is an important view: it shows you all information about the record, such as offset, partition, and even headers in the Meta tab. Since headers - just like Value and Key - are stored without type information in Apache Kafka, it is necessary to select the type for the respective header. To do this, click on the three dots next to a header entry and select from the list of available types.

In the Key or Value tab, the respective content of the record is displayed. For structured formats (e.g. JSON, Protobuf, or Avro) we have added a special trick: if you want to filter for an attribute, or if you want to display an attribute as a separate column in the Record List table, click on the respective attribute in the Value tab (or Key tab - depending on where it is located). A small bubble will open above the attribute. You can now create a filter (by clicking on the filter icon) or display the attribute as an individual column in the Record List. This way you can also define the columns for a CSV export or sort records.

Concepts in the Data Browser

When you select a topic, the contents of the topic are displayed immediately. Apache Kafka does not store any type information for the records: key and value (as well as the headers) are all byte arrays. To save you from wasting time selecting the right data type, we implemented a function called "Codec Auto-Detection" in Kadeck. 


Kadeck comes with many codecs out of the box. However, you can also create your own codecs and add them to Kadeck.

Kadeck tries to automatically detect the codec for the records you want to display. This usually works very well.
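Conceptually, codec auto-detection works by trying the most specific interpretation of the raw bytes first and falling back to a more generic one. The following is a simplified sketch of that idea, not Kadeck's actual implementation:

```javascript
// Simplified sketch of codec auto-detection: try to interpret the
// raw bytes as JSON first, and fall back to a plain string codec.
// This is an illustration only, not Kadeck's real detection logic.
function detectCodec(bytes) {
  const text = Buffer.from(bytes).toString("utf8");
  try {
    JSON.parse(text);   // valid JSON? then treat the value as JSON
    return "JSON";
  } catch (e) {
    return "String";    // otherwise fall back to a plain string
  }
}

// Example values: a JSON-encoded record value vs. a plain text value
const jsonValue = Buffer.from('{"SensorId": 1, "Temperature": 21.7}');
const textValue = Buffer.from("plain log line");
```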

If you want to change the codec manually, you can do this via the "Codecs" menu button in the upper right corner. This button is displayed as soon as you have selected a topic.

The selected codec can be saved as part of a view, so you don't have to repeat this step every time.

Live Mode

Data streaming is all about data in motion. So what could be more natural than to not just statically query data at a point in time, but to also catch a real-time glimpse of the data as it arrives in your topic?

That's why we developed Live Mode. After you have selected a topic, you switch to Live Mode by clicking on the "Play" button with the "dotted circle" in the upper right corner (this is the button to the left of the traditional "Play" button). 

As soon as new messages arrive in the topic, they will be displayed in the record list. All components in Kadeck process this data in real time. So if you want to see simultaneously which producer is generating this data or how the time distribution is changing, select the corresponding tab "Flow" or "Time Distribution" in the Tools section.


Views

A view is a powerful concept in Kadeck. A view stores the selected filters, codecs, descriptions, and many other settings. Views are ideal for communicating with colleagues or other stakeholders: e.g. a special view for erroneous data, or maybe the business wants to review a certain type of data - create a view with the appropriate filters and pass the link to the respective person. However, the latter is only possible in Kadeck Teams. In Kadeck Desktop, views are still useful for preparing meetings or for your own work.

Imagine you are working on an application that only processes financial transactions with Euro as currency - create a filter for the currency Euro and save this as a view with the name "Euro". Now you have the relevant data immediately at your fingertips with just one click.
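If you were to express such a filter in the Quick Processor, it might look like the sketch below. Note that the attribute name Currency and the value "EUR" are assumptions about your record schema; adjust them to your own data. In Kadeck, you would only enter the function body in the Quick Processor editor, since rec is provided for you:

```javascript
// Hypothetical Quick Processor filter for the "Euro" view described
// above: keep a record only if its Currency attribute equals "EUR".
// The attribute name "Currency" is an assumption about the schema.
function euroFilter(rec) {
  return rec.value.Currency === "EUR";
}
```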

Create records

You can create new records in topics with Kadeck. To "ingest" new data, click on the plus icon in a circle in the upper right corner of the window (in the same bar as the play buttons). Alternatively, you can right-click on a record in the Record list and select "As new record". In both cases the "Ingestion dialog" will open.
Multiple records can be created by selecting the individual records using the checkboxes next to the records in the record list and clicking "+ Produce" (this is useful to copy data or use existing records as a blueprint). Alternatively, you can switch to the "Json View" in the ingestion dialog and add records from scratch to the JSON array.

Quick Processor: filters and data transformation with JavaScript

The Quick Processor allows you to create complex filters using JavaScript and even transform individual values or the entire dataset. 

Think of the Quick Processor as an intermediary application between a topic and Kadeck. 

For example, if you select a topic and display the data of the last 7 days, this data is requested from the topic by Kadeck. It then flows to the codec, which converts the byte array into the appropriate format, and finally record by record to the Quick Processor. The Quick Processor can now reject data by having your JavaScript code return false, or modify data.
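A rejecting filter can be sketched as follows. The attribute name Temperature is an assumed field, chosen to match the sensor example further below; in Kadeck you would enter only the function body, since rec is provided by the Quick Processor:

```javascript
// Sketch of a Quick Processor filter: returning false rejects the
// record (it is dropped from the result), returning true keeps it
// unchanged. The "Temperature" attribute is an assumed field.
function quickProcessor(rec) {
  if (rec.value.Temperature == null) {
    return false;   // reject records without a temperature reading
  }
  return true;      // keep everything else
}
```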

Data can be modified by returning the complete record object rec as the return value instead of true or false:

  return rec;
To change the content of the record, you can access the value or the key of the record with rec.value or rec.key. This also applies to the timestamp, headers, offset, partition, and so on. Press "CTRL+Space" in the editor to see a list of possible elements.

For structured data formats you can also directly access the attribute within a value or key with rec.value.myAttribute.

Example of data correction:
You have temperature data from a sensor as a JSON object. The JSON object contains the attributes "SensorId" and "Temperature". The value of Temperature is the exact value with decimal places. However, your application expects a rounded value without decimal places. So in the Quick Processor, write the following code:
  rec.value.Temperature = Math.round(rec.value.Temperature);
  return rec;
As a result, the rounded temperature is displayed in the record list. 
You can now select the records and copy them via "+ Produce" into the topic from which your application consumes. The change will be applied.


Consumers

With the Connection View, the Data Catalog and the Data Browser, we have already talked about three of the four most important views in Kadeck.

Often it is necessary not only to see the data of a topic, but also, for the tuning or daily operation of Apache Kafka, to monitor other concerns such as consumer lag and consumer positions.

For this purpose we have created the Consumers view. Typical tasks are:

- Changing consumer offsets for inactive consumers,
- finding slow consumers, e.g. in order to cope with data overload by adding applications that work in parallel, and
- deleting orphaned consumers.

Kafka-specific views

The Kafka ecosystem has spawned many components, for example Kafka Connect or Schema Registry. But also features like ACLs for access control in Apache Kafka or Quotas are necessary in production use. 

For these and many other aspects, we have developed our own views in Kadeck to assist with the respective tasks, so that typical tasks can be completed in a few clicks. 


Kadeck is a comprehensive software solution that helps you create new ideas and products, test and debug them, and monitor them in operation, making you more productive. We hope that Kadeck will become your useful companion in working with data streaming and that you will have a lot of fun using it.

Please also feel free to give us feedback within the application via the feedback dialog (bottom left, "speech bubble" icon).

We look forward to hearing from you!

The Kadeck Team

    • Related Articles

    • Topic Overview & Documentation
    • Monitoring Overview
    • Topic Monitoring Overview
    • Broker Monitoring Overview
    • Local cluster