KSQL Recipes Available Now in the Stream Processing Cookbook

Joanna Schloss

For those of you who are hungry for more stream processing, we are pleased to share the recent release of Confluent’s Stream Processing Cookbook, which features short and tasteful KSQL recipes that help you solve specific, domain-focused problems using KSQL.

Organized according to various use cases (e.g., partitioning, streaming ETL, anomaly detection, data wrangling), KSQL recipes provide easy directions that you can follow to begin—or continue!—putting KSQL to use.

KSQL is the streaming SQL engine for Apache Kafka®, and is complementary to the Kafka Streams API. As KSQL and Kafka Streams continue to gain adoption across the stream processing world and the Kafka ecosystem, and as investment in streaming applications grows in parallel, these recipes benefit developers and the business alike by providing pre-baked KSQL techniques that require little modification, regardless of whether you have a Java background.
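To give a sense of what this looks like, here is a minimal sketch of a KSQL application, assuming a JSON-encoded `pageviews` topic (the topic and column names are illustrative, not from any particular recipe):

```sql
-- Register a stream over an existing Kafka topic
-- (topic and column names are hypothetical)
CREATE STREAM pageviews (user_id VARCHAR, url VARCHAR)
  WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

-- A continuous aggregation: page view counts per user,
-- expressed entirely in SQL with no Java required
CREATE TABLE pageviews_per_user AS
  SELECT user_id, COUNT(*) AS view_count
  FROM pageviews
  GROUP BY user_id;
```

Everything here runs continuously over the stream of events in the topic, which is the kind of pattern the recipes package up for reuse.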

Why use KSQL recipes?

KSQL recipes are designed to help people build event-driven, real-time systems. They are submitted by Confluent specialists and community members, and therefore serve as a place to collaborate, share and inspire one another with KSQL applications, broadening the approach of streaming application development.

KSQL recipes serve three main purposes:

  1. Making it easier to get started: Instead of starting from scratch, KSQL recipes present an established set of common technical patterns that provide a good starting point, especially for first-time users.
  2. Providing a source of inspiration: The cookbook is a place where you can see how others have solved problems that you are tackling yourself. As you browse, it may inspire you to either use their starting point or branch off and do something completely different. The recipes also offer additional ideas that are drawn from practical scenarios, which you may not have been aware of before.
  3. Encouraging collaboration: If you have created a KSQL code snippet that you would like to share, this is the place to share it! KSQL recipes do include contributions from the community, and we invite you to add your own unique recipe to the cookbook. But why would you share the snippet, you might ask? Well, others may use it, enhance it and create new applications with your original code, which may eventually lead to the development of new recipes that can benefit you in the future too. (Plus, if it’s popular, maybe even fame!)

How can I use KSQL recipes?

The recipes show you how to use KSQL in different ways. Although you have the option to read and follow the recipes as they are, you can also think about how to use the same code patterns but apply them to other use cases.

As an example, a commonly used KSQL recipe is Processing Syslog Data. This general code pattern applies to a variety of use cases, including fraud detection, network traffic monitoring, anomaly detection and propensity analysis. All of these use cases can leverage KSQL, as they share a similar code pattern even though each diverges in the specifics of how the recipe is implemented.
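As a rough sketch of this pattern, a syslog-style pipeline might register a stream over a syslog topic and then derive a filtered stream of suspicious events from it. The topic, columns and predicate below are illustrative assumptions, not the recipe's actual code:

```sql
-- Hypothetical stream over a JSON-encoded syslog topic
CREATE STREAM syslog (host VARCHAR, facility INT, message VARCHAR)
  WITH (KAFKA_TOPIC='syslog', VALUE_FORMAT='JSON');

-- Derive a stream of likely SSH brute-force attempts;
-- swapping the predicate adapts the same pattern to fraud
-- detection, network traffic, or anomaly use cases
CREATE STREAM ssh_attacks AS
  SELECT host, message
  FROM syslog
  WHERE message LIKE '%Invalid user%';
```

The shape — register a stream, then derive a filtered or aggregated stream from it — carries over to the other use cases with only the schema and predicate changed.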

In addition, building, testing and modifying applications becomes faster with the recipes. Because they help with getting started, you can quickly build in customizations and specifics, run the application, examine the output and iterate, all without the help of a Kafka operator.

This may seem like a marginal advantage, but given the number of streaming applications and transformations being built, and the pace at which they are built, these saved cycles add up. Depending on a Kafka administrator creates a potential bottleneck that slows the whole development experience, undermining the agility that KSQL is meant to deliver.

Check out the Stream Processing Cookbook

The Stream Processing Cookbook is now available for you to peruse. If you’re interested, you can browse the KSQL recipes for handy tutorials and examples to follow.

Let’s get cooking!

Wir verwenden Cookies, damit wir nachvollziehen können, wie Sie unsere Website verwenden, und um Ihr Erlebnis zu optimieren. Klicken Sie hier, wenn Sie mehr erfahren oder Ihre Cookie-Einstellungen ändern möchten. Wenn Sie weiter auf dieser Website surfen, stimmen Sie unserer Nutzung von Cookies zu.