Join us as we build a complete streaming application with KSQL. There will be plenty of hands-on action, plus a description of our thought process and design choices along the way. Look out for advice on best practices and handy tips and tricks as we go. This is part 2 out of 3 in the Empowering Streams through KSQL series.
This session covers the patterns and techniques of using KSQL. Tim Berglund discusses the various building blocks that you can use in your own applications, starting with the language syntax itself and covering how and when to use its powerful capabilities like a pro. This is part 1 out of 3 in the Empowering Streams through KSQL series.
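As a small taste of the syntax this session walks through, declaring a stream over an existing Kafka topic and maintaining a continuously updated aggregate might look like this in KSQL (topic and column names are illustrative):

```sql
-- Declare a stream over an existing Kafka topic (names are illustrative).
CREATE STREAM pageviews (user_id VARCHAR, url VARCHAR)
  WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

-- A continuously maintained aggregate: views per user.
CREATE TABLE views_per_user AS
  SELECT user_id, COUNT(*) AS total
  FROM pageviews
  GROUP BY user_id;
```

Unlike a one-shot SQL query, the `CREATE TABLE ... AS SELECT` statement keeps running, updating the aggregate as new records arrive on the topic.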
In this presentation, we’ll discuss best practices of monitoring Apache Kafka. We’ll look at which metrics are critical to alert on, which are useful in troubleshooting and what may actually be misleading. We’ll review a few “worst practices” - common mistakes that you should avoid. We’ll then look at what metrics don’t tell you - and how to cover those essential gaps.
In this session, we’ll discuss disaster scenarios that can take down entire Kafka clusters and share advice on how to plan, prepare and handle these events. This is a technical session full of best practices - we want to make sure you are ready to handle the worst mayhem that nature and auditors can cause.
In this short session, we’ll discuss the basic patterns of multi-datacenter Apache Kafka architectures, explore some of the use-cases enabled by each architecture and show how Confluent Enterprise products make these patterns easy to implement.
In this session, we will go over everything that happens to a message – from producer to consumer, and pinpoint all the places where data can be lost. You will learn how developers and operation teams can work together to build a bulletproof data pipeline with Apache Kafka.
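To make the idea concrete, here is a minimal sketch of the durability-oriented client settings a talk like this typically covers. The key names follow Kafka's standard configuration keys; the values are illustrative, and the dicts would be passed to your Kafka client library of choice:

```python
# Durability-focused producer settings (standard Kafka configuration keys;
# values are illustrative, not a definitive recommendation).
durable_producer_config = {
    "acks": "all",                # wait for all in-sync replicas to acknowledge
    "retries": 2147483647,        # keep retrying transient send failures
    "enable.idempotence": True,   # prevent duplicates introduced by retries
}

# On the consumer side, avoid losing messages by committing offsets
# only after processing has actually succeeded.
durable_consumer_config = {
    "enable.auto.commit": False,  # commit manually once a batch is handled
}
```

Broker- and topic-level settings matter too: for example, `min.insync.replicas` on the topic controls how many replicas must confirm a write before `acks=all` succeeds.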
In this talk, we will describe the reference architecture of Confluent Enterprise, which is the most complete platform to build enterprise-scale streaming pipelines using Apache Kafka®. This talk is intended for data architects and system administrators planning to deploy Apache Kafka in production.
In this talk we’ll discuss the benefits of introducing a streaming platform to your architecture including how it can greatly simplify complexity, speed up performance, and help your team deliver the features they need with real-time data integration. This is part 3 out of 3 in the Streaming in Action Online Talk Series.
With Apache Kafka® and its Streams API, it’s possible to move much of what you would have done in a batch-oriented, sluggish process into a real-time one. In this talk, we’ll cover the benefits of bringing concepts of Hadoop to real-time applications. This is part 2 out of 3 in the Streaming in Action Online Talk Series.
Without seeing what’s wrong with today’s messaging queues, it can be initially confusing to view Apache Kafka® as more. By adding additional functionality, true storage, and guarantees it opens opportunities to take full advantage of a publish/subscribe model. This is part 1 out of 3 in the Streaming in Action Online Talk Series.
Join three Apache Kafka® community members as they share how they leverage a streaming platform to speed up development, evolve their infrastructure, and provide real-time streaming applications and data pipelines.
Leading companies across industries are powering their digital transformation by transitioning from RDBMS to NoSQL – and many are using Apache Kafka to make the move easier. Kafka can bridge the two worlds, moving SQL table data into JSON documents and vice versa. It is also ideal for business-critical applications that drive real-time stream processing and analytics, intersystem messaging, high-volume data ingestion, and operational metrics collection.
Learn about the recent additions to Apache Kafka to achieve exactly-once semantics (EoS) including support for idempotence and transactions in the Kafka clients. The main focus will be the specific semantics that Kafka distributed transactions enable and the underlying mechanics which allow them to scale efficiently.
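As a rough sketch of what enabling these features looks like on the client side, the settings below use Kafka's standard configuration keys; the `transactional.id` value and the commented call sequence are illustrative, and the exact method names depend on your client library:

```python
# Client settings that enable exactly-once semantics
# (standard Kafka configuration keys; transactional.id value is illustrative).
eos_producer_config = {
    "enable.idempotence": True,          # broker de-duplicates retried sends
    "transactional.id": "orders-app-1",  # stable ID so transactions survive restarts
    "acks": "all",
}

# Consumers of transactional topics should skip records from aborted transactions.
eos_consumer_config = {
    "isolation.level": "read_committed",
}

# Typical transactional send loop (illustrative call sequence; exact
# method names vary by client library):
#   producer.init_transactions()
#   producer.begin_transaction()
#   producer.produce(...)                       # one or more sends
#   producer.send_offsets_to_transaction(...)   # tie consumed offsets to the txn
#   producer.commit_transaction()               # or abort_transaction() on error
```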
Neha Narkhede talks about the experience at LinkedIn moving from batch-oriented ETL to real-time streams using Apache Kafka and how the design and implementation of Kafka was driven by this goal of acting as a real-time platform for event data. She covers some of the challenges of scaling Kafka to hundreds of billions of events per day at LinkedIn while supporting thousands of engineers.
Join David Tucker, director of partner engineering at Confluent, and Keith Chambers, product manager at Mesosphere, to learn more about managing real-time data and how Confluent and DC/OS make it easier to build and run modern enterprise apps.
Join us as we walk through an overview of this exciting new service from the experts in Kafka. Learn how to build robust, portable and lock-in free streaming applications using Confluent Cloud.
Chrix Finne and Bob Lehmann share their experience building and implementing a Kafka-based cross-data-center streaming platform to facilitate the move to the cloud—in the process, kick-starting Monsanto’s transition from batch to stream processing.
This three-part online talk series introduces key concepts, use cases and best practices for getting started with microservices. Get a thorough understanding of the design principles behind microservices, the problems that arise as you grow and how you can leverage a streaming platform as the foundation for building your modern application architectures.
Apache Kafka committer Gwen Shapira will review the benefits of a schema registry for large-scale Kafka deployments and will give a high-level overview of how the Confluent schema registry is being used in enterprise architectures across industries.
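To illustrate what a schema registry manages, here is a sketch of an Avro schema and the producer settings that point a schema-registry-aware serializer at it. The record name, fields and URL are placeholders; the serializer class and configuration key follow Confluent's documented names:

```python
# An illustrative Avro schema; a schema-registry-aware serializer would
# register it under a subject (e.g. "pageviews-value") on first use.
value_schema = {
    "type": "record",
    "name": "Pageview",
    "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "url", "type": "string"},
    ],
}

# Producer settings for Confluent's Avro serializer
# (the registry URL is a placeholder).
avro_producer_config = {
    "schema.registry.url": "http://localhost:8081",
    "key.serializer": "io.confluent.kafka.serializers.KafkaAvroSerializer",
    "value.serializer": "io.confluent.kafka.serializers.KafkaAvroSerializer",
}
```

The payoff is compatibility checking: the registry can reject a schema change that would break existing consumers before any bad data reaches the topic.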
In this session, we’ll talk about the new single message transform capabilities, how to use them to implement things like data masking and advanced partitioning and when you’ll need to use more complex tools like the Kafka Streams API instead.
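As a sketch of the data-masking case, a source connector configuration can apply Kafka Connect's built-in `MaskField` transform to blank out sensitive fields in flight. The connector name, connection URL, and field names below are placeholders; the transform class is part of Apache Kafka:

```json
{
  "name": "jdbc-source-masked",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db:5432/crm",
    "topic.prefix": "crm-",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "transforms": "mask",
    "transforms.mask.type": "org.apache.kafka.connect.transforms.MaskField$Value",
    "transforms.mask.fields": "ssn,credit_card"
  }
}
```

Because the transform runs inside the Connect worker, each record is masked before it ever reaches the Kafka topic; anything requiring joins or state is where the Kafka Streams API takes over.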
It's 3 am. Do you know how your Apache Kafka cluster is doing?
Watch this on-demand recording to learn how Confluent Control Center is used to simplify deployment, improve operability and ensure message delivery.
In this series, we’ll discuss and demonstrate the latest advancements available in Apache Kafka 0.10.2 and Confluent Enterprise 3.2, and show how to apply them to deploy a production-ready streaming platform at scale with Confluent.
Confluent and KPI Partners give a brief introduction to Apache Kafka and describe its usage as a platform for streaming data. We'll explore how Kafka serves as a foundation for both streaming data pipelines and applications that consume and process real-time data streams and introduce some of the newer components of Kafka that help make this possible.
Join experts from Confluent, Attunity and Capgemini to learn how you can unlock your mainframe data with unique change data capture (CDC) functionality, deliver ongoing streams of data in real-time to the most demanding analytics environments and identify use cases that can help you get started delivering value to the business moving from POC to Pilot to Production.
This talk focuses on how to integrate all the components of the Apache Kafka ecosystem into an enterprise environment and what you need to consider as you move into production.
This concludes our 6-part Apache Kafka: Online Talk Series.
In this talk, we survey the stream processing landscape, the dimensions along which to evaluate stream processing technologies, and how they integrate with Apache Kafka.
This is part 5 of 6 in the Apache Kafka: Online Talk Series.
Get an introduction to Apache Kafka and how it serves as a foundation for streaming data pipelines and applications that consume/process real-time data streams.
This is part 1 of 6 in the Apache Kafka: Online Talk Series.
Get details on what’s new in Confluent Platform 3.1 to help simplify running enterprise-ready Apache Kafka in production at scale, and increase the performance and reliability of your Kafka environment.
Get answers to: How would you use Kafka in a microservices application? How do you build services over a distributed log and leverage the fault tolerance and scalability that come with it? When should you use these tools, and when should you not? And where does stream processing fit in?
Experts from Confluent and Attunity share how you can: realize the value of streaming data ingest with Kafka, turn databases into live feeds for streaming ingest and processing, accelerate data delivery to enable real-time analytics, and reduce skill and training requirements for data ingest.
Learn typical use cases for Apache Kafka, how you can get real-time data streaming from Oracle databases to move transactional data to Kafka, and enable continuous movement of your data to provide access to real-time analytics.
Get an understanding of how leading enterprises are securing their streaming data. Learn the basics of Kafka ACL authentication and security, policy-driven encryption practices for data-at-rest and the necessary steps to configure your streaming platform securely.
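As a sketch of what ACL management looks like in practice, Kafka ships a CLI for granting permissions; the principal, topic name and ZooKeeper address below are placeholders, and the command must run against a cluster with an authorizer enabled:

```shell
# Grant the principal User:alice read access to the payments topic
# (principal, topic and ZooKeeper address are placeholders).
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice \
  --operation Read --topic payments
```

Authentication (e.g. SASL or TLS client certificates) supplies the principal identity; the ACLs then decide what that principal may do.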