Project Metamorphosis: Introducing the Next-Generation Event Streaming Platform
September | Global

Take Apache Kafka® Global with Confluent

  • Build a globally connected Kafka deployment without the operational complexity
  • Accelerate project delivery times with access to real-time events from anywhere
  • Increase reliability of mission-critical applications by minimizing data loss and downtime

Global resources

Aug 24

Launch announcement | Global

Take Apache Kafka Global with Confluent

Aug 24 - Aug 25

Kafka Summit 2020

Discover the World of Event Streaming

Aug 24

CP 6.0 Announcement

Completing the Event Streaming Platform

Sept

Cluster Linking

Introduction to Cluster Linking

Why Global matters

Reduce operational complexity with global Kafka deployments

Problem: Making Kafka globally available means sharing data across fully independent clusters, regardless of where the clusters are hosted (on-prem or across multiple cloud providers) or the distance between them. This requires replicating topics across clusters in different environments, which can result in additional infrastructure costs, operational burden, and architectural complexity. Confluent allows you to:

  • Scale your event streaming use cases across hybrid-cloud or multi-cloud architectures
  • Improve operational efficiency by joining clusters together with Cluster Linking, regardless of environment or physical distance, without running a separate system to manage replication
  • Provide a consistent operational experience regardless of distribution, by simplifying and automating the deployment of self-managed Confluent clusters on market-leading Kubernetes distributions:
    • Red Hat OpenShift
    • VMware Tanzu (including Pivotal Container Service)
    • Google Kubernetes Engine (GKE)
    • Amazon Elastic Container Service for Kubernetes (EKS)
    • Azure Kubernetes Service (AKS)
    • Plus any Kubernetes distribution or managed service meeting the Cloud Native Computing Foundation’s (CNCF) conformance standards
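As a rough sketch of the idea behind Cluster Linking: the destination cluster is pointed at the source cluster's bootstrap servers through a link configuration, and mirrored topics then carry the source data byte-for-byte, offsets included, with no separate replication system to run. The property file below is illustrative only; exact property names, security settings, and the CLI used to create a link vary by Confluent Platform version, so consult the Cluster Linking documentation for your release:

```properties
# link.properties, supplied on the destination cluster (illustrative sketch)

# Where the source cluster lives; the destination pulls topic data over this link
bootstrap.servers=source-cluster.example.com:9092

# Credentials/encryption for reaching the source cluster would go here, e.g.:
# security.protocol=SASL_SSL
# sasl.mechanism=PLAIN
```

A named link referencing this file is then created with the Cluster Linking tooling on the destination cluster, after which individual topics can be mirrored over the link without deploying any additional nodes.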
Remove data silos and access real-time events from anywhere

Problem: Sharing data between two clusters through replication is difficult because message offsets are not preserved across clusters, and replication requires managing a separate system. Consumers that start reading from a different cluster risk reading the same messages twice or missing messages entirely, resulting in topics with inconsistent data between environments. Furthermore, replicating data between clouds is expensive because providers charge when data is retrieved, resulting in high data egress fees. Confluent allows you to:

  • Lower the risk of reprocessing or skipping critical messages by ensuring consumers know where to start reading topic-level data when migrating from one environment to another
  • Ensure event data is ubiquitously available by offering asynchronous replication without deploying any additional nodes with Cluster Linking
  • Simplify data management by replicating all event data once before allowing it to be read by unlimited applications in the new cloud environment
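To make the offset problem above concrete, here is a small self-contained simulation (plain Python, not a Kafka API) of why a consumer that migrates clusters can skip messages when replication does not preserve offsets. The log contents and the offset shift are invented for illustration:

```python
# Hypothetical illustration: why offset preservation matters when migrating consumers.

# A source topic partition holding messages at offsets 0..9.
source_log = {off: f"msg-{off}" for off in range(10)}

# A replicator that does NOT preserve offsets appends records starting at 0.
# If replication began after offset 3, every replicated offset shifts down by 3.
replica_log = {i: source_log[off] for i, off in enumerate(range(3, 10))}

committed = 6  # the consumer's committed offset on the source cluster

# Resuming at the same numeric offset on the replica reads the wrong record:
assert source_log[committed] == "msg-6"
assert replica_log[committed] == "msg-9"   # msg-7 and msg-8 were silently skipped

# With offset-preserving replication (the guarantee Cluster Linking provides),
# the replica keeps the source offsets, so the consumer resumes correctly:
preserving_replica = dict(source_log)
assert preserving_replica[committed] == "msg-6"
```

The same mechanics in reverse (an offset shift upward) would cause duplicate reads instead of skips, which is why topic-level offset preservation removes both failure modes at once.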
Minimize data loss with streamlined disaster recovery operations

To replicate a Kafka cluster over to a backup data center using MirrorMaker 2, you need to spin up a separate Kafka Connect cluster to run the replication process, adding complexity to the overall architecture and putting greater management burden on your IT team. Even once the cluster is properly replicated, there are ongoing challenges such as DNS reconfigurations, imprecise offset translations, and siloed workflow burdens. Confluent allows you to:

  • Streamline disaster recovery operations by deploying a single cluster across multiple data centers with Multi-Region Clusters or connecting independent clusters with Cluster Linking
  • Achieve faster recovery time through automated failover without worrying about DNS reconfigurations and offset translations with Multi-Region Clusters
  • Ensure high availability and improve recovery objectives by minimizing disaster recovery complexity with Multi-Region Clusters
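With Multi-Region Clusters, a single cluster spans data centers and replica placement is declared per topic, so failover needs no DNS reconfiguration or offset translation. The JSON below sketches the shape of a replica placement policy as described in the Confluent Platform documentation; the rack names (used here to represent regions) are placeholders:

```json
{
  "version": 1,
  "replicas": [
    { "count": 2, "constraints": { "rack": "us-east" } },
    { "count": 2, "constraints": { "rack": "us-west" } }
  ],
  "observers": [
    { "count": 1, "constraints": { "rack": "us-west" } }
  ]
}
```

Full replicas participate in the in-sync replica set, while observers replicate asynchronously and can be promoted during a regional outage, trading a small recovery-point risk for lower cross-region produce latency.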

Confluent Benefits

Make Kafka Globally Available
Accelerate Project Delivery Times
Increase Reliability of Mission Critical Applications

Quickly connect Kafka regardless of environment or physical distance

Easily replicate events across clusters without the overhead

Ensure high availability by minimizing disaster recovery complexity

Features

  • Preview

    Cluster Linking

    Cluster Linking connects Kafka clusters without spinning up extra nodes, while preserving the offsets

  • Available

    Multi-Region Clusters

    Multi-Region Clusters deploys a single Kafka cluster across multiple data centers

  • Available

    Operator

    Operator simplifies running Confluent Platform as a cloud-native system on Kubernetes

More Project Metamorphosis releases

Elastic Scaling

How do you quickly scale Kafka to keep mission-critical apps running with no lag or downtime - and without over-provisioning expensive resources?

Read more

Everywhere

How can you ensure your Kafka infrastructure is flexible enough to adapt to your changing cloud requirements?

Read more

Infinite

How do you efficiently scale Kafka storage to make sure you can retain as much data as you need - without pre-provisioning storage you don't use?

Read more

Try it out

Cloud
Fully managed service

Deploy in minutes with pay-as-you-go pricing. Experience Kafka without servers.

Plattform
Self-managed software

Download our enterprise-ready platform.
See for yourself.

*Receive $200 off your bill each calendar month for the first three months