Conferences

Confluent is proud to participate in the following conferences, trade shows and meetups.

Gwen Shapira Talks Kafka and the Service Mesh

RSVP

5:15 p.m. Doors open
5:15 p.m. - 5:45 p.m. Pizza, drinks and networking
5:45 p.m. - 6:45 p.m. Gwen Shapira, Confluent
6:45 p.m. - 7:00 p.m. Additional Q&A and networking

Speaker: Gwen Shapira, Principal Data Architect, Confluent
Session: Kafka and the Service Mesh

Service Mesh is an infrastructure layer for microservices communication. It abstracts the underlying network details and provides discovery, routing and a variety of other functionality. Apache Kafka® is a distributed streaming platform with pub/sub APIs—also often used to provide an abstract communication layer for microservices.

In this talk we’ll discuss the similarities and differences between the communication layer provided by a service mesh and by Apache Kafka. We’ll discuss the different paradigms they help implement—streaming vs. request/response—and how to decide which paradigm fits different requirements. We’ll then discuss a few ways to combine them and to use Apache Kafka within a service-mesh architecture. We’ll conclude with thoughts on how the Apache Kafka project and its ecosystem can evolve to provide some of the functionality available in service mesh implementations.

RSVP

VMworld 2018

Speaker: Sagar Gunniguntla, Head of Cloud and Platform Partnerships, Confluent
Speaker: Tim Berglund, Senior Director of Developer Experience
Session: Confluent Platform: Introduction and Deployment on PKS [CODE5593U]
Wednesday, August 29, 9:00 a.m. - 9:30 a.m.

An industry-wide consensus is forming that streaming is the future, but building an entire streaming platform on your own—to say nothing of operating it—is a challenge most teams are not resourced to meet. Confluent Platform is becoming the de facto standard on which enterprise streaming systems are built. This talk will give you a lightning-fast introduction to the Confluent Platform and discuss container-based deployments on Pivotal Container Service (PKS) on vSphere.

Session Details
Event Details

BIRTE 2018

This is the 12th international workshop on real-time business intelligence and analytics focused on bridging academic and industrial innovation for prosperity and progress.

Speaker: Matthias Sax, Software Engineer, Confluent
Speaker: Guozhang Wang, Software Engineer, Confluent
Speaker: Matthias Weidlich, Humboldt-Universität zu Berlin
Speaker: Johann-Christoph Freytag, Humboldt-Universität zu Berlin
Session: Streams and Tables: Two Sides of the Same Coin
16:00 - 16:40
Event Details

Exploring Kafka Support in Spring

Apache Kafka® is gaining immense popularity for writing messaging and data-intensive applications. Spring Framework provides excellent first-class support for writing applications using Kafka through its widely used abstractions. In these talks, we will show you how easy it is to integrate Kafka with various types of Spring applications. Both Spring Kafka and Spring Cloud Stream are popular open source projects. Spring Kafka provides various Spring-style abstractions for writing applications using Kafka. Spring Cloud Stream is a framework for writing data-driven microservices for various cloud platforms. We will explore some details about these technologies at the meetup.

Speaker: Gary Russell
Session: Introduction to Spring Kafka

Gary Russell is the lead of several Spring projects, including Spring Kafka, Spring Integration and Spring AMQP.

Speaker: Soby Chacko
Session: Introduction to Spring Cloud Stream and the Kafka support

Soby Chacko is a core committer to several Spring projects, including the Spring Cloud Stream family of projects and Spring Cloud Data Flow.

RSVP

Apache Kafka: Past, Present and Future

Speaker: Jun Rao, Apache Kafka Co-creator, Kafka Committer, Kafka PMC Member, VP of Kafka and Confluent Co-founder

In 2010, LinkedIn began developing Apache Kafka®. In 2011, Kafka was released as an Apache open source project. Since then, the use of Kafka has grown rapidly across a variety of businesses. Today, more than 30% of Fortune 500 companies use Kafka.

In this online talk, Confluent Co-founder Jun Rao will:

    Explain how Kafka became the predominant publish/subscribe messaging system that it is today
    Introduce Kafka's most recent additions to its set of enterprise-level features
    Demonstrate how to evolve your Kafka implementation into a complete real-time streaming data processing platform that functions as the central nervous system for your organization

Jun Rao is the co-founder of Confluent, a company that provides a stream data platform on top of Apache Kafka. Before Confluent, Jun was a senior staff engineer at LinkedIn, where he led the development of Kafka. Before LinkedIn, he was a researcher at IBM’s Almaden Research Center, where he conducted research on database and distributed systems. Jun is the PMC chair of Apache Kafka and a committer of Apache Cassandra.

RSVP

APIdays Melbourne 2018

Speaker: David Peterson, Systems Engineer, Confluent
Session: Real-Time Stream Processing with KSQL and Kafka
Tuesday, September 4, 1:15 p.m.

Unordered, unbounded and massive datasets are increasingly common in day-to-day business, but taking advantage of them is incredibly difficult with current system designs. We are stuck in a model where we can only act on this data after it has happened; many times, this is too late to be useful in the enterprise.

KSQL is the streaming SQL engine for Apache Kafka®. KSQL lowers the entry bar to the world of stream processing, providing a simple and completely interactive SQL interface for processing data in Kafka. KSQL (like Kafka) is open source, distributed, scalable and reliable. A real-time Kafka platform moves your data up the stack, closer to the heart of your business, allowing you to build scalable, mission-critical services by quickly deploying SQL-like queries in a serverless pattern.

This talk will highlight key use cases for real-time data and stream processing with KSQL: real-time analytics, security and anomaly detection, real-time ETL/data integration, internet of things, application development and deploying machine learning models with KSQL. Real-time data and stream processing means that Kafka is just as important to the disrupted as it is to the disruptors.

Event Details

Aligning GDPR with Business Objectives in the Streaming Era

RSVP
Speaker: Paige Bartley, Senior Analyst, Data and Enterprise Intelligence, Ovum
Speaker: Cameron Tovey, Head of Information Security, Confluent

There’s a prevailing enterprise perception that compliance with data protection regulation, such as the EU’s General Data Protection Regulation (GDPR), is a burden that limits how data can be leveraged. However, the core requirement of compliance—better control of data—has multiple downstream benefits. When compliance objectives are aligned with existing business objectives, the business can experience a net gain.

For many organizations, streaming data represents a gap in governance efforts. While this certainly poses a risk for data protection regulations such as GDPR, it also limits the potential of data in broader enterprise initiatives that look to maximize the value of information.

Learning objectives:

    Understand how GDPR compliance can be a facilitator of existing business objectives rather than a burden
    Find out how to align existing business initiatives with compliance initiatives for maximum business benefit
    Learn about the place of streaming data and data-in-motion in the compliance effort
    Identify the governance and tooling gaps in existing open-source technology
    Discover your options for improving governance

Paige Bartley is a senior analyst in Ovum's Data and Enterprise Intelligence team specializing in all aspects of the data lifecycle including creation, cleansing, security, privacy, and productivity. Working across the information management space, Paige researches how data use affects both large organizations and individuals alike. She provides insight and analysis into data ROI and successful organizational strategy. Paige’s other areas of expertise include regulatory and legal matters, data preparation, data quality, unstructured data, master data and records management, as well as neuroscience and cognitive science. Prior to joining Ovum in 2016, she worked in research and marketing for ZL Technologies.

Cameron Tovey is the head of information security at Confluent. An information security leader with nearly 20 years of experience protecting data, he ensures that Confluent’s information security program is complete and running smoothly. Before Confluent he protected data for technology startups, healthcare organizations, retail companies, banking institutions and other Fortune 100 entities.

RSVP

Unleashing Apache Kafka and TensorFlow in the Cloud

Speaker: Kai Waehner, Technology Evangelist, Confluent
10:00 a.m. BST | 11:00 a.m. CEST

In this online talk, Technology Evangelist Kai Waehner will discuss and demo how you can leverage technologies such as TensorFlow with your Kafka deployments to build a scalable, mission-critical machine learning infrastructure for ingesting, preprocessing, training, deploying and monitoring analytic models.

He will explain challenges and best practices for building a scalable infrastructure for machine learning using Confluent Cloud on Google Cloud Platform (GCP), Confluent Cloud on AWS and on-premise deployments.

The discussed architecture will include capabilities like scalable data preprocessing for training and predictions, combination of different deep learning frameworks, data replication between data centers, intelligent real-time microservices running on Kubernetes and local deployment of analytic models for offline predictions.

Join us to learn about the following:

    Extreme scalability and unique features of Confluent Cloud
    Building and deploying analytic models using TensorFlow, Confluent Cloud and GCP components such as Google Storage, Google ML Engine, Google Cloud AutoML and Google Kubernetes Engine in a hybrid cloud environment
    Leveraging the Kafka ecosystem and Confluent Platform in hybrid infrastructures

RSVP

Strata Data East

Event Details
Speaker: Tim Berglund, Senior Director of Developer Experience, Confluent
Session: Stream Processing with Kafka and KSQL
Room: 1E 14
Tuesday, September 11, 9:00 a.m. – 12:30 p.m.

Apache Kafka® is a de facto standard streaming data processing platform. It is widely deployed as a messaging system and has a robust data integration framework (Kafka Connect) and stream processing API (Kafka Streams) to meet the needs that commonly attend real-time message processing. But there’s more!

Kafka now offers KSQL, a declarative, SQL-like stream processing language that lets you define powerful stream-processing applications easily. What once took some moderately sophisticated Java code can now be done at the command line with a familiar and eminently approachable syntax. Come to this talk for an overview of KSQL with live coding on live streaming data.
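To give a feel for the comparison (this is not the talk's demo; the topic name and default String serdes are assumptions), the "moderately sophisticated Java code" side of the argument might look like the Kafka Streams DSL sketch below, whereas in KSQL the same idea collapses to a single SELECT with a windowing clause:

    import java.time.Duration;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class PageviewCounts {
        public static void main(String[] args) {
            // Per-key count over five-minute tumbling windows.
            // Topic name "pageviews" and String key/value serdes
            // (set as defaults in the StreamsConfig) are assumptions.
            StreamsBuilder builder = new StreamsBuilder();
            builder.<String, String>stream("pageviews")
                   .groupByKey()
                   .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
                   .count()
                   .toStream()
                   .foreach((windowedKey, count) ->
                           System.out.println(windowedKey + " -> " + count));
            // builder.build() would then be passed to a KafkaStreams instance.
        }
    }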

Tim Berglund is a teacher, author and technology leader with Confluent, where he serves as the senior director of developer experience. Tim can frequently be found speaking at conferences in the United States and internationally. He is the co-presenter of various O’Reilly training videos on topics ranging from Git to distributed systems and is the author of "Gradle Beyond the Basics."

Session Details
Speaker: Jay Kreps, Co-founder and CEO, Confluent
Session: Apache Kafka and the Four Challenges of Production Machine Learning Systems
Room: 1A 21/22
Wednesday, September 12, 5:25 p.m. - 6:05 p.m.

Machine learning has become mainstream, and suddenly businesses everywhere are looking to build systems that use it to optimize aspects of their product, processes or customer experience. The cartoon version of machine learning sounds quite easy: You feed in training data made up of examples of good and bad outcomes, and the computer automatically learns from these and spits out a model that can make similar predictions on new data not seen before. What could be easier, right?

Those with real experience building and deploying production systems built around machine learning know that, in fact, these systems are shockingly hard to build, deploy, and operate. This talk will explain some of the difficulties of building production machine learning systems and talk about how Apache Kafka and stream processing can help.

Jay Kreps is the co-founder and CEO of Confluent, the company behind the popular Apache Kafka streaming platform. Previously, Jay was one of the primary architects for LinkedIn, where he focused on data infrastructure and data-driven products. He was among the original authors of a number of open source projects in the scalable data systems space, including Voldemort (a key-value store), Azkaban, Kafka (a distributed streaming platform) and Samza (a stream processing system).

Session Details
Speaker: Jun Rao, Co-founder, Confluent
Session: A Deep Dive into Kafka Controller
Room: 1E 07/08
Thursday, September 13, 1:10 p.m. – 1:50 p.m.

The controller is the brain of Apache Kafka®. A big part of what the controller does is to maintain the consistency of the replicas and determine which replica can be used to serve the clients, especially during individual broker failure.

We will first describe the main data flow in the controller. In particular, (1) when a broker fails, how the controller automatically promotes another replica as the leader to serve the clients; (2) when a broker is started, how the controller resumes the replication pipeline in the restarted broker.

We then describe some of the recent improvements that we have made in the controller. Some of the improvements allow the controller to handle certain edge cases correctly. Some other improvements increase the performance of the controller, which allows for more partitions in a Kafka cluster.

Jun Rao is the co-founder of Confluent, a company that provides a streaming data platform on top of Apache Kafka. Previously, Jun was a senior staff engineer at LinkedIn, where he led the development of Kafka, and a researcher at IBM’s Almaden Research Center, where he conducted research on database and distributed systems. Jun is the PMC chair of Apache Kafka and a committer of Apache Cassandra.

Session Details
Speaker: Gwen Shapira, Principal Data Architect, Confluent
Session: The Future of ETL Isn’t What It Used To Be
Room: 1A 23/24
Wednesday, September 12, 11:20 a.m. – 12:00 p.m.

Data integration is a difficult problem. We know this because 80% of the time in every project is spent getting the data you want the way you want it. We know this because this problem remains challenging despite 40 years of attempts to solve it. Software engineering practices have constantly evolved, but in many organizations data engineering teams still party like it’s 1999.

We’ll start the presentation with a discussion of how software engineering changed in the last 20 years—focusing on microservices, stream processing, cloud and the proliferation of data stores. These changes represent both a challenge and opportunity for data engineers.

Then we’ll present three core patterns of modern data engineering:

    Building data pipelines from decoupled microservices
    Agile evolution of these pipelines using schemas as a contract for microservices
    Enriching data by joining streams of events

We’ll give examples of how these patterns were used by different organizations to move faster, not break things and scale their data pipelines. We’ll also show how these can be implemented with Apache Kafka.

Gwen Shapira is a system architect at Confluent, where she helps customers achieve success with their Apache Kafka implementations. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. Gwen currently specializes in building real-time, reliable data-processing pipelines using Apache Kafka. Gwen is an Oracle ACE Director, the co-author of "Hadoop Application Architectures" and a frequent presenter at industry conferences. She is also a committer on Apache Kafka and Apache Sqoop. When Gwen isn’t coding or building data pipelines, you can find her pedaling her bike, exploring the roads and trails of California and beyond.

Session Details
Event Details

AWS Summit Atlanta

The AWS Summit is a free event designed to bring together the cloud computing community to connect, collaborate and learn about AWS. Learn by attending sessions ranging in technical depth from introduction to advanced. Visit The Expo to speak with leading cloud technology providers and consultants who can help you get the most out of the AWS products, services and solutions in the cloud.

Event Details

AWS Summit Toronto

The AWS Summit is a free event designed to bring together the cloud computing community to connect, collaborate and learn about AWS. Learn by attending sessions ranging in technical depth from introduction to advanced. Visit The Expo to speak with leading cloud technology providers and consultants who can help you get the most out of the AWS products, services and solutions in the cloud.

Event Details

SpringOne Platform

Event Details
Speaker: Neha Narkhede, Co-founder and CTO, Confluent
Session: Keynote

Neha is the co-founder of Confluent and one of the initial authors of Apache Kafka®. She’s an expert on modern, stream-based data processing.

Speaker: Rohit Bakhshi, Product Manager, Confluent
Speaker: Prasad Radhakrishnan, Manager, Data Engineering, Pivotal
Session: Cloud-Native Streaming Platform: Running Apache Kafka® on PKS (Pivotal Container Service)

When it comes time to choose a distributed streaming platform for real-time data pipelines, everyone knows the answer: Apache Kafka. And when it comes to deploying real-time stream processing applications at scale without having to integrate several different pieces of infrastructure yourself? The answer is Kubernetes. In this talk, Rohit Bakhshi, product manager at Confluent, and Prasad Radhakrishnan, head of platform architecture for data at Pivotal, discuss best practices for running Apache Kafka and other components of a streaming platform, such as Kafka Connect and Schema Registry, as well as stream processing apps, on PKS (Pivotal Container Service). The presenters will cover the challenges and lessons learned from the development of Confluent Operator for Kubernetes as well as various custom deployments on PKS.

Session Details
Speaker: Viktor Gamov, Solutions Architect, Confluent
Speaker: Gary Russell, Senior Staff Software Engineer, Pivotal
Session: Walking up the Spring for Apache Kafka Stack

Spring provides several projects for Apache Kafka. spring-kafka brings familiar Spring programming paradigms to the kafka-clients library. spring-integration-kafka adds Spring Integration channel adapters and gateways. The Kafka binder for spring-cloud-stream provides Kafka support to microservices built with Spring Cloud Stream and used in Spring Cloud Data Flow. In this talk we will look at developing applications at each layer of the stack and discuss how to choose the right layer for your application.
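As a minimal sketch of the spring-kafka layer (the topic name "orders", the group id and Spring Boot auto-configuration of the listener container are assumptions, not details from the talk), a listener can be as small as:

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class OrderListener {

        // spring-kafka creates and manages the underlying consumer; this
        // method is invoked for each record on the assumed "orders" topic.
        @KafkaListener(topics = "orders", groupId = "order-consumers")
        public void onOrder(String order) {
            System.out.println("Received order: " + order);
        }
    }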

Session Details
Event Details

ApacheCon

Speaker: Kai Wähner, Technology Evangelist, Confluent
Session: How to Leverage the Apache Kafka Ecosystem to Productionize Machine Learning
Room: Ballroom 250
Thursday, 27 September, 12:20 - 13:10

This talk shows how to productionize machine learning models in mission-critical and scalable real-time applications by leveraging Apache Kafka® as a streaming platform. The talk discusses the relationship between machine learning frameworks such as TensorFlow, DeepLearning4J or H2O and the Apache Kafka ecosystem. A live demo shows how to build a mission-critical machine learning environment leveraging different Kafka components: Kafka messaging and Kafka Connect for data movement into and out of different sources and sinks, Kafka Streams for model deployment and inference in real time, and KSQL for real-time analytics of predictions, alerts and model accuracy.
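As a rough illustration of the model-serving idea rather than the talk's actual demo, the sketch below embeds a prediction call in a Kafka Streams topology; the Model interface, its predict method and the topic names are hypothetical placeholders standing in for a model exported from TensorFlow, DeepLearning4J or H2O:

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;

    public class ScoringTopology {

        // Hypothetical stand-in for an exported analytic model.
        interface Model {
            String predict(String features);
        }

        static StreamsBuilder build(Model model) {
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> readings = builder.stream("sensor-readings");
            readings.mapValues(model::predict)   // inference happens per record, inside the stream
                    .to("predictions");
            return builder;
        }
    }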

Kai Waehner works as a technology evangelist at Confluent. Kai’s main areas of expertise lie within the fields of big data analytics, machine learning/deep learning, messaging, integration, microservices, stream processing, internet of things and blockchain. He is a regular speaker at international conferences such as JavaOne and O’Reilly Software Architecture, and he writes articles for professional journals and shares his experiences with new technologies on his blog.

Session Details
Event Details

CarIT Congress

Confluent is connecting cars. Connecting customers. Connecting construction.

We do this by placing a streaming platform, as a central nervous system, at the heart of every automotive company.

If you want to arrange a meeting, please follow the link below, fill out the form and feel free to give us some context around your request. We will then reach out to you to define the exact timing and your Confluent contact.

Speaker: Falko Schwarz, Vice President of Regional Sales, CEMEA, Confluent
Speaker: Perry Krolz, Senior Systems Engineer, Confluent
Session: Connecting Cars: The Streaming Platform as the Central Nervous System of Modern OEMs (Workshop Session)
Room: Sydney 2
13:05 – 13:35

Event Details

Indy Big Data Technology Conference

Just when we thought big data couldn’t get any bigger, the International Data Corporation (IDC) predicted that the $130 billion big data market of 2016 will nearly double, with anticipated revenues of more than $203 billion by the start of the next decade. The year 2018 will shine a spotlight on artificial intelligence (AI) as the anticipated big data star of the year, debuting in everything from self-driving cars to healthcare and home products like Amazon’s Alexa and Google Home. Some experts forecast that the voice-activated internet alone will be a $10 billion industry by 2020. By 2021, the internet of things (IoT) is expected to draw in $6 trillion (with a “T”) in spending as a business solution.

With the big data market explosion also comes pressure for Fortune 500 organizations to put strategies in place to monetize their big data and hire cybersecurity experts to protect their ever-expanding pipelines and portfolios of big data. IoT is especially susceptible to hacking: as more and more devices hook into the internet, more and more areas of vulnerability are created.

Big data is a maze of opportunity with as many twists and turns as there are new big data technologies. This year’s speakers will present on these topics and many more as they help you navigate this maze and cash in on your next big data opportunity.

Speaker: Patrick Druley, Systems Engineer, Confluent
Event Details

Enterprise Cloud, DevOps and Data Centres

ECC is the U.K.’s leading event for cloud and DevOps thought leaders and innovators across government and all major business sectors.

Speaker: Tim Vincent, Systems Engineer, Confluent
Session: Keynote
8:30 a.m. - 4:30 p.m.

Event Details

JAX London

Speaker: Tim Berglund, Senior Director of Developer Experience, Confluent
Session: Stream Processing with Apache Kafka® and KSQL

Apache Kafka is a de facto standard streaming data processing platform. It is widely deployed as a messaging system and has a robust data integration framework (Kafka Connect) and stream processing API (Kafka Streams) to meet the needs that commonly attend real-time message processing. But there’s more!

Kafka now offers KSQL, a declarative, SQL-like stream processing language that lets you define powerful stream-processing applications easily. What once took some moderately sophisticated Java code can now be done at the command line with a familiar and eminently approachable syntax. In this workshop, you’ll get a thorough introduction to Apache Kafka, learn what sorts of architectures it supports, and, most importantly, use the exciting new KSQL language to write real-time stream processing applications.

Session Details
Speaker: Tim Berglund, Senior Director of Developer Experience, Confluent
Session: The Database Unbundled: Commit Logs in an Age of Microservices

Microservice architectures provide a robust challenge to the traditional centralized database we have come to understand. In this talk, we’ll explore the notion of unbundling that database, and putting a distributed commit log at the center of our information architecture. As events impinge on our system, we store them in a durable, immutable log (happily provided by Apache Kafka), allowing each microservice to create a derived view of the data according to the needs of its clients. Event-based integration avoids the now well-known problems of RPC and database-based service integration, and allows the information architecture of the future to take advantage of the growing functionality of stream processing systems like Apache Kafka. This way we can create systems that can more easily adapt to the changing needs of the enterprise and provide the real-time results we are increasingly being asked to provide.

Session Details

Tim is a teacher, author and technology leader with Confluent, where he serves as the Senior Director of Developer Experience. He can frequently be found speaking at conferences in the United States and all over the world. He is the co-presenter of various O’Reilly training videos on topics ranging from Git to distributed systems, and is the author of "Gradle Beyond the Basics."

Event Details

Confluent Streaming Event Munich 2018

Whether retail, healthcare, automotive manufacturing or banking, every industry is currently reinventing itself to meet the demands of the market now and in the future. This means that new concepts and technologies must take hold.

The goal is to create IT processes that react to real-world events in real time, creating immediate value for the business and the customer. These events are the central concept of an event-driven architecture and an event streaming platform that rethinks digitalization.

Most companies use Apache Kafka and Confluent as the heart of their digital strategy for real-time insight into customer data, fraud detection and IoT data processing.

On October 9, 2018, business enthusiasts, Kafka pioneers, Confluent users and experts will meet at the SOFITEL MUNICH BAYERPOST in Munich to learn more about current streaming projects in a wide range of industries.

Also, don’t miss the keynote by Gwen Shapira, Principal Data Architect at Confluent and author of "Kafka: The Definitive Guide," as well as the technical and business use cases presented by our customers and partners.

RSVP

Confluent Streaming Event Paris 2018

Companies are reinventing retail, healthcare, automotive, travel, financial services and nearly every other possible sector around real-time event streams, creating a new technology model. Event architecture follows the idea of designing software around events, things that happen in the real world and have real business meaning, so that these events become the central concept of the architecture.

More and more companies are using Apache Kafka® and Confluent at the heart of their digital strategy to get real-time information about customer data, fraud detection and IoT data processing.

On October 11, 2018, join decision-makers, data architects and IT developers at the Marceau Trade Shows in Paris to discover more about streaming platforms and the Kafka projects underway in the most diverse industries.

Don’t miss the talk by Gwen Shapira, Principal Data Architect at Confluent and author of "Kafka: The Definitive Guide," as well as technical feedback from Confluent’s customers and partners. Book your place now, as seats are limited.

RSVP

Open Source Automation Day

For a whole day there will be exciting presentations and news on current developments in open source technologies, which you can discuss on site with the developers and specialist audience in attendance. Perry Krol from Confluent will present on Apache Kafka® and Kubernetes.

Speaker: Perry Krol, Senior Systems Engineer, Confluent
Session: Orchestrate Apache Kafka on Kubernetes
16:40 - 17:20
Event Details

Kafka Summit San Francisco

Discover the World of Streaming Data

As streaming platforms become central to data strategies, companies both small and large are rethinking their architecture with real-time context at the forefront. Monoliths are evolving into microservices. Data centers are moving to the cloud. What was once a ‘batch’ mindset is quickly being replaced with stream processing as the demands of the business impose more and more real-time requirements on developers and architects.

This revolution is transforming industries. What started at companies like LinkedIn, Uber, Netflix and Yelp has made its way to countless others in a variety of sectors. Today, thousands of companies across the globe build their businesses on top of Apache Kafka®. The developers responsible for this revolution need a place to share their experiences on this journey.

Kafka Summit is the premier event for data architects, engineers, devops professionals, and developers who want to learn about streaming data. It brings the Apache Kafka community together to share best practices, write code, and discuss the future of streaming technologies.

Welcome to Kafka Summit San Francisco!

Event Details

QCon Shanghai

Speaker: Guozhang Wang, Software Engineer, Confluent
Session: Apache Kafka, from 0.8 to 2.0

Guozhang is a Kafka committer and PMC member. He currently works at Confluent on Kafka Streams. Previously, he worked as a senior engineer in the LinkedIn Data Architecture Group, where he was responsible for real-time data processing platforms, including the development and maintenance of Apache Kafka® and Apache Samza. He obtained his Ph.D. in computer science from Cornell University in 2013. His research interests include database management and distributed data systems.

Session Details
Event Details

Reactive Summit

Event Details
Speaker: Antony Stubbs, Consultant Solutions Architect, Confluent
Session: Beyond the DSL—Unlocking the Power of Kafka Streams with the Processor API
Room: Ballroom
Tuesday, 23 October, 12:20 - 13:10

Kafka Streams is a flexible and powerful framework. The domain-specific language (DSL) is an obvious place from which to start, but not all requirements fit the DSL model.

Many people are unaware of the processor API (PAPI)—or are intimidated by it because of sinks, sources, edges and stores—oh my! But most of the power of the PAPI can be leveraged simply through the DSL’s `#process` method, which lets you attach the general-purpose `Processor` interface to your easy-to-use DSL topology, combining the best of both worlds.

In this talk you'll get a look at the flexibility of the DSL's process method and the possibilities it opens up. We'll use real-world use cases, born of extensive field experience with multiple customers, to explore the power of direct write access to the state stores and how to perform range sub-selects. We'll also see the options that punctuators bring to the table, as well as opportunities for major latency optimizations.
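As a small sketch of the `#process` hook described above (the topic name and the processor body are assumptions, and the code targets the 2.0-era Kafka Streams API), a Processor can be dropped into an otherwise plain DSL topology like this:

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.processor.AbstractProcessor;

    public class ProcessExample {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> events = builder.stream("events");

            // process() attaches a Processor API building block to the DSL
            // stream; inside it you get context(), direct state-store access
            // (when store names are passed as extra arguments) and punctuators.
            events.process(() -> new AbstractProcessor<String, String>() {
                @Override
                public void process(String key, String value) {
                    System.out.println(key + " -> " + value);
                }
            });
        }
    }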

Key takeaways:

    Understanding of how to combine DSL and processors
    Capabilities and benefits of processors
    Real-world uses of processors

Session Details
Event Details

LISA18

Event Details
Speaker: Robin Moffatt, Developer Advocate, Confluent
Session: Apache Kafka and KSQL in Action: Let’s Build a Streaming Data Pipeline!

Have you ever thought that you needed to be a programmer to do stream processing and build streaming data pipelines? Think again!

Apache Kafka® is a distributed, scalable and fault-tolerant streaming platform, providing low-latency pub-sub messaging coupled with native storage and stream processing capabilities. Integrating Kafka with RDBMS, NoSQL, and object stores is simple with the Kafka Connect API, which is part of Apache Kafka. KSQL is the open source SQL streaming engine for Apache Kafka and makes it possible to build stream processing applications at scale, written using a familiar SQL interface.

In this talk we’ll explain the architectural reasoning for Apache Kafka and the benefits of real-time integration, and we’ll build a streaming data pipeline using nothing but our bare hands, the Kafka Connect API and KSQL.

Gasp as we filter events in real time! Be amazed at how we can enrich streams of data with data from RDBMS! Be astonished at the power of streaming aggregates for anomaly detection!

This will be a practical talk, after which attendees will have a clear idea of the power of stream processing and how to get started with it using the open source Apache Kafka and KSQL projects.

Robin is a developer advocate at Confluent, as well as an Oracle ACE director and developer champion. His career has always involved data, from the old worlds of COBOL and DB2, through the worlds of Oracle and Hadoop and into the current world with Kafka. His particular interests are analytics, systems architecture, performance testing and optimization.

Session Details
Event Details

O'Reilly Software Architecture Conference

Event Details
Speaker: Benjamin Stopford, Technologist, Office of the CTO, Confluent
Session: Event Streaming as a Source of Truth
Room: King's Suite - Balmoral
Tuesday, 30 October, 15:50 - 16:40

One of the most interesting and provocative patterns to face the software architecture community is the use of event streaming as a source of truth: a pattern where replayable logs, like Apache Kafka®, are used for both communication and event storage, incorporating the retentive properties of a database in a system designed to share data across many teams, clouds and geographies.

This is a concept Ben Stopford believes to be transformative. Such a bold claim should of course be met with a healthy degree of skepticism, but the interesting thing about communication patterns is that their value comes from often subtle, systemic effects, particularly where humans are involved. You will be familiar with these already: email, Twitter, Slack and Facebook are all conceptually similar forms of communication, but display very different dynamics in practice, yet zeroing in on exactly why these tools operate and feel so different is rarely as simple as it may seem.

So, in this talk Ben explores not only the event streaming pattern but also the systemic effects it has on the architectures we build around it, digging into where the value really lies. He examines the relationship between events, event sourcing and stream processing, leading the audience to the idea of a database unbundled, or turned inside out. He also explores how the pattern encourages subtler systemic effects: easier evolution, a more ephemeral view on data and systems that seamlessly span departments, cloud providers and geographies.

Ben Stopford is a technologist at Confluent (the company behind Apache Kafka), where he has worked on a wide range of projects, from implementing the latest version of Kafka’s replication protocol to developing strategies for streaming applications. Before Confluent, Ben led the design and build of a company-wide data platform for a large financial institution, as well as working on a number of early service-oriented systems, both in finance and at ThoughtWorks. He is the author of the book “Designing Event-Driven Systems” (O’Reilly, 2018).

Session Details
Event Details

W-JAX

Event Details

Confluent will be a Platinum Sponsor at W-JAX 2018, the conference for Java, architecture and software innovation, taking place November 5 – 9 in Munich.

Take the opportunity to meet our streaming experts during the conference and learn more about how a real-time streaming platform can become the central nervous system of your enterprise.

Speaker: Ben Stopford, Technologist, Office of the CTO, Confluent
Session: The Future of Applications is Streaming
Session Details
Speaker: Thomas Trepper, Technical Instructor, EMEA, Confluent
Session: Apache Kafka® Workshop: Introduction to the Architecture and Ecosystem of Enterprise Data Streaming
Session Details
Speaker: Robin Moffatt, Developer Advocate, Confluent
Session: Apache Kafka and KSQL in Action: Build a Streaming Data Pipeline!
Session Details
Speaker: Kai Waehner, Technology Evangelist, Confluent
Session: Unleashing Apache Kafka and TensorFlow in the Cloud
Session Details
Event Details

Scale by the Bay

Event Details
Speaker: Neha Narkhede, Co-creator of Apache Kafka and Co-founder, Confluent
Session: Keynote II
Friday, November 16, 9:00 a.m. - 9:40 a.m.
Session Details
Speaker: Gwen Shapira, Principal Data Architect, Confluent
Session: Deploying Kafka Streams Applications with Docker and Kubernetes
Saturday, November 17, 2:10 p.m. - 2:50 p.m.

Kafka Streams, Apache Kafka®’s stream processing library, allows developers to build sophisticated, stateful stream processing applications that can be deployed in an environment of your choice. Kafka Streams is not only scalable but fully elastic, allowing for dynamic scale-in and scale-out as the library handles state migration transparently in the background. By running Kafka Streams applications on Kubernetes, you can use Kubernetes’ powerful control plane to standardize and simplify application management, from deployment to dynamic scaling.

In this technical deep dive, we’ll explain the internals of dynamic scaling and state migration in Kafka Streams. We’ll then show, with a live demo, how a Kafka Streams application can run in a Docker container on Kubernetes, and how such an application scales dynamically.
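As a hedged sketch of one configuration knob relevant to this kind of elasticity (the application id and bootstrap servers below are placeholders, not values from the talk), standby replicas keep warm copies of local state so that a rescheduled or newly added instance has less state to migrate:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class ElasticStreamsConfig {
        public static Properties build() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-enricher");   // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");     // placeholder
            // Standby replicas keep shadow copies of state stores on other
            // instances, so scale-in/scale-out or pod rescheduling needs less
            // state to be rebuilt from the changelog topics.
            props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
            return props;
        }
    }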

Session Details
Event Details

Big Data London

Speaker: Jay Kreps, Co-founder and CEO, Confluent
Room: Keynote Theater
13 November, 09:30

Jay Kreps is the CEO of Confluent, Inc., the company backing the popular Apache Kafka® messaging system. Prior to founding Confluent, he was the lead architect for data infrastructure at LinkedIn. He is among the original authors of several open source projects, including Project Voldemort (a key-value store), Apache Kafka (a distributed messaging system) and Apache Samza (a stream processing system).

Event Details

Big Data Spain

Discover the innovative AI and big data strategies that will shape our future, where the ideas of tomorrow meet the experts of today.

Event Details

Codemotion Berlin 2018 Tech Conference

Speaker: Kai Waehner, Technology Evangelist, Confluent

Kai’s main areas of expertise lie within the fields of big data analytics, machine learning, integration, microservices, internet of things, stream processing and blockchain. He is a regular speaker at international conferences such as JavaOne, O’Reilly Software Architecture and ApacheCon. He writes articles for professional journals and shares his experiences with new technologies on his blog.

Speaker Details
Event Details

XebiCon '18

“We are convinced that combining business vision with technological excellence is essential in a globalized and competitive world. IT systems must prepare today to integrate the technologies of tomorrow and their use cases. We firmly believe in the sharing of knowledge for everyone and by everyone. That is why we created XebiCon, the conference that will give you the keys to getting the most out of the latest technologies.”

-Luc Legardeur, President of Xebia

Event Details

AWS re:INVENT

Join us for deeper technical content, more hands-on learning opportunities, keynote announcements, a bigger and better Partner Expo, exciting after-hours events, and the best party in technology—re:Play.

At re:Invent 2018, you can dive into solving challenges and working on a team in our two-hour workshops. In the chalk talks or builders sessions, you will have the opportunity to interact in a small group setting with AWS experts as they whiteboard through problems and solutions. In addition, we will be repeating our most popular sessions and offering late night sessions, so you get the most out of re:Invent.

Event Details

UKOUG Conference

Event Details
Speaker: Robin Moffatt, Developer Advocate, Confluent
Session: Embrace the Anarchy (REDUX): Apache Kafka’s Role in Modern Data Architectures
Room: Analytics & Big Data 2
Tuesday, 4 December, 09:00 - 09:45
Session Details
Speaker: Robin Moffatt, Developer Advocate, Confluent
Session: Apache Kafka and KSQL in Action: Let’s Build a Streaming Data Pipeline!
Room: Analytics & Big Data 2
Wednesday, 5 December, 11:40 - 12:25
Session Details
Speaker: Robin Moffatt, Developer Advocate, Confluent
Session: No More Silos: Integrating Databases and Apache Kafka
Room: Database 2
Wednesday, 5 December, 12:35 - 13:20
Session Details
Event Details

IT-Tage 2018

Confluent is proud to be a Platinum Sponsor at IT-Tage 2018 in Frankfurt. Meet us at our booth and don’t miss the keynote by Ben Stopford, along with more Confluent sessions on Apache Kafka®, KSQL, machine learning and more.

Speaker: Ben Stopford, Technologist, Office of the CTO, Confluent
Session: Keynote
Event Details

KubeCon + CloudNativeCon

The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from leading open source and cloud native communities in Seattle, WA on December 11-13, 2018. Join Kubernetes, Prometheus, OpenTracing, Fluentd, gRPC, containerd, rkt, CNI, Envoy, Jaeger, Notary, TUF, Vitess, CoreDNS, NATS, and Linkerd as the community gathers for three days to further the education and advancement of cloud native computing.

Event Details

Ready to Talk to Us?

Have someone from Confluent contact you.

Contact Us
