
Log Compaction | Kafka Summit Edition | May 2016

Last week, Confluent hosted Kafka Summit, the first ever conference to focus on Apache Kafka and stream processing. It was exciting to see the stream processing community coming together in one event to share their work and discuss possible improvements. The conference sold out several weeks in advance and over 550 Kafka enthusiasts attended.

The sessions were well received overall, thanks to all of the speakers who put in the time and effort that made the conference such high quality – a special thanks to them! I’d like to highlight a few of the sessions and discussions that attendees were especially excited about.

Hacking on Kafka Connect and Kafka Streams

On the Monday evening before the conference we held a Stream Data Hackathon. The room was packed with over 100 participants hacking away on experimental stream processing projects. There were many awesome projects, and we will publish a separate blog post to share all of them. The winning projects combined creativity and usefulness:

  • Real-time sentiment analysis of tweets, used to evaluate and visualize how Twitter collectively feels about the US presidential candidates. Both Kafka Connect and Kafka Streams were used to implement this project. The project is by Ashish Singh from Cloudera.
  • Measuring electrical activity from the brain with a Bluetooth device, using Kafka to stream the data to OpenTSDB, and visualizing it with Grafana. The project is by a team from Silicon Valley Data Science.
  • A Kafka connector for streaming events from Jenkins to Kafka, collecting all the events about Jenkins jobs across an organization in one central location (see the sketch after this list for what the source side of such a connector can look like). The project is by Aravind Yarram from Equifax.
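
To make the Jenkins project a little more concrete: a Kafka Connect source connector hands records to the framework from a poll() loop. Here is a minimal, hypothetical sketch of that source-task side – the class name, config key, topic, and placeholder event are mine for illustration, not the hackathon team’s actual code:

```java
// Hypothetical sketch of a Kafka Connect source task for Jenkins events.
// poll() turns externally collected events (here, a fake job-status JSON
// string) into SourceRecords that the Connect framework writes to Kafka.
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class JenkinsEventSourceTask extends SourceTask {
    private String topic;

    @Override
    public void start(Map<String, String> config) {
        topic = config.get("topic"); // target topic from the connector config
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        // A real connector would read from the Jenkins API or a notification
        // endpoint; this sketch emits a single placeholder event.
        String event = "{\"job\":\"build-42\",\"status\":\"SUCCESS\"}";
        SourceRecord record = new SourceRecord(
                Collections.singletonMap("source", "jenkins"), // source partition
                Collections.singletonMap("offset", 0L),        // source offset
                topic, Schema.STRING_SCHEMA, event);
        Thread.sleep(1000); // avoid a tight loop in this sketch
        return Collections.singletonList(record);
    }

    @Override
    public void stop() {}

    @Override
    public String version() { return "0.1-sketch"; }
}
```

The framework handles offset tracking, scheduling, and delivery to Kafka; the task only has to produce SourceRecords.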

Kafka Summit SF 2016

Keynote Sessions

The next day opened with a gourmet breakfast, immediately followed by three keynote talks. Neha Narkhede gave a wonderful overview of the growth of the Apache Kafka project and community since she and the other Kafka co-creators (Jay Kreps, Jun Rao, and others) started the project at LinkedIn. Then Jay Kreps shared his thoughts on the future of stream processing and how this new paradigm will change the way companies use data. Last (but not least), Aaron Schildkrout, Uber’s head of data and marketing (I love this title), discussed the ways his company uses Kafka and how their use cases are evolving. It’s pretty inspiring to think of drivers getting real-time feedback on how they’re driving, right on their phones.

Breakout Sessions

After the keynote session, we headed to the 28 breakout sessions across three tracks:

  • Systems Track – focused on stream processing
  • Operations Track – how to run Kafka in production
  • Users Track – use cases and architectures

After the conference, I asked some of the attendees what their favorite sessions were.

In the Systems track, the attendees loved “Fundamentals of Stream Processing with Apache Beam” by Frances Perry and Tyler Akidau from Google. I’ve heard many attendees discuss how this presentation changed the way they think about stream processing applications. “Introducing Kafka Streams: Large-scale Stream Processing with Kafka” by Neha Narkhede was also incredibly popular, and many attendees are looking forward to the imminent release of Apache Kafka 0.10.0, which will include Kafka Streams.
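
For anyone who hasn’t tried it yet, here is a minimal sketch of what a Kafka Streams application looks like against the upcoming 0.10.0 API – the topic names and the transformation are hypothetical placeholders:

```java
// A minimal Kafka Streams sketch against the 0.10.0-era API: read text
// lines from one topic, uppercase them, and write them to another topic.
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class UppercaseApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> lines = builder.stream("text-input");
        lines.mapValues(line -> line.toUpperCase()) // transform each record value
             .to("text-output");                    // write results back to Kafka

        KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();
    }
}
```

Note that this runs as a plain Java application with no separate processing cluster, which is a big part of the library’s appeal.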

In the Operations track, attendees enjoyed “101 Ways to Configure Kafka – Badly” by Henning Spjelkavik & Audun Strand from Finn.no, who shared all the mistakes they made as new Kafka users and how they corrected them. This presentation was a great mix of entertainment and education, and I’m sure no one who attended the session will end up with an 8-node ZooKeeper cluster (ZooKeeper needs a majority quorum, so an even-sized ensemble tolerates no more failures than one with a node fewer – it only adds overhead).

In the Users track, attendees loved “Real-Time Analytics Visualized w/ Kafka + Streamliner + MemSQL + ZoomData” by Anton Gorshkov from Goldman Sachs, who built a stream processing application live on stage, even processing SMS messages sent by the audience in real time.

Video Recordings and Photos

Yes, we did record the sessions and they will be available in a week or so. I highly recommend checking them out. Links to the video recordings will be added to each of the session pages on www.kafka-summit.org. Follow @ConfluentInc on Twitter and we’ll let you know as soon as they are ready. 

We’ll also post some photos from the conference soon on the Confluent Facebook page.

Networking

As often happens at conferences, the sessions don’t tell the whole story. One of the highlights of the conference for me was interacting and exchanging ideas with the leaders of many different stream processing technologies. How often does it happen that leaders of Apache Storm, Apache Spark, Apache Flink, Apache Beam, and Apache Kafka get together to discuss abstractions, concepts, how to benchmark streams, and the best ways to educate an audience? Kafka Summit is, to the best of my knowledge, the only conference where this community gets together and shares its vision.

The Confluent team is looking forward to hosting Kafka Summit again next year. If you weren’t able to make it last week, fill out the Stay-In-Touch form on the home page of www.kafka-summit.org and you’ll get updates about next year’s conference.

Thanks again to all that made it to Kafka Summit 2016 in San Francisco last week! The Confluent team enjoyed meeting everyone and we had a fantastic time!

Quick note on the next Apache Kafka release

A new release candidate for version 0.10.0 has been posted to the Apache Kafka mailing lists and a new vote has started. This release candidate contains two new features: support for additional SASL authentication mechanisms (KIP-43) and a new API for clients to determine the features supported by the brokers (KIP-35).
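
As an illustration of what KIP-43 enables, here is a hedged sketch of a producer pointed at a SASL/PLAIN-secured broker – the broker address, topic, credentials, and JAAS file path are hypothetical placeholders:

```java
// Sketch of a 0.10.0 client configured for SASL/PLAIN (added by KIP-43).
// In 0.10.0 the login credentials come from a JAAS file passed to the JVM:
//   -Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaslPlainProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093"); // placeholder address
        props.put("security.protocol", "SASL_SSL");     // SASL over TLS
        props.put("sasl.mechanism", "PLAIN");           // mechanism from KIP-43
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "hello, secured Kafka"));
        }
    }
}
```

The JAAS file referenced above would define a KafkaClient login context using the PlainLoginModule together with the client’s username and password.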
