Kafka Summit San Francisco
This is a repository of all the presentations from Kafka Summit San Francisco 2017, held on August 28.
Providing Reliability Guarantees in Kafka at One Trillion Events Per Day
Nitin Kumar, Principal Software Engineering Manager, Microsoft
One Day, One Data Hub, 100 Billion Messages: Kafka at LINE
Yuto Kawamura, Software Engineer, LINE Corporation
Running Kafka for Maximum Pain
Todd Palino, Staff Site Reliability Engineer, LinkedIn
Kafka makes so many things easier to do, from managing metrics to processing streams of data. Yet so much of what we have done to this point in configuring and managing it has been an object lesson in how to make our lives, as the plumbers who keep the data flowing, more difficult than they have to be. What are some of our favorites?
Running Kafka as a Service at Scale
Sriram Subramanian, Director, Platform & Infra Engineering, Confluent
Kafka and the Polyglot Programmer
Edoardo Comar, Senior Developer, IBM; Andrew Schofield, Chief Architect, Hybrid Cloud Messaging, IBM
Best Practices for Running Kafka on Docker Containers
Nanda Vijaydev, Senior Director, Solutions Management, BlueData
Worldwide Scalable and Resilient Messaging Services with Kafka and Kafka Streams
Masaru Dobashi, Chief Engineer, NTT-DATA; Shingo Omura, Software Engineer, Chatwork, Inc.
Exactly-once Stream Processing with Kafka Streams
Guozhang Wang, Engineer, Confluent
Building Event-Driven Services with Stateful Streams
Benjamin Stopford, Engineer, Confluent
Stream Processing in Python – 10 Ways to Avoid Summoning Cthulhu
Holden Karau, Principal Software Engineer, IBM
Kafka Stream Processing for Everyone with KSQL
Nick Dearden, Director of Engineering, Confluent
Portable Streaming Pipelines with Apache Beam
Frances Perry, Software Engineer, Google
Much as SQL stands as a lingua franca for declarative data analysis, Apache Beam aims to provide a portable standard for expressing robust, out-of-order data processing pipelines in a variety of languages across a variety of platforms. By cleanly separating the user’s processing logic from the details of the underlying execution engine, the same pipelines will run on any Apache Beam runtime environment, whether it’s on-premises or in the cloud, on open source frameworks like Apache Spark or Apache Flink, or on managed services like Google Cloud Dataflow. In this talk, I will:
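The core idea in the abstract above, declaring transform logic once and handing it to interchangeable execution engines, can be sketched in a few lines of plain Python. This is a toy illustration of the concept, not the Apache Beam API; the `Pipeline` and `local_runner` names here are hypothetical.

```python
# Toy sketch of runner-portable pipelines: the user declares transforms
# once, and any compatible "runner" can execute the same definition.
# This is illustrative only and does not reflect Beam's actual classes.

class Pipeline:
    def __init__(self):
        self.transforms = []

    def apply(self, fn):
        # Record the user's processing logic without executing it.
        self.transforms.append(fn)
        return self

    def run(self, runner, data):
        # The runner decides HOW each transform is executed; the
        # pipeline only describes WHAT should happen to the data.
        for fn in self.transforms:
            data = runner(fn, data)
        return data

def local_runner(fn, data):
    # A trivial in-process engine; a distributed engine could execute
    # the same pipeline without any change to the user's logic.
    return [fn(x) for x in data]

p = Pipeline().apply(lambda x: x * 2).apply(lambda x: x + 1)
print(p.run(local_runner, [1, 2, 3]))  # [3, 5, 7]
```

Swapping `local_runner` for a different engine leaves the pipeline definition untouched, which is the portability property the talk describes.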
Real-Time Document Rankings with Kafka Streams
Hunter Kelly, Senior Software/Data Engineer, Zalando
Fast Data in Supply Chain Planning
Jeroen Soeters, Lead Developer, ThoughtWorks
Query the Application, Not a Database: “Interactive Queries” in Kafka’s Streams API
Matthias Sax, Engineer, Confluent
Building Stateful Financial Applications with Kafka Streams
Charles Reese, Senior Software Engineer, Funding Circle; Matthias Margush, Software Engineer, Funding Circle
Efficient Schemas in Motion with Kafka and Schema Registry
Pat Patterson, Community Champion, StreamSets Inc.
One Data Center is Not Enough: Scaling Apache Kafka Across Multiple Data Centers
Gwen Shapira, Product Manager, Confluent
DNS for Data: The Need for a Stream Registry
Praveen Hirsave, Director Cloud Engineering, HomeAway; Rene Parra, Chief Architect, HomeAway
From Scaling Nightmare to Stream Dream: Real-time Stream Processing at Scale
Amy Boyle, Software Engineer, New Relic
Kafka Connect Best Practices – Advice from the Field
Randall Hauch, Engineer, Confluent
How Blizzard Used Kafka to Save Our Pipeline (and Azeroth)
Jeff Field, Systems Engineer, Blizzard
Billions of Messages a Day – Yelp’s Real-time Data Pipeline
Justin Cunningham, Technical Lead, Software Engineering, Yelp
Body Armor for Distributed Systems
Michael Egorov, Co-founder and CTO, NuCypher
Multi-Tenant, Multi-Cluster and Hierarchical Kafka Messaging Service
Allen Wang, Senior Software Engineer, Netflix
Kafka is easy to set up as a messaging service and serves that purpose well. However, it gets complicated in a multi-tenant environment, where users have different SLAs for availability, durability and latency. As traffic grows, managing a huge, monolithic Kafka cluster in a cloud environment has proven problematic and hard to scale.
At Netflix, our Kafka messaging system has evolved into a multi-cluster, hierarchical service that serves over a trillion messages per day. Topics are allocated to either shared or dedicated clusters according to SLA requirements and can be migrated across clusters. Infrastructure routers connect Kafka clusters and provide hierarchical access to data. With the help of enhanced client libraries and proxies, clients interact with the service through higher-level APIs and abstracted access points, so Kafka deployments are transparent to clients. Enabled by our client libraries and the Netflix cloud infrastructure, we are able to mitigate Kafka cluster-level failures with a failover mechanism that is likewise transparent to clients.
In this talk, we are going to discuss why this architecture is necessary and how we have implemented it, covering essential components including management and self-service tools, infrastructure routers, client libraries, proxies and a monitoring service.
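The SLA-based topic placement and transparent failover described in this abstract can be sketched as a small routing layer. The sketch below is a minimal illustration under stated assumptions; the class names, SLA tiers and cluster identifiers are hypothetical and do not reflect Netflix's actual implementation.

```python
# Hypothetical sketch: route a topic to a cluster by SLA tier, with
# ordered failover targets, so the client never names a cluster directly.
# All identifiers here are illustrative, not Netflix's real components.

from dataclasses import dataclass, field

@dataclass
class Topic:
    name: str
    sla: str  # "dedicated" for strict SLAs, "shared" otherwise

@dataclass
class ClusterRegistry:
    # SLA tier -> ordered cluster list: first entry is the primary,
    # later entries are failover targets.
    clusters: dict = field(default_factory=lambda: {
        "dedicated": ["dedicated-east", "dedicated-west"],
        "shared": ["shared-east", "shared-west"],
    })
    failed: set = field(default_factory=set)

    def resolve(self, topic: Topic) -> str:
        """Return the first healthy cluster for the topic's SLA tier."""
        for cluster in self.clusters[topic.sla]:
            if cluster not in self.failed:
                return cluster
        raise RuntimeError(f"no healthy cluster for SLA tier {topic.sla!r}")

registry = ClusterRegistry()
orders = Topic("orders", sla="dedicated")
print(registry.resolve(orders))        # primary dedicated cluster
registry.failed.add("dedicated-east")  # simulate a cluster-level failure
print(registry.resolve(orders))        # failover, invisible to the client
```

Because clients call `resolve` rather than hard-coding a cluster, both topic migration and cluster failover stay transparent to them, which is the property the abstract emphasizes.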
Accelerating Particles to Explore the Mysteries of the Universe and How Kafka Can Help on That
Martin Marquez, Data Scientist & Data Streaming Services Project Leader, CERN
Database Streaming at WePay with Kafka and Debezium
Moira Tagle, Senior Software Engineer, WePay
Riot’s Journey to Global Kafka Aggregation
Singe Graham, Systems Engineer, Riot Games
A Real-Time Streaming Platform for Communications and Much More
Vijay Pasam, Senior Software Development Manager, Capital One; Japan Bhatt, Master Software Engineer, Capital One
Shopify Flash-Sales and Apache Kafka
Sam Obeid, Senior Production Engineer, Shopify
Infrastructure for Streaming Applications in Criteo
Oleksandr Kaidannik, Software Engineer, Criteo
Streaming Data Applications on Docker
Nikki Thean, Staff Engineer, Etsy