Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of continuously changing data in real time? The answer is stream processing, and Apache Kafka® is a core hub for streaming data.
This talk will provide a brief introduction to Apache Kafka and describe its usage as a platform for streaming data. It will explain how Kafka serves as a foundation for both streaming data pipelines and applications that consume and process real-time data streams. It will introduce some of the newer components of Kafka that help make this possible, including Kafka Connect, a framework for capturing continuous data streams, and Kafka Streams, a lightweight stream processing library.
Mitch Henderson, Sr. Technical Account Manager, Confluent
Mitch Henderson is a Sr. Technical Account Manager at Confluent. His background is rooted in DevOps roles, mostly focused on bringing new technologies into the enterprise. Before joining Confluent, Mitch was a field engineer at DataStax and played integral roles at Thomson Reuters and CenturyLink, where he built large-scale deployments, planned auditing services, and was a key contributor to financial data delivery with a focus on quality, maintainability, stability, and scalability.