Streaming Processing with a Distributed Commit Log: Apache Kafka

Upload: joe-stein

Posted on 20-Mar-2017

TRANSCRIPT

Page 1: Streaming Processing with a Distributed Commit Log


Apache Kafka

Page 2: Streaming Processing with a Distributed Commit Log

2

Apache Kafka committer and PMC member. A frequent speaker on both Hadoop and Cassandra, Joe is the Co-Founder and CTO of Elodina Inc. Joe has been a distributed systems developer and architect for over {years}, building backend systems that supported over one hundred million unique devices a day and processed trillions of events. He blogs and hosts a podcast about Hadoop and related systems at All Things Hadoop.

@allthingshadoop

$(whoami)

Page 3: Streaming Processing with a Distributed Commit Log

3

● Introduction to Apache Kafka
● Brokers “as a Service”
● Producers & Consumers “as a Service”
● More Use Cases for Kafka

Overview

Page 4: Streaming Processing with a Distributed Commit Log

Apache Kafka

Page 5: Streaming Processing with a Distributed Commit Log

5

Apache Kafka was first open sourced by LinkedIn in 2011

Papers

● Building a Replicated Logging System with Apache Kafka http://www.vldb.org/pvldb/vol8/p1654-wang.pdf

● Kafka: A Distributed Messaging System for Log Processing http://research.microsoft.com/en-us/um/people/srikanth/netdb11/netdb11papers/netdb11-final12.pdf

● Building LinkedIn’s Real-time Activity Data Pipeline http://sites.computer.org/debull/A12june/pipeline.pdf

● The Log: What Every Software Engineer Should Know About Real-time Data's Unifying Abstraction http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying

http://kafka.apache.org/

Apache Kafka

Page 6: Streaming Processing with a Distributed Commit Log

It often starts with just one data pipeline

Page 7: Streaming Processing with a Distributed Commit Log

Data Pipelines

Page 8: Streaming Processing with a Distributed Commit Log

Data Pipelines

Page 9: Streaming Processing with a Distributed Commit Log

Data Pipelines

Page 10: Streaming Processing with a Distributed Commit Log

Point to Point Data Pipelines are Problematic

Page 11: Streaming Processing with a Distributed Commit Log

Reuse of data pipelines for new providers

Page 12: Streaming Processing with a Distributed Commit Log

Reuse of existing providers for new consumers

Page 13: Streaming Processing with a Distributed Commit Log

Eventually the solution becomes the problem

Page 14: Streaming Processing with a Distributed Commit Log

Decouple Data Pipelines

Page 15: Streaming Processing with a Distributed Commit Log

Decouple Data Pipelines

Page 16: Streaming Processing with a Distributed Commit Log

Decouple Data Pipelines

Page 17: Streaming Processing with a Distributed Commit Log

Topics & Partitions

Page 18: Streaming Processing with a Distributed Commit Log

Log Segments
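Each partition is stored as an append-only log broken into fixed-size segments, with every message addressed by a monotonically increasing offset. A minimal in-memory sketch of that idea (illustrative names only; Kafka's real segments are on-disk files with index lookups):

```python
# Minimal sketch of an append-only log split into fixed-size segments.
# Illustrative only -- Kafka's real segments are files with index lookups.

class SegmentedLog:
    def __init__(self, segment_size=3):
        self.segment_size = segment_size  # messages per segment
        self.segments = [[]]              # each segment is a list of (offset, message)
        self.next_offset = 0

    def append(self, message):
        """Append a message; roll a new segment when the active one is full."""
        if len(self.segments[-1]) >= self.segment_size:
            self.segments.append([])      # "roll" to a fresh segment
        self.segments[-1].append((self.next_offset, message))
        self.next_offset += 1
        return self.next_offset - 1       # offset assigned to this message

    def read(self, offset):
        """Locate the segment containing `offset`, then scan within it."""
        seg = self.segments[offset // self.segment_size]
        for off, msg in seg:
            if off == offset:
                return msg
        raise KeyError(offset)

log = SegmentedLog(segment_size=3)
for word in ["a", "b", "c", "d", "e"]:
    log.append(word)
print(len(log.segments))  # 2 segments after 5 appends
print(log.read(4))        # "e"
```

Because segments are immutable once rolled, retention becomes cheap: old data is deleted by dropping whole segment files rather than rewriting the log.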

Page 19: Streaming Processing with a Distributed Commit Log

Read and Write Keys & Values to each partition
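The key determines which partition a record lands in, so all records with the same key stay in order on one partition. Kafka's default partitioner hashes keys with murmur2; the sketch below uses md5 purely for a deterministic illustration:

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Pick a partition by hashing the key: the same key always maps to the
    same partition, preserving per-key ordering.
    (Kafka's default partitioner uses murmur2; md5 here is illustrative.)"""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Every write with key b"user-42" lands on the same one of 6 partitions:
p1 = partition_for(b"user-42", 6)
p2 = partition_for(b"user-42", 6)
assert p1 == p2
```

Note the corollary: changing the partition count remaps keys, which is why ordering guarantees are per-partition, not per-topic.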

Page 20: Streaming Processing with a Distributed Commit Log

Producers

Page 21: Streaming Processing with a Distributed Commit Log

Consumers
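Consumers pull from the log and track their own position (offset) per partition, committing it so a restart resumes where the group left off. A toy simulation of that offset-tracking contract (not a real client; real consumers commit offsets back to Kafka itself):

```python
# Sketch of consumer offset tracking: the consumer owns its position in the
# log and commits it, so a restart resumes from the last committed offset.
# Illustrative only -- real consumers commit offsets back to Kafka.

log = ["m0", "m1", "m2", "m3", "m4"]     # stand-in for one partition
committed = {"group-a": 0}               # last committed offset per group

def poll(group, max_records=2):
    """Read up to max_records starting at the group's committed offset."""
    start = committed[group]
    records = log[start:start + max_records]
    committed[group] = start + len(records)   # commit after processing
    return records

print(poll("group-a"))   # ['m0', 'm1']
print(poll("group-a"))   # ['m2', 'm3']
print(poll("group-a"))   # ['m4']
```

Because the broker never deletes messages on read, many independent groups can replay the same partition at their own pace.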

Page 22: Streaming Processing with a Distributed Commit Log

Brokers

Kafka Wire Protocol - http://kafka.apache.org/protocol.html

● Preliminaries

○ Network
○ Partitioning and bootstrapping
○ Partitioning Strategies
○ Batching
○ Versioning and Compatibility

● The Protocol

○ Protocol Primitive Types
○ Notes on reading the request format grammars
○ Common Request and Response Structure
○ Message Sets

● Constants

○ Error Codes
○ Api Keys

● The Messages

● Some Common Philosophical Questions
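Every request in the wire protocol shares a common framing: an int32 size prefix followed by a header of api_key, api_version, correlation_id, and a length-prefixed client_id, all big-endian. A sketch of the original (v0) header layout described in the protocol docs, header only; a real request would append the API-specific body:

```python
import struct

def encode_request_header(api_key, api_version, correlation_id, client_id):
    """Frame a Kafka v0 request header: int16 api_key, int16 api_version,
    int32 correlation_id, then client_id as an int16-length-prefixed string,
    all preceded by an int32 size. Big-endian, per the protocol docs.
    (Header only -- a real request appends the API-specific body.)"""
    cid = client_id.encode("utf-8")
    body = struct.pack(">hhih", api_key, api_version, correlation_id, len(cid)) + cid
    return struct.pack(">i", len(body)) + body

frame = encode_request_header(api_key=3, api_version=0,
                              correlation_id=1, client_id="demo")
size, = struct.unpack_from(">i", frame, 0)
print(size, len(frame) - 4)   # the size prefix matches the framed payload
```

The size prefix is what lets a client read exactly one response off a plain TCP stream, and the correlation_id lets it match pipelined responses to requests.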

Page 23: Streaming Processing with a Distributed Commit Log

Data Durability
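Durability comes from replication: a write is committed once the partition leader and its in-sync replicas have it, and the producer chooses how many acknowledgements to wait for. A toy model of that trade-off (not Kafka code, just the acks semantics):

```python
# Toy model of producer acks vs. durability. Not Kafka code -- just the
# trade-off: acks=0 is fire-and-forget, acks=1 waits for the leader only,
# acks="all" waits for every in-sync replica.

def is_durable_after_leader_loss(acks, replicated_to):
    """A write survives losing the leader only if some follower has it."""
    if acks == "all":
        return True                 # acked only after all ISR replicas had it
    return replicated_to > 1        # leader plus at least one follower

# acks=1: the leader acked before followers replicated -> data can be lost
print(is_durable_after_leader_loss(acks=1, replicated_to=1))     # False
print(is_durable_after_leader_loss(acks="all", replicated_to=3)) # True
```

The replication factor and the producer's acks setting are therefore two halves of one durability decision.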

Page 24: Streaming Processing with a Distributed Commit Log

Client Libraries

Community Clients https://cwiki.apache.org/confluence/display/KAFKA/Clients

● Go (aka golang) Pure Go implementation with full protocol support. Consumer and Producer implementations included, GZIP and Snappy compression supported.

● Python - Pure Python implementation with full protocol support. Consumer and Producer implementations included, GZIP and Snappy compression supported.

● C - High performance C library with full protocol support
● Ruby - Pure Ruby, Consumer and Producer implementations included, GZIP and Snappy compression supported. Ruby 1.9.3 and up (CI runs MRI 2.

● Clojure - Clojure DSL for the Kafka API
● JavaScript (NodeJS) - NodeJS client in a pure JavaScript implementation

Page 25: Streaming Processing with a Distributed Commit Log

Operationalizing Kafka

https://kafka.apache.org/documentation.html#basic_ops

Basic Kafka Operations

● Adding and removing topics

● Modifying topics

● Graceful shutdown

● Balancing leadership

● Checking consumer position

● Mirroring data between clusters

● Expanding your cluster

● Decommissioning brokers

● Increasing replication factor

Page 26: Streaming Processing with a Distributed Commit Log

Kafka “as a Service”

Page 27: Streaming Processing with a Distributed Commit Log

27

CURRENT STATE OF IMPLEMENTATION
11 STEPS BEFORE ANY BUSINESS VALUE IS CREATED

1 SET UP Instances → AWS / GCE / etc..

2 Repeat above by # of instances

3 SET UP uniformly, harden, secure every machine

4 DOWNLOAD: Apache Kafka

5 LEARN to install, run on multiple nodes / high availability

6 LEARN to run on multiple data centers / multiple racks

7 CONFIGURE nodes, tables specifically by cluster

8 MONITOR performance, isolate bottlenecks

9 OPTIMIZE system / team to hands off through next objective

10 MONITOR for failure and build disaster recovery protocol

11 FAILURE RECOVERY investigation, recovery and spin back up time


AND process must repeat by # of instances and technologies

Page 28: Streaming Processing with a Distributed Commit Log

28

ELODINA AUTOMATES DEPLOYMENT, SCALING AND MAINTENANCE
Reduce steps and learning curve to a THREE stage repeatable process


DEPLOY - Platform modules allow for deployment in minutes

SCALE - Grid scales automatically with low latency based on real-time traffic patterns

OBSERVE - Single destination to observe and troubleshoot from the CLI, REST API or GUI

Page 29: Streaming Processing with a Distributed Commit Log

29

BUILT-IN FRAMEWORKS DIRECTLY IN PLATFORM
Leading technologies deployable across any compute resource


[Figure: matrix of Technologies × Resources]

Page 30: Streaming Processing with a Distributed Commit Log

30

IMMEDIATE OPERATIONAL BENEFITS

Removing Fragmentation with Interoperability

Cuts through the crowded market decision of which software, or stack of software, to choose and how to make it interoperate with your data center.

Immediate Efficiency & Reliability

Operational resources deployed across multiple data centers and regions, streamlined with dynamic compute and automated scheduling capabilities.

Automated Speed and Recovery

Reduce costs and time to market across the development cycle, and automate recovery from failure.

Page 31: Streaming Processing with a Distributed Commit Log

What is Mesos?

Page 32: Streaming Processing with a Distributed Commit Log
Page 33: Streaming Processing with a Distributed Commit Log
Page 34: Streaming Processing with a Distributed Commit Log
Page 35: Streaming Processing with a Distributed Commit Log
Page 36: Streaming Processing with a Distributed Commit Log

Scheduler

Page 37: Streaming Processing with a Distributed Commit Log

Executors

Page 38: Streaming Processing with a Distributed Commit Log

mesos/kafka

https://github.com/mesos/kafka

Page 39: Streaming Processing with a Distributed Commit Log

Scheduler

● Provides the operational automation for a Kafka cluster.
● Manages the changes to the broker's configuration.
● Exposes a REST API for the CLI to use or any other client.
● Runs on Marathon for high availability.
● Broker failure management “stickiness”.

Executor

● The executor interacts with the Kafka broker as an intermediary to the scheduler.

Scheduler & Executor

Page 41: Streaming Processing with a Distributed Commit Log

Navigating Operations

● Adding brokers to the cluster

● Updating broker configurations

● Starting brokers

● Stopping brokers

● Restarting brokers

● Removing brokers

● Retrieving broker log

● Rebalancing brokers in the cluster

● Listing topics

● Adding topic

● Updating topic

Page 42: Streaming Processing with a Distributed Commit Log

Kafka as a Service

Page 43: Streaming Processing with a Distributed Commit Log


Page 44: Streaming Processing with a Distributed Commit Log

Kafka Consumers “as a Service”

Page 45: Streaming Processing with a Distributed Commit Log
Page 46: Streaming Processing with a Distributed Commit Log

http://heronstreaming.io

Page 47: Streaming Processing with a Distributed Commit Log
Page 48: Streaming Processing with a Distributed Commit Log
Page 49: Streaming Processing with a Distributed Commit Log

Topology Master

The Topology Master (TM) manages a topology throughout its entire lifecycle, from the time it’s submitted until it’s ultimately killed. When Heron deploys a topology it starts a single TM and multiple containers. The TM creates an ephemeral ZooKeeper node to ensure that there’s only one TM for the topology and that the TM is easily discoverable by any process in the topology. The TM also constructs the physical plan for a topology, which it relays to different components.

Container

Each Heron topology consists of multiple containers, each of which houses multiple Heron Instances, a Stream Manager, and a Metrics Manager. Containers communicate with the topology’s TM to ensure that the topology forms a fully connected graph. For an illustration, see the figure in the Topology Master section above.

Page 50: Streaming Processing with a Distributed Commit Log

Stream Manager

The Stream Manager (SM) manages the routing of tuples between topology components. Each Heron Instance in a topology connects to its local SM, while all of the SMs in a given topology connect to one another to form a network. Below is a visual illustration of a network of SMs:

Page 51: Streaming Processing with a Distributed Commit Log

Heron Instance

A Heron Instance (HI) is a process that handles a single task of a spout or bolt, which allows for easy debugging and profiling.

Currently, Heron only supports Java, so all HIs are JVM processes, but this will change in the future.

Heron Instance Configuration

HIs have a variety of configurable parameters that you can adjust at each phase of a topology’s lifecycle.

Page 52: Streaming Processing with a Distributed Commit Log

Heron Instance

Page 53: Streaming Processing with a Distributed Commit Log

Back Pressure Built In
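When a downstream component's buffer fills, Heron's Stream Managers signal upstream spouts to stop emitting until the buffer drains, rather than dropping tuples. A bounded-buffer sketch of the same high/low water mark idea (illustrative only, not Heron's implementation):

```python
from collections import deque

# Sketch of back pressure with a bounded buffer: the producer is told to
# stop once the queue hits a high-water mark and may resume after it
# drains below a low-water mark. Illustrative -- not Heron's actual code.

class BoundedBuffer:
    def __init__(self, high=4, low=2):
        self.q = deque()
        self.high, self.low = high, low
        self.backpressure = False

    def offer(self, item):
        """Returns False (apply back pressure) once the high-water mark is hit."""
        if self.backpressure:
            return False
        self.q.append(item)
        if len(self.q) >= self.high:
            self.backpressure = True
        return True

    def drain(self):
        """Consume one item; lift back pressure below the low-water mark."""
        if self.q:
            self.q.popleft()
        if len(self.q) <= self.low:
            self.backpressure = False   # upstream may resume

buf = BoundedBuffer()
accepted = [buf.offer(i) for i in range(6)]
print(accepted)   # [True, True, True, True, False, False]
```

The two thresholds give hysteresis: upstream is not toggled on and off for every single tuple, only when the buffer genuinely fills or drains.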

Page 54: Streaming Processing with a Distributed Commit Log

Metrics Manager

Each topology runs a Metrics Manager (MM) that collects and exports metrics from all components in a container. It then routes those metrics to both the Topology Master and to external collectors, such as Scribe, Graphite, or analogous systems.

You can adapt Heron to support additional systems by implementing your own custom metrics sink.

Page 55: Streaming Processing with a Distributed Commit Log

Cluster-level Components

Heron CLI

Heron has a CLI tool called heron that is used to manage topologies. Documentation can be found in Managing Topologies.

Heron Tracker

The Heron Tracker (or just Tracker) is a centralized gateway for cluster-wide information about topologies, including which topologies are running, being launched, being killed, etc. It relies on the same ZooKeeper nodes as the topologies in the cluster and exposes that information through a JSON REST API. The Tracker can be run within your Heron cluster (on the same set of machines managed by your Heron scheduler) or outside of it.

Instructions on running the Tracker, including JSON API docs, can be found in Heron Tracker.

Heron UI

Heron UI is a rich visual interface that you can use to interact with topologies. Through Heron UI you can see color-coded visual representations of the logical and physical plan of each topology in your cluster.

For more information, see the Heron UI document.

Page 56: Streaming Processing with a Distributed Commit Log
Page 57: Streaming Processing with a Distributed Commit Log
Page 58: Streaming Processing with a Distributed Commit Log
Page 59: Streaming Processing with a Distributed Commit Log
Page 60: Streaming Processing with a Distributed Commit Log
Page 61: Streaming Processing with a Distributed Commit Log
Page 62: Streaming Processing with a Distributed Commit Log
Page 63: Streaming Processing with a Distributed Commit Log
Page 64: Streaming Processing with a Distributed Commit Log
Page 65: Streaming Processing with a Distributed Commit Log
Page 66: Streaming Processing with a Distributed Commit Log
Page 67: Streaming Processing with a Distributed Commit Log
Page 68: Streaming Processing with a Distributed Commit Log

Other Kafka Use Cases

Page 69: Streaming Processing with a Distributed Commit Log

69

STACK EXAMPLE A
Use Case: Real-Time Data Analytics Ingestion

Page 70: Streaming Processing with a Distributed Commit Log

70

STACK EXAMPLE A+
Use Case: Real-Time Data Analytics Ingestion + Long-Term Storage for Batch

Page 71: Streaming Processing with a Distributed Commit Log

71

STACK EXAMPLE B
Use Case: Real-Time Data Streaming/Processing

Page 72: Streaming Processing with a Distributed Commit Log

72

STACK EXAMPLE B+
Use Case: Real-Time Data Streaming/Processing + Feedback Loop

Page 73: Streaming Processing with a Distributed Commit Log

73

STACK EXAMPLE C
Use Case: Message Queuing

Page 74: Streaming Processing with a Distributed Commit Log

74

STACK EXAMPLE C+
Use Case: Message Queuing + Priority Management

Page 75: Streaming Processing with a Distributed Commit Log

75

STACK EXAMPLE D
Use Case: Distributed Akka Remoting for Real-Time Decisioning

Page 76: Streaming Processing with a Distributed Commit Log

76

STACK EXAMPLE D+
Use Case: Distributed Akka Remoting for Real-Time Decisioning + Long-Term Batch

Page 77: Streaming Processing with a Distributed Commit Log

77

STACK EXAMPLE E
Use Case: Distributed Trace Services