Spark Summit EU talk by Kaarthik Sivashanmugam

TRANSCRIPT

Page 1: Spark Streaming At Bing Scale

SPARK SUMMIT EUROPE 2016

Spark Streaming At Bing Scale

Kaarthik Sivashanmugam (@kaarthikss)

Page 2: Scale

• Billions of search queries per month
• Hundreds of services power the Bing stack
• Thousands of machines across several data centers
• Tens of TBs of events per hour
• Several data processing frameworks

Page 3: Data Curation

• Events of individual services have little value in isolation
• Need correlation of events and curated datasets
  – at scale, on time, with high fidelity
  – contributes directly to improving quality of services and monetization

Page 4: Data Pipelines

• Traditionally implemented entirely using batch processing in the COSMOS infrastructure:
  o Storage: DFS (similar to HDFS)
  o Execution: Dryad (general purpose, more expressive than MapReduce)
  o Query: SCOPE (SQL-style scripting language that supports inline C#)
• Data pipelines are adopting near real-time (NRT) processing, which brings new issues to address

Page 5: NRT Data Pipelines

Key issues to address in stream processing applications:
• Events are generated in different DCs at a rapid rate
• Events arrive out of order
• Events are delayed or get lost
• Managing state can be very expensive and hard to get right

Page 6: NRT Processing Scenario – Event Merge Pipeline

[Diagram: an event-merge pipeline built on Mobius. Three event streams (G, U, C) are read from several data centers and merged in a 10-minute application-time window, with the partially merged state checkpointed, before the merged events are output. The diagram flags the four problem areas covered next: unbalanced partitions, slow Kafka brokers, expensive offset-range lookup, and join by application time.]

Page 7: Unbalanced Kafka Partitions

• Direct API: each Kafka partition maps to one RDD partition
• The largest partition is the long pole in processing
• Solution (sketched below):
  – Repartition data from one Kafka partition into multiple RDD partitions, without the extra shuffle cost of DStream.Repartition()
  – The repartition threshold is configurable per topic
  – DynamicPartitionKafkaRDD.scala at github.com/Microsoft/Mobius
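
The core of the fix is arithmetic on offset ranges rather than a shuffle: any Kafka offset range larger than its topic's threshold is cut into fixed-size chunks, so one hot Kafka partition yields several RDD partitions. Below is a minimal, self-contained Scala sketch of just that splitting step (OffsetRange, splitOffsetRanges, and the "clicks" topic are names we made up for illustration; the real logic lives in DynamicPartitionKafkaRDD.scala):

  // One offset range per {topic, partition}, as in the Kafka direct API.
  case class OffsetRange(topic: String, partition: Int, fromOffset: Long, untilOffset: Long)

  // Split each range into chunks of at most the per-topic threshold, so a
  // large Kafka partition becomes several RDD partitions. Assumes every
  // topic has a configured threshold.
  def splitOffsetRanges(ranges: Seq[OffsetRange],
                        maxRddPartitionSize: Map[String, Long]): Seq[OffsetRange] =
    ranges.flatMap { o =>
      val chunk = maxRddPartitionSize(o.topic)
      (o.fromOffset until o.untilOffset by chunk).map { start =>
        o.copy(fromOffset = start, untilOffset = math.min(start + chunk, o.untilOffset))
      }
    }

  // Example: a range of 250 offsets with threshold 100 yields three RDD
  // partitions covering offsets [0,100), [100,200), [200,250).
  // splitOffsetRanges(Seq(OffsetRange("clicks", 0, 0, 250)), Map("clicks" -> 100))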

Page 8: Slow Kafka Brokers

• Slow Kafka brokers increase batch time
• Delay in starting the next batch accumulates
• Solution (sketched below):
  – Submit the Kafka data-fetch job on time (as defined by the batch interval) in a separate thread, even when the previous batch is delayed
  – CSharpDStream.scala at github.com/Microsoft/Mobius
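
One way to picture the fix: decouple "fetch the next batch's data" from "finish processing the previous batch", so a slow batch cannot push the fetch off schedule. A minimal Scala sketch with a scheduled executor (fetchBatchData and processBatch are hypothetical stand-ins; Mobius implements the idea inside CSharpDStream.scala by submitting the data-fetch job from a separate thread when the previous batch has not completed):

  import java.util.concurrent.{Executors, TimeUnit}
  import java.util.concurrent.atomic.AtomicLong

  object OnTimeFetch {
    // Stand-ins: "materialize this batch's Kafka data" and
    // "run the (possibly slow) processing of a batch".
    def fetchBatchData(id: Long): Unit = println(s"fetched data for batch $id")
    def processBatch(id: Long): Unit = { Thread.sleep(3000); println(s"processed batch $id") }

    def main(args: Array[String]): Unit = {
      val batchIntervalMs = 2000L
      val scheduler = Executors.newSingleThreadScheduledExecutor()
      val workers = Executors.newFixedThreadPool(2)
      val nextBatch = new AtomicLong(0)

      // The fetch fires strictly on the batch interval; processing of earlier
      // batches runs on a separate pool and may lag without delaying fetches.
      scheduler.scheduleAtFixedRate(new Runnable {
        def run(): Unit = {
          val id = nextBatch.getAndIncrement()
          fetchBatchData(id)
          workers.submit(new Runnable { def run(): Unit = processBatch(id) })
        }
      }, 0, batchIntervalMs, TimeUnit.MILLISECONDS)
    }
  }

With a 2-second batch interval and 3-second processing, the fetch at t=2s still happens on time; only the heavy processing queues up behind the previous batch.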

Page 9: Find Offset-Range Expensive

• Finding the offset range for {DC × Topic × Partition} is expensive
  – Several DCs, 3 topics each, an average of 170 partitions per topic
  – {Get metadata + get offset range} took 10 minutes for a 2-minute batch window
• {Metadata refresh + find offset} and data processing were not parallel
• Solution (sketched below):
  – Move the find-offset-range work to a separate thread
  – Materialize and cache the Kafka RDD in that thread
  – DynamicPartitionKafkaInputDStream.scala at github.com/Microsoft/Mobius
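
The shape of this fix is a producer/consumer handoff: a background thread does the expensive metadata and offset-range work and materializes the Kafka RDD, while the batch path only dequeues whatever is already prepared. A minimal Scala sketch (prepareBatch and the queue are our illustration; the real code is DynamicPartitionKafkaInputDStream.scala):

  import java.util.concurrent.{ConcurrentLinkedQueue, Executors, TimeUnit}

  object PreparedBatches {
    // Stand-in for: refresh metadata, find offset ranges, build the
    // Kafka RDD, then materialize and cache it.
    def prepareBatch(): String = { Thread.sleep(500); s"batch prepared at ${System.currentTimeMillis}" }

    private val ready = new ConcurrentLinkedQueue[String]()

    def main(args: Array[String]): Unit = {
      val refresher = Executors.newSingleThreadScheduledExecutor()
      // Background thread: the expensive Kafka work happens here,
      // off the batch-generation path.
      refresher.scheduleAtFixedRate(new Runnable {
        def run(): Unit = ready.add(prepareBatch())
      }, 0, 2, TimeUnit.SECONDS)

      // Batch path: a non-blocking dequeue, so metadata calls can never
      // stall batch generation; an empty queue just means an empty batch.
      for (_ <- 1 to 5) {
        Option(ready.poll()) match {
          case Some(batch) => println(s"processing $batch")
          case None        => println("nothing prepared yet; emitting an empty batch")
        }
        Thread.sleep(2000)
      }
      refresher.shutdown()
    }
  }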

Page 10: Join By Application-Time

• Application-time based join is not available in Spark 1.x
• Solution (sketched below):
  – Use a custom join function in DStream.UpdateStateByKey()
  – The custom join function enforces a time window based on application time
  – UpdateStateByKey maintains partially joined events as the state
  – PairDStreamFunctions.cs at github.com/Microsoft/Mobius
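
In plain terms: for each key, keep the not-yet-matched events as state, join each new batch against that state using the events' own timestamps, and drop state once its application-time window has expired. A minimal, Spark-free Scala sketch of such an update function (Event, the "G"/"C" source tags, and updateState are our illustration; the Mobius version is the C# join function passed to UpdateStateByKey in PairDStreamFunctions.cs):

  object AppTimeJoin {
    case class Event(source: String, eventTimeMs: Long, payload: String)
    val WindowMs: Long = 10 * 60 * 1000L  // 10-minute application-time window

    // Called once per key per batch: newEvents is this batch's data for the
    // key, oldState holds events still waiting for a match from the other
    // stream. Returns (joined pairs to emit, state for the next batch).
    def updateState(newEvents: Seq[Event],
                    oldState: Seq[Event]): (Seq[(Event, Event)], Seq[Event]) = {
      val all = oldState ++ newEvents
      if (all.isEmpty) return (Nil, Nil)
      // Application time, not wall-clock time: the newest event time seen.
      val currentTime = all.map(_.eventTimeMs).max
      val (gEvents, cEvents) = all.partition(_.source == "G")
      val joined = for (g <- gEvents; c <- cEvents) yield (g, c)
      val matched = joined.flatMap { case (g, c) => Seq(g, c) }.toSet
      // Unmatched events wait in state until the window expires on them.
      val carry = all.filterNot(matched).filter(_.eventTimeMs + WindowMs >= currentTime)
      (joined, carry)
    }
  }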

Page 11: Thank You

SPARK SUMMIT EUROPE 2016

THANK YOU.

Page 12: Additional Slides

ADDITIONAL SLIDES

Page 13: Dynamic Repartition

[Charts: per-batch processing times at a 2-minute interval, before and after dynamic repartition.]

Pseudo code:

  class DynamicPartitionKafkaRDD(kafkaPartitionOffsetRanges) {
    override def getPartitions = {
      // repartition threshold per topic, loaded from config
      val maxRddPartitionSize: Map[topic, partitionSize] = <loaded from config>
      // apply the max repartition threshold: split each offset range
      // into RDD partitions of at most rddPartitionSize offsets each
      kafkaPartitionOffsetRanges.flatMap { case o =>
        val rddPartitionSize = maxRddPartitionSize(o.topic)
        (o.fromOffset until o.untilOffset by rddPartitionSize).map(s =>
          (o.topic, o.partition, s, math.min(o.untilOffset, s + rddPartitionSize)))
      }
    }
  }

Source code: DynamicPartitionKafkaRDD.scala - https://github.com/Microsoft/Mobius

Page 14: On-time Kafka fetch job submission

[Diagram, driver perspective: for each batch (A, B, C), job #1 "fetch data" is submitted on time from a new thread; job #2 "UpdateStateByKey" and job #3 "checkpoint" follow on the main thread. The fetch job for the next batch starts on schedule even while a previous batch's jobs are still running.]

Pseudo code:

  class CSharpStateDStream {
    override def compute(validTime) = {
      val lastState = getOrCompute(validTime - batchInterval)
      val rdd = parent.getOrCompute(validTime)
      if (!lastBatchCompleted) {
        // last batch is not complete yet: run a fetch-data job in a
        // separate thread to materialize this batch's RDD
        rdd.cache()
        ThreadPool.execute(sc.runJob(rdd))
        // wait for the fetch job to complete
      }
      <compute the UpdateStateByKey DStream>
    }
  }

Source code: CSharpDStream.scala - https://github.com/Microsoft/Mobius


Page 15: Parallel Kafka metadata refresh + RDD materialization

[Diagram, driver perspective: a separate thread refreshes Kafka metadata, finds the offset ranges, and materializes and caches each Kafka RDD (at t1, t2, t3), while the main thread submits the jobs for batch1, batch2, batch3 in parallel.]

Pseudo code:

  class DynamicPartitionKafkaInputDStream {
    // a separate scheduler thread runs at refreshOffsetsInterval
    refreshOffsetsScheduler.scheduleAtFixedRate(
      <get offset ranges>
      <generate Kafka RDD>
      // materialize and cache the RDD
      sc.runJob(kafkaRdd.cache)
      <enqueue Kafka RDD>
    )

    override def compute(validTime) = {
      <dequeue Kafka RDD, non-blockingly>
    }
  }

Source code: DynamicPartitionKafkaInputDStream.scala - https://github.com/Microsoft/Mobius

Page 16: Use UpdateStateByKey to join DStreams

[Diagram: a G/W DStream and a Click DStream feed batch jobs 1, 2, 3 (RDD @ time1, time2, time3); UpdateStateByKey folds each batch into a StateDStream that holds the partially joined events.]

Pseudo code:

  Iterator[(K, S)] JoinFunction(
      int pid, Iterator[(K: key, Iterator[V]: newEvents, S: oldState)] events) {
    val currentTime = events.newEvents.max(e => e.eventTime);
    foreach (var e in events) {
      val newState = <oldState join newEvents>
      if (oldState.min(s => s.eventTime) + TimeWindow < currentTime)  // TimeWindow = 10 minutes
        <output joined events to external storage>
      else
        return (key, newState)
    }
  }

UpdateStateByKey C# API: PairDStreamFunctions.cs - https://github.com/Microsoft/Mobius