Spark at Hackathon (8 Jan 2014)


TRANSCRIPT

1

Spark @ Hackathon
Madhukara Phatak, Zinnia Systems
@madhukaraphatak

2

Hackathon

“A hackathon is a programming ritual, usually done at night, where programmers solve overnight problems that would otherwise take years.”

- Anonymous

3

Where / When

- Dec 6th and 7th, 2013
- At the Big Data Conclave, hosted by Flutura
- Solving real-world problem(s) in 24 hours
- No restriction on the tools to be used
- Deliverables: results, working code, and visualization

4

What?

Predict the global energy demand for the next year using the energy usage data available for the last four years, in order to enable utility companies to handle the demand effectively.

- Per-minute usage data collected from smart meters
- Covers 2008 to 2012
- Around 133 MB uncompressed
- About 275,000 records
- Around 25,000 missing data records

5

Questions to be answered

- What would be the energy consumption for the next day?
- What would be the week-wise energy consumption for the next year?
- What would be the household's peak-time load (peak time is 7 AM to 10 AM) for the next month, during weekdays and during weekends?
- Assuming there was a full day of outage, calculate the revenue loss for a particular day next year by finding the Average Revenue Per Day (ARPD) of the household using the given tariff plan.
- Can you identify the device usage patterns?

6

Who

- A four-member team from Zinnia Systems
- Chose Spark/Scala for solving these problems
- Everyone except me was new to Scala and Spark
- Solved the problems on time
- Won first prize at the hackathon

7

Why Spark?

- Faster prototyping
- Excellent in-memory performance
- Uses Akka, which can run around 2.5 million concurrent actors in 1 GB of RAM
- Easy to debug: excellent integration with IntelliJ and Eclipse
- Little code to write: about 500 lines in total
- Something new to try at a hackathon

8

Solution

- Uses only core Spark
- Uses the Geometric Brownian Motion algorithm for prediction (a sketch of the idea follows below)
- Complete code available on GitHub, under the Apache license:
  https://github.com/zinniasystems/spark-energy-prediction
- Blog series:
  http://zinniasystems.com/blog/2013/12/12/predicting-global-energy-demand-using-spark-part-1/
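The deck does not show the prediction code itself, so here is a minimal, self-contained Scala sketch of Geometric Brownian Motion. The object name GbmSketch, the daily step size, and the example mu and sigma values are assumptions for illustration; in the actual solution the drift and volatility would be estimated from the historical usage data.

  import scala.util.Random

  object GbmSketch {
    // One simulated path of S(t+1) = S(t) * exp((mu - sigma^2 / 2) * dt + sigma * sqrt(dt) * Z),
    // where Z is a standard normal draw and dt is one step (here, one day).
    def simulate(start: Double, mu: Double, sigma: Double, steps: Int, rng: Random): Seq[Double] = {
      val dt = 1.0
      Iterator.iterate(start) { s =>
        s * math.exp((mu - 0.5 * sigma * sigma) * dt + sigma * math.sqrt(dt) * rng.nextGaussian())
      }.take(steps).toVector
    }

    def main(args: Array[String]): Unit = {
      // mu (drift) and sigma (volatility) are placeholder values here
      val demand = simulate(start = 100.0, mu = 0.001, sigma = 0.02, steps = 365, rng = new Random(42))
      println(demand.take(5))
    }
  }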

9

Embrace Scala

- Scala is the JVM language in which Spark is implemented.
- Though Spark provides Java and Python APIs, Scala feels more natural.
- If you are coming from Pig, you will feel at home with the Scala API.
- The Spark source base is small, so knowing Scala helps you peek at the Spark source whenever needed.
- Excellent REPL support (see the sample session below).
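To illustrate the REPL point, a session in the Spark shell might look like the following; the input file name is hypothetical, and sc is the SparkContext the shell pre-defines.

  scala> val usage = sc.textFile("energy-usage.csv")       // hypothetical input file
  scala> usage.count()                                     // action: triggers an actual read
  scala> val values = usage.map(_.split(",")(1).toDouble)
  scala> values.take(5)                                    // inspect a few parsed values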

10

Go functional

Spark encourages you to use functional idioms over object-oriented ones. Some of the functional idioms available are:

- Closures
- Function chaining
- Lazy evaluation

Example: standard deviation, whose core is the sum of squared deviations, sum((x_i - mean) * (x_i - mean)).

The Map/Reduce way: map calculates (x_i - mean) * (x_i - mean), and reduce does the sum.

11

Spark way

The functional way (fixed so the function actually returns the sum it computes):

  private def standDeviation(inputRDD: RDD[Double], mean: Double): Double = {
    inputRDD
      .map(value => (value - mean) * (value - mean))
      .reduce((firstValue, secondValue) => firstValue + secondValue)
  }

The imperative alternative below does not work in Spark: a val cannot be reassigned, and even with a var, the closure would mutate a driver-side copy that the executors never send back.

  // broken: does not compile (val reassignment), and would not aggregate correctly anyway
  private def standDeviation(inputRDD: RDD[Double], mean: Double): Double = {
    val sum = 0
    inputRDD.map(value => sum += (value - mean) * (value - mean))
  }

The code is available in EnergyUsagePrediction.scala.
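For context, here is a minimal, self-contained sketch of how the functional version might be driven; the local SparkContext, the sample readings, and the final divide-and-sqrt step are illustrative additions, not part of the original deck.

  import org.apache.spark.SparkContext
  import org.apache.spark.rdd.RDD

  object StdDevExample {
    def standDeviation(inputRDD: RDD[Double], mean: Double): Double =
      inputRDD.map(v => (v - mean) * (v - mean)).reduce(_ + _)

    def main(args: Array[String]): Unit = {
      val sc = new SparkContext("local", "stddev-example")
      val readings = sc.parallelize(Seq(3.1, 4.2, 2.8, 5.0, 3.9))
      val mean = readings.reduce(_ + _) / readings.count()
      // standDeviation returns the sum of squared deviations;
      // divide by n and take the square root for the deviation itself
      println(math.sqrt(standDeviation(readings, mean) / readings.count()))
      sc.stop()
    }
  }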

12

Use Tuples

- Map/Reduce is restricted to key/value pairs.
- Representing data such as grouped data is difficult with plain key/value pairs.
- Writables are too much work to develop and maintain.
- There was a Tuple MapReduce effort at some point.
- Spark (Scala) has tuples built in.

13

Tuple Example

Aggregating data over an hour:

  def hourlyAggregator(inputRDD: RDD[Record]): RDD[((String, Long), Record)]

The resulting RDD has a tuple key that combines the date and the hour of the day. These tuples can be passed as input to other functions.

Aggregating data over a day:

  def dailyAggregation(inputRDD: RDD[((String, Long), Record)]): RDD[(Date, Record)]
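To make the tuple-key idea concrete, here is a small self-contained sketch; this simplified Record and the summing logic are assumptions for illustration, not the deck's actual types.

  import org.apache.spark.SparkContext
  import org.apache.spark.SparkContext._  // brings in reduceByKey for pair RDDs

  case class Record(date: String, hour: Long, usage: Double)

  object TupleExample {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext("local", "tuple-example")
      val records = sc.parallelize(Seq(
        Record("2012-01-01", 7, 1.2),
        Record("2012-01-01", 7, 0.8),
        Record("2012-01-01", 8, 2.0)))
      // key each record by the (date, hour) tuple, then sum usage per key
      val hourly = records
        .map(r => ((r.date, r.hour), r.usage))
        .reduceByKey(_ + _)
      hourly.collect().foreach(println)  // ((2012-01-01,7),2.0) and ((2012-01-01,8),2.0)
      sc.stop()
    }
  }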

14

Use Lazy evaluation

Map/Reduce does not embrace lazy evaluation: the output of every job has to be written to HDFS, and HDFS is the only way to share data between Map/Reduce jobs. Spark differs:

- Every operation other than actions is lazily evaluated.
- Write only critical data to disk; cache other intermediate data in memory.
- Be careful when you use actions; try to delay calling them as late as possible (see the sketch below).
- Refer to Main.scala.
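A small sketch of the laziness point; this is illustrative, not the deck's Main.scala, and the input file name is hypothetical.

  import org.apache.spark.SparkContext

  object LazyExample {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext("local", "lazy-example")
      val lines = sc.textFile("energy-usage.csv")        // lazy: nothing is read yet
      val values = lines.map(_.split(",")(1).toDouble)   // lazy transformation
      val cleaned = values.filter(_ >= 0.0)              // still lazy
      cleaned.cache()                                    // keep the result in memory once computed
      println(cleaned.count())                           // first action: the pipeline runs now
      println(cleaned.reduce(_ + _))                     // second action: served from the cache
      sc.stop()
    }
  }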

15

Thank you

