
Page 1: Scala eXchange: Building robust data pipelines in Scala

Building robust data pipelines in Scala: the Snowplow experience

Page 2: Scala eXchange: Building robust data pipelines in Scala

Introducing myself

• Alex Dean

• Co-founder and technical lead at Snowplow, the open-source event analytics platform based here in London [1]

• Weekend writer of Unified Log Processing, available on the Manning Early Access Program [2]

[1] https://github.com/snowplow/snowplow

[2] http://manning.com/dean

Page 3: Scala eXchange: Building robust data pipelines in Scala

Snowplow – what is it?

Page 4: Scala eXchange: Building robust data pipelines in Scala

Snowplow is an open source event analytics platform

[Architecture diagram: 1a. Trackers / 1b. Webhooks → 2. Collectors → 3. Enrich → 4. Storage → 5. Analytics, linked by standardised data protocols (A–D)]

• Your granular, event-level and customer-level data, in your own data warehouse

• Connect any analytics tool to your data

• Join your event data with any other data set

Page 5: Scala eXchange: Building robust data pipelines in Scala

Today almost all users/customers are running a batch-based Snowplow configuration

[Batch pipeline diagram: Snowplow event tracking SDK → HTTP-based event collector → Amazon S3 → Hadoop-based enrichment → Amazon Redshift]

• Batch-based

• Normally run overnight; sometimes every 4-6 hours

Page 6: Scala eXchange: Building robust data pipelines in Scala

We also have a real-time pipeline for Snowplow in beta, built on Amazon Kinesis (Apache Kafka support coming next year)

[Real-time pipeline diagram: Snowplow Trackers → scala-stream-collector → Raw event stream (plus Bad raw event stream) → scala-kinesis-enrich (Kinesis app) → Enriched event stream → sinks: S3 sink (Kinesis app) → S3, Redshift sink (Kinesis app) → Redshift, kinesis-elasticsearch-sink → Elasticsearch, Event aggregator (Kinesis app) → DynamoDB; some components not yet released]

Analytics on Read for agile exploration of events, machine learning, auditing, re-processing…

Analytics on Write for operational reporting, real-time dashboards, audience segmentation, personalization…

Page 7: Scala eXchange: Building robust data pipelines in Scala

Snowplow and Scala

Page 8: Scala eXchange: Building robust data pipelines in Scala

Today, Snowplow is primarily developed in Scala

Data modelling scripts

Ruby:

• Used for Snowplow orchestration

• No event-level processing occurs in Ruby

Scala:

• Used for event validation, enrichment and other processing

• Increasingly used for event storage

• Starting to be used for event collection too

Page 9: Scala eXchange: Building robust data pipelines in Scala

Our initial skunkworks version of Snowplow had no Scala

[Snowplow data pipeline v1 diagram: Website / webapp (JavaScript event tracker) → CloudFront-based pixel collector → HiveQL + Java UDF “ETL” → Amazon S3]

Page 10: Scala eXchange: Building robust data pipelines in Scala

But our schema-first, loosely coupled approach made it possible to start swapping out existing components…

[Snowplow data pipeline v2 diagram: Website / webapp (JavaScript event tracker) → CloudFront-based event collector or Clojure-based event collector → Scalding-based enrichment (replacing the HiveQL + Java UDF “ETL”) → Amazon S3 → Amazon Redshift / PostgreSQL]

Page 11: Scala eXchange: Building robust data pipelines in Scala

What is Scalding?

• Scalding is a Scala API over Cascading, the Java framework for building data processing pipelines on Hadoop:

[Stack diagram: Hadoop DFS → Hadoop MapReduce → Cascading (Java), alongside Hive and Pig → Cascading DSLs/APIs: Scalding, Cascalog, PyCascading, cascading.jruby]
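To give a flavour of the API, a minimal Scalding sketch (the classic word count, not Snowplow code) using the fields-based API:

    import com.twitter.scalding._

    // A minimal Scalding job: read lines of text, split them into words,
    // count the occurrences of each word, and write the counts out as TSV
    class WordCountJob(args: Args) extends Job(args) {
      TextLine(args("input"))                                             // source: one tuple per line
        .flatMap('line -> 'word) { line: String => line.toLowerCase.split("\\s+") }
        .groupBy('word) { _.size }                                        // count occurrences per word
        .write(Tsv(args("output")))                                       // sink: tab-separated output
    }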

Page 12: Scala eXchange: Building robust data pipelines in Scala

We chose Cascading because we liked its “plumbing” abstraction over vanilla MapReduce

Page 13: Scala eXchange: Building robust data pipelines in Scala

Why did we choose Scalding instead of one of the other Cascading DSLs/APIs?

• Lots of internal experience with Scala – could hit the ground running (only very basic awareness of Clojure when we started the project)

• Scalding created and supported by Twitter, who use it throughout their organization – so we knew it was a safe long-term bet

• More controversial opinion (although maybe not at a Scala conference): we believe that data pipelines should be as strongly typed as possible – all the other DSLs/APIs on top of Cascading encourage dynamic typing

Page 14: Scala eXchange: Building robust data pipelines in Scala

Robust data pipelines

Page 15: Scala eXchange: Building robust data pipelines in Scala

Robust data pipelines mean strongly typed data pipelines – why?

• Catch errors as soon as possible – and report them in a strongly typed way too

• Define the inputs and outputs of each of your data processing steps in an unambiguous way

• Forces you to formally address the data types flowing through your system

• Lets you write code like this:
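A minimal sketch of the kind of code this enables, using Scalaz Validation; the RawEvent and EnrichedEvent fields here are illustrative, not the real Snowplow types:

    import scalaz._
    import Scalaz._

    // Illustrative types only – the real Snowplow event types are much richer
    case class RawEvent(fields: Map[String, String])
    case class EnrichedEvent(appId: String, platform: String)

    // The step's inputs and outputs are unambiguous: the result is either an
    // EnrichedEvent, or a non-empty list of reasons why enrichment failed
    def enrich(raw: RawEvent): ValidationNel[String, EnrichedEvent] =
      (field(raw, "aid") |@| field(raw, "p")) { (appId, platform) =>
        EnrichedEvent(appId, platform)
      }

    // A single validation: look up one field, failing with a descriptive message
    def field(raw: RawEvent, key: String): ValidationNel[String, String] =
      raw.fields.get(key).toSuccess(NonEmptyList(s"field '$key' is missing"))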

Page 16: Scala eXchange: Building robust data pipelines in Scala

Robust data processing is a state of mind: failures will happen, don’t panic, but don’t sweep them under the carpet either

• Our basic processing model for Snowplow looks like this:

• Look familiar? stdin, stdout, stderr

[Diagram: Raw events → Snowplow enrichment process → “Good” enriched events, plus “Bad” raw events with the reasons why they are bad]
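Continuing the illustrative types from the earlier sketch, the model in code: every raw event lands on exactly one of the two output streams, and nothing is silently dropped:

    import scalaz._
    import Scalaz._

    // Partition enrichment results into the "good" and "bad" outputs
    def process(raw: List[RawEvent]): (List[EnrichedEvent], List[(RawEvent, NonEmptyList[String])]) = {
      val results = raw.map(r => (r, enrich(r)))
      val good = results.collect { case (_, Success(enriched)) => enriched }
      val bad  = results.collect { case (r, Failure(reasons))  => (r, reasons) }
      (good, bad)
    }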

Page 17: Scala eXchange: Building robust data pipelines in Scala

This pattern is extremely composable, especially with Kinesis or Kafka streams/topics as the core building block

Page 18: Scala eXchange: Building robust data pipelines in Scala

Validation, the “gateway drug” to Scalaz

Page 19: Scala eXchange: Building robust data pipelines in Scala

Inside and across our components, we use the Validation applicative functor from the Scalaz project extensively

• Scalaz Validation lets us perform a variety of different event validations and enrichments, and then compose (i.e. collate) the failures

• This is really powerful!

• The Scalaz codebase calls |@| a “DSL for constructing Applicative expressions” – I think of it as “the Scream operator”

• Individual components of the enrichment process can themselves collate their own internal failures
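A minimal sketch of that collation behaviour; the two validations are invented for illustration:

    import scalaz._
    import Scalaz._

    // Two independent validations over different parts of an incoming event
    def validStatus(code: Int): ValidationNel[String, Int] =
      if (code >= 200 && code < 400) code.successNel else s"unexpected status $code".failureNel

    def validBody(body: String): ValidationNel[String, String] =
      if (body.nonEmpty) body.successNel else "empty body".failureNel

    // |@| runs both validations and collates all failures, rather than
    // stopping at the first one
    val ok  = (validStatus(200) |@| validBody("""{"e":"pv"}""")) { (status, body) => (status, body) }
    // ok  == Success((200, """{"e":"pv"}"""))
    val bad = (validStatus(500) |@| validBody("")) { (status, body) => (status, body) }
    // bad == Failure(NonEmptyList("unexpected status 500", "empty body"))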

Page 20: Scala eXchange: Building robust data pipelines in Scala

There is a great F# article by Scott Wlaschin which describes this approach as “railway-oriented programming” [1]

The Happy Path

• If everything succeeds, then this path outputs an enriched event

• Any individual failure along the path could switch us onto the failure path

• We never get back onto the happy path once we leave it

The Failure Path

• Any failure can take us onto the failure path

• We can choose whether to switch straight to the failure path (“fail fast”), or collate failures from multiple independent tests

[1] http://fsharpforfunandprofit.com/posts/recipe-part2/

Page 21: Scala eXchange: Building robust data pipelines in Scala

Putting it all together, the Snowplow enrichment process boils down to one big type transformation

• Types abstracting over simpler types

• No mutable state

• Railway-oriented programming

• Collate failures inside a processing stage, fail fast between processing stages
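A sketch of that shape, reusing the illustrative types from earlier: each stage collates its own failures internally (for example with |@|), while between stages we fail fast by switching to Scalaz's disjunction and back:

    import scalaz._
    import Scalaz._

    // Each stage returns a Validation whose failures were collated internally
    def parse(line: String): ValidationNel[String, RawEvent]        = ???  // stage 1
    def enrich(raw: RawEvent): ValidationNel[String, EnrichedEvent] = ???  // stage 2

    // Between stages: fail fast – if parsing fails, enrichment never runs
    def processLine(line: String): ValidationNel[String, EnrichedEvent] =
      (for {
        raw      <- parse(line).disjunction
        enriched <- enrich(raw).disjunction
      } yield enriched).validation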

Page 22: Scala eXchange: Building robust data pipelines in Scala

• Using Scott Wlaschin’s “fruit as cargo” metaphor:

• Currently Snowplow uses a Non-Empty List of Strings to collect our failures:

• We are working on a ProcessingMessage case class, to capture much richer and more structured failures than we can using Strings

The only limitation is that the Failure Path restricts us to a single type
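In types, roughly; the ProcessingMessage fields below are invented for illustration and the real design may differ:

    import scalaz._

    // Today: the failure side of the railway is a non-empty list of Strings
    type EnrichResult = ValidationNel[String, EnrichedEvent]

    // Tomorrow, perhaps: a single, richer failure type carried down the failure path
    case class ProcessingMessage(
      level:   String,   // e.g. "error" or "warning"
      source:  String,   // which validation or enrichment produced the failure
      message: String    // human-readable description
    )
    type RicherEnrichResult = ValidationNel[ProcessingMessage, EnrichedEvent]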

Page 23: Scala eXchange: Building robust data pipelines in Scala

A brief aside on testing

Page 24: Scala eXchange: Building robust data pipelines in Scala

On the testing side: we love Specs2 data tables…

• They let us test a variety of inputs and expected outputs without making the mistake of just duplicating the data processing functionality in the test:
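A sketch of the pattern; the function under test here is invented for illustration:

    import org.specs2.mutable.Specification
    import org.specs2.matcher.DataTables

    class ToPlatformNameSpec extends Specification with DataTables {

      // Illustrative function under test: map a tracker platform code to a name
      def toPlatformName(code: String): String = code match {
        case "web" => "web"
        case "mob" => "mobile"
        case _     => "unknown"
      }

      "toPlatformName" should {
        "map platform codes to platform names" in {
          // Expected outputs are written out by hand, not derived by
          // re-running the production logic
          "code" | "platform name" |>
          "web"  ! "web"           |
          "mob"  ! "mobile"        |
          "xxx"  ! "unknown"       |
          { (code, name) => toPlatformName(code) must_== name }
        }
      }
    }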

Page 25: Scala eXchange: Building robust data pipelines in Scala

… and are starting to do more with ScalaCheck

• ScalaCheck is a property-based testing framework, originally inspired by Haskell’s QuickCheck

• We use it in a few places – including to generate unpredictable bad data and also to validate our new Thrift schema for raw Snowplow events:
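An illustrative property in that style (this particular property is not from the Snowplow test suite):

    import org.scalacheck.Prop.forAll
    import org.scalacheck.Properties

    object Base64Props extends Properties("Base64 round-trip") {

      // Whatever bytes ScalaCheck throws at us – including awkward inputs we
      // would never think to write by hand – decoding must invert encoding
      property("decode inverts encode") = forAll { (bytes: Array[Byte]) =>
        val encoded = java.util.Base64.getEncoder.encodeToString(bytes)
        java.util.Base64.getDecoder.decode(encoded).toSeq == bytes.toSeq
      }
    }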

Page 26: Scala eXchange: Building robust data pipelines in Scala

Robustness in the face of user-defined types

Page 27: Scala eXchange: Building robust data pipelines in Scala

Snowplow is evolving from a fixed-schema platform to a platform supporting user-defined JSONs

• Where other analytics tools depend on schema-less JSONs or custom variables, we use JSON Schema

• Snowplow users send in events as “self-describing JSONs” which have to include the schema URI which validates the event’s JSON body:
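For example, a self-describing JSON for a hypothetical link-click event might look like the following; the schema URI points at the JSON Schema, hosted in an Iglu repository, which validates the data body:

    // The vendor, event name and fields below are illustrative only
    val selfDescribingJson = """
      {
        "schema": "iglu:com.acme/link_click/jsonschema/1-0-0",
        "data": {
          "targetUrl": "http://example.com/offer",
          "elementId": "hero-banner"
        }
      }
    """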

Page 28: Scala eXchange: Building robust data pipelines in Scala

To support JSON Schema, we have open-sourced Iglu, a new schema repository system in Scala/Spray/Swagger/Jackson

Page 29: Scala eXchange: Building robust data pipelines in Scala

Our Scala client library for Iglu lets us work with JSONs in a safe way from within Snowplow

• If a JSON passes its JSON Schema validation, we should be able to deserialize it and work with it safely in Scala in a strongly-typed way:

• We use json4s with the Jackson bindings, as JSON Schema support in Java/Scala is Jackson-based

• We still wrap our JSON deserialization in Scalaz Validations in case of any mismatch between the Scala deserialization code and the JSON schema
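A minimal sketch of that last point, reusing the hypothetical link-click JSON from above; the case class and error message are illustrative:

    import org.json4s._
    import org.json4s.jackson.JsonMethods.parse
    import scalaz._
    import Scalaz._

    case class LinkClick(targetUrl: String, elementId: String)

    implicit val formats: Formats = DefaultFormats

    // Even after the JSON has passed its JSON Schema validation, extraction into
    // a case class is wrapped in a Validation, in case the Scala code and the
    // schema have drifted apart
    def extractLinkClick(json: String): ValidationNel[String, LinkClick] =
      try {
        (parse(json) \ "data").extract[LinkClick].successNel[String]
      } catch {
        case e: Exception => s"Could not extract LinkClick: ${e.getMessage}".failureNel[LinkClick]
      }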

Page 30: Scala eXchange: Building robust data pipelines in Scala

Questions?

http://snowplowanalytics.com

https://github.com/snowplow/snowplow

@snowplowdata

To meet up or chat, @alexcrdean on Twitter or [email protected]

Discount code: ulogprugcf (43% off Unified Log Processing eBook)