ElasticSearch London
Tuning ElasticSearch for multi-terabyte analytics
or… “Counting stuff is hard”
Andrew Clegg, Data Analytics & Visualization Team, Pearson
@andrew_clegg
Introduction
Our data
Over 11 billion “docs” in production cluster.
Each doc is around 1-2KB of JSON.
~60 million docs/day == ~700 docs/sec.
Higher than this during peak times.
Much higher when backfilling historical data.
Conversely, not many end users yet: 5-20 on a typical day.
Our architecture
Palomino
Getting data in
Hardware
(Yes, actual hardware!)
Cisco UCS servers, 24 cores, 96GB memory.
8 x 1TB disks: 7 for data, 1 for log files, temp files, etc. Reads/writes parallelized across segments.
Currently 5 of these in production cluster.
10GbE switch.
Getting data in
Index configuration
We don’t store any data in ElasticSearch. All we need is facet counts.
Disable _source, _all, and individual field storage.
Disable term vectors and norms.
No analysis on text fields (just unbroken strings).
No date autodetection.
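Putting those settings together: a minimal mapping sketch, assuming the 0.90-era create-index API, Python with the requests library, and hypothetical index/type/field names (events-example, event, event_type, timestamp).

import json
import requests

mapping = {
    "mappings": {
        "event": {
            "_source": {"enabled": False},    # don't keep the original JSON
            "_all": {"enabled": False},       # no catch-all field
            "date_detection": False,          # no date autodetection
            "properties": {
                "event_type": {
                    "type": "string",
                    "index": "not_analyzed",  # unbroken strings, no analysis
                    "store": "no",            # no individual field storage
                    "term_vector": "no",
                    "omit_norms": True
                },
                "timestamp": {"type": "date", "store": "no"}
            }
        }
    }
}

requests.put("http://localhost:9200/events-example",
             data=json.dumps(mapping))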
Getting data in
Weekly rolling indices mean shard count can increase as traffic does.
NB: currently we have a steady state, so it's set to 5 shards each week.
3 replicas per shard (including the primary).
Real-time implies we can't disable replication during indexing!
[Chart: shard count over time, with a new index each week]
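A sketch of how the weekly indices could be provisioned, assuming an index template, Python with requests, and a hypothetical events-* naming scheme; number_of_replicas: 2 is our reading of "3 replicas per shard including the primary".

import json
import requests

template = {
    "template": "events-*",   # matches each week's new index
    "settings": {
        "index.number_of_shards": 5,
        "index.number_of_replicas": 2   # 3 copies of each shard, counting the primary
    }
}

requests.put("http://localhost:9200/_template/weekly-events",
             data=json.dumps(template))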
Getting data in
Client configuration
Multiple writer threads on multiple machines: currently 6 x 3.
Bulk API: currently up to 1000 docs per batch.
Incoming docs queued until batch limit, or time or size limits, reached.
(e.g. 1000 docs or 100000 bytes or 2 secs since last batch)
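A rough sketch of that batching logic, in Python with the requests library; index/type names are hypothetical, and a real writer would also flush from a timer rather than only when the next doc arrives.

import json
import time
import requests

BULK_URL = "http://localhost:9200/_bulk"
MAX_DOCS, MAX_BYTES, MAX_SECS = 1000, 100000, 2.0   # limits from the slide

class BulkBuffer(object):
    """Queue docs and flush when the doc, byte, or time limit is hit."""
    def __init__(self):
        self.lines, self.nbytes, self.started = [], 0, time.time()

    def add(self, index, doc_type, doc):
        action = json.dumps({"index": {"_index": index, "_type": doc_type}})
        source = json.dumps(doc)
        self.lines += [action, source]
        self.nbytes += len(action) + len(source)
        if (len(self.lines) // 2 >= MAX_DOCS or self.nbytes >= MAX_BYTES
                or time.time() - self.started >= MAX_SECS):
            self.flush()

    def flush(self):
        if self.lines:
            body = "\n".join(self.lines) + "\n"   # bulk API wants newline-delimited JSON
            requests.post(BULK_URL, data=body)
        self.lines, self.nbytes, self.started = [], 0, time.time()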
Getting data in
Other things we could do -- but currently don’t
Tune indexer thread pool size?
Tune segment merge policy?
Reduce flush interval?
Even without these, our current record is over 20,000 docs indexed/sec.
(And we think the bottleneck was the client machines…)
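For illustration only, the kind of settings those bullets point at might look like the following (names from the 0.90-era settings API; the values are made up, and some of these may only be settable at index-creation time rather than via a live _settings update).

import json
import requests

tuning = {
    "index.merge.policy.segments_per_tier": 20,        # segment merge policy knob
    "index.translog.flush_threshold_period": "60m"     # translog flush interval
}
requests.put("http://localhost:9200/events-example/_settings",
             data=json.dumps(tuning))

# The indexer/bulk thread pool size (e.g. threadpool.bulk.size) would be a
# node-level setting in elasticsearch.yml rather than an index setting.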
Getting data out
Typical queries
Date histogram and terms facet are the most common by far.
So we wrote our own versions with some optimizations :-)
https://github.com/pearson-enabling-technologies/elasticsearch-approx-plugin
Field data cache size is important for speed: currently 30% of an 80GB heap. (In fact it actually uses much more than this with ES 0.90.2. Upgrade planned!)
We always use search_type=count.
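A typical count-only query along those lines, sketched with Python and requests; the field names (timestamp, event_type) are hypothetical, and the DSL is the pre-aggregations facets API from the 0.90 line. The fielddata cap itself would be a node-level setting, something like indices.fielddata.cache.size in elasticsearch.yml.

import json
import requests

query = {
    "query": {"match_all": {}},
    "facets": {
        "events_per_day": {
            "date_histogram": {"field": "timestamp", "interval": "day"}
        },
        "top_event_types": {
            "terms": {"field": "event_type", "size": 10}
        }
    }
}

# search_type=count skips fetching any hits -- all we want back are the facets.
resp = requests.post(
    "http://localhost:9200/events-*/_search?search_type=count",
    data=json.dumps(query))
print(resp.json()["facets"]["events_per_day"]["entries"][:3])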
Getting data out
Facet workflow
Client request goes to an arbitrary master node, which:
● Parses query
● Distributes subqueries to data nodes (including itself)
● Combines results (reduce function)
● Returns to client
Data nodes:
● Find matching records
● Perform groupings and counts (and any other calculations)
● Return to master
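Purely as a conceptual illustration of the reduce step (not the actual ES code), combining per-data-node terms counts into a global result looks roughly like this:

from collections import Counter

def reduce_terms_facets(per_node_counts):
    """Merge per-data-node {term: count} maps into one global map."""
    total = Counter()
    for counts in per_node_counts:
        total.update(counts)
    return dict(total)

# e.g. two data nodes' partial results for the same terms facet:
print(reduce_terms_facets([{"login": 40, "logout": 12}, {"login": 25, "purchase": 3}]))
# -> {'login': 65, 'logout': 12, 'purchase': 3}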
Getting data out
Facet plugin optimizations
Approximate data structures and sampling mode: trade between speed/memory and accuracy.
Uses Lucene’s BytesRef & BytesRefHash instead of String & HashSet.
Micro-caching of local calculations, e.g. timestamp rounding.
Explicit “render” phase after “reduce” phase: defer as much as possible until then.
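The micro-caching point can be sketched in Python (the plugin itself is Java): consecutive records usually land in the same time bucket, so cache the last rounding result instead of recomputing it per record. Names and the interval are illustrative.

def make_interval_rounder(interval_ms):
    """Round epoch-millis timestamps down to a bucket, remembering the last bucket hit."""
    cache = {"start": None, "value": None}

    def round_ts(ts):
        start = cache["start"]
        if start is not None and start <= ts < start + interval_ms:
            return cache["value"]              # cache hit: same bucket as last time
        value = ts - (ts % interval_ms)        # cache miss: recompute the bucket start
        cache["start"], cache["value"] = value, value
        return value

    return round_ts

# e.g. rounding to 1-minute buckets:
rounder = make_interval_rounder(60 * 1000)
print(rounder(1371720015000), rounder(1371720016500))   # same bucket, second call is cached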
Getting data out
General advice for plugin writers
Minimize object creation/destruction and type conversions.
Use arrays of primitives, or Trove collections, where possible. Reuse buffers.
Release objects as soon as possible when no longer needed.
Lucene has some neat tricks: bit fields, fast hashing algorithms.
So does ElasticSearch: CacheRecycler lets you reuse collections.
Getting data out
Hints for query performance tuning
Tools like jmap, jstat, VisualVM and MAT are very helpful.
Use ES “hot threads” API to see where it’s spending its time.
Set up unit/integration tests with time and RAM instrumentation.
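For example, the hot threads API is just an HTTP call (here via Python and requests against a local node):

import requests

# Dumps a stack-sample summary of the busiest threads on each node.
print(requests.get("http://localhost:9200/_nodes/hot_threads").text)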
Getting data out
Other things we could do -- but currently don’t
Non-data nodes to parse queries, and handle reduce & render phases.
Garbage collector tuning.
(Note to self: see if Trove still crashes Java 7 JVM under G1 GC…)
Use SSDs :-)
Thanks!
Any questions?
https://github.com/pearson-enabling-technologies/
https://twitter.com/andrew_clegg