Online Training - Couchbase 101: Installation
TRANSCRIPT
Jasdeep Jaitla
Technical Evangelist
twitter: @scalabl3  email: [email protected]
Couchbase 101: Architecture, Install & Config
Evolution from memcached
• Founders were key contributors to memcached
• Evolved into Membase, a distributed and persisted key-value store
• Evolved into Couchbase Document Store with JSON support and Map-Reduce Indexes, Elastic Search Integration, and Cross-Data Center Replication
Couchbase Server Core Principles
• Easy Scalability: grow the cluster without application changes, without downtime, with a single click
• Consistent High Performance: consistent sub-millisecond read and write response times with consistent high throughput
• Always On 24x365: no downtime for software upgrades, hardware maintenance, etc.
• Flexible Data Model: JSON Anywhere document model with no fixed schema
Couchbase Server 2.0 Architecture

Cluster Manager (Erlang/OTP):
• Heartbeat, process monitor, global singleton supervisor, configuration manager - on each node
• Rebalance orchestrator, node health monitor - one per cluster
• vBucket state and replication manager
• http REST management API/Web UI (HTTP 8091)
• Erlang port mapper (4369), Distributed Erlang (21100-21199)

Data Manager:
• Query Engine (8092 Query API)
• Memcached with the Couchbase EP Engine (11210, Memcapable 2.0)
• Moxi (11211, Memcapable 1.0)
• New Persistence Layer behind the storage interface
Couchbase Server 2.0 Architecture (layer view)
The same diagram, split into its two layers:
• Cluster Manager: server/cluster management & communication (Erlang) - the per-node and per-cluster components above, the http REST management API/Web UI (HTTP 8091), the Erlang port mapper (4369) and Distributed Erlang (21100-21199)
• Data Manager: RAM cache, indexing & persistence management (C) - the Couchbase EP Engine (11210, Memcapable 2.0) with its object-level cache, Moxi (11211, Memcapable 1.0), the Query Engine (8092 Query API), and the new persistence layer providing disk persistence behind the storage interface
Couchbase OrganizaTon
•Couchbase operates like a Key-‐Value Document Store
• Key is a UTF-‐8 string up to 256 Bytes
• Values can be:
-‐ Simple Datatypes: strings, numbers, dateTme, boolean, and binary data can be stored -‐-‐ they are stored as Base64 encoded strings
-‐ Complex Datatypes: dicTonaries/hashes, arrays/lists, can be stored in JSON format (simple lists can be string based with delimiter)
-‐ JSON is a special class of string with a specific format for encoding simple and complex data structures
• Schema is unenforced and implicit, schema changes are programmaTc, done online, and can vary from Document to Document
Metadata and Documents

meta {
  "id": "u::[email protected]",
  "rev": "1-0002bce0000000000",
  "flags": 0,
  "expiration": 0,
  "type": "json"
}

document {
  "uid": 123456,
  "firstname": "jasdeep",
  "lastname": "Jaitla",
  "age": 22,
  "favorite_colors": ["blue", "black"],
  "email": "[email protected]"
}

Meta information includes the key (id). All keys are unique and kept in RAM. The document value: the most recent version is in RAM and persisted to disk.
Retrieval Operations
The application server sends a get to the Couchbase Server node responsible for the key. The EP Engine answers the read directly from its RAM cache; the disk write queue, replication queue, and the replica on another Couchbase cluster machine sit behind it and are not involved in a resident read.
Storage Operations
The application server sends a set/add/replace to the Couchbase Server node. The EP Engine writes the document into the RAM cache and acknowledges the client; the mutation is then placed on the disk write queue (for persistence) and the replication queue (for the replica Couchbase cluster machine), both of which drain asynchronously.
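The write path above can be sketched as a small in-memory model. This is illustrative only (not Couchbase code, and all names are invented for the sketch): a set lands in the RAM cache and is acknowledged immediately, while the disk and replication queues drain in the background.

```python
from collections import deque

class WritePathModel:
    """Toy model of the EP Engine write path from the slide."""
    def __init__(self):
        self.ram_cache = {}
        self.disk_write_queue = deque()
        self.replication_queue = deque()
        self.disk = {}
        self.replica = {}

    def set(self, key, value):
        # 1. The document is written to the RAM cache; the client is acked now.
        self.ram_cache[key] = value
        # 2. The same mutation is queued for disk and for the replica node.
        self.disk_write_queue.append((key, value))
        self.replication_queue.append((key, value))
        return "ack"  # success is returned before persistence completes

    def drain(self):
        # Background draining: flush queued mutations to disk and replica.
        while self.disk_write_queue:
            k, v = self.disk_write_queue.popleft()
            self.disk[k] = v
        while self.replication_queue:
            k, v = self.replication_queue.popleft()
            self.replica[k] = v

node = WritePathModel()
node.set("u::[email protected]", {"uid": 123456})
# Readable immediately from RAM, but not yet on disk or the replica.
resident = "u::[email protected]" in node.ram_cache
persisted = "u::[email protected]" in node.disk
node.drain()
```

The key design point the slides make is exactly this decoupling: acknowledgment happens at RAM speed, while persistence and replication are asynchronous.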
Consistency
Because a set is applied to the RAM cache before it is acknowledged, a subsequent get for the same key is served from that same cache: reads on the active node are immediately consistent with your own writes, even before the document reaches the disk write queue's destination or the replica.
Ejection, NRU, Cache Miss
Under a heavy write load (set/add/replace after set/add/replace) the RAM cache fills. When the RAM quota is FULL (90% used), NRU (not-recently-used) documents are ejected: their values are dropped from RAM while their metadata stays resident. A later get for an ejected document is a "cache miss" on a non-resident document: the EP Engine fetches the value back from disk before answering.
Clients Connect Directly to Couchbase Nodes
Each application server holds a copy of the cluster MAP. A node owns 1024 partitions and, in this example, has 8 GB RAM and 3 IO workers.

Key Hash-Partitioning
ClientHashFunction("[email protected]") => Partition[0..1023] {25}
ClusterMap[P(25)] => [x.x.x.x] => IP of Server Responsible for Partition 25
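That two-step lookup can be sketched as follows. This is a simplification: real Couchbase clients use a CRC32-based hash, but the exact variant, the partition-to-IP assignments, and the cluster-map shape here are illustrative assumptions.

```python
import zlib

NUM_PARTITIONS = 1024

# Hypothetical cluster map: partition number -> owning server IP.
# Here partitions simply alternate between two made-up servers.
cluster_map = {p: "10.0.0.%d" % (p % 2 + 1) for p in range(NUM_PARTITIONS)}

def partition_for(key):
    """Step 1: hash the key into one of the 1024 partitions."""
    return zlib.crc32(key.encode("utf-8")) % NUM_PARTITIONS

def server_for(key):
    """Step 2: look the partition up in the cluster map."""
    return cluster_map[partition_for(key)]

p = partition_for("[email protected]")
server = server_for("[email protected]")
# The client sends the operation directly to `server` - no proxy hop.
```

Because every client computes the same hash and holds the same map, any client can route any key to the right node without coordination.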
Horizontal Scale-Rebalance
Start: 1 node with 1024 partitions, 8 GB RAM, 3 IO workers.

Add a second node and rebalance: the 1024 active partitions split 512/512 across the two nodes. TOTAL: 16 GB RAM, 6 IO workers, 1024 partitions.

Add two more nodes and rebalance again: the partitions split 256 per node across four nodes, and each application server's cluster MAP is updated to route keys to the new owners. TOTAL: 32 GB RAM, 12 IO workers, 1024 partitions.
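The arithmetic of those rebalance steps can be sketched with a tiny helper (illustrative only; the real rebalance orchestrator also moves data and replicas, not just counts):

```python
def distribute(num_partitions, nodes):
    """Spread partitions as evenly as possible across nodes."""
    base, extra = divmod(num_partitions, len(nodes))
    # The first `extra` nodes take one additional partition each.
    return {node: base + (1 if i < extra else 0)
            for i, node in enumerate(nodes)}

one = distribute(1024, ["n1"])
two = distribute(1024, ["n1", "n2"])
four = distribute(1024, ["n1", "n2", "n3", "n4"])
```

Note the partition count itself never changes: 1024 partitions exist from day one, which is what lets a single-node cluster grow without rehashing every key.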
RAM, CPU and IO Guidelines

RAM:
• All metadata for all documents (64 bytes + key length)
• Document values (NRU-ejected if RAM quota used > 90%)
• Also leave RAM for the OS: filesystem cache (which backs Views)

CPU:
• Document indexing, monitoring, XDCR
• Recommended: minimum 4 cores + 1 core per design document + 1 core per XDCR-replicated bucket

Disk IO:
• Persisted documents
• All indexes for design documents/views
• Append-only disk format & compaction
• Performance: multiple EBS volumes, high IOPS, RAID 0 on Amazon
Binary Socket Operations (11210/11211)
• get (key)
– Retrieve a document
• set (key, value)
– Store a document, overwrites if exists
• add (key, value)
– Store a document, error/exception if exists
• replace (key, value)
– Store a document, error/exception if doesn't exist
• incr (key)
– Create/Increment an atomic counter
• decr (key)
– Decrement an atomic counter
• cas (key, value, cas)
– Compare and swap: set the document only if it hasn't changed (optimistic lock)
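The differing semantics of these operations can be sketched against an in-memory dict. This is a behavioral model, not the Couchbase SDK: method names mirror the operation list above, and the CAS token is a simple counter for illustration.

```python
class KVStore:
    """Toy model of the key-value operation semantics listed above."""
    def __init__(self):
        self.data = {}   # key -> (value, cas_token)
        self._cas = 0

    def _next_cas(self):
        self._cas += 1
        return self._cas

    def set(self, key, value):            # overwrites if exists
        self.data[key] = (value, self._next_cas())

    def add(self, key, value):            # error if exists
        if key in self.data:
            raise KeyError("key exists")
        self.set(key, value)

    def replace(self, key, value):        # error if doesn't exist
        if key not in self.data:
            raise KeyError("key missing")
        self.set(key, value)

    def get(self, key):
        return self.data[key]             # returns (value, cas_token)

    def incr(self, key):                  # create/increment atomic counter
        value, _ = self.data.get(key, (0, None))
        self.set(key, value + 1)
        return value + 1

    def decr(self, key):                  # decrement atomic counter
        value, _ = self.data[key]
        self.set(key, value - 1)
        return value - 1

    def cas(self, key, value, cas):       # optimistic lock
        _, current = self.data[key]
        if current != cas:
            raise ValueError("document changed since read")
        self.set(key, value)

kv = KVStore()
kv.set("k", "v1")
_, token = kv.get("k")
kv.cas("k", "v2", token)        # succeeds: token still matches
stale_rejected = False
try:
    kv.cas("k", "v3", token)    # fails: the write above bumped the CAS
except ValueError:
    stale_rejected = True
```

The add/replace pair gives create-only and update-only writes, while cas lets concurrent writers detect lost updates without locking.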
HTTP Operations (8092)
• View Querying
– Range Queries
– Index-Key Match Queries
– Set Match Queries
– Aggregate Reduces
– Group Level + Grouping Queries
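A view query is just an HTTP GET against port 8092. The sketch below builds such a URL; the host, bucket, and design-document/view names are made-up placeholders, while the query parameters (startkey/endkey for range queries, key for index-key matches, group_level for grouping) are standard view options.

```python
import json
from urllib.parse import urlencode

def view_url(host, bucket, ddoc, view, **params):
    """Build a view-query URL against the 8092 Query API."""
    # View keys are JSON-encoded in the query string (so strings get quotes).
    encoded = {k: json.dumps(v) for k, v in params.items()}
    return "http://%s:8092/%s/_design/%s/_view/%s?%s" % (
        host, bucket, ddoc, view, urlencode(encoded))

# Hypothetical range query: users aged 21 through 30.
url = view_url("localhost", "default", "users", "by_age",
               startkey=21, endkey=30)
```

Issuing a GET on that URL returns the matching index rows as JSON; reduce and group_level parameters are appended the same way for aggregate queries.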
Run Couchbase and Open Browser
localhost:8091 or ec2-xx-xx-xx-xx.compute-1.amazonaws.com:8091
Supported SDKs
www.couchbase.com/communities
• Each supported SDK page has instructions for setup
• PHP, Ruby, and Python clients are wrappers around the libcouchbase C library, so libcouchbase must be installed first
• For other community clients, click on "All Clients" in the left nav and scroll down the page to see clients for Go, Erlang, Clojure, TCL, Perl and others.
Installing Libcouchbase
• Mac tips before the libcouchbase & SDK install
• Make sure you have Xcode & Command Line Tools installed
• Install Homebrew if you don't have it already
• Run $ brew update, $ brew upgrade and $ brew doctor to be sure you're up to date
www.couchbase.com/communities/c/getting-started
Installing Libcouchbase
• Mac via Homebrew
• $ brew install libcouchbase
• PC-Windows
• Download the appropriate zip from the website
• Redhat/CentOS
• wget the yum repositories
• $ sudo yum install -y libcouchbase2-libevent libcouchbase-devel
• Ubuntu
• wget the ubuntu repositories
• $ sudo apt-get install libcouchbase2-libevent libcouchbase-dev
Store and Retrieve Operations
Monitoring graphs: Storage Ops/s, Retrieve Ops/s, Retrieve off Disk, Total Ops/s, Item Count, Delete Ops/s, CAS Ops/s, % Documents in RAM.

With RAM full and very high-velocity writes, the graphs show ejection of replicas from RAM beginning, then ejection of active documents from RAM, and RAM used for metadata + data.

Disk-side graphs: Disk Creates, Disk Updates, Disk Queue Size, Disk Reads (Cache Miss), Total Disk Usage, Docs Data Size (Compressed), Docs Size on Disk, Fragmentation (from appends).
Partition (vbucket) Details
Stats shown per partition: Metadata, Documents, Ejections (RAM Full), % Documents in RAM, Creations, Item Count.

Note: there are only 64 partitions on Mac OS X! This is simply to avoid altering default file descriptor limits; don't XDCR with other operating systems.