Deploying a Ceph Cluster on DreamCompute
In-Ceph-tion
#DREAMCON2013
• Ceph in <30s
• DreamCompute
• OpenStack && Ceph
• Deploying Ceph
• Questions
Wat?
On commodity hardware
Ceph can run on any infrastructure, metal or virtualized, to provide a cheap and powerful storage cluster.
Object, block, and file
Low overhead doesn’t mean just hardware, it means people too!
Awesomesauce
Infrastructure-aware placement algorithm allows you to do really cool stuff.
Huge and beyond
Designed for exabyte scale; current implementations are in the multi-petabyte range. HPC, Big Data, Cloud, raw storage.
…besides wicked-awesome?
What is Ceph?
Software All-in-1 CRUSH Scale
Find out more! Ceph.com
…but you can find out more
Use it today! Dreamhost.com/cloud/DreamObjects
Get support: Inktank.com
That WAS fast
“DreamCompute is a highly scalable and cost-effective cloud computing service that is built to power everything from Web and mobile applications, digital media and ecommerce Web sites, big data analytics, and test and development environments. Using DreamCompute’s open source infrastructure-as-a-service platform that is powered by OpenStack®, customers can create and prosper in the cloud.”
Sexy OpenStack Goodness
DreamCompute
NOM NOM NOM
That’s a mouthful
Leave Amazon behind
Spin up infrastructure in the cloud. Want 100 Ubuntu machines? No problem!
No black boxes here
Everything is Open Source tech, so you can see the guts or even help build them.
Can’t stop me now!
Tons of adoption and support from everyone from the huge-mongous to the boutique.
GUI or API, you pick
Whether you’re plugging in orchestration frameworks or spinning up a single machine.
Distilled
DreamCompute
Your EC2 Open Momentum
Easy
Storage / Compute
Dell PowerEdge R515s
• Six-core AMD processor
• 32G RAM
• Two on-board 300GB SAS drives (RAID-1) contain the OS
• H710 (LSI9260) controller
• Storage is 12x 3TB drives, JBODs (RAID-0, one drive per set)
• Two 10G NICs for data (or a single dual-port 10G NIC)
• One NIC dedicated to IPMI (10/100/1000Mbps)
• One 1G NIC dedicated to management (isolated from IPMI connections)
Geek us out
Non-Storage (mon/gateway/mgmt)
Dell PowerEdge R415s
• Two on-board 300GB SAS drives (RAID-1) contain the OS
• Two 1TB SAS drives on same controller (RAID-1) for logs
• Single 10G NIC
• One NIC dedicated to IPMI (10/100/1000Mbps)
• One 1G NIC dedicated to management
What about the guts?
Resilient
Everything was designed for failure.
Not for the faint of heart
Capacity
Optimized for throughput (NIC speed)
Replication
Replication within a pod can happen at very nearly the same rate as replication within a single rack.
Networking
I have no clue what you just said
Huh?
Just rest assured that the DREAMHOST guys are really taking their time to get it RIGHT. The system is designed to have VAST RESOURCES at the ready, be RESILIENT by design, and provide ENTERPRISE-CLASS data security.
Because the DreamHost guys are awesome!
It’s Awesome
Good together
OpenStack && Ceph
Cinder
“Cinder provides an infrastructure for managing volumes in OpenStack. It was originally a Nova component called nova-volume, but has become an independent project since the Folsom release.”
Cinder is able to boot a VM using a copy-on-write clone of an image stored in Glance.
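As a sketch, pointing Cinder at a Ceph cluster is mostly a few lines of configuration. The pool and user names here (`volumes`, `cinder`) are illustrative assumptions, not requirements, and the driver path has moved between OpenStack releases:

```ini
; cinder.conf — RBD backend (illustrative values)
volume_driver = cinder.volume.drivers.rbd.RBDDriver  ; older releases: cinder.volume.driver.RBDDriver
rbd_pool = volumes                ; Ceph pool that holds volume data
rbd_user = cinder                 ; cephx user Cinder authenticates as
rbd_secret_uuid = <libvirt-secret-uuid>  ; cephx key registered with libvirt on compute nodes
glance_api_version = 2            ; needed for copy-on-write clones from Glance
```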
Ceph’s best pals
Glance
“The Glance project provides services for discovering, registering, and retrieving virtual machine images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image.”
Ceph is able to store machine images for Glance in a block device.
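The Glance side is a similarly small sketch; the `images` pool and `glance` user are assumed names:

```ini
; glance-api.conf — store images in Ceph RBD (illustrative values)
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance          ; cephx user for Glance
rbd_store_pool = images          ; pool holding image data
show_image_direct_url = True     ; lets consumers make copy-on-write clones
```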
Cinder && Glance
Squash Hotspots
Multiple hosts = parallel workload
But what does that mean?
Instant Clones
No time to boot for many images
Live migration
Shared storage allows you to move instances between compute nodes transparently.
Looks delicious
Swift
“Swift is a highly available, distributed, eventually consistent object/blob store. Organizations can use Swift to store lots of data efficiently, safely, and cheaply.
Swift is currently a core project.”
The artist previously known as Swift
Why Ceph?
Ceph allows you to do both object and block for OpenStack in a single cluster. Additionally you can use the Swift API natively via Ceph’s RESTful gateway.
Performance, fault tolerance, and self-management advantages.
Object Storage
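The RESTful gateway (radosgw) is configured as just another Ceph client; a minimal sketch, with the host name and paths as placeholders:

```ini
; ceph.conf — radosgw section (illustrative host/paths)
[client.radosgw.gateway]
host = gw-host
keyring = /etc/ceph/keyring.radosgw.gateway
rgw socket path = /var/run/ceph/radosgw.sock   ; FastCGI socket the web server talks to
log file = /var/log/ceph/radosgw.log
```

Swift (and S3) clients then point at the web server fronting that socket.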
Come for the block
Stay for the object and file
Ceph, it only takes once!
Reduced Overhead
Easier to manage one cluster
File applications
While not production-ready, many are using CephFS for things like image distribution.
Gateway Drug
Where the metal meets the…software
Deploying Ceph
Procedural, Ruby
Written in Ruby, this is more of the dev-side of DevOps. Once you get past the learning curve it’s powerful though.
Model-driven
Aimed more at the sysadmin, this model-driven tool has a very wide penetration (even on Windows!).
Agentless, whole stack
Using the built-in OpenSSH in your OS, this super easy tool goes further up the stack than most.
Fast, 0MQ
Using ZeroMQ, this tool is designed for massive scale and is fast, fast, fast. Unfortunately, 0MQ has no built-in encryption.
The new hotness
Orchestration
Chef Puppet Ansible Salt
Canonical Unleashed
Being language agnostic, this tool can completely encapsulate a service. Can also handle provisioning all the way down to hardware.
Dell has skin in the game
Complete operations platform that can dive all the way down to BIOS/RAID level.
Others are joining in
Custom provisioning and orchestration, just one example of how busy this corner of the market is.
Doing it w/o a tool
If you prefer not to use a tool, Ceph gives you an easy way to deploy your cluster by hand.
MOAR HOTNESS
Orchestration Cont’d
Juju Crowbar ComodIT Ceph-deploy
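The do-it-by-hand route mentioned above looks roughly like this with ceph-deploy; the hostnames and disk device are illustrative placeholders:

```shell
# Sketch of a minimal ceph-deploy run (hostnames/devices are examples)
ceph-deploy new mon1                  # write an initial ceph.conf naming mon1 as a monitor
ceph-deploy install mon1 osd1         # push Ceph packages to the nodes
ceph-deploy mon create mon1           # start the monitor
ceph-deploy osd create osd1:/dev/sdb  # turn a raw disk on osd1 into an OSD
```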
GUI or CLI
Made to be drag-and-drop easy.
My favorite flavor
Step-by-step
ceph.com/dev-notes/deploying-ceph-with-juju/
Don’t trust me
There are lots of tools and recipes available. I use Juju because it makes sense to my brain. Use what works for you!
Ceph && Juju
Get credentials
OpenStack or EC2
Get to the action already
Config
Most are YAML
Bootstrap
Most need a machine to act as the air traffic controller. Drop this in to start spinning up infrastructure.
Deploy!
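With Juju, the credentials/config/bootstrap steps above might look like this. The fsid, monitor secret, and device path are placeholders, not real values:

```shell
# Sketch: bootstrap the Juju environment, then write the ceph charm config.
juju bootstrap                 # spins up the "air traffic controller" machine
juju status                    # confirm the environment is up

cat > ceph.yaml <<'EOF'
ceph:
  fsid: a7f64266-0894-4f1e-a635-d0aeaca0e993   # from uuidgen (placeholder)
  monitor-secret: AQ...                        # from ceph-authtool --gen-print-key
  osd-devices: /dev/vdb                        # disks the OSDs will claim
EOF
```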
Spin up MONs
Quorum and auth
MOAR
Spin up OSDs
Storage and replication
Voila! Storage!
You now have a functional Ceph cluster that you can use for OpenStack, raw storage, or your own nefarious plans for world domination!
Deploy MOAR
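Continuing the Juju sketch, the MON and OSD spin-up above might be (charm names and unit counts are illustrative):

```shell
# Three monitors for quorum, then a batch of OSD machines
juju deploy -n 3 ceph --config ceph.yaml
juju deploy -n 10 ceph-osd --config ceph.yaml
juju add-relation ceph-osd ceph   # OSDs fetch cluster and auth info from the MONs
```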
Spin up RGW
RESTful gateway for object storage
For the over-achiever
Spin up MDS
Metadata servers for CephFS (beta)
Second Cluster
With incremental snapshots (RBD), gateway replication (RGW), and directory-level snapshots (CephFS), a second cluster can be set up for disaster recovery.
Extra Credit
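For the RGW piece of the extra credit, the Juju sketch continues (charm names as of this writing; check the charm store before relying on them):

```shell
# RESTful gateway for the Swift/S3 object APIs
juju deploy ceph-radosgw
juju add-relation ceph-radosgw ceph
juju expose ceph-radosgw        # open the gateway up to clients
```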
This Ceph thing sounds hot.
What’s Next?
An ongoing process
While the first pass for disaster recovery is done, we want to get to built-in, world-wide replication.
Storage efficiency
Currently underway in the community!
Headed to dynamic
Can already do this in a static pool-based setup. Looking to get to a use-based migration.
The evolution of open
This project is moving fast and we plan to continue moving with it!
Hop on board!
The Ceph Train
Geo-Replication Erasure Coding Tiering More OpenStack
Quarterly Online Summit
Coming up next week, this online summit puts the core devs together with the Ceph community.
Coming to a city near you!
Already planned for Santa Clara and London. Keep an eye out: http://inktank.com/cephdays/
Geek-on-duty
During the week there are times when Ceph experts are available to help. Stop by #ceph on irc.oftc.net
Email makes the world go
Our mailing lists are very active, check out ceph.com for details on how to join in!
Open Source is Open!
Get Involved!
CDS Ceph Day IRC Lists
Comments? Anything for the good of the cause?
WEBSITE: Ceph.com
SOCIAL: @[email protected]/cephstorage
SEE YOU NEXT YEAR!
THANKS FOR COMING