Ceph, storage cluster to go exabyte and beyond
TRANSCRIPT
Ceph: storage cluster to go exabyte and beyond
Alvaro Soto, OpenStack & Ceph engineer
# whoami
OS Lover
• Software developer
• Full Linux sysadmin stack
• Cepher / Stacker
# Agenda

• Storage Background (*** I'm not a storage guy ***)
• Ceph intro & architecture
• Myth or fact
• Ceph & OpenStack
Storage Background
*** I'm not a storage guy ***
Storage Background: Scale UP (in the old days)

[Diagram, repeated across slides 6-10: one "Computer / System" with directly attached disks serves every client; growth means piling more disks and more clients onto that single system.]
Storage Background: Scale OUT (in the cloud age)

[Diagram: many "Computer / System" nodes, each with its own disks, serve the clients together; growth means adding more nodes.]
Storage Background: Scale OUT (in the cloud age)

[Diagram: as the client population grows, more "Computer / System" nodes are added to serve them.]
Ceph: introduction & architecture
• Ceph was initially created by Sage Weil (DreamHost) in 2007
• Linus Torvalds merged the CephFS client into the Linux kernel in 2010
• Weil founded Inktank Storage in 2012
• First stable release, code name Argonaut, 2012
• A minimum of two releases per year (10 so far)
• Red Hat purchased Inktank in 2014
• Latest stable release, code name Jewel, 2016
Community focused!
• Commodity hardware (or standard hardware)
• Open source
• Enterprise support**
Philosophy / design:
• Scalable, software based
• Self managing / healing
• Open source, community focused
• No single point of failure
Data placement with CRUSH (Controlled Replication Under Scalable Hashing):
• Pseudo-random placement algorithm
• Rule based configuration
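CRUSH itself walks a hierarchy of buckets and rules, but its key property (any client can compute where an object lives from the cluster map alone, pseudo-randomly and repeatably, with no central lookup table) can be sketched with weighted rendezvous (HRW) hashing. This is a toy stand-in, not the real CRUSH algorithm, and the OSD names and weights below are invented for illustration:

```python
import hashlib
import math

def place(object_name: str, osds: dict, replicas: int) -> list:
    """Toy CRUSH stand-in: weighted rendezvous (HRW) hashing.

    Every client scores each OSD from a hash of (object, osd) plus the
    OSD's weight, then keeps the top `replicas` scorers. The result is
    deterministic, so any client computes the same placement.
    """
    def score(osd):
        digest = hashlib.sha256(f"{object_name}/{osd}".encode()).hexdigest()
        u = (int(digest, 16) % (2**53 - 1) + 1) / 2**53  # uniform in (0, 1)
        return -osds[osd] / math.log(u)                  # bigger weight, bigger score
    return sorted(osds, key=score, reverse=True)[:replicas]

# Hypothetical cluster map: osd.2 has twice the weight (capacity) of the others.
cluster = {"osd.0": 1.0, "osd.1": 1.0, "osd.2": 2.0, "osd.3": 1.0}
print(place("rbd_data.1234.0000000000000000", cluster, 2))
```

Adding an OSD only wins over the placements that hash in its favor, which is why this family of algorithms rebalances incrementally instead of reshuffling everything.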
Myth or fact
Performance???
http://www.ceph.com
Scalability???
The architecture is designed to scale with no theoretical upper bound.
Ceph & OpenStack
Ceph & OpenStack: integration
Image by RedHat
Ceph & OpenStack: features (some cool ones)
• Copy-on-write snapshots (RBD)
• KRBD for bare metal (RBD)
• Tiering (Ceph pools)
• Leaf configuration
• Ceilometer integration for RGW
• Multi-attach for RBD (Cinder)
• Import/export snapshots and RBD images (Cinder)
• Differential backup orchestration (Cinder)
• Deep flatten (RBD snapshots)
• RBD mirroring integration (Jewel)
• CephFS with Ganesha NFS -> Manila (Jewel)
• DevStack Ceph (from Kilo)
Q & A
THANK YOU
http://headup.ws
# locate
khyron
@alsotoes
CephersMX
StackersMX
[email protected]@kionetworks.com
Myth or fact
Storage backend de facto???
Ceph & OpenStack: features (some cool ones)
```
ceph-deploy disk zap vm04:vdb
ceph-deploy osd create --dmcrypt vm04:vdb
```
```
ceph osd getcrushmap -o crushmap.compiled
crushtool -d crushmap.compiled -o crushmap.decompiled
```
```
host vm04-encr {
    id -7           # do not change unnecessarily
    # weight 0.080
    alg straw
    hash 0          # rjenkins1
    item osd.5 weight 0.040
}

root encrypted {
    id -8           # do not change unnecessarily
    # weight 0.120
    alg straw
    hash 0          # rjenkins1
    item vm02-encr weight 0.040
    item vm03-encr weight 0.040
    item vm04-encr weight 0.040
}

rule encrypted_ruleset {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take encrypted
    step chooseleaf firstn 0 type host
    step emit
}
```
```
# Recompile the edited map and inject it back into the cluster
# (the file name crushmap.new is illustrative):
crushtool -c crushmap.decompiled -o crushmap.new
ceph osd setcrushmap -i crushmap.new

ceph osd pool create encrypted 128
ceph osd pool set encrypted crush_ruleset 1
```
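The pool above is created with 128 placement groups (PGs). Objects are mapped to a PG by hashing the object name modulo the PG count, and CRUSH then maps each PG onto OSDs. A minimal sketch of that first step, using md5 instead of Ceph's rjenkins hash and plain modulo instead of its "stable mod", so the exact PG ids will not match a real cluster:

```python
import hashlib

def object_to_pg(pool_id: int, object_name: str, pg_num: int) -> str:
    """Sketch: map an object name to a placement group id.

    md5 stands in for Ceph's rjenkins hash purely to show that the
    mapping is a deterministic, client-side computation: no metadata
    server is consulted to locate an object.
    """
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return f"{pool_id}.{h % pg_num:x}"  # PG ids are written pool.hex

print(object_to_pg(1, "rbd_data.1234.0000000000000000", 128))
```

With the rule above (`step chooseleaf firstn 0 type host`), each PG's replicas then land on distinct hosts under the `encrypted` root.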