
OPENSTACK® AGILITY. RED HAT® RELIABILITY.

1

Andrew Hatfield
Practice Lead - Cloud Storage and Big Data
November 2017

@andrewhatfield

CephFS: NOW FULLY AWESOME
What is the impact of CephFS on OpenStack?

2

WHAT IS Ceph

3 OpenStack Summit - Sydney November 2017

RGW
S3- and Swift-compatible object storage with object versioning, multi-site federation, and replication

LIBRADOS
A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP)

RADOS
A software-based, reliable, autonomic, distributed object store comprised of self-healing, self-managing, intelligent storage nodes (OSDs) and lightweight monitors (MONs)

RBD
A virtual block device with snapshots, copy-on-write clones, and multi-site replication

CEPHFS
A distributed POSIX file system with coherent caches and snapshots on any directory

OBJECT · BLOCK · FILE
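To make the layering concrete, here is a minimal command-line sketch of talking to RADOS and RBD directly; the pool and image names are assumptions for illustration only:

    # Store and read back a raw object in a RADOS pool ("demo" is a hypothetical pool)
    ceph osd pool create demo 32
    echo "hello rados" > hello.txt
    rados -p demo put hello-object hello.txt
    rados -p demo ls
    rados -p demo get hello-object hello-copy.txt

    # Create a 1 GiB virtual block device and snapshot it via RBD
    rbd create demo/vol1 --size 1024
    rbd snap create demo/vol1@first-snap

RGW and CephFS build on that same RADOS layer through their own gateways and daemons.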

WHAT IS CephFS

● CephFS is a POSIX-compatible distributed file system
● File-based workloads
● Managed and hierarchical shared workspaces
  ○ OpenStack Manila shares
● Coherent caching across clients
  ○ Synchronous updates visible everywhere
● All (meta)data stored in RADOS
● Clients access data directly via RADOS

4 OpenStack Summit - Sydney November 2017
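Because all data and metadata live in RADOS and clients talk to it directly, mounting CephFS needs little more than a monitor address and a cephx credential. A minimal sketch with the Linux kernel client, where the monitor address, user name, and secret file are placeholders:

    # Mount the root of the file system with the kernel CephFS client
    sudo mkdir -p /mnt/cephfs
    sudo mount -t ceph 192.168.1.7:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret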

CephFS Architecture

[Diagram: clients send metadata RPCs to ceph-mds and do file I/O directly against RADOS; the MDS keeps its journal and metadata in RADOS. A second diagram shows a client's open being handled by the ceph-mds daemons and its write going straight to the OSDs, with separate placement groups in the metadata pool and the data pool.]

● MDS runs on RADOS
  ○ No host-level storage
  ○ Serves as a cache for metadata
● CRUSH + RADOS are used to place file data and metadata

5 OpenStack Summit - Sydney November 2017
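As a concrete sketch of that data/metadata split, a file system is backed by two RADOS pools; the pool names and PG counts below are assumptions:

    # One pool for file data, one for MDS metadata
    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 16

    # Create the file system and confirm an MDS has become active for it
    ceph fs new cephfs cephfs_metadata cephfs_data
    ceph fs status cephfs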

WHAT CephFS IS GOOD FOR

CephFS suits a number of workloads well, including:

● Storage as a Service (STaaS)
● OpenStack Manila File as a Service
● High Performance Computing (HPC)
● Metadata-intensive workloads

6 OpenStack Summit - Sydney November 2017

OpenStack Manila

● OpenStack Shared Filesystems service
● APIs for tenants to request file system shares
● Support for several drivers
  ○ Proprietary
  ○ CephFS
  ○ “Generic” (NFS on Cinder)

[Diagram: tenant admin → Manila API → driver A / driver B → storage cluster/controller → guest VM]

1. The tenant admin asks the Manila API to create a share
2. The driver creates the share on the storage cluster/controller
3. The driver returns the share's export address
4. The address is passed to the guest VM
5. The guest VM mounts the share

7 OpenStack Summit - Sydney November 2017
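The same workflow from the tenant's side looks roughly like this with the Manila CLI; the share-type name and the cephx ID are assumptions:

    # Request a 1 GiB CephFS share (share type "cephfstype" is hypothetical)
    manila create cephfs 1 --name my-share --share-type cephfstype

    # Grant a cephx identity access to the share
    manila access-allow my-share cephx alice

    # The export location returned here is what the guest VM mounts
    manila share-export-location-list my-share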

WHY CephFS FOR MANILA

● Most OpenStack clouds use Ceph as their storage backend
● Open source
● Scalable data + scalable metadata
● POSIX

8 OpenStack Summit - Sydney November 2017

https://www.openstack.org/user-survey/survey-2017

CephFS SHARES IN HORIZON GUI

9 OpenStack Summit - Sydney November 2017

CephFS SHARES IN HORIZON GUI

10 OpenStack Summit - Sydney November 2017

CephFS NATIVE DRIVER (IN DATA PLANE)

To consume a share with the native driver, the client must:
● Create a keyring file with the ceph auth ID and secret key
● Create a ceph.conf file with the Ceph monitor addresses
● ceph-fuse mount the share
There is no auto-mount of shares.
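A hedged sketch of those three steps from inside the guest; the auth ID, monitor addresses, secret key, and share path are placeholders:

    # Keyring file with the share's ceph auth ID and secret key
    cat > alice.keyring <<EOF
    [client.alice]
        key = AQA...placeholder-secret...==
    EOF

    # Minimal ceph.conf with the Ceph monitor addresses
    cat > ceph.conf <<EOF
    [client]
        mon host = 192.168.1.7:6789,192.168.1.8:6789,192.168.1.9:6789
    EOF

    # ceph-fuse mount the share at the export path reported by Manila
    sudo mkdir -p /mnt/share
    sudo ceph-fuse /mnt/share \
        --id=alice --conf=./ceph.conf --keyring=./alice.keyring \
        --client-mountpoint=/volumes/_nogroup/<share-id>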

[Diagram: the OpenStack client/Nova VM sends metadata updates to the Metadata Server (MDS) and data updates to the OSD daemons, talking directly to the Ceph server daemons (MON, MDS, OSD).]

● The client is directly connected to Ceph's public network, so what about security?
  ○ Trusted clients plus Ceph (cephx) authentication
● No single point of failure in the data plane (HA of MON, MDS, OSD)

11 OpenStack Summit - Sydney November 2017

CephFS NATIVE DRIVER DEPLOYMENT

[Diagram: tenant VMs (Tenant A, Tenant B) on the compute nodes have two NICs, one on the external provider network and one on a storage provider network routed to the storage (Ceph public) network; the controller nodes run the Manila API and Manila share services plus Ceph MON and Ceph MDS on the public OpenStack service API (external) network; the storage nodes run the Ceph OSDs.]

● Ceph MDS placement: co-located with the MONs/Python services, or on dedicated nodes?
● Ceph MDS requirements: 8 GB RAM, 2 cores

12 OpenStack Summit - Sydney November 2017

CephFS NFS DRIVER (IN DATA PLANE)

● NFS-mount the share from the guest VM
● Metadata and data updates flow through the NFS-Ganesha gateway
● Clients connect to the NFS-Ganesha gateway rather than Ceph's public network: better security
● No single point of failure (SPOF) in the Ceph storage cluster (HA of MON, MDS, OSD)
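From the guest this reduces to an ordinary NFS mount; the gateway address and export path below are placeholders taken from the share's export location:

    # Mount the share over NFSv4.1 from the tenant VM
    sudo mkdir -p /mnt/share
    sudo mount -t nfs -o vers=4.1 10.0.0.5:/volumes/_nogroup/<share-id> /mnt/share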

[Diagram: the OpenStack client/Nova VM sends metadata and data updates over NFS to the NFS-Ganesha gateway, which speaks native CephFS to the Ceph server daemons (MON, MDS, OSD).]

● NFS-Ganesha itself needs to be HA to remove the SPOF from the data plane
● NFS-Ganesha active/passive HA is work in progress (Pacemaker/Corosync)

13 OpenStack Summit - Sydney November 2017

CephFS NFS DRIVER DEPLOYMENT

[Diagram: tenant VMs (Tenant A, Tenant B) on the compute nodes reach the NFS-Ganesha server over the external provider network; the controller nodes run the Manila API and Manila share services, Ceph MON, Ceph MDS, and NFS-Ganesha on the public OpenStack service API (external) and storage (Ceph public) networks; the storage nodes run the Ceph OSDs.]

● NFS-Ganesha server in the controller? It becomes a bottleneck in the data path and might affect other services running in the controller.

14 OpenStack Summit - Sydney November 2017

Kubernetes-hosted Ceph cluster all-in-one

[Diagram: the whole Ceph cluster (MON, MDS, OSDs) runs inside a Kubernetes (k8s) cluster; the controller nodes run the Manila API and Manila share services; tenant VMs (Tenant A, Tenant B) on the compute nodes reach it over the external provider network and the storage (Ceph public) network.]

15 OpenStack Summit - Sydney November 2017

SEE - FULLY AWESOME!
What have we seen?

So that's a lightning overview of CephFS and why it's now fully awesome:

● What is Ceph
● What is CephFS
● CephFS Architecture
● What is CephFS good for
● OpenStack Manila Architecture
● Why CephFS for Manila
● Horizon GUI
● Driver and Deployment Models
  ○ CephFS FUSE driver
  ○ NFS-Ganesha driver
  ○ Kubernetes-hosted Ceph and Ganesha

16 OpenStack Summit - Sydney November 2017

THANK YOU

plus.google.com/+RedHat

youtube.com/user/RedHatVideos

facebook.com/redhatinc

twitter.com/RedHatNews

linkedin.com/company/red-hat

17