
Page 1: Ceph@MIMOS: Growing Pains from R&D to Deployment

Ceph@MIMOS: Growing Pains from R&D to Deployment

Jing Yuan LUKE, Advanced Computing Lab

Ceph Day Kuala Lumpur, 22 August 2016

Page 2: Ceph@MIMOS: Growing Pains from R&D to Deployment

Outline

• MIMOS – A Brief Overview
• Distributed Object Storage in the Era of Big Data
• Our Ceph Journey
• Moving Forward
• Concluding Remarks

Copyright © 2016, MIMOS Bhd

Page 3: Ceph@MIMOS: Growing Pains from R&D to Deployment

MIMOS: An Overview

R&D in ICT (http://www.mimos.my)
Enhancing ICT industry growth through indigenous technologies


Page 5: Ceph@MIMOS: Growing Pains from R&D to Deployment

MIMOS R&D Labs: Data Centric Approach

• INFORMATION SECURITY
• PHOTONICS
• NANOELECTRONICS
• ADVANCED ANALYSIS & MODELING
• ARTIFICIAL INTELLIGENCE
• WIRELESS COMMUNICATIONS
• ADVANCED INFORMATICS
• ADVANCED COMPUTING
• MICRO-ELECTRONICS / ENERGY
• ACCELERATIVE TECHNOLOGY
• INTELLIGENT INFORMATICS
• USER EXPERIENCE


Page 7: Ceph@MIMOS: Growing Pains from R&D to Deployment

DATA Sources in the Big Data Era

• Structured data: enterprise data stores
• Unstructured data: sensors, social networks, mobile devices, wearables, tags

Page 8: Ceph@MIMOS: Growing Pains from R&D to Deployment

DATA: The New Currency

The TSUNAMI: structured data (enterprise data stores) plus unstructured data (sensors, social networks, mobile devices, wearables, tags)


Page 10: Ceph@MIMOS: Growing Pains from R&D to Deployment

DATA: The New Currency
Knowledge: The New Discovery
Enlightenment

More data generated, more storage required


Page 12: Ceph@MIMOS: Growing Pains from R&D to Deployment

Distributed Object Storage in the Big Data Era

Source: SNIA, "Swift Object Storage adding EC (Erasure Code)", 2014

• Other potential benefits:
  – Scales in both capability and capacity
  – Hardware agnostic
  – Self-healing, via replication or erasure codes
Page 13: Ceph@MIMOS: Growing Pains from R&D to Deployment

Copyright © 2016, MIMOS Bhd 13

Our Ceph Journey

2013:
• R&D
• PoC/Testbed

Page 14: Ceph@MIMOS: Growing Pains from R&D to Deployment

Our Ceph Journey: 2013

• Which solution/platform can provide:
  – Support for existing cloud initiatives; is it well accepted by:
    • OpenStack
    • OpenNebula
    • Others
• Provides different ways to access the backend:
  – Web services
  – Block-like access
  – File system
• Highly available (active-active)
• Linux upstream support
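The three access paths evaluated above correspond to Ceph's RGW, RBD, and CephFS interfaces; a hedged sketch (the pool, image, monitor, and mount names are assumptions):

```shell
# Web services: S3/Swift-compatible object access is served by the RADOS Gateway (radosgw)

# Block-like access: an RBD image exposed as a kernel block device
rbd create vm-pool/dev-disk --size 10240   # size in MB
rbd map vm-pool/dev-disk                   # appears as e.g. /dev/rbd0

# File system: CephFS mounted with the upstream kernel client
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
```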

Page 15: Ceph@MIMOS: Growing Pains from R&D to Deployment

Our Ceph Journey: 2013

[Diagram: PoC/testbed topology spanning the MIMOS TPM (HPCC1, HPCC2) and MIMOS KHTP sites]

Page 16: Ceph@MIMOS: Growing Pains from R&D to Deployment

Our Ceph Journey: 2014

• Big Data Storage Event
• First Internal Deployment

Page 17: Ceph@MIMOS: Growing Pains from R&D to Deployment

Our Ceph Journey: 2014

[Photo: Big Data Storage @ Big Data Week KL 2014]

Page 18: Ceph@MIMOS: Growing Pains from R&D to Deployment

Our Ceph Journey: 2014

• Simple Backup/Archiving
  – First attempt to use Ceph in a small production cluster
  – Backup application: BackupPC
  – Access: CephFS
  – Challenge: BackupPC creates a lot of small files (several kB each), and we saw plenty of space wasted due to the default 4 MB object size. Solution: create a dedicated pool, and use extended attributes to assign the mount point to the new pool and reduce the object size
  – Moving forward: considering RBD and Bacula
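The dedicated-pool workaround above can be sketched with CephFS file-layout extended attributes (the pool and directory names are illustrative, and the add_data_pool command spelling varies by release):

```shell
# Dedicated pool for BackupPC's many small files, added to CephFS
ceph osd pool create backuppc-pool 128
ceph mds add_data_pool backuppc-pool   # newer releases: ceph fs add_data_pool <fs> <pool>

# Point the backup directory at that pool, and shrink the layout from
# the 4 MiB default to 64 KiB (the stripe unit must match the object size here)
setfattr -n ceph.dir.layout.pool -v backuppc-pool /mnt/cephfs/backuppc
setfattr -n ceph.dir.layout.stripe_unit -v 65536 /mnt/cephfs/backuppc
setfattr -n ceph.dir.layout.object_size -v 65536 /mnt/cephfs/backuppc
```

New files created under the directory inherit the layout, so existing data would still need to be copied in to benefit.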

Page 19: Ceph@MIMOS: Growing Pains from R&D to Deployment

Our Ceph Journey: 2015

• Mi-ROSS 1.0 development
• More deployments for:
  – VDI
  – Government agencies, a law enforcement agency, etc.
  – Expanding the internal cluster


Page 21: Ceph@MIMOS: Growing Pains from R&D to Deployment

Our Ceph Journey: 2015

• VDI
  – Challenges:
    • 60+ Windows-based VMs accessed via RDP for a software development environment, with lots of I/O (code check-in/out, compiling, etc.)
    • 30+ development VMs
  – Solution: used an experimental feature, a KV-based datastore, instead of the typical journal-based datastore
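The experimental KV-based datastore mentioned above was selected through the OSD object-store setting in ceph.conf; a sketch of the Hammer-era configuration (option spellings changed between releases, so treat this as illustrative only):

```ini
[osd]
; Use the experimental key/value object store instead of the journal-based FileStore
osd objectstore = keyvaluestore
; Experimental features had to be acknowledged explicitly by name
enable experimental unrecoverable data corrupting features = keyvaluestore
```

This backend was later superseded by BlueStore, which also removes the separate journal.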

Page 22: Ceph@MIMOS: Growing Pains from R&D to Deployment

Our Ceph Journey: 2015

• Storage for cloud deployment and general uses
  – Mi-Cloud (MIMOS Cloud Platform): Ceph provides datastores for and to VMs
  – Support for multiple workloads via Mi-ROSS:
    • File sharing (NFS and SAMBA)
    • MyHDW

Page 23: Ceph@MIMOS: Growing Pains from R&D to Deployment

Our Ceph Journey: 2016

• Mi-ROSS 2.0 development
• Both Mi-Cloud and Mi-ROSS supporting 600+ VMs internally


Page 25: Ceph@MIMOS: Growing Pains from R&D to Deployment

Lessons Learnt from the Journey

• As we grow we have learnt about various operational issues, e.g.:
  – Impact on performance as we add/remove disks in the cluster, and how to work around it
  – Dealing with different network switches: different brands/models often implement the same protocols slightly differently
  – Monitoring and maintenance: developing monitoring agents/plug-ins, and developing our own SOPs
  – Fine-tuning both Ceph and the kernel
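One common way to soften the performance impact of adding or removing disks, as noted above, is to throttle backfill/recovery and bring new OSDs in gradually; a hedged sketch (the OSD id and the values are illustrative starting points, not recommendations):

```shell
# Limit concurrent backfill and recovery work per OSD while the cluster rebalances
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

# Add a new OSD at a low CRUSH weight, then step it up toward its final weight
ceph osd crush reweight osd.42 0.2
```

The trade-off is a longer rebalance window in exchange for less disruption to client I/O.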

Page 26: Ceph@MIMOS: Growing Pains from R&D to Deployment

Mi-ROSS

• A NAS-appliance-like layer built on top of Ceph, providing:
  – Ceph management: pools, RBD, CRUSH (planned), keys (planned)
  – File sharing: SAMBA, NFS, iSCSI (in progress)
  – Dashboard: utilization, health, other Ceph-related statistics
  – Other storage services: "Dropbox"-style sync (planned), backup (planned)
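The SAMBA/NFS sharing listed above is typically a re-export of a mounted CephFS path; a minimal illustrative smb.conf share (the path, share, and group names are assumptions, not Mi-ROSS's actual configuration):

```ini
[shared]
   ; Re-export a CephFS mount point over SMB
   path = /mnt/cephfs/shares
   browseable = yes
   read only = no
   valid users = @storage
```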

Page 27: Ceph@MIMOS: Growing Pains from R&D to Deployment

Moving Forward

• Distributed storage for disaster mitigation and smart cities
• How can distributed block storage such as Ceph be used to address edge computing?
  – Network latency
  – The ad-hoc nature of edge devices
  – Etc.

Page 28: Ceph@MIMOS: Growing Pains from R&D to Deployment

Concluding Remarks

• Distributed object storage such as Ceph can address the data deluge of the Big Data and IoT era
• MIMOS' Advanced Computing Lab welcomes all to collaborate in further enhancing this exciting open solution

Page 29: Ceph@MIMOS: Growing Pains from R&D to Deployment


Thank You