The journey to the hyperconverged solution from Cisco Systems
Jiří Cihlář
Team Leader – Data center
"I skate to where the puck is going to be, not where it has been."
— Wayne Gretzky
Technology
High Availability
Storage / Backup
Compute / Virtualization
Network
Data Applications
Security
Business objectives
Operation / Consumption model
DATA CENTER IS EVOLVING
Applications
Operation / Consumption
IT
LOB
DevOps
Bare Metal
Virtualized
Microservices
On-Prem
Public
Hybrid
The changes in all 3 vectors have an impact on infrastructure:
Virtualization / Cloud / DevOps
Virtualization / Number of apps / New technologies / Hybrid environments
REASONS FOR TRANSITION FROM TRADITIONAL TO MODERN DC
Picture explaining basic
principles of evolution in
nature from Darwin’s book
Why do we no longer speak about the traditional data center, but about the evolution to the modern data center?
Virtualization and Moore's Law
"The number of transistors incorporated into a chip will approximately double every 24 months…"
"Server virtualization allows multiple operating systems (virtual machines) to run on one piece of shared hardware."
Cloud
"Applications and services are provisioned automatically and in a standardized way, and are accessible remotely via the internet (in the public option)."
Compute
How compute in data centers has evolved
1990+
1994 – Compaq introduced the first rack-mountable server, the ProLiant series
1995 – Cubix ERS: the first attempt at blade servers
2001 – RLX Blade: the first modern blade servers
2002 – HP, Compaq, IBM, Dell, and Sun entered the market with blade servers
BLADE SERVERS
Design goals of blade servers:
• Density
• Cable reduction
• Consolidated access to BMCs (iLO) – integrated management
• Shared power & cooling
Achieved through:
• Increased integration
• Removal of I/O slots
• Adoption of Ethernet & FC switches in the enclosure
2009
Cisco announced UCS
In March 2009 came the debut of Cisco's Unified Computing System (UCS), aka Project California, which combines
• storage,
• server,
• virtualization,
• networking, and
• management
capabilities in one integrated system, managed from a single point.
CISCO UCS HISTORY
TRADITIONAL VS CISCO UCS MODEL
Traditional model:
• Each enclosure has 5–6 management points
• Lots of cabling
• Many IP addresses to manage
• Separate management server
Cisco UCS model:
• Fewer cables
• Fewer switches
• Fewer adapters
• Less FC zoning
• Overall less power
• Interoperates with existing SANs
• FCoE is an industry standard
UNIFIED FABRIC – SIMPLIFIES THE DATACENTER
From ad hoc and inconsistent…
…to structured, but siloed, complicated
and costly…
…to simple, optimized and automated
UNIFIED FABRIC – SIMPLIFIES THE DATACENTER
• Wire once for bandwidth, not connectivity
• Integrates as a single system into your data center
2x 1 link: 20/80 Gbps per chassis
2x 2 links: 40/160 Gbps per chassis
2x 4 links: 80/320 Gbps per chassis
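The per-chassis figures follow from simple multiplication: two fabric extenders per chassis, each with 1, 2, or 4 uplinks, at 10 Gbps or 40 Gbps per link (the standard UCS fabric speeds are assumed here; the slide does not state the hardware generation):

```python
# Per-chassis bandwidth = fabric extenders x links per FEX x link speed.
def chassis_bandwidth_gbps(links_per_fex: int, link_speed_gbps: int,
                           fex_count: int = 2) -> int:
    return fex_count * links_per_fex * link_speed_gbps

for links in (1, 2, 4):
    low = chassis_bandwidth_gbps(links, 10)   # 10 Gbps links
    high = chassis_bandwidth_gbps(links, 40)  # 40 Gbps links
    print(f"2x {links} links: {low}/{high} Gbps per chassis")
# -> 2x 1 links: 20/80, 2x 2 links: 40/160, 2x 4 links: 80/320
```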
UNIFIED FABRIC – SIMPLIFIES THE DATACENTER
Fabric Interconnect
UCS Blade
Heartbeat link (No Data)
Cisco VIC
Fabric Extender
LOOKING FOR SIMPLIFICATION IN COMPUTE
Single point of management
CLOUD - CONSEQUENCE
• Big cloud providers use commodity rack servers
• => Servers became a commodity
• => It was predicted that rack servers would become the preferred option over blade servers
CISCO UCS – UNIFIED APPROACH TO BLADE AND RACK SERVERS
CIMC
vNIC
vHBA
Mgmt
vHBA
vNIC
Fabric Interconnect
Fabric Extender
…OR USE IT AS COMMODITY SERVER…
VIRTUALIZATION - CONSEQUENCE
[Diagram: physical servers (CPU, RAM, disk, NIC) running either an OS + app directly or a hypervisor, each paired with a "Service Profile: HW identity & config"]
• Virtualization separates the OS from the physical server and brings HW-level abstraction to the OS
• The HW itself is not virtualized; its configuration stays tightly coupled to the HW
• => Needed: a solution that separates HW from its configuration – stateless computing
• => Cisco's unique answer is called the service profile
UCS SERVICE PROFILES
SAN settings:
• Number of vHBAs
• HBA WWN assignments
• FC boot parameters
• HBA firmware
• FC fabric assignments for HBAs
LAN settings:
• Number of vNICs
• VLAN assignments and VLAN tagging config for NICs
• QoS settings
• Border port assignment per vNIC
• NIC transmit/receive rate limiting
• PXE settings
• NIC firmware
• Advanced feature settings
Server settings:
• Server UUID
• Boot order
• RAID settings
• Disk scrub actions
• BIOS firmware and BIOS scrub actions
• Serial over LAN settings
• IPMI settings
• Remote KVM IP settings and firmware
• Call Home behavior
Deploying servers in 4 easy steps:
1. Create the pools and policies
2. Build a template
3. Create the logical servers (service profiles)
4. Associate the logical servers with physical hardware
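The four steps above amount to treating a server as data: a service profile is a bundle of identity and policy that exists independently of hardware, and "association" pushes that state onto a physical blade. The sketch below only illustrates the stateless-computing idea; the class and function names (`ServiceProfile`, `associate`) are hypothetical, not the UCS Manager API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ServiceProfile:
    """Logical server: identity and config, decoupled from hardware."""
    name: str
    uuid: str                                       # identity drawn from a pool
    macs: List[str] = field(default_factory=list)   # vNIC identities
    wwns: List[str] = field(default_factory=list)   # vHBA identities
    boot_order: List[str] = field(default_factory=list)

@dataclass
class Blade:
    """Physical, stateless hardware; identity arrives at association time."""
    slot: int
    profile: Optional[ServiceProfile] = None

def associate(profile: ServiceProfile, blade: Blade) -> None:
    blade.profile = profile  # the hardware assumes the profile's identity

# Steps 3 and 4: create the logical server, then bind it to a blade.
web01 = ServiceProfile(name="web01", uuid="hypothetical-uuid-0001",
                       macs=["00:25:B5:00:00:01"], boot_order=["SAN", "LAN"])
blade1 = Blade(slot=1)
associate(web01, blade1)
# If blade 1 fails, the same profile can be associated with a spare blade,
# which then boots with identical identity and configuration.
```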
2012
Cisco completes acquisition of Meraki
2016 – ANOTHER STEP TO SIMPLIFICATION
NEW WAY OF COMPUTE MANAGEMENT
Use the cloud
Analyze the telemetry
Combine management with automation
CISCO INTERSIGHT – AKA STARSHIP
A NEW ERA OF ADAPTIVE SYSTEMS MANAGEMENT
Compute & Storage
VIRTUALIZATION – CONSEQUENCE
• Virtualization introduced hardware abstraction for operating systems, enabled sharing of compute power, enabled live migration, etc.
• Other favorable conditions:
• Underutilized data centers of large service providers
• The mobility revolution and the evolution of applications running off-device over the network
• Sufficient network bandwidth and trustworthy internet access
• => The buzzword "cloud" was created as a new model of consuming computing technology
• => Cloud accelerated the evolution of all the technologies mentioned, and of many others used in data centers
CLOUD CHARACTERISTICS - CONSEQUENCE
Pay as you grow
Simple scalability and growth
Simple management and operation
Looking for cloud like solutions but on premise
CONVERGED INFRASTRUCTURE
[Diagram: traditional infrastructure – storage, compute, and network each with its own management vs. converged infrastructure – storage, compute, and network under a single management]
Traditional infrastructure:
• Application silos
• Components from different vendors
• Separate management
• Managed by different teams
• Different renewal cycles
• Complicated scalability
Converged infrastructure:
• Mostly a sales model
• Professional services
• Can be one part number
• Single management software
• Validated design
• Still complicated scalability
2012
VMware acquired Nicira
SOFTWARE DEFINED STORAGE
• Simplified management with an option for automation
• Standard management interface (API)
• Virtualized data path – block, file, and object interfaces that support applications written to these interfaces
• Seamless ability to scale the storage infrastructure without disruption to the specified availability or performance
Source: SNIA SDS white paper
x86 servers + software = SDS cluster (instead of proprietary HW)
HYPERCONVERGED SYSTEMS
Hypervisor Hypervisor Hypervisor
Compute Storage
Management
SDS
Compute
Network
Mgmt
Mgmt
Hyperconverged infrastructure, 1st gen.:
• Software-defined storage
• Network is not part of the solution
• Still multiple management systems for HW and SW
CONVERGED VS. HYPERCONVERGED SYSTEMS
[Chart: worldwide revenue, $B, 2015–2019. Converged systems: $7.2B in 2016, 5.2% CAGR. Hyperconverged systems: $1.9B in 2016, 65% CAGR, reaching $5B by 2019. Software-defined (SDx) is driving HCI. Both are growing and will co-exist.]
2016
Cisco announces Hyperconverged solution Hyperflex
NEW GENERATION OF HYPERCONVERGENCE
Simplicity in deployment: combine compute, storage, and network into one system
Simplicity in growing: independent compute and storage scalability, automatic node installation
Simplicity in operation: single management for compute, storage, and network
HYPERFLEX – COMPLETED HYPERCONVERGENCE
SDS
Compute
Network
Mgmt
Next-gen hyperconvergence:
• Software-defined storage
• Network is part of the solution
• One management system based on policy and intent
• Easy and automated installation and addition of new nodes
• Multicloud
Networking
Compute Storage
Management
CISCO UCS – PERFECT FOUNDATION
SDS
• UCS is not a server but a platform
• A modern solution for compute and network
• Single point of management
• LAN/SAN connectivity
Compute: UCS blade or rack servers – STATELESS COMPUTING
LAN/SAN: Fabric Interconnect
UCS Manager: Fabric Interconnect Mgmt
Network
CISCO HX DATA PLATFORM
[Diagram: four HX nodes, each running a hypervisor with a controller VM and guest VMs]
Storage capacity efficiency:
• Unique log-structured, pointer-based distributed file system
• Inline compression
• Inline deduplication
• Instant clones and snapshots
Performance:
• Dynamic data distribution
• #1 performing HCI
• Consistent performance
• No data locality
• Linear scalability using the whole cluster
• No RAID penalty
Data protection:
• Native data replication with factor 2 or 3 within the cluster
• Synchronous replication between sites with a stretch cluster
• Asynchronous replication between sites for disaster recovery
• Logical availability zones
Inside HX Data Platform
INSIDE HX DATA PLATFORM NODE
• Data services are offloaded to the HX Data Platform
• The HX controller VM assumes direct access to local storage
• The IOVisor module presents pooled storage to the hypervisor and stripes IO
[Diagram: a datastore/volume presented through the hypervisor via IOVisor and VAAI to the HX controller VM and guest VMs]
HYPERCONVERGED SCALE OUT AND DISTRIBUTED FILE SYSTEM
• Start with as few as three nodes
• The hyperconverged data platform installs in minutes
• Add servers, one or more at a time
• Network fabric policy configures QoS settings
• Data is distributed and rebalanced across servers automatically
• Retire older servers
[Diagram: the HX Data Platform spanning nodes, each running a hypervisor, a controller VM, and guest VMs]
DYNAMIC DATA DISTRIBUTION
Systems built on conventional file systems write locally, then replicate, creating performance hotspots. The HX Data Platform stripes data across all nodes simultaneously, leveraging cache across all SSDs for fast writes. This balances space utilization: no data migration is required following a VM migration.
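The contrast above can be shown with a toy placement model (purely illustrative; the real HXDP placement logic is more sophisticated): local-first writes pile all of a VM's blocks onto its home node, while striping spreads them evenly across the cluster regardless of where the VM runs.

```python
# Toy comparison: local-first placement vs. striping across all nodes.
def place_locally(blocks, home_node: int, node_count: int):
    """Conventional model: every write lands on the VM's home node."""
    usage = [0] * node_count
    usage[home_node] = len(list(blocks))
    return usage

def stripe(blocks, node_count: int):
    """Striped model: blocks are spread round-robin across the cluster."""
    usage = [0] * node_count
    for i, _ in enumerate(blocks):
        usage[i % node_count] += 1
    return usage

blocks = range(12)
print(place_locally(blocks, home_node=0, node_count=4))  # [12, 0, 0, 0] hotspot
print(stripe(blocks, node_count=4))                      # [3, 3, 3, 3] balanced
```

With striping, moving the VM to another node changes nothing about where its data lives, which is why no data migration follows a VM migration.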
WHY DISTRIBUTED DATA
• Performance – aggregate all resources into a unified pool
• Pool of cache
• Pool of capacity
• Simultaneously leverage all controllers
• Reduce hot spots
• No vMotion limitations or performance implications
• With data locality, ownership of the cache must be transferred to the local node
• Compare network vs. storage latencies: we have been using network-based storage for decades; the network is not the bottleneck
CAPACITY AND NETWORK UTILISATION
Conventional file systems write to their local file system, leading to unbalanced utilisation.
CAPACITY AND NETWORK UTILISATION
HX balances space utilisation: no data migration required following a VM migration
CONTINUOUS DATA OPTIMIZATION
• Inline deduplication: 20–30% space savings
• Inline compression: 30–50% space savings
• No special hardware
• No performance impact
• No config lock-in
• No additional license
A log-structured file system yields more efficient data optimization – lower cost.
STORAGE OPTIMISATION
• Deduplication/compression are always on and native to the file system
• Deduplication is best-effort
• Has both local and global properties
• These features have minimal impact, unlike other bolt-on solutions
• Dedup/compression are performed once – less CPU-intensive
• The log-structured file system provides performance, efficiency, and data services
• Low impact on incoming writes
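The "perform dedup once" point can be made concrete with a content-addressed store: each incoming block is fingerprinted, only previously unseen blocks are kept, and compression is applied once at ingest. This is a minimal sketch only – SHA-256 and zlib stand in for whatever the HX Data Platform uses internally:

```python
import hashlib
import zlib

class DedupStore:
    """Content-addressed block store: identical blocks are kept only once."""
    def __init__(self):
        self.blocks = {}  # fingerprint -> compressed block

    def write(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:                  # dedup: skip known blocks
            self.blocks[fp] = zlib.compress(data)  # compress once, on ingest
        return fp                                  # caller keeps the pointer

    def read(self, fp: str) -> bytes:
        return zlib.decompress(self.blocks[fp])

store = DedupStore()
a = store.write(b"A" * 4096)
b = store.write(b"A" * 4096)   # duplicate: no new block is stored
c = store.write(b"B" * 4096)
print(len(store.blocks))       # 2 unique blocks for 3 logical writes
```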
CONSISTENT PERFORMANCE
Flexible Performance Scaling Options
• Multiple VMs on each node of the cluster
• Ability to absorb VM perf hotspots
• Single large VM can drive cluster perf
• Perf scaling without moving around VMs
Each dot represents a virtual machine and its average IOPS over an hour of load.
[Charts: IOPS per VM (0–1,200 IOPS across ~160 VMs), Vendor B vs. Cisco HyperFlex – HyperFlex is #1 in performance and consistency]
HIGH RESILIENCY, FAST RECOVERY
• The platform can sustain a simultaneous 2-node failure without data loss; the replication factor is tunable
• If a node fails, the evacuated VMs re-attach with no data movement required
• A replacement node is automatically configured via its UCS service profile
NON-DISRUPTIVE OPERATIONS
• Stripe blocks of a file across servers
• Replicate one or two additional copies to other servers
• Handle entire server or disk failures
• Restore back to the original number of copies
• Rebalance VMs and data after replacement
• Rolling software upgrades
[Diagram: blocks A1–E3 of File.vmdk striped and replicated across four nodes, each running a hypervisor, a controller VM, and guest VMs]
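The failure-handling bullets above boil down to two invariants: every block lives on RF distinct nodes, and after a node failure the missing copies are rebuilt from survivors until every block is back at RF copies. A toy illustration with replication factor 3 (the round-robin placement and the `heal` helper are illustrative, not the HXDP algorithm):

```python
def place_replicas(block_id: int, nodes: list, rf: int = 3) -> list:
    """Pick rf distinct nodes for a block (toy round-robin policy)."""
    start = block_id % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(rf)]

def heal(placement: dict, failed: str, nodes: list, rf: int = 3) -> dict:
    """After a node failure, restore every block to rf copies."""
    healed = {}
    for block, replicas in placement.items():
        survivors = [n for n in replicas if n != failed]
        spares = [n for n in nodes if n != failed and n not in survivors]
        # Top back up from surviving copies onto spare nodes.
        healed[block] = survivors + spares[: rf - len(survivors)]
    return healed

nodes = ["node1", "node2", "node3", "node4"]
placement = {b: place_replicas(b, nodes) for b in range(8)}
placement = heal(placement, "node2", nodes)
# Every block is back at 3 copies, none of them on the failed node.
assert all(len(r) == 3 and "node2" not in r for r in placement.values())
```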
HyperFlex highlights
HYPERFLEX STRETCHED CLUSTER – CLOUD-SCALE DATA PLATFORM
Power mission-critical apps with: disaster avoidance, zero RPO, automated DR, maximum uptime
[Diagram: Site-A and Site-B, each running apps and DBs on the HX Data Platform, connected by synchronous replication across SSDs]
HYPERFLEX DATA PROTECTION – BUILT-IN 1-CLICK DISASTER RECOVERY
[Diagram: DB, app, and web VMs replicated between two HX Data Platform clusters via long-distance replication]
Test recovery:
• DR readiness
• Customize DR test parameters
Planned migration:
• Move VMs across data centers/clusters
• Re-protect after migration
Unplanned failover:
• Recover VMs after a disaster
• Re-protect after recovery
INDEPENDENT SCALING OF COMPUTE AND CAPACITY
• Scale cache or capacity within nodes
• Add nodes
• Scale compute-only blades or racks
• Non-HyperFlex hosts connect to storage with IOVisor
• No additional license needed
[Diagram: the HX Data Platform spanning converged nodes (hypervisor + controller) and compute-only hosts, all attached through IOVisor]
NODE SCALING OPTIONS IN HXDP
Support for up to 64 node clusters (32 HX converged, 32 compute-only)
Support for External Storage
[Diagram: adaptive infrastructure built from multiple HyperFlex nodes]
HyperFlex supports mounting external storage arrays via NFS, iSCSI, or FC
Example use cases:
• VM migration and data transfer
• Co-existence with an existing SAN environment/data
• Present RDMs from the SAN to VMs
• Use VMware Storage vMotion for migrations over Ethernet
• Use for backup and other application support
[Diagram: a HyperFlex UCS domain connected to NAS and SAN via NFS, iSCSI, or FC]
Use existing Fabric Interconnects:
• Leverage the existing investment in UCS Fabric Interconnects
• Support multiple HyperFlex clusters, CI stacks, bare-metal servers, etc.
Supporting your Application Ecosystem
Hybrid/Multicloud
Intelligent Management (AI Ops)
AI/ML Apps
Containers / Cloud Native Apps
Core Cloud
Edge
Cisco Workload Optimization Manager
Enterprise Apps
Cisco Container
Platform
Cisco Cloud Center
Centralized cloud-based management of the future
Next-generation management: SaaS simplicity, actionable intelligence
Intersight: remotely deploy & manage
Conventional infrastructure: 1. stage, 2. integrate and configure, 3. ship people and infrastructure to sites, 4. IT deploys on-site
HyperFlex with Intersight: 1. ship from factory to sites, 2. connect to Intersight, 3. deploy & manage remotely
Cisco Intersight:
• Connected TAC
• Secure and compliant
• API-driven, DevOps-enabled
• Policy-based orchestration
• Telemetry and analytics
• Compatibility (HCL) check
• Recommendations engine
CISCO CONTAINER PLATFORM FOR HYPERFLEX
IaaS: HyperFlex compute/storage; network: ACI or standalone Nexus 9k
On-premises Kubernetes: Cisco Container Platform
Container networking: Contiv / ACI CNI / Calico
Container storage: HyperFlex Flex driver
Turnkey Kubernetes:
• Simple & seamless Day-0 and Day-N K8s operations integrated into HyperFlex
• HyperFlex IaaS
Enterprise storage:
• Scale-out, HA file system
• Data protection, efficiency, and resiliency
Enterprise networking and security:
• Multi-tenant architecture, micro-segmentation, security policies
Common platform for legacy and modern apps:
• Co-existence of VMs and containers on the same platform
DevOps-ready IT:
• Enable developer agility with IT & security policies
• Avoid shadow IT
A turnkey appliance for enterprise Kubernetes: Cisco Container Platform
Single-vendor support:
• Fully supported by Cisco Global TAC
• A single throat to choke for the entire stack
HyperFlex summary
Cloud based centralized management
Seamless integration of Converged & Hyperconverged
Independent Scaling of Compute & Capacity
HYPERFLEX PLATFORM DIFFERENTIATION
• High-performance & scalable data platform: #1 performing HCI platform
• Enterprise-class data services & storage optimization: integrated dedup & compression with no performance penalty
• Deployment automation & simplicity: out-of-the-box service profiles, install/upgrade automation, automated cluster scaling
• Integrated high-performance network fabric: 10G/40G VIC/fabric, factory-installed integrated networking, fabric QoS
• Data protection, high availability & resiliency: native replication, backup/DR, stretch cluster, availability zones, fault-tolerant HA architecture
Broad range of supported workloads: ROBO (branch, IoT), VSI (app/web), VDI (Citrix, Horizon), databases (Oracle, SQL), mission-critical & ERP (SAP), analytics (Splunk), cloud-native apps (Docker, Kubernetes), collaboration (UC, HCS)
• Cost optimization through compute-only node support
• Consistent, low-latency performance
• Investment protection for existing storage and compute
• 3X lower TCO, 3X higher VM density, 64-node scale, linear scale-out performance
Intersight: monitoring, telemetry, analytics, policy, orchestration, proactive TAC, HX cluster management
CONCLUSION
Case study
• The leading producer of agricultural machines in the Czech Republic
• Employs more than 2,150 people (89% of them working in production)
• 2017 turnover of over 260 million EUR
• Difficult projects, strict customer requirements, and expanding production were the reasons for a thorough modernisation of the existing machinery and the purchase of new, progressive technologies
CASE STUDY: AGROSTROJ
How the story started… (2017)
• 25 physical servers
• 3 IT specialists
• Difficult operation, performance and capacity problems
Solution: Cisco HyperFlex
• All-in-one system
• SW-defined solution
• Internal cloud
Advantages of the solution for the customer:
• High availability, faster deployment of new VMs and apps
• Easy scaling of performance by adding nodes
• Savings: 1 IT employee, 3 active servers, rack space and energy savings
Thank you for playing with us!
Team Datacenter