Sangfor Hyper-converged Infrastructure Competition Analysis


Page 1: Sangfor HCI Competition Analysis 0303 - Cips

Sangfor Hyper-converged Infrastructure Competition Analysis

Page 2: Sangfor HCI Competition Analysis 0303 - Cips

Who are competing with us in this market?

HCI

Hypervisor

Of course there are also other competitors, like HP Hyper Converged System, Cisco HyperFlex, Oracle VirtualBox, etc. In these slides we focus only on the mainstream vendors above.

Page 3: Sangfor HCI Competition Analysis 0303 - Cips

Market Scope

Page 4: Sangfor HCI Competition Analysis 0303 - Cips

About Nutanix – The Enterprise Cloud Company

2100+ customers

Over 70 countries

6 continents

Make datacenter infrastructure invisible, elevating IT to focus on applications and services

Founded in 2009

1,500+ employees

Page 5: Sangfor HCI Competition Analysis 0303 - Cips

Distributed Storage Fabric

App Mobility Fabric

Acropolis

Prism

Acropolis Hypervisor

Nutanix Products

Infrastructure Management

Operational Insights

Planning

Page 6: Sangfor HCI Competition Analysis 0303 - Cips

Nutanix Solution Makeup

Distributed Storage Fabric: Web-scale Core | Compression | Deduplication | Tiering | Resilience | Data Protection

App Mobility Fabric: Workload Mobility | Expandability | Lock-in Free | Continuity | API

Azure

ESXi Hyper-V

AWS

Acropolis

One-click Infrastructure Management

One-click Operational Insight

One-click Planning

Prism

Acropolis Hypervisor (based on KVM)

Xtreme Computing Platform (XCP)

Page 7: Sangfor HCI Competition Analysis 0303 - Cips

Storage Network

SAN

Scale-out Servers

Power of Convergence

Converged compute and storage for virtualized environments

Page 8: Sangfor HCI Competition Analysis 0303 - Cips

Nutanix Hardware Platforms – Series Summary

SERIES

1000 Remote Office / Branch Office / DR / Oracle Licensing

3000 Mid-tier / General Purpose Virtualization

6000 Storage Heavy / Storage Only

7000 Virtual Desktop / Graphical Intensive (GPU)

8000 High Performance / Database / Exchange / High Throughput

9000 High Performance (All Flash) / Database / Exchange / High Throughput

Page 9: Sangfor HCI Competition Analysis 0303 - Cips

Nutanix Pain Points

High resource consumption from CVM: at least 1.4 GHz CPU and 16 GB memory for each CVM

Impact on performance due to massive HA and vMotion

All CVMs are involved in synchronization for data locality

Page 10: Sangfor HCI Competition Analysis 0303 - Cips

Comparison of Nutanix / Sangfor HCI

Feature | Nutanix | Sangfor HCI
Architecture | Virtualization, server and storage convergence | Architecture innovation through server/storage/network/NFV
Network virtualization | None | Yes
Hardware Platform | SuperMicro, Dell, Lenovo & UCS | Sangfor HCI Appliance or 3rd-party server
Solution | Appliance | Appliance or Software
NFV | None | Yes
Cluster Configuration | 3 nodes at least | 2 nodes at least
Overhead | 16 GB memory / 8 cores (CVM) per host | 8 GB memory per host
Automated Hot Add (CPU/Memory) | No (only manual hot add) | Yes
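The overhead row above is easy to quantify. A minimal illustrative sketch, assuming the per-host figures from this comparison hold uniformly (actual controller overhead varies with configuration):

```python
# Illustrative only: per-host controller overhead figures taken from the
# comparison table (Nutanix CVM: 16 GB per host; Sangfor: 8 GB per host).
NUTANIX_CVM_GB_PER_HOST = 16
SANGFOR_GB_PER_HOST = 8

def cluster_overhead_gb(nodes: int, gb_per_host: int) -> int:
    """Total memory consumed by controller components across a cluster."""
    return nodes * gb_per_host

# For a hypothetical 8-node cluster:
print(cluster_overhead_gb(8, NUTANIX_CVM_GB_PER_HOST))  # 128
print(cluster_overhead_gb(8, SANGFOR_GB_PER_HOST))      # 64
```

At 8 nodes the quoted figures imply 128 GB vs 64 GB of memory reserved for controller duties, which is the gap the table is pointing at.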

Page 11: Sangfor HCI Competition Analysis 0303 - Cips

Nutanix Shortcomings

We have to admit that Nutanix has done a great job on SDS (software-defined storage), but...

• No network virtualization, while Sangfor has it

• Appliance-only solution (Supermicro, Dell, Lenovo & UCS), while Sangfor can provide a pure software solution to reuse legacy servers

• No NFV security functionality, while Sangfor provides comprehensive security features on the Sangfor HCI platform

• At least 3 nodes to build a cluster, while Sangfor needs only 2, saving initial investment especially for SMEs and ROBO

• Pricey...

Page 12: Sangfor HCI Competition Analysis 0303 - Cips

About Simplivity

• Founded in 2008 as Ecological Solutions Corp
• Renamed to SimpliVT in 2009
• Renamed to SimpliVity in 2012
• HQ in Massachusetts
• Launched OmniCube in August 2012 at VMworld
• Changed product launch date to April 2013 (when they had their first sale)
• Acquired by HPE in Jan. 2017 for 650 million USD
• Founded by Doron Kempel (CEO), formerly Diligent Tech, EMC, Imedia

Page 13: Sangfor HCI Competition Analysis 0303 - Cips

SimpliVity Product Snapshot

• SimpliVity delivers a 2U hyper-converged appliance called OmniCube (running on OmniStack)
• OmniStack technology consists of:
  – SVT - a software controller that runs in a VM
  – OmniCube Accelerator Card (OAC) - a custom FPGA PCIe card providing CPU offload for storage operations like inline deduplication and compression
• OmniCubes are joined to a Federation (SimpliVity's cluster), which forms a resource pool boundary
• Within the Federation, OmniCubes are deployed in a vCenter Datacenter
• SimpliVity supports a maximum of 8 OmniCubes in a vCenter Datacenter
• Each VMDK is stored to a local node and one other cluster node (i.e. paired HA node model)
• OmniStack also uses local RAID data protection on each OmniCube
• Management tool strategy uses plug-ins
  – Currently supports a vSphere plug-in
  – No current support for Hyper-V or KVM - promised for 2+ years now
• Core offering on Dell hardware
  – CN-2000 (SMB)
  – CN-3000 (General Purpose)
  – CN-5000 (Performance)
• Added support for Cisco UCS and Lenovo servers
  – Meet-in-the-channel model for Cisco UCS C-series servers (no OEM agreement)
  – Meet-in-the-channel model for Lenovo servers (no OEM agreement)

Page 14: Sangfor HCI Competition Analysis 0303 - Cips

SimpliVity Pain Points

• Not Built to Scale... The way SimpliVity designed their architecture significantly limits the number of nodes they can scale to. They can only support a maximum of 4 OmniCubes in a cluster, or more accurately 4 nodes in a vCenter datacenter construct (as SimpliVity doesn't use the concept of clusters). To scale beyond 4 nodes you need a new vCenter datacenter. Larger cluster sizes don't increase system performance or the ability to recover from disk failures more quickly. Even management is not built for scale: you can only manage a max of 32 nodes in a Federation (SimpliVity's management boundary).

• Not 100% Software-Defined... All of the webscale platforms (AWS, Google, Azure, etc.) are designed to be 100% software-defined and, unlike legacy datacenter architectures, no longer have any dependencies on proprietary hardware. SimpliVity is not 100% software-defined. They take a dependency on a proprietary FPGA card called the OmniCube Accelerator Card (used to offload CPU for dedupe and compression), which impacts their ability to deliver cloud-paced technology updates in software. They also use legacy hardware RAID controllers for local data protection.

• Dishonest About System Resource Overhead... SimpliVity talks a lot about their data efficiency story and how overhead for features like dedupe and compression are offloaded to the Omnicube Accelerator Card (OAC). What they don’t tell you is that only CPU is offloaded and that the SVT (their controller VM) reserves 100GB (57GB on entry level systems) of system memory to support the data efficiency features of the OAC. This is far greater than any other HCI solution. This means that if you buy a system with 128GB of RAM, only 28GB is usable, forcing customers to purchase higher memory configurations or live with lower VM densities.
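The memory arithmetic behind this claim can be sketched directly. A minimal illustration, assuming a flat subtraction of the quoted SVT reservation from installed RAM (real reservations may vary by model and version):

```python
# Figures quoted in the text above; treat them as illustrative assumptions.
SVT_RESERVATION_GB = 100        # SVT controller-VM reservation, standard systems
SVT_RESERVATION_ENTRY_GB = 57   # reservation quoted for entry-level systems

def usable_memory_gb(installed_gb: int, entry_level: bool = False) -> int:
    """Memory left for guest VMs after the SVT controller reservation."""
    reserved = SVT_RESERVATION_ENTRY_GB if entry_level else SVT_RESERVATION_GB
    return installed_gb - reserved

print(usable_memory_gb(128))  # 28, matching the 128 GB example in the text
```

The same calculation shows why customers end up buying larger memory configurations: the reservation is fixed, so smaller systems lose a proportionally larger share of their RAM.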

Page 15: Sangfor HCI Competition Analysis 0303 - Cips

SimpliVity Pain Points

• Force Customers to Align to Optimal Block Size... SimpliVity's system is optimized for 8K IO and often asks customers to reformat their databases and application data to 8K in order to get optimal performance from the system. Performance comparisons are done with synthetic datasets perfectly optimized for their system and are not based in reality.

• SimpliVity's 2-Node Configuration Requires More Than 2 Nodes... One of SimpliVity's main selling points for ROBO is that they can support a minimum configuration of 2 nodes. However, their sales guys often don't tell you that you always require a 3rd node to run the Arbiter service (a 'witness-like' Windows-based service that manages tie-breaker scenarios, DNS split-brain, etc.), and you need to deploy a separate physical server to support vCenter and the SimpliVity management plug-in. So in reality, you still need a minimum of 3 physical servers to deploy any SimpliVity solution.

• Data Efficiency Features Not Configurable... Dedupe and Compression are always on and cannot be disabled even if the dataset cannot take advantage of these features.

Page 16: Sangfor HCI Competition Analysis 0303 - Cips

Sangfor Advantages over SimpliVity

• True scale-out architecture, supporting up to 64 nodes in one cluster
• 100% software-defined, totally hardware agnostic. Future-proof technology enables a smooth transition to hybrid cloud.
• Comprehensive network and security virtualization on one single software stack.
• Real 2-node deployment for ROBO
• Intuitive web-based management platform providing single pane-of-glass management.
• ......

Page 17: Sangfor HCI Competition Analysis 0303 - Cips

About VMware

• Founded in 1998
• HQ in Palo Alto, CA
• First shipping product (VMware Workstation) in May 1999
• Acquired by EMC in 2004 for $625M
• August 2007, EMC released 15% of VMware's shares in an IPO - NYSE: VMW
• 2014 Revenue - $6B
• 2014 Net Income - $0.86B
• 2014 Employee Count - 18,000
• Founders - Diane Greene, Mendel Rosenblum, Scott Devine, Edward Wang and Edouard Bugnion

Page 18: Sangfor HCI Competition Analysis 0303 - Cips

VMware Overview

VMware, Inc. is a subsidiary of Dell Technologies that provides cloud and virtualization software and services, and claims to be the first to successfully virtualize the x86 architecture commercially. Founded in 1998, VMware is based in Palo Alto, California. In 2004, it was acquired by and became a subsidiary of EMC Corporation.


vSphere: vSphere 4, May 21, 2009; vSphere 6.5, Oct. 18, 2016

NSX: acquired Nicira, 2012; NSX 6.3, 2016

vSAN: first launch, 2014; vSAN 6.5, 2016

Page 19: Sangfor HCI Competition Analysis 0303 - Cips

VMware SDDC Architecture


Page 20: Sangfor HCI Competition Analysis 0303 - Cips


vCenter Server NSX Manager

SDDC Manager: vRealize Operations, vRealize Log Insight, vRealize Automation (Optional)

Security

Page 21: Sangfor HCI Competition Analysis 0303 - Cips


aSV + aNET + aSAN

Compute Networking Storage

One Software Stack with Unified Mgmt

Page 22: Sangfor HCI Competition Analysis 0303 - Cips

VMware Shortcomings

VMware is the leader in virtualization, however…

• Separate software stacks to compose a full SDDC solution, with management silos

• Management requires a license

• Complex license structure; only the highest edition comes with full features

• User-unfriendly management UI and a long learning curve for newbies

• Extremely high cost of license, training and support


Page 23: Sangfor HCI Competition Analysis 0303 - Cips

VMware vSphere

vSphere Editions (un-bundled):
- ROBO
- Essentials
- Standard
- Enterprise Plus

vSphere is the collective term for VMware's virtualization platform. It includes the ESX hypervisor as well as the vCenter management suite and associated components. vSphere is considered by many the industry's most mature and feature-rich virtualization platform and had its origins in the initial ESX releases around 2001/2002.

Page 24: Sangfor HCI Competition Analysis 0303 - Cips
Page 25: Sangfor HCI Competition Analysis 0303 - Cips
Page 26: Sangfor HCI Competition Analysis 0303 - Cips

vSphere Pain Points

• Management platform (vCenter) sold separately, while Sangfor's is built in by default

• Expensive license, subscription and training costs, etc., while Sangfor's aSV is much more cost-effective.

• Requires the highest edition (Enterprise Plus) to get full features (like DRS), while Sangfor aSV includes everything in one version.

• Complex, with different versions and components, while Sangfor aSV is much lighter.

• Only manual hot add, no automated hot add, while Sangfor has it.

Page 27: Sangfor HCI Competition Analysis 0303 - Cips

About Microsoft Hyper-V

• Microsoft Hyper-V, codenamed Viridian and formerly known as Windows Server Virtualization, is a native hypervisor; it can create virtual machines on x86-64 systems running Windows. Starting with Windows 8, Hyper-V supersedes Windows Virtual PC as the hardware virtualization component of the client editions of Windows NT. A server computer running Hyper-V can be configured to expose individual virtual machines to one or more networks.

• Hyper-V was first released alongside Windows Server 2008 and has been available without charge for all Windows Server versions and some client operating systems since.

Windows Server 2012 R2 comes in four editions (Datacenter, Standard, Essentials and Foundation), but only Datacenter and Standard enable virtualization capabilities. There is also the free Hyper-V Server edition. Datacenter and Standard provide an equivalent set of capabilities and differ only in their virtualization rights, i.e. how many Operating System Environments (OSEs = licensed guests) are included. Datacenter is aimed at highly virtualized private and hybrid cloud environments.

Page 28: Sangfor HCI Competition Analysis 0303 - Cips

Microsoft Hyper-V Pain Points

• The Virtual Machine Manager feature that allowed you to convert existing physical computers into virtual machines through the physical-to-virtual (P2V) conversion process is no longer supported in System Center 2012 R2 Virtual Machine Manager.

• System Center is based on the .NET Framework. The primary administrator interfaces are not browser-based (unlike, e.g., what the new vSphere web client attempts to achieve).

• Limited Linux OS version support: a strong Windows ecosystem, but weaker on Linux as opposed to VMware and KVM (which Sangfor aSV is based on)

• Expensive, with a separate System Center license and subscription

• No automated hot add

Page 29: Sangfor HCI Competition Analysis 0303 - Cips

Hypervisor Comparison

Feature | Sangfor aSV | VMware vSphere | Microsoft Hyper-V
Cost for hypervisor ($) | <$1,380/socket; S&S 1Y: <$500 | $3,495/socket + S&S 1Y: $734 (Basic) or $874 (Production) | Datacenter: $6,155 for (up to) 2 sockets; Software Assurance (SA) per 2 physical CPUs is around $2,406 for 2 years or $3,609 for 3 years
Cost for management ($) | 0 | $4,995 (Std) + $1,049 (Basic) or $1,249 (Production) | Datacenter: $3,607 or Standard: $1,323 per managed endpoint (up to 2 CPUs)
Web-based Mgmt | Yes | Yes | No
Automated Hot Add | Yes | No | No
Guest OS support | Comprehensive | Comprehensive | Limited (for Linux)
P2V | Yes | Yes | No
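As a rough illustration of the hypervisor-cost row, here is a back-of-the-envelope sketch of first-year list cost for a single 2-socket host, using the figures from this comparison (upper bounds for Sangfor, Production-level S&S for vSphere; Hyper-V Datacenter is priced per 2 sockets and its SA is excluded here). These are the slide's list prices, not current pricing:

```python
# Hedged sketch: license + 1-year support, per 2-socket host, list prices
# as quoted in the comparison above.
def first_year_cost_2socket(license_per_socket, sns_per_socket=0, per_host=None):
    """First-year cost for one 2-socket host.

    per_host covers products priced per host/per 2 CPUs rather than per socket.
    """
    if per_host is not None:
        return per_host
    return 2 * (license_per_socket + sns_per_socket)

sangfor = first_year_cost_2socket(1380, 500)         # upper-bound figures
vsphere = first_year_cost_2socket(3495, 874)         # Production S&S
hyperv = first_year_cost_2socket(0, per_host=6155)   # Datacenter, SA excluded

print(sangfor, vsphere, hyperv)  # 3760 8738 6155
```

Even before management licensing (row two, where Sangfor's figure is 0), the per-host gap is visible; adding vCenter and System Center costs widens it further.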

Page 30: Sangfor HCI Competition Analysis 0303 - Cips

Sangfor HCI Highlights

• One software stack realizing compute, storage, network and security virtualization, eliminating DC silos
• Built-in, intuitive and easy-to-use web-based management platform without additional cost
• Totally hardware agnostic, a future-proof platform which enables a smooth transition to hybrid cloud
• Outstanding integrated networking and security features
• The most cost-effective HCI solution in the market

Page 31: Sangfor HCI Competition Analysis 0303 - Cips

Some other products...

Page 32: Sangfor HCI Competition Analysis 0303 - Cips

VxRail Product Snapshot

• VxRail is the latest hyper-converged appliance exclusively from EMC/VCE/VMware
• VxRail is built on Virtual SAN and Quanta servers
• VxRail effectively replaces VSPEX Blue
  o Customers will be able to upgrade to VxRail or join VSPEX Blue to a new VxRail cluster
  o VxRail will be sold by VCE/EMC/VMware and not exclusively through the channel like VSPEX Blue
• The EVO:RAIL program was also terminated.
  o OEM partners will continue to sell VSAN 6.2 through the VSAN Ready Nodes program
  o VMware partners will continue to support the <10 EVO:RAIL appliances ever sold for the next 5 years. No upgrade offered.
• VxRail includes the following software bundle:
  o VxRail Manager
  o vCenter Server
  o vSphere Enterprise Plus
  o VSAN
  o vRealize Log Insight
  o EMC RecoverPoint for VMs (15 VM license)
  o EMC Cloud Gateway (10TB license)
• VMware vSphere Loyalty Program allows customers to apply VMware vSphere Enterprise Plus licenses to their VxRail Appliance
• VxRail can scale from 4 nodes to 64 nodes
  o At release the minimum scale unit is a 4-node block
  o Single-node additions will be available in Q2CY16

Page 33: Sangfor HCI Competition Analysis 0303 - Cips

VxRail Models

• VxRail will be available (Q1) in the following hybrid models:
  o VxRail 60 (6c / 64GB per node) - $59K list
  o VxRail 120 (12c / 128GB per node) - $98K list
  o VxRail 160 (16c / 256GB per node) - $148K list
  o VxRail 200 (20c / 512GB per node) - $176K list

• VxRail will be available (Q2) in the following all-flash models:
  o VxRail 120F (12c / 128GB per node) - $TBD list
  o VxRail 160F (16c / 256GB per node) - $TBD list
  o VxRail 200F (20c / 512GB per node) - $TBD list
  o VxRail 240F (24c / 512GB per node) - $TBD list
  o VxRail 280F (28c / 512GB per node) - $TBD list

Page 34: Sangfor HCI Competition Analysis 0303 - Cips

VxRail Pain Points

• Management Not As Simple As EMC/VMware Claims... Customers require VxRail Manager, vCenter Operations Manager, VSAN Observer, vRealize Log Insight, and VCE Vision Intelligent Operations, plus tools for all the bolt-on software, to manage the appliance.

• Bolt-On Software and Services... Wherever possible, VCE has bolted on other EMC Federation products, e.g. RecoverPoint for VMs (15 VMs per appliance), Avamar, and EMC CloudArray (10 TB cloud storage license per appliance). A bolt-on software and services approach without any true integration just adds complexity to the solution.

• Multiple Support Organizations... Customers need to call EMC for support on the appliance, VMware for support on any VMware tools, and VCE for support on VCE Vision.

• Overlapping and Confusing Products... EMC/VMware have VxRAIL, which plugs into VCE Vision, and VxRACK, which plugs into EVO SDDC. Both use VSAN and scale to 64 nodes in a cluster. It's unclear why both products exist, similar to the rest of EMC's overlapping product portfolio.

Page 35: Sangfor HCI Competition Analysis 0303 - Cips

VSAN 6.5 Pain Points

• No Data Locality... VSAN writes VM data to any 2 nodes in the cluster regardless of the location of the VM. If the VM moves to a new host, there is no attempt to localize its data to the new host. Bottom line: VSAN has data locality by accident, Sangfor has it by design. VMware claims that remote reads have no impact on performance. While this may be the case in 4-node POCs running on test networks, there is definitely a performance impact from reading all data from remote nodes in larger production clusters with saturated networks. But VMware already knows this and quietly added a 1GB local cache in version 6.2 to mitigate it.

• Management Single Point of Failure... VSAN does not have a highly available control plane and requires separate infrastructure dedicated to management. Default vCenter deployments run in a single VM, which is a single point of failure. In contrast, the Sangfor HCI Management Platform runs on every node in the cluster and provides a highly available control plane that scales as the cluster grows.

Page 36: Sangfor HCI Competition Analysis 0303 - Cips
Page 37: Sangfor HCI Competition Analysis 0303 - Cips

Cisco HyperFlex Snapshot

Cisco HyperFlex System - Cisco's new hyper-converged appliance
– Built on Cisco UCS servers + HyperFlex HX Data Platform (Springpath software)
– Announced March 1st, 2016

HyperFlex HX Data Platform
– Uses the Springpath Data Platform under an OEM agreement
– Based on patented HALO (Hardware Agnostic Log-structured Object) architecture
– HALO uniformly stripes data across all servers in the cluster (256MB stripe units, 8 units per stripe)
– Leverages DRAM, SSD and HDD
  • SSD used as write log buffer
  • HDD used as a capacity tier
  • DRAM and SSD used as read cache
– Write buffer is distributed (min 3 SSDs in cluster); read cache is local to each node
– Can scale caching tier and persistence tier independently with auto-rebalancing (add SSDs or HDDs anytime)
– RF2 and RF3. RF3 is default, but they don't have erasure coding
– CVM model (consumes 8 vCPUs and 48-64GB)

Scalability
– Min 3 nodes, scale 1 node at a time
– Max 8 nodes in a cluster; max 4 HX clusters in a UCS FI Domain (32 nodes total)

Page 38: Sangfor HCI Competition Analysis 0303 - Cips

Hypervisor Support
– vSphere (available now)
– Hyper-V (2H2016)
– KVM (2H2016) - Note: CVM will run in a Docker container
– Management tool will be a separate plug-in for each hypervisor

Models
– Entry Level - HX-220C - 1U system with 7TB capacity
– Capacity Heavy - HX-240C - 2U system with up to 29TB capacity
– Compute Heavy - HX-240C + B200 blade servers (IO Visor VIB deployed on blades to provide access to HyperFlex storage)
– Cannot mix models in the same cluster

Pricing
– Initial appliance includes hardware, software, maintenance and support for 1 year
– Annual renewal requires additional software, maintenance and support
– Includes vSphere Foundation (max 3 nodes), which can be upgraded to Enterprise or Enterprise Plus

Use Cases (as positioned by Cisco):
– Desktop Virtualization
– Test & Dev
– ROBO

Page 39: Sangfor HCI Competition Analysis 0303 - Cips

HyperFlex Pain Points

• Unproven Technology... The Springpath data platform has only been in market for 1 year. There are very few customers running it. They only have 2 customer case studies.

• Unclear Roadmap... There have been no new features added in the last year and no real sense of what's coming or what the development priorities are for the platform.

• Limited Scale... Cisco reset the max cluster size from 64 to 8 nodes. This is a function of using unproven technology; Cisco is taking a cautious approach by limiting the size of deployments until they have confidence in the platform at scale.

• 10K SAS HDDs?... The HyperFlex capacity tier uses 10K SAS HDDs, which are more expensive and lower capacity than the 7.2K SATA/NL-SAS HDDs used in most hyperconverged systems. Cisco's explanation is that they want to keep the test matrix small.

• Data Efficiency Features Not Configurable... Dedupe and compression are always on and cannot be disabled even if the dataset cannot take advantage of these features.

• Poor Resiliency... After 2 HDD failures the cluster goes into read-only mode to prevent data loss. After 3 HDD failures the cluster will take itself offline.

• Plug-in Based Management Tool... The HyperFlex Data Platform is managed by the Cisco HyperFlex HX Data Platform Administration Plug-in for vCenter. Additional plug-ins will need to be developed for other hypervisors. They will be tied to the release train of each hypervisor vendor and will not have the ability to release technology innovation at their own pace or deliver management feature parity across hypervisors. Admins will not have a single-pane-of-glass management experience for multi-hypervisor environments.

Page 40: Sangfor HCI Competition Analysis 0303 - Cips

• Multiple Management Tools... In addition to the management plug-in, admins will have to use UCS Manager or the UCS Manager Plugin for vCenter to manage server hardware and networking. In addition, another web-based console is used for initial setup and must be used for adding or removing nodes from the cluster.

• Setup Pre-Reqs Still Manual... Need to manually create the vSphere datacenter, create 2 vSwitches, add hosts to the datacenter, put the hosts in maintenance mode, etc.

• Requires FIs... The initial appliance models are built on Cisco C-series rack-mount servers, which should be able to exist on their own. Cisco requires redundant fabric interconnects (FIs) in all HyperFlex deployments, which adds to the cost, complexity, and management overhead. Customers cannot use existing FIs; they must buy new ones dedicated to the HyperFlex cluster.

• Max 4 HX Clusters in a HyperFlex UCS FI Domain... Normally each UCS domain could have 20+ blade chassis under it, which would support 160+ servers. For HyperFlex deployments Cisco has set a limit of 4 HX clusters in a UCS domain. So after 32 nodes, a second set of FIs and a new instance of UCS Manager is required, which again adds cost, complexity and management overhead for larger deployments.

• Can't Attach External Storage... not even for data migration.

• No Mixed Clusters... You cannot mix HX220C and HX240C nodes in the same cluster. Another sign of product immaturity.

Page 41: Sangfor HCI Competition Analysis 0303 - Cips

HPE Hyperconverged 380 Snapshot

HPE announced the new Hyper Converged 380 appliance
• Built with StoreVirtual VSA on DL380 servers
• 2-node initial entry point
• Scales to 16 nodes in a cluster
• Improved setup/deployment: 8 simple questions, 5-click install, deploying VMs in 15 minutes
• Custom templates for VM provisioning
• Built-in lifecycle management and analytics
• 3-click non-disruptive firmware and driver updates
• Replication is via HP Remote Copy - can use any StoreVirtual-based storage as a target
• Choose from x options based on HP DL380 servers: - TBD
• Bundled with:
  – VMware vSphere
  – HPE StoreVirtual VSA
  – HPE OneView InstantON (installer)
  – NEW HPE OneView "HC UX"
  – NEW HPE OneView "HC UX" for Mobile
  – HPE Remote Copy
  – HPE Cloud Optimizer
  – Factory software installation
  – HPE 24x7 Technology Support Services
• HPE Flexible Capacity Program - pay-as-you-grow/shrink - variable monthly payment aligned to usage
• Integration with HP CloudSystem 9.0 (HPE's cloud-in-a-box)
• Channel play

Page 42: Sangfor HCI Competition Analysis 0303 - Cips

HPE Hyperconverged 380 Pain Points

• The New UI is Very Basic... HPE targeted the new OneView-based UI at IT generalists (non-vCenter administrators). The interface is very simple, but unfortunately it's also very basic. Admins will still have to use vCenter and other tools to complete anything but the most basic management tasks.

• Still StoreVirtual... A new UI is just a band-aid on top of an aging and irrelevant StoreVirtual product. Since HP acquired LeftHand Networks almost a decade ago, they have done very little to improve the underlying product. Most of HPE's focus has been on different packaging exercises and not on core product improvement.

• Not Built for Virtualization... StoreVirtual still uses LUNs, volumes, disk groups and other legacy storage constructs that map poorly to VMs and VHDs. All storage features (snapshots, clones, etc.) are implemented at the array or volume level.

• Not Built for Flash... StoreVirtual was not built for flash. It doesn't even have the ability to differentiate between SSDs and HDDs. Administrators have to define different tiers and then manually add SSDs or HDDs to those tiers before StoreVirtual can do any sort of tiering. The HC380 product has removed this manual process, provided that SSDs and HDDs are in the correct disk slots.

• No Support for Hypervisor Upgrades... StoreVirtual does not have the ability to assist with upgrading hypervisor versions.

• Limited Scalability... HC380 can only scale to a max of 16 homogeneous nodes in a cluster.

• Can't Mix Node Types in a Cluster... Nodes in an HC380 cluster must have a homogeneous configuration (i.e. the same CPU, memory, and disk configuration). This is another sign of product immaturity in what is supposed to be a mature product. Even most of the smaller HCI startups can do this.

Page 43: Sangfor HCI Competition Analysis 0303 - Cips

Conclusion - Sangfor HCI Highlights

• World's first and only 3rd Gen HCI platform
  – Comprehensive E2E one-stop solution which deeply integrates compute, storage, networking and NFV
  – One unified & intuitive web-based management platform in one pane of glass
  – One step to cloud with smooth evolution

• Unprecedented flexibility and simplicity
  – Appliance or software, your choice!
  – 2 nodes to start a cluster, the best choice for ROBO
  – One single unified platform with single-vendor support
  – One license suite with no confusion

• Most cost-effective HCI solution in the market
  – Significant TCO reduction through hardware & software consolidation
  – Requires no O&M specialist

• Thorough and professional local support
  – Dedicated pre-sales in the SEA region and comprehensive CTI remote support from Malaysia