
G00302826

Hype Cycle for Storage Technologies, 2016
Published: 5 July 2016

Analyst(s): Pushan Rinnen, Julia Palmer

This Hype Cycle evaluates storage-related hardware and software technologies in terms of their business impact, adoption rate and maturity level to help users decide where and when to invest.

Table of Contents

Analysis..................................................................................................................................................3

What You Need to Know.................................................................................................................. 3

The Hype Cycle................................................................................................................................ 3

The Priority Matrix.............................................................................................................................5

Off the Hype Cycle........................................................................................................................... 6

On the Rise...................................................................................................................................... 7

Shared Accelerated Storage....................................................................................................... 7

Cloud Data Backup.................................................................................................................... 8

Management SDS.................................................................................................................... 10

File Analysis.............................................................................................................................. 12

At the Peak.....................................................................................................................................13

Data Backup Tools for Mobile Devices...................................................................................... 13

Open-Source Storage...............................................................................................................14

Copy Data Management...........................................................................................................16

Information Dispersal Algorithms...............................................................................................17

Integrated Systems: Hyperconvergence................................................................................... 18

Infrastructure SDS.................................................................................................................... 20

Solid-State DIMMs....................................................................................................................22

Sliding Into the Trough.................................................................................................................... 24

Integrated Backup Appliances.................................................................................................. 24

Data Sanitization.......................................................................................................................25

Object Storage......................................................................................................................... 27

Storage Cluster File Systems.................................................................................................... 28


Cross-Platform Structured Data Archiving.................................................................................29

Emerging Data Storage Protection Schemes............................................................................ 31

Enterprise Endpoint Backup..................................................................................................... 32

Hybrid DIMMs...........................................................................................................................34

Virtual Machine Backup and Recovery......................................................................................35

Cloud Storage Gateway............................................................................................................37

Disaster Recovery as a Service.................................................................................................38

Public Cloud Storage................................................................................................................40

Online Data Compression......................................................................................................... 41

Climbing the Slope......................................................................................................................... 42

SaaS Archiving of Messaging Data........................................................................................... 42

Storage Multitenancy................................................................................................................44

Solid-State Arrays.....................................................................................................................45

Automatic Storage Tiering........................................................................................................ 47

Enterprise Information Archiving................................................................................................49

Data Deduplication................................................................................................................... 50

Network-Based Replication Appliances.................................................................................... 52

Continuous Data Protection......................................................................................................53

External Storage Virtualization...................................................................................................55

Entering the Plateau....................................................................................................................... 57

EFSS........................................................................................................................................ 57

Appendixes.................................................................................................................................... 58

Hype Cycle Phases, Benefit Ratings and Maturity Levels.......................................................... 60

Gartner Recommended Reading.......................................................................................................... 61

List of Tables

Table 1. Hype Cycle Phases................................................................................................................. 60

Table 2. Benefit Ratings........................................................................................................................60

Table 3. Maturity Levels........................................................................................................................ 61

List of Figures

Figure 1. Hype Cycle for Storage Technologies, 2016.............................................................................4

Figure 2. Priority Matrix for Storage Technologies, 2016......................................................................... 6

Figure 3. Hype Cycle for Storage Technologies, 2015...........................................................................59


Analysis

What You Need to Know

This year's Hype Cycle for Storage Technologies adds four new profiles. The previous software-defined storage (SDS) profile has been split into two profiles: infrastructure SDS and management SDS. Infrastructure SDS has replaced and expanded on the technology profile formerly called "virtual storage appliance." Management SDS was created to feature software products that enable automated and optimized storage resource management and storage virtualization, with I/O optimization as a subcategory, but no longer a separate technology profile. Another new profile — shared accelerated storage — presents an emerging storage architecture where nonvolatile memory such as flash is used to scale out to form a central storage pool for dozens of compute clients that require very high performance and ultra-low latency. Last, but not least, cloud data backup was created to focus on emerging backup products that protect data generated in the cloud, replacing the previous profile for cloud-based backup services, which primarily focused on backup as a service.

The Hype Cycle

Gartner's Hype Cycle illustrates the typical life cycle that a new storage technology goes through before it reaches mainstream adoption. The technologies that have moved the farthest along the Hype Cycle curve in the past year include enterprise file synchronization and sharing (EFSS), online data compression and disaster recovery as a service (DRaaS), the profile name of which has been changed from "cloud-based disaster recovery services." EFSS technologies have become mature, with two orthogonal directions: either data infrastructure modernization or modern content collaboration and business enablement. Online data compression has been well adopted in backup target appliances as well as primary storage arrays, especially solid-state arrays (SSAs). DRaaS is now being offered by more than 250 providers, with production service instances more than doubling in the past year. SSAs continued to see fast user adoption in the past year, along with sizable movement on the Hype Cycle. Hyperconverged integrated systems have reached the peak of the Hype Cycle, with 5% to 20% target market penetration.

High-profile technologies with high and transformational business impacts tend to reach their Hype Cycle peak and trough quickly, while low-profile technologies or technologies with low or moderate business impact could move much more slowly on the curve, and sometimes may never reach their peak before approaching the Trough of Disillusionment or becoming obsolete. Examples of fast-moving and high-impact technologies associated with storage include data deduplication, SSAs and hyperconverged integrated systems. Examples of slow-moving technologies with moderate business impact include appliance-based replication and storage multitenancy (see Figure 1). Business impact may change during the life cycle of the technology.


Figure 1. Hype Cycle for Storage Technologies, 2016

[Figure: the 2016 Hype Cycle curve plots expectations against time through the Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity phases. Each profiled technology is positioned on the curve and coded by years to mainstream adoption: less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before plateau. As of July 2016.]

Source: Gartner (July 2016)


The Priority Matrix

The Priority Matrix maps the benefit rating for each technology against the length of time before Gartner expects it to reach the beginning of mainstream adoption. This alternative perspective can help users determine how to prioritize their storage hardware and storage software technology investments and adoption. In general, companies should begin with technologies that are rated transformational in business benefits and are likely to reach mainstream adoption quickly. These technologies tend to have the most dramatic impact on business processes, revenue or cost-cutting efforts. Infrastructure SDS's business benefit is now elevated to transformational as organizations start to realize cost savings.

After these transformational technologies, users are advised to evaluate high-impact technologies that will reach mainstream adoption status in the near term, and work downward and to the right from there. This year, two new technology profiles have high business benefit ratings: cloud data backup and shared accelerated storage. Information dispersal algorithms' benefit is raised to "high" this year, as it is implemented in more storage arrays, hyperconverged systems and SDS.
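Read as a procedure, the guidance above is a two-key sort: benefit rating first, then time to mainstream adoption. The short Python sketch below is purely illustrative; the adoption-time values assigned to each example entry are assumptions for the sake of the demonstration, not Figure 2's actual column placements.

    # Prioritization rule implied by the Priority Matrix: highest benefit first,
    # then shortest time to mainstream adoption. Adoption times are assumed.
    BENEFIT_ORDER = {"transformational": 0, "high": 1, "moderate": 2, "low": 3}
    ADOPTION_ORDER = {"<2 years": 0, "2-5 years": 1, "5-10 years": 2, ">10 years": 3}

    technologies = [
        ("Solid-State Arrays", "transformational", "<2 years"),
        ("Infrastructure SDS", "transformational", "5-10 years"),
        ("Object Storage", "high", "2-5 years"),
        ("Shared Accelerated Storage", "high", "5-10 years"),
        ("Data Sanitization", "moderate", "2-5 years"),
    ]

    prioritized = sorted(
        technologies,
        key=lambda t: (BENEFIT_ORDER[t[1]], ADOPTION_ORDER[t[2]]),
    )
    for rank, (name, benefit, adoption) in enumerate(prioritized, start=1):
        print(f"{rank}. {name} ({benefit}, {adoption})")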

Figure 2 shows where the storage technologies evaluated in this year's Hype Cycle fall on the Priority Matrix. Note that this is Gartner's generic evaluation; each organization's graph will differ based on its specific circumstances and goals.


Figure 2. Priority Matrix for Storage Technologies, 2016

[Figure: the Priority Matrix plots each technology's benefit rating (transformational, high, moderate or low) against its years to mainstream adoption (less than 2 years; 2 to 5 years; 5 to 10 years; more than 10 years). As of July 2016, the technologies group by benefit rating as follows.]

Transformational: Solid-State Arrays; Data Deduplication; Infrastructure SDS

High: Continuous Data Protection; EFSS; Enterprise Endpoint Backup; Enterprise Information Archiving; Integrated Systems: Hyperconvergence; Object Storage; Online Data Compression; Public Cloud Storage; Storage Cluster File Systems; Virtual Machine Backup and Recovery; Cloud Data Backup; Copy Data Management; File Analysis; Information Dispersal Algorithms; Management SDS; Open-Source Storage; Shared Accelerated Storage

Moderate: Disaster Recovery as a Service; Automatic Storage Tiering; Cloud Storage Gateway; Cross-Platform Structured Data Archiving; Data Backup Tools for Mobile Devices; Data Sanitization; External Storage Virtualization; Hybrid DIMMs; Integrated Backup Appliances; Network-Based Replication Appliances; SaaS Archiving of Messaging Data; Solid-State DIMMs; Storage Multitenancy; Emerging Data Storage Protection Schemes

Low: (none)

Source: Gartner (July 2016)

Off the Hype Cycle

Technologies that have fallen off the chart because of high maturity and widespread adoption may still be discussed in the related Gartner IT Market Clock research. The following technology profiles have been retired or replaced in this year's Hype Cycle:


■ Cloud-based backup services: This profile has been replaced with a new profile called "cloud data backup" to focus on technologies protecting cloud-native data, instead of backup-as-a-service technologies.

■ Data encryption technologies, HDDs and SSDs: This technology has failed to be widely adopted by storage arrays due to a performance penalty.

■ FCoE: Development for this technology has been discontinued.

■ I/O optimization: This technology has been merged into management SDS.

■ Linear tape file systems: This technology is commonly supported by tape libraries.

■ Software-defined storage: This profile has been split into two separate profiles: infrastructure SDS and management SDS.

■ Virtual storage appliance: This technology has been merged into infrastructure SDS.

On the Rise

Shared Accelerated Storage

Analysis By: Julia Palmer

Definition: Shared accelerated storage is an architecture that has been designed to tackle new data-intensive workloads by bringing high-performance, high-density, next-generation solid-state shared storage to compute servers over a low-latency network. This technology delivers the benefits of shared storage with the performance of server-side flash by leveraging standardized, super-low-latency Nonvolatile Memory Express (NVMe) technology over Peripheral Component Interconnect Express (PCIe).

Position and Adoption Speed Justification: Shared accelerated storage is an emerging architecture that takes advantage of the latest nonvolatile memory, currently flash technology, to address the needs of extreme-low-latency workloads. This fast-evolving technology is replacing server-side flash, which accelerates workloads at the compute layer but has limited capacity and is managed as a silo on a server-by-server basis. Unlike server-attached flash storage, shared accelerated storage can scale out to high capacity, uses ultra-dense flash, has high-availability features and can be managed from a central location serving dozens of compute clients. Current shared accelerated storage specifications promise to deliver over 1 million input/output operations per second (IOPS) per unit of rack space. Shared accelerated storage is a nascent technology, with just a few vendors and products entering the market. It is expected to grow as the cost of flash continues to decline. Expect it to be delivered as a stand-alone product or as part of a converged integrated systems infrastructure offering in the next five years.
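As a rough illustration of what the quoted figures imply, the back-of-the-envelope Python sketch below converts an IOPS-per-rack-unit rating into aggregate bandwidth and a per-client share. The block size, pool size and client count are assumed values chosen for the example, not vendor specifications.

    # Rough bandwidth arithmetic for a shared accelerated storage pool.
    # Only the 1 million IOPS per rack unit figure comes from the profile text;
    # everything else is an illustrative assumption.
    iops_per_rack_unit = 1_000_000
    rack_units = 2                 # assumed size of the shared pool
    block_size_bytes = 4 * 1024    # assumed 4KB I/O size
    compute_clients = 24           # assumed number of attached servers

    total_iops = iops_per_rack_unit * rack_units
    bandwidth_gb_per_s = total_iops * block_size_bytes / 1e9
    iops_per_client = total_iops / compute_clients

    print(f"Aggregate IOPS:        {total_iops:,}")
    print(f"Aggregate bandwidth:   {bandwidth_gb_per_s:.1f} GB/s at 4KB I/O")
    print(f"Fair share per client: {iops_per_client:,.0f} IOPS across {compute_clients} servers")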

User Advice: Shared accelerated storage products feature a disaggregated compute and storage architecture that allows flash memory acceleration to be deployed centrally and accessed by many servers connected via high-bandwidth, low-latency networks. The typical design has dozens of high-capacity PCIe-based flash modules forming a single pool of storage presented to servers over an NVMe over Fabrics network. NVMe fabric technology provides low-latency access to the shared storage resource pool by avoiding the overhead associated with traditional protocol translation. Shared accelerated storage products will appeal to the most demanding workloads that require extreme performance, storage persistency, ultra-low latency and a compact form factor. Today, shared accelerated storage aims at extremely high-performance and big data analytics workloads, such as applications built on top of Hadoop, and in-memory database use cases, such as SAP HANA. While the benefits of 10 times higher IOPS and five times lower latency than industry-leading solid-state arrays are very attractive for Tier 0 high-performance computing (HPC), the technology is currently in the beginning stages of adoption, so cost and capacity are inhibiting mass deployment. In the next five years — as adoption accelerates, awareness increases and the price point drops — any application that benefits from ultra-low-latency persistent storage can take advantage of this architecture. Buyers need to be aware that nonvolatile memory technology is evolving, and solution architecture must be flexible enough to adopt the newest, most effective technology, such as 3D XPoint. The most prominent use cases for shared accelerated storage will be online transaction processing (OLTP) databases, data mining, real-time analytics, HPC applications for video editing, financial processing and analyses, online trading, oil and gas exploration, genomic research, and fraud detection.

Business Impact: Shared accelerated storage can have a dramatic impact on business cases where large bandwidth, high IOPS and low-latency requirements are critical to the bottom line of the enterprise. It is designed to power emerging Tier 0 workloads such as enterprise analytics; real-time, big data analyses; and high-volume transactions that require performance, capacity and availability beyond what modern general-purpose solid-state arrays can deliver. While it requires some retooling on the compute side (PCIe/NVMe) to integrate with this type of storage, its benefits are likely to attract high-performance computing customers that will be able to show positive ROI. It is unlikely to be relevant as general-purpose storage; it is targeted at analytics and transactional workloads where low latency is a crucial requirement and a business differentiator.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: E8 Storage; EMC; Mangstor; X-IO Technologies; Zstor

Recommended Reading: "Vendor Rating: EMC"

"Market Guide for In-Memory Computing Technologies"

"Market Trends: The Next-Generation Server-Side SSD Landscape"

Cloud Data Backup

Analysis By: Pushan Rinnen; Robert Rhame

Definition: Cloud data backup refers to policy-based backup tools that can back up and restore data generated natively in the cloud. Such data could be generated by software as a service (SaaS) applications, such as Microsoft Office 365 and Salesforce, or by infrastructure as a service (IaaS) compute services, such as Amazon Elastic Compute Cloud (EC2) instances.

Position and Adoption Speed Justification: Backup of data generated natively in the public cloud is an emerging requirement as more and more organizations realize that cloud providers are not responsible for data loss, even when data is generated in the cloud. SaaS applications' native data protection capabilities are typically limited but free of charge (although there may be a charge for a large restore), while native backup of IaaS usually resorts to snapshots and scripting, and incurs additional storage cost. Small backup vendors have existed for a few years, offering backup of SaaS applications (primarily Salesforce and Google Apps) to another cloud location; some of them were acquired by larger vendors. As Microsoft Office 365 gains more momentum, Office 365 backup capabilities are starting to emerge as well. IaaS data backup is a more nascent area, catering to organizations' need to back up data when they move production applications to the IaaS cloud.

User Advice: Before migrating on-premises applications to SaaS or IaaS, organizations should ensure that data generated in the cloud has enough protection and recoverability to meet their data protection requirements. They should assess the native backup capabilities within an IaaS infrastructure and a SaaS application and any associated additional cost. They should ensure that their contract with the cloud provider clearly specifies the capabilities and cost associated with the following items in terms of native data protection:

■ Backup/restore methods: This area measures how backup and restore are done, whether a backup is application-consistent, and whether a backup can capture all the relevant data that has protection requirements. Restore capabilities should include descriptions of how and what users can restore themselves.

■ Retention period: This is to measure how long cloud providers can retain native backups free of charge or with additional cost.

■ Service-level agreements (SLAs) on recovery point objective (RPO) and recovery time objective (RTO): RPO measures how frequently data is backed up natively in the same cloud infrastructure, to gauge the data loss window. RTO measures how long it takes to restore at different granular levels, such as a file, a mailbox or an entire application.

■ Additional storage cost due to backup: Insist on concrete guidelines on how much storage IaaS's native snapshots will consume, so that organizations can predict backup storage cost.

If organizations find that the cloud-native data protection is insufficient, they should evaluate third-party backup tools, focusing on application consistency, process automation, backup retention, performance and backup location flexibility. See the sample vendor list for examples of third-party vendors.
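The contract criteria above lend themselves to simple quantification before committing to a provider. The following Python sketch is a hypothetical model of the data-loss window (RPO) implied by a snapshot schedule and of the extra snapshot storage an IaaS volume would accumulate; the volume size, change rate, frequency and retention are assumptions, not provider figures.

    # Hypothetical RPO and incremental-snapshot storage model for an IaaS volume.
    # Replace the assumed values with figures from your provider contract.
    volume_gb = 500              # protected volume size
    daily_change_rate = 0.03     # assumed 3% of the volume changes per day
    snapshots_per_day = 4        # backup frequency
    retention_days = 30          # how long snapshots are kept

    # Worst-case data-loss window is the interval between snapshots.
    rpo_hours = 24 / snapshots_per_day

    # Each incremental snapshot roughly stores the data changed since the last one.
    changed_gb_per_snapshot = volume_gb * daily_change_rate / snapshots_per_day
    retained_snapshots = snapshots_per_day * retention_days
    snapshot_storage_gb = volume_gb + changed_gb_per_snapshot * retained_snapshots

    print(f"Implied RPO: {rpo_hours:.1f} hours")
    print(f"Estimated snapshot storage: {snapshot_storage_gb:.0f} GB "
          f"({retained_snapshots} incrementals on a {volume_gb} GB baseline)")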

Business Impact: As more production workloads migrate to the cloud (either in the form of SaaS or IaaS), it has become critical to protect data generated natively in the cloud. SaaS and IaaS providers typically offer infrastructure resiliency and availability to protect against system or site failure. However, when data is lost due to their infrastructure failure, the providers are not financially responsible for the value of lost data and only provide limited credit for the period of downtime. When data is lost due to user errors, software corruption or malicious attacks, user organizations are fully responsible themselves. The more critical cloud-generated data is, the more critical it is for users to provide recoverability of such data.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Asigra; Commvault; Datos IO; Datto; Druva; EMC; Microsoft; N2W Software; Nakivo; OwnBackup

Recommended Reading: "You May Need Additional Backup to Prevent Data Loss From Your SaaSSolutions"

"Data Backup/Recovery Factors to Consider When Adopting SaaS"

Management SDS

Analysis By: Julia Palmer; Dave Russell

Definition: Management software-defined storage (SDS) coordinates the delivery of storage services to enable greater storage agility. It can be deployed as an out-of-band technology with robust policy management, I/O optimization and automation functions to configure, manage and provision other storage resources. Management SDS products enable abstraction, mobility, virtualization, storage resource management (SRM) and I/O optimization of storage resources to reduce expenses, making external storage virtualization software products a subset of the management SDS category.

Position and Adoption Speed Justification: While management SDS is still largely a vision, it is a powerful notion that could revolutionize storage architectural approaches and storage consumption models over time. The concept of abstracting and separating physical or virtual storage services via bifurcating the control plane (action signals) regarding storage from the data plane (how data actually flows) is foundational to SDS. This is achieved largely through programmable interfaces (such as APIs), which are still evolving. SDS requests will negotiate capabilities through software that, in turn, will translate those capabilities into storage services that meet a defined policy or SLA. Storage virtualization abstracts storage resources, which is foundational to SDS, whereas the concepts of policy-based automation and orchestration — possibly triggered and managed by applications and hypervisors — are key differentiators between simple virtualization and SDS.

The goal of SDS is to deliver greater business value than traditional implementations via better linkage of storage to the rest of IT, improved agility and cost optimization. This is achieved through policy management, such that automation and storage administration are simplified with less manual oversight required, which allows larger storage capacity to be managed with fewer people. Due to their hardware-agnostic nature, management SDS products are more likely to provide deep capability for data mobility between private and public clouds to enable a hybrid cloud enterprise strategy.
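To make the control-plane idea concrete, the hypothetical Python sketch below shows how a provisioning request expressed as a policy (desired IOPS and latency) might be matched against heterogeneous backends by a management SDS layer. The backend catalog, policy fields and selection rule are invented for illustration and do not correspond to any vendor's API.

    # Minimal illustration of policy-driven placement in a management SDS control plane.
    # Backends, policy fields and the selection rule are hypothetical.
    backends = [
        {"name": "all-flash-array", "max_iops": 500_000, "latency_ms": 0.5, "cost_per_gb": 0.80},
        {"name": "hybrid-array",    "max_iops": 80_000,  "latency_ms": 3.0, "cost_per_gb": 0.25},
        {"name": "cloud-object",    "max_iops": 5_000,   "latency_ms": 40.0, "cost_per_gb": 0.03},
    ]

    def place(policy):
        """Return the cheapest backend that satisfies the requested service level."""
        candidates = [
            b for b in backends
            if b["max_iops"] >= policy["min_iops"] and b["latency_ms"] <= policy["max_latency_ms"]
        ]
        if not candidates:
            raise ValueError("no backend satisfies the policy")
        return min(candidates, key=lambda b: b["cost_per_gb"])

    # Requests are expressed as desired outcomes, not device-level settings.
    print(place({"min_iops": 50_000, "max_latency_ms": 2.0})["name"])   # all-flash-array
    print(place({"min_iops": 100, "max_latency_ms": 100.0})["name"])    # cloud-object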

Page 10 of 62 Gartner, Inc. | G00302826

Page 11: Hype Cycle for Storage Technologies, 2016 - … · Gartner's Hype Cycle illustrates the typical life cycle that a new storage technology goes through before it reaches mainstream

This research note is restricted to the personal use of [email protected]

This research note is restricted to the personal use of [email protected]

User Advice: Gartner's opinion is that management SDS targets end-user use cases where the ultimate goal is to improve or extend existing storage capabilities. However, the value propositions and leading use cases of management SDS are not clear, as the technology itself is fragmented across many categories. The software-defined storage market is still in a formative stage, with many vendors entering and exiting the marketplace and tackling different SDS use cases. When looking at different products, identify and focus on the use cases applicable to your enterprise, and investigate each product for its capabilities.

Gartner recommends proof of concept (POC) implementations to determine suitability for broaderdeployment.

Top reasons for interest in SDS, as gathered from interactions with Gartner clients, include:

■ Improving the management and agility of the overall storage infrastructure through better programmability, interoperability, automation and orchestration

■ Storage virtualization and abstraction

■ Performance improvement by optimizing and aggregating storage I/O

■ Better linkage of storage to the rest of IT and the software-defined data center

■ Operating expenditure (opex) reductions by reducing the demands of administrators

■ Capital expenditure (capex) reductions from more efficient utilization of existing storage systems

Despite the promise of SDS, there are potential problems with some storage point solutions that have been rebranded as SDS to present a higher value proposition versus built-in storage features, and these need to be carefully examined for ROI benefits.

Business Impact: Management SDS's ultimate value is to provide broad capability in the policy management and orchestration of many storage resources. While some management SDS products focus on enabling provisioning and automation of storage resources, more comprehensive solutions feature robust utilization and management of heterogeneous storage services, allowing mobility between different types of storage platforms on-premises and in the cloud. As a subset of management SDS, I/O optimization SDS products can reduce storage response times, improve storage resource utilization and control costs by deferring major infrastructure upgrades. The benefits of management SDS are in improved operational efficiency, achieved by unifying storage management practices and providing common layers across different storage technologies. The operational ROI of management SDS will depend on IT leaders' ability to quantify the impact of improved ongoing data management, increased operational excellence and reduction of opex.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging


Sample Vendors: Atlantis Computing; DataCore Software; EMC; FalconStor; ioFABRIC; IBM; Infinio; PernixData; Primary Data; VMware

Recommended Reading: "Top Five Use Cases and Benefits of Software-Defined Storage"

"Innovation Insight: Separating Hype From Hope for Software-Defined Storage"

"Technology Overview for I/O Optimization Software"

"Multivendor SDS: A Complex and Costly Myth"

"Should Your Enterprise Deploy a Software-Defined Data Center?"

File Analysis

Analysis By: Alan Dayley

Definition: File analysis (FA) tools analyze, index, search, track and report on file metadata and, in some cases (e.g., in unstructured data environments), on file content. FA tools are usually offered as software options. FA tools report on file attributes and provide detailed metadata and contextual information to enable better information governance and data management actions.

Position and Adoption Speed Justification: FA is an emerging technology that assists organizations in understanding the ever-growing repository of unstructured "dark" data, including file shares, email databases, SharePoint, enterprise file sync and share (EFSS) and cloud platforms such as Microsoft Office 365. Metadata reports include data owner, location, duplicate copies, size, last accessed or modified, file types and custom metadata. The primary use cases for FA in unstructured data environments include, but are not limited to: organizational efficiency and cost optimization; information governance and analytics; and risk mitigation. The desire to mitigate business risks (including security and privacy risks), identify sensitive data, optimize storage cost and implement information governance are some of the key factors driving the adoption of FA. The identification, classification, migration, protection, remediation and disposition of data are key features of FA tools.
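At its simplest, the metadata reporting described above amounts to walking a file tree and aggregating attributes. The Python sketch below illustrates that basic pass; the scan path and staleness threshold are assumptions, and commercial FA tools add far richer ownership, permission and content classification across shares, email and cloud repositories.

    # Minimal file-analysis pass: total size, stale files and duplicate content groups.
    import hashlib
    import os
    import time
    from collections import defaultdict

    SCAN_ROOT = "/mnt/fileshare"   # hypothetical file share mount point
    STALE_AFTER_DAYS = 365         # files untouched this long are flagged as stale

    def sha256_of(path, chunk=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for block in iter(lambda: fh.read(chunk), b""):
                digest.update(block)
        return digest.hexdigest()

    by_hash = defaultdict(list)
    stale, total_bytes = [], 0
    now = time.time()

    for dirpath, _, filenames in os.walk(SCAN_ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                info = os.stat(path)
                digest = sha256_of(path)
            except OSError:
                continue           # skip unreadable entries
            total_bytes += info.st_size
            if now - info.st_mtime > STALE_AFTER_DAYS * 86400:
                stale.append(path)
            by_hash[digest].append(path)

    duplicates = {h: paths for h, paths in by_hash.items() if len(paths) > 1}
    print(f"Scanned {total_bytes / 1e9:.1f} GB; {len(stale)} stale files; "
          f"{len(duplicates)} duplicate content groups")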

User Advice: Organizations should use FA to better understand their unstructured data, including where it resides and who has access to it. Data visualization maps created by FA can be presented to other parts of the organization and be used to better identify the value and risk of the data, enabling IT, line of business and compliance organizations to make better-informed decisions regarding classification, information governance, storage management and content migration. Once known, redundant, outdated and trivial data can be defensibly deleted, and retention policies can be applied to other data.

Business Impact: FA tools reduce risk by identifying which files reside where and who has access to them. They support remediation in such areas as the elimination or quarantining of sensitive data, identifying and protecting intellectual property, and finding and eliminating redundant and outdated data that may lead to unnecessary business risk. FA shrinks costs by reducing the amount of data stored. It also classifies valuable business data so that it can be more easily leveraged and analyzed, and it supports e-discovery efforts for legal and regulatory investigations. In addition, FA products feed data into corporate retention initiatives by using file attributes.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Acaveo; Active Navigation; Bloomberg; Controle; Hewlett Packard Enterprise; IBM-StoredIQ; Kazoup; STEALTHbits Technologies; Varonis; Veritas

Recommended Reading: "Market Guide for File Analysis Software"

"Organizations Will Need to Tackle Three Challenges to Curb Unstructured Data Glut and Neglect"

"How to Move From Data Negligence to Effective Storage Management"

"Information Governance Gets Real: Four Case Studies Show the Way Out of Information Chaos"

"Save Millions in Storage Costs With These Effective Data Management Best Practices"

At the Peak

Data Backup Tools for Mobile Devices

Analysis By: John Girard

Definition: This technology profile describes tools and services that back up and restore mobile device data. Backups may be made via a cable (that is, tethered) or over the internet to a server hosted at a company site or a cloud service provider. Restoration of some user data, but usually not the entire system image, is accomplished after loss, theft or migration.

Position and Adoption Speed Justification: Mobile device providers offer free-to-inexpensive cloud sync and share services that can be used to back up and restore parts of user data and configuration profiles. These services, along with enterprise file share and sync (EFSS) tools, provide convenient but often unmanaged and incomplete backups that are unlikely to meet business recovery, security and privacy requirements. Enterprise-grade tools may be expensive, and the results are inconsistent because mobile OSs are typically sandboxed and APIs for managing mobile content are limited and inconsistent across different mobile OSs. If budgets are limited and complexity is a barrier, the buyer may simply choose to take their chances with sync tools. The pre-Peak status indicates that companies increasingly realize they need mobile backup, but lack robust product and service choices.

User Advice: Users of both company devices and personal devices are responsible for ensuring that important business data is preserved, given the inconsistency of current backup methods and the chaotic effects of distributed mobile information storage. In typical circumstances where a restoration is needed, users may simultaneously have access to more than one copy of the same calendar, contacts and emails for some period of time from a workstation, tablet and phone, because of persistent storage of the inbox and of certain files that had been placed in cloud storage. Companies should get in the habit of archiving copies of important information as part of normal workflow design, and should provide attractive choices for EFSS and backup solutions that meet business security requirements. Users must be directed to get in the habit of regularly backing up critical business information or face the consequences.

Business Impact: IT planners should document best and worst practices for mobile backups, and tie the recommendations into their business continuity plans, training programs and help desk procedures. EFSS tools are a partial solution, but they may not meet business security and privacy requirements and are not designed for managing structured backups. Left on their own, users will prefer free or inexpensive services lacking company-controlled encryption, where ownership of and future access to stored data may fall into the hands of the storage vendor. Neither EFSS nor enterprise backup systems are yet suited to comprehensively back up mobile devices; however, long-term plans should prioritize centrally managed, certified, professional-grade tools. Recent debates over the need for and use of legally mandated back doors raise scenarios where companies could completely lose disclosure control over confidential information. A solid backup solution can help companies to comply promptly with information requests, before they become subject to blanket search demands.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Emerging

Sample Vendors: Acronis; Asigra; Carbonite; Citrix ShareFile; Code42; Commvault; Datacastle; Druva; Hewlett Packard Enterprise; IDrive

Recommended Reading: "Magic Quadrant for Enterprise File Synchronization and Sharing"

"Magic Quadrant for Enterprise Backup Software and Integrated Appliances"

"The Smartphone Backdoor Debate Puts Your Company's Confidentiality Rights at Risk"

"Make Self-Service Backup and Recovery a Reality for Enterprise Applications"

"Critical Capabilities for Enterprise Endpoint Backup"

Open-Source Storage

Analysis By: Arun Chandrasekaran

Definition: Open-source storage is core storage software that is used to create a storage array, as well as data protection and management software. It involves software abstracted from the underlying hardware for which the source code is made available to the public through a free license. Similar to proprietary storage, open-source storage software supports primary, secondary and tertiary storage tiers, as well as heterogeneous management.

Position and Adoption Speed Justification: Although open-source storage has been around for a long time, it has been mainly relegated to file-serving and backup deployments in small business environments. Products such as FreeNAS (TrueNAS for business environments), Openfiler and Amanda have been in use for many years. Recent innovations in multicore processors and CPU core density, combined with an innovative open-source ecosystem, are making open-source storage attractive for cloud and big data workloads and as a potential alternative to proprietary storage. As cloud computing, big data analytics and information archiving push the capacity, pricing and performance frontiers of traditional scale-up storage architectures, there has been renewed interest in open-source software (OSS) as a means to achieve high scalability in capacity and performance at lower acquisition costs.

The rise of open-source platforms such as Apache Hadoop and OpenStack, which are backed by large, innovative communities of developers and vendors, together with the entry of disruptive vendors such as Red Hat (Gluster Storage, Ceph Storage) and Intel (Lustre), is enabling enterprises to seriously consider open-source storage for use cases such as cloud storage, big data and archiving. More vendors are also bringing products to market based on the popular OpenZFS project. There has been a growing number of open-source storage projects for container-based storage, such as Flocker and Minio.

User Advice: Although open-source storage offers a less-expensive upfront alternative to proprietary storage, IT leaders need to measure the benefits, risks and costs accurately. Some enterprise IT organizations overstate the benefits and understate the costs and risks. Conversely, with the emerging maturity of open-source storage solutions, enterprise IT buyers should not overlook the value proposition of these solutions. IT leaders should actively deploy pilot projects, identify internal champions, train storage teams and prepare the overall organization for this disruptive trend. Although source code can be downloaded for free, it is advisable to use a commercial distribution and obtain support through a vendor, because OSS requires significant effort and expertise to install, maintain and support. Customers deploying "open core" or "freemium" storage products need to carefully evaluate the strength of lock-in against the perceived benefits. This is a model in which the vendor provides proprietary software — in the form of add-on modules or management tools — that functions on top of OSS.

In most cases, open-source storage is not general-purpose storage. Therefore, choose use cases that leverage the strengths of open-source platforms — for example, batch processing or a low-cost archive for Hadoop, and test/development private cloud for OpenStack — and use them appropriately. It is important to focus on hardware design and choose cost-effective reference architectures that have been certified by the vendors and for which support is delivered in an integrated manner. Overall, on-premises integration, management automation and customer support should be key priorities when selecting open-source storage solutions.

Business Impact: Open-source storage is playing an important role in enabling cost-effective, scalable platforms for new cloud and big data workloads. Gartner is seeing rapid adoption among technology firms and service providers, as well as in research and academic environments. Big data and private cloud deployments in enterprises are also promising use cases for open-source storage, where Gartner is witnessing keen interest. As data continues to grow at a frantic pace, open-source storage will enable customers to store and maintain data, particularly unstructured data, at a lower acquisition cost, with "good enough" availability, performance and manageability.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Cloudera; ClusterHQ; Hortonworks; iXsystems; IBM; Micro Focus International; OpenStack; Pivotal Labs; Red Hat; SwiftStack

Recommended Reading: "Market Guide for Open-Source Storage"

"I&O Leaders Can Benefit From Storage Industry Innovation"

"Should I Use Open Source in My Infrastructure?"

Copy Data Management

Analysis By: Pushan Rinnen; Garth Landers

Definition: Copy data management (CDM) refers to products that capture application-consistent data via snapshots in primary storage and create a live "golden image" in a secondary storage system, where virtual copies in native disk format can be mounted for use cases such as backup/recovery or test/development. Support for heterogeneous primary storage is an essential component. Different CDM products have different additional data management capabilities.

Position and Adoption Speed Justification: Copy data management (CDM) has become a hyped term as various vendors start to use it to promote their product capabilities. CDM awareness and buying cycles are increasing. Some storage array vendors use the term to describe their array-internal capabilities, which do not fit Gartner's definition in terms of heterogeneous primary array support. CDM adoption continues to focus on two areas: (1) consolidated backup and disaster recovery, and (2) test/development workflow automation. New products tend to focus more on the first use case. The main challenge for CDM products is being leveraged across these two very different use cases, as these represent very different buying centers and decision makers. The lack of products from major vendors and the inconsistent usage of the CDM term impede fast adoption.

User Advice: IT should look at CDM as part of a backup modernization effort, or when managing multiple application copies for testing/development has become costly, overwhelming or a bottleneck. CDM could also be useful for organizations that are looking for active access to secondary data sources for reporting or analytics, due to its separation from the production environment. Enterprises should also look at opportunities for database and application archiving for storage reduction or governance initiatives to further justify investment. Due to the short history of the new architecture and vendors, new use cases beyond the common ones are not field-proven and should be approached with caution.


Business Impact: IT organizations have historically used different storage and software products to deliver backup, archive, replication, test/development, legacy application archiving and other data-intensive services, with very little control or management across these services. This results in overinvestment in storage capacity, software licenses and the operational expenditure associated with managing excessive storage and software. CDM facilitates the use of one copy of data for all of these functions via virtual copies, thereby dramatically reducing the need for multiple physical copies of data and enabling organizations to cut costs associated with multiple disparate software licenses and storage islands. The separation of the "golden image" from the production environment can facilitate aggressive recovery point objectives (RPOs) and recovery time objectives (RTOs). In the case of test/development, CDM improves the workflow process and operational efficiency by giving database administrators and application developers more self-service capabilities.
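A simple way to see the economics described above is to compare separate physical copies against one golden image plus thin virtual copies. The Python arithmetic below uses assumed values for database size, copy count and divergence rate; none of the numbers come from this research.

    # Illustrative storage comparison: full physical copies vs. CDM virtual copies.
    database_gb = 2_000     # assumed production database size
    copies_needed = 8       # backup, DR, test, dev, analytics, reporting, ...
    change_rate = 0.05      # assumed fraction of data each virtual copy diverges by

    physical_gb = database_gb * copies_needed
    cdm_gb = database_gb + database_gb * change_rate * copies_needed  # golden image + deltas

    print(f"Separate physical copies:      {physical_gb:,.0f} GB")
    print(f"Golden image + virtual copies: {cdm_gb:,.0f} GB "
          f"({physical_gb / cdm_gb:.1f}x reduction)")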

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Actifio; Catalogic Software; Cohesity; Delphix; Rubrik

Information Dispersal Algorithms

Analysis By: Valdis Filks

Definition: Information dispersal algorithms provide a methodology for storing information in pieces (i.e., dispersed) across multiple locations, so that redundancy protects the information in the event of localized outages, and unauthorized data access at a single location does not provide usable information. Only the originator, or a user with a list of the latest pointers created by the original dispersal algorithm, can properly assemble the complete information.
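To show the principle in the definition above, the Python sketch below disperses a byte string into four data pieces plus one XOR parity piece, so that any single missing piece can be rebuilt from the rest. This is a deliberately minimal illustration with an assumed piece count; real dispersal products use stronger erasure codes (such as Reed-Solomon) and add encryption, and XOR parity alone is not a confidentiality mechanism.

    # Minimal dispersal illustration: k data pieces plus one XOR parity piece.
    def disperse(data: bytes, k: int = 4) -> list:
        """Split data into k equal pieces plus one XOR parity piece (k + 1 total)."""
        if len(data) % k:                       # pad so the pieces are equal length
            data += b"\x00" * (k - len(data) % k)
        size = len(data) // k
        pieces = [bytearray(data[i * size:(i + 1) * size]) for i in range(k)]
        parity = bytearray(size)
        for piece in pieces:
            for i, byte in enumerate(piece):
                parity[i] ^= byte
        return [bytes(p) for p in pieces] + [bytes(parity)]

    def rebuild(pieces: list, missing: int) -> bytes:
        """Recover the piece at index `missing` by XOR-ing all surviving pieces."""
        size = len(next(p for p in pieces if p is not None))
        recovered = bytearray(size)
        for idx, piece in enumerate(pieces):
            if idx == missing:
                continue
            for i, byte in enumerate(piece):
                recovered[i] ^= byte
        return bytes(recovered)

    shares = disperse(b"dispersed across four locations plus parity", k=4)
    lost = 2                                    # pretend one location is unavailable
    survivors = shares.copy()
    survivors[lost] = None
    assert rebuild(survivors, lost) == shares[lost]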

Position and Adoption Speed Justification: These algorithms are being used in more data center storage devices to improve data availability and scale. Commercial solutions continue to become available for the data center from large, established vendors, and from smaller startups for domestic use and file sync and share. The solutions are also built in and available in home consumer storage appliances. However, they differ from the presently prevailing centralized cloud storage offerings: these solutions are not centralized but distributed and, similar to the internet, have no central control or fault domain. Information dispersal algorithm technology has been expanded to include peer-to-peer (P2P) file-sharing technologies and protocols, such as those based on the BitTorrent protocol, which has proved robust on the internet. A variation is the open-source BitTorrent protocol used in P2P networks to store and recreate data among systems. This is an early cloud technology in which the data is truly dispersed, rather than stored in a small number of centralized, hyperscale or traditional data centers. Therefore, fault tolerance is provided by the nature of the design of these systems, and, due to the dispersed nature of the data, some protection is also provided by their design. Due to their innate design, many scale-out storage systems are implementing redundant array of independent disks (RAID) designs that disperse data among nodes within racks. This technology is developing into geographically distributed, file-sharing nodes, blurring the lines between scale-out storage systems, information dispersal algorithms and cloud storage.

User Advice: Many vendors have been using this technology in scale-out storage systems for more than five years, and it has proved to be reliable. Customers who are not satisfied with centralized cloud storage offerings should investigate information dispersal algorithms, as they reduce customer dependence on a few large hyperscale vendors and locations that still use the traditional centralized data center design. In many ways, these algorithms are tantamount to a form of encryption. The design, coding and testing of the attack resistance of dispersal algorithms has proven to be a difficult undertaking, because it is of similar complexity to the design of encryption implementations. Just as proprietary forms of encryption should not be considered as reliable as implementations based on well-proven algorithms and code, the robustness of proprietary dispersal algorithms — and especially their implementations — should not automatically be trusted. Buyers that expect to rely on this technology for confidentiality control should seek evidence of rigorous testing and independent peer review.

Business Impact: Information dispersal algorithms are used in the latest storage arrays, integrated systems and SDS software. The technology could eventually provide secure storage over the internet and other public or private networks without the overhead and other costs of encryption, and without the need for centralized hyperscale data centers, such as those from Amazon and Google. Use of the BitTorrent protocol has been politically charged because one of its early applications was to share copyrighted data via the internet among home PCs. However, the protocol is content-neutral and simple to use. It could just as easily be used by software companies to distribute software, updates and any digital information that is stored and geographically dispersed among many nodes and computers in a network.

Open-source implementations are integrated into products by commercial companies as a new method to distribute and store digital data. This is one factor that increases the amount of unstructured data stored on the planet.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: BitTorrent; Caringo; Ctera Networks; EMC; Hedvig; HGST; IBM; SimpliVity; Vivint

Recommended Reading: "Traditional Storage Vendors, Brands and Products Are No Longer Risk-Free"

"Increases in Disk Capacity Are Affecting RAID Recovery; Start to Purchase New RAIDTechnologies"

Integrated Systems: Hyperconvergence

Analysis By: George J. Weiss; Andrew Butler


Definition: Hyperconverged systems are integrated systems that apply a modular and shared compute/network/storage building block approach, with a unified management layer on commodity hardware and direct-attached storage leveraging scale-out clusters.

Position and Adoption Speed Justification: Hyperconverged infrastructure is a rapidly expanding market segment that is expected to increase at a 65% compound annual growth rate (CAGR). By 2020, hyperconverged integrated systems (HCISs) will represent 32% of total converged infrastructure shipments by revenue, with HCIS reaching $6 billion. HCIS enables IT to start from a small base — a single or dual node — and incrementally scale out as demand requires. The modular-building-block approach enables enterprises to take small steps, rather than make the significant upfront investments required by traditional integrated systems, which typically have a costly proprietary chassis with fabric infrastructure.

Gartner expects HCISs to continue their expansion with new providers, as well as most traditional system vendors, supporting HCIS in their portfolios. Systems will continue to evolve with additional feature/function deliverables and broader vendor portfolios to address mixed workloads. The fast pace of hyperconvergence growth will begin peaking in the 2020 period. However, we expect continued feature/function evolution toward hybrid cloud configurations and higher levels of application delivery agility and efficiency through advancements such as composable infrastructure and cloud management functions and integration.

User Advice: IT leaders should recognize HCIS as an evolution within the broader category of integrated systems that lays the foundation for ease of use, simplicity, virtualization, cloud deployment and eventual bimodal implementations. IT should be able to harness its fundamental advantages in efficiency, utilization, agility, data protection, continued life cycle deployment and orchestration as part of a strategic data center modernization objective. Plan strategically, but invest tactically in hyperconverged systems, because the market is nascent and subject to volatility. Plan for a payback of two years or less to ensure financial success and investment value. Test the scalability limits of solutions and total cost of ownership (TCO) benefits, because improvements will occur rapidly during this nascent period.

Business Impact: Although integrated systems will generally be driven by new workloads and data center modernization initiatives, the hyperconverged portion has also experienced an enthusiastic reception from the midmarket, due to the simplicity and convenience of appliance configurations. Use cases especially well-suited to HCIS include virtual desktop infrastructures (VDIs), server virtualization and consolidation, data migration, private cloud, remote or branch office, relational databases, Hadoop, and dedicated application infrastructures. However, general-purpose workloads are increasingly being moved from server virtualization blades to exploit HCIS's favorable TCO properties.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent


Sample Vendors: Atlantis; Cisco; Dell; EMC; Gridstore; Hewlett Packard Enterprise; Nutanix; Pivot3; Scale Computing; SimpliVity

Recommended Reading: "Prepare for the Next Phase of Hyperconvergence"

"Five Keys to Creating an Effective Hyperconvergence Strategy"

"Deploying Hyperconverged Integrated Systems: Eight Great Use Cases"

"The Positive Disruption of Hyperconvergence Adoption in the Midmarket"

"Beware the 'Myth-Conceptions' Surrounding Hyperconverged Integrated Systems"

"Competitive Landscape: Hyperconverged Integrated Systems — Multivendor Solutions"

"How to Evaluate Vendors in the Hyperconverged Space"

Infrastructure SDS

Analysis By: Julia Palmer; Dave Russell

Definition: Infrastructure software-defined storage (SDS) creates and provides data center services to replace or augment traditional storage arrays. It can be deployed as a virtual machine, or as software on a bare-metal x86 industry standard server, allowing organizations to deploy a storage-as-software package. This creates a storage solution that can be accessible by file, block or object protocols.

Position and Adoption Speed Justification: Infrastructure SDS is positioned to change the economics and delivery model of enterprise storage infrastructures. Whether deployed independently, or as an element of a hyperconverged integrated system, SDS is altering how organizations buy and deploy enterprise storage. Following web-scale IT's lead, I&O leaders are deploying SDS as hardware-agnostic storage, and breaking the bond with high-priced, proprietary, legacy-integrated external-controller-based (ECB) storage hardware. The power of multicore Intel x86 processors, the use of solid-state drives (SSDs) and high-throughput networking have essentially eliminated hardware-associated differentiation, transferring all of the value to storage software. Expect new infrastructure SDS vendors and products to emerge, and to target a broad range of delivery models and workloads, including server virtualization, archiving, big data analytics and unstructured data. Comprehensive analyses of SDS total cost of ownership (TCO) benefits involve both capital expenditure (capex) and operating expenditure (opex), including administrative design, verification, deployment, and ongoing management and support, as well as a potential improvement in business agility.

User Advice: Infrastructure SDS is the delivery of data services and storage-array-like functionality on top of industry standard hardware. Enterprises choose a software-defined approach when they wish to accomplish some or all of the following goals:

■ Build a storage solution at a low acquisition price point on a commodity x86 platform.

■ Decouple storage software and hardware to standardize their data center platforms.


■ Establish a scalable solution specifically geared toward Mode 2 workloads.

■ Build an agile, "infrastructure as code" architecture to enable storage to be part of a software-defined data center automation and orchestration framework.

Advice to end users:

■ Recognize that infrastructure SDS remains a nascent, but growing, deployment model that will be focused on web-scale deployment agility.

■ Implement infrastructure SDS solutions that enable you to decouple software from hardware, reduce TCO and enable greater data mobility.

■ Assess emerging storage vendors, technologies and approaches, and create a matrix that matches these offerings with the requirements of specific workloads.

■ Deploy infrastructure SDS for a single workload or use case. Take the lessons learned from this first deployment, and apply SDS to additional use cases.

■ For infrastructure SDS products, identify upcoming initiatives where SDS could deliver high value. Use infrastructure SDS with commodity hardware as the basis for a new application deployment aligned with these initiatives.

■ Build infrastructure SDS efficiency justification as a result of a proof-of-concept deployment, based on capital expenditure ROI data and potential operating expenditure impact, as well as better alignment with the core business requirements.

Business Impact: Infrastructure SDS is a hardware-agnostic platform. It breaks the dependency on high-priced proprietary ECB storage hardware and lowers acquisition costs by utilizing the industry standard x86 platform of the customer's choice. Some Gartner customers report up to 40% TCO reduction with infrastructure SDS, which comes from the use of x86 industry standard hardware and the lower cost of upgrades and maintenance fees. However, the real value of infrastructure SDS in the long term is the increased flexibility and programmability that is required for Mode 2 workloads. I&O leaders that have successfully deployed and benefited from infrastructure SDS have usually belonged to large enterprises or cloud service providers that pursued web-scale-like efficiency, flexibility and scalability, and viewed SDS as a critical enablement technology for their IT initiatives. I&O leaders should look at infrastructure SDS not as another storage product, but as an investment in improving storage economics and providing data mobility, including cloud integration.

Benefit Rating: Transformational

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Atlantis Computing; EMC; Formation Data Systems; Hedvig; IBM; Nexenta; Red Hat; Scality; SwiftStack; VMware


Recommended Reading: "How to Determine Whether Software-Defined Storage Is Right for YourOrganization"

"Why Hardware Matters to the Success of Your Software-Defined Storage Deployment"

"Top Five Use Cases and Benefits of Software-Defined Storage"

"I&O Leaders Can Benefit From Storage Industry Innovation"

"Innovation Insight: Separating Hype From Hope for Software-Defined Storage"

Solid-State DIMMs

Analysis By: Michele Reitz

Definition: Solid-state dual in-line memory modules (SS DIMMs) are all-flash versions of nonvolatile DIMMs (NVDIMMs) that reside on the double data rate (DDR) DRAM memory channel and are persistent. These devices integrate nonvolatile memory (currently NAND flash) and a system controller chip. By sitting on the memory channel, they have key advantages over other types of solid-state drives (SSDs) in terms of reduced write latency and increased input/output operations per second, bandwidth and scalability.

Position and Adoption Speed Justification: Solid-state DIMMs were introduced in 2014, when IBM debuted its eXFlash device (now owned by Lenovo), which is included in SanDisk's ULLtraDIMM line of products through a partnership with Diablo Technologies. In February 2016, Xitore introduced its NVDIMM-X product, which operates in much the same way, but boasts much better performance and lower latency. This is due to an improved cache architecture, local high-speed buffers and an enhanced memory controller solution with the flexibility to operate with a variety of nonvolatile memory technologies on the back end.

Since DIMMs sit directly on the faster memory channel, rather than on the storage channel, they will not face the storage channel bottlenecks of a traditional storage system. Because of this, these NAND-flash-based SSDs can achieve drastically lower latencies (at least 50% lower) than any existing solid-state storage solution, and can be viable alternatives to DRAM memory, if the speeds are acceptable.

The NVDIMM Special Interest Group, a consortium within the Storage Networking Industry Association (SNIA), classifies three types of NVDIMM — NVDIMM-N, NVDIMM-F and NVDIMM-P. Gartner classifies NVDIMM-N and NVDIMM-P as hybrid DIMMs and NVDIMM-F as a solid-state DIMM.

Use of any solid-state DIMMs requires a mix or all of the following: support by the host chipset, optimization for the OS and optimization for the server hardware. As such, to achieve greater adoption, server, driver and OS support will need to extend beyond IBM, Supermicro and Huawei on selected platforms. In addition, use cases for memory channel products will need to spread beyond the extremely high-performance, high-bandwidth and ultra-low-latency applications for which they are attracting most interest today. Despite these challenges, this technology will mature in the next two to five years as users find the price/performance value proposition a good match for in-memory computing, cloud, virtualization, virtual desktops, big data and analytics applications.

Evolution of 3D XPoint for SS DIMMs will provide a higher-performance nonvolatile alternative to NAND flash in 2017 and beyond. 3D XPoint is an emerging nonvolatile memory technology from Intel and Micron that boasts substantial performance and reliability gains over flash memory, but it has not been widely commercialized.

We have moved this technology profile up and over the Peak of the Hype Cycle. The initial hype built up by the IBM and Huawei announcements was strongly affected by the litigation levied by Netlist, thwarting Diablo's ability to amass more vendor support and achieve larger adoption. In September 2015, the litigation case was closed and decisively ruled in Diablo's favor, but during this period the market passed the peak and is now gearing up for the transition through the trough over the next two years. Lenovo announced expanded support for the original IBM product in May 2015, and we anticipate that more vendors will follow in 2016 and 2017.

User Advice: IT professionals should evaluate solid-state DIMMs for use as a new tier of storage, if ultra-low latency is important. They should use the significant improvement in flash response times — nearly half those of conventional SSDs — and the denser form factor to meet increased requirements for overall system storage.

IT professionals should analyze the roadmaps of the major server and storage OEMs, along with those of the SSD appliance vendors that will be launching DIMM-based storage systems, and weigh the benefits for their needs. They should be aware that servers, OSs and drivers will need to be customized to support these new types of DIMMs.

Business Impact: This technology's impact on users will be improved system performance overall. It will also offer an alternative to DRAM for certain in-memory computing applications that need support for large data stores and can tolerate some sacrifice in performance.

With DRAM prices at a premium compared with NAND flash, solid-state DIMMs will have a much-improved price/performance ratio wherever they meet users' performance needs.

NAND flash vendors should consider solid-state DIMMs to enhance their value proposition for commodity NAND flash and expand their addressable market.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Diablo Technologies; Huawei; IBM; SanDisk; Supermicro; Xitore

Recommended Reading: "Market Trends: The Next-Generation Server-Side SSD Landscape"

"Market Share Analysis: SSDs and Solid-State Arrays, Worldwide, 2015"


Sliding Into the Trough

Integrated Backup Appliances

Analysis By: Robert Rhame

Definition: An integrated backup appliance is an all-in-one backup software and hardware solution that combines the functions of a backup application server, media server (if applicable) and backup target device. The appliance is typically preconfigured and fine-tuned to cater to the capabilities of the onboard backup software. It is a more simplified and easier-to-deploy backup solution than the traditional approach of separate software and hardware installations, but lacks flexibility in hardware choices and scalability.

Position and Adoption Speed Justification: Integrated backup appliances have been around for many years without much fanfare. The current hype is driven by existing large backup software vendors that have started packaging their software in an appliance, and by innovative emerging vendors that are offering all-in-one solutions. The momentum of integrated backup appliances is driven by the desire to simplify the setup and management of the backup infrastructure, as "complexity" is a leading challenge when it comes to backup management. Overall, integrated backup appliances have resonated well with many small and midsize customers that are attracted by the one-stop-shop support experience and tight integration between software and hardware. As the appliances scale up, they will be deployed in larger environments.

Within the integrated backup appliance market, the former clear segmentation by backup repository limitations has vanished, with most vendors adding cloud target capabilities. There are generally three types of vendors selling integrated backup appliances, separated primarily by heritage. The first kind includes backup software vendors that package their software with hardware to offer customers integrated appliances. Examples include Arcserve and Veritas Technologies. The second type is made up of emerging products that tightly integrate software with hardware, such as Actifio, Cohesity and Rubrik. The third kind is a cloud backup provider that offers a customer an on-premises backup appliance as part of its cloud backup solution. Examples include Barracuda Networks, Ctera Networks, Datto and Unitrends.

User Advice: Organizations should evaluate backup software functions first to ensure that their business requirements are met, before making a decision about acquiring an integrated backup appliance or a software-only solution. Once a specific backup software product is chosen, deploying an appliance with that software will simplify operational processes and address any compatibility issues between backup software-only products and deduplication backup target appliances. If customers prefer deploying backup software-only products to gain hardware flexibility, they should carefully consider which back-end storage to choose — be it a generic disk array/network-attached storage (NAS) or a deduplication backup target appliance.

Business Impact: Integrated backup appliances ride the current trend of converged infrastructure and offer tight integration between software and hardware, simplify the initial purchase and configuration process, and provide the one-vendor support experience with no finger-pointing risks. On the downside, an integrated backup appliance tends to lack the flexibility and heterogeneous hardware support offered by backup software-only solutions, which are often needed by large, complex environments.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Actifio; Arcserve; Barracuda Networks; Cohesity; Ctera Networks; Datto; Rubrik; Unitrends; Veritas Technologies

Recommended Reading: "Magic Quadrant for Enterprise Backup Software and IntegratedAppliances"

"Magic Quadrant for Deduplication Backup Target Appliances"

Data Sanitization

Analysis By: Philip Dawson; Rob Schafer

Definition: Data sanitization is the consistently applied, disciplined process of reliably and completely removing all data from a read/write medium so that it can no longer be read or recovered.

Position and Adoption Speed Justification: Growing concerns about data privacy and security, leakage, regulatory compliance, and the ever-expanding capacity of storage media are making robust data sanitization a core competency for all IT organizations.

This competency should be applied to all devices with storage components (such as PCs, mobile phones, tablets, and high-end printers and copiers) when they are repurposed, returned to the supplier/lessor, sold, donated to charity or otherwise disposed of. Where organizations lack this robust data sanitization competency, it is often due to handling the various stages of the asset life cycle as isolated events, with little coordination across business boundaries (such as finance, security, procurement and IT). Thus, the personnel assigned to IT asset disposition (ITAD) are often different from those responsible for risk management and compliance, which can put the organization at risk of both internal and external noncompliance.

For mobile devices, a remote data-wiping capability is commonly implemented, triggered either by the user logging into a website or by an administrator remotely invoking a mobile device manager (MDM). Although a remote capability such as this should not be considered a fail-safe mechanism, reliability should be adequate for a significant majority of lost or stolen mobile devices. The degree to which various hardware storage technologies are reliably wiped varies according to organization type and device type.

User Advice:


■ Follow a life cycle process approach to IT risk management that includes making an explicit decision about data sanitization and destruction, device reuse and retirement, and data archiving.

■ Implement policies that assign responsibility for all media carrying sensitive or regulated data — whether corporate or personal — to ensure that they are properly wiped or destroyed at the end of their production use.

■ Create appropriate data sanitization/destruction standards that provide specific guidance on the destruction process, based on data sensitivity.

■ Verify that your ITAD vendor consistently meets your data sanitization security specifications and standards.

■ Understand the implications of personal devices and plug-and-play storage. Organizations that have yet to address CDs/DVDs and other portable data-bearing devices are even less prepared to deal with these implications.

■ Consider using whole-volume encryption for portable devices and laptops, and self-encrypting devices in the data center.

■ Consider destroying storage devices containing highly sensitive and/or regulated data (e.g., in organizations in the financial and healthcare industries), either by mechanical means or by using degaussing machines, rendering them permanently unusable and ensuring that the data is not recoverable.

■ Consider software tape shredding. Tape shredding performs a three-pass wipe of the selected virtual tapes using an algorithm specified by the U.S. Department of Defense (Standard 5220.22-M), which helps IT managers meet security and regulatory compliance requirements (a minimal overwrite sketch follows this list).

■ Forbid the use of USB memory sticks for sensitive, unencrypted files. Some undeleted but largely inaccessible data remains on most USB memory sticks.

■ Understand end-of-contract implications, and ask current and potential providers for an explanation of their storage reuse and account retirement practices. This advice applies to buyers of any form of externally provisioned service.

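For illustration only, the following Python sketch shows the three-pass overwrite idea referenced above (zeros, ones, then random data). The device path and size are placeholders; this is not a certified sanitization tool and does not address SSD wear-leveling or spare areas, for which vendor secure-erase commands and certified tools are preferred.

    import os

    def overwrite(path: str, size: int, block: int = 1024 * 1024):
        # Three passes: all zeros, all ones, then random data.
        patterns = [b"\x00" * block, b"\xff" * block, None]   # None -> random
        with open(path, "r+b") as dev:
            for p in patterns:
                dev.seek(0)
                remaining = size
                while remaining > 0:
                    chunk = p if p is not None else os.urandom(block)
                    dev.write(chunk[:remaining])
                    remaining -= min(block, remaining)
                dev.flush()
                os.fsync(dev.fileno())        # force the pass to stable media

    # Example call (hypothetical device path and capacity):
    # overwrite("/dev/sdX", size=500 * 10**9)
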
Business Impact: At a relatively low cost, the proper use of encryption, wiping and, when necessary, destruction will help minimize the risk that proprietary and regulated data will leak.

By limiting data sanitization to encryption and/or software wiping, organizations can preserve the asset's residual market value; the destruction of data-bearing devices within an IT asset typically reduces the asset's residual value (RV) to salvage, incurring the cost of environmentally compliant recycling.

The National Association for Information Destruction (NAID) supports best practices in data destruction services, and offers a list of service providers. Also refer to the National Institute of Standards and Technology (NIST) December 2014 revision of its Special Publication 800-88: "Guidelines for Media Sanitization."

Benefit Rating: Moderate


Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Blancco Technology Group; DestructData; ITRenew; Kroll Ontrack

Object Storage

Analysis By: Arun Chandrasekaran

Definition: Object storage refers to devices and software that house data in structures called "objects," and serve hosts via protocols (such as HTTP) and APIs (such as Amazon Simple Storage Service [Amazon S3], OpenStack Swift and CDMI). Conceptually, objects are similar to files in that they are composed of content and metadata. In general, objects support richer metadata than file storage by enabling users or applications to assign attributes to objects that can be used for administrative purposes, data mining and information management.

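A brief example shows the object model through the S3 API using the boto3 library. The bucket and key names are placeholders, and the example assumes credentials and a bucket already exist; any S3-compatible endpoint exposed by an object storage product could be substituted.

    import boto3

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="example-archive",                       # placeholder bucket
        Key="projects/2016/report.pdf",                 # object addressed by key
        Body=b"...report contents...",                  # the content
        Metadata={"department": "finance", "retention": "7y"},  # user-defined attributes
    )

    head = s3.head_object(Bucket="example-archive", Key="projects/2016/report.pdf")
    print(head["Metadata"])   # metadata is retrievable without downloading the content
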
Position and Adoption Speed Justification: Although object storage products have been around for more than a decade, the first-generation products had limitations around scalability and performance, and induced lock-in through proprietary interfaces. Broad adoption of second-generation commercial object storage has remained low so far; however, it is now accelerating due to the need for cost-effective big data storage. The growing maturity of solutions from emerging vendors and refreshed products from large storage portfolio vendors is expected to further stimulate adoption from end users, as the addressable use cases for these products increase. While cost containment of traditional SAN/NAS infrastructure continues to be the key driver for object storage adoption, private cloud and analytics deployments in big data industries such as media and entertainment, life sciences, the public sector, and education/research are spawning new investments. Object storage products are available in a variety of deployment models — virtual appliances, managed hosting, purpose-built hardware appliances or software that can be consumed in a flexible manner.

User Advice: IT leaders that require highly scalable, self-healing and cost-effective storage platforms for unstructured data should evaluate the suitability of object storage products. The common use cases that Gartner sees for object storage are archiving, backup, cloud storage and content distribution. When building on-premises object storage repositories, customers should evaluate the product's API support for dominant public cloud providers, so that they can extend their workloads to a public cloud, if needed. Amazon's S3 has emerged as the dominant API, with more vendors starting to support the OpenStack Swift API as well. Select object storage vendors that offer a wide choice of deployment (software-only versus packaged appliances versus managed hosting) and licensing models (perpetual versus subscription) that can provide flexibility and reduce TCO. These products are capable of huge scale in capacity and are better suited for workloads that require high bandwidth than for transactional workloads that demand high input/output operations per second (IOPS) and low latency.

Business Impact: Rapid growth in unstructured data (40% year over year) and the need to store and retrieve it in a cost-effective, automated manner will drive the growth of object storage. Object storage can be effective in big data environments, which scale from a few hundred terabytes to tens of petabytes. Because objects exist as self-describing logical entities, coupled with metadata and controlled by policies, they can scale effectively. Object storage is also well-suited to multitenant environments that need stringent object-level security and rich metadata for easy automation and management. There is growing interest in object storage from enterprise developers and DevOps team members looking for agile and programmable infrastructures that can be extended to the public cloud. Object storage software, deployed on commodity hardware, is emerging as a threat to external controller-based (ECB) storage hardware vendors in big data environments with heavy volume challenges.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Caringo; Cloudian; DataDirect Networks; EMC; Hitachi Data Systems; IBM; NetApp; Red Hat; Scality; SwiftStack

Recommended Reading: "Critical Capabilities for Object Storage"

"Choosing the Right Storage for Cloud-Native Applications"

"Market Guide for Open-Source Storage"

Storage Cluster File Systems

Analysis By: Arun Chandrasekaran

Definition: Distributed file system storage uses a single parallel file system to cluster multiple storage nodes together, presenting a single namespace and storage pool to provide high bandwidth for multiple hosts in parallel. Data is distributed over multiple nodes in the cluster to handle availability and data protection in a self-healing manner, and to provide high throughput and scalable capacity in a linear manner.

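The striping behind a single namespace can be pictured with a small Python sketch. The node objects here are plain dictionaries and the chunk size is an assumption for illustration; real cluster file systems add replication or erasure coding for the self-healing behavior described above.

    CHUNK = 4 * 1024 * 1024          # assumed 4 MiB stripe unit

    def stripe(path_in_namespace: str, data: bytes, nodes: list):
        # Place fixed-size chunks round-robin across storage nodes,
        # so a large file can be read back from several nodes in parallel.
        placement = []
        for i in range(0, len(data), CHUNK):
            node = nodes[(i // CHUNK) % len(nodes)]
            node[(path_in_namespace, i)] = data[i:i + CHUNK]
            placement.append((path_in_namespace, i))
        return placement

    def read_back(placement, nodes):
        chunks = []
        for key in placement:
            owner = next(n for n in nodes if key in n)   # locate the chunk's node
            chunks.append(owner[key])
        return b"".join(chunks)

    nodes = [{}, {}, {}]
    layout = stripe("/projects/seismic.dat", b"x" * (10 * CHUNK + 5), nodes)
    assert read_back(layout, nodes) == b"x" * (10 * CHUNK + 5)
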
Position and Adoption Speed Justification: The growing strategic importance of storing and analyzing large-scale, unstructured data is bringing scale-out storage architectures to the forefront of IT infrastructure planning. Storage vendors are continuing to develop cluster file systems to address performance and scalability limitations in traditional, scale-up, network-attached storage (NAS) environments. This makes them suitable for batch and interactive processing and other high-bandwidth workloads. Apart from academic high-performance computing (HPC) environments, commercial vertical industries — such as oil and gas, financial services, media and entertainment, life sciences, research and web services — are leading adopters for applications that require highly scalable storage bandwidth.

Beyond the HPC use case, large home directory storage, rich-media streaming, backup and archiving are other common use cases for cluster file systems. Products from vendors such as Panasas, DataDirect Networks (DDN) and Intel are most common in HPC environments. Most leading storage vendors, such as Dell, Hewlett Packard Enterprise (HPE) and IBM, as well as emerging vendors, such as Red Hat and Qumulo, have enhanced their presence in this segment. Vendors are also increasingly starting to offer software-based deployment options in a capacity-based perpetual licensing model to stimulate market adoption.

Hadoop Distributed File System (HDFS) is starting to see wide enterprise adoption for big data, batch processing use cases and beyond. With the growing demand for high input/output operations per second (IOPS) and aggregated bandwidth for shared storage, cluster file systems are expected to see robust adoption in the future.

User Advice: Storage cluster file systems have been around for years, although vendor maturity varies widely. Users that need products that enable them to pay as they grow in a highly dynamic environment, or that need high bandwidth for shared storage, should put cluster file systems on their shortlists. Most commercial and open-source products specialize in tackling specific use cases, but integration with workflows may be lacking in several products. Evaluate your application and input/output (I/O) requirements to select a pertinent cluster file system. Prioritize scalability, performance, manageability, independent software vendor (ISV) support, deployment flexibility and resiliency features as important selection criteria. There is little technical know-how regarding scale-out file systems in many enterprise IT organizations; hence, I&O leaders should allocate a portion of the storage budget to training.

Business Impact: Storage cluster file systems are competitive alternatives that scale storage bandwidth more linearly, surpassing expensive monolithic frame storage arrays in this capability. The business impact of storage cluster file systems is most pronounced in environments in which applications generate large amounts of unstructured data, and the primary access is through file protocols. However, they will also have an increasing impact on traditional data centers that want to overcome the limitations of dual-controller storage designs, as well as for use cases such as backup and archiving. Many storage cluster file systems will have a significant impact on private cloud services, which require a highly scalable and elastic infrastructure. IT professionals keen to consolidate file server or NAS file sprawl should consider using cluster file system storage products that offer operational simplicity and nearly linear scalability.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Cray; Dell; HP; Huawei; IBM; Intel; Panasas; Quantum; Qumulo; Red Hat

Recommended Reading: "Critical Capabilities for Scale-Out File System Storage"

"Who's Who in Open-Source, Scale-Out File System Storage"

Cross-Platform Structured Data Archiving

Analysis By: Garth Landers


Definition: Cross-platform structured data archiving software moves data from custom or commercially provided applications to an alternate file system or DBMS while maintaining data access and referential integrity. Reducing the volume of data in production instances can improve performance; shrink batch windows; and reduce storage acquisition costs, facility requirements, the cost of preserving data for compliance when retiring applications, and environmental footprints. Archives can also be used for historical and other analysis.

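The basic move that these tools automate can be sketched in a few lines of Python using sqlite3 for illustration. The table and column names are hypothetical; commercial products layer business-object views, retention policies, legal hold and data masking on top of this archive-then-delete pattern.

    import sqlite3

    prod = sqlite3.connect(":memory:")
    archive = sqlite3.connect(":memory:")

    prod.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, "
                 "order_date TEXT, amount REAL)")
    prod.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)",
                     [(1, 10, "2009-03-01", 120.0), (2, 11, "2016-05-20", 75.0)])

    cutoff = "2011-01-01"
    old = prod.execute("SELECT * FROM orders WHERE order_date < ?", (cutoff,)).fetchall()

    archive.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, "
                    "order_date TEXT, amount REAL)")
    archive.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)", old)
    archive.commit()                              # write the archive copy first ...

    prod.execute("DELETE FROM orders WHERE order_date < ?", (cutoff,))
    prod.commit()                                 # ... then shrink the production table
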
Position and Adoption Speed Justification: Structured data archiving tools have been available for almost two decades and have historically seen more adoption in larger enterprises. These products provide functionality to identify old or infrequently used application data and manage it appropriately. Although ROI can be high, developing policies for retaining and deleting old application data is difficult and often not seen as a priority. In addition, vendor offerings are expensive, and enterprises will only engage when events demand it. Organizations generally tend to add more database licenses or use native database capabilities, such as purging and partitioning, to address application growth. The technology has long been seen as a cost avoidance measure used to contain operational and capital expenditures related to data growth, as well as to improve factors like application performance. The market is changing and growing due to growth in data, application retirement, information governance requirements and big data analysis opportunities.

Today's data archiving products are mature and will face challenges as various distributions of Hadoop add capabilities such as retention management. Increasingly, leading vendors are looking to support Hadoop and differentiate through vertical offerings, such as healthcare. In addition, new approaches to application retirement and curbing structured data growth are emerging in areas such as copy data management. This approach, while immature when applied to these use cases, is growing, and offers a less complex alternative to the technology offered by leading products. Application retirement continues to be a significant driver. Organizations are looking for ways to cut costs associated with maintaining no-longer-needed legacy applications while preserving application data for compliance or its historical value. Data center consolidations, moves to the cloud, and mergers and acquisitions are contributing to the interest in structured data archiving solutions to reduce the number of enterprise applications.

Competition often comes from internal resources that want to build it themselves and from improvements in storage technology that transparently improve performance while reducing storage acquisition and ownership costs — more specifically, autotiering, SSDs, data compression and data deduplication. Do-it-yourself efforts typically lack appropriate governance controls such as secure access, data masking, retention management and legal hold. The allure of tools that can support multiple applications and underlying databases, and the added capabilities these tools provide for viewing data as business objects independent of the application, are driving administrators to consider them as viable solutions. New capabilities — such as better search and reporting, integration with big data analysis tools, retention management, support for database partitioning, and support for SAP archiving — are broadening their appeal. Also, newer product offerings are beginning to include unstructured data along with relational data for a more holistic approach to application archiving.

User Advice: The ROI for implementing a structured data archiving solution can be exceptionally high, especially to retire an application or to deploy a packaged application for which vendor-supplied templates are available to ease implementation and maintenance. Expect that the planning phase may take longer than the implementation. Among the roadblocks to implementation are the required consulting services, gaining application owner acceptance (especially through testing access to archived data), defining the archiving policies and building the initial business case. Application retirement projects with large data footprints and numerous applications can span more than a year. Most vendors in this space can provide good references, and organizations should speak with references that have similar application portfolios and goals for managing their data. Enterprises should consider developing their own solutions when the number of applications being retired is very low, data retention requirements are not very long (such as one to two years) or governance requirements such as audit or litigation are unlikely.

Business Impact: Creating an archive of less frequently accessed data and reducing the size of the active application database (and all related copies of that database) improve application performance and recoverability, and lower costs related to database and application licenses, servers, infrastructure and operations. Transferring old, rarely accessed data from a disk archive to tape can further reduce storage requirements. Most vendors in this space support cloud storage as the repository for archived data. Retiring or consolidating legacy applications cuts the costs and risks associated with maintaining these systems. Optimally, historical data can be preserved for analysis, supporting improvements to digital business. Overall, organizations can experience better information governance, including reduced risk associated with governance events like audits.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Mature mainstream

Sample Vendors: Delphix; HP; IBM; Informatica; OpenText; PBS; Solix Technologies

Recommended Reading: "Magic Quadrant for Structured Data Archiving and ApplicationRetirement"

"Use Hadoop for Your Big Data Archiving"

"Build a Leaner Data Center Through Application Retirement"

"Storage and Data Management Are Not the Same Thing"

Emerging Data Storage Protection Schemes

Analysis By: Stanley Zaffos

Definition: Emerging data storage protection schemes deliver higher mean time between data loss (MTBDL) than traditional redundant array of independent disks (RAID) schemes. Commercial implementations of emerging data storage protection schemes include replacing the concept of spare disks with spare capacity, which enables parallel rather than sequential data rebuilds; rebuilds that only reconstruct data actually stored on failed disks or nodes; triple mirroring; Reed-Solomon coding, a form of erasure coding; other erasure codes; and dispersal algorithms.


Position and Adoption Speed Justification: Hard-disk drive (HDD) capacity is growing faster than HDD data rates. The result is ever longer rebuild times that are outstripping traditional RAID implementations' ability to effectively protect data against HDD failures; hence the focus on reducing rebuild times or increasing the fault tolerance or resiliency of the data protection scheme. Erasure coding and dispersal algorithms, which add the physical separation of storage nodes to erasure coding, take advantage of inexpensive compute power to store blocks of data as systems of equations, and transform these systems of equations back into blocks of data during read operations. Allowing the user to specify the number of failures that can be tolerated during a data rebuild within an HDD or solid-state drive (SSD) group enables users to trade off data protection overheads (costs) against MTBDLs. Because erasure coding and dispersal algorithms increase the number of overhead inputs/outputs (I/Os) needed to protect data, they are most commonly used in scale-out storage systems supporting applications that are not response-time-sensitive.

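The rebuild-time concern and the protection-overhead trade-off can be made concrete with back-of-the-envelope arithmetic. The capacity and rebuild-rate figures below are illustrative assumptions, not measurements of any product.

    # Rebuild window for a single failed HDD at an assumed sustained rebuild rate.
    capacity_tb = 8                                   # assumed HDD capacity
    rebuild_mb_per_s = 100                            # assumed rebuild rate
    hours = capacity_tb * 1e6 / rebuild_mb_per_s / 3600
    print(f"Single-disk rebuild window: ~{hours:.0f} hours")   # ~22 hours

    # Erasure coding with k data fragments and m parity fragments tolerates m
    # failures at a capacity overhead of m / k.
    k, m = 10, 4
    print(f"{k}+{m} erasure coding: tolerates {m} failures at "
          f"{m / k:.0%} capacity overhead")           # vs. 200% for triple mirroring
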
User Advice: Require vendors offering advanced data protection schemes to profile the performance/throughput of their storage systems supporting your workloads using the various protection schemes supported, to better understand performance-overhead trade-offs. Request minimum/average/maximum rebuild times to size the likely rebuild window of vulnerability in a storage system supporting your production workloads. Cap microprocessor consumption at 75% of available cycles to ensure that the system's ability to meet service-level objectives is not compromised during rebuilds. Give extra credit to vendors willing to guarantee rebuild times. Confirm that the choice of protection scheme does not limit the use of other value-added features, such as compression, deduplication or autotiering.

Business Impact: The deployment of advanced protection schemes will enable vendors and users to continue lowering storage costs by taking advantage of disk capacity increases as soon as they become technically and economically attractive. The rapid adoption of new high-capacity HDDs will reduce environmental footprints, and may enable users to delay or avoid facilities upgrades or expansions.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Caringo; DataDirect Networks (DDN); Dell; EMC; IBM; NEC; Panasas; Scality; SwiftStack

Recommended Reading: "Increases in Disk Capacity Are Affecting RAID Recovery; Start toPurchase New RAID Technologies"

"Technology Overview for Erasure Coding"

"Slow Storage Replication Requires the Redesign of Disaster Recovery Infrastructures"

Enterprise Endpoint Backup

Analysis By: Pushan Rinnen


Definition: Enterprise endpoint backup refers to backup products for laptops, desktops, tablets and smartphones that can recover corrupted or lost data residing on the devices, as well as personal configurations. Endpoint backup differs from file sync and share's versioning capabilities in that backup preserves secure, centrally managed copies that cannot be changed or deleted by end users, and in that it protects PC/laptop data in a more comprehensive way. However, mobile content protection is weaker due to the lack of APIs from mobile OS providers.

Position and Adoption Speed Justification: Overall, more organizations are adopting endpoint backup to tackle different risks. Those that have globally distributed offices and employees like to leverage web-scale public cloud storage providers and backup-as-a-service providers that offer a multiple-country presence. As employees become more mobile, laptop backup has been the driving force for organizations to adopt endpoint backup, not just to restore lost data, but also to enable more efficient ways to perform ongoing laptop refresh/migration, to comply with company policies, to perform legal hold and e-discovery, and to avoid other risks such as potential data leaks or ransomware attacks. Technologywise, vendors have added more features to cater to the mobile nature of laptops, such as VPN-less backup over the internet, cellular network awareness and remote wipe. Other new product developments focus on legal hold capabilities, device replacement/migration automation and full-text search for faster restore/recovery. The old performance issues are tackled by the use of client-side deduplication in addition to incremental forever backups, near-continuous data protection (CDP) technologies, and CPU and network throttling.

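The client-side deduplication mentioned above can be sketched briefly: the endpoint hashes fixed-size chunks and sends only chunks the backup service has not seen, which is what keeps "incremental forever" laptop backup cheap over WAN links. The in-memory "server" dictionary below is a stand-in for a real backup target, and the chunk size is an assumption.

    import hashlib

    server_store = {}                                  # chunk hash -> chunk data

    def backup(data: bytes, chunk_size: int = 4096):
        manifest, sent = [], 0
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in server_store:             # upload only unseen chunks
                server_store[digest] = chunk
                sent += len(chunk)
            manifest.append(digest)                    # manifest reconstructs the file
        return manifest, sent

    laptop_file = b"A" * 20000
    manifest, sent1 = backup(laptop_file)
    _, sent2 = backup(laptop_file + b"B" * 100)        # next day: small append
    print(sent1, sent2)                                # the second run sends far less
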
Backup of mobile devices such as tablets and smartphones continues to be problematic due to the lack of APIs for integration with third-party backup software. As a result, most organizations don't have a policy regarding mobile data backup.

User Advice: Protecting endpoint user data must be part of a robust enterprise data protection and recovery plan. Organizations should evaluate and deploy a laptop/PC backup solution, be it on-premises or in the cloud, to maintain control and prevent data loss or leakage, instead of depending on employees to create their own backup methods.

Business Impact: Endpoint backup and recovery have become increasingly important as the global workforce has become more mobile and creates more business content on various endpoint devices. Moreover, new malicious attacks such as ransomware have increased risk profiles; organizations may rely on backup to restore data instead of paying the ransom. If employees don't back up their endpoint devices regularly (and many do not on their own), companies may face significant risks when important or sensitive data is lost, stolen or leaked, including R&D setbacks, fines, legal actions and the inability to produce user data in a lawsuit. Based on Gartner's estimates, laptop/PC data loss as a result of a lack of backup could cost an organization of 10,000 employees about $1.8 million a year.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent


Sample Vendors: Asigra; Code42; Commvault; Ctera Networks; Datacastle; Druva; EMC; HP; Infrascale; Intermedia

Recommended Reading: "Critical Capabilities for Enterprise Endpoint Backup"

"How to Address Three Key Challenges When Considering Endpoint Backup"

"Cloud File Sync/Share Is Not Backup"

Hybrid DIMMs

Analysis By: Michele Reitz

Definition: Hybrid dual in-line memory modules (hybrid DIMMs) are nonvolatile DIMMs that reside on the double data rate (DDR) DRAM memory channel, function as DRAM memory and focus on preserving data in case of power failure in critical applications. They integrate DRAM and nonvolatile memory (currently NAND flash), a system controller chip and an ultracapacitor powerful enough to allow the module time to write all of the contents of the DRAM to the nonvolatile memory when power is lost unexpectedly, thereby providing persistent data storage.

Position and Adoption Speed Justification: Hybrid DIMMs are good alternatives to battery-powered backup systems or the supercapacitor-based DIMMs used to save current data in case of power failure. Hybrid DIMMs use the same industry-standard DRAM sockets, and — with declines in NAND flash pricing — it is becoming economical to design systems with sufficient storage capacity to enable backup capability. Generally, there is a one-to-two ratio between DRAM and NAND storage capacity, but that can change according to the application. Hybrid DIMMs also can be configured for use as secondary storage, as long as they possess flash management and interface support for the host application.

The NVDIMM Special Interest Group, a consortium within the Storage Networking Industry Association (SNIA), classifies three types of NVDIMM — NVDIMM-N, NVDIMM-F and NVDIMM-P. Gartner classifies NVDIMM-N and NVDIMM-P as hybrid DIMMs and NVDIMM-F as a solid-state DIMM.

Industry support and standardization are critical for adoption. Currently, only a few major OEMs are using the technology. In addition, hybrid DIMMs are available from only two major DRAM and NAND vendors — Micron (via AgigA Tech and Hewlett Packard Enterprise) and SK hynix — and from custom module providers such as Viking Technology, Smart Modular and a few other small module vendors. We expect other major memory vendors to enter the market, along with other custom module providers that are already involved in both DRAM and flash-based memory modules. We expect adoption of this technology to increase slowly in the next two to three years.

In addition, the evolution of 3D XPoint for P-type NVDIMMs will provide a higher-performance nonvolatile alternative to NAND flash in 2017 and beyond. 3D XPoint is an emerging nonvolatile memory technology from Intel and Micron that boasts substantial performance and reliability gains over flash memory, but it has not been widely commercialized.


The slow pace of new introductions of hybrid DIMMs, slow growth of OEM support, slow migration of users to new technologies and a lack of education about the potential benefits of hybrid DIMMs have limited the penetration of this technology. We have, therefore, moved its position slightly closer to the Trough of Disillusionment on the Hype Cycle compared with last year.

User Advice: IT professionals should examine the roadmaps of major server and storage OEMs, as well as those of solid-state array (SSA) vendors, to see which will launch hybrid DIMM-based systems. They should ascertain whether hybrid DIMMs are supported by the OS and server they wish to use, and whether the required BIOS changes have been implemented in their target systems. The latencies of hybrid DIMMs and all DRAM DIMMs require that servers, systems and OS timing routines are tuned properly.

IT professionals should educate themselves about the option to use hybrid DIMMs to meet theirnonvolatile DIMM needs. Although the focus of hybrid DIMMs is DRAM backup, they also can beused as a new tier of storage with access times closer to DRAM, as they are significantly faster thanconventional SSDs and have a denser form factor that allows for greater system capacities. For thisuse case, users should consider both hybrid and solid-state DIMMs.

Business Impact: Hybrid DIMMs have several advantages over conventional battery-poweredbackup DIMMs, including faster speed, lower maintenance costs, greater reliability, high availabilityand improved system performance. Currently, the cost premium over existing solutions isconsiderable, but it should drop as NAND flash pricing stabilizes and competition intensifiesthroughout the remainder of 2016 and into 2017.

Memory vendors should consider hybrid DIMMs as a way of adding value to what are essentiallytwo commodity products — DRAM and NAND flash (or, potentially, 3D XPoint). By exploiting theattributes of these devices, they will not only enhance their own value proposition, but also expandtheir addressable market.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: AgigA Tech; Hewlett Packard Enterprise; Intel; Micron Technology; Netlist; Samsung; SK hynix; Smart Modular; Viking Technology

Recommended Reading: "Market Trends: The Next-Generation Server-Side SSD Landscape"

"Market Share Analysis: SSDs and Solid-State Arrays, Worldwide, 2015"

Virtual Machine Backup and Recovery

Analysis By: Pushan Rinnen; Dave Russell

Definition: Virtual machine (VM) backup and recovery focuses on protecting and recovering data from VMs, as opposed to the physical server they run on. Backup methods optimized for VM backup typically leverage hypervisor-native APIs for changed block tracking (CBT), which enables block-level incremental forever backup, eliminating the general need for the in-guest agent backup method. Some backup vendors created their own CBT driver before a hypervisor vendor introduced its own.
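To make the CBT-based approach concrete, the sketch below outlines the control flow an image-based backup tool typically follows: snapshot the VM, ask the hypervisor which block extents changed since the previous backup's change ID, and copy only those extents to the backup repository. This is a minimal sketch under stated assumptions; every hypervisor-facing call (snapshot_vm, query_changed_extents, read_blocks, remove_snapshot) and the repository object are hypothetical placeholders standing in for vendor-specific data protection APIs, not real library functions.

# Skeleton of changed-block-tracking (CBT) incremental backup logic.
# All hypervisor- and repository-facing calls are hypothetical placeholders.
def incremental_backup(vm, repository, last_change_id):
    snapshot = snapshot_vm(vm)                          # freeze a point-in-time image
    try:
        # Ask the hypervisor which block extents changed since the previous backup.
        extents = query_changed_extents(snapshot, since=last_change_id)
        for offset, length in extents:
            data = read_blocks(snapshot, offset, length)
            repository.write(vm.id, snapshot.change_id, offset, data)
        repository.commit(vm.id, snapshot.change_id)    # new recovery point
        return snapshot.change_id                       # becomes last_change_id next run
    finally:
        remove_snapshot(snapshot)                       # never leave snapshots behind

Because only changed extents are read and transferred, each run after the initial full copy stays proportional to the amount of change, which is what makes "incremental forever" practical without in-guest agents.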

Position and Adoption Speed Justification: Enterprise VM backup typically focuses on VMware and Hyper-V, as they are the most deployed hypervisors in enterprise data centers. For VMware backup, most backup software solutions have abandoned the traditional guest OS agent approach and adopted image-based backup, leveraging VMware's CBT and data protection APIs. However, many traditional backup applications require installation of guest OS agents to do granular item restore for applications, such as Exchange and SharePoint running on VMware. Other differentiators center around ease of use, scalability and self-service capabilities. For Microsoft Hyper-V, some vendors developed their own Hyper-V CBT driver, but we expect the native CBT driver in the upcoming Windows Server 2016 later this year to become the standard. While both VMware and Microsoft have native backup tools for small homogeneous VM environments, Veeam continues to be the leading vendor for heterogeneous, VM-focused backup. Meanwhile, a few smaller vendors are competing on price for service providers or small businesses. Integrated VMware backup appliances, such as those from Rubrik and Cohesity, have also emerged, offering an easier path to scale backup hardware infrastructure. VM backup tends to have a socket-based pricing model instead of a capacity-based model.

Agentless backup for other VM platforms is spotty at best. Backup for containers such as Docker hasn't become a user requirement, as most deployment of containers is for test/development and workloads that don't require persistent storage and data recovery. The sample vendor list includes vendors whose products only or primarily back up VMs.

User Advice: Recoverability of the virtual infrastructure is a significant component of an organization's overall data availability, backup/recovery and disaster recovery plan. Protection of VMs needs to be taken into account during the planning stage of a server virtualization deployment, as virtualization presents new challenges and new options for data protection.

Evaluate application data protection and restoration requirements before choosing VM-level backup. Additionally, snapshot, replication and data reduction techniques, and deeper integration with the hypervisor provider, should also be viewed as important capabilities. With hundreds to thousands of VMs deployed in the enterprise, and typically with 10 or more mission-critical VMs on a physical server, improved data capture, bandwidth utilization, and monitoring and reporting capabilities will be required to provide improved protection without complex scripting and administrative overhead.

Business Impact: As production environments have become highly or completely virtualized, the need to protect data in these environments has become critical. VM backup and recovery solutions help recover from the impact of disruptive events, including user or administrator errors, application errors, external or malicious attacks, equipment malfunction, and the aftermath of disaster events. The ability to protect and recover VMs in an automated, repeatable and timely manner is important for many organizations.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Early mainstream

Sample Vendors: Cohesity; Dell; Hewlett Packard Enterprise; Rubrik; Veeam Software; Vembu; Zerto

Recommended Reading: "Essential Practices for Optimizing VMware Backup"

"Magic Quadrant for Data Center Backup and Recovery Software"

"Critical Capabilities for Data Center Backup and Recovery Software"

"Best Practices for Repairing the Broken State of Backup"

Cloud Storage Gateway

Analysis By: Raj Bala

Definition: Cloud storage gateways refer to the physical or virtual appliances that reside in an organization's enterprise data center and/or public cloud network. They provide users and applications with seamless access to files stored in a public or private cloud. Users and applications typically read and write files through network file system or host connection protocols. Files are then transparently written to remote cloud storage through web service protocols, such as those offered by Amazon Web Services (AWS) and Microsoft Azure.
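As an illustration of the "web service protocol" half of that definition, the sketch below shows a file that arrived over a file protocol being persisted to an object store with the AWS SDK for Python (boto3). The bucket name and paths are placeholders chosen for this example, and a real gateway would also cache locally, batch writes and preserve file metadata; this is only a minimal sketch of the back-end write path.

# Minimal sketch: persist a locally received file to cloud object storage.
# Assumes the boto3 AWS SDK and valid credentials; "my-gateway-bucket" and the
# local path are illustrative placeholders, not values from this research note.
import boto3

s3 = boto3.client("s3")

def flush_to_cloud(local_path, share_relative_key):
    # The gateway exposes the file over NFS/SMB; here we copy the bytes to the
    # backing object store so the cloud copy stays in sync with the local cache.
    with open(local_path, "rb") as f:
        s3.put_object(Bucket="my-gateway-bucket",
                      Key=share_relative_key,
                      Body=f.read())

flush_to_cloud("/cache/projects/report.docx", "projects/report.docx")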

Position and Adoption Speed Justification: Cloud storage gateways are important elements of hybrid cloud storage models that connect on-premises storage with public cloud storage services from vendors such as AWS, Google and Microsoft. The market for cloud storage gateways is mostly made up of startup companies or large vendors that have acquired startup companies. The acquired products have been folded into larger product portfolios and, in some cases, are bundled together with products as diverse as high-end storage arrays, enterprise backup software and public cloud services. Customers typically deploy cloud storage gateways for archiving and collaboration use cases that traditional storage platforms do not support, but primary cloud storage gateway functionality is being introduced into object storage platforms such as IBM Cleversafe and Scality. Some cloud storage gateways, such as those from Nasuni and Panzura, provide a global namespace for files across disparate offices, but with local file access performance.

User Advice: The market for cloud storage gateways is much smaller compared with other adjacent, emerging storage markets. Businesses are not adopting cloud storage gateways or exhibiting hybrid cloud storage behavior in large numbers.

Users should deploy cloud storage gateways to realize unique value that isn't present in products from more mature markets such as HCIS. In particular, no other categories of storage products provide a global namespace, file locking and mobile access. These features serve collaboration and file sharing use cases across disparate geographies that are otherwise underserved by the larger storage market.

Business Impact: Cloud storage gateways act as a technology bridge between on-premises storage and public cloud storage. But technology bridges are often temporary. They are eventually dismantled when users understand how to get to the other side. And that is already happening in the market for cloud storage gateways. Businesses are either keeping applications and data on-premises, or they are moving them to public cloud infrastructure as a service (IaaS). Only infrequently are they taking a hybrid approach unless unique value can be derived.

Cloud storage gateways can provide customers that want to reduce in-house backup/disaster recovery processes, archives and unstructured data with compelling, cloud-based alternatives. Some organizations are deploying cloud storage gateways as virtual appliances in compute instances running in public cloud IaaS providers, such as AWS and Google Cloud Platform. The gateways then connect back to a customer's enterprise data center and act as a bridge between elastically scaled compute instances in the public cloud and the data stored on primary storage platforms inside the customer's data center. This scenario is particularly useful for big data workloads where the compute capacity is best used in a temporary, elastic fashion. This model flips the traditional notion of an enterprise's use of public cloud: An enterprise data center becomes an extension of public cloud, rather than the opposite.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Amazon Web Services; Avere Systems; Ctera Networks; EMC; Microsoft; Nasuni; NetApp; Panzura

Recommended Reading: "Market Guide for Cloud Storage Gateways"

Disaster Recovery as a Service

Analysis By: John P Morency

Definition: Disaster recovery as a service (DRaaS) is a cloud-based recovery service in which the service provider is responsible for managing virtual machine (VM) replication, VM activation and exercise management. In addition to service offerings that just recover virtual machines, a growing number of service providers now offer managed hosting services for hybrid recovery configurations composed of both physical and virtual servers.

Position and Adoption Speed Justification: Over the past year, Gartner has seen a significant increase in both the number of DRaaS providers (more than 250 today) and in the number of production service instances (over 50,000, more than double the number in 2015). Initially, small organizations with fewer than 100 employees were DRaaS early adopters. The reason for the service uptake in smaller organizations was that they often lacked the recovery data center, experienced IT staff and specialized skill sets needed to manage a disaster recovery (DR) program on their own. This made managed recovery in the cloud an extremely attractive option. However, since the beginning of 2014, many large (1,000 to 5,000 employees) and very large (5,000-plus employees) enterprises have also begun initial piloting or have moved beyond the piloting stage to full production. Today, large and very large enterprises represent approximately 27% and 13% of the DRaaS installed base, respectively.

Because of the growing number of production instances, rapidly falling service pricing and significant increases in service pilot evaluations, Gartner has increased the Hype Cycle position of DRaaS to post-trough 10%.

User Advice: Clients should not assume that the use of cloud-based recovery services will subsume the use of traditional DR providers or self-managed DR any time in the near future. The key reasons for this are computing-platform-specific recovery requirements, security concerns, active-active operations requirements and cost advantages of noncloud alternatives, among others. Therefore, it is important to look at DRaaS as just one possible alternative for addressing in-house recovery and continuity requirements.

Consider cloud infrastructure when you need DR capabilities for either Windows- or Linux-centric cloud-based applications, or when the alternative to a cloud-based recovery approach is the acquisition of additional servers and storage equipment for building out a dedicated recovery site. Additionally, because cloud services for enterprises are still rapidly evolving, carefully weigh the cost benefits against the service management risks as an integral part of your DR sourcing decision making.

Business Impact: The business impact is moderate today. The actual benefits will vary, depending on the diversity of computing platforms that require recovery support and the extent to which service customers can orchestrate (and ideally fully automate) the recurring recovery testing tasks that need to be performed. An additional consideration is the extent to which the customer can transparently and efficiently use same-provider cloud storage for ongoing data backup, replication and archival. The key challenge is ensuring that these services can be securely, reliably and economically used to complement or supplant the use of more traditional equipment subscription-based services or the use of dedicated facilities. In addition, given that no service, including DRaaS, is immune to scope creep, it is incumbent on service users to ensure that providers consistently deliver on committed recovery time and availability service levels, especially as the size of the in-scope configuration increases and the configuration itself becomes more heterogeneous.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Axcient; Bluelock; Cable & Wireless Worldwide; iland; IBM Resiliency Services; Microsoft (Azure); NTT Communications; Sungard Availability Services; Verizon Enterprise Solutions; VMware

Recommended Reading: "Five Pragmatic Questions to Ask Potential DRaaS Providers"

"10 Strategic Questions to Ask Potential DRaaS Providers"

"Critical Capabilities for Disaster Recovery as a Service"

"Magic Quadrant for Disaster Recovery as a Service"

Public Cloud Storage

Analysis By: Raj Bala

Definition: Public cloud storage is infrastructure as a service (IaaS) that provides block, file and/or object storage services delivered through various protocols. The services are stand-alone but often used in conjunction with compute and other IaaS products. The services are priced based on capacity, data transfer and/or number of requests. The services provide on-demand storage and are self-provisioned. Stored data exists in a multitenant environment, and users access that data through the block, network and REST protocols provided by the services.
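As a back-of-the-envelope illustration of that pricing model, the sketch below combines the three billing dimensions (capacity, data transfer and requests) into a monthly estimate. The unit prices are hypothetical placeholders chosen for illustration only; they are not rates quoted in this research or by any particular provider.

# Hypothetical unit prices for illustration only; real rates vary by provider,
# region and storage class.
PRICE_PER_GB_MONTH = 0.023      # capacity stored
PRICE_PER_GB_EGRESS = 0.09      # data transferred out
PRICE_PER_10K_REQUESTS = 0.005  # request charges

def monthly_cost(stored_gb, egress_gb, requests):
    return (stored_gb * PRICE_PER_GB_MONTH
            + egress_gb * PRICE_PER_GB_EGRESS
            + (requests / 10_000) * PRICE_PER_10K_REQUESTS)

# Example: 50 TB stored, 2 TB read out per month, 5 million requests.
print(round(monthly_cost(50_000, 2_000, 5_000_000), 2))  # ~1332.5 per month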

Position and Adoption Speed Justification: Public cloud storage forms a primitive in the building blocks of IaaS platforms, with providers offering a wide range of storage services. Storage services are optimized for workloads by performance, cost and availability. Public cloud storage usage is largely driven by workloads in IaaS compute instances. Enterprise customers occasionally use hybrid solutions that bridge on-premises or colocated storage with public cloud services. This allows for seamless operation between disparate environments.

Amazon Web Services (AWS) and Microsoft are solidifying their positions in the market and are continuing to establish their global presence and credibility. However, regulatory and sovereignty concerns have contributed to differences in expectations among users in the U.S. and other geographic areas. Gartner expects adoption expansion to continue as costs, legal concerns, security and infrastructure integration issues are sufficiently addressed to reduce the risk of usage by large enterprises.

User Advice: Utilize public cloud storage services when deploying applications in public cloud IaaS environments. Match workload characteristics and cost requirements to a provider with equivalently suited services. The realistic possibility of upheaval in this market warrants significant consideration of the risks should organizations choose a provider that is not one of the hyperscale vendors such as AWS, Google and Microsoft. Many of the Tier 2 public cloud storage providers that exist today may not exist in the same form tomorrow, if they exist at all.

Business Impact: The cost and agility expectations set by IaaS providers are enabling in-house IT operations to change their storage infrastructure management procedures and storage infrastructure strategies. User demands for lower costs, more agility and operations that are more autonomic are influencing vendor R&D investments and cloud service offerings.

Vendors continue to introduce a wide range of storage services with varying performance and cost — most notably archive tiers of storage for infrequently accessed data. Vendors are enabling end users to align their storage costs with their usage rates and reduce costs as a result.

Operational management of hardware and underlying infrastructure shifts to the cloud provider, but management issues such as chargeback, billing, security and performance responsibilities remain with the customer.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Alibaba Cloud; Amazon Web Services; AT&T; Google; IBM; Microsoft; Oracle; Rackspace

Recommended Reading: "Market Guide for Cloud Storage Gateways"

"Magic Quadrant for Public Cloud Storage Services, Worldwide"

"Magic Quadrant for Cloud Infrastructure as a Service, Worldwide"

Online Data Compression

Analysis By: Santhosh Rao

Definition: Online data compression encodes data using mathematical algorithms to reduce the number of bits needed to store an object and decodes the data when it is retrieved. This analysis deals with "lossless" compression schemes, meaning that the original data can be reconstructed in its entirety, exactly as it was, from the compressed data with no degradation. Run-length encoding, Lempel-Ziv and Huffman coding are three of the most popular algorithms in widespread use — sometimes with several of these techniques used in combination.
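Of the algorithms named above, run-length encoding is the simplest to illustrate: each run of identical symbols is replaced by a (symbol, count) pair, and decoding reverses the substitution with no loss. The short sketch below is a teaching example only; production compressors combine far more sophisticated variants of these techniques.

# Minimal lossless run-length encoding (RLE) sketch: each run of identical
# bytes is stored as a (byte_value, run_length) pair, and decoding expands it
# back to exactly the original data.
def rle_encode(data: bytes):
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((data[i], j - i))
        i = j
    return runs

def rle_decode(runs) -> bytes:
    return b"".join(bytes([value]) * count for value, count in runs)

original = b"aaaaabbbcccccccd"
assert rle_decode(rle_encode(original)) == original  # lossless round trip

Note that the same property that makes RLE effective on repetitive data (long runs collapse to short pairs) makes it useless on data with no repeating patterns, which foreshadows the caveat about already compressed or encrypted data later in this profile.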

Position and Adoption Speed Justification: Since the early 2000s, compression has been used in backup appliances such as virtual tape libraries and deduplication devices. These use cases often could tolerate the process and/or elapsed time demands that compression required.

In the last few years, compression has entered the primary storage market, and is often included in hybrid and solid-state arrays, but older storage arrays can be years behind. Advancements in processor speed, overall cost improvements and especially the random access, nonmechanical nature of flash technology have accelerated compression usage for primary data.

User Advice: Online data compression offers favorable capacity savings with modest to no performance considerations (to compress data at the time of the write and to reinflate data during a read operation) for the majority of workloads, and should be evaluated whenever available. The ability to apply online data compression to a greater number of use cases and workloads is increasing as the cost per CPU cycle declines and storage systems deploy more powerful processors and/or additional amounts of memory, both of which can accelerate the mathematical computations involved with compression algorithms. Compression significantly reduces writes in solid-state arrays (SSAs), but the ability to selectively turn compression on or off in SSAs is advantageous for workloads that are not compressible or for applications, such as Oracle Database, that already have compression as an add-on feature.

Despite the advantages that compression can offer, data reduction is not always achievable. Data that has previously been encrypted or compressed may not exhibit any repeating patterns that compression algorithms can further reduce, thus minimizing or negating any benefit. Also, certain applications, such as VDI and email, are better suited to deduplication than compression. Today, there is little harm in attempting to compress most workloads, as the processor or memory used for compression algorithms no longer requires the same percentage of resources compared with systems in the past.

Business Impact: Depending on performance considerations, type of data and retention periods, compression ratios can vary; however, typical results are usually in the 2-to-1 to 4-to-1 range, although Gartner generally guides clients to assume no higher than 2.5-to-1 for planning purposes. The positive impact of high data compression ratios on the need for additional storage purchases, operations, facility requirements and environmental costs will change the design of primary storage infrastructures, as well as backup/restore and archiving solutions. SSAs are able to use online data compression, which results in achieving price points close to traditional storage systems.

Online data compression can actually improve performance. This is because, with compression enabled, each input/output (I/O) operation can carry a higher effective data payload, thus swapping hard disk I/O for processing cycles, an advantageous trade-off in cost and time.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: EMC; Hitachi Data Systems; IBM; Kaminario; NetApp; Nimble Storage; Pure Storage; SolidFire; Tintri

Recommended Reading: "Magic Quadrant for Solid-State Arrays"

"Critical Capabilities for Solid-State Arrays"

"Magic Quadrant for Enterprise Backup Software and Integrated Appliances"

"Best Practices for Repairing the Broken State of Backup"

Climbing the Slope

SaaS Archiving of Messaging Data

Analysis By: Alan Dayley

Definition: Software-as-a-service (SaaS) archiving of messaging data involves email, instant messaging (IM), public social, business social and text/SMS data. Compliance and regulatory requirements drive the retention of messaging data, with SaaS archiving increasingly becoming the repository of choice. The capture of messaging content occurs at the time of creation or as it enters the organization's communications systems, where it can be stored on immutable, write once, read many storage.

Position and Adoption Speed Justification: SaaS-archiving solutions are mature. Many users find the administration tools for SaaS archiving solutions more user-friendly than those available from on-premises solutions. Once the journaling feature is turned on in the email administration console, capture is as simple as pointing the journaled email to the hosted provider's site. IM archiving is as mature as email, and IM data is often stored in an email format in the archive repository. Public social media and business social archiving are newer, and their capture is usually through APIs provided by the social media applications. Although social media data can be stored in an email format in the archive, the industry trend is to store it in native format. Capture of text messaging data is becoming more popular as well.

Unlike backup or disaster recovery as a service, archive users are less concerned about latency and more interested in the accurate capture of metadata and the chain of custody of data; therefore, the speed of the internet connections is not a major concern. This, coupled with the prevalence of easy-to-use administrative and supervision tools, has led many organizations to choose a hosted solution. This has enabled archive expenses to shift to an operating expenditure (opex) model and away from capital expenditure (capex).

As government and industry regulations proliferate, SaaS-archiving vendors have been nimble at updating their solutions to meet new compliance requirements. Most SaaS-archiving vendors offer end users access to messaging data through a search interface or, in some cases, a native application folder view. Basic e-discovery capabilities of hosted solutions have received high marks from customers and are noted as another reason for adoption. Microsoft Office 365 is having an impact on this market because native archiving and discovery features on that platform are improving.

User Advice: Organizations in highly regulated industries will find SaaS message-archiving solutions to be mature, secure and reliable enough to meet the most stringent requirements. Organizations with message-archiving needs will find the hosted option easy to administer and attractively priced. They will find that it offers an opportunity to optimize internal IT resources. Most organizations do not face internal or external requirements or regulations that require the data to reside on-premises, so the willingness to consider the cloud revolves primarily around company culture regarding risk, security, data sovereignty and costs.

When considering a solution, focus on indexing, search and discovery capabilities to ensure that your needs are met by the offering or through integration with a third-party e-discovery vendor. The migration of legacy email archives, including into and out of a hosted solution, can be expensive and should be scoped during the selection phase. In SaaS-archiving contracts, organizations should include an exit strategy that minimizes costs, and remember that they, not the SaaS providers, own the data. When determining the costs versus benefits for SaaS archiving, include soft expenses associated with on-premises solutions for personnel and IT-involved discovery requests.

Business Impact: Organizations switch capex for opex costs when selecting a hosted archive solution. Pricing is typically based on a per-mailbox or a per-user basis, paid as a monthly subscription. IT departments are relieved of the responsibility for updating legacy, on-premises archive systems when hardware and software need to be refreshed. Compliance and legal personnel within organizations directly access the hosted solution without IT involvement, and can more easily provide access to the hosted archive message data to outside parties, as required.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Bloomberg; Global Relay; Google; Hewlett Packard Enterprise; Microsoft; Mimecast; Proofpoint; Smarsh; Sonian; Veritas

Recommended Reading: "Magic Quadrant for Enterprise Information Archiving"

"Critical Capabilities for Enterprise Information Archiving"

"Plan Your Data Exit Strategy Before You Sign a SaaS Contract"

"How to Determine Whether Your Organization Needs Website Archiving"

"Five Factors to Consider When Choosing Between Cloud and On-Premises Email ArchivingSolutions"

Storage Multitenancy

Analysis By: Stanley Zaffos

Definition: Storage multitenancy features enable the secure sharing of a storage system among users running diverse workloads while managing each user's performance to service-level objectives. Multitenancy features generally include one or more of the following: logical partitioning, large primary or secondary cache configurations, autotiering, I/O prioritization and/or throttling, QoS features that set IOPS floors and ceilings or manage response times, balanced host and back-end bandwidth, file virtualization, and clustered or distributed file support.
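One of the listed QoS mechanisms, an IOPS ceiling, is commonly implemented as a token bucket per tenant: each I/O consumes a token, tokens refill at the tenant's ceiling rate, and requests wait when the bucket is empty. The sketch below is a simplified, single-threaded illustration of that idea under assumed parameter names; it is not any vendor's implementation, and real arrays enforce QoS inside the I/O scheduler with floors, ceilings and burst credits.

# Simplified per-tenant IOPS ceiling using a token bucket (illustrative only).
import time

class IopsCeiling:
    def __init__(self, iops_limit: float, burst: float):
        self.rate = iops_limit          # tokens added per second
        self.capacity = burst           # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def admit(self):
        """Block until this tenant is allowed to issue one more I/O."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)  # wait for the next token

# Example: cap a noisy tenant at 500 IOPS with a burst allowance of 50 I/Os.
tenant_a = IopsCeiling(iops_limit=500, burst=50)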

Position and Adoption Speed Justification: The widespread adoption of server virtualization, virtual desktop infrastructure (VDI), the move to 24/7 operations and competition from the cloud are driving storage consolidation and the deployment of private cloud infrastructures. These trends, in turn, are driving the adoption of scale-out storage systems and increasing the importance of multitenancy features.

User Advice: Multitenancy support should not be treated as a primary evaluation criterion, but as a use case that affects the weighting of other, more basic measures of storage system attractiveness, such as scalability and availability, performance/throughput, ecosystem support, and vendor support capabilities. Users should minimize risk associated with deploying shared storage arrays by testing to scale and negotiating performance/throughput guarantees with meaningful remedies that are not punitive, yet enforceable, and span diverse workloads.

Business Impact: Good multitenancy features increase the probability of success of storage consolidation projects that can lower total cost of ownership (TCO) by reducing the total number of storage systems being managed. Consolidation projects also have the potential to improve data protection, simplify disaster recovery testing, and improve the value of green storage technologies by increasing the average configuration of systems deployed. Larger configurations may enable users to deploy high-end rather than midrange storage systems. They increase the probability of applications being contained within a single storage system, which simplifies data protection and disaster recovery testing. Larger configurations increase the value of green storage technologies, such as thin provisioning, autotiering and data reduction technologies, by making it practical to configure a storage system with usable amounts of second-level cache and/or separately identifiable tiers of storage.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: DataDirect Networks (DDN); EMC; Fujitsu; Hitachi Data Systems; Hewlett Packard Enterprise; IBM; NetApp; Nimble Storage; Tegile Systems; Tintri

Recommended Reading: "Magic Quadrant for General-Purpose Disk Arrays"

Solid-State Arrays

Analysis By: Joseph Unsworth; John Monroe

Definition: Solid-state arrays (SSAs) are a subcategory of the broader external controller-based (ECB) storage market. SSAs are scalable, dedicated solutions based solely on semiconductor technology for data storage that cannot be configured with hard-disk drives (HDDs) at any time. As distinct from solid-state drive (SSD)-only racks configured within ECB storage arrays, SSAs must be stand-alone products denoted with a specific name and model number. They typically include an OS and data management software that are optimized for solid-state technology.

Position and Adoption Speed Justification: The SSA market started to take shape in 2012, amid a flurry of startup activity and innovation driven by the falling prices of flash memory, but its true promise and speed of adoption were not well-understood. The picture is much more lucid in 2016, as all incumbents now offer at least one SSA product, and the flood of early startups has distilled to a handful of companies vying for customers. Most, but not all, companies offer a complete set of data services. The most exceptional are built around simplicity and possess capable storage efficiency, data protection and, of course, predictable performance.

The maturation of the market over the last three years not only witnessed many acquisitions, but also several company failures as funding dried up. More companies are expected due to the reinvention of SSAs for an even higher-performance capability predicated on NVMe PCIe SSDs and eventually complemented by next-generation memory, such as 3D XPoint. Startups are not the only companies pursuing this next-generation tier. Most existing players in the market are as well, which will foster even more competition, innovation and dynamism in the market.

User Advice: IT professionals must assess their past and present application workload demands to determine performance, capacity and availability requirements. This will be an essential first step to determine the most cost-effective solutions that provide the appropriate data services to meet future workload demands.

Users should demand that SSA vendors include the following as "must do" parts of their competitive strategy:

■ Qualify and reliably integrate consumer-grade flash technology to provide higher-capacity solutions that can be sold for less than $5 per raw (uncompressed and without support/services) gigabyte.

■ Provide total cost of ownership (TCO) analyses of performance efficiencies, energy savings and space conservation relating to specific application workloads.

■ Embody flexibly selectable thin provisioning, in-line deduplication, in-line data compression, cross-application quality of service (to prevent inconsistent performance from the "noisy neighbor" effect), at-rest and on-the-fly encryption, high availability, and disaster recovery (by means of local/metro synchronous and remote asynchronous replication) in most, if not all, SSA configurations.

■ Prove your company capable of providing competent and enduring global support and services, flexible maintenance programs, and guarantees around performance, storage efficiency and high availability.

■ Outline a clear roadmap predicated on continued cost reductions, flexibility to accommodate upcoming solid-state technology, predictive analytics and ease of management.

IT professionals must weigh their actual needs against the cost and features of the available solutions when considering adoption. IT professionals should also remain cautious in deployment and select only financially stable and proven system suppliers that have strong direct or indirect partnerships that can enable them to deliver competent and enduring support and services.

Business Impact: Compared with the legacy HDD-based and hybrid HDD-/SSD-based arrays, SSAs remain relatively expensive on a raw dollar-per-gigabyte basis, but when storage efficiency and TCO are factored in, they can become quite compelling and even cheaper than HDD-based storage arrays, depending on the environment. The greatest opportunities for deployment are for performance usage in database and high-performance computing, and consolidation in highly virtualized environments, such as virtualized server infrastructure and hosted virtual desktop infrastructure. SSAs are also increasingly being used in analytics and data warehouse environments.

SSAs already have proved to be a dynamic subset of the ECB market; SSA revenue grew by 116% in 2015, expanding from $1.25 billion to $2.70 billion. Gartner predicts that the SSA share will approach almost 50% (more than $9.6 billion) of the total ECB market, with a 29.0% compound annual growth rate (CAGR) from 2015 through 2020. Legacy ECB storage arrays, while still at approximately $11 billion, will display a negative 11.0% CAGR from 2015 through 2020.

The legacy ECB arrays and SSAs will reflect a symbiotic evolution as the markets move toward the all-flash data centers of the future. Efficient data management software that seamlessly integrates the benefits of variously configured storage tiers, and can be closely coupled with the host OS, increasingly will become an imperative measure for long-term success.

Benefit Rating: Transformational

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: EMC; Hewlett Packard Enterprise; Hitachi Data Systems; Huawei; IBM; Kaminario; NetApp; Pure Storage; Tegile Systems; Violin Memory

Recommended Reading: "Magic Quadrant for Solid-State Arrays"

"Critical Capabilities for Solid-State Arrays"

"Market Share Analysis: SSDs and Solid-State Arrays, Worldwide, 2015"

"Solid-State Array TCO Reality Check"

"Moving Toward the All Solid-State Storage Data Center"

"The Top 10 Must-Ask Questions When Evaluating Solid-State Storage Arrays"

"Evaluation Criteria for Solid-State Arrays"

Automatic Storage Tiering

Analysis By: Stanley Zaffos

Definition: Automatic storage tiering moves pages of a logical volume or file between processor memory, tiers of cache and storage. Page movements are transparent to applications except for their potential impact on performance and throughput. Page movements are managed by policies and/or algorithms with the objectives of minimizing storage costs while meeting performance and throughput service-level agreements. Solid-state technologies, such as flash or 3D XPoint, can be managed as cache or as an identifiable tier of storage.
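The policy-driven page movement described above can be pictured as a periodic sweep that promotes frequently accessed pages to the fast tier and demotes cold pages to the capacity tier. The sketch below is a deliberately naive illustration of such a policy; the tier names, thresholds and access counters are assumptions for the example, and real implementations weigh recency, locality and migration cost rather than a single counter.

# Naive autotiering sweep: promote hot pages to the flash tier, demote cold
# pages to the HDD tier. Thresholds and counters are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Page:
    tier: str                  # "flash" or "hdd"
    accesses_last_period: int  # I/Os against this page since the last sweep

def tiering_sweep(pages, hot_threshold=100, cold_threshold=5):
    for page in pages:
        if page.tier == "hdd" and page.accesses_last_period >= hot_threshold:
            page.tier = "flash"   # promote: page is hot
        elif page.tier == "flash" and page.accesses_last_period <= cold_threshold:
            page.tier = "hdd"     # demote: page has gone cold
        page.accesses_last_period = 0  # reset counter for the next period

pages = [Page("hdd", 250), Page("flash", 1), Page("hdd", 3)]
tiering_sweep(pages)
print([p.tier for p in pages])  # ['flash', 'hdd', 'hdd']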

Position and Adoption Speed Justification: Autotiering has made flash usage in general-purpose storage systems commonplace. Indeed, most general-purpose disk arrays now being sold are hybrids configured with flash and hard-disk drives (HDDs). Most flash is packaged as SSDs using the SAS protocol, but PCIe cards that bypass high-latency protocols are finding increased adoption. Users taking advantage of autotiering with SSDs or PCIe cards are able to decrease their storage acquisition and ownership costs without sacrificing performance and throughput by replacing 10,000 rpm or 15,000 rpm high-performance HDDs with low-cost, high-capacity 7,200 rpm HDDs. The use of SSDs or PCIe-based solid-state technologies and low-cost, high-capacity 7,200 rpm HDDs also improves mean time between data losses (MTBDLs) in two ways. Solid-state technologies provide advance notice of failures, and using high-capacity HDDs reduces the number of HDDs in the system, which reduces the number of components that can fail and hence the frequency of repair activities. This also reduces the system's environmental footprint relative to a system configured to an equivalent capacity.

User Advice: Allow potential suppliers to benchmark the workloads their storage systems will support before they configure the systems they will bid, and then use their performance claims as the basis of performance guarantees with meaningful remedies. This increases the probability of proposed configurations meeting or exceeding performance SLAs, and gives storage suppliers at least partial ownership of any performance-related problems. Continually monitor performance and throughput, resize cache and storage, and adjust policies as indicated to accommodate changes in workload characteristics. When deploying autotiering software in a storage system that is being replicated to a disaster recovery site, take into account the impact of autotiering on replication and failover performance at the disaster recovery site.

Business Impact: The value of autotiering is proportional to system capacity, the ratio of active to stale data and the locality of references to application data. Multitenancy and security requirements influence the usability of autotiering features because of their ability to influence cache data access patterns and management, which in turn influence system performance and usable scalability. Autotiering implementations that are autonomic or near-autonomic in their operation may further lower storage total cost of ownership (TCO) by improving staff productivity.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Dell; EMC; Hewlett Packard Enterprise; Hitachi Data Systems; IBM; Infinidat; NetApp; Nimble Storage; Tegile; Tintri

Recommended Reading: "Save Storage Admin Costs, Stop Storage Array Tuning"

"Technology Overview for I/O Optimization Software"

"Overcome Disk Autotiering Problems With These Deployment Recommendations"

"How Much and What Type of Disk Storage Do IT Departments Need?"

"Where to Use SSDs in Your Storage Infrastructure"

"Solid-State Drives Will Complement, Not Replace, Hard-Disk Drives in Data Centers"

"Use SSDs, Rather Than Disk Striping, to Improve Storage Performance and Cut Costs"

Enterprise Information Archiving

Analysis By: Alan Dayley

Definition: Enterprise information archiving (EIA) solutions provide tools for capturing data into a distributed or centralized repository for compliance and efficiency. EIA supports multiple data types (including email, file system, social media, business social, website and mobile). These tools provide access to archived data in the repository or through a plug-in to the native application via a pointer or via browser access, and some manage the data in place. EIA tools support operational efficiency, compliance, retention management and e-discovery.

Position and Adoption Speed Justification: The number of vendors offering EIA solutions has stabilized and, in some cases, there has been consolidation in the market. Driven by awareness created through Microsoft Office 365 adoption, archiving is becoming mainstream for meeting compliance and e-discovery needs for organizations implementing information governance programs. SaaS for messaging data archiving, including email and social media, has also gained significant traction.

Support for the capture and supervision of social media has become a requirement in regulated industries. File archiving has slowed, with a focus on selective archiving for records. EIA products that support multiple content types are the norm. Many companies are looking to replace their archiving products with newer ones (particularly SaaS solutions), and many migration services are available. In addition, there is growing interest in managing the compliance and retention of data "in place," rather than moving it to a different repository.

The appetite for email-only archiving solutions remains; however, most organizations are looking to vendors with existing solutions or a roadmap for EIA products.

User Advice: As requirements to store, search and discover old data grow, companies are implementing an EIA solution, starting with email as the first managed content type. Many organizations are looking to migrate to cloud email and productivity solutions, such as those offered by Microsoft and Google, and when migrating, associated compliance and regulatory retention requirements need to be considered. In addition, organizations should have an overall data retention plan, including the need to archive additional content types. EIA use cases are growing to include records management and analytics capabilities. Organizations must ensure contractually that they have a reasonably priced process, as well as an option, for extracting data from an archive solution — namely from SaaS providers. Migrating personal stores to the archive should be part of the deployment of an email archive system.

Business Impact: EIA improves application performance, delivers improved service to users, and enables a timely response to legal discovery and business requests for historical information. Archived data can be stored in a less-expensive fashion, with the opportunity to take some data offline or delete it. Moving old data to an archive also reduces backup and recovery times by decreasing the active dataset.

Email remains the predominant content type archived as part of an EIA implementation. Archiving offered via SaaS is increasing in popularity because of the benefits associated with offloading low-business-value tasks to a third party, as well as the reduced capital expense. SaaS-based message data archiving is leading the way and is currently priced on a per-user, per-month basis, with no storage overages. As cost structure and integration issues are ironed out, more file system data and application data will be archived in the cloud. In addition, more organizations are seeking to create a holistic information governance strategy, including analytics of all data, so the right selection of an archiving or retention solution becomes even more imperative.

EIA is an important part of e-discovery, providing support for the Electronic Discovery Reference Model. Legal hold, retention management, search and export features are used to meet discovery and compliance requirements. Supervision tools for sampling and reviewing messages are available with many EIA products, in response to requirements specific to the regulated portion of the financial industry. To meet the requirements of mobile workers, EIA offers a way for organizations to keep data compliant in an archive, while providing access via mobile devices.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Bloomberg; EMC; Global Relay; Hewlett Packard Enterprise; IBM; Microsoft; Mimecast; Proofpoint; Smarsh; Veritas

Recommended Reading: "Magic Quadrant for Enterprise Information Archiving"

"Critical Capabilities for Enterprise Information Archiving"

"Plan Your Data Exit Strategy Before You Sign a SaaS Contract"

"Organizations Will Need to Tackle Three Challenges to Curb Unstructured Data Glut and Neglect"

"How to Determine Whether Your Organization Needs Website Archiving"

Data Deduplication

Analysis By: Dave Russell

Definition: Data deduplication is a unique form of compression that eliminates redundant data on a subfile level to improve storage utilization. Extraneous copies of the data are eliminated and replaced with a pointer to the single retained copy. Compared to traditional compression, deduplication has a broader scope for comparing redundant data, such as across multiple users, VMs, backup jobs or storage array volumes, and examines data that has been written over a longer period of time, sometimes with greater levels of granularity.
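A minimal way to picture subfile deduplication is to split incoming data into chunks, fingerprint each chunk with a cryptographic hash, and store the bytes only for fingerprints not already in the chunk store; repeated chunks become references to the copy already held. The sketch below uses fixed 4KB chunks and SHA-256 purely for illustration; production systems typically use variable-length chunking and far more elaborate indexing.

# Minimal fixed-chunk deduplication sketch: unique chunks are stored once,
# keyed by their SHA-256 fingerprint; a file becomes a list of fingerprints.
import hashlib

CHUNK_SIZE = 4096          # 4KB chunks, a typical granularity
chunk_store = {}           # fingerprint -> chunk bytes (stored exactly once)

def dedup_write(data: bytes):
    """Return the list of fingerprints ("pointers") that reconstructs data."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in chunk_store:          # only new, unique chunks consume space
            chunk_store[fp] = chunk
        recipe.append(fp)
    return recipe

def dedup_read(recipe) -> bytes:
    return b"".join(chunk_store[fp] for fp in recipe)

backup = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # 16KB with repeated content
recipe = dedup_write(backup)
assert dedup_read(recipe) == backup
print(len(recipe), "chunks referenced,", len(chunk_store), "stored")  # 4 referenced, 2 stored

The example stores only two unique chunks for four referenced chunks, which is exactly why repetitive backup streams reach the high reduction ratios cited later in this profile.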

Position and Adoption Speed Justification: This technology reduces the amount of physical storage required, significantly improving the economics of disk-, flash- or memory-based solutions for backup, archiving and primary storage. While deduplication has historically been used in backup activities (due to the repetitive nature of capturing largely unchanged data), it can be applied to long-term archiving and primary storage.

Deduplication has taken on a vital role in solid-state array (SSA) and hybrid flash array storage appliances in an effort to contain the cost of the flash solution while maximizing capacity. As such, nearly all flash storage and hybrid flash array devices possess some form of deduplication.

User Advice: Solutions vary in terms of where and when deduplication takes place, which can significantly affect performance and ease of installation. When used with backup, deduplication that occurs on a protected machine is referred to as "client-side" or "source" deduplication. Deduplication that takes place after the data leaves the protected machine — after the data is sent to the backup application — is considered "target-side" deduplication. A distinction is also made between solutions that deduplicate the data as it is processed ("in-line" deduplication) and products that write data directly to disk, as they would without deduplication, and then deduplicate it later, which is "post-processing" or "deferred" deduplication. Deduplication solutions also vary in granularity, but 4KB to 128KB segments of data are typical. Some deduplication algorithms are content-aware, meaning that they apply special logic for further processing, depending on the type of application and data being stored, and/or can factor out metadata from an application, such as a backup program.

Gartner clients using deduplication for backup typically report a seven to 25 times reduction (a 7-to-1 to 25-to-1 ratio) in the size of data. Archiving deduplication ratios are often in the 3-to-1 to 10-to-1 range, and primary data commonly yields 3-to-1 to 6-to-1 ratios, with all-flash array (AFA) ratios sometimes reaching 8-to-1 on a consistent basis. Restore performance can be negatively affected by deduplication, depending on the solution and media implemented, as data must be rehydrated.

Given the costs associated with flash storage, deduplication is an essential capability for improving the economics and wear endurance of flash, and it should be considered a "must have" feature because there is no performance trade-off for flash deduplication as there can be for disk deduplication.

Business Impact: Deduplication improves the cost structure of storage because less storage needs to be purchased, deployed, powered and cooled. As a result, businesses may be able to use disk, flash or DRAM memory for more of their storage requirements, and/or may retain data for longer periods of time, thus enabling faster recovery or read access versus retrieval from slower media. The additional benefits of deduplication include its positive impact on disaster recovery (DR), because less network connectivity is required, since each input/output (I/O) operation carries a larger data payload.

Benefit Rating: Transformational

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Actifio; EMC; ExaGrid; Hewlett Packard Enterprise; NetApp; Permabit; Pure Storage; Quantum; SimpliVity; Veritas Technologies

Recommended Reading: "Magic Quadrant for Solid-State Arrays"

"Critical Capabilities for Solid-State Arrays"

"Magic Quadrant for Deduplication Backup Target Appliances"

"Magic Quadrant for Enterprise Backup Software and Integrated Appliances"

"Best Practices for Repairing the Broken State of Backup"

Network-Based Replication Appliances

Analysis By: Stanley Zaffos

Definition: Network-based replication appliances provide storage-vendor-neutral block-level and/or network-attached storage (NAS) replication services. Local snapshots, clones, continuous data protection (CDP), remote replication (synchronous and asynchronous) and consistency groups or their equivalent are commonly offered services. Network-based replication solutions can provide a wider span of view than server- and storage-system-based solutions. Instantiations can be via virtual or physical servers.

Position and Adoption Speed Justification: Offloading replication services from storage systems into a network-based replication appliance provides operational and financial advantages, compared with software- and controller-based solutions. Operational benefits include preserving native storage system performance, providing common replication services across multiple heterogeneous storage systems, and protecting both SAN and DAS storage with the same technology, which can simplify disaster recovery by creating a constant timeline or consistency group across multiple storage systems. Network-based replication also reduces the strength of storage vendor lock-ins, which can lower storage ownership costs by keeping storage system acquisitions competitive. Despite these financial and operational advantages, market acceptance has been hampered by:

■ A strong end-user reluctance to add anything to the input/output path because of concerns about I/O bottlenecks, I/O elongation and creating another potential single point of failure (SPOF)

■ Competition from storage virtualization appliances

■ The idea of replacing a server or storage-array-based solution that already works and has been tested

■ The increasing number of storage systems that use all-inclusive software-pricing models

User Advice: Users should consider network-based replication appliances when there is:

■ A need to create a constant timeline across multiple homogeneous or heterogeneous storage systems

■ A problem with the usability or performance of the existing replication solution


■ A need to preserve investments in existing storage systems

■ A reluctance to invest in older installed storage systems

■ A desire to pursue a dual-vendor strategy or put incumbent storage vendors on notice that you are dissatisfied with their pricing or postsales support

Users should ensure that replication appliance microcode overhead and network latency do not create performance/throughput bottlenecks when protecting solid-state arrays (SSAs) and/or solid-state drives (SSDs).

Business Impact: Network-based replication appliance services can:

■ Provide the benefits of storage-based replication solutions without the lock-ins that storage-system-based replication solutions create.

■ Delay storage system upgrades by offloading replication overhead from a storage system that lacks the compute power and bandwidth needed to limit the impact of replication services on native system performance.

■ Work with DAS, SANs and NAS.

■ Provide heterogeneous replication targets to allow lower-cost solutions.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: DataCore Software; EMC; FalconStor; Hitachi Data Systems; Huawei; IBM; NetApp; Zerto

Recommended Reading: "Slow Storage Replication Requires the Redesign of Disaster RecoveryInfrastructures"

"Decision Point for Data Replication for HA/DR"

Continuous Data Protection

Analysis By: Dave Russell

Definition: Continuous data protection (CDP) is an approach to recovery that continuously, or nearly continuously, captures and transmits changes to applications, files or blocks of data while journaling these changes. This capability provides the option to recover to many more-granular points in time to minimize data loss. Some CDP solutions can be configured to capture data either continuously (true CDP) or at scheduled times (near CDP).
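
A minimal sketch of the journaling idea (illustrative only; real CDP products hook the block or file-system driver, and the names below are hypothetical): every change is captured with a timestamp, and a restore replays the journal only up to the chosen recovery point.

```python
# Illustrative CDP journal: every write is captured with a timestamp, and a
# restore replays the journal up to the requested recovery point, giving
# near-arbitrary recovery granularity instead of "last night's backup".
import time
from dataclasses import dataclass

@dataclass
class JournalEntry:
    ts: float     # capture time
    path: str     # file (or block address) that changed
    data: bytes   # new contents of the changed region

class CDPJournal:
    def __init__(self):
        self.entries: list[JournalEntry] = []

    def capture(self, path: str, data: bytes):
        """Called on every write (true CDP) or on a schedule (near CDP)."""
        self.entries.append(JournalEntry(time.time(), path, data))

    def recover_to(self, point_in_time: float) -> dict:
        """Rebuild state as of the chosen recovery point."""
        state = {}
        for e in self.entries:
            if e.ts <= point_in_time:
                state[e.path] = e.data
        return state

journal = CDPJournal()
journal.capture("/db/orders", b"v1")
checkpoint = time.time()               # e.g., "before the patch was applied"
journal.capture("/db/orders", b"v2-corrupted")
print(journal.recover_to(checkpoint))  # {'/db/orders': b'v1'}
```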

Position and Adoption Speed Justification: The difference between near CDP and regular backup is that backup is typically performed once, or at most a few times (typically no more than two to four), a day, whereas near CDP is often done every few minutes or hours, providing many more recovery options and minimizing any potential data loss. Several products also provide the ability to heterogeneously replicate and migrate data between two different types of disk devices, allowing for potential cost savings for disaster recovery solutions. Checkpoints of consistent states are used to enable rapid recovery to known good states (such as before a patch was applied to an OS, or the last time a database was reorganized) to ensure application consistency of the data and minimize the number of log transactions that must be applied.

CDP and near-CDP capabilities can be packaged as server-based software (most common), as network-based appliances (today less common) that sit between servers and storage, or as part of a storage controller. Storage controllers offer near CDP only by way of the frequent use of snapshots, and do not allow for the capture, journaling and transmission of every write activity. The delineation between frequent snapshots (one to four per hour or less granularity) and near CDP is not crisp, and administrators often implement snapshots and CDP solutions in a near-CDP manner to strike a balance between resource utilization and improved recovery.

User Advice: Consider CDP for critical data where regular snapshots and/or backups do not enable meeting the required recovery point objectives (RPOs). Gartner has observed that true CDP implementations are most commonly deployed for files, email and laptop data. True CDP for databases and other applications is not common and has a much lower market penetration. Near CDP for applications might be more appropriate to ensure application consistency, and minimize the amount of disk and potential processor cycles required for the solution.

CDP can be an effective countermeasure against ransomware.

Many large vendors have acquired their current offerings, and very few startup vendors remain, with most startups being acquired or having failed. Many backup applications include CDP technology as an option to their backup portfolio. The market has since mostly adopted near-CDP solutions via more frequent, array-based snapshots or as part of the backup application. The disk requirements and potential production application performance impact were among the main reasons for true CDP initially facing challenges. Later, as near CDP became more readily available, it satisfied most of the market's needs.

Business Impact: CDP can dramatically change the way data is protected, decreasing backup and recovery times, as well as reducing the amount of lost data, and can provide additional recovery points. Compared to traditional backup, which typically captures data once a day, the amount of data lost in a restore situation can be nearly 24 hours for backup versus minutes or a few hours with CDP.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Actifio; Arcserve; Catalogic Software; CloudEndure; Code42; Commvault; DataCore Software; EMC; Microsoft; Vision Solutions


Recommended Reading: "Magic Quadrant for Enterprise Backup Software and IntegratedAppliances"

"Best Practices for Repairing the Broken State of Backup"

External Storage Virtualization

Analysis By: Roger W. Cox

Definition: Storage virtualization is a technology that sits between host servers and the external controller-based (ECB) storage system infrastructure. In most cases, it provides a virtual view of the physical storage devices, and aggregates the devices into a common resource pool for presentation to the compute environment. Storage virtualization can be provided by hardware appliances, or by software within an ECB storage system. Vendors sometimes use "storage hypervisor" to describe their storage virtualization offering.

Position and Adoption Speed Justification: Besides virtualizing multiple physical, and often disparate, ECB storage systems into a common storage pool, most storage virtualization solutions provide other services, such as common provisioning, including thin provisioning, as well as local and remote replication data services. Some even support storage efficiency features, such as tiered storage and data reduction.

Storage virtualization can be implemented as symmetrical or asymmetrical solutions. In a symmetrical (or in-band) approach, the layers of abstraction and processing used by the virtualization solution are inserted directly into the data path. In an asymmetrical (or out-of-band) approach, the abstraction and processing control lie outside the data path. Storage virtualization appliances, software-only offerings intended to create appliances, and ECB storage-array-based software solutions employ the symmetrical implementation. However, asymmetrical implementations use storage area network (SAN) switches in conjunction with appliances.
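
A minimal sketch of the symmetrical (in-band) pooling idea, with hypothetical names (not any vendor's implementation): virtual volumes are carved from extents that may live on different back-end ECB arrays, and an extent can be migrated between arrays without changing what the host sees.

```python
# Hypothetical sketch of in-band storage virtualization: virtual volumes map
# fixed-size extents onto back-end arrays from different vendors; reads and
# writes are redirected through the map, so an extent can be migrated between
# arrays transparently to the host.
EXTENT = 4  # toy extent size in "blocks"

class BackendArray:
    def __init__(self, name):
        self.name, self.blocks = name, {}

class VirtualVolume:
    def __init__(self, size_blocks, backends):
        # Spread extents round-robin across the available back-end arrays.
        self.map = {lba: backends[(lba // EXTENT) % len(backends)]
                    for lba in range(size_blocks)}

    def write(self, lba, data):
        self.map[lba].blocks[lba] = data           # in-band redirect

    def read(self, lba):
        return self.map[lba].blocks.get(lba)

    def migrate_extent(self, start_lba, target):
        """Move one extent to another array without changing host addressing."""
        for lba in range(start_lba, start_lba + EXTENT):
            src = self.map[lba]
            if lba in src.blocks:
                target.blocks[lba] = src.blocks.pop(lba)
            self.map[lba] = target

array_a, array_b = BackendArray("array-A"), BackendArray("array-B")
vol = VirtualVolume(8, [array_a, array_b])
vol.write(0, b"x")
vol.migrate_extent(0, array_b)   # e.g., evacuate an old array before retirement
print(vol.read(0), "now on", vol.map[0].name)
```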

Even though functionality and scalability have improved since this technology was introduced in 1999, market traction remains muted relative to the overall size of the ECB storage system market. This minimal market traction can be attributed, in part, to the following:

■ Some segments of the ECB storage system market, such as installations supporting mainframes and network-attached storage, are not broadly supported by storage virtualization solutions.

■ Some silo applications, such as data warehousing and Microsoft Exchange, do not fit within the storage virtualization solution model.

■ There are challenges associated with establishing or validating a compelling value proposition, compared with a conventional storage infrastructure composed of multiple ECB storage systems.

■ Only four major storage vendors support an organic, symmetrical in-band storage virtualization solution: EMC, which launched Federated Tiered Storage for its Symmetrix Storage platform in May 2012, and its VPLEX in May 2010; Hitachi (Hitachi Data Systems), which launched its Universal Volume Manager in August 2004; IBM, which launched the SVC in June 2003 and the Storwize V7000 in October 2010; and NetApp, which released the V-Series in March 2005 (now withdrawn from NetApp's product card) and its FlexArray virtualization software option in February 2014.

Although it is a close race, current market momentum, from a vendor revenue perspective, favors the symmetrical in-band storage virtualization appliances from EMC (VPLEX) and IBM (SVC). However, symmetrical in-band storage virtualization solutions that are incorporated within an ECB storage system from Hitachi (Hitachi Data Systems) (Universal Volume Manager), IBM (Storwize V7000/V5000) and NetApp (FlexArray virtualization option) are gaining increasing market share as cost-effective migration tools.

User Advice: Current offerings tend to attack different problems or segments of the market, so users should carefully examine what they want to accomplish before comparing products. Consider these devices for mass-volume migration between ECB storage systems, for management improvement where the software tools are better than those of existing arrays, and for consolidation or transition to a single storage vendor when the user owns ECB storage systems from many vendors. Be aware that storage virtualization creates vendor lock-in, and most (but not all) solutions disable the value-added data services and caching features of the ECB storage systems being virtualized. Therefore, cost justifications must be scrutinized in detail to guard against paying twice for the same features.

Business Impact: Storage virtualization eases migration from old to new ECB storage systems, and it enables temporary consolidation of older ECB storage systems prior to moving to a single-vendor solution. In addition, migrating existing ECB storage infrastructures to cloud paradigms is sparking interest in storage virtualization offerings as a bridge to a cloud environment, by repurposing existing investments in a storage infrastructure. It can improve provisioning and other storage management to the extent that it provides better software tools, and it can sometimes reduce the cost of the back-end storage. However, storage administrators must still manage the ECB storage systems to perform basic low-level configuration tasks.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: DataCore Software; EMC; FalconStor; Hitachi; IBM; NetApp

Recommended Reading: "Storage Virtualization"

"Decision Point for Server Virtualization Storage Selection"

"Storage Virtualization: Steppingstone to a Better Environment"


Entering the Plateau

EFSS

Analysis By: Monica Basso

Definition: Enterprise file synchronization and sharing (EFSS) refers to a range of on-premises or cloud-based capabilities that enable individuals to synchronize and share documents, photos, videos and files across mobile devices, PCs and servers. Sharing can happen within the organization or outside of it, with partners, customers or others. Security and collaboration are critical complementary features for enterprise adoption.
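
A minimal sketch of the synchronization mechanic only (illustrative; the function names are hypothetical, and commercial EFSS products add authentication, encryption, permissions and conflict handling on top): each endpoint compares per-file content hashes against the server's manifest and transfers only what differs.

```python
# Illustrative file-sync core as used conceptually by EFSS tools: compare
# per-file content hashes against a server manifest and transfer only the
# files that changed on either side.
import hashlib

def manifest(files: dict[str, bytes]) -> dict[str, str]:
    """Map each path to a content hash."""
    return {p: hashlib.sha256(d).hexdigest() for p, d in files.items()}

def sync(device: dict[str, bytes], server: dict[str, bytes]) -> None:
    dev_m, srv_m = manifest(device), manifest(server)
    for path in dev_m.keys() | srv_m.keys():
        if dev_m.get(path) == srv_m.get(path):
            continue                        # unchanged: nothing to transfer
        if path not in srv_m:
            server[path] = device[path]     # new on device: upload
        elif path not in dev_m:
            device[path] = server[path]     # new on server: download
        else:
            server[path] = device[path]     # diverged: last-writer-wins (toy)

laptop = {"deck.pptx": b"v2"}
cloud = {"deck.pptx": b"v1", "notes.txt": b"hello"}
sync(laptop, cloud)
print(sorted(laptop), cloud["deck.pptx"])   # both sides converge
```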

Position and Adoption Speed Justification: EFSS technology is relatively mature; its basic features include native mobile and web client apps, password protection and data encryption, and server integration (with SharePoint, for example), whereas its enhanced ones include content creation, collaboration, digital rights management, cloud encryption key management (EKM) and a modern responsive user interface (UI). Leading vendors offer integration with Microsoft's Office 365. Over the past six years, EFSS adoption has grown rapidly to hundreds of users, thus driving progressive standardization. A variety of IT vendors in multiple markets (such as enterprise content management [ECM], collaboration, storage, backup and enterprise mobility management [EMM]) added file synchronization and sharing features as extensions to their offerings. EFSS puts pressure on traditional markets, such as ECM and storage, forcing an evolution in EFSS functionality to include cloud, mobile and analytics paradigms, to enhance user experience, IT administration and business support. Cloud storage providers such as Google, Microsoft and Amazon accelerate commoditization, bundling EFSS for minimal price into broader deals.

EFSS destination vendors are evolving in two different directions: data infrastructure modernization (that is, enterprise systems/resources integration and management with support for federation, e-discovery, data governance and security); or modern content collaboration and business enablement. Despite concerns that security and compliance may slow down adoption, EFSS investments continue to grow as organizations must balance IT control over bring your own (BYO) cloud services with users' demand for modern productivity tools. Adoption of Microsoft Office 365 has driven significant interest in OneDrive for Business, but Dropbox and Box Enterprise continue to be users' preferred options, and Google offers the best alternative for real-time document collaboration.

User Advice: IT leaders responsible for digital workplace initiatives must consider EFSS. To work more effectively, organizations must explore potential security risks of personal cloud services, as well as users' requirements for modern productivity tools. They should evaluate EFSS options and capabilities to enable secure mobile content sharing, collaboration and productivity, thus reducing potential risks. If email or legacy FTP services are used for file transfers, organizations should consider EFSS as an efficient and potentially more secure way for employees to share data, rather than using personal services. Organizations looking for secure alternatives to personal cloud while preserving user preferences, and those that are especially focusing on external collaboration, should consider public cloud offerings such as Box Enterprise or Dropbox. Organizations with tighter data control requirements, or with a large storage infrastructure, should focus on hybrid solutions (such as Citrix, Syncplicity and Egnyte) that allow the organization to maintain more control over where data is housed and leverage existing storage investments. Organizations with strong requirements for data protection, or organizations that have strict regulations about data location and residency or complex data manipulation requirements, should focus on private cloud or on-premises EFSS deployments.

Business Impact: Enterprise file sharing will enable higher productivity and collaboration for mobile workers who deal with multiple devices, and lead to a more agile and connected workforce. Organizations investing in such capabilities will enable a more modern and collaborative real-time workplace, while reducing or avoiding the inherent security/compliance threats of personal cloud services. Business benefits include increased productivity and cost savings.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Accellion; Box; Citrix; Ctera Networks; Dropbox; Egnyte; Google; Intralinks; Microsoft; ownCloud

Recommended Reading: "The EFSS Market's Future Will Present an Opportunity for IT Planners"

"How to Build EFSS Plans to Address Current and Future Business Requirements"

"Magic Quadrant for Enterprise File Synchronization and Sharing"

"Toolkit: Enterprise File Synchronization and Sharing RFI/RFP"

Appendixes


Figure 3. Hype Cycle for Storage Technologies, 2015

[Figure: the July 2015 Hype Cycle chart, plotting expectations against time across the Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity phases; each technology is keyed by time to plateau (less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before plateau).]

Source: Gartner (July 2015)


Hype Cycle Phases, Benefit Ratings and Maturity Levels

Table 1. Hype Cycle Phases

Innovation Trigger: A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.

Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.

Trough of Disillusionment: Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity: The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.

Years to Mainstream Adoption: The time required for the technology to reach the Plateau of Productivity.

Source: Gartner (July 2016)

Table 2. Benefit Ratings

Transformational: Enables new ways of doing business across industries that will result in major shifts in industry dynamics.

High: Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise.

Moderate: Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise.

Low: Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings.

Source: Gartner (July 2016)


Table 3. Maturity Levels

Embryonic
  Status: In labs.
  Products/Vendors: None.

Emerging
  Status: Commercialization by vendors; pilots and deployments by industry leaders.
  Products/Vendors: First generation; high price; much customization.

Adolescent
  Status: Maturing technology capabilities and process understanding; uptake beyond early adopters.
  Products/Vendors: Second generation; less customization.

Early mainstream
  Status: Proven technology; vendors, technology and adoption rapidly evolving.
  Products/Vendors: Third generation; more out of box; methodologies.

Mature mainstream
  Status: Robust technology; not much evolution in vendors or technology.
  Products/Vendors: Several dominant vendors.

Legacy
  Status: Not appropriate for new developments; cost of migration constrains replacement.
  Products/Vendors: Maintenance revenue focus.

Obsolete
  Status: Rarely used.
  Products/Vendors: Used/resale market only.

Source: Gartner (July 2016)

Gartner Recommended Reading

Some documents may not be available as part of your current Gartner subscription.

"Understanding Gartner's Hype Cycles"

"2015 Strategic Roadmap for Storage"


GARTNER HEADQUARTERS

Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096

Regional Headquarters
AUSTRALIA
BRAZIL
JAPAN
UNITED KINGDOM

For a complete list of worldwide locations, visit http://www.gartner.com/technology/about.jsp

© 2016 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This publication may not be reproduced or distributed in any form without Gartner's prior written permission. If you are authorized to access this publication, your use of it is subject to the Usage Guidelines for Gartner Services posted on gartner.com. The information contained in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see "Guiding Principles on Independence and Objectivity."
