
Storage-Insider eBook

Software-defined Storage solves performance problems

Optimising your various existing storage devices allows your infrastructure to operate at peak performance levels

Published by


Contents

Software-defined storage: The devil is in the detail (SDS is the framework of the future)
Software-defined storage solves performance problems (Making optimal use of existing storage media)

DataCore Software GmbH
Bahnhofstr. 18, 85774 Unterföhring
Phone +49 89 4613570-0
E-mail [email protected]
www.datacore.com/de

Vogel IT-Medien GmbH
August-Wessels-Str. 27, 86156 Augsburg, Germany
Phone +49 (0) 821/2177-0
E-mail [email protected]
www.Storage-Insider.de
General manager: Werner Nieberle
Editor in chief: Rainer Graefen, responsible as per press laws, [email protected]
Publication date: September 2014

Title image: vege - Fotolia.com

Liability: Should any articles or information be inaccurate, the publisher will only be liable in the event of proven gross negligence. Where articles identify the author by name, the author himself is responsible.

Copyright: Vogel IT-Medien GmbH. All rights reserved. Reprints, digital use of any kind and/or duplication hereof are only permitted with the written consent of the editorial staff.

Reprints and electronic use: If you would like to use any articles from this eBook for your own publications, such as special prints, websites, other electronic media or customer newsletters, you can obtain the necessary information and the required licences online at www.mycontentfactory.de, phone +49 (0) 931/418-2786.


Software-defined storage: The devil is in the detail

SDS is the framework of the future

The term "software-defined" is now regularly used to describe storage. You would be hard-pressed to find a major supplier who does not use this keyword to describe their products. Yet what exactly does the concept imply, what advantages do businesses gain when they implement it, and at what point is it worthwhile considering software-defined storage?

Ask any market analyst and they will agree: software-defined storage is the platform of the future.

IDC, Forrester or Gartner: regardless of analyst house, market experts universally agree that software-defined storage will become the de facto platform for storage provision. The main reason, they say, is that businesses of any size will always need more capacity to store their data. Moreover, performance and availability requirements tend to grow in importance, depending on the applications in use. Yet companies only have limited funds available to invest in storage. According to the most recent research by the analyst firm 451 Research, the percentage of the overall IT budget invested in storage has in fact decreased over the last two years. To satisfy this purse tightening, there is demand for solutions that can be scaled to fit exact needs, that offer businesses a higher degree of flexibility and that promise to save costs. This is where software-defined storage (SDS) comes in.

SDS: simply a marketing claim?

What exactly is SDS? The explanation provided by IDC analysts can serve as an initial reference: they interpret software-defined storage as "a storage software stack installed on shared resources (x86 hardware, hypervisors or in the cloud) or on commercially available computer hardware". This provides the basis for "allowing the bundling of existing storage resources, the improvement of their utilization and the capability to structure a service-based infrastructure".

By contrast, manufacturers are still having a hard time finding a generally applicable definition, or even a willingness to agree on standards. This is understandable, because storage hardware suppliers are primarily interested in continuing to sell their own systems successfully. In the meantime, however, they continue to deliver models carrying an "SDS" label. More often than not, deployment does not bring about any change, because the required functions are still tied to specific storage platforms, typically through proprietary software. Thus the system's own set of features can interact neither with new components nor with other manufacturers' systems. Needless to say, this contradicts the principles of SDS, where the software determines the functions of the storage and does so entirely independently of the underlying devices or selected topologies.



Storage virtualization serves as an SDS vehicle

Generally, manufacturers revert to storage virtualization techniques as a means to an end, typically integrating an abstraction layer between the application server and the storage component. The result is that storage is no longer defined by physical limits, but instead can be distributed more flexibly, thus becoming logically accessible.

This division between the physical and the logical brings several advantages: existing resources can be used more efficiently, expansions are easier to implement, data can be migrated without interruption, management can be centralised and new functions can be introduced at all levels.

Which of the numerous technical options a solution uses depends primarily on the direction each manufacturer has decided to follow. In a SAN, for example, virtualization can take place by means of an in-band, out-of-band or split-path process, either in the host or in the storage controller of the storage system. Generally, with technology inherently tied to specific devices or models, we must accept that it will only work properly with the systems offered by its particular manufacturer.

For a long time now, one tried, tested and therefore effective alternative has been to revert to software-based solutions. These bundle every resource at a software level that is valid for all devices; the fewer products bound to specific platforms and/or manufacturers, the better. The result is that all performance features can be made available at all levels, irrespective of the existing hardware, access to the storage systems can be controlled centrally and the entire storage infrastructure can be uniformly managed from a single console.

SDS solutions bundle all resources into a software layer used by all devices, making all performance criteria generally available irrespective of the existing hardware. (Image: DataCore)
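Conceptually, the abstraction layer described here is just a mapping between logical volumes and whatever physical devices sit in the pool. The following Python sketch illustrates the idea only; it is not based on any vendor's product, and all class and method names (StoragePool, VirtualVolume and so on) are invented for the example.

```python
# Illustrative sketch of a storage virtualization layer: logical volumes
# are mapped block-wise onto heterogeneous physical devices, so callers
# never address the hardware directly. All names here are invented.

class PhysicalDevice:
    def __init__(self, name, capacity_blocks):
        self.name = name
        self.capacity_blocks = capacity_blocks
        self.blocks = {}                 # physical block id -> data

    def read(self, block_id):
        return self.blocks.get(block_id)

    def write(self, block_id, data):
        self.blocks[block_id] = data


class StoragePool:
    """Bundles devices from different vendors behind one interface."""

    def __init__(self, devices):
        self.devices = devices
        self.next_free = {d.name: 0 for d in devices}

    def allocate(self):
        # Naive placement: pick the device with the most free blocks.
        device = max(self.devices,
                     key=lambda d: d.capacity_blocks - self.next_free[d.name])
        phys = self.next_free[device.name]
        self.next_free[device.name] += 1
        return device, phys


class VirtualVolume:
    """A logical volume whose blocks may live on any pooled device."""

    def __init__(self, pool):
        self.pool = pool
        self.mapping = {}                # logical block -> (device, physical block)

    def write(self, logical_block, data):
        device, phys = self.mapping.get(logical_block) or self.pool.allocate()
        device.write(phys, data)
        self.mapping[logical_block] = (device, phys)

    def read(self, logical_block):
        device, phys = self.mapping[logical_block]
        return device.read(phys)


pool = StoragePool([PhysicalDevice("vendor_a_array", 1000),
                    PhysicalDevice("vendor_b_ssd", 200)])
vol = VirtualVolume(pool)
vol.write(0, b"hello")
print(vol.read(0))                       # b'hello'; the caller never sees the device
```

Whether such a layer runs in-band in the data path or out-of-band alongside it is exactly the design choice the article attributes to the individual manufacturers.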

Approaches that focus on hardware suffer from limitations

There is much to be said for integrating "cookie-cutter" functions and management solutions at software level and replacing classic hardware-focused architectures with non-proprietary virtual and



software-defined approaches. There are quite a few reasons for doing so. Firstly, data volumes will continue to increase, making it difficult to determine just how much storage space must be reserved in the medium term. Applications, sophisticated tier 1 applications in particular, and the demands they place on the storage infrastructure become more exacting as workloads grow. Yet classic systems are not designed for this and are not flexible enough to adjust to changing conditions. What makes things even more difficult is the limited useful life of the hardware, which for storage arrays averages around five, at most seven, years. Businesses therefore frequently purchase oversized storage so that they are equipped for any scenario during this period. This approach does not allow the flexibility to react to new requirements.

However, if capacity and performance prove insufficient in day-to-day operation, expansions are required, combined with the need to procure additional devices that more often than not have to be managed separately or, at worst, require a complete change of architecture. This, in turn, creates even more problems. The result is a complicated jumble of storage environments that requires a great deal of effort to operate and manage. Additional hardware takes up more space, and expenses for power, cooling and maintenance increase in equal measure.

SDS frees businesses from technical constraints

Both from a technical and an economic perspective, old-fashioned hardware-based storage architectures will sooner or later reach their limits. With this in mind, software-defined storage represents a future-orientated approach that is interesting for small and medium-sized businesses alike. It is worthwhile putting some detailed thought into SDS, especially when new storage hardware needs to be purchased, or when the use of flash/SSDs or server and desktop virtualization projects are on the agenda. The same applies when business continuity is a key topic, requiring a fail-safe, high-performance and highly available IT infrastructure as the basis for running business processes without interruption.

No matter which of these scenarios applies: by separating storage services and functions from the devices, businesses are given the freedom to use standard hardware, irrespective of type, and to manage all their storage needs in software. In this way, existing traditional hard drive storage can be combined with flash media and hybrid systems in storage architectures tailored to individual requirements. This is the key to replacing existing island solutions and finally saying goodbye to parallel block-orientated SANs, file-based NAS, separate backup and disaster recovery systems, various hypervisors and isolated flash solutions.


The future of SDS from the point of view of analysts

IDC: Based on a survey conducted by IDC, a majority of European businesses do in fact deal with SDS as a topic, yet so far only eight percent of them have implemented relevant solutions. Despite this, software-defined storage represents an attractive approach: 42 percent of the IT decision makers questioned consider software to be a key engine for innovation in the field of storage.

Gartner: Market researcher Gartner considers SDS a concept still in the making, but one that businesses should already be discussing now. From the analysts' point of view, one of the greatest benefits of SDS is the integration of hardware infrastructures that are not manufacturer-dependent, that are operated based on SLAs and that can solve problems that once posed challenges to conventional data storage. Based on estimates by Gartner analysts, however, it will take at least another ten years before SDS becomes prevalent on a large scale.

Forrester: According to Forrester, storage budgets can no longer keep up with the demand for storage. IT administrators are therefore seeking solutions that allow them to make storage capacity and performance available as needed, preferably automatically. The analysts do not think that integrating additional platforms is the best response available today, because in their opinion this would reinforce the silo mentality and make the storage environment even more complicated. Instead, they are convinced that the weak points of the conventional approach will only accelerate the introduction of SDS.



SDS: A performance turbine for critical business applications

The classic storage systems of the past are no longer capable of satisfying the performance needs of critical, data- or transaction-focused business applications. This is why, over the years, the added use of flash media or solid-state disks (SSDs) has become a common option for increasing overall performance. However, integrating flash efficiently into existing environments still poses a challenge to IT managers.

SDS-based architectures, on the other hand, are able to solve these integration problems: fast storage can be integrated quickly, without complications and with almost no interruption, while existing components remain in use. Businesses can benefit from a large number of cross-platform functions and services designed to speed up and optimise performance; in addition to automatic tiering and load balancing, sophisticated caching methods are also offered. If the primary objective is to eliminate performance bottlenecks, SDS may thus prove to be the best approach. We explore the dos and don'ts in the following article. Tina Billo

The optimal use of various storage media allows storage infrastructure to operate at peak performance levels.



Software-defined storage solves performance problems

Making optimal use of existing storage media

Big Data, cloud computing, social media, mobile business: these trends have spurred on exponential volumes of data that now need to be recorded, processed and analysed. High-performance applications are required to cope, but storage systems also need to offer sufficient performance to store and back up the data these applications generate efficiently, while guaranteeing high availability and manageable administration. Traditional hard-drive storage arrays quickly reach their limits here, so alternative paths are being sought to work around those limits. This article provides an overview of the storage technologies available today, together with their advantages and disadvantages.

Virtual workstations are on the rise. They demand better performance from the storage infrastructure. (Image: IDC)

While achievable computing power and networking speed have multiplied rapidly over the last decade, the only radical changes in storage systems have concerned disk density and capacity, not overall performance. The top speed of traditional mechanical hard drives, for example, has been stuck at 15,000 rpm since 2000, and given the physical limitations we can hardly expect further development in this regard. This opens a glaringly obvious performance gap between the CPU and the attached storage.

This has been posing a problem for businesses for quite some time. One reason is that the data volumes that need to be processed have grown out of all proportion: the average annual growth rate is between 40 and 45 percent. The use of mobile devices has exploded in recent years, and this tendency will continue, with social networks and cloud services propagating ever greater volumes of data on the go. Simply recording and storing this data is only one small part of the puzzle; evaluating and managing it presents an even greater ongoing challenge.
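To put that mechanical ceiling into rough numbers, here is a back-of-the-envelope estimate of the random-access IOPS a single 15,000 rpm drive can deliver. The seek time used is a typical published figure for such drives, assumed here purely for illustration.

```python
# Rough estimate of random-access IOPS for a single 15,000 rpm hard drive.
# The average seek time is an assumed, typical value for such drives.

rpm = 15_000
avg_seek_s = 0.0035                     # ~3.5 ms average seek (assumed)
half_rotation_s = (60 / rpm) / 2        # average rotational latency: 2 ms

service_time_s = avg_seek_s + half_rotation_s
iops = 1 / service_time_s
print(f"~{iops:.0f} IOPS per drive")    # ~182 IOPS

# Even a modest SSD delivers tens of thousands of IOPS, which is the
# CPU-to-storage gap described above.
```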


Decision makers hope to obtain important insights from their data for their own businesses' benefit so that they can stay ahead of the competition. Such levels of understanding are only gained through data mining with sophisticated applications, which come with a price tag of high performance needs: high throughput and low latency.

Traditional storage systems cannot keep up, and why would they? They are based on architectural models that are twenty years old and were never designed for such extreme workloads. The gap between processor and storage speed creates a bottleneck here.

Bottleneck storage: Server and desktop virtualization require more powerful systems

At the same time, virtualization technologies are being adopted by businesses with increasing momentum. Based on a survey conducted among IT decision makers, analysts at Forrester estimate that 77 percent of all companies around the world now work with virtualised servers. Virtual desktop infrastructures (VDI) are also on the rise: a 2013 IDC study showed that 27 percent of European companies had already set up virtual workstations, with a further 20 percent discussing their implementation and another 27 percent trialling an introduction.

However, simultaneously executing applications in virtual machines (VMs) creates a pattern of mixed workloads and arbitrarily distributed data access. Classic disk storage proves to be a stumbling block here, because it does not have sufficient I/O performance to read and write the data fast enough. Even though IOPS performance can be increased by adding disks, many businesses feel that costly undertakings of this kind are no longer suitable for this day and age.

The end justifies the means

More recent concepts and solutions are defining what is yet to come. Among other things, this includes creating more storage space and/or increasing performance by adding further devices ("scaling out") or by upgrading existing systems with additional components ("scaling up"). In the latter case, IT managers generally lean towards solid-state storage, which uses NAND flash as the storage medium in the form of solid-state disks (SSDs) and flash modules. Where performance is concerned, these are far superior to HDDs and therefore cope well even with the large number of random read and write operations common in virtualised environments. Another option for improving application performance is to consider converged systems, which combine server, storage and network technologies.


Converged systems combine server, storage and network technologies. These virtualised "out-of-the-box" data centres come with the promise of improved application performance. (Image: IDC)


According to IDC, these converged systems are becoming increasingly appealing: about 16 percent of the companies queried in a 2013 survey had already implemented a converged approach, with an additional 53 percent considering implementation. Companies also expect improved utilisation of existing systems and higher storage performance through storage virtualization, implemented either with systems already in operation or via a software solution.

Software-defined storage (SDS) is deemed the next logical step. Once again the focus is on introducing an abstraction layer between the applications and the physical devices, with the aim of logically combining resources for shared access.

With so many options out there promising to improve the performance of storage infrastructures, companies need to give due thought to which approach is ultimately the most appropriate.

SDS: Boosted performance for storage infrastructures

If the decision is made to set up a software-defined storage environment, companies have two options to choose from: they can either revert to the solutions hardware manufacturers offer for their own platforms, or they can choose a purely software-based, device-independent approach. The first option poses the very real risk that functions are partially or entirely tied to the components of each hardware brand and cannot be made available across the board.

Taking the pure software-defined storage option consolidates all storage resources, services and management processes and offsets proprietary limitations and incompatibilities. The intelligence and functions move into the software, providing an autonomous virtual intermediate layer detached from physical hardware restrictions. This offers the distinct advantage that all storage media, irrespective of format, become available via a standardised, centralised platform. This includes, for example, automated storage tiering, caching and load balancing processes, all of which serve one single purpose: to make the best use of the performance potential of the different resources and to speed up applications.

Using auto-tiering to meet application requirements perfectly

Thanks to high data transfer rates and extremely short access times, solid-state disks and flash technologies are the ideal option to counteract the increased performance requirements of critical business applications. Yet performance does not come cheaply: purchasing fast storage is still much more expensive than classic hard drives. This is why businesses rely on solutions that allow them to make the best possible, economically feasible use of costly storage space. This can be achieved with software-controlled auto-tiering.


Data blocks with high access rates are automatically migrated to faster SSDs and less active ones to slower mass storage by the software-controlled automatic tiering process. (Image: DataCore)


For this purpose, storage media are consolidated into virtual storage pools, which are first organised into separate storage classes, or "tiers", determined by their price-to-performance characteristics. Using predefined criteria, such as date and degree of utilisation, intelligent mechanisms then ensure seamless placement on the most suitable storage type at block level, based on cost and performance. Data blocks with high access rates are automatically migrated to faster SSDs, while less active ones are relegated to slower mass storage according to predefined rules. By consistently monitoring I/O performance and accounting for all competing I/O requirements, the software automatically allocates demanding, latency-sensitive workloads to fast storage media and workloads that are not time-critical to slower, more cost-effective ones.

To cushion foreseeable peak loads that occur regularly at certain times, virtual disks can also be statically assigned to a high-performance tier. As soon as its capacity is exhausted, loads can be switched to a lower tier. This makes it possible to meet the performance and availability requirements of critical application workloads, speed up response times and greatly accelerate business-critical tier 1 applications across the entire infrastructure, irrespective of the underlying hardware.
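As a rough illustration of this mechanism (and not DataCore's actual algorithm), the following sketch promotes blocks to a fast tier and demotes them again based on a simple per-interval access count; the tier sizes and thresholds are invented for the example.

```python
# Minimal auto-tiering sketch: blocks with high access counts are promoted
# to the fast tier, inactive ones demoted, within each tier's capacity.
# Thresholds and tier sizes are invented for illustration.

from collections import Counter

TIERS = {"ssd": 100, "hdd": 10_000}      # capacity in blocks (assumed)
PROMOTE_AT = 50                          # accesses per interval (assumed)
DEMOTE_AT = 5

placement = {}                           # block_id -> tier name
access_counts = Counter()                # accesses in the current interval

def record_access(block_id):
    access_counts[block_id] += 1
    placement.setdefault(block_id, "hdd")    # new blocks start on bulk storage

def rebalance():
    """Run periodically: promote hot blocks, demote cold ones."""
    ssd_used = sum(1 for t in placement.values() if t == "ssd")
    # Demote cold blocks first to free up fast capacity.
    for block, tier in placement.items():
        if tier == "ssd" and access_counts[block] < DEMOTE_AT:
            placement[block] = "hdd"
            ssd_used -= 1
    # Promote the hottest eligible blocks while space remains.
    hot = sorted((b for b, t in placement.items()
                  if t == "hdd" and access_counts[b] >= PROMOTE_AT),
                 key=lambda b: -access_counts[b])
    for block in hot:
        if ssd_used >= TIERS["ssd"]:
            break
        placement[block] = "ssd"
        ssd_used += 1
    access_counts.clear()                # start a fresh measurement interval

# Example: one block is hammered, another barely touched.
for _ in range(80):
    record_access("db_index_page")
record_access("old_archive_block")
rebalance()
print(placement)   # {'db_index_page': 'ssd', 'old_archive_block': 'hdd'}
```

The static pinning the article mentions would simply bypass this loop for selected virtual disks and fix their placement on the fast tier.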

Faster data reading and writing using caching

More technically sophisticated storage virtualization solutions take this a stage further by using caching to increase access speeds. If the selected software runs on x86-64 standard servers, the devices connected in a storage pool can use the DRAM working memory and the I/O resources of each node as a high-performance "mega cache": part of the physical server RAM is reserved to respond directly to incoming application requests. Frequently read blocks remain in this intermediate storage, relieving the load on the back-end data carriers and reducing I/O latency. Moreover, established caching techniques such as read-ahead, write-behind and the consolidation of random writes into sequential disk I/O ("write coalescing") can be applied across the board. As a consequence, applications execute more quickly, increasing the performance of disk storage by a factor of three to five. Caching write operations also extends the life of SSDs, because the drives need to perform fewer write cycles.


Data caches increase the performance of disk storage three- to five-fold and also extend the life span of SSDs. (Image: DataCore)
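The behaviour outlined above can be modelled very roughly in a few lines: a RAM-backed read cache in front of the disks, writes acknowledged immediately into a buffer ("write-behind"), and adjacent buffered blocks merged into one sequential back-end operation ("write coalescing"). This is a simplified sketch of the general technique, not the actual product mechanism.

```python
# Simplified sketch of node-RAM caching with write-behind and write
# coalescing: reads are served from memory when possible, and buffered
# writes to adjacent blocks are merged into one sequential back-end I/O.

from collections import OrderedDict

class CachingLayer:
    def __init__(self, backend, cache_blocks=4):
        self.backend = backend            # dict-like block store ("disk")
        self.cache = OrderedDict()        # LRU read cache in "RAM"
        self.cache_blocks = cache_blocks
        self.write_buffer = {}            # block_id -> data, not yet flushed

    def read(self, block_id):
        if block_id in self.write_buffer:        # newest data wins
            return self.write_buffer[block_id]
        if block_id in self.cache:                # cache hit: no disk I/O
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backend[block_id]             # cache miss: go to disk
        self.cache[block_id] = data
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)        # evict least recently used
        return data

    def write(self, block_id, data):
        self.write_buffer[block_id] = data        # write-behind: ack at once

    def flush(self):
        """Coalesce buffered writes into runs of adjacent blocks."""
        run = []
        for block_id in sorted(self.write_buffer):
            if run and block_id != run[-1] + 1:
                self._flush_run(run)
                run = []
            run.append(block_id)
        if run:
            self._flush_run(run)
        self.write_buffer.clear()

    def _flush_run(self, run):
        # One sequential back-end operation instead of many random ones.
        print(f"flushing blocks {run[0]}..{run[-1]} sequentially")
        for block_id in run:
            self.backend[block_id] = self.write_buffer[block_id]

layer = CachingLayer(backend={})
for b in (7, 5, 6, 20):
    layer.write(b, f"data{b}")
layer.flush()    # blocks 5..7 merge into one run; block 20 flushes separately
```

Fewer, larger back-end writes are also why such caching reduces wear on SSDs, as noted above.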


If companies prefer to keep shared storage close to the applications, setting up a central virtual SAN is an interesting option. In this setup, in addition to storage for the application servers, VMs also have access to the resources of the virtualization nodes and to all other connected physical storage systems, including components such as DRAM caches and flash- or cloud-based solutions, from a single source. This improves the scalability of the entire infrastructure even further in terms of capacity and performance.

Businesses also benefit from access to company-wide storage functions that in the past were reserved for classic SAN infrastructures, and they can automate and manage these functions centrally from one console. They include storage pooling, auto-tiering, adaptive read/write caching and load balancing, alongside a large number of other services. Utilisation of the installed storage capacity can be improved with thin provisioning, while snapshots and continuous data protection (CDP) guarantee comprehensive protection of critical company data. Additionally, technologies such as synchronous mirroring and asynchronous replication ensure that information invaluable to day-to-day business operations is available at all locations without fear of downtime. Installing a virtual SAN is the perfect solution for a medium-sized company interested in moving towards software-defined storage without heavy investment overhead.
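Thin provisioning, mentioned above, boils down to one idea: present a large logical size to the host but allocate physical blocks only on first write. A minimal sketch, with invented names:

```python
# Minimal thin-provisioning sketch: the volume reports its full logical
# size, but physical blocks are allocated only when first written.

class ThinVolume:
    def __init__(self, logical_blocks, physical_pool):
        self.logical_blocks = logical_blocks    # size promised to the host
        self.pool = physical_pool               # shared free-block list
        self.allocated = {}                     # logical -> physical block

    def write(self, logical_block, data, store):
        if logical_block not in self.allocated:
            if not self.pool:
                raise RuntimeError("physical pool exhausted")
            self.allocated[logical_block] = self.pool.pop()  # allocate lazily
        store[self.allocated[logical_block]] = data

    def physical_usage(self):
        return len(self.allocated)

pool = list(range(1000))                 # 1,000 real blocks shared by all volumes
store = {}
vol = ThinVolume(logical_blocks=100_000, physical_pool=pool)  # promises 100,000
vol.write(42, b"x", store)
print(vol.physical_usage(), "of", vol.logical_blocks, "blocks backed")  # 1 of 100000
```

This is also why reclaiming over-provisioned space, discussed in the summary below, translates directly into deferred hardware purchases.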

Load balancing improves data flow rates and response times

Load balancing is yet another component used to prevent typical storage bottlenecks such as the "blender effect". This term describes the recurring problem of many applications competing for shared storage resources at the same time in virtualised environments. Classic hard-drive-based storage arrays simply cannot handle this rush of I/O-intensive access operations, and application performance suffers as a result. Automatic load balancing is an option to correct this problem and, in conjunction with auto-tiering and caching, forms a cornerstone of high performance.

Generally, we distinguish between two methods. One is to distribute the load across the available front-end connections between the application servers and the storage virtualization node(s). The other is to distribute the data load between the various physical hard drives within the pool.
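The first of the two methods, distributing I/O across the available front-end connections, can be illustrated with a simple least-outstanding-I/O selector; the path names and the policy shown here are assumptions for the example.

```python
# Sketch of front-end path load balancing: each new I/O request is sent
# down the path with the fewest outstanding operations. Names are invented.

paths = {"fc_port_1": 0, "fc_port_2": 0, "iscsi_port_1": 0}  # outstanding I/Os

def submit_io(request_id):
    path = min(paths, key=paths.get)     # least-loaded path wins
    paths[path] += 1
    print(f"request {request_id} -> {path}")
    return path

def complete_io(path):
    paths[path] -= 1                     # free the slot when the I/O completes

for i in range(5):
    submit_io(i)
# Requests spread evenly instead of queueing behind one saturated port.
print(paths)   # {'fc_port_1': 2, 'fc_port_2': 2, 'iscsi_port_1': 1}
```

The second method, balancing data across the physical drives of a pool, follows the same principle one level down, with disks instead of ports as the targets.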

Summary

Companies primarily interested in practical ways to increase the performance of their overall storage infrastructure should take a closer look at SDS. If a fast, cost-effective entry in line with the IT budget is required, it can be realised with a virtual SAN. Because the software defines the functions, performance improvements can be gained across all storage options, completely independently of the manufacturer or the technology.


A virtual SAN improves the scalability of the entire infrastructure in terms of capacity and performance. (Image: DataCore)


In addition, companies retain the flexibility to integrate components based on current developments into their existing infrastructure at any time. As a result, they can react to changes in performance requirements and speed up critical tier 1 business applications. A recent global study by TechValidate Research shows just how large the gain can be: 72 percent of the companies that already rely on software-defined storage reported a three- to ten-fold increase in performance. They reported equally good results for capacity optimisation: some 64 percent of the companies queried were able to reclaim over half of their over-provisioned, wasted storage space. In doing so, they improved utilisation of their existing total capacity by up to a factor of four and were able to use existing hardware for longer without further investment in additional storage, typically demonstrating savings of between 25 and 75 percent. In practice, SDS is therefore a worthwhile investment for companies of any size. Tina Billo


Load balancing is an additional component used to prevent typical storage bottlenecks. (Image: DataCore)