Post on 16-Jul-2020
Spectrum Scale Strategy Days 2020
Einsteigertag (Introductory Day)
ESS Fundamentals (ESS Grundlagen)
Götz Mensel (goetz.mensel@de.ibm.com)
Unique IBM: Get It Your Way
Cloud | Appliance | Software-Defined Storage
Spectrum Scale – Get it Your Way
3
Four ways to deploy (from do-it-yourself to fully integrated):
1. Spectrum Scale (GPFS) on any server with any storage; your own installation and configuration service.
2. Spectrum Scale (GPFS) on IBM servers with IBM storage; your own installation and configuration service.
3. Spectrum Scale (GPFS) on IBM servers with IBM storage; IBM installation and configuration service.
4. Elastic Storage Server (ESS): Spectrum Scale (GPFS) with GNR on IBM server and storage; IBM installation and configuration service.
(Cloud | Appliance | Software-Defined Storage)
GNR without ESS hardware? That is the Erasure Code Edition.
Source: www.extremetech.com, press release August 2011
4
Spectrum Scale (GPFS) Native RAID (GNR)
GNR is the software implementation of the RAID technology found in disk controllers.
GPFS "Classic" vs. GPFS Native RAID
Declustered RAID
5
Declustered RAID Example
6
Conventional layout: 6 disks in 3 one-fault-tolerant mirrored groups (RAID1) plus 1 spare disk; 21 stripes (42 strips), 7 stripes per group, 2 strips per stripe.
Declustered layout: the same 7 disks, with all 49 strips (42 data strips plus 7 spare strips) spread evenly across every disk.
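The strip arithmetic of this example can be checked in a few lines (a sketch; the variable names are ours, not GNR's):

```python
# Conventional layout: 3 mirrored (RAID1) groups of 2 disks plus 1 spare disk.
groups, stripes_per_group, strips_per_stripe = 3, 7, 2
data_strips = groups * stripes_per_group * strips_per_stripe  # 42 strips

# Declustered layout: the same data strips plus one disk's worth of spare
# strips are spread evenly over all 7 disks.
disks = 7
spare_strips = stripes_per_group          # 7 spare strips
total_strips = data_strips + spare_strips  # 49 strips
strips_per_disk = total_strips // disks    # 7 strips on every disk

print(data_strips, total_strips, strips_per_disk)  # 42 49 7
```

Every disk carries the same mix of data and spare strips, which is what lets all disks participate in a rebuild.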
Rebuild Overhead Reduction Example
7
Conventional RAID: when a disk fails, rebuild activity (reads and writes) is confined to just a few disks over a long time: a slow rebuild that disrupts user programs.
Declustered RAID: when a disk fails, small read-write rebuild operations are spread across many disks, causing much less disruption to user programs.
Rebuild overhead is reduced by 3.5x.
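A toy bottleneck model (our simplification, not GNR's actual rebuild scheduler) reproduces the 3.5x figure, assuming the rebuild work is shared by 7 disks:

```python
import math

def classic_rebuild_ops(strips_lost):
    # Mirror rebuild: one disk (the partner or spare) must handle every
    # lost strip, so the busiest disk performs `strips_lost` operations.
    return strips_lost

def declustered_rebuild_ops(strips_lost, disks_sharing):
    # Each lost strip costs one read of a surviving copy and one write into
    # a distributed spare strip; both are spread across many disks.
    return math.ceil(2 * strips_lost / disks_sharing)

lost = 7  # strips held by the failed disk in the slide's example
print(classic_rebuild_ops(lost) / declustered_rebuild_ops(lost, 7))  # 3.5
```

The speedup is roughly D/2 for D disks sharing the rebuild, so it grows with array size.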
Declustered RAID6 Example
8
Left: 14 physical disks / 3 traditional RAID6 arrays / 2 spares. Right: 14 physical disks / 1 declustered RAID6 array (data, parity, and spare all declustered) / 2 spares. In both cases, the same two disks fail.

Number of faults per stripe, traditional (both failed disks sit in the Green array):
Red Green Blue
0 2 0   (identical for all 7 stripe rows)

Number of faults per stripe, declustered:
Red Green Blue
1 0 1
0 0 1
0 1 1
2 0 0
0 1 1
1 0 1
0 1 0

Number of stripes with 2 faults: traditional = 7, declustered = 1.
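Counting the double-fault stripes directly from the slide's two tables confirms the comparison:

```python
# Fault counts per stripe, taken directly from the slide's tables
# (columns: Red, Green, Blue).
traditional = [(0, 2, 0)] * 7  # both failed disks are in the Green array
declustered = [(1, 0, 1), (0, 0, 1), (0, 1, 1), (2, 0, 0),
               (0, 1, 1), (1, 0, 1), (0, 1, 0)]

def stripes_with_two_faults(rows):
    # A cell of 2 means that stripe has lost two strips: one more
    # failure there would mean data loss under RAID6.
    return sum(1 for row in rows for cell in row if cell == 2)

print(stripes_with_two_faults(traditional), stripes_with_two_faults(declustered))
# 7 1
```

Declustering leaves only one stripe critically exposed instead of seven, which is why the critical-rebuild phase is so short.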
De-clustered RAID allows super-fast rebuilds
9
Timeline: no rebuild → 1st disk failure (normal rebuild) → 3rd disk failure (start of critical rebuild) → critical rebuild finished after 4 minutes 16 seconds → normal rebuild continues.
Rebuild of a critical failure takes minutes instead of hours or days!
Three advantages of Spectrum Scale RAID compared to traditional RAID
1. Faster disk rebuilds: rebuilds complete in minutes instead of hours or days because of declustered RAID.
2. Higher storage resiliency:
   - Uses erasure coding with up to 3 parity blocks and can survive 3 disk failures with only 27% capacity overhead, compared to 200% overhead with 3-way replication.
   - Uses fault domains to lay out disks in such a way that it can survive entire disk-shelf failures.
   - Uses the disk hospital to proactively identify sick drives (disks with bad sectors or media errors) and either a) replace the disk or b) repair any bad data from parity.
3. End-to-end data integrity: Spectrum Scale RAID maintains checksums of data blocks from the client to the blocks on disk and validates them at every point, eliminating the chance of silent data corruption or data loss.
(Figure: 20 physical disks / 1 declustered array)
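The 27% vs. 200% overhead comparison is simple arithmetic: with an 8+3p erasure code, parity occupies 3 of every 11 strips, while 3-way replication stores two extra copies of everything.

```python
# Capacity overhead of 8+3p erasure coding vs. 3-way replication.
data, parity = 8, 3
ec_overhead = parity / (data + parity)  # parity share of the raw capacity
replica_overhead = 2 / 1                # 3-way replication: 2 extra full copies

print(f"{ec_overhead:.0%} vs {replica_overhead:.0%}")  # 27% vs 200%
```

Both schemes survive three disk failures, but the erasure code does so at a fraction of the capacity cost.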
Checksums and version numbers
- Checksums in a data trailer detect corruption.
- Checksums alone do not detect stale data unless something is also stored elsewhere in metadata: old data still matches its old checksum.
- Storing strong checksums in metadata adds space constraints, so version numbers are stored in metadata instead to detect dropped writes.
- GPFS Native RAID uses both checksums and version numbers to protect user data.

End-to-end checksums
- Write operation: from compute node to IO node, and from IO node to disk.
- Read operation: from disk to IO node, and from IO node to compute node.

(Diagram: GPFS compute nodes → network → GPFS IO nodes with Native RAID → disk array; each data block carries a checksum trailer.)
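The interplay of trailer checksum and metadata version number can be sketched as follows (an illustrative toy only; GNR's real on-disk format and checksum algorithm differ):

```python
import zlib

def write_block(data: bytes, version: int):
    # The trailer travels with the data; the expected version number is
    # what the filesystem records separately in metadata.
    trailer = (zlib.crc32(data), version)
    return data, trailer

def read_block(block, expected_version: int) -> bytes:
    data, (crc, version) = block
    if zlib.crc32(data) != crc:
        raise IOError("corruption detected by checksum")
    if version != expected_version:
        raise IOError("stale data: dropped write detected by version number")
    return data

old = write_block(b"old contents", version=1)
# A rewrite to version 2 was dropped by the disk; metadata expects version 2.
try:
    read_block(old, expected_version=2)
except IOError as e:
    print(e)  # stale data: dropped write detected by version number
```

The checksum alone would have passed here, since the old data matches its old checksum; only the version mismatch exposes the dropped write.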
IBM Storage & SDI
IBM Elastic Storage Server
© Copyright IBM Corporation 2017
Advantages of the Elastic Storage Server (ESS)
1. Quick to deploy: hardware is pre-sized and pre-validated; just set up the host and network configuration through the setup wizard.
2. Pre-configured integrated solution: hardware and software are pre-loaded, tested, and validated in manufacturing; supported as a solution by IBM Service/Support.
3. Easy-to-use GUI for common tasks: system setup wizard, monitoring and administration, verifying hardware and software status.
13
© IBM Corporation 2020 14
IBM Elastic Storage Server (ESS): integrated scale-out data management for file and object data
The optimal building block for high-performance, scalable, reliable enterprise Spectrum Scale storage:
- Faster data access with the choice to scale up or scale out
- Easy-to-deploy clusters with a unified system GUI
- Simplified storage administration with IBM Spectrum Control integration

One solution for all your Spectrum Scale data needs:
- A single repository of data with unified file and object support
- Anywhere access with multi-protocol support: NFS 4.0, SMB, OpenStack Swift, Cinder, and Manila
- Ideal for big data analytics with full Hadoop transparency

Ready for business-critical data:
- Disaster recovery with synchronous or asynchronous replication
- Reliability and fast rebuild times using Spectrum Scale RAID's dispersed data and erasure code
- Five nines (99.999%) of availability
(Rack diagram: six ESS 5U84 storage enclosures.)
15
ESS model range
Spectrum Scale
ESS
Capacity is approximate, based on 8+2P with a single shared data-and-metadata pool. Performance is based on the standard IOR benchmark and assumes sufficient clients, network performance, etc. The performance shown includes a reduction from the peak performance measured in testing, as an allowance for variations in real-world deployments. Achievable performance will vary from the figures shown based on workload, network, and other factors outside of IBM's control.
(Diagram: each ESS building block pairs two S822L POWER servers with 24-bay FC 5887 and larger drive enclosures.)

- GLxS (disk): models GL1S, GL2S, GL4S, GL5S, GL6S; 1-6 × 84-disk enclosures; 0.25-6.8 PB usable (0.33-8.9 PB raw); 7-32 GB/s.
- GSxS (flash): models GS1S, GS2S, GS4S; 1-4 SSD enclosures; 9-37 GB/s; SSD: 60-530 TB usable, disk: 0.5-2.5 PB usable (smallest configurations shown: 60 TB-1.1 PB usable, 90 TB-1.5 PB raw).
- GHxx (hybrid): models GH12, GH14, GH22, GH24; 2-4 × 84-drive HDD enclosures plus 1-2 × 24-drive SSD enclosures; disk: 14-29 GB/s, SSD: 13-26 GB/s, up to 36 GB/s combined disk+SSD per ESS unit.
- GLxC (disk): models GL1C, GL2C, GL4C, GL5C, GL6C, GL8C; 1-8 × 106-disk enclosures; 0.78-9.1 PB usable (1-11.8 PB raw); 7-32 GB/s; up to 9 PB per rack.
16
Improved storage capacity and economy
- ESS GLxC models
- Use modern helium drives
- 57% higher storage density per enclosure
- 25%+ faster per enclosure*

Model   4U106 drawers   Drives   Raw capacity
GL1C    1               104      1.04 PB
GL2C    2               210      2.1 PB
GL4C    4               432      4.22 PB
GL6C    6               634      6.34 PB

>70 GB/s*, >8 PB of storage, >789 TB per rack unit: ultra-dense storage in a single 42U rack.
* Final benchmarks to be published
GL5C / GL8C: 11.8 PB in 42U with 14 TB drives
Summit & Sierra by the numbers
- Single node: 16 GB/s sequential read/write
- 50K creates/s per shared directory
- 1 TB/s of 1 MB sequential read/write
- 2.5 TB/s single-stream IOR
- 2.6 million 32K file creates/s
Together: more than 44,000 NVIDIA GPUs and 400 PB of IBM Storage.
The #1 and #2 most powerful supercomputers, built for AI. Nothing else like it. Not even close.
18
IBM Power Accelerated Computing Platform
Accelerated compute configurations for High-Performance Computing (HPC) and Artificial Intelligence / Deep Learning (AI/DL).
- Provides the ability to create your own installation based on the IBM CORAL installation, the world's most powerful and smartest supercomputer.
5146-GL6 with 6 JBOD enclosures
19
(Diagram: NSD Server 01 and NSD Server 02, each with three LSI 9206 adapters, attached to six DCS3700 disk enclosures, JBOD01-JBOD06, with 60 disk slots each. The disks form two recovery groups, RG01 and RG02, each with three declustered arrays, DA1-DA3, of 29 disks plus distributed spare capacity.)
20
21
IBM Elastic Storage Server: Hybrid Models
ESS hybrid models GH12, GH14, and GH24:
- Provide a combination of flash and HDD storage tiers in one ESS building block
- Combine all-flash performance with GLxS capacity
- Use cases:
  - A single system for both metadata services and high density
  - Use the flash as burst-buffer storage and have Spectrum Scale migrate data to high-density spinning disk
  - Use Scale to automatically manage data location between flash and disk based on heat maps and policy
  - Handle multiple kinds of workloads, such as video and analytics, in the same environment

Model GL1S: 1 enclosure (9U); 82 NL-SAS, 2 SSD
38 GB/s*
Model GH14: 1 × 2U24 SSD enclosure + 4 × 5U84 HDD enclosures; 334 NL-SAS, 24 SSD
40 GB/s*
Model GH24: 2 × 2U24 SSD enclosures + 4 × 5U84 HDD enclosures; 334 NL-SAS, 48 SSD
Model GH12: 1 × 2U24 SSD enclosure + 2 × 5U84 HDD enclosures; 166 NL-SAS, 24 SSD
18 GB/s*
* Estimate of performance aggregated across SSD and HDD; assumes EDR InfiniBand connections.
(Rack diagram: ESS 5U84 storage enclosures.)
IBM Systems
ESS Easy Upgrade: Add Low-Priced Capacity to Existing ESSes
Business issues:
- Keeping ahead of data growth
- Additional capacity
- Additional performance
- Constrained budgets
- New workloads
- Avoiding downtime
- Fitting within data-center facility constraints
The solution: increase performance and capacity.
- Increase from 1 to 2, 2 to 4, or 4 to 6 HDD enclosures; for example, going from 2 to 4 enclosures doubles both capacity and bandwidth
- Increase from 1 to 2 or 2 to 4 SSD enclosures, doubling capacity and bandwidth
- Plug and go: no downtime, the data is automatically rebalanced onto the new storage, and there is no need to dump and reload
- New storage enclosures attach to the Power servers in existing ESSes
- Low prices: no additional ESS Power servers or networks, no additional floor space
ESS Easy Upgrades
From To
GS1S w/24 SSDs GS2S w/48 SSDs
GS2S w/48 SSDs GS4S w/96 SSDs
GL1S w/82 HDDs GL2S w/166 HDDs
GL2S w/166 HDDs GL4S w/334 HDDs
GL4S w/334 HDDs GL6S w/502 HDDs
IBM value:
- Cost effective: lower prices than in initial ESSes, improved ESS PLM, reuse of existing ESS infrastructure
- Non-disruptive upgrades
Clients:
- Existing ESS customers needing more storage capacity or performance
Qualifying questions:
- Are your existing ESSes filling up?
- Do you need more capacity?
- Do you need higher performance?
- Do you need to avoid user disruption?
- Do you have space in your existing ESS racks?
Lower prices compared to adding new ESS systems.
24
IBM Elastic Storage System 3000: NVMe Flash for AI & Big Data Workflows
An all-new storage solution:
- Integrated scale-out advanced data management with end-to-end NVMe storage
- Containerized software for ease of installation and update
- Hours, not days, for initial configuration
- Fast and easy updates and scale-out expansion
- Performance, capacity, and ease of integration for AI and big-data workflows
© Copyright IBM Corporation 2020
IBM Storage and SDI
ESS 3000 Overview
- Self-contained storage building block for Spectrum Scale
- Use as a capacity and performance brick to build or expand a cluster
- Simple, containerized install and upgrade
- Leverages proven FS9100 technology
- NVMe flash drives (no FCM in the current release)
- Processing, NVMe storage, network, and RAS in dense 2U packaging

Target:
- AI and big-data workloads on POWER9, x86, and NVIDIA DGX
- Existing Scale customers
- Hot data tier in front of disk, object, or tape
- IOPS-heavy work, e.g. snapshots, ILM/HSM, and metadata for massive numbers of files
ESS 3000 Details
- Dual active-active controllers running Spectrum Scale
- 384 GB memory (192 GB per canister), up to 768 GB per controller
- InfiniBand EDR, 100 GbE, or mixed; 100 GbE ports also support 40 GbE and 10 GbE
- 4 to 12 ports per ESS (2 to 6 per canister)
- 12 or 24 × 2.5-inch NVMe flash drives (no FCM in the initial release): 1.92*, 3.84, 7.68, or 15.36 TB
- 15 TB to 260 TB storage (8+2P, usable filesystem capacity, approx.)
- Performance scales linearly as units are added (for most workloads):
  - 40 GB/s sequential read (24 drives, IOR, sequential, 8 MiB filesystem block size)
  - 35 GB/s sequential write (24 drives, IOR, sequential, 8 MiB filesystem block size)
  - 12 drives: 31 GB/s read, 18 GB/s write
* 1.92 TB drives are read-intensive (1 DWPD); not freely configurable, contact an SME for approval.
Drive and solution positioning
ESS 3000 solutions need:
- An ESS management server (must be POWER)
- Usually 2 or more protocol/gateway servers (x86 or POWER)
Configuration tiers:
- Lowest cost: up to 15 TB*, 12 × 1.92 TB; only for read-intensive work, slightly lower street price than Entry
- Entry: up to 31 TB*, 12 × 3.84 TB
- Mid: up to 65 TB*; highest performance with 24 × 3.84 TB, future expansion** with 12 × 7.68 TB
- Mid: up to 131 TB*; highest performance with 24 × 7.68 TB, future expansion** with 12 × 15.36 TB
- High: up to 263 TB*, 24 × 15.36 TB
- Higher: use more ESS 3000 units!
* Usable (filesystem, 8+2P, approx.)
** Expansion from 12 to 24 drives is intended but not announced or committed to.
Disk Size  Number    Raw DA Capacity  Block  RAID  100% RAID            Efficiency
           of Disks  for User Vdisks  Size   Code  File System Size     Theoretical  Effective
---------  --------  ---------------  -----  ----  -------------------  -----------  ---------
1.92 TB 12 17660 GiB 4 MiB 8+2p 13877 GiB ( 15 TB) 80.0% 78.6%
1.92 TB 12 17660 GiB 4 MiB 8+3p 12618 GiB ( 14 TB) 72.7% 71.4%
3.84 TB 12 37240 GiB 4 MiB 8+2p 29265 GiB ( 31 TB) 80.0% 78.6%
3.84 TB 12 37240 GiB 4 MiB 8+3p 26622 GiB ( 29 TB) 72.7% 71.5%
7.68 TB 12 76400 GiB 4 MiB 8+2p 60049 GiB ( 64 TB) 80.0% 78.6%
7.68 TB 12 76400 GiB 4 MiB 8+3p 54636 GiB ( 59 TB) 72.7% 71.5%
15.36 TB 12 154720 GiB 4 MiB 8+2p 121674 GiB (131 TB) 80.0% 78.6%
15.36 TB 12 154720 GiB 4 MiB 8+3p 110584 GiB (119 TB) 72.7% 71.5%
1.92 TB 24 37241 GiB 4 MiB 8+2p 29297 GiB ( 31 TB) 80.0% 78.7%
1.92 TB 24 37241 GiB 4 MiB 8+3p 26622 GiB ( 29 TB) 72.7% 71.5%
3.84 TB 24 76402 GiB 4 MiB 8+2p 60104 GiB ( 65 TB) 80.0% 78.7%
3.84 TB 24 76402 GiB 4 MiB 8+3p 54629 GiB ( 59 TB) 72.7% 71.5%
7.68 TB 24 154724 GiB 4 MiB 8+2p 121793 GiB (131 TB) 80.0% 78.7%
7.68 TB 24 154724 GiB 4 MiB 8+3p 110703 GiB (119 TB) 72.7% 71.5%
15.36 TB 24 311368 GiB 4 MiB 8+2p 245177 GiB (263 TB) 80.0% 78.7%
15.36 TB 24 311368 GiB 4 MiB 8+3p 222745 GiB (239 TB) 72.7% 71.5%
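The efficiency columns in the table can be cross-checked with a few lines. The theoretical figure is the data fraction of the erasure code; the effective figure comes from dividing the table's own usable and raw numbers (slightly lower due to metadata and spare reservations):

```python
# Example row: 1.92 TB x 12 drives, 8+2p -> 17660 GiB raw, 13877 GiB usable.
def theoretical_efficiency(data_strips, parity_strips):
    return data_strips / (data_strips + parity_strips)

print(f"8+2p theoretical: {theoretical_efficiency(8, 2):.1%}")  # 80.0%
print(f"8+3p theoretical: {theoretical_efficiency(8, 3):.1%}")  # 72.7%
print(f"8+2p effective:   {13877 / 17660:.1%}")                 # 78.6%
```

The same ratios hold, to within rounding, for every row of the table.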
29
File Object Design Studio (BP and IBM only)
30
Licensing Options Summary
IBM Spectrum Scale editions: Data Access Edition and Data Management Edition
Pricing metric: per "usable" TB (a per-disk option is available with ESS); clients are free in the per-TB license.

Data Access Edition:
- Multi-protocol scalable file service with simultaneous access to a common set of data
- Data access with a global namespace, a massively scalable file system, quotas and snapshots, data integrity and availability, and filesets
- Simplified GUI management
- Improved efficiency with QoS and compression
- Creation of optimized tiered storage pools based on performance, locality, or cost
- Simplified data management with information lifecycle management (ILM) tools, including policy-based data placement and migration
- Worldwide data access using AFM asynchronous replication

Data Management Edition additionally includes:
- Asynchronous multi-site disaster recovery
- Hybrid cloud (Transparent Cloud Tiering)
- Data protection with native encryption and secure erase (NIST compliant, FIPS certified)
- File audit logging
- Watch folder (iNotify)
Spectrum Scale: Unified Software-Defined Storage
31
Access protocols: NFS, POSIX, SMB, iSCSI, Object
ESS Protocol Server
- S822L, 2 sockets
- RAM: 128 GB default, max 256 GB (bigger configurations will be allowed later)
- Requires 1 × 10 Gbps Ethernet (this restriction will be removed later)
32
Can I run other workloads on protocol nodes?
Yes: pick one of the following and configure the ESS protocol node with 256 GB RAM.
- Scale quorum/manager
- Spectrum Protect
- Spectrum Archive
- SKLM
- TCT services (tier to COS or AWS)
31
NVIDIA DGX and IBM Spectrum Scale
- NVIDIA DGX & IBM Spectrum Scale certified
- DGX-1 / DGX-2 and IBM Storage integrated solution
- Reference architecture with published AI performance benchmarks
- GTM partnership and promotion with NVIDIA and Mellanox, sold through select joint IBM & NVIDIA Business Partners ("meet in the channel")
34
"Storage AI Performance"
(Diagram: flash storage feeding NVIDIA DGX over NFS.)
Practically all other vendors in the AI space work over NFS!
NVIDIA DGX with IBM Spectrum Scale
35
(Diagram: DGX and IBM POWER servers, both running Spectrum Scale.)
- NVIDIA DGX with Spectrum Scale
- DGX with IBM Spectrum Scale: 60 GB/s for DGX-1, 90 GB/s for DGX-2
- ESS Flash and ESS 3000
IBM Spectrum Storage for AI with NVIDIA DGX Systems
A scalable, software-defined infrastructure powered by IBM Spectrum Scale, ESS 3000, and NVIDIA DGX systems for ML/DL workloads.
- Certified solution offering with a reference architecture and published performance benchmarks
- "Meet in the channel" delivery model, fulfilled by Business Partners
- Extensible for the AI data pipeline: support for any tiered storage, including cloud and tape
- Composable to grow as needed: up to 9 DGX-1 servers (72 GPUs) in a rack; storage scales out from a single 280 TB ESS 3000 node to 8 exabytes and a yottabyte of files
- High performance to feed the GPUs: three ESS 3000 units with NVMe deliver 120 GB/s of throughput in a rack; over 40 GB/s sustained random read per 2U
IBM Spectrum Scale Erasure Code Edition
37
Hardware Architecture
SDS Erasure Code Edition:
- Network RAID across disk-rich commodity servers
- Tolerates the concurrent failure of an arbitrary pair of servers (or 3 servers with an 8+3p erasure code) plus disks
ESS:
- Twin-tailed disks with dual servers provide very high availability
- However, if both the master and backup servers fail, data becomes unavailable

Scale-out Cluster with Multiple Building Blocks
SDS Erasure Code Edition:
- Commodity-server-based ECE cluster as the building block
- Multiple ECE clusters in the same Scale cluster
ESS:
- Twin-tailed disks, dual-server building block
- Multiple ESS building blocks in the same Scale cluster
IBM FlashSystem Family
40
SDS with IBM Storwize V5000 / IBM V5010
Enterprise Integrated Model: unify and parallelize storage silos with IBM FlashSystem.
SCM: Storage Class Memory
42
- Persistent storage with ultra-low latency, sitting between DRAM and flash storage
- Extended Memory (planned) and Block Aperture modes
- SCM cuts cache-read-miss latency from roughly 360 µs to 140 µs
- SCM drives as a tier level, with Easy Tier support
- Intermixing with NVMe flash drives
- Capacities: 375 GB, 750 GB, 800 GB, 1.6 TB
- DRAID 5 (4 SCM drives) or TRAID 10 (2 SCM drives)
Enterprise-Class Storage Made Simple for Hybrid Multicloud
IBM FlashSystem Family (announced February 11, 2020)
FlashSystem 5010 | FlashSystem 5030 | FlashSystem 5100 | FlashSystem 7200 | FlashSystem 9200
IBM Spectrum Virtualize Software: block storage features:
Encryption
Data Reduction Pools
Scale-out Clustering
HyperSwap High Availability
IBM Storage Insights (AI-based Monitoring, Predictive Analytics, and Proactive Support)
NVMe Flash and FC-NVMe Host Connections
External Storage Virtualization of more than 450 storage systems
High Performance Compression and Encryption with IBM FCM
Storage Class Memory
Easy Tier AI-driven automated tiering
3-Site Replication
VMware and Red Hat OpenShift Container Integration
Local and remote replication (snapshots, DR, copy/migrate to cloud)
Transparent Data Migration
FlashSystem 9200R
44
https://ibm.biz/Bd2bBd
Thank you