TRANSCRIPT
IBM Spectrum Virtualize 8.3.1 Technical Update
Manuel Schweiger, Master Certified IT Specialist
Agenda
• Product Positioning/Hardware
• FS9200 Rack/Controller Upgrade Program
• New Drives (NVMe, FCM2, SCM)
• STAT Tool in GUI
• Spectrum Virtualize for Public Cloud
• Performance
• 3 Site Coordinated Metro/Global Mirror
• DRAID Expansion
• Volume Protection
• HyperSwap Volume Expansion
• Miscellaneous Enhancements
  • SNMPv3 – 8.3.0.1
  • Enhanced Audit Logging – 8.3.0.1
  • NVMe-oF/RDMA Enhancements
  • Secure Data Deletion
  • Licensing/SCU Changes
2
Cloud
Storage made Simple
Leadership in all-flash arrays at each price and performance target
Common data management across portfolio and to the cloud
Extend the benefits of IBM Storage to your entire storage estate
↑ Innovation  ↓ Complexity  ↓ Cost
Entry Midrange High-End
FlashSystem: Single client experience
3
One Platform
Entry Enterprise storage, Midrange Enterprise storage, High-end Enterprise storage, and storage software for Public Cloud
Operations and storage made simple with Storage Virtualization
4
IBM FlashSystem Family
Storage made simple for hybrid multicloud
Entry Enterprise: FlashSystem 5000, FlashSystem 5100
Midrange Enterprise: FlashSystem 7200
High-End Enterprise: FlashSystem 9200, FlashSystem 9200R
Hybrid Multicloud
IBM Spectrum Virtualize Software
IBM Storage Insights (AI-based Monitoring, Predictive Analytics, and Proactive Support)
5
IBM Storwize and FlashSystem Families
4Q19
Entry Enterprise: Storwize V5000E, Storwize V5100
Midrange Enterprise: Storwize V7000 Gen3
High-End Enterprise: FlashSystem 9110, FlashSystem 9150
6
IBM FlashSystem Family
1Q20
Storage made simple for hybrid multicloud
Entry Enterprise: FlashSystem 5000, FlashSystem 5100
Midrange Enterprise: FlashSystem 7200
High-End Enterprise: FlashSystem 9200, FlashSystem 9200R
New Hardware!
7
VMs · Hybrid Cloud · Containers · Snapshots · Tape
Storage for Hybrid Multicloud Storage for AI & Big Data
IBM Award-Winning Storage Portfolio
TS7700
FlashSystem 9200
FlashSystem 7200
All-Flash and Hybrid Systems
SAN Volume Controller
SVC
FlashSystem 5000 DS8900F
FlashSystem 9200R
Cloud Object Storage
Elastic Storage System
Storage for Z
Networking
Suite
Cyber Resiliency & Modern Data Protection
8
IBM FlashSystem Family: Single family, all-flash and hybrid
IBM FlashSystem Family
Previously → 1Q20 (all-flash / hybrid flash)
Storwize V5010E → FlashSystem 5010 / FlashSystem 5010H
Storwize V5030E → FlashSystem 5030 / FlashSystem 5030H
Storwize V5100, V5100F → FlashSystem 5100 / FlashSystem 5100H
Storwize V7000 Gen3 → FlashSystem 7200 / FlashSystem 7200H
FlashSystem 9100 Family (FlashSystem 9110, FlashSystem 9150) → FlashSystem 9200 Family (FlashSystem 9200, FlashSystem 9200R)
Entry Enterprise: 5010, 5030, 5100 · Midrange Enterprise: 7200 · High-end Enterprise: 9200, 9200R
9
IBM FlashSystem 5010 (2072-2H2, 2H4)
Callouts: Onboard 12Gb SAS (disk expansion) · Management and I/O port (1GbE) · LEDs · Battery (canister) · Technician port (may be used for 1GbE I/O after setup) · USB port · Optional HIC
10
IBM FlashSystem 5030 (2072-3H2, 3H4)
Callouts: Onboard 12Gb SAS (disk expansion) · Dedicated technician port · LEDs · Battery (canister) · Management and I/O ports (10GbE) · USB port · Optional HIC
11
IBM FlashSystem 5100 (2077/2078-4H4, UHB)
Callouts: SAS expansion (disk expansion) · 10GbE ports (4) · Tech port · USB ports (2) · 2kW PSU (2) · Optional interface card (1)
One 8-core Skylake processor per canister
Up to 576GB cache per controller enclosure
12
IBM FlashSystem 7200 (2076-824, U7C)
Callouts: Interface card slots (3) (up to 24 ports per 2U) · 10GbE ports (4) · Tech port · USB ports (2) · 2kW PSU (2) · Optional SAS expansion
Dual 8-core Cascade Lake processors per canister
Up to 1.5TB cache per controller enclosure
13
IBM FlashSystem 9200 (9846/9848-AG8, UG8)
Callouts: Interface card slots (3) (up to 24 ports per 2U) · 10GbE ports (4) · Tech port · USB ports (2) · 2kW PSU (2) · Optional SAS expansion
Dual 16-core Cascade Lake processors per canister
Up to 1.5TB cache per controller enclosure
14
Internal Diagram
[Diagram: dual Cascade Lake-SP processors with DDR4 memory linked by UPI and a PCIe switch; Lewisburg PCH with Quick Assist Technology; TPM2; BBU; dual boot SSDs; cross-card communication over PCIe-3 x16; PCIe-3 x2 per drive to 24 dual-ported NVMe drives; 10GbE, DMI, USB, and technician ports]
Note: FlashSystem x1xx models use Skylake processors; FlashSystem x2xx models use Cascade Lake processors. The FlashSystem 5100 has only a single processor per canister.
15
SAS Expansions
16
2U 3.5” x 12 SAS enclosure (Hybrid only)
2U 2.5” x 24 SAS enclosure
5U 3.5” x 92 SAS enclosure
Midrange Enterprise Model Comparison (per system)
Storwize V7000 Gen3: CPU 8-core 1.7GHz x4 (Skylake); memory (min/max) 128GiB / 1.1TiB; licensing honor-based by enclosure, key-based encryption; warranty 1 year, 9x5 NBD; HBA slots: slot 3 SAS expansion, slots 1 & 2 other HICs
FlashSystem 7200: CPU 8-core +HT 2.1GHz x4 (Cascade Lake); memory (min/max) 256GiB / 1.5TiB; licensing all-inclusive, SCU for virtualized storage, key-based encryption; warranty 1 year, 9x5 NBD; HBA slots: optional SAS or HIC in slot 3
17
High-End Enterprise Model Comparison (per system)
FlashSystem 9150: CPU 14-core +HT 2.2GHz x4 (Skylake); memory (min/max) 128GiB / 1.5TiB
FlashSystem 9200: CPU 16-core +HT 2.3GHz x4 (Cascade Lake); memory (min/max) 256GiB / 1.5TiB
18
FlashSystem Memory Options (per enclosure; memory / memory bandwidth)
9150: Base 1: 128 GB / 33% · Upgrade 1: 256 GB / 66% · Upgrade 2: 384 GB / 100% · Upgrade 3: 1024 GB / 100% · Base 2: 768 GB / 100% · Upgrade 4: 1.5 TB / 100%
7200: Base 1: 256 GB / 33% · Upgrade 1: 768 GB / 100% · Upgrade 2: 1.5 TB / 100%
9200: Base 1: 256 GB / 33% · Upgrade 1: 768 GB / 100% · Upgrade 2: 1.5 TB / 100%
19
Adapter Options
Storwize V7000 Gen3: Slot 1: 4x16Gb FC, 4x32Gb FC, 2x25Gb Ethernet (RoCE), or 2x25Gb Ethernet (iWARP) · Slot 2: same options as slot 1 · Slot 3: 2x12Gb SAS expansion
9150, 7200, 9200: Slot 1: 4x16Gb FC, 4x32Gb FC, 2x25Gb Ethernet (RoCE), or 2x25Gb Ethernet (iWARP) · Slot 2: same options as slot 1 · Slot 3: same options as slot 1, or 2x12Gb SAS expansion
Port maximums per enclosure: Storwize V7000 Gen3: 16 Fibre Channel, 8 Ethernet, 4 SAS expansion · 9150/7200/9200: 24 Fibre Channel, 12 Ethernet, 4 SAS expansion
20
Quick Comparison of FlashSystem Family
FlashSystem 5010: all-flash and hybrid; max cache per control enclosure 64GB; 2 host adapter slots per control enclosure; no SCM support; no NVMe SSDs/IBM FCMs; no NVMe-oF; SAS SSDs and SAS HDDs; SAS devices in control enclosure and expansions; max physical raw capacity in 2U control enclosure 720TB; max usable/effective capacity 720TB; no clustering; no data reduction; customer set-up; CSI and multicloud support
FlashSystem 5030: all-flash and hybrid; 64GB max cache; 2 host adapter slots; no SCM; no NVMe SSDs/FCMs; no NVMe-oF; SAS SSDs and HDDs; SAS in control enclosure and expansions; 720TB max raw in 2U; up to 3.6PB* effective; 32PB max with clustering (2-way); software DRP data reduction; customer set-up; CSI and multicloud support
FlashSystem 5100: all-flash and hybrid; 576GB max cache; 2 host adapter slots; SCM support; NVMe SSDs and IBM FCMs; NVMe-oF; SAS SSDs and HDDs; SAS in expansions only; 921.6TB max raw in 2U; 1.8PB** or up to 4.6PB* effective; 32PB max with clustering (2-way); data reduction via FCM (no impact) and DRP (hardware assist); customer set-up; CSI and multicloud support
FlashSystem 7200: all-flash and hybrid; 1.5TB max cache; 6 host adapter slots; SCM support; NVMe SSDs and IBM FCMs; NVMe-oF; SAS SSDs and HDDs; SAS in expansions only; 921.6TB max raw in 2U; 1.8PB** or up to 4.6PB* effective; 32PB max with clustering (4-way); data reduction via FCM (no impact) and DRP (hardware assist); customer set-up, ECS optional; CSI and multicloud support
FlashSystem 9200: all-flash only; 1.5TB max cache; 6 host adapter slots; SCM support; NVMe SSDs and IBM FCMs; NVMe-oF; SAS SSDs only (no HDDs); SAS in expansions only; 921.6TB max raw in 2U; 1.8PB** or up to 4.6PB* effective; 32PB max with clustering (4-way); data reduction via FCM (no impact) and DRP (hardware assist); IBM install, technical advisor and ECS; CSI and multicloud support
FlashSystem 9200R: all-flash only; 1.5TB x4 max cache; 6 x4 host adapter slots; SCM support; NVMe SSDs and IBM FCMs; NVMe-oF; SAS SSDs only (no HDDs); SAS in expansions only; 921.6TB x4 max raw; 1.8PB** x4 or up to 4.6PB* x4 effective; 32PB max with clustering (4-way); data reduction via FCM (no impact) and DRP (hardware assist); IBM install, technical advisor and ECS; CSI and multicloud support
* With 5:1 data reduction via DRP   ** With 2:1 data reduction via FCM
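As an illustrative check of the footnotes above: a 2U control enclosure at 921.6TB raw gives roughly 921.6 x 2 ≈ 1.8PB effective at the self-certified 2:1 FCM compression, and roughly 921.6 x 5 ≈ 4.6PB effective at 5:1 data reduction via DRP, which is where the table's 1.8PB and 4.6PB figures come from.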
IBM SAN Volume Controller
4Q19: SV1
1Q20: SV1, SV2, SA2
22
IBM SAN Volume Controller (SV2/SA2)
Callouts: Interface card slots (3) (up to 24 ports per 2U) · 10GbE ports (4) · Tech port · USB ports (2) · 2kW PSU (2)
Dual 16-core (SV2) or 8-core (SA2) processors
Up to 768GB cache
23
SAN Volume Controller Model Comparison (per node)
SV1: CPU 8-core 3.2GHz x2 (Broadwell EP); memory (min/max) 64GiB / 256GiB; boot drives 2x external; battery 2x external; compression: 0-2 compression cards, RACE or DRP; FC attach: quad-port 16Gb; HIC slots 4
SV2: CPU 16-core 2.3GHz x2 (Cascade Lake); memory (min/max) 128GiB / 768GiB; boot drives 2x internal; battery 1x internal; compression: built in (equivalent to 5 SV1 cards), DRP only; FC attach: quad-port 32Gb or quad-port 16Gb; HIC slots 3
SA2: CPU 8-core 2.1GHz x2 (Cascade Lake); memory (min/max) 128GiB / 768GiB; boot drives 2x internal; battery 1x internal; compression: built in (equivalent to 2 SV1 cards), DRP only; FC attach: quad-port 32Gb or quad-port 16Gb; HIC slots 3
Increased FC bandwidth + double PCIe bandwidth + huge cache increase = improved latency and IOPS
24
IBM SVC Memory (per node; memory / memory bandwidth)
SV1: Minimum 64 GB / 33% · Upgrade 1: 128 GB / 66% · Upgrade 2: 192 GB / 66% · Upgrade 3: 256 GB / 66%
SV2: Minimum 128 GB / 33% · Upgrade 1: 384 GB / 100% · Upgrade 2: 768 GB / 100%
SA2: Minimum 128 GB / 33% · Upgrade 1: 384 GB / 100% · Upgrade 2: 768 GB / 100%
25
SAN Volume Controller Adapter Options
SV1: Slot 1: empty · Slot 2: 2x12Gb SAS expansion · Slots 3, 4, 6, 7: 4x16Gb FC, 4x10Gb Ethernet, or 2x25Gb Ethernet (RoCE or iWARP)* · Slots 5, 8: 12Gb SAS expansion or compression
SV2/SA2: Slots 1-3: 4x16Gb FC, 4x32Gb FC, or 2x25Gb Ethernet (RoCE or iWARP)* · remaining slots: n/a
Port maximums: SV1: 2x 12Gb SAS expansion ports, 2 compression cards, 16 Fibre Channel ports, 8 Ethernet ports · SV2/SA2: 12 Fibre Channel ports, 6 Ethernet ports
* iWARP and RoCE are two different interface cards
26
Clustering Rules
• SV2/SA2 can cluster with any SV1 and DH8 SVC clusters running the minimum level of software (8.3.1), up to 4 I/O groups
• Storwize V7000 (models 524, 624 and 724) can cluster with FlashSystem 7200, 9100, and 9200 running the appropriate minimum level of software, up to 4 I/O groups
• FlashSystem 5100 can cluster with Storwize V5100, up to 2 I/O groups
• FlashSystem 5030 can cluster with Storwize V5030, up to 2 I/O groups
• FlashSystem 5010 supports only a single I/O group
When clustered, the name of the system changes in the following priority:
• Storwize -> FlashSystem
• V7000 -> 7200 -> 9100 -> 9200
27
Hardware Upgrades
• SVC hardware upgrade to SV2/SA2 is supported provided the following prerequisites are met:
  • No SAS-attached enclosures
  • No RACE compressed volumes (compressed volumes in a standard pool)
  • Memory configured must be >= existing I/O group memory
  • Spare nodes can only provide spares for exactly matching memory and port configurations that are equal to or greater than existing nodes
• In-place canister upgrade is only supported between:
  • Storwize V5010 -> V5030
  • FlashSystem 5010 -> 5030
  • FlashSystem 9110 -> FlashSystem 9150
28
FlashSystem 9200R
IBM FlashSystem 9200R: A Clustered FlashSystem 9200 Packaged in a Rack
• 2, 3 or 4 FlashSystem 9200s, clustered
  • 9848 AG8 (no UG8)
  • Branded as 9202R, 9203R, 9204R, per number of 9200s
• Optional expansions
  • 9848 AFF 2U 24-drive and 9848 A9F 5U 92-drive options, SSD only
• Dedicated Fibre Channel backbone
  • Isolated from host and copy services traffic
  • Choice of 2 each of 32Gb 8960-F24 Brocade or 8977-T32 Cisco switches
• Packaged in a 7965-S42 rack
  • FlashSystem branding
  • With dual PDUs appropriate for the selected system
• Assembled and delivered by IBM
  • Manufacturing places components in the rack and ships to the customer
  • An IBM Lab Services engagement performs final signal cabling and integration into the customer SAN if required
30
If It Looks Like A Rack And It Smells Like a Rack…
WHAT THIS ISN'T
• It is not a single MTM product
• Because each component is an individual MTM, they're all supported and managed separately
• There's no unified management interface
  – The GUI is a 9200 GUI (not a 9200R GUI)
  – The 9200 GUI will surface most/all issues
  – Switches have a separate management interface
WHAT THIS IS
• A FlashSystem 9200R
  – Announced 11th Feb 2020, GA 20th Mar 2020
  – A 9200 cluster with optional expansions, a dedicated Brocade or Cisco FC network, installed in an S42 rack
  – 9202R, 9203R, 9204R in spec sheet compares, ordering, etc.
• Has unique FlashSystem branding
• A bundle of multiple MTMs
  – Econfig will allow ordering as a 920xR, policing the config where appropriate
  – Rack placement will be consistent and predictable
  – Documentation aims for consistent configuration
31
FlashSystem 9200 Ordering Rules
2, 3 or 4 9848-AG8s (FlashSystem 9200)
– 1 32Gb FC card per canister (for the FC backbone), with 5-metre FC cables
– 1 SAS card per canister, with cables, if required
– Minimum cache of 768GB (FC ACGJ); defaults to max cache
– Appropriate power cords for rack PDUs
– Feature codes for manufacturing:
  • AL0R is a charge-for code on the first AG8
  • AL01, AL02, AL03, AL04 indicating rack location
  • FSRS indicating FlashSystem rack product
  • 4651 indicating inclusion in rack #1
  • 0983 TAA indicator, which will also drive selection on other components
– Remaining feature codes can be applied per existing product rules
One 9848-AFF or A9F expansion per AG8
– Up to 4 AFFs (with 4 AG8s), or up to 2 A9Fs (with 2 or more AG8s)
– Cannot mix and match AFF with A9F
– Appropriate power cords for rack PDU; A9F requires a higher-capacity PDU with a C19/20 cable
– Feature codes for manufacturing:
  • AL01, AL02, AL03, AL04 indicating rack location
  • FSRS indicating FlashSystem rack product
  • 4651 indicating inclusion in rack #1
– Remaining feature codes can be applied per existing product rules
32
Switch and Rack Ordering Rules
8960-F24 Brocade and 8977-T32 Cisco switches
– 2 each of either Brocade or Cisco
– All ports enabled and 32Gb SFP+s supplied
– Appropriate power cord for rack PDUs
– Feature codes for manufacturing: FSRS indicating FlashSystem rack product; 4651 indicating inclusion in rack #1
– No flexibility in configuration; feature codes will be generated and policed by econfig
7965-S42 “Constellation” rack
– New FlashSystem branded door
– IBM branded sides
– Back door
– Dual PDUs depending on rack configuration, generated programmatically to support the selected components
– 3 year warranty
– Blanking plates added as appropriate
– Feature codes for manufacturing:
  • RTSM indicating route to Storage manufacturing line
  • FSRS indicating FlashSystem rack product
  • 4651 indicating rack #1
  • ERCH indicating 8960-F24
  • ERCJ indicating 8977-T32
  • ERCK indicating 9848-A9F
  • ERCL indicating 9848-AFF
  • ERCM indicating 9848-AG8
– User must choose power cords, but no further flexibility
33
Rack Placement Rules
Switches are located at the top of the rack– This avoids the need for thermal ducting
Room for customer expansion
PDUs are located in the sides of the rack
AG8s are located in the same position, lower in the rack to support shipping without tipping issues
Expansion enclosures are in the same 10U space, ensuring A9F low in the rack
Bottom 1U is left free to allow for cabling
34
Some Final Points
This is “just” a 9200 cluster in a rack - don't overthink it
– Reuses as much of the existing documentation as possible
– Some specific 9200R topics in the Knowledge Center to ensure consistent cabling and configuration
– Specific 9200R customer requirements in the TDA
A customer can add to the system post-purchase and deviate from the initial configurations
– Order additional 9848 components using econfig
– MES additional hardware using econfig
– 9848 components are IBM install, but the customer will have the same responsibilities as they have today
  • i.e. there's no Lab Services person doing the Fibre Channel configuration
  • And the customer is responsible for ensuring appropriate power and cooling
– Regardless, any configuration must conform to Spectrum Virtualize configuration rules
A customer (or Business Partner) can order the components separately
– Greater flexibility over the configurations offered
– Delivered in separate boxes, unassembled, but as a tied order to arrive together
– The SSR will perform the 9848 component installs when the rack is sited; the customer or BP will need to finish signal cabling and final configuration
35
IBM FlashWatch
Technical Update
February 2020
It’s the confidence to purchase, own and upgrade your IBM Storage
Acquisition
High Availability Guarantee: Proven 99.9999% availability; optional 100% commitment when using HyperSwap
Data Reduction Guarantee: 2:1 self-certified; up to 5:1 with workload profiling
All-inclusive Licensing: All storage function included in the licensing cost for internal storage
Operation
Comprehensive Care: Up to 7 years of 24x7 support, with Technical Advisor, enhanced response times and managed code upgrades
Cloud Analytics: Storage Insights included at no extra cost to proactively manage your environment
Flash Endurance Guarantee: Flash media is covered for all workloads whilst under warranty or maintenance
Migration
IBM Flash Momentum Storage Upgrade Program: Replace your controller and storage every 3 years with full flexibility
Cloud-like Pricing: Storage Utility pricing has monthly payments for just the storage you use
No Cost Migration: 90 days of no-cost data migration from over 500 storage controllers, IBM and non-IBM
Replaces all previous “Controller Upgrade”, “Peace of Mind” and “FlashWatch” programs, commencing with purchases made on or after February 11th, 2020. Program applicability varies by product; check the “FlashWatch Product Matrix”.
What is IBM FlashWatch?
37
Migration | IBM Flash Momentum Storage Upgrade Program
What? Maintain control over your environment with a fully flexible offering that refreshes your storage and controller with no lock-in
• Finance a FlashSystem over a 3-year term
• Decision period: just before 3 years, you decide whether to keep your FlashSystem, refresh, or simply walk away; refresh your FlashSystem for the same monthly price or less, or up/down-size your system to meet your needs
• Migration period: a 3-month period of transition with no dual billing
• Also available with cloud-like pricing
38
IBM FlashWatch Product Matrix
SVC: HA Guarantee using HyperSwap: Yes · Data Reduction Guarantee: – · All-inclusive Licensing: – · Comprehensive Care (ECS): Yes, included with 2147-SA2, SV2 · Cloud Analytics with Storage Insights: Yes · Flash Endurance Guarantee: – · IBM Flash Momentum Storage Upgrade Program: – · On-Demand Pricing (Storage Utility): – · No Cost Migration: Yes
FlashSystem 5000/H: HA Guarantee using HyperSwap: 5030 only · Data Reduction Guarantee: 5030 only · All-inclusive Licensing: – · Comprehensive Care (ECS): alternative optional services available · Cloud Analytics: Yes · Flash Endurance Guarantee: Yes · Flash Momentum: Yes (2072-2H2, 2H4, 3H2, 3H4, 12G, 24G, 92G) · On-Demand Pricing: – · No Cost Migration: Yes
FlashSystem 5100/H: HA Guarantee: Yes · Data Reduction Guarantee: Yes · All-inclusive Licensing: – · Comprehensive Care (ECS): alternative optional services available · Cloud Analytics: Yes · Flash Endurance Guarantee: Yes · Flash Momentum: Yes (2078-4H4, UHB, 12G, 24G, 92G) · On-Demand Pricing: Yes (2078-UHB) · No Cost Migration: Yes
FlashSystem 7200/H: HA Guarantee: Yes · Data Reduction Guarantee: Yes · All-inclusive Licensing (exc. external virtualisation, encryption): Yes · Comprehensive Care (ECS): Yes, optional service · Cloud Analytics: Yes · Flash Endurance Guarantee: Yes · Flash Momentum: Yes (2076-824, U7C, 12G, 24G, 92G) · On-Demand Pricing: Yes (2076-U7C) · No Cost Migration: Yes
FlashSystem 9200: HA Guarantee: Yes · Data Reduction Guarantee: Yes · All-inclusive Licensing (exc. external virtualisation, encryption): Yes · Comprehensive Care (ECS): Yes, included with 9848-AG8, UG8 · Cloud Analytics: Yes · Flash Endurance Guarantee: Yes · Flash Momentum: Yes (9848-AG8, UG8, AFF, A9F) · On-Demand Pricing: Yes (9848-UG8) · No Cost Migration: Yes
FlashSystem 9200R: HA Guarantee: Yes · Data Reduction Guarantee: Yes · All-inclusive Licensing (exc. external virtualisation, encryption): Yes · Comprehensive Care (ECS): Yes, included with 9848-AG8, UG8 · Cloud Analytics: Yes · Flash Endurance Guarantee: Yes · Flash Momentum: Yes (9848-AG8) · On-Demand Pricing: – · No Cost Migration: Yes
39
New Drive Technology: New Drives, STAT Tool in GUI
Storage Class Memory (SCM)
A non-volatile media that sits between DRAM & commercial NAND Flash in performance (latency), density & endurance
Spectrum Virtualize 8.3.1 brings support for SCM for use as a storage tier
FlashSystem 5100/7200/9200, with Cascade Lake, set the stage for SCM as extended caching capacity
SV 8.3.1 use cases for SCM:
- Easy Tier (tier_scm >> tier0_ssd >> tier1_ssd)
- Storage pool (exclusively SCM for highest performance)
- DRP metadata: DRP will automatically pull extents for exclusive use by metadata, typically index and reverse lookup volumes
41
SCM Offerings
Intel Optane (3D XPoint technology): IBM part numbers 01CM517 (375 GB) and 01CM518 (750 GB); endurance 30 DWPD; latency ~20us; base supply model Intel Optane D4800X
Samsung Z-SSD (Z-NAND or low-latency NAND): IBM part numbers 01CM522 (800 GB) and 01CM523 (1600 GB); endurance 30 DWPD; latency ~20us; base supply model Samsung SZ1735
All SCM drives supported by IBM are: dual-ported PCIe 3.0 NVMe; self-encrypting (standard TCG Opal specifications); very fast
42
SCM Nuances
Due to restrictions in firmware, SCM drives may only be populated in the four (4) right-most slots (#21-24) of an NVMe enclosure
• Up to 16 drives per cluster following the above rule
• An error will alert if a drive is inserted into a slot below #21
Array restrictions apply: TRAID 10 and DRAID 5
• Smaller SSDs can be used as a forced spare for an SCM drive (arrays perform only as fast as the slowest member)
No UNMAP support (it's not traditional NAND, so there is little value); host UNMAP calls write zeros instead
Intel Optane drives may take 15 minutes to format before becoming 'candidate' drives
43
FCM Update
• Up to 38.4 TB usable capacity
• Endurance: 2 DWPD
• Performance: up to 50% better performance over first-generation FCM
• FlashCore: Gzip compression, consistency, longevity, adaptive
44
FCM Options (usable capacity TBu / max effective capacity TBe)
Small: 4.8 / 22
Medium: 9.6 / 22
Large: 19.2 / 44
XLarge: 38.4 / 88
FCMs still require DRAID6 → do not override this
Rebuilds are still 2.5TB/hr against TBe capacity
Small (4.8 TB) has only two dies per package
Medium, Large, XLarge share the same performance profile
45
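As a rough planning example based on the 2.5TB/hr figure above, and assuming rebuild work scales with the effective data stored: a fully utilised Medium FCM (22 TBe) rebuilds in roughly 22 / 2.5 ≈ 9 hours, and a fully utilised XLarge FCM (88 TBe) in roughly 88 / 2.5 ≈ 35 hours; drives holding less effective data rebuild proportionally faster.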
Inside the new FCM
96 Layer 3D NAND
Mix of pSLC, QLC, and FlashCore
Dynamic SLC cache
Smart Data Placement
Same 2.5" U.2 dual-port NVMe form factor
Same 25W power max
Same Gzip compression
Same encryption
Runs a 2.x microcode
New FCMs can be a field replacement for first-generation FCMs
FCMs can be intermixed between generations within a DRAID config
New FCMs can exist at an 8.3.0 or lower code level**
**The largest 38.4 TB FCM needs SV 8.3.1 to increase the maximum DRAID stripe capability
All drives look to DRAID6 for Variable Stripe RAID protection
46
Dynamic SLC Cache
Blocks within the drive get reprogrammed to run as single-level-cell (SLC) mode flash
• Highest resiliency
• Highest performance
• All writes land on SLC (after ACK by MRAM)
• Write-heavy workloads stay on the SLC tier
• The SLC cache will 'float' across blocks to wear evenly
When the drive is 20% full or less, all data is on SLC
SLC will concede blocks to capacity storage upon demand
Following the 80% guidance for capacity, performance will stay nominal
Above 80% usage, the drive has the potential to show capacity-biased performance
[Chart: SLC vs. standard NAND used for storage; percentage of storage in SLC and QLC plotted against usable drive capacity consumed, from 0% to 100%]
47
Smart Data Placement
During routine garbage collection, data is re-ordered onto capacity flash based upon read intensity
Read-intensive data stays on the lowest page for the lowest latency
Stages four different tiers of reads on capacity flash (1st to 4th page)
Recently written data gets read from SLC
[Diagram: GC engine writes entire pages (multi-plane programming) across the 1st-4th pages and reads only what is needed, 0 to ~200 us]
48
STAT Tool
View and export Easy Tier data right from the GUI
No more dpa_heat file exports, parse utility, and STAT Excel macro
Select individual pools to view and export
Currently for standard pools only (the volume and data views are inaccurate for DRP)
49
50
Easy Tier Storage Class Memory Support (configuration → Easy Tier tiers, top to bottom)
• Tier_SCM → Tier_SCM
• Tier_SCM + Tier0_Flash → Tier_SCM, Tier0_Flash
• Tier_SCM + Tier0_Flash + Tier1_Flash → Tier_SCM, Tier0_Flash, Tier1_Flash
• Tier_SCM + Tier0_Flash + Tier_Enterprise + Tier_Nearline → not supported: ET will go into measure mode
• Tier_SCM + Tier0_Flash + Tier_Enterprise/Tier_Nearline → Tier_SCM, Tier0_Flash, Tier_Enterprise or Tier_Nearline
• Tier_SCM + Tier0_Flash + Tier1_Flash + Tier_Enterprise + Tier_Nearline → not supported: ET will go into measure mode
• Tier_SCM + Tier1_Flash → Tier_SCM, Tier1_Flash
• Tier_SCM + Tier1_Flash + Tier_Enterprise/Tier_Nearline → Tier_SCM, Tier1_Flash, Tier_Enterprise or Tier_Nearline
• Tier_SCM + Tier1_Flash + Tier_Enterprise + Tier_Nearline → Tier_SCM, then Tier1_Flash + Tier_Enterprise, then Tier_Nearline
• Tier_SCM + Tier_Enterprise/Tier_Nearline → Tier_SCM, Tier_Enterprise or Tier_Nearline
• Tier_SCM + Tier_Enterprise + Tier_Nearline → Tier_SCM, Tier_Enterprise, Tier_Nearline
In configuration names, + = and, / = or
Spectrum Virtualize for Public Cloud
Positioning Differences between Spectrum Virtualize & Spectrum Virtualize for Public Cloud
Spectrum Virtualize is the software that runs on premises in the IBM Storwize family, the FlashSystem family, and SVC (SAN Volume Controller).
Spectrum Virtualize for Public Cloud is a separate product, with separate licensing terms, and runs on supported public cloud infrastructure: IBM Cloud and AWS today.
SVC/VSC SCU capacity can be used for SV4PC with a simple one-time registration license purchase; use Category 1 SCUs.
Technically, Spectrum Virtualize for Public Cloud is largely based on Spectrum Virtualize, but adapted to run on public cloud infrastructure, such as IBM Cloud and AWS EC2. Key differences:
• Clustering / failover
• Shared network infrastructure
• Back-end virtualization
• Deployment
All-flash and hybrid systems: FlashSystem 5000, FlashSystem 7200, FlashSystem 9200, FlashSystem 9200R, SAN Volume Controller*
* New option in SVC available March 6, 2020
52
Architectureon IBM Cloud
On IBM Cloud, IBM Spectrum Virtualize for Public Cloud is built on bare metal servers, with Endurance or Performance storage
[Diagram: on-premises hosts access FC or iSCSI storage; IP replication connects the on-premises system to an IBM Spectrum Virtualize for Public Cloud cluster running under KVM on RHEL 7 IBM Cloud bare metal servers, serving IBM Cloud hosts over iSCSI and virtualizing IBM Cloud block storage (Endurance, Performance) over iSCSI]
IBM Spectrum Virtualize for Public Cloud on IBM Cloud
53
Architectureon AWS
On AWS, IBM Spectrum Virtualize for Public Cloud is built on Elastic Compute Cloud (EC2) instances, with EBS storage
[Diagram: on-premises hosts access FC, NVMe or iSCSI storage; IP replication connects the on-premises system to an IBM Spectrum Virtualize for Public Cloud cluster running on EC2 instances, serving EC2 hosts over iSCSI and virtualizing Amazon EBS storage]
IBM Spectrum Virtualize for Public Cloud on AWS
54
8.3.1 Provides More Scalability on AWS Infrastructure with Spectrum Virtualize for Public Cloud
25% improvement per cluster
• Increased EBS volumes per node, from 16 to 20
• Increasing the total capacity of a cluster to 320TB
Twice the nodes to improve performance
• Deploy up to a 4-node cluster on AWS for improved scalability and performance
55
4-Node Cluster on Amazon AWS
In this release, customers will be able to:
1. Create a new 4-node SV cloud cluster with the CloudFormation Template (CFT)
  • The user can select 1 or 2 I/O groups in the CFT-based deployment
2. Expand an existing 2-node cluster to a 4-node cluster. Two ways to add the two new nodes:
  1. By updating the existing CFT stack, with the limitation that the existing node configurations must be unchanged: EC2 types, number of nodes, IP configurations
  2. Or install another 2-node cluster, make both nodes leave that cluster to become candidate nodes, and then add the new nodes into the cluster
3. Shrink an existing 4-node cluster to a 2-node cluster
  • By updating the existing CFT stack
56
Upgrading from a 2-node cluster to a 4-node cluster (part 2)
6. After the CFT update is done, the two new nodes are available:
  a. Add these two nodes to the IP discovery network to make them candidate nodes*
  b. Add these two nodes to the cluster
  * This has to be done manually because the installer doesn't have access rights to the existing system
Backend storage (EBS) rebalances automatically
[Diagram: EBS volumes 1-8 on EC2, originally served by node1 and node2, rebalanced across node1 through node4]
57
Data Reduction Pools (DRP) in Cloud
In Spectrum Virtualize 8.3.1 we enabled DRP, an increased storage efficiency function that allows customers to:
• Automatically de-allocate and reclaim capacity of thin-provisioned volumes that have had data deleted; reclaimed capacity can be reused by other volumes
• Use the deduplication process to identify unique chunks of data, or byte patterns, and store a signature of the chunk for reference when writing new data chunks
• Depending on the data and workload, this can reduce monthly costs for EBS or IBM Cloud block storage
Machine type: DRP / dedup / compression (DRP-based) / host unmap / backend unmap
• IBM Cloud bare metal: Yes / Yes / No / Yes / No
• AWS EC2 c5.4xlarge: Yes / No / Yes / Yes / No
• AWS EC2 c5.9xlarge: Yes / Yes / Yes / Yes / No
• AWS EC2 c5.18xlarge: Yes / Yes / Yes / Yes / No
• Compression is CPU based
• Compression is only enabled in DRP
• RTC-based compression is not available
58
Performance enhancements in AWS in 8.3.1
• Configure Ethernet MTU 9000 as the default (1500 in 8.3.0)
• Use multiple threads to send I/O to EBS
• Improve CPU resource assignment for I/O and IRQ handling
Up to 20% IOPS improvement
Up to 60% bandwidth improvement
59
IBM Cloud Currency Update in 8.3.1
• Previously, we only supported Intel Xeon E5-2620 or E5-2650 bare metal servers.
• In this release, you can provision newer bare metal servers running the Intel Xeon 5120 on IBM Cloud for Spectrum Virtualize for Public Cloud.
• Previously, Spectrum Virtualize for IBM Cloud only used 24GB of the 64GB available RAM.
• In this release, Spectrum Virtualize for IBM Cloud uses 58GB of the 64GB available RAM.
• The newer servers can be purchased through the IBM Cloud portal.
60
IBM Spectrum Virtualize for Public Cloud Datasheet
IBM Spectrum Virtualize for Public Cloud Quick Reference Guide
IBM Spectrum Virtualize for Public Cloud Client Presentation
IBM Spectrum Virtualize for Public Cloud Seller Presentation
IBM Spectrum Virtualize for Public Cloud Version 8.3 Client Deck
IBM Generic Seismic Link
Seller Resources
All links above are valid as of December 20, 2019 4PM MST RSJ
61
IBM Storage Modeller (StorM) Support for Spectrum Virtualize 8.3.1
February 2020
www.ibm.com/tools/storage-modeller
64
StorM capacity support target dates for systems with 8.3.1:
FS9200: FEB 11 · SV2: FEB 11 · FS7200: FEB 11 · SA2: FEB 11 · FS5100: FEB 11 · FS5010 and FS5030: FEB 11 · SV1: FEB 11 · FS9100: FEB 11 · V7000 Gen 3: FEB 11
New IBM Storage Modeller – Support for Spectrum Virtualize 8.3.1
The new IBM Storage Modeller offers fully integrated capacity and performance modelling support. Storage Modeller replaces the legacy Capacity Magic and Disk Magic tools.
Access training replay links via the 'bell' icon in the StorM top menu bar.
For post-announce / pre-StorM-support performance modelling assistance, submit a sizing support request using the StorM support form, accessed via the '?' icon on the right side of the top menu. Choose the 'General Sizing Assistance' category and indicate '8.3.1' in the Subject line of your request.
Storage Techline will contact you and provide 8.3.1 performance sizing assistance for your opportunity.
www.ibm.com/tools/storage-modeller
Watch for StorM performance modelling support for the products listed in the table to start rolling out next week. Until then, follow the instructions to the right to request assistance.
65
• Register for the FEB 19 V2.3 training session: https://ibm.webex.com/ibm/onstage/g.php?MTID=eb2af6d58a3155821e1a42c245d7c1b57
• Capacity Modelling
  • Storage Modeller V2.3 introduces capacity modelling support for the new Spectrum Virtualize 8.3.1 storage systems, as well as existing (and re-branded) systems running Spectrum Virtualize 8.3.0 and 8.3.1. Previous versions of Storage Modeller supported capacity modelling of Spectrum Virtualize 8.2.1 and earlier systems.
  • Capacity support is updated for the new drives available with Spectrum Virtualize 8.3.0 and 8.3.1 systems, including drives with the new Storage Class Memory (SCM) technology in 8.3.1.
• Performance Modelling
  • Storage Modeller V2.3 introduces support for modelling the performance of SAN Volume Controller (SVC) storage systems. Previous versions of Storage Modeller supported performance modelling only for FlashSystem 9100, Storwize V7000, and Storwize V5000 storage systems. With this support, Storage Modeller supports the entire range of storage systems in the Spectrum Virtualize family. However, support is currently limited to Spectrum Virtualize 8.3.0 and earlier systems.
  • Spectrum Virtualize 8.3.1 new-system performance modelling support will be released in the coming weeks. In the meantime, create your 8.3.1 system capacity model and submit a Storage Modeller support form for 'General Sizing Assistance' to receive performance modelling guidance on unsupported systems.
  • V2.3 introduces support for performance modelling of virtualization relationships between virtualizer storage systems and back-end storage systems. Previous versions of Storage Modeller supported capacity modelling, but not performance modelling, of virtualization relationships.
  • V2.3 also adds support for 32 Gbps FC adapters for Spectrum Virtualize 8.3.0 systems.
Storage Modeller V2.3 adds new Spectrum Virtualize Capacity and Performance Modelling Support
3 Site CoordinatedMetro/Global Mirror
3 Site Metro/Global Mirror Timeline
• Three-site coordinated Metro/Global Mirror will be available via SCORE (RPQ) on 3/20/2020 with version 8.3.1.2
• Version 8.3.1.3, expected sometime in the 2nd or 3rd quarter, will provide the function for GA
67
Metro/Global Mirror: How Does it Work?
[Diagram: Site 1 and Site 2 linked by a Metro Mirror relationship; Global Mirror access points at both near sites replicate to a DR volume at Site 3]
The first GM relationship performs incremental cycles using FlashCopy into GM to send deltas to DR
The second GM relationship shadows the operation of the first and is able to restart a cycle if it is interrupted
68
3 Site Metro/Global Mirror Overview
• You may hear this referred to as 3-Site Replication in other presentations
  • While "technically" true, this is NOT generic 3-site replication, meaning any arbitrary means of replication between 3 sites
• Provides a coordinated Metro/Global Mirror function to Spectrum Virtualize
  • Existing MM implementations can be converted non-disruptively
• Coordinated Metro/Global Mirror is orchestrated by a small RHEL 7.5 (or above) instance running on a standalone server or VM
• Synchronous mirror between 2 near sites with a 3rd far site
  • Standard MM restrictions apply for the near sites
  • Standard GM restrictions apply for the far site
  • Uses a methodology similar to GMCV
• The far-site copy can be maintained from either the main site or the near site without the need to restart the mirror from scratch
• Will function between all Spectrum Virtualize products with a remote copy license
  • The layer must be the same between all systems (replication/storage)
69
How is the solution actually deployed?
• Three systems with FC partnerships between them
• Orchestrator hosted on a RHEL installation
• Provides overall monitoring plus an end-user command line to monitor
• Alerts sent via the existing systems
[Diagram: example sites Raleigh, Durham and Boca Raton; an active orchestrator and a standby orchestrator reach the systems over the orchestrator IP (ssh)]
70
3 Site Metro/Global Mirror Terminology
• Primary / Production : User application data is accessed from this site. This site synchronously replicates data to the other near site
• This could be the same data center or anything within MM distances/latency
• Master Site: Near site participating in 3-Site replication. Generally synonymous with the production site
• Auxiliary Near Site (Auxnear): Near site participating in 3-Site replication • Generally the MM target
• Auxiliary Far Site (Auxfar): Far site participating in 3-site replication.• GMCV target site
• Periodic Replication Source: This is one of the near sites normally used to replicate data to Auxfar site.
• 3-Site orchestrator : 3-Site orchestrator software is installed and run from a Linux host located preferably on Auxfar site.
• Standby servers are allowed
• Active Link: This is the active remote copy link between Auxiliary Far site and periodic replication source site.
71
Star or Cascade
[Diagram: star and cascade replication topologies]
72
3-Site Metro/GM Prerequisites
Spectrum Virtualize systems
• 3 Spectrum Virtualize systems with version 8.3.1.2 acting as the 3 sites
• The layer must be the same between systems (replication/storage)
• Fibre Channel remote copy partnerships established between all 3 sites
• MM remote copy consistency groups setup between near sites
• 3-site user with role ‘3SiteAdmin’
3-Site Orchestrator host system
• A single, standalone Linux host running RHEL 7.5 (or higher)
• IP connectivity from the orchestrator to all 3 sites
• Public/private SSH keypair established between the orchestrator host and all 3 sites for the 3-site user (see the sketch after this list)
• Firewall configured to allow the orchestrator ports to establish an ssh tunnel with the 3 sites
• The orchestrator host should preferably be set up in a location different from the 3 sites, or at the Auxiliary Far site
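A minimal sketch of the keypair prerequisite, assuming a dedicated 3-site user with the 3SiteAdmin role already exists on each system (host names, file paths and the exact key-association step are illustrative assumptions):

  ssh-keygen -t rsa -f ~/.ssh/orchestrator_id_rsa          # on the orchestrator host
  # copy orchestrator_id_rsa.pub to each of the three systems and associate it with the
  # 3-site user, for example via the GUI or the chuser -keyfile option
  ssh -i ~/.ssh/orchestrator_id_rsa 3siteadmin@master-site-ip   # verify key-based login to each site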
3-Site Orchestrator packaging and installation
• The orchestrator will be made available as a Linux RPM on IBM Fix Central at GA; until then it will be provided as part of the SCORE/RPQ
• The administrator installs this package on the Linux RHEL host; this installs the 3-Site services on the orchestrator host
• A detailed 3-site configuration and CLI usage guide will be available
73
Metro/GM Limitations
• Initially, Metro/GM replication is available as a SCORE/RPQ
• The Spectrum Virtualize GUI does not support setting up, managing and monitoring 3-Site replication
• The orchestrator is CLI only
• The Auxfar/tertiary site cannot be used as the primary/production site and/or as the source for periodic replication in a 3-Site configuration
• Does not support HyperSwap between the two near sites
• The following operations are not supported on volumes that are part of 3-Site replication: restore and incremental FlashCopy maps
• 3-Site replication is not supported over IP partnerships
• Only a single 3-site orchestrator host instance can be active at a time
• Standalone relationships are not supported in 3-site
• Metro Mirror Consistency Protection is not supported
• 16 consistency groups are supported, with up to 256 relationships in each group
• The minimum cycle period for far-site replication is 5 minutes and the RPO is 10 minutes
74
Process for requesting 3-site RPQ
3-site replication is an announced new function in 8.3.1
• No new software PID or chargeable feature
• Remote Copy required
The orchestrator software will not be published publicly at initial release
• Software available only via SCORE/RPQ
• The SCORE request should include the configuration details listed in the 3-site questionnaire
Click this link to open a SCORE request
Link to questionnaire to be posted
Launch materials: announcement letter; website and datasheet; Redbook and product guide
Technical docs and prerequisites: Knowledge Center; 8.3.1 Limits and Restrictions; orchestrator configuration and CLI user guide
75
Distributed RAID (DRAID) Expansion
Distributed RAID Array Expansion – What is it?
• Ability to add one or more drives to an existing DRAID array non-disruptively
• New drives are integrated into array and existing data is re-striped to maintain DRAID efficiencies
• Final result is the same as if that array had been created from scratch
• Stripe size does not change
77
Array Expansion
78
What Drives Can Be Added?
• Ideally: the same as the existing drives!
  • This means no performance bottlenecks
• There is an allowance for cases where that is not possible
  • Similar to using the allow-superior flag when creating arrays
• Must be the same or larger capacity
  • Extra capacity is not used
  • Compressing arrays must use drives of the same capacity
• Must be the same or better performance
  • Though in reality we expect the same performance class is preferred
79
How Many Drives / How Long?
• Between 1 and 12 drives in any one expansion process
  • Adding >12 drives can be done with a series of expansions
  • When adding lots of drives, it might be quicker to create a new array, migrate the data and then expand the new array to incorporate the old drives
• A drive can either add extra capacity or extra rebuild areas
  • Recommended to stick with 1 rebuild area for every 24-48 drives in the array
  • Does not apply to FCMs
• The length of an expansion depends on a number of factors
  • Expect between many hours (especially NVMe drives) and many days (NL-SAS)
  • The goal is to run in the background without impacting host I/O performance
  • Affected by: drive capacity (but also how much is filled with data); host I/O workload and its distribution
• Estimated completion time in CLI output (lsarrayexpansionprogress); seen as a long-running task in the GUI; see the example below
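A minimal sketch of checking expansion progress from the CLI (the output fields described are assumed from the slide above):

  lsarrayexpansionprogress     # lists in-progress expansions with percent complete and estimated completion time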
80
Stripe Size Will Not Change
• This means that your overall percentage of usable space will stay the same
• This can be a challenge for smaller customers that purchased 6 drives
  • In this case the usable capacity is 50%; after expansion (by any number of drives) the overall usable capacity will still be 50% of the total
• If purchasing a larger number of drives, consider making a new DRAID array with a larger stripe size, then take the old DRAID array out of the pool and expand the DRAID array with the larger stripe size
81
Other Points of Consideration
• Array rebuild takes priority over expansion
  • If a drive fails mid-expansion, expansion is suspended to allow the rebuild to run unimpeded
  • Expansion will resume after the rebuild process is complete
• Expansion takes priority over the copy-back process
  • If a failed drive is replaced mid-expansion, the expansion continues
  • The copy-back process will be started after expansion
  • Copy-back can only be performed after drive initialization is complete; the same serialization takes place on array creation
• DRAID-6 is now the default array type recommended by the GUI for 6 drives or more
• Expansion cannot be cancelled or reversed
  • Deleting an array suspends expansion while data is migrated off
• The work area will consume 5 extents in most cases, and this area is reserved until the expansion is finished (see the note below)
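As an illustrative note on the work-area size: assuming the common 1 GiB extent size (extent size is set per pool, so this is an assumption), 5 extents is about 5 GiB reserved per expanding array; a pool with 8 GiB extents would reserve about 40 GiB.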
82
Volume Protection/HyperSwap Volume Expansion
Volume Protection
• Volume protection is a feature that has been available since 7.4 code and by default is set to off
• In 8.3.1, on new systems, volume protection will be set to on by default
• Existing systems that are upgraded will remain the same, unless upgrading from a system prior to 7.4, in which case it will be on after the upgrade
• Volume protection can be enabled/disabled at a system level or at an individual pool/child pool level
• When creating a new remote copy relationship, the target volume now falls under the protection of volume protection
• The feature is now available in the GUI
84
Volume Protection
System/Settings/Volume Protection
85
Volume Protection
vdisk_protection_enabled (system) / vdisk_protection_enabled (pool) → vdisk_protection_status (pool)
• yes / yes → active
• yes / no → inactive
• no / yes → inactive
• no / no → inactive
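A minimal sketch of managing volume protection from the CLI (the parameter names and the 15-minute idle window are illustrative assumptions):

  chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 15   # enable system-wide protection; block deletion of volumes with recent I/O
  lsmdiskgrp 0                                                   # the pool view shows vdisk_protection_enabled and vdisk_protection_status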
86
Other Changes to Enhance System/Data Protection
• The system will enforce SCSI LUN ID parity between all I/O groups
  • Previously this had to be done manually, and if the SCSI LUN ID was different between 2 I/O groups it could confuse certain operating systems
  • A volume assignment will fail if the system cannot give access to that LUN on the same SCSI ID to a server/cluster across all I/O groups
• Policing to ensure the owning I/O group is always part of the access I/O groups
  • Previously, a user could accidentally shut off the owning I/O group from access; this would cause undue traffic across I/O groups and at times cause problems
  • The Upgrade Test Utility will also police for this and not let an upgrade continue if the condition exists
87
Expanding HyperSwap Volumes
• Support for expanding HyperSwap volumes using the GUI and CLI
• Volumes can be expanded without stopping host I/O, and the new capacity is available immediately
• You can expand the size of a HyperSwap volume provided:
  • All copies of the volume are synchronized
  • All copies of the volume are thin or compressed
  • There are no mirrored copies (this restriction will likely be lifted later)
  • The volume is not in a remote copy consistency group
  • The volume is not in any user-owned FlashCopy maps
• New CLI command: expandvolume
• An example to increase the capacity of volume5 by 10 GB:
expandvolume -size 10 -unit gb volume5
88
Miscellaneous Enhancements
SNMPv3
Outbound support for SNMP (Simple Network Management Protocol) in Spectrum Virtualize has been updated from version 2 to include both version 2 and version 3.
Version 3 uses the same base protocol as earlier versions but introduces encryption and much improved authentication mechanisms. Depending on how you authorize with the SNMP agent on a device, you may be granted different levels of access. The security level depends on what credentials you must provide to authenticate successfully
90
SNMP V3 Security Options (security level: description; credentials required)
• NoAuthNoPriv: no authentication nor encryption; requires security name and engine ID only
• AuthNoPriv: messages are authenticated but not encrypted; requires security name and engine ID, authentication protocol and authentication password
• AuthPriv: messages are authenticated and encrypted; requires security name and engine ID, authentication protocol and password, and privacy protocol and password
91
We have added six new parameters to mksnmpserver to support SNMP Version 3:
• -engineid : (Required for V3) Specifies the engine ID for the SNMPv3 server. This value has a max of 32 bytes. You must specify -securityname with this parameter.
• -securityname : (Required for V3) Specifies the security name for the SNMPv3 server. This value has a max of 32 characters. You must specify -engineid with this parameter.
• -authprotocol : (Optional) Specifies the authentication protocol for the SNMPv3 server. Valid values are sha or md5. You must specify -authpassphrase with this parameter.
• -authpassphrase : (Optional) Specifies the authentication password for the SNMPv3 server. This value must be in the range of 8 - 255 characters. You must specify -authprotocol with this parameter.
• -privprotocol : (Optional) Specifies the privacy protocol for the SNMPv3 server. Valid values are aes or des. You must specify -privpassphrase with this parameter.
• -privpassphrase : (Optional) Specifies the privacy password for the SNMPv3 server. This value must be in the range of 8 - 255 characters. You must specify -privprotocol with this parameter.
svctask mksnmpserver
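A minimal sketch combining the parameters above into one command (the -ip parameter and all values shown are illustrative assumptions):

  svctask mksnmpserver -ip 192.0.2.50 -engineid 8000000001020304 -securityname snmpadmin -authprotocol sha -authpassphrase 'Auth-Passw0rd!' -privprotocol aes -privpassphrase 'Priv-Passw0rd!'

This creates an AuthPriv-level SNMPv3 destination; omitting the privacy options would give AuthNoPriv, and omitting the authentication options as well would give NoAuthNoPriv.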
92
Enhanced Audit Logging
Audit logging has been enhanced to cover:
– Auditing of satask and svctask commands at the CLI level
– Auditing of login attempts to the cluster or an individual node (GUI)
– SSH login attempts
Captured by default at the config node
– Authentication attempts do not show up in the GUI audit log
• CLI logs are located in /var/log/cli_audit
• Authentication attempts are located in /var/log/secure
– These files are part of a snap capture
By default, on upgrades these messages are off for syslog servers
– chsyslogserver -audit on <server_id>
  • CLI log messages use the "NOTICE" level of the syslog selector, whose format is usually <facility>.<level>
  • *.notice /var/log/svc_commands (syslog rule to write to a unique file)
– chsyslogserver -login on <server_id>
  • Authentication messages are sent to the "authpriv" facility of the syslog selector
  • authpriv.* /var/log/svc_auth (syslog rule to write to a unique file)
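A minimal end-to-end sketch combining the settings above (the syslog server ID 0 and the destination file names are illustrative assumptions):

  chsyslogserver -audit on 0          # forward CLI audit messages to syslog server 0
  chsyslogserver -login on 0          # forward authentication messages to syslog server 0

Then, on the receiving rsyslog server, write each stream to its own file (for example in /etc/rsyslog.conf):

  *.notice    /var/log/svc_commands
  authpriv.*  /var/log/svc_auth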
93
Enhanced Audit Logging
Now supports both UDP and TCP, with the ability to designate the port
The syslog command now supports a DNS name (if a DNS server is designated) rather than just an IP address
94
NVMe-oF Big Host Support
• Improving performance for hosts with a high number of CPU cores
  • The more CPU cores the host has, the more I/O queues it can use, and thus more concurrent I/O requests towards the NVMe controller
• The actual change:
  • Increasing the number of I/O queues from 8 to 16
  • Increasing the size of I/O queues from 10 to 32
• The results: more throughput and lower latency; see the arithmetic below
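Illustrative arithmetic on the queue change above: 8 queues of depth 10 allow up to 8 x 10 = 80 outstanding NVMe I/Os per host, while 16 queues of depth 32 allow up to 16 x 32 = 512, roughly a 6.4x increase in available concurrency.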
95
Secure Data Deletion (8.3.1.1)
• Allows data to be securely erased from internal drives as well as boot drives
• 3 levels:
  • Cryptographic erase – instant
  • Block erase – fast: raises and lowers the voltage level of the storage element; specific elements are modified with a vendor-specific value
  • Data overwrite – slow: replaces existing data with random data
• chdrive -task erase
• Boot drive erase: satask rescuenode -secureerase
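A minimal sketch of erasing an internal drive (the drive ID is an illustrative assumption; the erase method used depends on what the drive supports):

  lsdrive                   # identify the ID of the drive to erase
  chdrive -task erase 5     # securely erase drive ID 5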
96
1H20 Licensing Discussion for the Spectrum Virtualize Family
• FlashSystem 9200 and 7200: all-inclusive license for INTERNAL storage; external virtualization is licensed by SCU via SVC/VSC software licenses
• Storage Capacity Unit (SCU) counting adjustments: Storage Class Memory added; percentage adjustments
• Spectrum Virtualize for Public Cloud: available in eConfig on the storage hardware configurations; SVC/VSC entitlement extension to SV4PC with a one-time registration purchase ($1500 list)
• FlashSystem Enhanced High Availability (EHA) solution: 9200 added to the general offering; 7200 added to the RPQ offering; SVC nodes can now be 2 nodes per 9200 to support Enhanced Stretched Cluster
97
1H2020 Storage Capacity Unit (SCU) Changes
IBM Spectrum Virtualize for SVC and External Virtualization for 9100, 9200 and 7200
Current:
• Flash and SSD: Category 1: 1 TiB = 1 SCU
• 10K, 15K and systems using Category 3 drives with advanced architectures: Category 2: 1 TiB = 0.847 SCU
• NL-SAS and SATA: Category 3: 1 TiB = 0.25 SCU
NEW:
• Storage Class Memory and SV4PC: Category 1: 1 TiB = 1 SCU
• Flash and SSD: Category 2: 1 TiB = 0.847 SCU
• 10K, 15K and systems using Category 3 drives with advanced architectures: Category 3: 1 TiB = 0.50 SCU
• NL-SAS and SATA: Category 4: 1 TiB = 0.25 SCU
Note: SV4PC is Category 1 because it is all-in licensing, meaning all functions are included
98
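Illustrative arithmetic with the NEW categories (capacities chosen only for the example): 100 TiB of SCM or SV4PC capacity counts as 100 SCUs; 100 TiB of flash/SSD as 100 x 0.847 ≈ 85 SCUs; 100 TiB of 10K/15K as 50 SCUs; and 100 TiB of NL-SAS/SATA as 25 SCUs.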
Thank You!