
Page 1: VSP Architecture Overview

HDS Technical Training

© 2010 Hitachi Data Systems. All rights reserved.

Page 2: VSP Introduction

VSP is a completely new, highly scalable enterprise array.

VSP is the first “3D Array”:

– Scales up within a single chassis by adding logic boards (I/O processors, cache, host ports, disk controllers), disk containers and disks (to 1024 disks)

– Scales out by adding a second fully integrated chassis to double the cache, disk capacity and host connectivity of a single chassis (to 2048 disks)

– Scales deep with external storage

VSP continues support of Hitachi Dynamic Provisioning and Universal Volume Manager (virtualized storage), as well as most other Hitachi Program Products available on the USP V.

VSP has a new feature within HDP, named Hitachi Dynamic Tiering, that migrates data among different storage tiers (SSD, SAS, SATA) within a single HDP Pool based on historical usage patterns (a sketch of the idea follows at the end of this list).

VSP provides up to 40% better power efficiency than USP V and a much smaller footprint
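As a rough illustration of the usage-based placement Dynamic Tiering performs, the sketch below maps HDP pages to tiers from their I/O history. The thresholds, page granularity, and all names are hypothetical; the real relocation engine is internal to HDP.

    # Hypothetical usage-based tier placement in the spirit of Hitachi
    # Dynamic Tiering: hot pages rise to SSD, cold pages sink to SATA.
    TIERS = ["SSD", "SAS", "SATA"]  # fastest to slowest

    def choose_tier(iops_history, hot=50.0, cold=5.0):
        """Pick a tier for one HDP page from its average historical IOPS."""
        avg = sum(iops_history) / len(iops_history)
        if avg >= hot:
            return "SSD"
        if avg <= cold:
            return "SATA"
        return "SAS"

    def relocation_plan(pool_stats, placement):
        """One cycle: list (page, from_tier, to_tier) moves for a pool."""
        return [(page, placement.get(page), choose_tier(hist))
                for page, hist in pool_stats.items()
                if placement.get(page) != choose_tier(hist)]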

Page 3: VSP Changes Overview

The VSP shares no hardware with the USP V; the architecture is 100% changed from the USP V.

VSP does reuse much of the USP V software, such as HDP and other Program Products.

Major changes from the USP V include:

– The previous Universal Star Network switch layer (PCI-X, 1064MB/s paths) has been upgraded to a new HiStar-E grid (PCI-e, 2048MB/s paths)

– The MP FED/BED processors have been replaced with Intel Xeon quad-core CPUs located on a new Virtual Storage Director I/O processor board

– The discrete Shared Memory system has been replaced by a Control Memory (CM) system. This uses processor board local memory plus a master copy in a region of cache that is updated by the individual VSDs (a sketch follows this list)

– Each VSD board manages a discrete group of LDEVs that may be accessed from any port, and has a reserved partition in cache to use for these LDEVs

– Individual processes on each VSD Xeon core dynamically execute tasks for the different modes: Target, External, BED (disk), HUR Initiator, HUR Target, various mainframe modes, and various internal housekeeping modes
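As an illustration of that Control Memory arrangement, here is a minimal sketch (all names are invented): each VSD reads control metadata from its board-local copy and writes updates through to the master copy held in a region of cache.

    # Illustrative model of Control Memory: board-local copies backed by a
    # master copy in global cache, written through by the owning VSD.
    class VSDBoard:
        def __init__(self, vsd_id, cm_master):
            self.vsd_id = vsd_id
            self.master = cm_master   # shared dict standing in for the cache region
            self.local = {}           # processor-board local memory copy

        def write_cm(self, key, value):
            self.local[key] = value   # fast local update...
            self.master[key] = value  # ...propagated to the master copy in cache

        def read_cm(self, key):
            if key not in self.local:               # local miss:
                self.local[key] = self.master[key]  # refresh from the master copy
            return self.local[key]

    cm = {}
    vsd0, vsd1 = VSDBoard(0, cm), VSDBoard(1, cm)
    vsd0.write_cm("ldev:0x10:state", "normal")
    print(vsd1.read_cm("ldev:0x10:state"))  # "normal", picked up via the master copy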

Page 4: VSP Configurations Overview

A single chassis array can include up to:
– 3 racks and one logic box
– 4 VSD boards
– 8 FED boards*
– 64 8Gbps FC or FICON ports (no ESCON)
– 8 DCA boards (using 4GB DIMMs)
– 256GB cache
– 4 BED boards
– 32 6Gbps back-end SAS links
– 1024 2.5” disks (or 640 3.5” disks)
– 64 HDUs
– 65,280 Logical Devices

A dual chassis array can have up to:
– 6 racks and two logic boxes
– 8 VSD boards
– 16 FED boards*
– 128 8Gbps FC or FICON ports
– 16 DCA boards (using 4GB DIMMs)
– 512GB of cache
– 8 BED boards
– 64 6Gbps back-end SAS links
– 2048 2.5” drives (or 1280 3.5” drives)
– 128 HDUs
– 65,280 Logical Devices

* More if some DKAs are deleted

Page 5: VSP Disk Choices

2.5” SFF Disks (SFF DKU):
– 200 GB SSD (3 Gbps**)
– 146 GB 15K RPM SAS (6 Gbps)
– 300 GB 10K RPM SAS (6 Gbps)
– 600 GB 10K RPM SAS (6 Gbps)

3.5” LFF Disks (LFF DKU):
– 400 GB SSD (3 Gbps**) (~20% slower on writes than the 200 GB SSD)
– 2 TB 7.2K RPM SATA (3 Gbps)

** In the future, the SSDs will have the 6 Gbps interface.

Disks of different interface speeds may be intermixed in the DKUs as the BEDs drive each “conversation” at the speed of the individual drive over the switched SAS back-end.

Page 6: VSP Design

Each FED board has a Data Accelerator chip (“DA”, or “LR” for local router) instead of 4 MPs. The DA routes host I/O jobs to the VSD board that owns that LDEV and performs DMA transfers of all data blocks to/from cache.

Each BED board has 2 Data Accelerators instead of 4 MPs. They route disk I/O jobs to the owning VSD board and move data to/from cache. Each BED board has 2 SAS SPC Controller chips that drive 8 SAS 6Gbps switched links (over four 2-Wide cable ports).

Most MP functions have been moved from the FED and BED boards to new multi-purpose VSD boards. No user data passes through the VSD boards! Each VSD has a 4-core Intel Xeon CPU and local memory. Each VSD manages a private partition within global cache.

Unlike the previous Hitachi Enterprise array designs, the FED board does not decode and execute I/O commands. In the simplest terms, a VSP FED accepts and responds to host requests by directing the host I/O requests to the VSD managing the LDEV in question. The VSD processes the commands, manages the metadata in Control Memory, and creates jobs for the Data Accelerator processors in FEDs and BEDs. These then transfer data between the host and cache, virtualized arrays and cache, disks and cache, or HUR operations and cache. The VSD that owns an LDEV tells the FED where to read or write the data in cache.
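As an illustration of this division of labor, here is a minimal Python sketch of the flow just described. All class, field, and function names are invented; this models the behavior on the slide, not actual VSP firmware.

    from collections import namedtuple

    # The FED's Data Accelerator does no command decoding itself; it forwards
    # each host I/O to the VSD that owns the target LDEV, then DMAs the data
    # blocks to or from the cache slot the VSD hands back.
    HostIO = namedtuple("HostIO", "ldev_id kind")

    class VSD:
        """Owns a set of LDEVs and a private partition of global cache."""
        def __init__(self, vsd_id):
            self.vsd_id = vsd_id

        def process_command(self, io):
            # Decode the command, update Control Memory metadata, and pick a
            # slot in this VSD's cache partition for the transfer.
            return hash((self.vsd_id, io.ldev_id)) % 4096

    class DataAccelerator:
        def __init__(self, ldev_routing_table, vsds):
            self.routing = ldev_routing_table   # local table: {ldev_id: vsd_id}
            self.vsds = vsds                    # {vsd_id: VSD}

        def handle_host_io(self, io):
            owner = self.vsds[self.routing[io.ldev_id]]  # route, don't decode
            slot = owner.process_command(io)             # VSD runs the command
            print(f"DMA {io.kind}: LDEV {io.ldev_id} <-> cache slot {slot}")

    da = DataAccelerator({0x10: 0, 0x11: 1}, {0: VSD(0), 1: VSD(1)})
    da.handle_host_io(HostIO(ldev_id=0x10, kind="read"))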

Page 7: VSP LDEV Management

In VSP, VSDs manage unique sets of LDEVs, and their data is contained within that VSD’s cache partition. Requests are routed to the VSDs by the Data Accelerator chips on the FED and BED boards using their local LDEV routing tables.

LDEV ownership can be viewed in Storage Navigator and may be manually changed to another VSD board.

New LDEVs are assigned round-robin to the VSDs installed in that array.

If additional VSDs are installed, groups of LDEVs are automatically reassigned to the new VSDs, restoring a roughly even distribution across all VSDs. This is a fast process.
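A minimal sketch of this assignment and rebalance behavior (illustrative only; this is not the actual microcode algorithm):

    # Round-robin ownership assignment, and the automatic re-spread that
    # follows adding VSD boards. Purely illustrative.
    def assign_round_robin(ldev_ids, vsd_ids):
        """Spread new LDEVs across the installed VSDs in turn."""
        return {ldev: vsd_ids[i % len(vsd_ids)] for i, ldev in enumerate(ldev_ids)}

    def rebalance(ownership, vsd_ids):
        """Reassign all LDEVs roughly evenly after VSDs are added."""
        return assign_round_robin(sorted(ownership), vsd_ids)

    owners = assign_round_robin(range(12), ["VSD-0", "VSD-1"])        # 6 LDEVs each
    owners = rebalance(owners, ["VSD-0", "VSD-1", "VSD-2", "VSD-3"])  # now 3 each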

LDEV ownership by VSD means that VSP arrays don’t have an LDEV coherency protocol overhead. There is only one VSD board that can manage all I/O jobs for any given LDEV, but any core on that Xeon CPU may execute those processes.

Page 8: Paths Per LDEV

VSP should be relatively insensitive to how many different active paths are configured to an LDEV.

On the USP V, we generally advise 2 paths for redundancy, and 4 paths where performance must be maintained across maintenance actions, but never more than 4 active paths, because the LDEV coherency protocol traffic in Shared Memory bogs down as the number of paths increases.

Page 9: VSP I/O Operations

Note that a VSD controls all I/O operations for an LDEV, whether it is processing a host I/O, a disk I/O, an external I/O, or a Copy Product operation.

Copy Product P-VOLs and S-VOLs must be owned by the same VSD, as the data has to be available from the same cache partition.
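A small sketch of that constraint as a validation check (hypothetical names; real pair creation is done through the Copy Product software):

    # Reject a copy pair whose volumes are owned by different VSDs, since
    # both must be served from the same cache partition.
    def validate_copy_pair(pvol, svol, ldev_owner):
        if ldev_owner[pvol] != ldev_owner[svol]:
            raise ValueError(
                f"P-VOL {pvol} is on VSD {ldev_owner[pvol]} but S-VOL {svol} "
                f"is on VSD {ldev_owner[svol]}; move ownership of one LDEV "
                "(e.g., in Storage Navigator) so the pair shares a VSD.")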

Page 10: Performance on VSP

Basically, on the USP V, we know that:
– Small block I/O is limited by MP busy rate (FED-MP or BED-MP busy)
– Large block I/O is limited by path saturation (port MB/s, cache switch path MB/s, etc.)

On VSP, the “MPs” are separated from the ports.
– Where there are multiple LDEVs on a port, these can be owned by different VSD boards.
– Where there are multiple LDEVs on a port that are owned by a single VSD board, the 4 cores in the VSD board can be processing I/Os for multiple LDEVs in parallel.

VSP can achieve very high per-port cache-hit IOPS rates. In tests using 100% 8KB random reads with 32 15K disks in RAID-10 (2+2), we saw:
– USP V: 1 port, about 16,000 IOPS (2 ports/2 MPs, 31,500 IOPS)
– VSP: 1 port, about 67,000 IOPS (2 ports, 123,000 IOPS), roughly four times the USP V per-port rate

Page 11: VSP Architecture Overview

Page 12: Fully populated Dual Chassis VSP has 6 racks

[Diagram: six-rack layout. Racks RK-00, RK-01, and RK-02 form DKC Module-0; racks RK-10, RK-11, and RK-12 form DKC Module-1. Dimensions indicated: approximately 11.8 ft wide, 6.5 ft high, 3.6 ft deep.]

2 DKC racks, each with a DKC box and 2 DKU boxes; 4 DKU racks, each with 3 DKU boxes.

              Single Chassis (4 VSDs) | Dual Chassis (8 VSDs)
HDD (SFF)     1,024                   | 2,048
FED ports     64 (80/96 *1)           | 128 (160 *2)
Cache         256GB (512GB) *3        | 512GB (1,024GB) *3

*1 80 ports with 1 BED pair; 96 ports in a diskless (all-FED) configuration
*2 160 ports with 1 BED pair per DKC module (diskless is not supported on 2-module configurations)
*3 Enhanced (V02)

Page 13: VSP Single Chassis Architecture w/ Bandwidths

[Diagram: HiStar-E grid bandwidths, single chassis. Four Grid Switches (GSWs, 96 grid links in total) interconnect the FED, BED, VSD, and DCA cache boards; the DCAs hold 256GB of cache, including the Control Memory (CM) master copy.

Grid bandwidth per board group: FEDs, 16 x 1GB/s send + 16 x 1GB/s receive; VSDs, 16 x 1GB/s send + 16 x 1GB/s receive; BEDs, 16 x 1GB/s send + 16 x 1GB/s receive; cache, 32 x 1GB/s send + 32 x 1GB/s receive; to the other GSWs, 16 x 1GB/s send + 16 x 1GB/s receive.

External connectivity: 8 x 8Gbps FC ports per FED (64 FC ports in total) and 8 x 6Gbps SAS links per BED (32 SAS links in total). The 4 BED boards carry 8 SAS processors and 8 DA processors; the 8 FED boards carry 8 DA processors.]

Page 14: VSP Single Chassis Grid Overview

[Diagram: single-chassis grid, with 4 Grid Switches interconnecting the DCA, VSD, FED, and BED boards.]

VSP Single Chassis - Boards:
– 8 FEDs
– 4 BEDs
– 4 VSDs (16 CPU cores)
– 8 Cache (DCA) boards
– HiStar-E Network: 4 PCIe Grid Switches (96 ports)

Page 15: Dual Chassis Arrays

The VSP can be configured as a single or dual chassis array. It is still a single homogeneous array.

A VSP might be set up as a dual chassis array from the beginning, with a distribution of boards across the two chassis.

A single chassis VSP can be later expanded (Scale Out) with a second chassis.

The second chassis may be populated with boards in any of these scenarios (a toy model follows this list):

– Adding 2 or 4 Grid Switches and 4-8 Cache boards to provide larger amounts of cache

– Adding 2 or 4 Grid Switches and 2-4 VSDs to add I/O processing power (for random I/O)

– Adding 2 or 4 Grid Switches and 2-8 FEDs to add host, HUR, or external ports

– Adding 2 or 4 Grid Switches and 1-2 BEDs to add disks and SAS paths

– Any combination of the above
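A toy model of these population options (the board names and per-option ranges come from the list above; the validation logic itself is only an illustrative sketch):

    # Second-chassis population options, as {board_type: (min, max)}.
    EXPANSION_OPTIONS = {
        "cache":    {"GSW": (2, 4), "DCA": (4, 8)},
        "io_power": {"GSW": (2, 4), "VSD": (2, 4)},
        "ports":    {"GSW": (2, 4), "FED": (2, 8)},
        "disks":    {"GSW": (2, 4), "BED": (1, 2)},
    }

    def valid_expansion(boards):
        """True if a {board_type: count} plan matches a combination of options."""
        if boards.get("GSW", 0) not in (2, 4):
            return False                      # every scenario adds 2 or 4 GSWs
        for btype, count in boards.items():
            if btype == "GSW":
                continue
            ranges = [opt[btype] for opt in EXPANSION_OPTIONS.values()
                      if btype in opt]
            if not any(lo <= count <= hi for lo, hi in ranges):
                return False
        return True

    print(valid_expansion({"GSW": 4, "DCA": 8, "FED": 4}))  # True: cache + ports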

Page 16: VSP Second Chassis - Uniform Expansion

[Diagram: second-chassis grid, identical in layout to the first chassis.]

VSP Second Chassis - Boards:
– 8 FEDs
– 4 BEDs
– 4 VSDs (16 CPU cores)
– 8 Cache (DCA) boards
– HiStar-E Network: 4 PCIe Grid Switches (96 ports)
– 4 GSW paths to Chassis-1

Page 17: VSP and USP V Table of Limits

Table of Limits                          | VSP Single Chassis | VSP Dual Chassis | USP V Maximum
Data Cache (GB)                          | 32-256 (512)       | 32-512 (1024)    | 512
Raw Cache Bandwidth                      | 64GB/s             | 128GB/s          | 68GB/s
Control Memory (GB)                      | 8-48               | 8-48             | 24
Cache Directories (GB)                   | 2 or 4             | 6 or 8           | -
SSD Drives                               | 128                | 256              | 128
2.5" Disks (SAS and SSD)                 | 1024               | 2048             | -
3.5" Disks (SATA, SSD)                   | 640                | 1280             | -
3.5" Disks (FC, SATA)                    | -                  | -                | 1152
Logical Volumes                          | 65,280             | 65,280           | 65,280
Logical Volumes per VSD                  | 16,320             | 16,320           | -
Max Internal Volume Size                 | 2.99TB             | 2.99TB           | 2.99TB
Max CoW Volume Size                      | 4TB                | 4TB              | 4TB
Max External Volume Size                 | 4TB                | 4TB              | 4TB
IO Request Limit per Port                | 2048               | 2048             | 2048
Nominal Queue Depth per LUN              | 32                 | 32               | 32
HDP Pools                                | 128                | 128              | 128
Max Pool Capacity                        | 1.1PB              | 1.1PB            | 1.1PB
Max Capacity of All Pools                | 1.1PB              | 1.1PB            | 1.1PB
LDEVs per Pool (pool volumes)            | 1024               | 1024             | 1024
Max Pool Volume Size (internal/external) | 2.99/4TB           | 2.99/4TB         | 2.99/4TB
DP Volumes per Pool                      | ~62k               | ~62k             | 8192
DP Volume Size Range (no SI/TC/UR)       | 46MB-60TB          | 46MB-60TB        | 46MB-4TB
DP Volume Size Range (with SI/TC/UR)     | 46MB-4TB           | 46MB-4TB         | 46MB-4TB
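One sizing implication of the port limits above: with an I/O request limit of 2048 per port and a nominal queue depth of 32 per LUN, a single port can sustain full queues on at most 2048 / 32 = 64 LUNs before additional requests queue in the host.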

Page 18: Board Level Details

Page 19: Logic Box Board Layout

[Diagram: logic box board layout, DKC-0 front and rear views, split between Cluster 1 and Cluster 2.

Cluster 1 boards: ESW-0 (1SA), ESW-1 (1SB); DCA-0 (1CA), DCA-1 (1CB), DCA-2 (1CE), DCA-3 (1CF); MPA-0 (1MA), MPA-1 (1MB); DKA-0 (1AU), DKA-1 (1AL); CHA-0 (1EU), CHA-1 (1EL), CHA-2 (1FU), CHA-3 (1FL); SVP-0.

Cluster 2 boards: ESW-0 (2SC), ESW-1 (2SD); DCA-0 (2CC), DCA-1 (2CD), DCA-2 (2CG), DCA-3 (2CH); MPA-0 (2MC), MPA-1 (2MD); DKA-0 (2MU), DKA-1 (2ML); CHA-0 (2QU), CHA-1 (2QL), CHA-2 (2RU), CHA-3 (2RL); SVP-1 / HUB-01.]

Page 20: FED Port Labels (FC or FICON)

DKC-0 Rear Slots (each CHA slot provides ports 1-8; left and right columns of the slot rows shown side by side):

CHA-3 (2RL): 2G 4G 6G 8G 2H 4H 6H 8H    CHA-2 (2RU): 2E 4E 6E 8E 2F 4F 6F 8F
CHA-1 (2QL): 2C 4C 6C 8C 2D 4D 6D 8D    CHA-0 (2QU): 2A 4A 6A 8A 2B 4B 6B 8B
CHA-4 (2ML): AC CC EC GC AD CD ED GD    CHA-5 (2MU): AA CA EA GA AB CB EB GB
CHA-4 (1AL): 9C BC DC FC 9D BD DD FD    CHA-5 (1AU): 9A BA DA FA 9B BB DB FB
CHA-1 (1EL): 1C 3C 5C 7C 1D 3D 5D 7D    CHA-0 (1EU): 1A 3A 5A 7A 1B 3B 5B 7B
CHA-3 (1FL): 1G 3G 5G 7G 1H 3H 5H 7H    CHA-2 (1FU): 1E 3E 5E 7E 1F 3F 5F 7F

DKC-1 Rear Slots:

CHA-9 (2UL): 2Q 4Q 6Q 8Q 2R 4R 6R 8R    CHA-8 (2UU): 2N 4N 6N 8N 2P 4P 6P 8P
CHA-7 (2TL): 2L 4L 6L 8L 2M 4M 6M 8M    CHA-6 (2TU): 2J 4J 6J 8J 2K 4K 6K 8K
CHA-10 (2XL): AL CL EL GL AM CM EM GM   CHA-11 (2XU): AJ CJ EJ GJ AK CK EK GK
CHA-10 (1LL): 9L BL DL FL 9M BM DM FM   CHA-11 (1LU): 9J BJ DJ FJ 9K BK DK FK
CHA-7 (1GL): 1L 3L 5L 7L 1M 3M 5M 7M    CHA-6 (1GU): 1J 3J 5J 7J 1K 3K 5K 7K
CHA-9 (1HL): 1Q 3Q 5Q 7Q 1R 3R 5R 7R    CHA-8 (1HU): 1N 3N 5N 7N 1P 3P 5P 7P

Page 21: DKU and HDU Overviews

Page 22: DKU and HDU Map – Front View, Dual Chassis

[Diagram: front-view HDU map of a dual-chassis system. Chassis #0 occupies racks RK-00 (logic box DKC-0 plus DKU-00 and DKU-01), RK-01 (DKU-02, DKU-03, DKU-04), and RK-02 (DKU-05, DKU-06, DKU-07). Chassis #1 occupies racks RK-10 (logic box DKC-1 plus DKU-10 and DKU-11), RK-11 (DKU-12, DKU-13, DKU-14), and RK-12 (DKU-15, DKU-16, DKU-17). The front view exposes HDUs xx-0, xx-1, xx-4, and xx-5 of each DKU, with power supplies at the bottom of each rack. The bottom scale maps BED pair 1,2 and BED pair 3,4 of each DKC to DKU numbers 0-7 and 10-17.]

Page 23: DKU and HDU Map – Rear View, Dual Chassis

[Diagram: rear-view HDU map of the same dual-chassis layout, mirrored left-to-right relative to the front view. The rear view exposes HDUs xx-2, xx-3, xx-6, and xx-7 of each DKU, with the same rack-to-DKU assignments and the same BED-pair-to-DKU scale as the front view.]

Page 24: BED to DKU Connections (Single Chassis)

• Up to 1024 SFF (shown) or 640 LFF disks, 32 6Gbps (600MB/s) SAS links (16 2-Wide ports), 8 DKUs, 64 HDUs

[Diagram: the BED boards (BED-0 and BED-1 pairs) in DKC-0 connect via 32 6Gbps SAS links (16 2-Wide cables) to the DKUs. Rack-00 holds the DKC plus DKU-00 and DKU-01 (2 DKUs, 16 HDUs); Rack-01 holds DKU-02 through DKU-04 and Rack-02 holds DKU-05 through DKU-07 (3 DKUs, 24 HDUs each). Each SFF HDU holds 16 disks.]

Page 25: Q and A