How Networking Affects Flash Storage Systems

Gunna Marripudi
Tameesh Suri
Samsung Semiconductor Inc.
Flash Memory Summit 2015, Santa Clara, CA

SSD Interface Improvements

• Latency is ~60% lower
• IOPS is 2 to 8x better
• Throughput is 2 to 6x better

High-performance networking and higher-performance SSD devices bring new design considerations to achieve high performance!

Storage System Design Choices

• Placement of NVMe drives and NICs
• Interrupt management
• CPU cores

Symmetry vs. Asymmetry

Local Socket Latency*: 80-110 ns
Remote Socket Latency*: 120-160 ns

[Figure: NIC and NVMe drive placement relative to the two CPU sockets in the symmetric and asymmetric configurations]

* Idle latency measured on different generations of Intel processors using https://software.intel.com/en-us/articles/intelr-memory-latency-checker
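The deck does not show how to check device placement; as a rough illustration, here is a minimal Python sketch, assuming a Linux host, that reads sysfs to report which NUMA node (socket) a NIC and an NVMe drive are attached to. The device names ("eth2", "nvme0n1") are placeholders, and the sysfs layout varies slightly across kernel versions, so the sketch simply walks up the device path until it finds a numa_node attribute.

```python
#!/usr/bin/env python3
"""Report the NUMA node of a NIC and an NVMe drive (illustrative sketch)."""
import os

def numa_node_of(sysfs_dev_path):
    """Walk up from a sysfs device directory until a numa_node file appears."""
    path = os.path.realpath(sysfs_dev_path)
    while path != "/":
        candidate = os.path.join(path, "numa_node")
        if os.path.exists(candidate):
            with open(candidate) as f:
                return int(f.read())
        path = os.path.dirname(path)
    return -1  # platform did not report a node

if __name__ == "__main__":
    nic = numa_node_of("/sys/class/net/eth2/device")   # hypothetical NIC name
    ssd = numa_node_of("/sys/block/nvme0n1/device")    # first NVMe namespace
    print("NIC on node %d, NVMe drive on node %d" % (nic, ssd))
    if nic == ssd:
        print("Same socket: traffic between them stays on local memory/PCIe paths")
    else:
        print("Different sockets: traffic crosses the inter-socket link")
```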

Test Configuration
• System: Dual-socket x86 server
• 2x 40GbE RDMA NICs
• 10 NVMe SSDs
• Linux 3.14 kernel
• iSER LIO stack

Performance Comparison

Storage stack architecture will influence the choice between symmetric and asymmetric configurations.

4K Random Read: Asymmetric 1x, Symmetric 2.1x
128K Sequential Read: Asymmetric 1x, Symmetric 1x

Interrupt Management

• NVMe drives and NICs generate a large number of interrupts
• They can target interrupts at multiple cores
• Interrupt-to-CPU-core affinity improves performance
• Operating system tools that manage interrupts do so without any specific knowledge of the NIC and SSD association (a pinning sketch follows below)
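The slides do not include the commands used for the explicit assignment; the following is a minimal sketch of the idea on Linux, assuming the IRQ names visible in /proc/interrupts identify the device queues and that irqbalance has been stopped so it does not override the pinning. The NIC name ("eth2") and the chosen core split are placeholders, not values from the deck.

```python
#!/usr/bin/env python3
"""Pin NIC and NVMe interrupts to chosen CPU cores (illustrative sketch, run as root)."""
import re

def irqs_matching(pattern):
    """Return IRQ numbers whose name column in /proc/interrupts matches pattern."""
    irqs = []
    with open("/proc/interrupts") as f:
        for line in f:
            fields = line.split()
            if fields and fields[0].rstrip(":").isdigit() and re.search(pattern, fields[-1]):
                irqs.append(int(fields[0].rstrip(":")))
    return irqs

def pin(irqs, cores):
    """Spread the given IRQs round-robin over the given CPU core ids."""
    for i, irq in enumerate(irqs):
        with open("/proc/irq/%d/smp_affinity_list" % irq, "w") as f:
            f.write(str(cores[i % len(cores)]))

if __name__ == "__main__":
    pin(irqs_matching(r"eth2"), cores=[0, 1, 2, 3])        # NIC queues -> cores 0-3
    pin(irqs_matching(r"nvme"), cores=[4, 5, 6, 7, 8, 9])  # NVMe queues -> cores 4-9
```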

Test Configuration

• System: Dual-socket x86 server
• 1x 40GbE RDMA NIC
• 5 NVMe SSDs
• Linux 3.14 kernel
• iSER LIO stack

Performance Comparison (Symmetric Configuration)

Two approaches to interrupt management:
• Using irqbalance in Linux
• Assigning NIC and NVMe drive interrupts to specific CPU cores

Interrupt Management in Action (Symmetric Configuration)

• No IRQ balancing
• Manual NIC and NVMe interrupt assignment to CPU cores (a quick way to verify the assignment is sketched below)
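A simple way to confirm where NIC and NVMe interrupts actually land (not shown in the deck) is to read back the per-CPU counts from /proc/interrupts. A minimal sketch, assuming the standard layout of that file; the "nvme" name pattern is an example, and NIC queue names depend on the driver:

```python
#!/usr/bin/env python3
"""Sum per-CPU interrupt counts for IRQs whose name matches a pattern (illustrative sketch)."""
from collections import defaultdict

def per_cpu_counts(name_pattern):
    totals = defaultdict(int)
    with open("/proc/interrupts") as f:
        cpus = f.readline().split()                  # header row: CPU0 CPU1 ...
        for line in f:
            fields = line.split()
            if not fields or not fields[0].rstrip(":").isdigit():
                continue                             # skip non-numeric rows (ERR, MIS, ...)
            if name_pattern not in fields[-1]:
                continue
            for cpu, count in zip(cpus, fields[1:1 + len(cpus)]):
                totals[cpu] += int(count)
    return totals

if __name__ == "__main__":
    for cpu, count in sorted(per_cpu_counts("nvme").items()):
        print(cpu, count)
```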

CPU Core Count in the System

Resource contention comes from 3 sources:
• High-speed NICs
• High-performance NVMe drives
• Storage stack processing

High-volume dual-socket servers can support a wide range of core counts: 8 to 36 cores* (with HT off). A sketch for listing the cores available on each socket follows below.

* Source: http://ark.intel.com/products/family/78583/Intel-Xeon-Processor-E5-v3-Family#@Server
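As a small aid for that budgeting exercise (not part of the slides), the cores exposed by each NUMA node can be enumerated from sysfs before deciding how many to dedicate to NIC interrupts, NVMe interrupts, and storage stack processing. A minimal sketch, assuming the standard Linux NUMA sysfs layout; it only reports counts and makes no recommendation:

```python
#!/usr/bin/env python3
"""Count the CPUs exposed by each NUMA node (illustrative sketch)."""
import glob
import os

def cores_per_node():
    nodes = {}
    for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        cpus = glob.glob(os.path.join(node_dir, "cpu[0-9]*"))   # cpu0, cpu1, ... symlinks
        nodes[os.path.basename(node_dir)] = len(cpus)
    return nodes

if __name__ == "__main__":
    for node, count in sorted(cores_per_node().items()):
        print("%s: %d CPUs" % (node, count))
```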

Performance Comparison

• Storage Stack 1: 1x 40GbE, 4 NVMe SSDs
• Storage Stack 2: 4x 40GbE, 20 NVMe SSDs

Summary

• Placement of NVMe drives and NICs: choose a symmetric or asymmetric design as driven by the storage stack architecture
• Interrupt management: explicit mapping of NIC and NVMe drive interrupts to CPU cores can improve performance
• CPU cores in the system: use the appropriate core count to accommodate increased contention for CPU resources

High-performance networking complements higher-performance SSDs to deliver high performance!