TRANSCRIPT

  • Slide 1/19

    STORAGE NETWORKING

    A quick introduction to SANs and Panasas ActivStor

    Kevin Haines, eScience Centre, STFC, RAL

  • Slide 2/19

    Wikipedia defines a SAN:

    "A storage area network (SAN) is an architecture to attach remote computer storage devices (such as disk arrays, tape libraries, and optical jukeboxes) to servers in such a way that the devices appear as locally attached to the operating system."

    http://en.wikipedia.org/wiki/Storage_area_network
  • Slide 3/19

    It's like a LAN...

    SAN Topology

    [Diagram: Host/HBA 1, Host/HBA 2, and Host/HBA 3 connected through a SAN Switch to Storage1 and Storage2]

  • Slide 4/19

    It's like a LAN... it has a switch...

    SAN Topology

    [Diagram: same topology as above]

  • Slide 5/19

    It's like a LAN... it has a switch, and network cards.

    SAN Topology

    [Diagram: same topology as above]


  • Slide 7/19

    It's like a LAN... but a little different:

    SAN Topology

    [Diagram: same topology as above]

    Fibre connectors, 2/4/8 Gb/s

    WWNs (World Wide Names)

    Initiators and Targets
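    To make the differences concrete, here is a minimal Python sketch (illustrative only, with made-up WWN values): fabric endpoints are identified by 64-bit World Wide Names rather than MAC addresses, and each port acts as an initiator (a host HBA issuing I/O) or a target (a storage port serving it).

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Port:
          wwn: str   # World Wide Name, the FC analogue of a MAC address
          role: str  # "initiator" (host HBA) or "target" (storage port)

      hba1 = Port(wwn="10:00:00:00:c9:aa:bb:01", role="initiator")
      storage1 = Port(wwn="21:00:00:00:c9:cc:dd:02", role="target")

      def can_talk(a: Port, b: Port) -> bool:
          # I/O always flows between an initiator and a target,
          # never initiator-to-initiator or target-to-target.
          return {a.role, b.role} == {"initiator", "target"}

      assert can_talk(hba1, storage1)
      assert not can_talk(hba1, Port("10:00:00:00:c9:ee:ff:03", "initiator"))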

  • Slide 8/19

    Controlling Access

    Zoning, implemented by the switch:

    Zone 1 = HBA1 + Storage1

    Zone 2 = HBA2 + HBA3 + Storage2

    [Diagram: same topology, with the switch enforcing the two zones]
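    A minimal sketch of what the switch enforces, using the zone table from this slide: two ports may communicate only if at least one zone contains both of them.

      # Zone table as configured on the SAN switch (from the slide).
      zones = {
          "zone1": {"HBA1", "Storage1"},
          "zone2": {"HBA2", "HBA3", "Storage2"},
      }

      def zoned_together(port_a: str, port_b: str) -> bool:
          # The switch forwards traffic only within a shared zone.
          return any(port_a in members and port_b in members
                     for members in zones.values())

      assert zoned_together("HBA1", "Storage1")      # permitted by zone1
      assert not zoned_together("HBA1", "Storage2")  # no shared zone: blocked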

  • Slide 9/19

    Controlling Access

    LUN masking, implemented on the storage array:

    Zone 1 = HBA1 + Storage1

    Zone 2 = HBA2 + HBA3 + Storage2

    [Diagram: same topology; Storage2 now shows logical disks LD1, LD2, and LD3, each masked to specific HBAs (labels: HBA2, HBA2, HBA3)]
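    LUN masking adds a second, array-side filter: even though zoning lets both HBA2 and HBA3 reach Storage2, the array only presents each logical disk to the HBAs in its mask. A minimal sketch follows; the exact LD-to-HBA assignments are illustrative, since the slide's labels don't pin down the mapping.

      # Per-LUN masks held by the storage array (illustrative mapping).
      lun_masks = {
          "LD1": {"HBA2"},
          "LD2": {"HBA2"},
          "LD3": {"HBA3"},
      }

      def visible_luns(hba: str) -> list[str]:
          # The LUNs the array reports to this HBA during a scan.
          return [ld for ld, allowed in lun_masks.items() if hba in allowed]

      print(visible_luns("HBA2"))  # ['LD1', 'LD2']
      print(visible_luns("HBA3"))  # ['LD3']
      print(visible_luns("HBA1"))  # [] -- masked out (and zoned out)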


  • Slide 11/19

    Increasing Resilience

    Multipath

    [Diagram: one host with HBA 1 and HBA 2, each connected through its own switch (SAN Switch 1, SAN Switch 2) to Storage1]
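    A toy failover sketch of the idea (hypothetical names, not a real multipath driver): the host reaches the same storage over two independent paths, and the multipath layer retries I/O on the surviving path when one fails.

      class PathFailed(Exception):
          pass

      failed_paths = {"hba1-sw1"}  # simulate losing SAN Switch 1

      def read_block(path: str, lba: int) -> bytes:
          # Stand-in for a real block read down one HBA/switch path.
          if path in failed_paths:
              raise PathFailed(path)
          return b"\x00" * 512

      def multipath_read(lba: int) -> bytes:
          # Try each path to Storage1 in turn; fail only if both are down.
          for path in ("hba1-sw1", "hba2-sw2"):
              try:
                  return read_block(path, lba)
              except PathFailed:
                  continue
          raise IOError("all paths to Storage1 are down")

      assert multipath_read(0) == b"\x00" * 512  # served via hba2-sw2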

  • Slide 12/19

    Questions about SANs?

  • Slide 13/19

    Panasas ActivStor


  • Slide 15/19

    Panasas ActivStor

    Director Blades:

    Metadata and volume management services

    NFS access gateway

  • Slide 16/19

    Panasas ActivStor

    Storage Blades:

    Two disks (up to 2 TB per blade)

    Two Gb/s Ethernet ports (failover mode)

    11 per shelf (must be managed by a Director Blade)
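    (For scale: at the maximum 2 TB per blade, a fully populated shelf of 11 Storage Blades holds up to 22 TB raw.)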

  • Slide 17/19

    Panasas ActivStor Resilience

    Data is striped across all Storage Blades (RAID5 or RAID1)

    One or more Storage Blades' worth of space reserved for failures

    System monitoring for pre-emptive action (Blade Drain)

    Two or more Director Blades can provide failover for each other
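    A toy XOR-parity example of the RAID5-style protection described above (not the actual Panasas implementation): each stripe spreads data chunks across blades plus one parity chunk, so any single blade's chunk can be rebuilt from the others.

      def xor_parity(chunks: list[bytes]) -> bytes:
          # XOR all chunks together byte by byte.
          out = bytearray(len(chunks[0]))
          for chunk in chunks:
              for i, byte in enumerate(chunk):
                  out[i] ^= byte
          return bytes(out)

      data_chunks = [b"AAAA", b"BBBB", b"CCCC"]  # chunks on three blades
      parity = xor_parity(data_chunks)           # stored on a fourth blade

      # Lose blade 2: rebuild its chunk from the survivors plus parity.
      rebuilt = xor_parity([data_chunks[0], data_chunks[2], parity])
      assert rebuilt == data_chunks[1]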

  • Slide 18/19

    Panasas ActivStor Performance

    DirectFLOW clients:

    Available for most major Linux distributions

    Communicate directly with the Storage Blades

    RAID computations performed by the client

    ~5000 clients supported (12000 option)
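    A minimal sketch of that data path (hypothetical function names, not the real DirectFLOW API): the client asks a Director Blade for the file's layout once, then reads the chunks from the Storage Blades in parallel rather than through a central file server.

      from concurrent.futures import ThreadPoolExecutor

      def get_layout(path: str) -> list[tuple[str, int]]:
          # Stub for the metadata call to a Director Blade:
          # which Storage Blade holds which chunk of the file.
          return [("blade-1", 0), ("blade-2", 1), ("blade-3", 2)]

      def read_chunk(blade: str, index: int) -> bytes:
          # Stub for a network read straight from one Storage Blade.
          return f"<chunk {index} from {blade}>".encode()

      def directflow_read(path: str) -> bytes:
          layout = get_layout(path)            # one metadata round-trip
          with ThreadPoolExecutor() as pool:   # then parallel blade reads
              chunks = pool.map(lambda bc: read_chunk(*bc), layout)
          return b"".join(chunks)

      print(directflow_read("/panfs/example"))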

  • Slide 19/19

    Panasas ActivStor Performance

    DirectFLOW clients (as above), with results showing the scalability claim holds:

    RAL: 1.2 GB/s from 22 nodes (2 shelves)

    RoadRunner: 60 GB/s (103 shelves)
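    The per-shelf arithmetic behind those two figures bears the claim out: throughput per shelf stays roughly constant from 2 shelves to 103.

      ral = 1.2 / 2          # GB/s per shelf at RAL        -> 0.60
      roadrunner = 60 / 103  # GB/s per shelf on RoadRunner -> ~0.58
      print(f"RAL: {ral:.2f} GB/s/shelf; RoadRunner: {roadrunner:.2f} GB/s/shelf")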