TRANSCRIPT
DB2 PureScale Active/Active Cluster

• Lock and buffer-cache management inherited from z/OS
• Automatic workload balancing
• Cluster of active DB2 nodes on Power servers
• Integrated cluster manager
• On-demand provisioning

[Diagram: clients connect to a single database view; DB2 members run on cluster nodes linked by an InfiniBand cluster interconnect to the DB2 Cluster Services (CS) facilities, one primary and one secondary; each member writes its own log; all members have shared storage access to one shared database.]
• The DB2 engines run on n nodes
– They cooperate with one another to deliver coherent access
• Data sharing architecture
– Shared access to the database
– Each node writes its own logs to shared disks
• PowerHA PureScale technology:
– Global locking & buffer management
– Synchronous duplexing ensuring high availability
• Low-latency, high-speed interconnect
– RDMA-capable interconnects (InfiniBand)
• "Clients connect anywhere, … see single database"
– Clients can connect to any node
– Automatic load balancing
• Integrated cluster services
– Detection of the loss or addition of a member
– Recovery automation, in partnership with STG & Tivoli
– General Parallel File System (GPFS)
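The member and cluster-services state described above can be inspected from any node of the instance. A minimal sketch, assuming a pureScale instance whose owner is named db2sdin1 (the instance name and member id below are hypothetical examples):

```shell
# List the state of all pureScale members and cluster caching
# facilities from any host in the cluster.
su - db2sdin1 -c "db2instance -list"

# Stop and restart a single member without affecting the other
# members of the cluster (member id 0 is illustrative).
su - db2sdin1 -c "db2stop member 0"
su - db2sdin1 -c "db2start member 0"
```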
IIC mock-up: building a PureScale proof of concept

• Power7 P770
• DS5XXX storage array
• SAN switch
• InfiniBand switch (ports / model):
– 288 ports: IBM 7874-240
– 128 ports: IBM 7874-120
– 48 ports: IBM 7874-040
– 24 ports: IBM 7874-024
PowerVM Refresher

IBM PowerVM Virtualization Features
• Processor
– Shared or dedicated LPARs
– Capped or uncapped LPARs
– Multiple shared processor pools
– Dynamic LPAR operations (add/remove)
– Shared dedicated LPARs
• I/O
– Shared and/or dedicated I/O
– Virtual Ethernet, virtual SCSI
– Dynamic LPAR operations (add/remove)
– Integrated Virtual Ethernet
– Virtual FC (N_Port ID Virtualization)
– Virtual Tape Support
• Memory
– Dedicated memory
– Active Memory Sharing
– Dynamic LPAR operations (add/remove)
– Active Memory Expansion (AIX 6.1)
• Other
– Integrated Virtualization Manager
– Live LPAR mobility
– Workload partitions (AIX 6.1)
– Workload partition mobility (AIX 6.1)
– Lx86 for Linux applications (Linux)
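The dynamic LPAR operations listed above are driven from the HMC command line with `chhwres`. A hedged sketch; the managed-system and partition names are hypothetical examples:

```shell
# Add 1024 MB of memory to a running partition:
chhwres -r mem -m P770-SN1234567 -o a -p db2member1 -q 1024

# Add 0.5 processing units and one virtual processor to a
# shared-processor partition:
chhwres -r proc -m P770-SN1234567 -o a -p db2member1 \
        --procunits 0.5 --procs 1

# The same resources are removed again with -o r:
chhwres -r mem -m P770-SN1234567 -o r -p db2member1 -q 1024
```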
[Diagram: the PowerVM Hypervisor hosting dedicated-processor LPARs alongside a shared processor pool split into Sub-Pool A and Sub-Pool B, each running several OS LPARs (one containing a WPAR), plus a Virtual I/O Server LPAR.]
N_Port ID Virtualization (Virtual FC)
N_Port ID Virtualization Simplifies Disk Management
• N_Port ID Virtualization
– Multiple virtual World Wide Port Names per FC port
– PCIe 8 Gb adapter
– LPARs have direct visibility on the SAN (zoning/masking)
– I/O virtualization configuration effort is reduced
[Diagram: comparison of the Virtual SCSI model, where the VIOS owns the FC adapters and exports generic SCSI disks to the AIX client, with N_Port ID Virtualization, where the client LPAR sees the SAN storage (DS8000, HDS) directly through the VIOS FC adapters.]
N_Port ID Virtualization
• N_Port ID Virtualization
– Virtualizes FC adapters
– Virtual WWPNs are attributes of the client virtual FC adapters, not of the physical adapters
– 64 WWPNs per FC port (128 per dual-port HBA)
• Customer Value
– Can use existing storage management tools and techniques
– Allows common SAN managers, copy services, backup/restore, zoning, tape libraries, etc.
– Transparent use of storage functions such as SCSI-2 reserve/release and SCSI-3 persistent reserve
– Load balancing across VIOS
– Allows mobility without manual management intervention
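Configuring the mapping behind this model is done on the VIOS with a few standard commands. A minimal sketch; the adapter names (fcs0, vfchost0) are illustrative:

```shell
# Check which physical FC ports are NPIV-capable and how many
# virtual WWPNs remain available on each:
lsnports

# Bind the server-side virtual FC adapter vfchost0 to the
# physical 8 Gb port fcs0:
vfcmap -vadapter vfchost0 -fcp fcs0

# Verify the mapping and the client LPAR's virtual WWPNs:
lsmap -all -npiv
```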
[Diagram: NPIV-enabled SAN, including tape; the hypervisor connects client LPARs (VIOC, each running multipath software) through virtual FC client adapters to virtual FC server adapters in the VIOS, which owns the physical 8 Gb FC ports.]

[Diagram: dual-VIOS configuration; VIOS 1 and VIOS 2 each map physical ports fcs0/fcs1 through server adapters vfchost0–vfchost3 and the hypervisor to client virtual FC adapters fcs0–fcs3 in Client LPAR 1 and Client LPAR 2; multipath software in each client reaches PV LUNs A and B.]
VIOS NG

PowerVM Editions with VIOS v2.2 Enhancements (* new functionality in the VIOS v2.2 release)

• Features compared across the Express, Standard, and Enterprise editions: Virtual I/O Server (clustered)*, Shared Storage Pools*, Thin Provisioning*, Linked Clones*, Shared Processor Pools, PowerVM Lx86, Active Memory Sharing, Live Partition Mobility
• Maximum VMs:
– Express: 2 per server + VIOS
– Standard: 10 per core (up to 1000)
– Enterprise: 10 per core (up to 1000)
• PowerVM Express Edition
– Evaluations, pilots, PoCs
– Single-server projects
• PowerVM Standard Edition
– Production deployments
– Server consolidation
• PowerVM Enterprise Edition
– Multi-server deployments
– Cloud infrastructure
IBM Confidential
Virtual I/O Server (Classic)

• IBM Systems Director provides centralized platform management and core management (inventory, configuration, health) alongside storage management

[Diagram: several Power systems, each with its own PHYP hypervisor, dual VIOS, LPARs, and per-system storage pools, all attached to a SAN (IBM, EMC, Hitachi, SVC, and other SAN storage).]
Virtual I/O Server v2.2
Extending the Storage Virtualization Layer Beyond a Single System

• IBM Systems Director provides centralized platform management and core management (inventory, configuration, health, provisioning) alongside storage management

[Diagram: multiple Power systems, each with PHYP and clustered VIOS NG partitions, sharing a single storage pool of SAN & NAS storage, with clone, snapshot, and migrate operations spanning systems.]
Mobility Solutions on Power Systems

• Live Partition Mobility: movement of the OS and applications to a different server with no loss of service, over a virtualized SAN and network infrastructure
• Live Application Mobility: movement of workload partitions (e.g. QA, Application Server, Web, and Billing WPARs) from one AIX system (AIX #1) to another (AIX #2), over NFS or a SAN network
• PowerVM Live Partition Mobility
– Move a running partition from one system to another with almost no impact on end users
– Requires POWER6/POWER7 and PowerVM Enterprise Edition; all I/O must go through the VIO Server
– AIX V5.3, AIX 6, AIX 7
• AIX Live Application Mobility
– Move a running WPAR from one AIX system to another with almost no impact on end users
– AIX 6.1 & Workload Partitions Manager
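A Live Partition Mobility operation is validated and then executed from the HMC with `migrlpar`. A minimal sketch; the managed-system and partition names are hypothetical examples:

```shell
# Validate that the partition can move (checks VIOS setup, RMC
# connectivity, and storage reachability on the target system):
migrlpar -o v -m P770-source -t P770-target -p db2member1

# Perform the live migration itself:
migrlpar -o m -m P770-source -t P770-target -p db2member1
```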
VIOS NG Base Capabilities
• vSCSI (standard vSCSI Target, including Persistent Reserve)
• Storage aggregation / pooling
• Thin provisioning (including notification framework)
• Thick provisioning
• Snapshot / rollback
• Consistency groups
• Linked-clones (space-efficient clones)
• Storage tiering
• Multiple storage pools
• Structured / distributed namespace
• CLI from any node in the cluster
Advanced Capabilities
• Import existing storage to VIOS NextGen
• Automated provisioning (storage, AMS, Hibernation)
• Live Storage Mobility
• Application consistent snapshot framework
• Consolidated backup / restore framework
• Virtual optical
• Pool Mirroring
• Storage isolation infrastructure for multi-tenancy
• Server / storage integration (accelerate/offload data ops to SAN)
• NAS support (NAS filer on the back-end)
• vSCSI device data encryption, compression, de-dup
• Centralized management console (GUI)
VIOS NextGen Phase 1 (12/2010)
• VIOS NextGen Phase 1 Features – GA December 2010
– vSCSI enhanced for persistent reserve
– Storage aggregation / pooling (Shared storage pool)
– Thin provisioning
– CLI management
– Single node VIOS
– Dual VIOS (redundant) config option with LVM mirroring
– Max physical disks: 128
– Max virtual disks in storage pool: 200
– Max client LPARs per VIOS: 20
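The shared storage pool and thin provisioning delivered in this phase are driven from the VIOS command line. A hedged sketch using the VIOS 2.2 commands; cluster, pool, disk, and adapter names are illustrative:

```shell
# Create a single-node cluster backing the shared storage pool
# (hdisk2 holds the repository, hdisk3/hdisk4 back the pool):
cluster -create -clustername demo_cl -repopvs hdisk2 \
        -spname demo_sp -sppvs hdisk3 hdisk4 -hostname vios1

# Carve a thin-provisioned 10 GB backing device out of the pool
# and map it to a client through virtual SCSI adapter vhost0:
mkbdsp -clustername demo_cl -sp demo_sp 10G -bd vdisk_lpar1 \
       -vadapter vhost0
```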
2011 VIOS NextGen Release
• 2011 Release
– 10 node cluster
– 1024 Virtual disks
– 40 client LPARs per VIOS (400 max clients; 200 clients with redundant VIOSs)
– 128 physical disks in the pool
– Snapshot / Rollback (device and consistency group)
– Linked Clones
– Live Storage Mobility
– Thick provisioned devices
– Image Management, Cluster Management (IBM Systems Director)
– Legacy VIOS capabilities (Client LPAR Mobility, LPM Data Mover, AMS PSP)
– Non-disruptive cluster upgrade
– 3rd party multipathing software
Contacts
• IIC: Vann LAM – [email protected]
• SWG: Patrick DIMPRE – [email protected]
• STG: Thierry DESBOURDES – [email protected]