
Page 1: Power Systems 2009 Hardware

© 2009 IBM Corporation
STG Technical Conferences 2009

SMP15: Power Systems 2009 Hardware Announcements and Future Insights

Mark Olson, IBM WW Power Systems Product Manager


Agenda:

• Hardware Future insights

– Processor

– I/O

• 2009 Hardware Announcements

– TODAY!!!! 20 October, IBM announces …

– April/May 2009

Page 2: Power Systems 2009 Hardware


What is your background?

• OS interest?
– AIX/Linux
– IBM i

• POWER6 hardware models?
– Blades
– 520/550
– 560/570
– 595

• IBM Power System client or sales?

• Extremely familiar with the 2009 April Power announcements?


Processor Technology Roadmap

2001 – POWER4 (180 nm)
– Dual Core
– Chip Multi Processing
– Distributed Switch
– Shared L2
– Dynamic LPARs (32)

2004 – POWER5 (130 nm)
– Dual Core
– Enhanced Scaling
– SMT
– Distributed Switch +
– Core Parallelism +
– FP Performance +
– Memory Bandwidth +
– Virtualization

2007 – POWER6 (65 nm)
– Dual Core
– High Frequencies
– Virtualization +
– Memory Subsystem +
– Altivec
– Instruction Retry
– Dynamic Energy Management
– SMT +
– Protection Keys

2010 – POWER7 (45 nm)
– Multi Core
– On-Chip eDRAM
– Power Optimized Cores
– Memory Subsystem ++
– SMT++
– Reliability +
– VSM & VSX
– Protection Keys +

201x – POWER8* (In Design)

* or whatever it will be named

Page 3: Power Systems 2009 Hardware


POWER7 Technology Directions

• More cores per chip

• Better per-core performance → better system performance

• Enhanced "Multi Threading" capabilities

• Advanced hardware features
– Next-generation memory: DDR3
– Greater I/O bandwidth

• Innovative power management capabilities

• Smooth technology upgrades

See SMP33 for more detail


POWER6 Product Offerings – POWER7 Similar

Segments: High-End, Compute, Midrange, Entry, Blades; multiple operating systems

• POWER6 was introduced over 2007–2008

• POWER7 rollout will be phased as well

Page 4: Power Systems 2009 Hardware


Key Evolutionary I/O Technology Transitions

1. SCSI to SAS

2. PCI / PCI-X / PCI-X DDR to PCIe

3. RIO/HSL to 12X

4. IBM i : IOP-based to Smart IOA

2008: PCIe available in 520/550/570 CEC; 2009: expanded to I/O drawers

Disk = 3.5-inch & SFF; removable media = SAS & SATA

RIO/HSL to RIO-2/HSL-2, then 12X (SDR) and 12X DDR

© 2009 IBM Corporation8

STG Technical Conferences 2009

Power Systems 2009 Hardware Announcements and Future Insights

POWER7 capable

I/O Drawer Evolution – 2009 Adds PCIe & SFF

Pre-2000: SPD, with SPD & SCSI 10k rpm disk
2000: HSL/RIO, with PCI & SCSI 10k rpm disk
2002/3: HSL/RIO-2, with PCI-X & SCSI 15k rpm disk
2007/8: 12X, with PCI-X DDR & separate disk drawer
2009: 12X DDR, with PCIe & SAS SFF

• 2009 drawers offer many advantages, but will require normal due diligence in planning the specific configuration

• Significant I/O technology transition

Size of change at each transition, by starting technology:

           2000         2002/3        2007/8        2009
Disk       Small delta  Modest delta  Small delta   Big delta
PCI        Big delta    Small delta   Small* delta  Big delta
Loop       Big delta    Small delta   Big delta     Small delta

* IBM i & IOP = big delta

Page 5: Power Systems 2009 Hardware


PCI-X to PCIe Adapter Transition

Adapter type             PCI-X      PCIe
LAN (Ethernet)           Y          Y
WAN / Async              Y          Y
SCSI                     Y          No plans
SAS                      Y          Y (except large cache, not in 2009)
Fibre Channel            Y          Y
Twinax                   Y          No plans
iSCSI                    Y          Not in 2009
Int xSeries Card (IXS)   Y          No adapter plans
Crypto                   Y          Not in 2009
USB                      Y          Y
IOP                      Y          No plans
FCoE (FCoCEE)            No plans   Y

Chart mixes AIX & IBM i adapters. Some adapters are operating-system specific.
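When planning an adapter migration, the chart above can be treated as a simple lookup. A minimal sketch in Python, assuming a hand-built dictionary transcribed from the chart (the dictionary and function are illustrative, not any IBM configurator API):

```python
# PCIe availability per adapter type, transcribed from the chart above.
# "Y..." = a PCIe version exists; anything else is the planning note.
PCIE_STATUS = {
    "LAN (Ethernet)": "Y",
    "WAN / Async": "Y",
    "SCSI": "No plans",
    "SAS": "Y (except large cache, not in 2009)",
    "Fibre Channel": "Y",
    "Twinax": "No plans",
    "iSCSI": "Not in 2009",
    "Int xSeries Card (IXS)": "No adapter plans",
    "Crypto": "Not in 2009",
    "USB": "Y",
    "IOP": "No plans",
    "FCoE (FCoCEE)": "Y",
}

def pcie_ready(adapter_type: str) -> bool:
    """True if the chart lists a PCIe version of this adapter type."""
    return PCIE_STATUS.get(adapter_type, "").startswith("Y")

# SCSI has no PCIe successor, so a SAS replacement should be planned.
assert pcie_ready("LAN (Ethernet)")
assert not pcie_ready("SCSI")
```

This is just the chart as data; the real planning decision also depends on the operating system, since some adapters are OS-specific.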


Purchase a Power 595 Today - your Footprint for the Future

• Components replaced include:
– Processor books, including memory
– System Controllers (2)

• 12X I/O drawers & GX adapters will migrate

• Refresh RIO-based I/O drawers & GX adapters with 12X technology now to optimize performance and prepare for the POWER7 transition

Power 595: Simple processor book upgrade from POWER6 to POWER7

Smooth upgrades will also be available for Power 570 systems, enabling clients to quickly transition to POWER7 technology while leveraging their current investment

IBM announces plans to enhance your investment in Power servers with upgrades to POWER7

All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Any reliance on these Statements of Direction is at the relying party's sole risk and will not create liability or obligation for IBM.

Page 6: Power Systems 2009 Hardware


High End Transition: POWER6 → POWER7

New components:
– Nodes
– Node Controllers
– System Controllers

Preserved:
– Power
– I/O (12X)


Public Information

• SOD for Enterprise Power Systems, July 2009: www.ibm.com/systems/power/hardware/sod.html

– POWER6 570/595 upgrades to POWER7

– Support of 12X I/O on POWER7

• Planning information, August 2009: www.ibm.com/systems/power/hardware/sod2.html

– POWER5 to POWER7 upgrades go through POWER6

– POWER7 will support 12X drawers, but not RIO/HSL drawers/towers

– POWER7 will not support small SCSI disk (36GB or smaller) or 10k rpm SCSI disk

– POWER7 will not support QIC tape

– POWER7 will not support IOPs and IOP-based PCI adapters

All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Any reliance on these Statements of Direction is at the relying party's sole risk and will not create liability or obligation for IBM.

Page 7: Power Systems 2009 Hardware


Statement of Direction for Enterprise Power Systems

IBM is committed to enhancing their clients’ investment in the IBM Power Systems family of servers.

Based on this commitment, IBM plans to provide an upgrade path from the current IBM Power® 595 server with 12X I/O to IBM's next-generation POWER7(TM) processor-based high-end server. The upgrade is planned as a simple replacement of the processor books and two system controllers with new POWER7 components, within the existing system frame.

IBM also plans to provide an upgrade path from the current IBM Power® 570 server with 12X I/O to IBM's next generation POWER7 processor-based modular enterprise server.

Enterprises with multiple systems leveraging PowerVM Live Partition Mobility may use this function to maintain application availability during an upgrade process that can

execute during normal business hours instead of at night or over a weekend.

IBM's Power 570 and 595 servers are engineered to deliver the highest levels of Power Systems reliability, availability and serviceability. IBM continues to execute a roadmap with innovations designed to help clients achieve continuous application availability, while easily and cost-effectively transitioning to new generations of technology.


All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Any reliance on these Statements of Direction is at the relying party's sole risk and will not create liability or obligation for IBM.


Planning Statements

POWER5 Upgrades (posted August 2009)
• IBM plans that any upgrades preserving the same serial number into a POWER7 server will be from a POWER6 server. Clients on POWER5 and earlier technology will need to upgrade to POWER6 prior to being able to upgrade to POWER7 if they want to preserve their serial number.

I/O Drawers (posted August 2009)
• IBM plans that the POWER7-based systems will support the existing 12X I/O drawers currently supported on POWER6 systems. These include the #5796/7314-G30, #5797/5798, #5802, and #5803/5873. The older/slower RIO/HSL-attached I/O drawers will not be supported. POWER6 clients should consider replacing RIO/HSL I/O drawers with newer-technology drawers to smooth eventual adoption of POWER7 servers. RIO/HSL I/O drawers include: #0595/5095/7311-D20, #5790/7311-D11, #5094/5294/5096/5296, #5088/0588 and #5791/5794/7040-61D.

SCSI Disk Drives (posted August 2009)
• IBM plans that POWER6 systems will be the last servers to support the attachment of SCSI disk drives which are 36GB or smaller, or the attachment of 10k rpm SCSI drives. Clients currently using smaller or slower SCSI drives should consider replacing these drives with newer-technology drives which are currently supported on the POWER6 systems to smooth eventual adoption of POWER7 servers.

QIC Tape (posted August 2009)
• IBM plans that POWER6 systems will be the last servers to support the use of QIC (Quarter Inch Cartridge) tape drives. The QIC media is also known as "SLR". Clients currently using QIC tape drives should consider migrating to newer-technology media/drives which are currently supported on the POWER6 systems to smooth eventual adoption of POWER7 servers.

IOP and IOP-based PCI adapters (posted August 2009)
• IBM plans that the POWER6-based systems (all models) will be the last models to support the IBM i IOP and IOP-based adapters. IOPless (Smart IOA) options are available for all I/O attachments (except Twinax and IXS) and provide more efficient attachment of I/O. Clients using IOP-based I/O should make plans to move off IOPs to smooth the eventual adoption of POWER7 servers. IOP feature codes are #2843, #2844, #2847 (SAN Boot) and #3705. Note that there can be differences in the specific devices supported with or without an IOP by IBM i. For example, some older tape libraries such as the 3590 require an IOP-based adapter. Or there may be a functional difference without an IOP; for example, SDLC or X.25 on WAN/LAN adapters require an IOP. A partial list of adapters supported on POWER6 servers which require an IOP includes:

– #4746 Twinax Workstation Controller (which means no twinax displays/printers unless an OEM conversion device is used)
– #4812/4813 Integrated xSeries Server (IXS) (use iSCSI alternative)
– #2757/2780/5580/5778 Disk Controllers (use newer disk controllers)
– #2787/5761/5760 Fibre Channel Adapters (use newer Fibre Channel adapters)
– #2749 HVD SCSI Adapter or Ultra Media Controller (use newer technology; adapter and probably device also)

All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Any reliance on these Statements of Direction is at the relying party's sole risk and will not create liability or obligation for IBM.

Page 8: Power Systems 2009 Hardware


Agenda:

• Hardware Future insights

– Processor

– I/O

• 2009 Hardware Announcements

– TODAY!!!!!!! Oct 20th IBM announces …

– April/May 2009


TODAY!!!!! IBM Announces …...

Highlights:

–Fibre Channel over Ethernet (FCoE)

–Diskless 19-inch PCIe 12X I/O drawer

–Additional SFF disk drives

–Solid State Drive (SSD) enhancements

– IBM i supports #5903 PCIe 380MB Cache RAID Adapter

–USB Removable Disk Drive

–New HMC

– I/O enhancements for Power BladeCenter

Announce: 20 October; eConfig: 20 October; GA: 30 October

Page 9: Power Systems 2009 Hardware


April Announcement Highlights

• Faster Power 520/550

– Power 520 ….. 4.7 GHz

– Power 550 ….. 5.0 GHz

• More powerful Power BladeCenter

– JS23/JS43 …. 4-core & 8-core 4.2 GHz

– I/O enhancements

• New I/O for POWER6

– PCIe 12X I/O drawers

– New PCIe adapters

– PCI-X large cache SAS disk controller

– High performance solid state drive (SSD)

• New I/O for POWER5

– SAS I/O introduced (some limitations)

– Smart Fibre Channel (IBM i 6.1)

• HMC enhancements

Announce: 28 April; GA: May & June/July


Combined 2009 Agenda (April + October)

• Faster Power 520/550

– Power 520 ….. 4.7 GHz

– Power 550 ….. 5.0 GHz

• More powerful Power BladeCenter

– JS23/JS43 …. 4-core & 8-core 4.2 GHz

– I/O enhancements

• New I/O for POWER6

– PCIe 12X I/O drawers & SFF Disk

– New PCIe adapters: SAS PCIe, SAS PCI-X, FCoE

– High performance solid state drive (SSD)

– USB Removable Disk Drive

• New I/O for POWER5

Page 10: Power Systems 2009 Hardware

IBM Training

Power 520 Enhancements: POWER6+

Pre May 2009:
• 1-core 4.2 GHz
• 2-core 4.2 GHz
• 4-core 4.2 GHz

Post May 2009:
• 1-core 4.2 GHz
• 2-core 4.2 GHz
• 2-core 4.7 GHz
• 4-core 4.2 GHz*
• 4-core 4.7 GHz

* 4-core 4.2 GHz withdrawal from marketing announced, effective October 2009

Note: IBM i 6.1 required for 4.7 GHz


Power 520 Enhancements: Processor Features

Power 520 post May 2009: 1-core 4.2 GHz; 2-core 4.2 or 4.7 GHz; 4-core 4.2* or 4.7 GHz

• 4.7 GHz POWER6+ with L3 cache

– 4.2 GHz Power 520 has no L3 cache; the 4.7 GHz has 32MB per chip

– POWER6+ = GHz faster and additional AIX enablers

• New processor features

– #5577 2-core 4.7 GHz

– #5587 4-core 4.7 GHz

• Same ordering product structure and upgrade structure

[Charts: rPerf and CPW for 1-, 2-, and 4-core Power 520 configurations at 4.2 GHz vs 4.7 GHz]

Page 11: Power Systems 2009 Hardware


Power 520 Rack & Tower

YesRedundant Cooling

3 USB, 2 Serial, 2 HMC

Optional: SAS portIntegrated Ports

Optional

Up to 40 partitions

Yes / Max: 4 (PCIe) / 8 (PCI-X)

GX Bus connection: RIO2 / IB / IB2

1 Slim-line DVD

1 Half High Tape

• Dual Port 10/100/1000 Ethernet

•Optional: Quad 1Gbt or Dual 10Gbt

Yes Optional: RAID support

• PCIe: 3 Slots

• PCI-X 266: 2 Slots • GX Bus: 2 Slots (2nd slot requires 4 cores)

1 GX slot shares space with PCIe slot

6 DASD ( 3.5”)

Optional: 8 SFF DASD ( 10 or 15K )

Optional SSD support

Up to 64GB (Buffered )

POWER6 1 or 2 Cores 4.2 GHz

POWER6+ 2 or 4 Cores 4.7 GHz

L3 Cache: 32MB per chip with 4.7 GHz

Power 520 Rack & Tower 8204-E4A

SATA Media Bays

Redundant Power

Dynamic LPAR

Remote IO Drawers

Integrated Virtual

Ethernet

Integrated SAS / SATA

Expansion

Internal SAS Disks

DDR2 Memory

Architecture


Power 520 Enhancements: Product Structure

• 4.7 GHz product structure uses the same product structure as 4.2 GHz for new server acquisitions

– Same Express offerings

• Same AIX/Linux packages - same minimums/rules

• Same IBM i editions - same minimums/rules – just pick the appropriate GHz when ordering

– Same IBM i CBU offering/rules - #0444 specify

– IBM i 6.1 required for 4.7 GHz 520

AIX/Linux Express Offerings

• 1-core 4.2 GHz config

• 2-core 4.2 GHz config

• 2-core 4.7 GHz config

• 4-core 4.2 GHz config

• 4-core 4.7 GHz config

• Oracle 2-core 4.2 GHz

• Oracle 4-core 4.2 GHz

• SAP 2-core 4.2 GHz

• SAP 4-core 4.2 GHz

Select rack or tower

IBM i Express Offerings

• #9633 1-core entry 4.2 GHz

• #9634 1-core growth 4.2 GHz

• #9636 2-core 30-user 4.2 or 4.7 GHz

• #9637 2-core 150-user 4.2 or 4.7 GHz

• #9638 2-core unlimited 4.2 or 4.7 GHz

• #9639 4-core 50-user 4.2 or 4.7 GHz

• #9640 4-core 150-user 4.2 or 4.7 GHz

• #9643 4-core unlimited 4.2 or 4.7 GHz

• #9635 Solution edition 4.2 or 4.7 GHz

Select rack or tower

Power 520 post May 2009: 1-core 4.2 GHz; 2-core 4.2 or 4.7 GHz; 4-core 4.2* or 4.7 GHz

Note: 4.7GHz QPRCFEAT = #5577 and 5587

Page 12: Power Systems 2009 Hardware


Power 520 Upgrade Paths

POWER6/POWER6+ targets: 8203-E4A 520 1-core, 2-core and 4-core (unified system)

• POWER5/5+ 9406-520 (System i) → 8203-E4A 520 2-core. May 2009: same 9406 paths as before, but now to a 4.2 or 4.7 GHz 2-core 8203 (during the 9406 upgrade); IBM i 6.1 needed for 4.7 GHz.

• POWER5/5+ 9111-520 (System p): no upgrade paths

• POWER5+ 9405-520, 9406-525 and 9407-515 (Power with i flavor): no upgrade paths

• POWER6 9408-M25 520 2-core: conversion to 8203 (2-core to 2-core)

• POWER6 9407-M15 520 1-core: conversion to 8203 (1-core to 1-core)

• No upgrade paths into the 4-core

NOTE: No upgrade paths from 4.2 to 4.7 GHz keeping the same serial number


Power 550 Enhancements: POWER6+

Pre May 2009: 3.5 GHz, 4.2 GHz

Post May 2009: 3.5 GHz, 4.2 GHz, 5.0 GHz

Page 13: Power Systems 2009 Hardware


Power 550 Enhancements: Processor Features

Power 550 post May 2009: 3.5 GHz, 4.2 GHz, 5.0 GHz

New POWER6+ 5.0 GHz processor card features

– #4967 2-core 5.0 GHz

– POWER6+ = faster GHz and additional AIX enablers

– Same ordering product structure and "same serial number" upgrade structure

[Charts: CPW and rPerf for 4-, 6-, and 8-core Power 550 at 3.5, 4.2 and 5.0 GHz; 2-core values not shown to save space]


Power 550 Rack & Tower

YesRedundant Cooling

YesNEBS / DC Power

3 USB, 2 Serial, 2 HMC

Optional: SAS portIntegrated Ports

Optional

Up to 80 partitions

Yes / Max: 4 (PCIe) / 8 (PCI-X)

GX Bus connection: RIO-2 / IB / IB2

1 Slim-line DVD

1 Half High Tape

• Dual Port 10/100/1000 Ethernet

•Optional: Quad 1Gbt or Dual 10Gbt

Yes Optional: RAID support

• PCIe: 3 Slots

• PCI-X 266: 2 Slots

• GX Bus: 2 Slots (2nd slot requires min 4 core)

Each GX slot shares space with PCIe slot

6 DASD ( 3.5”)

Optional: 8 SFF DASD ( 10 or 15K )

Optional SSD support

Up to 256GB (Buffered )

2, 4, 6, or 8 Cores

POWER6: 3.5 / 4.2 POWER6+: 5GHz

L3 Cache: 32MB per chip

Power 550 Rack & Tower 8204-E4A

Media Bays

Redundant Power

Dynamic LPAR

Remote IO Drawers

Integrated Virtual

Ethernet

Integrated SAS / SATA

Expansion

Internal SAS Disks

DDR2 Memory

Architecture

Page 14: Power Systems 2009 Hardware


Power 550 Enhancements: Product Structure

• 5.0 GHz product structure uses the same product structure as 3.5 and 4.2 GHz for new server acquisitions

– Same Express offerings

• Same AIX/Linux packages - same minimums/rules

– Plus 6-core packages in addition to 2-, 4- and 8-core packages

• Same IBM i editions - same minimums/rules

– Same IBM i CBU offering/rules - #0444 specify

– IBM i 6.1 required for 5.0 GHz 550

AIX/Linux Express Offerings

• 2-core config

– 3.5, 4.2 or 5.0 GHz

• 4-core config

– 3.5, 4.2 or 5.0 GHz

• 6-core config

– 3.5, 4.2 or 5.0 GHz

• 8-core config

– 3.5, 4.2 or 5.0 GHz

• SAP 2-core 4.2 GHz

• SAP 4-core 4.2 GHz

Select rack or tower

IBM i Express Offerings

• #9642 2-, 4-, 6-, 8-core

– 3.5 or 4.2 or 5.0 GHz

• #9645 4-core Solution with minimum of 3 IBM i processor license entitlements
– 4.2 or 5.0 GHz

• #9646 4-core Solution with minimum of 4 IBM i processor license entitlements
– 4.2 or 5.0 GHz

Select rack or tower

Power 550 post May 2009: 3.5 GHz, 4.2 GHz, 5.0 GHz

Note: 5.0 GHz QPRCFEAT = #4967


Power 550 Upgrade Paths

POWER6/POWER6+ targets: 8204-E8A 550 2-core, 4-core, 6-core and 8-core (unified system)

• POWER5/5+ 9406-550 (System i) → 8204-E8A 4-core at 4.2 GHz or 5.0 GHz. May 2009: same 9406 paths as before, but now to 4.2 or 5.0 GHz (during the 9406 upgrade); IBM i 6.1 required for 5.0 GHz. No upgrade paths into 3.5 GHz from the 9406-550.

• POWER5/5+ 9113-550 (System p): no upgrade paths

• Same capability to add more processor cards, up to an 8-core

Page 15: Power Systems 2009 Hardware


Power 550 – Replacing Slower GHz with Faster GHz

550 8204-E8A 3.5 GHz → 550 8204-E8A 4.2 GHz: OK
550 8204-E8A 3.5 or 4.2 GHz → 550 8204-E8A 5.0 GHz: not possible

The term "upgrade" means different things:

• If "upgrade" means getting faster 8204 processor cards at a lower price, then there are no upgrades.

• If "upgrade" means getting faster 8204 processor cards at full price, then yes for 4.2 GHz and no for 5.0 GHz.

• All processor cards in a server must run at the same GHz.

• Due to backplane differences, existing 3.5 or 4.2 GHz processor cards cannot be replaced with 5.0 GHz processor cards.
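The card-replacement rules above boil down to one check. A minimal sketch, encoding only what the slide states (the function name and numeric GHz encoding are illustrative assumptions):

```python
def can_replace_cards(current_ghz: float, target_ghz: float) -> bool:
    """Per the slide: all cards in an 8204-E8A run at one speed, and the
    3.5/4.2 GHz backplane cannot accept 5.0 GHz processor cards."""
    if target_ghz == 5.0 and current_ghz in (3.5, 4.2):
        return False  # blocked by backplane differences
    return True

assert can_replace_cards(3.5, 4.2)      # 3.5 -> 4.2 is OK (at full price)
assert not can_replace_cards(3.5, 5.0)  # 5.0 requires a different backplane
assert not can_replace_cards(4.2, 5.0)
```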


Combined 2009 Agenda (April + October)

• Faster Power 520/550

– Power 520 ….. 4.7 GHz

– Power 550 ….. 5.0 GHz

• More powerful Power BladeCenter

– JS23/JS43 …. 4-core & 8-core 4.2 GHz

– I/O enhancements

• New I/O for POWER6

– PCIe 12X I/O drawers & SFF Disk

– New PCIe adapters: SAS PCIe, SAS PCI-X, FCoE

– High performance solid state drive (SSD)

– USB Removable Disk Drive

• New I/O for POWER5

Page 16: Power Systems 2009 Hardware


BladeCenter JS23 (POWER6+) – 7778-23X

Architecture: 4 cores / 2 sockets / 4.2 GHz
L3 Cache: 32MB per socket
DDR2 Memory: 4GB to 64GB (Chipkill)
DASD / Bays: 0-1 SAS disk or 0-1 SSD
Daughter Card Options: 1 PCIe CIOv expansion card; 1 PCIe CFFh high-speed expansion card
Integrated Features: Dual-port 10/100/1000 Ethernet; SAS controller, USB & KVM
Fibre Support: Yes (via BladeCenter)
Media Bays: 1 (BladeCenter)
Redundant Power: Yes (BladeCenter)
Redundant Cooling: Yes (BladeCenter)
Service Processor: Yes
Virtualization: Built-in PowerVM Standard Edition
Systems Management: IBM Director and CSM; IBM EnergyScale™
OS Support: AIX 5.3, 6.1, Linux and IBM i
BC Chassis: BCH, BCHT and BCS

JS23 QPRCFEAT value = 52C1, QMODEL = 23X


POWER6+ Blade Packaging – JS23 Upgrade to JS43

7778-23X (4-core) + feature #8446 (4-core) = 7778-23X with #8446 (8-core JS43)

Page 17: Power Systems 2009 Hardware


BladeCenter JS43 (POWER6+) – 7778-23X + feature #8446

Architecture: 8 cores / 4 sockets / 4.2 GHz; SMP interconnect
L3 Cache: 32MB per socket
DDR2 Memory: Up to 128GB (double wide)
DASD / Bays: 0-2 SAS disk or 0-2 SSD
Daughter Card Options: 2 PCIe CIOv expansion cards; 2 PCIe CFFh high-speed expansion cards
Integrated Features: Quad-port 10/100/1000 Ethernet; SAS controller, USB & KVM
Fibre Support: Yes (via BladeCenter)
Media Bays: 1 (BladeCenter)
Redundant Power: Yes (BladeCenter)
Redundant Cooling: Yes (BladeCenter)
Service Processor: Yes
Virtualization: Built-in PowerVM Standard Edition
Systems Management: IBM Director and CSM; IBM EnergyScale™
OS Support: AIX 5.3, 6.1, Linux, and IBM i
BC Chassis: BCH, BCHT and BCS

JS43 QPRCFEAT value = 52C0, QMODEL = 23X


POWER6+ Blades Performance Boost

[Charts: rPerf and CPW for JS12 (2-core POWER6 @ 3.8 GHz), JS22 (4-core POWER6 @ 4.0 GHz), JS23 (4-core POWER6+ @ 4.2 GHz) and JS43 (8-core POWER6+ @ 4.2 GHz)]

Page 18: Power Systems 2009 Hardware


rPerf Performance

[Chart: rPerf for the JS12, JS22, JS23 and JS43 blades (POWER6 and POWER6+) vs the 8-core 4.2 GHz Power 550]

BladeCenter vs 4.2 GHz 550: about the same great performance, but note that other factors in addition to rPerf or CPW need to be considered when choosing the best option for your shop.


BladeCenter I/O Enhancements

October adapters:

• QLogic 10GbE FCoCEE expansion card

• Brocade 10 Port 8Gb SAN Switch Module

• Brocade 20 Port 8Gb SAN Switch Module

• Brocade 8Gb SFP+ Optical Transceiver

• BNT 10GbE switch module

• 10G Enet pass-thru module

• Voltaire 4X IB QDR Switch Module

April adapters & storage options:

• QLogic 8 Gb Fibre Channel Expansion Card (CIOv)

• Emulex 8 Gb Fibre Channel Expansion Card (CIOv)

• QLogic 4 Gb Fibre Channel Expansion Card (CIOv)

• 3 Gb SAS Passthrough Expansion Card (CIOv)

• QLogic Ethernet and 4 Gb Fibre Channel Expansion Card (CFFh)

• 4x Infiniband Dual Port (CFFh)

• 69GB SFF Solid State Drive

• IBM 300GB SAS 10K SFF HDD

• IBM 146GB SAS 10K SFF HDD

• IBM 73GB SAS 10K SFF HDD

• IBM BladeCenter S SAS RAID Controller

• IBM i Virtual Tape support

Page 19: Power Systems 2009 Hardware


Power BladeCenter Hardware Announcement Summary

• FCoE / CNA

– FC #8275 : QLogic 10GbE FCoCEE expansion card (BCH)

• JS12/JS22/JS23/JS43

• Switch related additions

– FC #5045: Brocade 10 Port 8Gb SAN Switch Module (BCH)

– FC #5869 : Brocade 20 Port 8Gb SAN Switch Module (BCH)

– FC #5358 : Brocade 8Gb SFP+ Optical Transceiver (BCH)

– FC #3248 : BNT 10GbE switch module (BCH) (related also to #8725)

– FC #5412 10GEnet pass thru module (BCH) (related also to #8725)

• InfiniBand enhancements

– FC #3204 : Voltaire 4X IB QDR Switch Module (BCH)

– FC #3249: QDR InfiniBand QSFP Cable (BCH)


Scalable BladeCenter JS23 and JS43 Express

Best blade for UNIX; built-in virtualization; ~2X better performance than HP BL860c

• IBM BladeCenter JS23 Express
– Single-wide 4-core blade, 4.2GHz POWER6 processors with L3 cache
– Up to 64GB of onboard memory
– Elegantly simple scalability and support for Solid State Drives

• IBM BladeCenter JS43 Express
– Double-wide 8-core blade, 4.2GHz POWER6 processors with L3 cache
– Up to 128GB of onboard memory
– Doubles I/O and expansion capabilities

• New I/O and Storage
– Expanded portfolio of I/O and storage options for blades

IBM BladeCenter JS43 Express

IBM BladeCenter JS23 Express

Adapters Storage Options

• QLogic 8 Gb Fibre Channel Expansion Card (CIOv) (#8242)*

• Emulex 8 Gb Fibre Channel Expansion Card (CIOv) (#8240)*

• QLogic 4 Gb Fibre Channel Expansion Card (CIOv) (#8241)

• 3 Gb SAS Passthrough Expansion Card (CIOv) (# 8246)

• QLogic Ethernet and 4 Gb Fibre Channel Expansion Card (CFFh) (# 8252)

• 4x Infiniband Dual Port (CFFh) (#8258)

• 69GB SFF SAS Solid State Drive (# 8273)

• IBM 300GB SAS 10K SFF HDD (#8274)

• IBM 146GB SAS 10K SFF HDD (#8236)

• IBM 73GB SAS 10K SFF HDD (#8237)

• IBM BladeCenter S SAS RAID Controller (#3734)

ALL JS blades include:
– Built-in PowerVM Standard Edition
– Support for AIX, IBM i 6.1, and Linux

INTRODUCING

Page 20: Power Systems 2009 Hardware


BladeCenter S SAS RAID Controller Module

Note: Does not support connection to DS3200. IBM i is not pre-installed with RSSM configurations.

• Provides additional protection options for BladeCenter S storage

• Fully redundant SAN integrated into BladeCenter S chassis

– High-performance, fully duplex, 3Gbps speeds

– Support for RAID 0, 1, 10, & 5

– Supports 2 disk storage modules with up to 12 SAS drives

– Supports external SAS tape drive

– Supports existing #8250 CFFv SAS adapter on blade

– 1GB of battery-backed write cache between the 2 modules

– Two SAS RAID Controller Modules required

• Supports Power and x86 Blades

– Recommended: separate RAID sets
• For each IBM i partition
• For IBM i and Windows storage

– Requirements:
• Firmware update for SAS RAID Controller Switch Modules
• VIOS 2.1.1, eFW 3.4.2


IBM i Support for Virtual Tape

• Virtual tape support enables IBM i partitions to back up directly to a PowerVM VIOS-attached tape drive, saving hardware costs and management time

• Simplifies backup and restore processing with BladeCenter implementations

– IBM i 6.1 partitions on BladeCenter JS12, JS22, JS23, JS43

– Supports IBM i save/restore commands & BRMS

– Supports BladeCenter S and H implementations

• Simplifies migration to blades from tower/rack servers
– An LTO-4 drive can read backup tapes from LTO-2, 3 and 4 drives

• Supports the IBM System Storage SAS LTO-4 drive
– TS2240 for BladeCenter and Power servers
– 7214 Model 1U2 with FC#1404, and FC#5720 DVD/Tape SAS External Storage Unit with FC#5746 for Power servers

• Requirements
– VIOS 2.1.1, eFW 3.4.2, IBM i 6.1 PTFs

Page 21: Power Systems 2009 Hardware


Combined 2009 Agenda (April + October)

• Faster Power 520/550

– Power 520 ….. 4.7 GHz

– Power 550 ….. 5.0 GHz

• More powerful Power BladeCenter

– JS23/JS43 …. 4-core & 8-core 4.2 GHz

– I/O enhancements

• New I/O for POWER6

– PCIe 12X I/O drawers & SFF Disk

– New PCIe adapters: SAS PCIe, SAS PCI-X, FCoE

– High performance solid state drive (SSD)

– USB Removable Disk Drive

• New I/O for POWER5


New 12X I/O Drawers – PCIe & SFF Disk

• Greater throughput – up to 2x other drawers

• PCIe slots

• SFF (Small Form Factor) SAS drive bays

19-inch form factor (#5802, #5877*):
• 4U, full width
• 10 PCIe slots & 18 SFF disk bays
• Choose: with/without* disk bays
• POWER6 520/550/560/570
• Minimum AIX 5.3, IBM i 6.1, Linux SLES10/RHEL4.7

24-inch form factor (#5803, #5873):
• 4U, full width
• 20 PCIe slots & 26 SFF disk bays
• Choose: with/without disk bays
• POWER6 595, 575
• Minimum AIX 5.3, IBM i 6.1, Linux SLES10/RHEL4.7

12X DDR, PCIe, SFF

* October 2009

Page 22: Power Systems 2009 Hardware


New 12X I/O Drawers – PCIe & SFF Disk

• Greater throughput – up to 2x* other drawers (* full 2x requires 8x PCIe cards & 12X DDR)

• PCIe slots
– Leveraging industry technology

• SFF (Small Form Factor) SAS disk drives
– 10k rpm for AIX/Linux/VIOS
– 15k rpm for AIX/Linux/VIOS and IBM i
– Run by PCIe adapter in drawer

#5802, #5877** (19-inch form factor)
• 4U, full width
• 10 PCIe slots
• 18 SFF disk bays
• Choose: with/without** disk bays
• Max 2 drawers per loop
• POWER6 520/550/560/570
• Min AIX 5.3, IBM i 6.1, Linux SLES10/RHEL4.7

#5803, #5873 (24-inch form factor)
• 4U, full width
• 20 PCIe slots
• 26 SFF disk bays
• Choose: with/without disk bays
• Max 1 drawer per loop
• POWER6 595, 575
• Min AIX 5.3, IBM i 6.1, Linux SLES10/RHEL4.7

** October 2009


Diskless 19-inch 12X I/O Drawer October 2009

#5877 announced October 2009
– 10 PCIe slots
– Zero SFF bays

#5802 announced April 2009
– 10 PCIe slots
– 18 SFF bays

#5877 is a lower-cost option for clients using

• all-SAN disk storage or virtual partition migration

• lots of PCIe slots

• #5877 requires AIX 5.3 or later; IBM i 6.1.1 or later; Linux SLES 10 or RHEL 4.6 or later

#5877 pricing compared to #5802:

• List price: #5877 about 10% lower

• Maintenance: #5877 about 60% lower

#5802 prices shown are IBM's USA suggested list prices under 8204-E8A as of Oct 2009 and are subject to change without notice; reseller prices may vary. #5877 projected prices are unannounced and highly subject to change; actual prices will be announced later in October.


Positioning New 12X I/O Drawers

Compared to #5797/5798 (24-inch)
• Up to 2x more bandwidth
• Six PCI slots 2-4x faster, fourteen PCI slots 0-2x faster (vs PCI-X & PCI-X DDR)
• Same number of PCI slots
• But disk controller uses slot(s)
• Can use 8 Gb FC with NPIV
• Slightly faster SFF disk bays
• Newer technology SAS drives
• Same max 1 drawer per loop
• Has diskless option
• New 12X drawer is the clear winner most of the time!
• Use older 12X drawer if need to use a PCI-X adapter*

Compared to #5796 + #5886 (19-inch)
• Up to 2x more bandwidth
• 0-2x faster PCI slots (vs PCI-X DDR)
• 4 more PCI slots per box
• 2 drawers per loop, vs four #5796
• Slightly fewer (but faster) PCI slots per loop
• 18 SAS disk bays vs zero disk bays
• Plus can partition bays (unlike #5886 EXP 12S Disk Drawer)
• Newer SAS SFF disk bays
• Diskless option October 2009
• New drawer is usually the winner
• Use older 12X drawer if need PCI-X adapter*
• Review if need disk in drawer
• IBM i – RIO #5790 for IOPs

* IBM i does not have a controller with write cache for the SFF disk until the #5903 SOD is fulfilled


Power 595 I/O Drawer Comparisons

I/O Drawer            #5791          #5794          #5797/5798              #5803/5873
Loop                  RIO            RIO            12X                     12X DDR
Form factor           24-inch        24-inch        24-inch                 24-inch
PCI slots             20 PCI-X       20 PCI-X       14 PCI-X DDR + 6 PCI-X  20 PCIe
Drawer / loop         1              1              1                       1
Max # disk in drawer  16 SCSI        8 SCSI         16 SCSI                 26 SAS SFF (#5803)
AIX disk controller   4 built in     2 built in     4 built in              0 built in
                      (0 wr. cache)  (0 wr. cache)  (0 wr. cache)           (use PCIe slot)

AIX/Linux heritage


Power 595 I/O Drawer Comparisons

I/O Drawer                 #5094/9094           #5790          #5797/5798              #5803/5873
                           (5294 = two 5094)
Loop                       HSL/RIO              HSL/RIO        12X                     12X DDR
Form factor                19-inch              19-inch        24-inch                 24-inch
PCI slots                  14 PCI-X             6 PCI-X        14 PCI-X DDR + 6 PCI-X  20 PCIe
Drawer / loop              6                    6              1                       1
Max # disk in same drawer  45 SCSI              0              16 SCSI, but AIX only   26 SAS SFF**
IBM i disk controller –    1.5GB write cache    1.5GB write    1.5GB write cache       380MB write
biggest cache                                   cache for      for SCSI drawer *       cache **
                                                SCSI drawer

** IBM i 2009 SOD for support of PCIe controller with write cache for #5803

IBM i heritage


19-inch I/O Drawer Comparisons

I/O Drawer                 #0595 or           #5790 or           #5796 or           #5802              #5877
                           7311-D20           7311-D11           5714-G30                              (Oct 2009)
Loop                       HSL/RIO            HSL/RIO            12X SDR            12X or 12X DDR     12X or 12X DDR
Form factor                19-inch            19-inch            19-inch            19-inch            19-inch
PCI slots                  7 PCI-X            6 PCI-X            6 PCI-X DDR        10 PCIe            10 PCIe
Drawer / loop              6                  6                  4                  2                  2
Max disk in same drawer    12 SCSI            0                  0                  18 SAS SFF         0
IBM i disk controller –    1.5GB write        1.5GB write        1.5GB write        380MB write        380MB write
biggest cache              cache              cache for          cache for          cache **           cache for SAS
                                              SCSI drawer *      SCSI drawer *                         drawer **

* Plus 2009 new PCI-X 1.5GB write cache for SAS disk drawer
** IBM i adds support of 380 MB PCIe controller Oct 2009

AIX / IBM i / Linux heritage


PCIe 12X I/O Drawer – SFF Drive Bays

• Disk bays in drawer configured as one, two or four sets
– Allows for partitioning of disk bays
– Configuration done via physical mode switch (positions 1, 2, 4) on drawer

• Mode change requires power on/power off of drawer

– Each disk bay set can be attached to its own controller/adapter
– Four SAS connections to drive bays

• Connects to PCIe SAS adapters/controllers
• PCIe adapter selected

#5802 12X I/O Drawer
AIX/Linux
• One set: 18 bays
• Two sets: 9 + 9 bays
• Four sets: 5 + 4 + 4 + 5 bays
IBM i
• Two sets: 9 + 9 bays

#5803 12X I/O Drawer
AIX/Linux
• One set: 26 bays
• Two sets: 13 + 13 bays
• Four sets: 7 + 6 + 6 + 7 bays
IBM i
• Two sets: 13 + 13 bays
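The mode-switch groupings above can be sketched in code. This is a hypothetical illustration (not IBM tooling); the drawer names, set sizes, and the IBM i mode-2 restriction come from this slide, while the function and names are invented for the example:

```python
# Hypothetical sketch: how the drawer mode switch partitions SFF bays
# into sets, per the groupings on this slide.
DRAWER_BAYS = {"5802": 18, "5803": 26}

# Set sizes stated on the slide for each mode-switch position.
MODE_SETS = {
    ("5802", 1): [18],
    ("5802", 2): [9, 9],
    ("5802", 4): [5, 4, 4, 5],
    ("5803", 1): [26],
    ("5803", 2): [13, 13],
    ("5803", 4): [7, 6, 6, 7],
}

def bay_sets(drawer: str, mode: int, os_name: str = "AIX") -> list[int]:
    """Return the disk-bay set sizes for a drawer/mode, enforcing the
    IBM i restriction (only mode 2 is supported for IBM i)."""
    if os_name == "IBM i" and mode != 2:
        raise ValueError("IBM i supports only mode 2 (two sets of bays)")
    sets = MODE_SETS[(drawer, mode)]
    assert sum(sets) == DRAWER_BAYS[drawer]  # sets always cover every bay
    return sets

print(bay_sets("5803", 4))  # [7, 6, 6, 7]
```

Note that every mode covers all bays; the switch only changes how many independent controller-attached groups they form.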


#5803 12X I/O Drawer - PCIe & SFF

• Can join using 0.6m 12X DDR cable and put entire drawer on one 12X loop (maximum number of 24” I/O drawers) – Single Loop Mode

• Or can “split” drawer, placing each half on separate 12X loop (highest performance option – maximum bandwidth)

[Diagram, single loop mode example (eConfig does not default this): #5803 drawer front and rear halves, each with 10 PCIe slots, 13 SFF DASD bays, SAS connections, and redundant power supplies; a 12X DDR cable from each half to the processor book, with a 0.6m 12X DDR cable joining the halves onto one loop.]


#5873 12X I/O Drawer - PCIe (no SFF)

• Can “split” drawer, placing each half on separate 12X loop (highest performance option – maximum bandwidth) – “Double Loop Mode”

• Or can join using 0.6m 12X DDR cable and put entire drawer on one 12X loop (maximum number of 24” I/O drawers)

• Reasons for #5873 (no SFF slots) instead of #5802 (with SFF slots)
– Use SAN for boot drive / load source, or just need lots of PCI slots
– Lower price & lower maintenance

[Diagram, double loop mode example (eConfig defaults this mode): #5873 drawer front and rear halves, each with 10 PCIe slots and redundant power supplies; 12X DDR cables from each half to the processor book.]

No conversions of #5873 → #5803 (no adding SFF slots)


IO Drawer Internal Diagram ( 24 inch )

• 24", 4U drawer
• 26 DASD, SAS SFF
• SAS Controller (optional) on PCIe card
• 20 PCIe 2.5 Gb/s, 8X (8 lanes)
– Full Length Slots

[Diagram: DASD backplane with 26 SAS SFF bays behind four SAS port expanders; two IB2 risers, each feeding PCIe hubs that drive 10 PCIe slots (8X: 4 GB/s); 12X IB DDR ports sustain 9.5 GB/sec per port.]

• Hot Plug Redundant Power
• Hot Plug Drives
• Hot Plug I/O slots


IO Drawer Internal Diagram ( 19 inch )

• 19", 4U drawer
• 18 DASD, SAS SFF
• SAS Controller (optional) on PCIe card
• 10 PCIe 2.5 Gb/s, 8X (8 lanes)
– Full Length Slots

[Diagram: DASD backplane with 18 SAS SFF bays behind four SAS port expanders; IB2 riser feeding PCIe hubs that drive 10 PCIe slots (8X: 4 GB/s); 12X IB DDR ports sustain 9.5 GB/sec per port (12 GB/sec per port peak).]

• Hot Plug Redundant Power
• Hot Plug Drives
• Hot Plug I/O slots


Additional 24-inch I/O Drawer Configuration Insights

• Max 32 #5803, but only max 31 #5873
– Want the first PCIe 12X I/O drawer to be a #5803, not a #5873, to ensure that a #5720 Media Drawer can be installed
– #5873 does not support #5720 attachment

• IBM Manufacturing will assume a #5912 in slot 10 will be dedicated for the #5720. Client can move/re-assign as desired.

• Physical planning information
– #5803 can take a little more power than a 5797/5798
– #5803 = PCIe slots + drive bays; #5873 = PCIe slots + no drive bays

#5803 specs
• Max power & heat: 1520 W & 5190 BTU
• “Typical” w/ 20 PCIe cards & 26 disk: 750 W & 2571 BTU
• Wattage insights: PCIe W vary, SFF disk 8-9 W per drive
• Max weight 160 lbs


#5802/5877 12X I/O Drawers

[Diagram: #5802 front with 18 SFF DASD bays and SAS connections, rear with 10 PCIe slots and 12X ports; #5877 front and rear with PCIe slots and 12X ports only; 12X DDR cables to CEC.]

#5802: 4U drawer
• kVA (maximum): .768 kVA
• Rated voltage and frequency: 100-127V or 200-240V
• Thermal output (maximum): 2542 BTU/hr
• Power requirements (maximum): 745 W
• Weight: 54 kg (120 lb)

#5877 (Oct 2009): 4U drawer
• kVA (maximum): .531 kVA
• Rated voltage and frequency: 100-127V or 200-240V
• Thermal output (maximum): 1760 BTU/hr
• Power requirements (maximum): 515 W
• Weight: 48 kg (105 lb)

• Max two #5802/5877 per 12X loop

• Requires DDR 12X cable, not original SDR 12X cable
– #1861, #1862, #1863, #1864, #1865 = DDR cables

• Can NOT convert #5877 to #5802 (order correctly initially)


19-inch I/O Drawer/Tower Configuration Rules

If server limited on number of loops, I/O drawer selection can be impacted

• RIO/HSL: max 6 per loop (many feature #'s & MTMs)

• 12X PCI-X DDR (#5796, 5714-G30): max 4 per loop, 6 slots per drawer

• 12X PCIe (#5802 or #5877; 5877 Oct 2009): max 2 per loop, 10 slots per drawer

No mixing RIO/HSL and 12X on same loop
No mixing PCI-X 12X and PCIe 12X on same loop

Max loops per POWER6 model:
• 520/550 2-core ............. 1
• 520/550 4-core or larger ... 2
• 560 16-core ................ 3
• 570 16- or 32-core ......... 8
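The per-loop rules above can be expressed as a small checker. This is a hypothetical sketch (not an IBM configurator); the limits and no-mixing rules are from this slide, while the function and data-structure names are invented:

```python
# Hypothetical validator for the 19-inch I/O loop rules on this slide:
# per-loop drawer limits and no mixing of loop technologies.
LOOP_RULES = {          # loop technology -> max drawers per loop
    "RIO/HSL": 6,
    "12X PCI-X DDR": 4,
    "12X PCIe": 2,
}

def validate_loop(drawer_types: list[str]) -> None:
    """drawer_types: loop technology of each drawer on one physical loop."""
    kinds = set(drawer_types)
    if len(kinds) > 1:
        # Covers both rules: no RIO/HSL with 12X, and no PCI-X 12X with PCIe 12X
        raise ValueError(f"cannot mix {kinds} on the same loop")
    (kind,) = kinds
    if len(drawer_types) > LOOP_RULES[kind]:
        raise ValueError(f"max {LOOP_RULES[kind]} {kind} drawers per loop")

validate_loop(["12X PCIe", "12X PCIe"])  # OK: two #5802/#5877 on one loop
```

A real configuration check would also apply the per-model loop maximums from the table above.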


New 12X DDR Cables

• New PCIe 12X I/O Disk drawers require new 12X DDR cables

– New PCIe 12X I/O drawers NOT supported by earlier SDR 12X cables

– Can NOT mix PCIe and PCI-X I/O 12X drawers on the same loop

Length       SDR cables (use for         DDR cables for       DDR cables for
             PCI-X 12X I/O drawers)      19" #5802            24" #5803/5873
0.6 meter    #1829                       #1861                #1861
1.5 meter    #1830                       #1862                --
2.5 meter    #1831                       --                   #1863
3.0 meter    #1840                       #1865                --
8.0 meter    #1834                       #1864                #1864

New cables (use for PCIe 12X I/O drawers): #1861, #1862, #1863, #1864, #1865

New 12X DDR cables have the same dark green color connector as the original 12X cables, but have different labels (marked with a “D”). Be careful: physically the old cables can be connected to the new drawers – size/keying is identical.

DDR = double data rate

Same price for equivalent SDR and DDR cables


12X DDR GX Adapter (Highest Speed Loop)

• The new 2009 12X I/O Drawer can take advantage of faster GX adapters running DDR2 (double data rate)

• Can also attach to GX adapters running SDR (single data rate)

POWER6 model        Offer 12X DDR adapter   Max quantity   Notes
POWER6 595          yes                     32             Same feature code – originally announced adapter is DDR
POWER6 575          n/a                     n/a            Doesn't use GX adapter
POWER6 570          no                      n/a
POWER6 560          no                      n/a
POWER6 550 (8204)   yes                     1              Need 4-core or larger 550; use newest DDR feat #5609 for new I/O drawer, not #5608
POWER6 520 (8203)   yes                     1              Need 4-core 520; use newest DDR feat #5609 for new I/O drawer, not #5608
POWER5              no 12X                  n/a            No 12X, need POWER6


New #5609 12X+ GX Loop Adapter for Power 520/550

• #5609 supports double the data rate of the initial #5616 12X adapter

• #5609 has the same data rate as the #5608 12X+ adapter introduced late 2008

• #5608/5609 positioning:

– Both equally great for clustering

– Both require 4-core or larger Power 520/550

– Both have about the same performance attaching #5796 or 7314-G30 12X I/O drawers

– #5609 supports #5802 PCIe 12X I/O drawers, BUT #5608 does not

• Note SDR #5616 GX loop adapter supports attachment of #5802 drawer at SDR speed

– Both supported on 4.7 GHz Power 520 and 5.0 GHz Power 550

GX Adapter   Type         USA List Price March 2009   USA List Price April 2009
#5616        12X (SDR)    $1100                       $1100
#5608        12X+ (DDR)   $2200                       $1300 *
#5609        12X+ (DDR)   n/a                         $2200

* Lower April price reflects #5608 being a subset of #5609 capability – a “good deal” unless you really need #5609

All prices shown are IBM's USA planned suggested list prices and are subject to change without notice; reseller prices may vary


Power 520/550 GX Loop Adapters

General 520/550 GX Rules:
• Can choose either 12X or HSL-2/RIO-2 (RIO-2 = HSL-2)
• Power 520 1-core: zero loops
• Power 520/550 2-core:
– Max of one loop. Therefore can't mix HSL/RIO & 12X I/O drawers on the same 2-core system. Nor can you mix PCIe 12X I/O drawers with PCI-X 12X I/O drawers.
– Choice of #5614 or #5616. Can NOT choose a 12X+ adapter (#5608 or #5609)
• Power 520 4-core or Power 550 4-8-core:
– Max of two loops. Can mix loops on the same 4-core system, but all RIO or all 12X drawers per individual loop.
– Choice of #5614, #5616, #5608 or #5609, but a max of one 12X+ adapter (#5608 or #5609)
• #5608/5609 GX+ adapters must be located in the one specific GX++ slot in the CEC
– Initial loop slot (P1-C6) in the Power 520 and card slot 1 (P1-C7) in the Power 550
• #5608/5609 not announced on 9407-M15, 9408-M25, 9409-M50; convert server to 8203/8204
• GX slot for Power 520 2-, 4-core shares space with PCIe slot #1
– Reduces PCIe slots from quantity 3 to quantity 2 if a GX adapter is used with 2-core or 4-core
– Note 2nd GX adapter does not share space with a PCI slot in 4-core configuration (not available in 2-core)
• GX slots for Power 550 share space with PCIe slots #1 & #2
– Reduces PCIe slots from quantity 3 to quantity 2 when one loop is used
– Reduces PCIe slots from quantity 3 to quantity 1 when two loops are used

Power 520/550 CEC GX adapter options:
#5614 = HSL/RIO
#5616 = 12X
#5608 = 12X+
#5609 = 12X+


New SFF SAS Hard Disk Drives

SFF   rpm   JBOD formatted for AIX/Linux   RAID formatted for IBM i 6.1   USA List price (8203-E4A)
      10k   146 GB #1882                   n/a                            $650
      15k   73 GB #1883                    69 GB ** #1884                 $498
      10k   300 GB #1885                   n/a                            $1,050
      15k   146 GB #1886                   139 GB ** #1888                $798

** IBM i 6.1 required

Three new SFF (2.5-inch) features

–One 10k rpm … 300GB

–Two 15k rpm … 146/139GB

IBM USA suggested list prices; reseller prices may vary. Prices of #1885, 1886/88 are projected prices and highly subject to change; official prices to be provided October 2009. Other prices shown are as of Oct 2009 and are subject to change without notice.

SFF drive (front/back)

SFF uses ½ the energy of 3.5-inch disk drives and can be more densely packaged to save floor space.


SAS Hard Disk Drive (HDD) Options

3.5-inch and SFF (2.5-inch) offer different capacity and rpm options.

SFF   rpm   AIX/Linux formatted   IBM i formatted   USA List price (8203-E4A)
      10k   73 GB #1881 wfm       n/a               $498 wfm
      10k   146 GB #1882          n/a               $650
      15k   73 GB #1883           69 GB ** #1884    $498
      10k   300 GB #1885          n/a               $1,050
      15k   146 GB #1886          139 GB ** #1888   $798

3.5"  rpm   AIX/Linux formatted   IBM i formatted    USA List price (8203-E4A)
      10k   n/a                   n/a                n/a
      15k   73 GB #3646 wfm       69 GB #3676 wfm    ---
      15k   146 GB #3647          139 GB #3677       $498
      15k   300 GB #3648          283 GB * #3678     $1,150
      15k   450 GB #3649          428 GB ** #3658    $1,599

wfm = withdrawn from marketing

* not supported as IBM i 5.4 load source   ** IBM i 6.1 required

SFF drive (front/back)

IBM USA suggested list prices; reseller prices may vary. Prices of #1885, 1886/88 are projected prices and highly subject to change; official prices to be provided October 2009. Other prices shown are as of Oct 2009 and are subject to change without notice.


Positioning Insights for HDDs Announced October ‘09

IBM i

Combined with #5903 PCIe 380MB Controller announcement in October, the PCIe 12X I/O drawers announced in April 2009 are now MUCH more attractive as a disk enclosure for IBM i 6.1 clients

AIX / Linux

#5903 support was already in place for AIX 5.3 and later clients, so the additional capacity points are a nice additional option.


Combined 2009 Agenda (April + October)

• Faster Power 520/550

– Power 520 ….. 4.7 GHz

– Power 550 ….. 5.0 GHz

• More powerful Power BladeCenter

– JS23/JS43 …. 4-core & 8-core 4.2 GHz

– I/O enhancements

• New I/O for POWER6

– PCIe 12X I/O drawers & SFF Disk

– New PCIe adapters: SAS PCIe, SAS PCI-X, FCoE

– High performance solid state drive (SSD)

– USB Removable Disk Drive

• New I/O for POWER5


Additional AIX/Linux PCI Adapters, April and October 2009

Type adapter                   PCIe                               PCI-X
LAN (Ethernet)                 1 Gb & 10 Gb LR & 10Gb SR & CX4    1 Gb & 10 Gb
Async                          4-port                             2/8-port
SCSI – tape/disk 0 cache       No plans                           Y
SCSI – disk medium cache       No plans                           not supported*
SCSI – disk big cache          No plans                           N
SAS – tape/disk 0 cache        Yes                                Y
SAS – disk medium cache        Yes 380 MB                         Y 175 MB
SAS – disk big cache           Not in 2009                        Y 1500 MB
Fibre Channel                  4 Gb & 8 Gb                        4 Gb
iSCSI                          Not in 2009                        Y
USB                            4-port                             2-port
Crypto                         Not in 2009                        Y
FCoE (or FCoEE)                Yes 10Gb                           N

* not supported on POWER6


Additional IBM i PCI Adapters, April & October 2009

Type adapter                   PCIe                               PCI-X
LAN (Ethernet)                 1 Gb & 10 Gb LR                    1 Gb & 10 Gb
WAN                            2-port (No SNA)                    2-port & 4-port (SNA only with IOP)
SCSI – tape/disk 0 cache       No plans                           Y
SCSI – disk medium cache       No plans                           Y 90 MB
SCSI – disk big cache          No plans                           Y 1500 MB
SAS – tape/disk 0 cache        Yes                                Y
SAS – disk medium cache        Yes 380 MB                         No 175 MB
SAS – disk big cache           Not in 2009                        Y 1500 MB
Fibre Channel                  4 Gb & 8 Gb                        4 Gb
Twinax                         No plans                           Y, with IOP
iSCSI                          No adapter plans                   Y
Int xSeries Card (IXS)         No adapter plans                   Y with IOP
Crypto                         Not in 2009                        Y
FCoE (or FCoEE)                NIC in 2009, SOD for FC 2H10       N


POWER6 PCIe Adapters

• LAN
– 4-Port 10/100/1000 Base-TX PCI Express Adapter (#5717, AIX/Linux)
– 10 Gigabit Ethernet-CX4 PCI Express Adapter (#5732, AIX/Linux)*
– 2-Port 10/100/1000 Base-TX Ethernet PCI Express Adapter (#5767, AIX/Linux)
– 2-Port Gigabit Ethernet-SX PCI Express Adapter (#5768, AIX/IBM i/Linux)
– 10 Gigabit Ethernet-SR PCI Express Adapter (#5769, AIX/Linux)*
– 10 Gigabit Ethernet-LR PCI Express Adapter (#5772, AIX/IBM i/Linux)*

• WAN/Async
– PCIe 2-Line WAN w/ Modem (#2893, IBM i)
– PCIe 2-Line WAN w/ Modem CIM (#2894, IBM i)
– 4-Port Async EIA-232 PCIe Adapter (#5785, AIX/Linux)

• SAS
– PCIe Dual-x4 SAS Adapter (#5901, AIX/IBM i/Linux)*
– PCIe 380 MB Cache Dual-x4 3Gb SAS RAID Adapter (#5903, AIX/Linux, IBM i)

• Fibre Channel
– 8 Gigabit PCI Express Dual Port Fibre Channel Adapter (#5735, AIX/IBM i/Linux)*
– 4 Gigabit PCI Express Single Port Fibre Channel Adapter (#5773, AIX/Linux)
– 4 Gigabit PCI Express Dual Port Fibre Channel Adapter (#5774, AIX/IBM i/Linux)

• Fibre Channel over Ethernet (FCoE, FCoEE, CNA)
– 10Gb FCoE PCIe Dual Port Adapter (#5708, AIX/Linux, limited IBM i + SOD IBM i full)*

• Graphics
– POWER GXT145 PCI Express Graphics Accelerator (#5748, AIX/Linux)

• USB
– 4-Port USB PCIe Adapter (#2728, AIX/Linux)

* PCIe 8x adapter


Combined 2009 Agenda (April + October)

• Faster Power 520/550

– Power 520 ….. 4.7 GHz

– Power 550 ….. 5.0 GHz

• More powerful Power BladeCenter

– JS23/JS43 …. 4-core & 8-core 4.2 GHz

– I/O enhancements

• New I/O for POWER6

– PCIe 12X I/O drawers & SFF Disk

– New PCIe adapters: SAS PCIe, SAS PCI-X, FCoE

– High performance solid state drive (SSD)

– USB Removable Disk Drive

• New I/O for POWER5


PCIe SAS Adapters/Controllers

#5903 (CCIN = 574E)
• PCIe 380MB cache 8x, Dual-x4, 3Gb SAS RAID
• Runs SAS HDD and SSD
– HDD in #5802, #5803, #5886, 560/570 CEC
– SSD in #5886 and 560/570 CEC
• Protection features
– Battery (hot swap maintenance) protecting cache
– Always paired with another #5903 for redundancy of adapter and write cache
• Features:
– 380 MB write cache
– Dual port adapter
– AIX/Linux: RAID 0, RAID 10, RAID 5, RAID 6, hot spare provided by adapter
– Power 560/570 split backplane support (AIX/Linux)
• AIX 5.3, SLES10.2, RHEL5.2 or later
• IBM i supported Oct 2009 with i 6.1.1
• Functionally very much like #5902 PCI-X DDR adapter … #5903 has 2x the write cache and also supports SSD

#5901 (CCIN = 57B3)
• PCIe 8x, Dual-x4, 3Gb SAS
• Runs SAS HDD and SAS removable media
– HDD in #5802, #5803, #5886, 520/550 CEC (use #5909/5911 for 560/570 CEC)
– Removable media in drawers external to CEC
• Protection features
– Optionally paired with another #5901 for redundancy of adapter
• Features:
– Zero write cache
– Dual port adapter
– AIX/Linux: RAID 0, RAID 10 provided by adapter
– IBM i: striping or mirroring provided by OS, not adapter
– Power 520/550 split backplane support (AIX/Linux)
• AIX 5.3, IBM i 6.1, SLES10.2, RHEL5.2 or later
• Functionally very, very much like #5900/5912 PCI-X DDR adapter


REVIEW: POWER6 Rules – Disk & Controllers

Rules do not apply to SAN disk storage – SAN provides disk protection

Disk controllers (PCI cards) with write cache:
• PCIe SAS – AIX/Linux: supported, write cache must be protected. IBM i: October 2009, supported, write cache must be protected
• PCI-X SAS – AIX/Linux: supported, write cache must be protected. IBM i: supported, write cache must be protected
• PCI-X SCSI – AIX/Linux: not supported on POWER6. IBM i: supported, write cache must be protected

Disk drives – SCSI or SAS (or SSD):
• AIX/Linux: highly recommend protecting (RAID 5, 6, or 10) – but optional
• IBM i: require protecting (RAID 5, 6, or mirroring)


REVIEW: POWER6 Rules – Disk & Controllers Details

Does not apply to SAN disk storage – SAN provides disk protection

Disk controllers (PCI cards) with write cache:
• PCIe SAS 380 MB #5903 – AIX/Linux: supported, but must be paired for protection of cache. IBM i: Oct 2009, supported, but must be mirrored for protection of cache
• PCI-X SAS 175 MB #5902 – AIX/Linux: supported, but must be paired for protection of cache. IBM i: not supported
• PCI-X SAS 1500 MB #5904/5906/5908 – AIX/Linux and IBM i: supported, includes aux cache for protection; can mirror for controller redundancy
• PCI-X SCSI 90MB #5776* – AIX/Linux: not supported on POWER6. IBM i: supported, but must be mirrored for protection of cache
• PCI-X SCSI 1500 MB #5778/5780/5782* – AIX/Linux: not supported on POWER6. IBM i: supported, includes aux cache for protection; can mirror for controller redundancy
• PCI-X SCSI 757/1500 MB #2780/5580/5581/5583/5590/5591* – AIX/Linux: not supported on POWER6. IBM i: supported, must use aux cache or mirror controller

Disk drives – SCSI or SAS (or SSD):
• AIX/Linux: highly recommend protecting (RAID 5, 6, or 10) – but optional
• IBM i: require protecting (RAID 5, 6, or mirroring)

* using unified feature codes; 9406/9407/9408 has add'l feature codes for the same cards, but POWER6 protection rules are the same


#5901 PCIe Disk Adapter & #5802/5803 12X I/O Drawer

• One #5901 per group of SFF drive bays. Mode switch on drawer sets 1, 2 or 4 groups. Diagram shows a mode 4 example.

• With AIX/Linux, up to four #5901 accessing 4 sets of SFF drive bays

• With IBM i, only mode 2 (2 groups of drive bays) supported

• For redundancy a pair of #5901 can be used, but then mode 4 is not supported

• #5901 has two ports; the 2nd port can optionally be used

• AIX/IBM i*/Linux support May 2009

[Diagram, mode 4 example: four #5901 SAS adapters (PCIe, 0 MB cache) in the #5802/5803 I/O drawer, each connected via SAS AE cables (#3688) to one of four sets of drive bays.]

* IBM i uses only with mode switch at “2”, thus max = min as two sets of drive bays

5802/5803 SFF bays need PCIe controller(s) in the same drawer


#5901 PCIe Disk Adapter & #5802/5803 12X I/O Drawer – additional examples

• #5901 has two ports. The 2nd port can drive:

– a second group of drives in the #5802/5803 I/O drawer (first red line)

– or a #5886 EXP 12S Disk Drawer (or two EXP 12S if cascaded using EE cables) (second red line)

– or a removable media drive (usually not recommended due to potential performance conflicts) (not shown)

• AIX/IBM i*/Linux support May 2009

[Diagram, mode 4 example: #5901 SAS adapters (PCIe, 0 MB cache) in the #5802/5803 I/O drawer with SAS AE cables (#3688) to the drive bay groups, and a SAS YO cable to a #5886 EXP 12S SAS Disk Drawer; cascade of EXP 12S not shown.]

* IBM i uses only with mode switch at “2”, thus max = min as two sets of drive bays

5802/5803 SFF bays need PCIe controller(s) in the same drawer


#5903 RAID Disk Adapter & #5802/5803 12X I/O Drawer

• Pair of SAS adapters provides redundancy

– Total of two PCIe slots used

– Adapters linked together via connections in #5802/5803, and write cache contents mirrored

– If one adapter fails or is disconnected, the contents of the write cache are written out and the write cache is disabled until pairing is restored. A significant performance impact is often possible without write cache, especially if running RAID-5 or RAID-6.

• Max of 18 (#5802) or 26 (#5803) SAS disk drives located in the 12X I/O drawer per pair of adapters

• Can also use the 2nd pair of SAS ports on the pair of #5903 to attach to a #5886. This expands the number of disk drives by 12 with one EXP 12S, or by 24 if two EXP 12S are cascaded together via SAS EE cables.

• AIX/Linux support May 2009
• IBM i 6.1.1 support October 2009

[Diagram: pair of #5903 adapters in the PCIe I/O drawer connected by SAS AT cables (#3688) to the #5802/5803 drive bays, using ½ of each adapter's ports. Adapters must be in the PCIe I/O drawer.]
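The paired-adapter write-cache behavior described above (mirrored caches; a failure flushes and disables the surviving cache until the pair is restored) can be modeled in a few lines. This is a hypothetical illustration, not IBM firmware; all class and method names are invented:

```python
# Illustrative model of the paired SAS adapter write-cache lifecycle.
class SasAdapterPair:
    def __init__(self):
        self.adapters_online = 2
        self.write_cache_enabled = True   # cache contents mirrored across the pair
        self.dirty_pages = 0

    def cached_write(self, pages: int) -> None:
        if self.write_cache_enabled:
            self.dirty_pages += pages     # absorbed by mirrored write cache
        # else: writes go straight to disk (slower, esp. RAID-5/6 parity updates)

    def adapter_failed(self) -> None:
        self.adapters_online -= 1
        self.dirty_pages = 0              # cache contents written out to disk
        self.write_cache_enabled = False  # disabled until pairing restored

    def pairing_restored(self) -> None:
        self.adapters_online = 2
        self.write_cache_enabled = True

pair = SasAdapterPair()
pair.cached_write(10)
pair.adapter_failed()   # survivor flushes and runs write-through
```

The point of the model: after a failure the data is safe (nothing dirty is lost), but performance drops because every write bypasses the cache until redundancy returns.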


#5903 RAID Disk/SSD Adapter & #5886

• Pair of SAS adapters provides redundancy

– Total of two PCI slots used

– Adapters linked together via SAS X cable, and write cache contents mirrored

– If one adapter fails or is disconnected, the contents of the write cache are written out and the write cache is disabled until pairing is restored. A significant performance impact is usually possible without write cache, especially if running HDD RAID-5 or RAID-6.

• Max of 48 SAS disk drives per pair of adapters
– Max of 24 SAS drives per paired #5903 port, using two #5886 linked together with EE SAS cables
– Performance considerations if you try to attach this many drives

• Max of 8 or 9 solid state drives per #5903 adapter pair
– Max 8 in #5886 I/O drawer, max 9 in #5802/5803 I/O drawer
– Note #5903 may throttle SSD performance with this many SSDs (8 or 9)

• AIX/Linux support May 2009
• IBM i 6.1.1 support October 2009

[Diagram: pair of #5903 adapters connected via SAS X cable to a #5886 EXP 12S SAS Disk Drawer, using ½ of each adapter's ports.]


Paired Disk Controller – New for IBM i Oct 2009

• Ability to mirror disk controllers with mirrored drives has been available for years

• October 2009 with IBM i 6.1.1 brings a new configuration option

[Diagram: existing option – two disk adapters mirrored, each with mirrored drives; new option – a pair of disk adapters protecting RAID5/RAID6 drive arrays.]


Active/Active Performance Enhancement

• Active/Active capability associated with “paired” or “dual” SAS controllers (not mirrored)

– For #5901, #5902, #5903, #5904/6/8, #5912 (not #5900)

– For POWER6 (AIX, IBM i, Linux), POWER5 (AIX, Linux)

– Called “Dual Storage IOA configuration” or “multi-initiator” in documentation

• Enhancement compared to Active/Passive (currently used by AIX and Linux)

– “Load balancing” of work on dual SAS controllers

– Depending on workload, can allow more throughput over the same hardware configuration

– Helpful to both SSD and HDD

– Most helpful when dealing with lots of reads and with situations trying to handle reading a lot of data … low impact to boot drives, application binaries, load source

– Must have more than one array under the controller pair, else active/passive is used

[Diagram: Active/Passive – controller 1 handles reads/writes while controller 2 is “standby” except for cache content copy; Active/Active – both controllers of the pair handle reads/writes, each serving a different array.]
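The last bullet (more than one array is needed, or the pair falls back to active/passive) can be sketched as a toy assignment function. This is purely illustrative and hypothetical, not IBM microcode; the names are invented:

```python
# Illustrative sketch of active/active load balancing: with two or more
# arrays under a dual-controller pair, arrays are spread across both
# controllers; with a single array, it degrades to active/passive.
def assign_arrays(arrays: list[str]) -> dict[str, int]:
    """Map each array name to controller 1 or 2."""
    if len(arrays) < 2:
        # active/passive: all work served by controller 1 only
        return {a: 1 for a in arrays}
    # active/active: alternate arrays across the controller pair
    return {a: 1 + (i % 2) for i, a in enumerate(arrays)}

print(assign_arrays(["arr0"]))           # {'arr0': 1}  (active/passive)
print(assign_arrays(["arr0", "arr1"]))   # {'arr0': 1, 'arr1': 2}
```

The real controllers balance at the array level, which is why the benefit shows up mainly on read-heavy workloads spread over multiple arrays.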


Paired SAS Disk Adapters & #5802/5803 12X I/O Drawer

• Note there are four SAS ports on the #5802/5803 drawers

• Each pair of #5903 needs two ports

• Therefore the max number of controllers = two pairs

• Thus the max number of groups/partitions per #5802/5803 = two, using #5903

• If you want 4 groups of disk slots, use four unpaired #5901

Pair of #5903 PCIe (380 MB cache), SAS AT cables #3688:
• Supported AIX/Linux May 2009
• IBM i 6.1.1 Oct 2009

• Optionally, can pair the #5901 PCIe SAS adapters for redundancy

• Pairing is “optional” because there is no write cache to protect

• If paired, same two-partition maximum as with the #5903 adapter

Pair of #5901 PCIe (0 MB cache):
• Supported AIX/Linux
• Not supported IBM i

Adapters for 5802/5803 disk must be in the PCIe I/O drawer


10 Gb Ethernet LAN

Position 3 different adapters

• PCIe
– 10 Gigabit Ethernet-CX4 PCI Express Adapter (#5732, AIX/Linux)
– 10 Gigabit Ethernet-SR PCI Express Adapter (#5769, AIX/Linux)
– 10 Gigabit Ethernet-LR PCI Express Adapter (#5772, AIX/IBM i/Linux)

• PCI-X currently available (for comparison)
– 10 Gb Ethernet-SR PCI-X 2.0 DDR Adapter (#5721, AIX/IBM i/Linux)

• Multiple cable options provide flexibility to leverage existing cabling, lowering site installation costs

• Cabling

– Existing #5722 Long Range (LR) adapter: single-mode (1310 nm) optical fiber for up to 10 km

– New #5769 Short Range (SR) adapter: multi-mode (850 nm) optical fiber for up to 300 meters

– New #5732 CX4: twinax copper for up to 15 meters

• Different cabling from the AS/400 heritage twinax cabling

• Functional differences (beyond cabling)

– #5769 SR and #5732 CX4 provide Linux iSCSI hardware initiator support and RDMA (Remote Direct Memory Access) (#5722 does not provide these)


#5735 PCIe 8Gb Fibre Channel Adapter

• Dual port adapter - each port provides single initiator

• Speeds: 8 Gbps, 4 Gbps, 2 Gbps

– Automatically adjusts to speed of attached I/O and SAN fabric

– LED on card indicates link speed

– Limited number of SAN infrastructure or I/O units currently support 8 Gbps

• Supported in the 8203-E4A, 8204-E8A, Power 560, 9117-MMA, and Power 575 system units

– New PCIe 12X I/O drawers expand placement beyond system units

– Not announced on the 9407-M15, 9408-M50, 9406-MMA

• Ports have LC type connectors (using shortwave laser optics)

– Cables are the responsibility of the customer.

– Use multimode fibre optic cables with short-wave lasers:

• OM3 - multimode 50/125 micron fibre, 2000 MHz*km bandwidth – 2Gb (.5 – 500m) 4Gb (.5 – 380m) 8Gb (.5 – 150m)

• OM2 - multimode 50/125 micron fibre, 500 MHz*km bandwidth – 2Gb (.5 – 300m) 4Gb (.5 – 150m) 8Gb (.5 – 50m)

• OM1 - multimode 62.5/125 micron fibre, 200 MHz*km bandwidth – 2Gb (.5 – 150m) 4Gb (.5 – 70m) 8Gb (.5 – 21m)

With new PCIe 12X I/O Drawers
• New to POWER6 595 since no PCIe previously
• Expanded use on Power 520-575 with PCIe I/O drawers

NPIV available* on this adapter

* with proper software levels of AIX. IBM i = SOD
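The supported cable distances above can be encoded as a lookup table. The values are taken directly from the slide (OM1/OM2/OM3 multimode fibre at 2/4/8 Gbps); the helper function is just a convenience sketch:

```python
# (fibre grade, link speed in Gbps) -> (min metres, max metres), per the slide
FC_DISTANCE = {
    ("OM3", 2): (0.5, 500), ("OM3", 4): (0.5, 380), ("OM3", 8): (0.5, 150),
    ("OM2", 2): (0.5, 300), ("OM2", 4): (0.5, 150), ("OM2", 8): (0.5, 50),
    ("OM1", 2): (0.5, 150), ("OM1", 4): (0.5, 70),  ("OM1", 8): (0.5, 21),
}

def cable_ok(grade: str, speed_gbps: int, length_m: float) -> bool:
    """True if a run of the given length is within the supported range."""
    lo, hi = FC_DISTANCE[(grade, speed_gbps)]
    return lo <= length_m <= hi

print(cable_ok("OM3", 8, 100))  # True  (within the 150 m limit)
print(cable_ok("OM1", 8, 100))  # False (OM1 tops out at 21 m at 8 Gbps)
```

Note how the faster the link, the shorter the supported run for a given fibre grade.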


Combined 2009 Agenda (April + October)

• Faster Power 520/550

– Power 520 ….. 4.7 GHz

– Power 550 ….. 5.0 GHz

• More powerful Power BladeCenter

– JS23/JS43 …. 4-core & 8-core 4.2 GHz

– I/O enhancements

• New I/O for POWER6

– PCIe 12X I/O drawers & SFF Disk

– New PCIe adapters: SAS PCIe, SAS PCI-X, FCoE

– High performance solid state drive (SSD)

– USB Removable Disk Drive

• New I/O for POWER5


PCI-X DDR 1.5GB Cache SAS RAID Adapter

1. High interest to IBM i (5.4 and later) clients moving off SCSI disk drives

• Already had SCSI version of this controller
• Provides great disk performance (also true for AIX/Linux)

2. Great controller for SSD providing highest performance
• AIX, IBM i or Linux

Industry unique
• 1.5GB write cache
• 1.6GB read cache

Huge Cache SAS RAID Disk/SSD Controller

#5904

#5906 (BSC for 24-inch)

#5908 (BSC for 19-inch)

CCIN = 572F/575C


PCI-X DDR 1.5GB Cache SAS RAID Controller

• “Replacement” for IBM i SCSI 1.5GB write cache controller …. Nice “add” for AIX/Linux

• Double card PCI-X adapter takes 2 slots

• Write cache protection included (auxiliary cache)

• 1.6GB read cache

• IBM i 5.4, AIX 5.3, SLES10, RHEL5.2 or later required

• POWER6* 520, 550, 560, 570, 595

• Max of 60 HDD per controller

• Max of 8 SSD per controller

• EXP 12S SAS Disk drawer

• 2U disk drawer

• 19-inch rackmount

• 12 disk bays per drawer

• Disk currently available

– 139/146 GB

– 282/300 GB

– 428/450 GB

* System i POWER5 also supported

Can also be used in Power CEC


1.5GB Cache RAID Adapter Performance

• Comparison of the SCSI 1.5GB adapter versus the SAS 1.5GB adapter

• SAS 1.5GB runs more disk drives per adapter

• SAS 1.5GB Faster

Up to 35% more powerful

“Faster” Note: For applications with lots of reads/writes per transaction, even a small response time improvement can add up and can result in a noticeable user response time improvement.

1.5GB Adapter Comparison Running 36 HDDs

Chart shows I/O response time as larger and larger amounts of workload per minute are run

On this chart, the farther to the right and to the bottom, the better


#5904/5906/5908 1.5GB Cache RAID Adapter Placement

• Double card adapter takes two adjacent PCI-X slots

• CEC

– in PCI-X slots on the Power 520, 550, 560, or 570. (8203, 8204, 8234, 9117)

– Note: NOT in POWER6 9406, 9407, 9408, 9409 (convert to 9117, 8203, 8204)

– Note: NOT in POWER5 CEC

• 12X I/O drawers

– #5796 / 7314-G30 Max 2 (#5908)

– #5797/5798 Max 8 (#5906)

– #5802/5803/5873 Max 0 (zero); these drawers have PCIe slots

• RIO/HSL I/O drawers

– #0595/5095 Max 2 (#5904) only C1/C2, C2/C3, C3/C4

– #5094/5096 Max 1 (#5904) only slots C14/15

– #5294/5296 Max 2 (#5904) only slots C14/15

– #5790 Max 2 (#5908)

– Note: Not in #0588/5088, 7311-D20, 7311-D11


#5904/5906/5908 Cabling to #5886 EXP 12S Drawer

* max of 60 HDD (5 #5886 drawers) per adapter on POWER6. POWER5 does not support use of EE cables (max 36 HDD)

3 SAS ports on adapter for drawer attachment

EE cables cascade from one EXP 12S to a second drawer*

YO SAS cable

– #3691 1.5m (4.9 ft)

– #3692 3.0m (9.8 ft)

– #3693 6.0m (19.6 ft)

– #3694 15m (49.2 ft)

“Pairing” of the 1.5GB adapter like the #5902/03 adapters with X cables is not announced. Use mirrored controllers and drives instead for this level of redundancy.


#5904/5908 Cabling to Power 520/550 & 560/570 CEC Drives

Use SAS AI Cables for CEC

– #3679 1 meter

3 SAS ports on Adapter

Power 520/550 (with #5904)

• Drive split backplane for AIX/Linux

• Can drive SSD or HDD (but not both)

• Only uses one #5904 SAS adapter port; the other two ports can be used for #5886 EXP12S Disk Drawers (SAS YO cable), assuming no SSD

Power 560/570 (with #5908)

• Drive split backplane (3 bays) #3650 for AIX/Linux

• Drive full backplane (6 bays) #3651 for AIX/IBM i/Linux

• Can drive SSD or HDD (but not both)

• Only uses one #5908 SAS adapter port; the other ports can be used for:

• #5886 EXP12S Disk Drawer, max 2 (SAS YO cable)

• Another 560/570 CEC drawer of the same system, max 2 (SAS AI cable)


Sample Config #1: 1.5GB Controller Price Comparison – 24-to-24

SCSI Config (USA Power 520 list price)

• Adapter #5778 $ 8,407

• 1 EXP24 Disk Drawer $ 8,433

• 24 141GB SCSI disk $23,544

SAS Config (USA Power 520 list price)

• Adapter #5904 $ 8,500

• 2 EXP 12S Disk Drawers $ 9,000

• 24 139GB SAS disk $ 11,952

[Bar chart: total list price of the SAS vs SCSI configurations, stacked by adapter, disk drawers, and disk]

All prices shown are IBM's USA suggested list prices as of April 2009 and are subject to change without notice; reseller prices may vary.

Each configuration: 24 disks, 24 bays, 1 adapter

Plus the SAS adapter typically able to handle more drives, further increasing savings
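Totalling the sample-config list prices above (April 2009 USA list prices from the slide) shows where the savings come from:

```python
# Sample Config #1: like-for-like 24-drive configurations
scsi = {"adapter #5778": 8407, "EXP24 drawer": 8433, "24x 141GB SCSI": 23544}
sas  = {"adapter #5904": 8500, "2x EXP 12S":   9000, "24x 139GB SAS":  11952}

scsi_total = sum(scsi.values())   # 40,384
sas_total  = sum(sas.values())    # 29,452
savings = scsi_total - sas_total

print(f"SCSI: ${scsi_total:,}  SAS: ${sas_total:,}  savings: ${savings:,}")
print(f"SAS config is {savings / scsi_total:.0%} cheaper")
```

The SAS configuration comes in roughly $11,000 (about 27%) below the SCSI configuration for the same 24 drives.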


Sample Config #2: 1.5GB Controller Comparison – Growth 30-to-24

SCSI Config (USA Power 520 list price)

• Adapter #5778 $ 8,407

• 1 EXP24 Disk Drawer $ 8,433

• 24 141GB SCSI disk $23,544

SAS Config (USA Power 520 list price)

• Adapter #5904 $ 8,500

• 3 EXP 12S Disk Drawers $ 13,500

• 30 282GB SAS disk $ 34,500

[Bar chart: total list price of the SAS vs SCSI configurations, stacked by adapter, disk drawers, and disk]

All prices shown are IBM's USA suggested list prices as of April 2009 and are subject to change without notice; reseller prices may vary.

SCSI: 24 x 141GB disks, 24 bays, 1 adapter. SAS: 30 x 282GB disks, 36 bays, 1 adapter.

Plus there are six empty SAS bays still available for even more growth
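A cost-per-gigabyte view of the growth comparison above (using the slide's April 2009 USA list prices) makes the point more directly: the SAS configuration costs more in total but buys far more capacity per dollar.

```python
# Sample Config #2: growth comparison, cost per GB
scsi_total, scsi_gb = 8407 + 8433 + 23544, 24 * 141   # $40,384 for 3,384 GB
sas_total,  sas_gb  = 8500 + 13500 + 34500, 30 * 282  # $56,500 for 8,460 GB

print(f"SCSI: ${scsi_total / scsi_gb:.2f}/GB")  # about $11.93/GB
print(f"SAS:  ${sas_total / sas_gb:.2f}/GB")    # about $6.68/GB
```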


#5903 vs #5904/6/8 Comparison

Pair of #5903 vs one #5904/6/8:

• PCI slots required: two PCIe slots vs two adjacent PCI-X slots

• Write cache: 380MB (physically 2x380MB, but mirrored) vs 1500MB effective*

• Read cache: 0 vs 1600MB effective*

• Max drives attached**: 48 HDD / 9*** SSD vs 60 HDD / 8 SSD

• Rule of thumb – “typical” max HDD attached (yours may vary): 12 – 18 (maybe up to 24) vs 24 – 30 (maybe up to 36)

• Rule of thumb – “typical” max SSD attached (yours may vary): 3 – 4 (may max out with busy SSD & “typical” write/read mix) vs 5 – 6 (may max out with busy SSD)

• List price (assuming model 550): $4,399 (for pair) vs $8,500 (for one)

* uses compression, physical cache smaller than effective
** with busy drives, adapter becomes the performance bottleneck
*** quantity 9 picked for packaging convenience; 5904/6/8 actually capable of a larger number of SSDs than 5903

All prices are IBM's USA suggested list prices as of October 2009 and are subject to change without notice; reseller prices may vary.


#5903 to #5904/5906/5908 comparison

• #5903 Pro’s

– Usually as powerful as #5904/6/8 for a smaller number of drives

– Mandatory paired adapters provide controller redundancy (and write cache redundancy)

– Lower list price than #5904/6/8

– Newest PCIe technology provides longest term usage

– Greatly enhances IBM i usage of SFF drives in new 12X PCIe I/O drawers

• #5904/5906/5908 Pro’s

– More powerful, can support more drives

– Double slot card contains integrated write cache redundancy

• When two adapters are mirrored or paired, write cache continues to be used even if one adapter is not in operation

– PCI-X DDR technology provides flexibility to use many existing RIO/HSL and 12X I/O drawers

– Can be used on System i POWER5 as well as POWER6 servers


Combined 2009 Agenda (April + October)

• Faster Power 520/550

– Power 520 ….. 4.7 GHz

– Power 550 ….. 5.0 GHz

• More powerful Power BladeCenter

– JS23/JS43 …. 4-core & 8-core 4.2 GHz

– I/O enhancements

• New I/O for POWER6

– PCIe 12X I/O drawers & SFF Disk

– New PCIe adapters: SAS PCIe, SAS PCI-X, FCoE

– High performance solid state drive (SSD)

– USB Removable Disk Drive

• New I/O for POWER5

Oct 2009


Fibre Channel over Ethernet

• Save PCI slots
• Save switches
• Save space/electrical/cooling
• Simplify wiring
• Improve flexibility

CEC or I/O drawer

FC Switch

Ethernet

Ethernet and Fibre Channel cables

Ethernet Switch

CEC or I/O drawer

FC

Ethernet cable

Ethernet device / switch

Fibre Channel cable

Fibre Channel (FC) device or FC switch

rack

FCoE Switch

FCoE

Ethernet cables

CEC or I/O drawer

rack

Ethernet cable

Ethernet device / switch or FCoE device / switch

Fibre Channel cable

Fibre Channel (FC) device or FC switch


Converged Network Adapter (CNA)

• FCoE uses Converged Network Adapters (CNA)
• CNAs run Ethernet NIC traffic and/or Fibre Channel traffic
– Enhanced Ethernet protocol supports FC traffic

• Enhancements add loss-less data transmission and additional management functions

– FCoE also called FCoCEE – Fibre Channel over Converged Enhanced Ethernet

• Physically, CNAs use 10Gb Ethernet ports
– Each port can run all NIC, all FC, or mixed NIC/FC traffic

• AIX & Linux support
– AIX 5.3 or later, SLES 10, RHEL 5.4 or later
– SOD for NPIV function 1H 2010 through VIOS

• VIOS support
– VIOS 2.1.2.0 or later

• Limited IBM i support
– NIC supported only through VIOS, requires IBM i 6.1.1
– SOD for FC and NPIV function 2H 2010 (both functions thru VIOS)


FCoE Physical Connections

• Cabling from FCoE PCIe adapter or BladeCenter Pass Thru module is Ethernet SR Optical Fibre

• Cabling goes from FCoE Adapter to

a) FCoE switch

b) Ethernet switch

FCoE Switch

FCoE

Ethernet cables

Ethernet cable

Ethernet device / switch or FCoE device / switch

Fibre Channel cable

Fibre Channel (FC) device or FC switch

CEC or I/O drawer

rack

• Ethernet cabling from FCoE switch goes to

a) FCoE adapter

b) Ethernet switch (no FC traffic in this case)

• Fibre Channel cabling from FCoE switch goes to

a) Fibre Channel Switch

b) Fibre Channel device (since most devices don’t have an FCoE adapter port yet)


Implementing FCoE with Existing Networks

• Mixing FCoE and existing FC and Ethernet networks is easy and expected

• Most probable implementation is “from the edge” … adding FCoE with new equipment while keeping existing Ethernet and FC hardware/cabling in place until it makes sense to replace with FCoE

[Diagram: multiple existing Ethernet networks (switches, adapters, cabling) and multiple existing Fibre Channel networks (switches, adapters, cabling) converging at the edge on new FCoE switches, adapters, and cabling]

Note: if you need a 1Gb Ethernet switch, FCoE does not provide this physical connection

IBM announced two FCoE switches in July 2009, the IBM Converged Switch B32 (5758-B32) and the Cisco Nexus 5000 for IBM System Storage (3722-S51)


#5708 10Gb FCoE PCIe Dual Port Adapter

• #5708 is a CNA (Converged Network Adapter)
• Dual 10Gb ports

– Physically are Ethernet ports

– Each port can run all NIC, all FC, or mixed NIC/FC traffic

– SR optical fiber cabling

• PCIe adapter supported on
– POWER6 520/550/560/570/575/595 (located in CEC or I/O drawer PCIe slots)

• AIX & Linux support
– AIX 5.3 with the 5300-11 Technology Level, or later
– AIX 6.1 with the 6100-04 Technology Level, or later
– SUSE Linux Enterprise Server 10 Service Pack 3 or later
– Red Hat Enterprise Linux 5.4 or later
– SOD for NPIV function 1H 2010 through VIOS

• VIOS support
– VIOS 2.1.2.0 or later

• Limited IBM i support
– NIC supported only through VIOS, requires IBM i 6.1.1
– SOD for FC and NPIV function 2H 2010 (both functions thru VIOS)

• Firmware level required: 3.5.0 or later

• PCIe 8x Gen 1 Adapter

• CCIN = 2B3B


#5708 FCoE Configuration/Performance Considerations

• At a high level
– Very good PCIe configuration flexibility

• Note only newer 12X I/O drawers have PCIe slots and only some of the PCI slots in POWER6 520/550/560/570/575 system unit are PCIe slots

• Need to coordinate usage with FCoE switches

– Great performance
• FC: #5708 10Gb connectivity provides up to 45% more throughput than the 8Gb FC adapter
• Ethernet NIC: with two ports versus one port, #5708 has at least a 50% total throughput advantage

• High level, executive sales personnel stop reading here

• Technical sales specialists knowing that “it always depends”, please study additional details on the following slide


#5708 FCoE Configuration/Performance Considerations

• #5708 has dual 10Gb ports
– PCIe slot can not support both ports running at max
– Assuming both ports busy, probably get around 1.5X a single port, not 2X

• For NIC performance, recommend max of one #5708 adapter with high usage per four active processor cores

• For NIC connectivity, recommend max of two #5708 adapters with high usage per one physical processor core

• Obviously if consolidating workloads, need to plan for combined peak work

• Fibre Channel: fairly easy to compare to the PCIe dual-port 8Gb FC adapter #5735
– Can achieve 5-45% higher throughput
– Can attach via FCoE switch to 1Gb, 2Gb, 4Gb, 8Gb switches/adapters

• Ethernet: tougher comparison as PCIe Ethernet 10Gb adapters are single port cards
– Compared to a PCIe single-port 10Gb Ethernet adapter, the two-port #5708 total throughput is about 1.5x more, assuming very busy adapters
– Compared to two single-port adapters, the #5708 throughput is lower, but the non-#5708 solution needs two PCIe slots
– If you are 1) only using Ethernet NIC workloads with normal frames (MTU=1500) and 2) pushing the performance maximums for that port, THEN comparing one port of the #5708 FCoE versus the #5732/5769, you can get up to 20% more throughput with the #5732/5769. Similarly you can get up to 15% more throughput with the #5772 versus the #5708. If you are using jumbo frames (MTU=9000), one-port throughput differences are negligible between the #5732/69/72 and the #5708.
– Comparing one port of the #5708 to the one-port #5732/69/72 when pushing the performance maximums for that port, you can get up to 30% better latency with the #5732/5769/5772 compared to the #5708. So if you have an application sensitive to latency, like high performance clustering with high traffic, carefully look at this aspect.
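The NIC sizing rules of thumb above can be expressed as a tiny helper. The "one per four cores" and "two per core" figures are the slide's own guidance; the function names are mine:

```python
def max_5708_for_performance(active_cores: int) -> int:
    # Rule of thumb: max one heavily used #5708 per four active processor cores.
    return active_cores // 4

def max_5708_for_connectivity(physical_cores: int) -> int:
    # Rule of thumb: max two #5708 adapters per physical processor core.
    return physical_cores * 2

print(max_5708_for_performance(8))   # 2 heavily used adapters on 8 cores
print(max_5708_for_connectivity(8))  # up to 16 adapters for connectivity
```

As the slide notes, if you are consolidating workloads you still need to plan for the combined peak, not just these static limits.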


#8725 QLogic 2-port 10Gb CNA CFFh

• #8725 is a CNA (Converged Network Adapter)
• Dual 10Gb ports
– Physically are Ethernet ports
– Each port can run all NIC, all FC, or mixed NIC/FC traffic
– #8725 connected to midplane (no cables)
– Use #5412 10GEnet Pass Thru Module on other side of midplane to connect to FCoE switch (with SR optical fiber or copper cables)

• Adapter supported on BladeCenter
– JS12, JS22, JS23, JS43
– In BladeCenter H enclosures

• AIX & Linux support
– AIX 5.3 with the 5300-11 Technology Level, or later
– AIX 6.1 with the 6100-04 Technology Level, or later
– SUSE Linux Enterprise Server 10 Service Pack 3 or later
– Red Hat Enterprise Linux 5.4 or later
– SOD for NPIV function 1H 2010 through VIOS

• VIOS support
– VIOS 2.1.2.0 or later

• Limited IBM i support
– NIC supported only through VIOS, requires IBM i 6.1.1
– SOD for FC and NPIV function 2H 2010 (both functions thru VIOS)

• Firmware level required: 3.5.0 or later


FCoE “Top of Rack” Switches

IBM Converged Switch B32 (5758-B32)

– 24 ports FCoE and 8 ports 8 Gbps FC

Cisco Nexus 5010 for IBM System Storage (3722-S51)
– 20 ports FCoE plus choice of three expansion modules
• 6-port FCoE, 4-port 4 Gbps FC + 4-port FCoE, or 8-port 4 Gbps FC

Cisco Nexus 5020 for IBM System Storage (3722-S52)
– 40 ports FCoE plus choice of three expansion modules
• 6-port FCoE, 4-port 4 Gbps FC + 4-port FCoE, or 8-port 4 Gbps FC

IBM announced FCoE switches in July 2009


Combined 2009 Agenda (April + October)

• Faster Power 520/550

– Power 520 ….. 4.7 GHz

– Power 550 ….. 5.0 GHz

• More powerful Power BladeCenter

– JS23/JS43 …. 4-core & 8-core 4.2 GHz

– I/O enhancements

• New I/O for POWER6

– PCIe 12X I/O drawers & SFF Disk

– New PCIe adapters: SAS PCIe, SAS PCI-X, FCoE

– High performance solid state drive (SSD)

– USB Removable Disk Drive

• New I/O for POWER5


Solid State Drives (SSD) Matching Applications’ Need

• SSDs represent a break-through technology – they will become pervasive

• Today’s applications can often benefit with a faster storage option

• SSD high speed can really help get rid of I/O bottlenecks, bridging the gap between memory and disk speeds

– Improve performance

– And save space, energy at the same time

Access speed, from processors out to disk:

• Processors: < 10s of ns – very, very, very, very, very fast

• Memory: ~100 ns – very, very, very fast

• SSD: ~200,000 ns – fast

• Disk: 1,000,000 – 8,000,000 ns – very, very slow comparatively
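Putting the access-speed figures above side by side shows the gap SSDs bridge. The numbers are the slide's rough latencies (disk taken at the midpoint of its 1M–8M ns range, an assumption for this sketch):

```python
LATENCY_NS = {
    "memory": 100,        # ~100 ns
    "ssd": 200_000,       # ~200,000 ns
    "disk": 4_000_000,    # midpoint of the 1,000,000 - 8,000,000 ns range
}

print(f"disk vs memory: {LATENCY_NS['disk'] // LATENCY_NS['memory']:,}x slower")
print(f"ssd vs memory:  {LATENCY_NS['ssd'] // LATENCY_NS['memory']:,}x slower")
print(f"disk vs ssd:    {LATENCY_NS['disk'] // LATENCY_NS['ssd']}x slower")
```

Even at the midpoint assumption, disk sits roughly 20x behind SSD, which is why SSD can relieve I/O bottlenecks that faster disk alone cannot.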


Attend separate SSD session for the detail you need

SMP16 – Power

sAM03+4 – Storage

Exhibitor booth #37


Combined 2009 Agenda (April + October)

• Faster Power 520/550

– Power 520 ….. 4.7 GHz

– Power 550 ….. 5.0 GHz

• More powerful Power BladeCenter

– JS23/JS43 …. 4-core & 8-core 4.2 GHz

– I/O enhancements

• New I/O for POWER6

– PCIe 12X I/O drawers & SFF Disk

– New PCIe adapters: SAS PCIe, SAS PCI-X, FCoE

– High performance solid state drive (SSD)

– USB Removable Disk Drive

• New I/O for POWER5


USB Removable Disk Drive

• Entry tape alternative to VXA-2, VXA-320, DAT72, DAT160, or 8mm tape drives

• Supports boot, save/restore like a tape drive
• Also supports UDF files like a CD/DVD

• For AIX/Linux (not IBM i) on POWER6 servers
• Rugged
• Fast
• Great for most application logging
• Lower total cost of ownership for many tape cartridge users


USB Removable Disk Drive Considerations

USB Removable Disk Drive vs tape drives:

• External packaging / attachability: USB – small, mobile, easy USB attach; tape – larger, SAS PCI adapter attachment

• Backup/restore time: USB – faster than entry tape; tape – entry tape slower

• Head cleaning & cleaning cartridge: USB – no; tape – yes

• Encryption: USB – done by server application, not docking station; tape – lower end drives use server application, not drive

• Price of cartridge: USB – higher; tape – lower

• Maintenance for drive/dock: USB – zero; tape – not zero

• Auto loader: USB – no; tape – not offered on lower end drives

• Compression: USB – done by server, not docking station; tape – most drives have built-in compression

• Rugged (drop & contamination resistant): USB – yes; tape – less rugged

• Price of drive/dock: USB – lower; tape – higher

• Random access to middle or end of cartridge: USB – yes (with UDF file system); tape – no

• Susceptibility to dirt/dust: USB – very resistant; tape – susceptible

• Cartridge max capacity: USB – 500GB uncompressed, more than entry tape; tape – entry cartridges less

• Speed: USB – higher than entry drives; tape – entry drives slower


Excellent Fit Scenario for Removable Disk Drive

• Application data logging

– Small/mid-size business location wants to immediately write each transaction to tape (logging) as well as update the server

– Methodology provides multiple redundancy

– Allows easy transportability to backup location or server

– Typical examples include retail, hotel, distribution, or credit union applications

• Problems:

– Writing individual transactions is hard on tape drives and tape cartridges due to constant start/stop

– Server in a “dirty” office (dust/dirt) exacerbates the tape problems

– Time to write on tape slows application responsiveness

• Solution = Removable Disk Drive

– Far better for stop/starts

– Rugged: much less problem with dust/dirt

– Fast


USB Removable Disk Drive Features

#1104 USB External Docking Station
• Supported on POWER6 520/550/560/570
• Free standing, connected to USB port on back of CEC or to PCI adapter with USB ports

#1103 USB Internal Docking Station
• Placed in HH slot of POWER6 520/550 CEC
• Connected to USB connector inside 520/550 CEC, disabling the USB port on the operator panel (leaves two external USB ports available)
• USB & power cables included with #1103

USB Disk Drives (Cartridges)

• #1106 160GB

• #1107 500GB

#1104

#1103

Config rule: when ordering an #1103 or #1104 docking station, you must order at least one #1106 or #1107


Height: 51.8 mm (2.04 in)
Width: 109.8 mm (4.32 in)

Length: 177.5 mm (7.0 in)

Weight: 540 g (1.19 lbs)

#1104 USB External Docking Station

• Supported on POWER6 520/550/560/570
• Free standing, connected to USB port on back of CEC or to PCI adapter with USB ports

• #1104 includes USB cable, universal power transformer, power plug converters, and power cords

• Can plug into PDU
• Shelf or mounting bracket not part of #1104. If in rack, set on top of another mounted piece of equipment. Pre-made filler for empty rack space on the sides of the #1104 not provided.

Config rule: when ordering an #1103 or #1104, you must order at least one #1106 160GB or #1107 500GB Disk Drive cartridge


USB Removable Disk Drive vs DAT

USB can provide better total cost of ownership. Key variables include the number of cartridges needed and lifespan of cartridges.

USB Removable Disk Drive #1103/1104 vs DAT 80/160 Tape Drive #5619:

• Speed: USB – significantly faster with most applications; DAT – good

• Quantity of times media used: USB – up to 10,000*; DAT – up to 200*

• Price of 160GB media (on 8203-E4A): USB – #1106 $315, 160GB uncompressed; DAT – estimate** $45, 80GB uncompressed

• Price of 500GB media (on 8203-E4A): USB – #1107 $709, 500GB uncompressed; DAT – n/a

• Price of drive (on 8203-E4A): USB – $225 / 275; DAT – $1,661

• Compression: USB – done by server application; DAT – usually done by drive

• Read/write DAT72 cartridges: USB – no; DAT – yes

• Maintenance: USB – $0; DAT – $35

• Price of cleaning cartridge: USB – $0; DAT – estimate** $35

* data provided by USB removable Disk Drive supplier** based on quick Internet search, your price will vary

Non-cartridge DAT prices are IBM's USA suggested list prices as of October 2009 and are subject to change without notice; reseller prices may vary. USB prices are projected USA list prices and are subject to change. Formal USB prices are planned to be released in October 2009.


Internal Break Even Example DAT vs USB Drive

Three year analysis – only using hard costs
– Assuming DAT cartridges last 1.5 years … buy at time 0, 1.5 and 3yr
– Assuming 1 cleaning cartridge per year
– Assuming same number “n” of cartridges in use at one time (USB or DAT)
– The more cartridges needed, the better the case for DAT

Internal

Non-cartridge DAT prices are IBM's USA suggested list prices as of October 2009 and are subject to change without notice; reseller prices may vary. USB prices are projected USA list prices and very subject to change. Formal USB prices are planned to be released in 2009 October.

[Chart: three-year cost versus number of cartridges (5 to 25); the DAT and USB cost lines cross at roughly 13 cartridges]

Breakeven about 13.2 cartridges

DAT cartridge prices based on quick Internet search, your price will vary
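The break-even analysis above can be sketched as a simple cost model. Prices follow the earlier USB-vs-DAT slide; the slide's exact model isn't published, so this won't necessarily reproduce the "about 13.2 cartridges" figure – it only shows the shape of the calculation:

```python
def three_year_cost_dat(n_cartridges: int) -> int:
    drive = 1661                   # #5619 DAT 80/160 drive list price
    cleaning = 3 * 35              # one cleaning cartridge per year, ~$35 each
    # DAT cartridges last ~1.5 years: bought at time 0, 1.5 and 3 years
    media = 3 * n_cartridges * 45  # ~$45 per DAT cartridge (slide's estimate)
    return drive + cleaning + media

def three_year_cost_usb(n_cartridges: int) -> int:
    dock = 225                     # #1103 internal docking station
    media = n_cartridges * 315     # #1106 160GB cartridge, lasts the 3 years
    return dock + media

for n in (5, 10, 15, 20):
    print(n, three_year_cost_dat(n), three_year_cost_usb(n))
```

The shape matches the slide's conclusion: DAT carries a large fixed drive cost but cheaper media, so a large cartridge count eventually favors DAT, while small counts favor USB.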


External Break Even Example DAT vs USB Drive

Three year analysis – only using hard costs
– Assuming DAT cartridges last 1.5 years … buy at time 0, 1.5 and 3yr
– Assuming 1 cleaning cartridge per year
– Assuming same number “n” of cartridges in use at one time (USB or DAT)
– The more cartridges needed, the better the case for DAT

External

Non-cartridge DAT prices are IBM's USA suggested list prices as of October 2009 and are subject to change without notice; reseller prices may vary. USB prices are projected USA list prices and very subject to change. Formal USB prices are planned to be released in 2009 October.

[Chart: three-year cost versus number of cartridges (15 to 35); the DAT and USB cost lines cross at roughly 26 cartridges]

Breakeven about 25.8 cartridges

DAT cartridge prices based on quick Internet search, your price will vary


USB Removable Disk Drive Additional Information

• 160GB and 500GB native (uncompressed) capacity disk drives/cartridges
• Uses SATA HDD technology
• Compression provided by system application, not drive/docking station

• USB 2.0
• RDX™ (Removable Disk X – name from supplier)
• Configures as USBMS01/02/03
• Warranty = normal 1 yr for #1103/4/6/7. Maintenance = standard for feature code of server for #1103/4 docking station. Maintenance not provided for #1106/7 (supply item)

• Software support
• AIX 5.3 or later
• IBM i – no support
• Linux SLES10 or later
• Linux RHEL 4.7 or later
• VIOS – no support

• Additional white paper information
• AIX usage and configuration tips
• Planned to be available by end of October, probably on the IBM TechDoc Web site

• See RDX vendor web page: www.rdxstorage.com/newsroom/papers.php


Combined 2009 Agenda (April + October)

• Faster Power 520/550

– Power 520 ….. 4.7 GHz

– Power 550 ….. 5.0 GHz

• More powerful Power BladeCenter

– JS23/JS43 …. 4-core & 8-core 4.2 GHz

– I/O enhancements

• New I/O for POWER6

– PCIe 12X I/O drawers & SFF Disk

– New PCIe adapters: SAS PCIe, SAS PCI-X, FCoE

– High performance solid state drive (SSD)

– USB Removable Disk Drive

• New I/O for POWER5


POWER5 Enhancements

POWER5 System i
• Add SAS disk support

– 5886 EXP 12S Disk Drawer

– SAS 3.5-inch disk drives

– #5904 & 5908 PCI-X 1.5GB SAS Controller

– Requires IBM i 5.4 or later

• Add IOP-less Fibre Channel #5749

– Requires IBM i 6.1

– SAN DS8000 & DS6000*

POWER5 System p
• Add SAS disk support

– 5886 EXP 12S Disk Drawer

– SAS 3.5-inch disk drives

– #5912 & 5902 PCI-X SAS Controllers

– Requires AIX 5.3, SLES10, RHEL4.6 or later

• Add SAS tape support

– Using #5912 SAS Adapter

See add’l insights, next slide


POWER5 System i SAS Configuration Insights/Limitations

• SAS disk configuration rules identical to POWER6 EXCEPT

– SAS 1.5GB adapter not supported when placed in System i POWER5 CEC. Place in:
• #0595/5095 (max 2 #5904) slots C1/C2, C2/C3, C3/C4

• #5094/5096/8094 (max 1 #5904) slots C14/C15

• #5294/5296/8294 (max 2 #5904) slots C14/C15 (1 upper/1 lower)

• #5790 (max 2 #5908) slots C1/C2, C2/C3, C4/C5, C5/C6

– IBM i load source with SAS not supported

– Cascading of two #5886 I/O drawers not supported

• Means max 12 drives per adapter SAS port, lower max per controller than POWER6 max

– SSD not supported

– #5912 SAS adapter not supported

• SAS tape configurations

– Not supported on System i

• Smart Fibre Channel (IBM i 6.1)

– Disk only, Tape drives/libraries not supported

– #5749 Fibre Channel adapter supported in POWER5 CEC or I/O drawers/towers

• Supported I/O drawers/towers with PCI-X slots such as: #0588/5088, #0595/5095, #5094/5096/9094, #5294/5296/8294

• Not supported in I/O drawers/towers without PCI-X slots such as #0578/5078, #5074, and #5079

– SAN Load source not supported via this adapter

– eConfig support for DS6000 planned for May 12, a couple weeks later than DS8000


POWER5 System p SAS Configuration Insights/Limitations

• SAS disk configuration rules identical to POWER6 EXCEPT

– SAS adapters are supported when placed in System p POWER5 CEC

– AIX boot drive with SAS not supported

– Cascading of two #5886 I/O drawers not supported

• Means max 12 drives per adapter SAS port, lower max per controller than POWER6 max

– SSD not supported

– #5904/5908 1.5 GB cache adapter not supported


April / October Content NOT Covered

• Lots of detail on topics covered

• 520/550 CEC SFF drive slots (April)

• SATA DVD, 520/550/560/570 backplanes (April)

• PCIe & split backplanes options (April & Oct)

• New HMC enhancements (Oct)

• Withdrawal from marketing (April & Oct)


Combined 2009 Agenda (April + October)

• Faster Power 520/550

– Power 520 ….. 4.7 GHz

– Power 550 ….. 5.0 GHz

• More powerful Power BladeCenter

– JS23/JS43 …. 4-core & 8-core 4.2 GHz

– I/O enhancements

• New I/O for POWER6

– PCIe 12X I/O drawers & SFF Disk

– New PCIe adapters: SAS PCIe, SAS PCI-X, FCoE

– High performance solid state drive (SSD)

– USB Removable Disk Drive

• New I/O for POWER5

THANKS
SMP15


Power 520/550 SAS Disk Bays/Slots

• Must choose 3.5-inch or SFF bays
– Even if empty of drives
– Most packages have drives as part of definition

• Power 550 announced SFF in 2008, but did not GA until 2009. Power 520 announced April 2009. Power 520/550 GA SFF capability together.

Choose one: original six 3.5-inch SAS disk bays, or new eight SFF SAS disk bays

• Max number of HDD in CEC: 3.5-inch – 6; SFF – 8

• SSD option: 3.5-inch – no; SFF – yes

• Max 15k rpm HDD capacity drive as of May 2009: 3.5-inch – 428 / 450 GB; SFF – 69 / 73 GB

• Max 10k rpm HDD capacity drive as of May 2009: 3.5-inch – n/a; SFF – 146 GB

• Split backplane option (AIX/Linux): 3.5-inch – yes; SFF – yes

SFF requires IBM i 6.1


Refreshed DVD Drive Options

• The industry is moving off IDE DVD drives and onto SATA DVD drives

• To ensure supply, IBM is therefore gradually moving to SATA DVD drives on the Power 520/550/560/570 and will eventually withdraw the IDE drives from marketing

– SATA DVD features: #5743 DVD-ROM #5762 DVD-RAM

• SAME as IDE:

– Capacity/Media

– Performance
• AIX: remember to set your write cache parameter appropriately (cache on = better performance, but risk of invalid files on media if power is interrupted during a write; cache off = significantly lower DVD performance)
• IBM i / Linux always use cache on; there is no selection option

• Price*: Same price DVD-ROM, lower price DVD-RAM

• SATA Configuration/features

– New 520/550 DASD/Media backplane: #8308, #8310, #8346
• Different Ops panel cable used

– New 560/570 Media Enclosure and Backplane #5674, and DASD backplane #5878

– MES Notes:
• If you have a Power 520 CEC without a DVD, and it has the original non-SATA backplane, then after the IDE features have eventually been withdrawn from marketing you will have to order a new DASD/Media backplane as a prerequisite for the DVD.

• If you have a Power 560/570 CEC processor drawer without a DVD and with the original non-SATA backplanes, then after the IDE features have eventually been withdrawn from marketing, you will have to order both the #5674 and #5878 as prerequisites for the DVD. Also make sure the firmware level is recent enough to support these backplanes.

* All prices shown are IBM's planned USA suggested list prices as of April 2009 and are subject to change without notice; reseller prices may vary.


Ordering 8203/8204 DASD/Media Backplanes

Five options (must pick one)
– IDE options eventually withdrawn

– i Note: #8345, #8310 or #8346 required if IBM i is on the system and you want to use:
• i-formatted disk drives in the CEC or in a #5886 attached via the CEC SAS port
• DVD/tape drives in the CEC

– #8345/8310/8346 generally recommended for functional flexibility & performance
• Small incremental price compared to #8341/8308

DASD/media backplane options (feature code and support of IDE or SATA DVD):

Considerations                                     | #8341-IDE /  | #8345-IDE /  | #8346-SATA
                                                   | #8308-SATA   | #8310-SATA   | (no IDE version)
---------------------------------------------------+--------------+--------------+-----------------
Number and type of HDD bays                        | Six 3.5-inch | Six 3.5-inch | Eight SFF
Number of slimline media bays                      | 1            | 1            | 1
IBM i support                                      | No           | Yes          | Yes
Split backplane option (use cables #3669/3679)     | n/a          | Yes (i*)     | Yes (i*)
Lowest priced backplane                            | Yes          | No           | No
Can attach one EXP 12S Disk Drawer to CEC SAS port | No           | Yes          | Yes
  (with 175MB cache) (without split backplane)     |              |              |
Can use 175MB protected write cache                | No           | Yes          | Yes
  (disk performance & RAID-5/6)                    |              |              |

* Split backplane not supported by IBM i


520/550 Controllers for Split Backplanes

Power 520/550 split backplane (AIX/Linux, not IBM i)

– First half of drive bays always run by the embedded controller, optionally augmented by the #5679 write cache and RAID enabler

– Second half of drive bays when split run by SAS adapter (PCI-X or PCIe)

• Zero write cache PCI-X #5900/5912 or PCIe-#5901

• 1.5GB write cache PCI-X #5904 (this option makes the most sense only for high-performance SSD, given the price of this adapter)

• #5902/5903 medium cache adapter option not announced for 520/550 split backplane

• Split backplane incompatible with attaching a #5886 EXP12S Disk Drawer to CEC SAS port

Cabling notes (520/550 CEC, front and rear views):
• AI #3679 SAS cable (1 meter), SAS bulkhead port to SAS adapter port
– For the #5900/5912/5901/5904 adapters
• SAS cable, DASD backplane to SAS port on bulkhead
– For the 520: #3670; for the 550: #3669 (different cables because the 550 is deeper than the 520 and needs a longer cable)
– For the #8310/8345/8346 backplanes
– Note: a different cable is used if attaching a #5886 to the SAS port (520: #3674; 550: #3668)


Controllers for Split Backplanes

Power 560/570 split backplane

– Options with a zero write cache controller, using only 1 PCIe slot
• NEW April 2009: #5911 (see next slide); #5909 was the earlier offering

– Options for controllers with write cache, using up to 3 PCI slots per processor (2 slots for the adapter + 1 slot for #3650/3651)
• #3650 uses one PCIe slot – provides a pathway to 3 of 6 drive bays. #5902/5903/5908 adapters then run those 3 bays; #5902/5903/5908 use additional PCI slots (AIX/Linux, not IBM i)
• #3651 uses one PCIe slot – provides a pathway to 6 of 6 drive bays. #5902/5903/5908 adapters then run those 6 bays, not the embedded controller; #5902/5903/5908 use additional PCI slots (AIX/Linux/IBM i) (calling #3651 a "split backplane" is a little awkward)

Diagram notes (560/570 CEC drawers, front and rear views):
• #5908 is the only IBM i supported option for a write cache controller driving SAS bays in the 560/570 CEC
• A yellow card and dotted line represent #3650 or #3651, which takes one PCIe slot
• Not shown in the diagram:
– YR #3667 SAS cable to attach a pair of #5902/5903 adapters to #3650/3651
– YO #369x SAS cable to attach a #5908 adapter to a #5886 EXP12S Disk Drawer
• AI #3679 SAS cable (1 meter), #3650/3651 to SAS adapter port
– For the #5908 1.5GB SAS adapter


#5903 used in POWER6 570/560 for CEC Internal Disk

• Existing option for AIX/Linux. New for IBM i in October 2009

• #5903 is additional option to #5908 1.5GB Write Cache Controller with IBM i

• Power 560/570 “split” backplane feature #3651 used

– #3651 uses one PCIe slot – provides a pathway to 6 of 6 drive bays.

– #5902/5903/5908 (AIX/Linux) or #5903/5908 (IBM i) adapters then run those 6 bays.

Diagram notes (front and rear views, three 560/570 CEC drawers):
• A yellow dotted line and PCI card represent #3650 or #3651, taking one PCIe slot; IBM i does not support the #3650 split
• The diagram shows two YR #3667 SAS cables; the #5903 adapters could be in the same CEC drawer or, as shown, in different CEC drawers
• 4 PCI slots are used in this diagram, and the 3rd drawer's disk drives are run by that drawer's embedded controller with zero write cache


Power 560/570 Split Backplane – No Cache Option

Diagram notes:
• PCIe SAS adapter (the #5901 function is embedded, but the 5901 feature code is not used) connects into the DASD backplane
• External SAS cable #3679 connects the PCIe card to the backplane connection assembly

• For SAS adapter with zero write cache – NOT #5902/5903 or #5908

• More efficient use of PCI slots than using #3650 + #5900/5912/5901 adapter. Uses only one slot, not two slots.

• Earlier feature = #5909 Alternate SAS Controller for 3 of 6 internal SAS bays, for AIX or Linux®. Nothing else could be attached to the PCIe SAS card.

• New feature April 2009 = #5911. Exactly the same function as 5909 PLUS an extra SAS port available for attaching additional SAS disk or SAS tape

– AIX/Linux option, not IBM i

#5909 → #5911

3rd port available on #5911


Withdrawals Summary Announced April

• Three hardware withdrawal announcement letters 28 April

– Review all for full picture

• Key withdrawals from marketing

– SCSI disk & most controllers Aug 2009 (two short term exceptions)

– Systems Aug/Oct/Nov 2009

• Processor upgrades within POWER5 9405-520, 9406-520, 9406-570

• Features unique to POWER5 9405-520, 9406-520, 9406-570

• Processor upgrades within POWER5 9406-525, 9406-550, 9407-515

• Features unique to POWER5 9406-525, 9406-550, 9407-515

• 4-core Power 520 4.2 GHz

– I/O drawers/towers: Aug/Sep 2009

• #0595 & 7311-D20

• 7314-G30 (use #5796 instead)

– A number of PCI-X I/O cards including: May/Nov 2009

• #2793/4/6803/4 2-line w/ modem WAN; #4812 IXS; #6800/01 1Gb Ethernet; #4746 twinax


Withdrawals Details

• Key withdrawals from marketing

– SCSI disk 28 Aug 2009

• Two short-term exceptions … 146 GB drives only for 9119-FHA and 9xxx-51A

– Most SCSI disk controllers 28 Aug 2009

• EXP24 1.5GB Cache disk controller & #3736/5775 zero cache tape controller NOT withdrawn

– Systems

• Withdrawn 28 Aug 2009

– Processor upgrades within POWER5 MT/models 9405-520, 9406-520, 9406-570

– Features unique to POWER5 MT/models 9405-520, 9406-520, 9406-570

• Withdrawn 30 Oct 2009: 4-core 8203 4.2GHz

• Withdrawn 27 Nov 2009

– Processor upgrades within POWER5 MT/models 9406-525, 9406-550, 9407-515

– Features unique to POWER5 MT/models 9406-525, 9406-550, 9407-515

– I/O drawers/towers

• Already announced, effective 29 May: #5094/5096 & #5294/5296

• April announce, effective 28 Aug: #0595 (#5790 still available for IOP-based cards)

• April announce, effective 28 Sep: 7311-D20 (RIO) and 7314-G30 (use #5796 instead of G30)

– A number of older I/O cards/features … note especially

• April announce, effect 29 May 2009:

– #2793/2794/6803/6804 2-line w/ modem WAN (use PCIe alternative)

– #4812 IXS (use iSCSI or new LAN interface capability)

– #6800/6801 1Gb Ethernet

• April announce, effective 27 Nov 2009: #4746 twinax adapter


New Rack Mounted HMC – 7042-CR5 -- Oct 2009

• 7042-CR5 replacing 7042-CR4

– CR4 being withdrawn December 2009

– Normal technology refresh – use CR5 anywhere CR4 could be used

• Minimum firmware level: Version 7 Revision 350 (V7R350)

• Differences of note between CR5 and CR4

– More powerful processors in CR5
• Helpful for larger, more complex configurations
• Helpful for running more robust Systems Director applications

– 4GB memory standard on all CR5
• CR4 had 1GB standard with an option to grow to 4GB

– Four integrated 1Gb Ethernet ports
• CR4 had 2 integrated ports and then used PCI slots to add more

• Should all existing proposals be changed to CR5?

– If a large, complex configuration: suggest changing

– If the HMC might ship in 2010: suggest changing

– If a smaller, less complex configuration shipping in 2009: change if convenient; otherwise leave the proposal alone.

USA list pricing – compare CR5 to CR4
• "Bare" box: CR5 is higher
• "Full" box (with monitor, keyboard, mouse, etc. – base-to-base configs): CR5 modestly higher
• "Full" box including 4GB memory + 4 Ethernet ports: CR5 slightly higher


Additional IBM i Oct 2009 Hardware Enhancements

IBM i 6.1 with 6.1.1 machine firmware is enhanced with:

– Native support for IBM Systems Storage DS5100 and DS5300

• 4 December 2009 for DS5100 and DS5300 support

• Using 4Gb or 8Gb Fibre Channel adapters (#5749, #5774, #5735)

• With POWER6 servers (not POWER5)

– Support for XIV with PowerVM VIOS on POWER6 processors

– Support for NPIV through VIOS partitions with PowerVM

• Using 8Gb FC adapter: PCIe #5735 or BladeCenter #8240/8242/8271

• For tape libraries: 3584 (TS3500) and 3573 (TS3100 and TS3200)

• For the DS8000

– Hot spare for mirroring

IBM i 5.4 and IBM i 6.1 enhancements

– Native support for IBM Systems Storage DS8700

• POWER5 or POWER6

• Available 12 November 2009 for DS8700 support, vs 30 October for the other enhancements

• Using 4 or 8Gb FC adapters

– Support for ProtecTIER virtual tape storage device


Power 595 I/O Drawer Conversion Program

• Expanded option announced 2 September 2009

• Replace old RIO drawers with the newer #5797 12X I/O drawer at a lower price

• For POWER6 595 clients with RIO-attached 24-inch I/O drawers

– Convert #5791 or #5794 or 7040-61D (61D with #4643 DCA conversion)

– Ordering mechanics convert #5791/5794 to #5797, #5809 to #5797

• Can convert existing RIO GX adapter to 12X GX adapter at lower price also

– #1814 (RIO) to #1816 (12X)

• Benefits:

– Newer #5797 is faster and allows greater throughput

– Existing PCI-X adapters and SCSI disk drive investment can be preserved

– Eases future upgrade to POWER7 595 as #5797 planned to be supported, but not RIO-attached drawers

• Notes:

– Scheduled downtime is required to remove the RIO I/O drawer

– Conversion to even newer #5803/5873 12X drawer not announced as PCI-X adapters and SCSI disk drives not usable in these PCIe and SAS SFF drawers

– #5791 and #1814 conversions were announced April 2009; the September announcement augments them

– !!! Be careful how you order !!! If you do this as part of a POWER5-to-POWER6 upgrade order, you won't see the discount/trade-in pricing. First order the POWER5-to-POWER6 conversion, and then order the I/O drawer conversions separately.

RIO #5791/5794 → 12X #5797


Enterprise Systems Resiliency on Power 570 & 595
The roadmap to continuous availability got wider!

• Concurrent maintenance previously available…

✓ Single processor components

✓ Single memory components

✓ 12X GX adapter component

✓ Power supplies and regulators

✓ Disk drives and PCI adapters

✓ RIO & 12X I/O drawers

• Additional components added 3Q09 and now available…

✓ Multiple processor and memory components

✓ 12X I/O drawer permanent removal

✓ Power 595 Service Processors


Power Systems I/O & RAS Optimization Assessment

• Lab Services offering for Power 595 & 570 clients who want to optimize system resources to:

– Leverage Power System RAS functions – Concurrent Maintenance, Hot-node Add/Repair

– Enhance I/O performance with 12X

– Prepare for smooth transition to POWER7

• Scope of work: IBM will conduct a POWER5 or POWER6 595 system I/O and RAS assessment, analyzing the following:

– CEC Concurrent Maintenance hardware configuration

– HMC, FSP configuration

– LPAR configuration (LVT and SPT) – are these correctly laid out to support CCM?

– Required PTFs and service/fix packs needed to support 12X I/O migration and/or CCM

– Operating system configuration and release

– I/O configuration being migrated from HSL to 12X – supported and non-supported I/O components

– Racks, drawers, IOPs, IOAs, HDD, SSD, Smart IOAs, external storage

– VIOS configuration

– Live Partition Mobility design and architecture

– Data Center layout for migration

– Power, cooling, floor space, access

– Proposed IBM Sales or BP Hardware configurations

– Steps and processes required to complete a successful migration or implementation

– Business-allowed window for migration activities vs. actual requirements

• Time to complete: 40 to 80 hours, depending on customer complexity

• Contact Lab Services for more information & pricing or Allen Johnston – [email protected]


N-Port ID Virtualization (NPIV)

[Diagram: IBM i 6.1.1 (Oct 2009), AIX 5.3, AIX 6.1, and Linux partitions run above VIOS and the POWER Hypervisor; an 8Gb PCIe Fibre Channel adapter connects through a Fibre Channel switch to SAN storage and a tape library]

► LPARs have direct visibility on the SAN (zoning/masking)

► I/O virtualization configuration effort is dramatically reduced

► NPIV does not eliminate the need to virtualize PCI-family adapters

► Tape library support

The VIOS Fibre Channel adapter supports multiple World Wide Port Names / source identifiers

The physical adapter appears as multiple virtual adapters to the SAN / end-point device

A virtual adapter can be assigned to each of multiple operating systems sharing the physical adapter
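As a sketch of how this mapping is administered: on the VIOS command line, an NPIV mapping pairs a virtual Fibre Channel host adapter with an NPIV-capable physical port. The adapter names below (vfchost0, fcs0) are illustrative placeholders and will differ on a real system; commands are run from the VIOS padmin shell.

```shell
$ lsnports                               # list physical FC ports and their NPIV/fabric capability
$ vfcmap -vadapter vfchost0 -fcp fcs0    # map the client LPAR's virtual FC server adapter to port fcs0
$ lsmap -all -npiv                       # verify virtual-to-physical FC mappings and client status
```

Once mapped, the client partition's virtual adapter logs into the fabric with its own WWPNs, so SAN zoning and LUN masking are done against the client partition rather than the VIOS.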


7311-D20 I/O Drawer – “Gone”

• Some confusion about 7311-D20 availability

• Shows as “available”, BUT

– The definition of the minimum configuration which MUST be ordered now includes fairly expensive PCI telephony adapters

– Plus can’t order unless an approved i-listed RPQ 8A1766 is also on the order

– NET: For all practical purposes the 7311-D20 is withdrawn from marketing unless you are installing a telephony application!!

• Insights --

– Extremely limited supply of D20 parts

– Normally product would be withdrawn from marketing, but need to support a specific telephony application on POWER6 server which needs to be in a RIO-attached drawer

– If it were officially withdrawn, non-telephony orders placed through the withdrawn-products ordering process might have used up the very limited supply.

– Change to minimum product definition documented in announcement letter released 11 August 2009

– Focus on 12X I/O drawers for POWER6 servers. All RIO/HSL I/O drawers now effectively withdrawn from marketing.


This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM offerings available in your area.

Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. Send license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY 10504-1785 USA.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or guarantees either expressed or implied.

All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations and conditions.

IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal without notice.

IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies.

All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary.

IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.

Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally-available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document should verify the applicable data for their specific environment.

Revised September 26, 2006

Special notices


IBM, the IBM logo, ibm.com AIX, AIX (logo), AIX 6 (logo), AS/400, BladeCenter, Blue Gene, ClusterProven, DB2, ESCON, i5/OS, i5/OS (logo), IBM Business Partner (logo), IntelliStation, LoadLeveler, Lotus, Lotus Notes, Notes, Operating System/400, OS/400, PartnerLink, PartnerWorld, PowerPC, pSeries, Rational, RISC

System/6000, RS/6000, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, WebSphere, xSeries, z/OS, zSeries, AIX 5L, Chiphopper, Chipkill, Cloudscape, DB2 Universal Database, DS4000, DS6000, DS8000, EnergyScale, Enterprise Workload Manager, General Purpose File System, , GPFS, HACMP, HACMP/6000, HASM, IBM Systems Director Active Energy Manager, iSeries, Micro-Partitioning, POWER, PowerExecutive, PowerVM, PowerVM (logo), PowerHA, Power Architecture, Power

Everywhere, Power Family, POWER Hypervisor, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software (logo), POWER2, POWER3, POWER4, POWER4+, POWER5, POWER5+, POWER6, POWER6+, System i, System p, System p5, System Storage, System z, Tivoli Enterprise, TME 10,

Workload Partitions Manager and X-Architecture are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law

trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml

The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.UNIX is a registered trademark of The Open Group in the United States, other countries or both.

Linux is a registered trademark of Linus Torvalds in the United States, other countries or both.

Microsoft, Windows and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both.

Intel, Itanium, Pentium are registered trademarks and Xeon is a trademark of Intel Corporation or its subsidiaries in the United States, other countries or both.

AMD Opteron is a trademark of Advanced Micro Devices, Inc.

Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries or both.

TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC).

SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are trademarks of the Standard Performance Evaluation Corp (SPEC).

NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both.

AltiVec is a trademark of Freescale Semiconductor, Inc.

Cell Broadband Engine is a trademark of Sony Computer Entertainment Inc.

InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association. Other company, product and service names may be trademarks or service marks of others.

Revised April 24, 2008

Special notices (cont.)


The IBM benchmark results shown herein were derived using particular, well-configured, development-level and generally-available computer systems. Buyers should

consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark

consortium or benchmark vendor.

IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html.

All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX

Version 4.3, AIX 5L or AIX 6 were used. All other systems used previous versions of AIX. The SPEC CPU2006, SPEC2000, LINPACK, and Technical Computing benchmarks were compiled using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used: XL C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++

Advanced Edition V7.0 for Linux, and XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors, KAP 3.2 for FORTRAN and KAP/C 1.4.2 from Kuck & Associates and VAST-2 v4.01X8 from Pacific-Sierra Research. The preprocessors were purchased separately from these vendors. Other

software packages like IBM ESSL for AIX, MASS for AIX and Kazushige Goto’s BLAS Library for Linux were also used in some benchmarks.

For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.

TPC http://www.tpc.org

SPEC http://www.spec.org

LINPACK http://www.netlib.org/benchmark/performance.pdf

Pro/E http://www.proe.com

GPC http://www.spec.org/gpc

NotesBench http://www.notesbench.org

VolanoMark http://www.volano.com

STREAM http://www.cs.virginia.edu/stream/

SAP http://www.sap.com/benchmark/

Oracle Applications http://www.oracle.com/apps_benchmark/

PeopleSoft - To get information on PeopleSoft benchmarks, contact PeopleSoft directly

Siebel http://www.siebel.com/crm/performance_benchmark/index.shtm

Baan http://www.ssaglobal.com

Microsoft Exchange http://www.microsoft.com/exchange/evaluation/performance/default.asp

Veritest http://www.veritest.com/clients/reports

Fluent http://www.fluent.com/software/fluent/index.htm

TOP500 Supercomputers http://www.top500.org/

Ideas International http://www.ideasinternational.com/benchmark/bench.html

Storage Performance Council http://www.storageperformance.org/results

Revised January 15, 2008

Notes on benchmarks and values


Revised January 15, 2008

Notes on HPC benchmarks and values

The IBM benchmark results shown herein were derived using particular, well-configured, development-level and generally-available computer systems. Buyers should

consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark

consortium or benchmark vendor.

IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html.

All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX

Version 4.3 or AIX 5L were used. All other systems used previous versions of AIX. The SPEC CPU2000, LINPACK, and Technical Computing benchmarks were compiled using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used: XL C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++ Advanced Edition V7.0 for Linux, and

XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors, KAP 3.2 for FORTRAN and KAP/C 1.4.2 from Kuck & Associates and VAST-2 v4.01X8 from Pacific-Sierra Research. The preprocessors were purchased separately from these vendors. Other software packages like IBM ESSL

for AIX, MASS for AIX and Kazushige Goto’s BLAS Library for Linux were also used in some benchmarks.

For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.

SPEC http://www.spec.org

LINPACK http://www.netlib.org/benchmark/performance.pdf

Pro/E http://www.proe.com

GPC http://www.spec.org/gpc

STREAM http://www.cs.virginia.edu/stream/

Veritest http://www.veritest.com/clients/reports

Fluent http://www.fluent.com/software/fluent/index.htm

TOP500 Supercomputers http://www.top500.org/

AMBER http://amber.scripps.edu/

FLUENT http://www.fluent.com/software/fluent/fl5bench/index.htm

GAMESS http://www.msg.chem.iastate.edu/gamess

GAUSSIAN http://www.gaussian.com

ABAQUS http://www.abaqus.com/support/sup_tech_notes64.html

select Abaqus v6.4 Performance Data

ANSYS http://www.ansys.com/services/hardware_support/index.htm

select “Hardware Support Database”, then benchmarks.

ECLIPSE http://www.sis.slb.com/content/software/simulation/index.asp?seg=geoquest&

MM5 http://www.mmm.ucar.edu/mm5/

MSC.NASTRAN http://www.mscsoftware.com/support/prod%5Fsupport/nastran/performance/v04_sngl.cfm

STAR-CD www.cd-adapco.com/products/STAR-CD/performance/320/index/html

NAMD http://www.ks.uiuc.edu/Research/namd

HMMER http://hmmer.janelia.org/http://powerdev.osuosl.org/project/hmmerAltivecGen2mod


Revised April 2, 2007

Notes on performance estimates

rPerf for AIX

rPerf (Relative Performance) is an estimate of commercial processing performance relative to other IBM UNIX systems. It is derived from an IBM analytical model which uses characteristics from IBM internal workloads, TPC and SPEC benchmarks. The rPerf model is not intended to represent any specific public benchmark results and should not be reasonably used in that way. The model simulates some of the system operations such as CPU, cache and memory. However, the model does not simulate disk or network I/O operations.

• rPerf estimates are calculated based on systems with the latest levels of AIX and other pertinent software at the time of system announcement. Actual performance will vary based on application and configuration specifics. The IBM eServer pSeries 640 is the baseline reference system and has a value of 1.0. Although rPerf may be used to approximate relative IBM UNIX commercial processing performance, actual system performance may vary and is dependent upon many factors including system hardware configuration and software design and configuration. Note that the rPerf methodology used for the POWER6 systems is identical to that used for the POWER5 systems. Variations in incremental system performance may be observed in commercial workloads due to changes in the underlying system architecture.

All performance estimates are provided "AS IS" and no warranties or guarantees are expressed or implied by IBM. Buyers should consult other sources of information, including system benchmarks, and application sizing guides to evaluate the performance of a system they are considering buying. For additional information about rPerf, contact your local IBM office or IBM authorized reseller.
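Because rPerf is a purely relative metric (pSeries 640 = 1.0), sizing comparisons reduce to simple ratios. A toy illustration, using made-up rPerf values rather than any published figures:

```python
# Hypothetical rPerf values; the pSeries 640 baseline is defined as 1.0.
baseline_p640 = 1.0
current_system = 8.5    # made-up rPerf of the installed system
proposed_system = 17.0  # made-up rPerf of the proposed system

# rPerf is relative, so the estimated capacity ratio is a simple quotient.
ratio = proposed_system / current_system
print(f"Proposed system is roughly {ratio:.1f}x the current one")  # roughly 2.0x
```

As the notice above stresses, such ratios are estimates only; actual commercial throughput depends on workload, configuration, and software levels.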

========================================================================

CPW for IBM i

Commercial Processing Workload (CPW) is a relative measure of performance of processors running the IBM i operating system. Performance in customer environments may vary. The value is based on maximum configurations. More performance information is available in the Performance Capabilities Reference at: www.ibm.com/systems/i/solutions/perfmgmt/resource.html