Driving Cluster and Grid Technologies in HPC

Driving Cluster and Grid Technologies in HPC. ClusterWorld, San Jose, CA, June 25, 2003. David Barkai, HPC Computational Architect, Intel.


Driving Cluster and Grid Technologies in HPC

ClusterWorld, San Jose, CA

June 25, 2003

David Barkai, HPC Computational Architect

Intel

EMEA HPTC Virtual Team, National Lab Series

*Other brands and names are the property of their respective owners. © Copyright 2002 Intel Corporation. All Rights Reserved.

Outline

Observations about the Evolution of HPC
Technologies for Cluster Building Blocks
Intel’s Role in HPC
Choices: Matching Apps to Clusters
From Clusters to Grids
Where from here?


Era    Building blocks                                                Price/performance      System scale
1980   Custom CPUs, custom memory, custom packaging,                  ~$5 million/gigaflop   ~1 GF
       custom interconnects, custom OS
1990   COTS CPUs, COTS memory; custom packaging,                      ~$200K/gigaflop        ~100s of GF
       custom interconnects, custom OS
2000   COTS CPUs, COTS memory, COTS packaging,                        <$2K/gigaflop          >10 TF
       COTS interconnects, COTS OS
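The price-performance progression above works out to roughly two orders of magnitude per decade. A small sketch, using only the slide's round figures:

```python
# Price-performance milestones quoted in the table (approximate figures).
cost_per_gflop = {1980: 5_000_000, 1990: 200_000, 2000: 2_000}

def improvement(y0, y1):
    """Factor by which $/gigaflop dropped between two years."""
    return cost_per_gflop[y0] / cost_per_gflop[y1]

print(improvement(1980, 1990))  # 25.0
print(improvement(1990, 2000))  # 100.0
print(improvement(1980, 2000))  # 2500.0 over two decades
```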


The Enabling Hardware Technologies

– EUV lithography (15nm)
– TeraHertz transistor (performance up, power down)
– Bumpless Build-Up Layer (BBUL) packaging for the silicon die
– 52-megabit SRAM chip with a one-square-micron SRAM cell on 90nm
– Manufacturing at 130nm on 300mm wafers

www.intel.com/research/silicon


Silicon by the End of the Decade

[Chart: transistor count per processor on a log scale, 1970–2010: 4004, 8008, 8080, 8085, 8086, 286, 386, 486, Pentium® Pro, Itanium®, Xeon®, Itanium® 2; projected to ~2 billion transistors by 2010.]

[Chart: processor frequency on a log scale, 1970–2010: from the 4004 era through 3 GHz today, then 6.5 GHz and 14 GHz, with ~30 GHz projected by 2010.]

“30 gigahertz devices, 10 nanometer or less, delivering a tera instruction of performance by 2010.”(1)

1) Pat Gelsinger, Intel CTO, Spring 2002 IDF


The Growing Popularity of Cluster Computing

Clusters now account for nearly 20 percent of the TOP500: 93 systems in total, 56 of them Intel Architecture based.
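The arithmetic behind those figures, checked directly:

```python
# Share of the TOP500 list represented by clusters, per the figures above.
clusters, total = 93, 500
share_pct = 100 * clusters / total
print(share_pct)  # 18.6, i.e. "nearly 20 percent"

# Fraction of those clusters that are Intel Architecture based.
intel_based = 56
print(round(100 * intel_based / clusters, 1))  # 60.2
```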


High-Performance Computing Areas

Business, Science, Engineering


Outline

Observations about the Evolution of HPC
Technologies for Cluster Building Blocks
Intel’s Role in HPC
Choices: Matching Apps to Clusters
From Clusters to Grids
Where from here?


More than Silicon: 1985 – 2002

Processor line (system capability and processor frequency growing in step): i386, Intel® 486, Pentium®, Pentium® Pro, Pentium® II Xeon™, Pentium® III Xeon™, Itanium™, Xeon™.

Processor capabilities added along the way: 32-bit architecture, math coprocessor, MMX technology, superscalar execution, SMP support, NetBurst™ micro-architecture, L2 cache bus, large L2, Streaming SIMD Extensions, integrated L2, ECC, EPIC, MCA, 64-bit architecture, cartridge packaging, Hyper-Threading Technology (parallelism), SSE2.

Platform & chipset capabilities: 2-way, seamless 4-way, Profusion 8-way, large SMP (16–64P), PCI, AGP 2X/4X/8X, InfiniBand* technology, ultra-dense form factors, carrier-grade servers, fault tolerance, PCI Express. Future: multi-core, more GHz.

Innovating at all levels of the platform


Blade Clusters – The Next Wave?


Some choices of cluster interconnect

Interconnect     Topology      MPI Latency   Bandwidth
InfiniBand* 4x   Various       7.5 us        805 MB/s
Quadrics*        Fat Tree      < 5 us        340 MB/s
Dolphin* SCI     2D/3D Torus   < 4 us        320 MB/s
Myrinet*         Fat Tree      7 us          243 MB/s
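As a first-order illustration of why small messages are latency-bound, the usual cost model (time = latency + size/bandwidth) can be applied to the figures above. This is only a sketch: real MPI bandwidth also depends on protocol details such as the eager/rendezvous switchover.

```python
# First-order cost model for a point-to-point MPI transfer:
#   time = latency + message_size / peak_bandwidth
# Latency and bandwidth numbers are taken from the interconnect table.

interconnects = {
    "InfiniBand 4x": {"latency_us": 7.5, "bw_mbs": 805},
    "Quadrics":      {"latency_us": 5.0, "bw_mbs": 340},
    "Myrinet":       {"latency_us": 7.0, "bw_mbs": 243},
}

def effective_bandwidth(name, msg_bytes):
    """Achieved MB/s for a message of msg_bytes on the named fabric."""
    p = interconnects[name]
    time_s = p["latency_us"] * 1e-6 + msg_bytes / (p["bw_mbs"] * 1e6)
    return (msg_bytes / 1e6) / time_s

for size in (256, 65536, 1 << 20):
    print(size, round(effective_bandwidth("InfiniBand 4x", size), 1))
```

Note the crossover: for 256-byte messages the lower-latency Quadrics fabric outperforms InfiniBand despite less than half the peak bandwidth.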


InfiniBand and PCI Express

InfiniHost Express (PCI Express interface)
• PCI Express 8X interface
• 3rd-generation HCA
• Software compatible with PCI-X & PCI-X DDR InfiniHost HCA
• Memory optional
• Lower power

InfiniHost PCI
• 2nd-generation HCA
• Dual 10 Gb/sec 4X ports
• PCI-X interface
• Available NOW
• TSMC 0.18 process


InfiniBand MPI Throughput Comparison

Up to 3X the throughput of proprietary technologies (over 860 MB/sec for large MPI messages!)
Improvements ahead with new firmware and software releases.

[Chart: MPI bandwidth (MB/sec) vs. message size, from 4 bytes to ~1 MB, comparing InfiniBand, Myrinet, and Quadrics; InfiniBand leads at large message sizes.]

Source: Ohio State University, Xeon 2.4 GHz platform, thh driver 0.0.6, mvapich v0.8


InfiniBand MPI Latency

– Less than 7.0 usec latencies for small MPI messages
– 3X better latencies for large messages
– Continued improvements with new software releases

[Chart: MPI latency (us) vs. message size, from 4 bytes to 64 KB, comparing InfiniBand, Myrinet, and Quadrics.]

Source: Ohio State University, Xeon 2.4 GHz platform, thh driver 0.0.6, mvapich v0.8


Software Development Tools

– Compilers
– VTune™ Performance Analyzer
– Performance Libraries
– Intel® Threading Tools
– SW Products and Developer Services

www.intel.com/ids


Processors, Chipsets, Platform, Interconnect, Software


More than Silicon: Broad Choice of Solutions and Services

Scale-Up (SMP, ccNUMA): SGI 64/512P, NEC 32P, Unisys 16P
Scale-Out (Cluster): RackSaver DP/1U, HP DP/2U, HP 4P/4U, IBM 4P/8P/16P


Outline

Observations about the Evolution of HPC
Technologies for Cluster Building Blocks
Intel’s Role in HPC
Choices: Matching Apps to Clusters
From Clusters to Grids
Where from here?


Intel Clusters Rising to the "Top"

– Dramatic increase in share of the TOP500 list (from under 0.5% to over 11%)
– 56 of the TOP500 supercomputer sites are Intel-based (up from 3 in 1999)
– 33 new Intel-based clusters on the list:

Source: Latest TOP500 list, November 2002. Data available at: www.top500.org/lists

– #5: Lawrence Livermore National Laboratory, Livermore
– #8: Forecast Systems Laboratory, NOAA, Boulder
– #17: Louisiana State University
– #22: University at Buffalo, SUNY, Center for Comp. Res.
– #32: Sandia National Laboratories, Albuquerque
– #42: British Petroleum, Houston
– #43: Academy of Mathematics and System Science, Beijing
– #46: Argonne National Laboratory
– #51: National Supercomputer Centre (NSC), Linkoping
– #61: Pacific Northwest National Laboratory, Richland
– #80: Seoul National University, Seoul
– #85: Honda Research and Development Company, Tokyo
– #88: Cornell Theory Center, Ithaca
– #89: University of Utah, Salt Lake City
– #95: Naval Research Laboratory (NRL), Washington D.C.
– #169: Pemex Gas, Cd del Carmen
– #174: Pennsylvania State University
– #175: System Research Group, University of Hong Kong
– #180: Center for Astrophysics & Supercomputing, Melbourne
– #187: University at Buffalo, SUNY, Center for Comp. Res.
– #193: Shell
– #195: Intelligent Information Center, Doshisha U., Kyoto
– #197: University of Oklahoma, Norman
– #207: Scalable Systems Group, Dell Computer
– #336: University of Maine, Orono
– #338: KISTI Supercomputing Center, Yusong
– #341: Honda Research and Development Company, Tokyo
– #352: GSIC Center, Tokyo Institute of Technology, Tokyo
– #367: 4SC AG, Munich
– #461: University of Notre Dame, Notre Dame


Clusters Built with Intel Architecture: Worldwide Momentum and Breadth

Academic
• SUNY Buffalo
• Denmark Scientific
• Mississippi State
• Louisiana State
• Brookhaven
• Clemson
• Utah
• Cornell
• Toronto
• Virginia Polytech
• Ohio State
• Tsinghua Uni.
• Imperial College

Government
• TeraGrid
• Los Alamos National Lab (LANL)
• Lawrence Livermore National Lab (LLNL)
• National Oceanic and Atmospheric Administration (NOAA)
• Pacific Northwest National Lab
• Sandia National Lab
• National Center for Supercomputing Applications (NCSA)
• Classified defense sites
• China Atmospheric Center
• European Organization for Nuclear Research (CERN)
• Seoul National University Grid
• Ohio Supercomputing Center
• Inria

Industry
• Western GECO
• GCC
• Shell
• Saudi Aramco
• Tensor Geophysical
• WETA
• Pemex
• Google
• Inpharmatica
• Syrxx
• Immunex
• MDS Proteomics
• Pixar
• British Petroleum
• DaimlerChrysler
• SAS
• Infineon
• Volvo

More than 60,000 Intel processors deployed in these clusters; All Clusters listed are >32 nodes

• National U of Singapore
• Swinburne
• Stanford Linear Accelerator
• Southampton
• Valencia
• Oxford
• Johns Hopkins
• Cal Tech
• Belfast
• Zeijing
• Princeton
• Cambridge
• Brandeis

…and many others


Intel-Based Building Blocks for HPC Solutions

Architectures

Processors

Platforms

Interconnects

Software

Services


Processor Leadership for All HPC Solutions

Intel® Pentium® 4 processors are ideal for distributed, peer-to-peer desktop environments. The Pentium® 4 processor is built to power the most demanding peer-to-peer and collaborative HPC applications. (UP, DP; rack, blades, low-power blades)

Intel® Xeon™ processors are ideal for today’s most demanding 32-bit HPC solutions. The Xeon™ processor is built to power the most demanding server, peer-to-peer, and collaborative HPC applications, and is optimized for today’s server environment. (2P rack-optimized and pedestal servers; 4P, 8P, 8P+ rack-optimized and pedestal servers)

Intel® Itanium™ processors are ideal for the most demanding, compute-intensive 64-bit HPC solutions, today and tomorrow. Designed for the future and the most revolutionary HPC solutions: large memory needs, large SMP systems, complex high-end floating-point applications, and 64-bit integer applications. (2P, 4P, 8P, 8P+ high-end servers)


HPC Requires an Ecosystem

Intel WW Services: dedicated HPC personnel; end-user and developer enabling; infrastructure (WA, NM, VG); Premier technical support; HW design support

Intel Building Blocks: microprocessors, chipsets, interconnects, platform enabling, software suite

Intel Portfolio Investments: SCALI*, United Devices*; portfolio matchmaking

Strategic Collaborations: Cornell Theory Center* (CTC), CERN, NCSA, Western GECO*, European Virtual Grid Center*, Daresbury Benchmark Center*, Singapore BioIT Grid*

Intel Programs: 13 major hardware vendors; regional leaders; SI programs; ISV enabling

Intel Training: worldwide software college; worldwide training; cluster recipes; case studies/blueprints

Intel Manufacturing: 0.13µm, 90nm, 300mm; platform validation

Joint Solutions Centers: channel, oil, finance, life sciences

HPC Program Office: More than Silicon

* Other names and brands may be claimed as the property of others.



Solutions Market Development

Solutions Channels

Solutions Enabling

Products

Architecture / Standards

*All names and brands are property of their respective owners


Outline

Observations about the Evolution of HPC
Technologies for Cluster Building Blocks
Intel’s Role in HPC
Choices: Matching Apps to Clusters
From Clusters to Grids
Where from here?


Differentiation of HPC

Computationally intensive
Large-scale apps
Random, dynamic data access patterns

Pre-early adopters
Expert users


Architectural Implications

Fast computational units
– Not just floating point

Large memory with high bandwidth & low latency
– Larger caches don’t always help
– Maintain bandwidth/latency within n-way node

Wide, low-latency interconnect for distributed applications


We’d like to believe that...

The academic and public-sector HPC community serves as a proving ground for industry and the private sector
High-end technologies experimented with by the HPC community will go mainstream in 1–2 ‘technology generations’

The industry needs to invest in HPC because its community is a technology trend setter and innovator-adopter


Needs of apps drive selection metrics

Cannot satisfy every job
– Step 1: determine the apps/datasets that matter!
– Not necessarily the most demanding ones

App “environment” factors, such as:
– Distribution of workload by number and size of jobs
– Distribution by required time to solution and size
– Frequency of compilations, restarts, ...

App characteristics, such as:
– Data access patterns and size
– Granularity of parallelism
– Computation per data accessed; float/integer mix
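A sketch of how such selection metrics might be applied in practice. All cluster names, thresholds, and figures below are hypothetical, chosen only to illustrate the filtering step:

```python
# Hypothetical example: filter candidate clusters against an application
# profile derived from the apps/datasets that matter.

app_profile = {
    "mem_bw_gbs": 2.0,       # per-node memory bandwidth required (GB/s)
    "net_latency_us": 10.0,  # worst tolerable MPI latency (us)
    "flops_gf": 4.0,         # per-node compute required (GFLOP/s)
}

clusters = {
    "fat-node SMP": {"mem_bw_gbs": 6.4, "net_latency_us": 3.0, "flops_gf": 6.0},
    "thin blades":  {"mem_bw_gbs": 3.2, "net_latency_us": 7.5, "flops_gf": 4.8},
    "GigE farm":    {"mem_bw_gbs": 3.2, "net_latency_us": 60.0, "flops_gf": 4.8},
}

def satisfies(spec):
    """True if the cluster meets every requirement in the app profile."""
    return (spec["mem_bw_gbs"] >= app_profile["mem_bw_gbs"]
            and spec["net_latency_us"] <= app_profile["net_latency_us"]
            and spec["flops_gf"] >= app_profile["flops_gf"])

viable = [name for name, spec in clusters.items() if satisfies(spec)]
print(viable)  # the GigE farm fails on interconnect latency
```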


The Metrics Form a Multidimensional Space

Computations: flop rate, integer rate, float/integer mix
Memory: bandwidth, latency, size
Interconnect: bandwidth from node, latency, bisection bandwidth
Operational environment: size; power (per rack and total); file system


Deriving a Metric

Find your part of a surface, or a bounded sub-space, in the universe of the metrics
Determine the relationships among parameters in your metrics space
Examples:
– Maximize B:F in node, but < 10
– B:F:I = 4:1:0.5 or 1:1:0.1
– Minimize Watt/Flop
– Etc.
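As a worked example of one such metric, the B:F (bytes-per-flop) balance can be computed directly. The node numbers below are hypothetical, chosen only to illustrate the ratio:

```python
# Sketch of the bytes-per-flop (B:F) balance metric mentioned above.

def bytes_per_flop(mem_bw_bytes_per_s, peak_flops_per_s):
    """Memory bytes deliverable per floating-point operation at peak."""
    return mem_bw_bytes_per_s / peak_flops_per_s

# A hypothetical node with 6.4 GB/s memory bandwidth and 4.8 GFLOP/s peak:
bf = bytes_per_flop(6.4e9, 4.8e9)
print(round(bf, 2))  # 1.33 bytes per flop, well inside the "B:F < 10" bound
```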


A Paradox?

The choices in cluster components allow us to tailor a cluster to very specific needs
– Examples: visualization clusters, storage clusters

COTS makes HPC clusters affordable at the department or project level
As a result, the cluster can be as specific as we wish

Commodity components enable (drive?) custom solutions


Not all is done..

A reminder that existing challenges only get worse as we march on with the implications of Moore’s Law
The next frontier(s):
– I/O and interconnect keeping pace with processor speed and memory size
– Managing clusters: quality and performance of monitoring, maintenance, and effective use


Outline

Observations about the Evolution of HPC
Technologies for Cluster Building Blocks
Intel’s Role in HPC
Choices: Matching Apps to Clusters
From Clusters to Grids
Where from here?


Beyond Clusters - Grids

The Changing Models of High Performance Computing

– Traditional HPC architecture: vector, custom systems
– Current HPC architecture: proprietary RISC SMPs and standards-based clusters
– Future HPC architecture: a grid of shared resources, rich clients, and distributed applications spanning clusters, SMPs, and blades (hardware, OS, middleware, applications)


Solutions Adoption Curve

[Chart: industry adoption vs. time, through the phases Discovery, Development, Ramp, and Maturity; grid sits in discovery, clusters in development/ramp, SMP at maturity.]

– Discovery: select the right opportunities
– Development: select the right partners; solid end-user proof points
– Ramp: extend ecosystem participation
– Maturity: ecosystem at scale


The duality of Grid Computing

Grid computing as a networked ensemble of data centers
– As originally understood by the people who coined the term (and formed the Grid Forum)

Grid computing as harnessing resources on desktops
– An activity originally known as “distributed computing” (on the Internet)
– Proprietary software is being replaced by Globus


Intersection of Technologies

Is desktop grid computing (DGC) HPC? Is it cluster computing?
– DGC can be, and is, employed to solve problems of the size we call “HPC problems”
– DGC increasingly employs the same resource-sharing tools used for cluster-based grids

DGC is interesting: it builds on both P2P and cluster technologies
– P2P: identification/authentication, intermittent connectivity
– Cluster: scheduling, monitoring, ...


Possible Implications

DGC, with its P2P foundation and cluster-based traditional grid tools, can define a special class of clusters

P2P technologies enable types of apps other than computation and data handling
– Content distribution, team collaboration

Should the concept of a cluster be extended to include these other P2P app types?
– After all, they involve coordinated use of multiple, networked collections of systems

Should “HPC” be extended to these app types?
– And then truly be a superset of HPTC


Not mere Semantics

Defining the scope of “clusters” and “HPC” matters to their development
– If technologies are shared, it makes sense for all relevant practitioners to collaborate in their development
– For that to happen we’d need some sort of conceptual umbrella

Clusters and HPC can be thought of as occupying a spectrum, from dedicated high-end clustered SMPs to loosely coupled Internet-connected desktops


Internet Distributed Computing

[Diagram: three pillars of Internet distributed computing, each built on a shared base of security, privacy, directories, Internet connectivity, and web protocols.
– Web Services: security, privacy, directories, Internet connectivity, ease-of-use
– Peer-to-Peer: security, manageability, directories, Internet connectivity, availability
– Grid: security, privacy, web protocols, ease-of-use, directories, Internet connectivity, availability, manageability]

Web Services, Peer-to-Peer, Grid: Internet Distributed Computing


Concluding Remarks

The economics of HPC have changed, and that drives exciting opportunities
In an interesting twist, commoditization opens the door to specialization
The next big challenges are interconnect (hardware) and cluster management (mostly software)
Intel works with the industry and with the user community to enable HPC for discovery and science
The “beyond clusters” examination can get the mainstream computer industry to pay closer attention to HPC


Thank you

Questions?