Supercomputers Alex Reid & James O'Donoghue


TRANSCRIPT

Supercomputers
Alex Reid & James O'Donoghue

The Need for Supercomputers
• Supercomputers allow large amounts of processing to be dedicated to calculation-heavy problems
• Supercomputers are centralized in one location, allowing for high-speed communication within the computer
• Specific applications throughout the years:
o 1970s: Aerodynamic research
o 1980s: Radiation shielding modeling
o 1990s: 3D nuclear test simulations
o 2010s: Molecular dynamics simulation
• All applications would be difficult to simulate on individual machines

History of the Supercomputer: Origins
• 1964 - Seymour Cray, working at Control Data Corporation, puts together the CDC 6600
• At this time, machines used one CPU to drive the entire system
• The CDC 6600's CPU handled only arithmetic and logic, letting Peripheral Processors (PPs) handle I/O
• The 6600 had one CPU and 10 PPs
• The system ran at 1 MFLOPS and was the world's fastest computer from 1964 to 1969 - about 10 times faster than any other machine of its time
• One hundred machines were produced and sold for $8 million each, defining the supercomputer market

History of the Supercomputer: Cray
• In 1975, Seymour Cray and Jim Thornton developed the 80 MHz Cray-1.
• The Cray-1 used vector processing, many registers, and pipelining for fast vector and scalar operations.
• Ran at 80 MFLOPS
• Most successful supercomputer in history
• Crays defined supercomputers for much of the 70s and 80s

Cray-1 Supercomputer
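The pipelining bullet above can be made concrete with a toy cycle count. As a hedged sketch (the latency value below is an illustrative assumption, not a Cray-1 specification): without pipelining, each operation occupies a functional unit for its full latency; with a pipelined unit, one result completes per clock once the pipeline has filled.

```python
# Toy cycle-count model of why pipelined functional units speed up
# vector operations. The latency value is an illustrative assumption,
# not a Cray-1 specification.

def unpipelined_cycles(n, latency=6):
    """Each of the n operations occupies the unit for its full latency."""
    return n * latency

def pipelined_cycles(n, latency=6):
    """After `latency` cycles to fill the pipeline, one result per clock."""
    return latency + (n - 1)

n = 64  # one vector's worth of operations
print(unpipelined_cycles(n))  # 384 cycles
print(pipelined_cycles(n))    # 69 cycles
```

The speedup grows with vector length, which is why the Cray-1 paired pipelining with vector registers rather than issuing one scalar operation at a time.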

History of the Supercomputer: Cray

• 64-bit system
• 24-bit addressing
• 72-bit word length (64-bit data, 8-bit parity check)
• 12 pipelined functional units

History of the Supercomputer: Multi-Processor
• At its peak in popularity, the best Cray supercomputer had at most 8 processors
• The 90s introduced many multiprocessor systems, including:
o Fujitsu's Numerical Wind Tunnel (166 vector processors, 280 GFLOPS)
o Hitachi SR2201 (2048 processors, 600 GFLOPS)
• Inter-processor communication was crucial
• Developments in this area led to the ASCI Red, the first computer to break 1 TFLOPS

Examining part of the ASCI Red

History of the Supercomputer: Intel Paragon

History of the Supercomputer: Petascale

• Post-2000, the trend of combining many small units to achieve high performance continued, with many systems consisting of numerous nodes, each holding multiple processors
• Top supercomputers of the last decade:
o 2000 IBM ASCI White 7.226 TFLOPS
o 2002 NEC Earth Simulator 35.86 TFLOPS
o 2004 IBM Blue Gene/L 70.72 TFLOPS
o 2005 IBM Blue Gene/L 280.6 TFLOPS
o 2007 IBM Blue Gene/L 478.2 TFLOPS
o 2008 IBM Roadrunner 1.026 PFLOPS
o 2009 Cray Jaguar 1.759 PFLOPS
o 2010 Tianhe-IA 2.566 PFLOPS
o 2011 Fujitsu K computer 10.51 PFLOPS
o 2012 Cray Titan 20 PFLOPS
• Heat and power are becoming increasingly important - the K computer uses 12.6 MW, costing about $970/hr to run, or roughly $8.5 million per year

Fujitsu K-Computer, individual rack
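The K computer cost figures above can be sanity-checked with quick arithmetic. The per-kWh electricity price below is back-calculated from the slide's own numbers, not an official figure:

```python
# Sanity check of the K computer running-cost figures.
# The implied electricity price is derived from the slide's numbers,
# an assumption rather than a published rate.

power_kw = 12.6 * 1000        # 12.6 MW expressed in kW
cost_per_hour = 970           # dollars per hour, from the slide

implied_price = cost_per_hour / power_kw    # ~ $0.077 per kWh
yearly_cost = cost_per_hour * 24 * 365      # ~ $8.5 million per year

print(f"~${implied_price:.3f}/kWh, ~${yearly_cost / 1e6:.1f}M/year")
```

The two figures are consistent: $970/hr over a full year comes to about $8.5 million.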

History of the Supercomputer: Blue Gene

History of the Supercomputer

Cluster Computing
• Clusters are a modern, inexpensive way to run tasks that need high processing power
• Made from many cheap nodes connected via Ethernet
• Low-cost, commodity solutions

Supercomputers vs. Cluster Computing
• Clusters are much cheaper and easier to assemble than supercomputers
• Supercomputers, due to their custom construction, can be designed to be much more power-efficient and to reach much higher speeds
• Both design approaches have their uses in different circumstances:
o Clusters are appropriate for many low-cost situations where only a limited amount of software is needed
o Supercomputers excel when maximum performance is needed and custom software can be written

Supercomputer Node Communication
• Mesh network
• Torus
• Message Passing Interface (Cray)
• Wireless Network on Chip
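MPI, listed above, standardizes a send/receive model between nodes. As a minimal sketch of that model - using Python threads and queues as a stand-in for two compute nodes, since real MPI programs use process ranks, communicators, and a high-speed interconnect:

```python
# Minimal sketch of the message-passing model that MPI standardizes,
# using threads and queues as a stand-in for two compute nodes.
# Illustrative only: real MPI runs across separate machines, not
# shared-memory threads.
import threading
import queue

def worker(inbox, outbox):
    """A 'node': receive a chunk of work, send back a partial result."""
    chunk = inbox.get()        # blocking receive, like MPI_Recv
    outbox.put(sum(chunk))     # send, like MPI_Send

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
inbox.put([1, 2, 3, 4])       # distribute work to the node
print(outbox.get())           # collect the partial result: 10
t.join()
```

The key idea carried over from MPI is that nodes share nothing and cooperate only by exchanging explicit messages.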

Supercomputer Node Communication: Torus
• Rectilinear array of 2 or more dimensions
• Processors connected to their nearest neighbors, with links wrapping around at the edges
• Number of connections per node equals 2 times the number of dimensions
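The "2 times the dimension" rule follows from each dimension contributing one link in each direction, with wrap-around at the edges. A small sketch (the helper name is hypothetical):

```python
# Sketch: nearest neighbors of a node on a d-dimensional torus.
# Coordinates wrap around, so every node has exactly 2*d links.

def torus_neighbors(coord, shape):
    """coord: node position; shape: torus size in each dimension."""
    neighbors = []
    for dim, size in enumerate(shape):
        for step in (-1, +1):                # one link each way per dimension
            n = list(coord)
            n[dim] = (n[dim] + step) % size  # wrap-around connection
            neighbors.append(tuple(n))
    return neighbors

# A node in a 4x4x4 (3-D) torus has 2 * 3 = 6 neighbors:
print(len(torus_neighbors((0, 0, 0), (4, 4, 4))))  # 6
```

The wrap-around links are what distinguish a torus from a plain mesh: edge nodes get the same connectivity as interior ones.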

Operating Systems used in Supercomputing
• The first supercomputers used custom OSes to help increase performance.
• Generic OSes soon began to overtake custom-made OSes due to reduced cost.
• Generic OSes were tailored to specific systems, depending on their specifications.
• Multi-core systems sometimes run different OSes depending on what a core might be doing, e.g. a computing core vs. an I/O core.

Operating Systems used in Supercomputing

Current Limitations
• Heat produced by current systems
• Power consumption
• Money

Questions?