
An assignment report on Supercomputers: Information Technology for Business

Department of Business and Industrial Management

    What is a Supercomputer?

A supercomputer is a computer at the frontline of contemporary processing capacity, particularly in speed of calculation. It may be a single large computer or a collection of computers that act as one large machine capable of processing enormous amounts of data. Supercomputers are used for very complex jobs such as nuclear research or collecting and calculating weather patterns. An example is the Linux-based supercomputer at the William R. Wiley Environmental Molecular Sciences Laboratory, composed of nearly 2,000 processors (courtesy: Pacific Northwest National Laboratory).

Supercomputers are the bodybuilders of the computer world. They boast tens of thousands of times the computing power of a desktop and cost tens of millions of dollars. They fill enormous rooms, which are chilled to prevent their thousands of microprocessor cores from overheating, and they perform trillions, or even thousands of trillions, of calculations per second.

All of that power means supercomputers are perfect for tackling big scientific problems, from uncovering the origins of the universe to delving into the patterns of protein folding that make life possible. Some of the most intriguing questions being tackled by supercomputers today are described later in this report.

A supercomputer is a computer that performs at or near the highest operational rate currently achieved by computers. A supercomputer is typically used for scientific and engineering applications that must handle very large databases or do a great amount of computation (or both).

At any given time, there are usually a few well-publicized supercomputers that operate at extremely high speeds. The term is also sometimes applied to far slower (but still impressively fast) computers. Most supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).

When it debuted, IBM's Roadrunner was the fastest supercomputer in the world, twice as fast as Blue Gene and six times as fast as any other supercomputer of its day. At the lower end of supercomputing, a trend called clustering takes more of a build-it-yourself approach: the Beowulf Project offers guidance on how to put together a number of off-the-shelf personal computer processors, using Linux operating systems, and interconnecting the processors with Fast Ethernet.

Applications must be written to manage the parallel processing.
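In practice, such applications are usually written against a message-passing API such as MPI. Below is a minimal sketch in C, assuming an installed MPI implementation (built with a wrapper like mpicc and launched with a launcher like mpirun); the problem size is an arbitrary illustrative value, not a figure from this report. Each process sums its own slice of a range, and rank 0 combines the partial results.

/* Minimal MPI sketch: each process sums part of a range and the
 * partial sums are combined on rank 0 with MPI_Reduce.
 * Illustrative only; the range size N is an assumed value. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    const long long N = 1000000;        /* assumed total amount of work */
    long long local = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank handles an interleaved slice of the range. */
    for (long long i = rank; i < N; i += size)
        local += i;

    /* Combine the partial results on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of 0..%lld over %d processes: %lld\n", N - 1, size, total);

    MPI_Finalize();
    return 0;
}

Run with several processes (for example, four), the program distributes the loop across the cluster's nodes and only the small partial sums travel over the network.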

Perhaps the best-known builder of supercomputers has been Cray Research, now a part of Silicon Graphics. In September 2008, Cray and Microsoft launched the CX1, a $25,000 personal supercomputer aimed at markets such as aerospace, automotive, academic, financial services and life sciences. The CX1 runs Windows HPC (High Performance Computing) Server 2008.

In the United States, some supercomputer centres are interconnected on an Internet backbone known as vBNS or NSFNet. This network is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2 is a university-led project that is part of this initiative.

Supercomputers were introduced in the 1960s, made initially and, for decades, primarily by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm. As of November 2013, China's Tianhe-2 supercomputer is the fastest in the world at 33.86 petaFLOPS.

    Systems with massive numbers of processors generally take one of two paths: In one

    approach (e.g., in distributed computing), a large number of discrete computers (e.g., laptops)

    distributed across a network (e.g., the internet) devote some or all of their time to solving a

    common problem; each individual computer (client) receives and completes many small

    tasks, reporting the results to a central server which integrates the task results from all the

clients into the overall solution. In another approach, a large number of dedicated processors are placed in close proximity to each other (e.g. in a computer cluster); this saves

    considerable time moving data around and makes it possible for the processors to work

    together (rather than on separate tasks), for example in mesh and hypercube architectures.

    The use of multi-core processors combined with centralization is an emerging trend; one can

    think of this as a small cluster (the multicore processor in a smartphone, tablet, laptop, etc.)

    that both depends upon and contributes to the cloud.


Supercomputers are also used to study subatomic particles and the origin and nature of the universe. They have become an indispensable tool in weather forecasting: predictions are now based on numerical models. As the cost of supercomputers declined, their use spread to the world of online gaming. In particular, the 5th through 10th fastest Chinese supercomputers in 2007 were owned by a company with online rights in China to the electronic game World of Warcraft, which sometimes had more than a million people playing together in the same gaming world.

Characteristics that make supercomputers different from ordinary computers

Supercomputers differ from ordinary computers chiefly in speed: they deliver the highest performance available today and can manipulate massive amounts of data in a very short time. Compared with ordinary computers, they:

Are much faster

Are generally used for scientific calculations

Use much more power

Give off more heat

Are much more expensive

Usage

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.


Hardware and Architecture

While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm. Supercomputers of the 21st century can use over 100,000 processors (some being graphics units) connected by fast interconnects.

Systems with a massive number of processors generally take one of two paths: in one approach, known as grid computing, the processing power of a large number of computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available. In another approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system, the speed and flexibility of the interconnect becomes very important, and modern supercomputers have used various approaches ranging from enhanced Infiniband systems to three-dimensional torus interconnects. The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system.


Operating system

Since the end of the 20th century, supercomputer operating systems have undergone major transformations, as sea changes have taken place in supercomputer architecture. While early operating systems were custom-tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems towards the adaptation of generic software such as Linux. Given that modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger system such as a Linux derivative on server and I/O nodes.

While in a traditional multi-user computer system job scheduling is in effect a tasking problem for processing and peripheral resources, in a massively parallel system the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with inevitable hardware failures when tens of thousands of processors are present.

Although most modern supercomputers use the Linux operating system, each manufacturer has made its own specific changes to the Linux derivative it uses, and no industry standard exists, partly because differences in hardware architectures require changes to optimize the operating system for each hardware design.

Software tools and message passing

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source software solutions such as Beowulf. In the most common scenario, environments such as PVM and MPI are used for loosely connected clusters, and OpenMP for tightly coordinated shared-memory machines. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA. Moreover, it is quite difficult to debug and test parallel programs; special techniques need to be used for testing and debugging such applications.
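As a minimal illustration of the shared-memory side of this picture, the sketch below parallelizes a loop with OpenMP in C; the array size and values are arbitrary assumptions chosen only for the example, and an OpenMP-capable compiler (e.g. gcc with -fopenmp) is assumed.

/* Minimal OpenMP sketch: sums an array using a shared-memory parallel
 * loop, the model the text associates with tightly coordinated
 * shared-memory machines. Values are illustrative assumptions. */
#include <stdio.h>
#include <omp.h>

#define N 1000000   /* assumed problem size */

int main(void) {
    static double a[N];
    double sum = 0.0;

    /* Each thread works on a chunk of the iteration space; the
     * reduction clause combines the per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        sum += a[i];
    }

    printf("threads available: %d, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}

The same loop run on more cores finishes faster without any explicit message passing, because all threads share the one memory image, which is exactly what distinguishes the shared-memory model from the cluster model sketched earlier.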


Performance metrics

Top supercomputer speeds: log-scale speed over 60 years.

In general, the speed of supercomputers is measured and benchmarked in FLOPS (floating point operations per second), and not in terms of MIPS (instructions per second), as is the case with general-purpose computers. These measurements are commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). "Petascale" supercomputers can process one quadrillion (10^15, or 1000 trillion) FLOPS. Exascale refers to computing performance in the exaflops range; an exaflop is one quintillion (10^18) FLOPS (one million teraflops).
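As a worked example of these units: the Tianhe-2 figure quoted earlier, 33.86 petaFLOPS, is 33.86 × 10^15, or about 3.4 × 10^16 floating point operations per second. A machine's theoretical peak is commonly estimated from its configuration; for a purely illustrative, assumed machine with 100,000 cores running at 2 GHz and completing 8 floating point operations per cycle per core:

\[
R_{\text{peak}} = N_{\text{cores}} \times f_{\text{clock}} \times \text{FLOPs per cycle}
= 10^{5} \times \left(2 \times 10^{9}\right) \times 8
= 1.6 \times 10^{15}\ \text{FLOPS} = 1.6\ \text{PFLOPS}.
\]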

Applications of supercomputers

The stages of supercomputer application may be summarized in the following table:

Decade: Uses and computer involved

1970s: Weather forecasting, aerodynamic research (Cray-1).

1980s: Probabilistic analysis, radiation shielding modelling (CDC Cyber).

1990s: Brute force code breaking (EFF DES cracker).


2000s: 3D nuclear test simulations as a substitute for live testing, in line with the Nuclear Non-Proliferation Treaty (ASCI Q).

2010s: Molecular dynamics simulation (Tianhe-1A).

The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.

Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.

In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.


Supercomputer Operating Systems

Early systems


The CDC 6600, generally considered the first supercomputer in the world, ran the Chippewa Operating System, which was then deployed on various other CDC 6000 series computers. The Chippewa was a rather simple job-control-oriented system derived from the earlier CDC 3000, but it influenced the later KRONOS and SCOPE systems.

The first Cray 1 was delivered to the Los Alamos Lab without an operating system, or any other software. Los Alamos developed not only the application software for it, but also the operating system. The main timesharing system for the Cray 1, the Cray Time Sharing System (CTSS), was then developed at the Livermore Labs as a direct descendant of the Livermore Time Sharing System (LTSS), which had been developed for the CDC 6600 twenty years earlier.

The rising cost of software development soon became dominant, as evidenced by the fact that in the 1980s the cost of software development at Cray came to equal what was spent on hardware. That trend was partly responsible for the move away from the in-house Cray Operating System to the Unix-based UNICOS system. In 1985, the Cray-2 was the first system to ship with the UNICOS operating system.

Around the same time, the EOS operating system was developed by ETA Systems for use in their ETA10 supercomputers. Written in Cybil, a Pascal-like language from Control Data Corporation, EOS highlighted the problems of developing stable operating systems for supercomputers, and eventually a Unix-like system was offered on the same machine. The lessons learned from the development of ETA system software included the high level of risk associated with developing a new supercomputer operating system, and the advantages of using Unix with its large existing base of system software libraries.



By the mid-1990s, despite the existing investment in older operating systems, the trend was towards the use of Unix-based systems, which also facilitated the use of interactive user interfaces for scientific computing across multiple platforms. The move towards a 'commodity OS' was not without its opponents, who cited the fast pace and focus of Linux development as a major obstacle to adoption. As one author wrote, "Linux will likely catch up, but we have large-scale systems now". Nevertheless, the trend continued to build momentum, and by 2005 virtually all supercomputers used some Unix-like OS. These variants of Unix included AIX from IBM, the open source Linux system, and other adaptations such as UNICOS from Cray. By the end of that decade, Linux was estimated to command the largest share of the supercomputing pie.

Modern approaches

The Blue Gene/P supercomputer at Argonne National Lab.

The IBM Blue Gene supercomputer uses the CNK operating system on the compute nodes, but uses a modified Linux-based kernel called INK (for I/O Node Kernel) on the I/O nodes. CNK is a lightweight kernel that runs on each node and supports a single application running for a single user on that node. For the sake of efficient operation, the design of CNK was kept simple and minimal, with physical memory being statically mapped and the CNK neither needing nor providing scheduling or context switching. CNK does not even implement file I/O on the compute node, but delegates that to dedicated I/O nodes. However, given that on the Blue Gene multiple compute nodes share a single I/O node, the I/O node operating system does require multi-tasking, hence the selection of the Linux-based operating system.


While in traditional multi-user computer systems and early supercomputers job scheduling was in effect a scheduling problem for processing and peripheral resources, in a massively parallel system the job management system needs to manage the allocation of both computational and communication resources. The need to tune task scheduling and the operating system for different configurations of a supercomputer is essential. A typical parallel job scheduler has a master scheduler which instructs a number of slave schedulers to launch, monitor and control parallel jobs, and periodically receives reports from them about the status of job progress.

Some, but not all, supercomputer schedulers attempt to maintain locality of job execution. The PBS Pro scheduler used on the Cray XT3 and Cray XT4 systems does not attempt to optimize locality on its three-dimensional torus interconnect, but simply uses the first available processor. On the other hand, IBM's scheduler on the Blue Gene supercomputers aims to exploit locality and minimize network contention by assigning tasks from the same application to one or more midplanes of an 8x8x8 node group. The SLURM scheduler uses a best-fit algorithm and performs Hilbert curve scheduling in order to optimize locality of task assignments. A number of modern supercomputers such as the Tianhe-I use the SLURM job scheduler, which arbitrates contention for resources across the system. SLURM is open source, Linux-based, quite scalable, and can manage thousands of nodes in a computer cluster with a sustained throughput of over 100,000 jobs per hour.
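To give a rough feel for the best-fit idea mentioned above, here is a generic sketch in C (an illustration of the general technique, not SLURM's actual implementation): given the sizes of free contiguous node blocks, the scheduler picks the smallest block that still satisfies a job's request, which tends to reduce fragmentation and keep a job's tasks close together.

/* Generic best-fit allocation sketch; block sizes and the request
 * are assumed values used only for illustration. */
#include <stdio.h>

/* Returns the index of the chosen free block, or -1 if none fits. */
static int best_fit(const int free_block_sizes[], int nblocks, int requested_nodes) {
    int best = -1;
    for (int i = 0; i < nblocks; i++) {
        if (free_block_sizes[i] >= requested_nodes &&
            (best == -1 || free_block_sizes[i] < free_block_sizes[best]))
            best = i;
    }
    return best;
}

int main(void) {
    int free_blocks[] = {64, 16, 32, 128};   /* assumed free block sizes */
    int idx = best_fit(free_blocks, 4, 20);  /* a job asking for 20 nodes */
    printf("chosen block: %d (size %d)\n", idx, idx >= 0 ? free_blocks[idx] : 0);
    return 0;
}

Here the 20-node request lands in the 32-node block rather than the 64- or 128-node ones, leaving the larger blocks free for larger jobs.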


Applications of supercomputers

    Recreating the Big Bang

    It takes big computers to look into the biggest question of all: What is the origin of the

    universe?

    The "Big Bang," or the initial expansion of all energy and matter in the universe, happened

    more than 13 billion years ago in trillion-degree Celsius temperatures, but supercomputer

    simulations make it possible to observe what went on during the universe's birth. Researchers

    at the Texas Advanced Computing Centre (TACC) at the University of Texas in Austin have

    also used supercomputers to simulate the formation of the first galaxy, while scientists at

    NASAs Ames Research Centre in Mountain View, Calif., have simulated the creation of

    stars from cosmic dust and gas.

    Supercomputer simulations also make it possible for physicists to answer questions about the

    unseen universe of today. Invisible dark matter makes up about 25 percent of the universe,

    anddark energy makes up more than 70 percent, but physicists know little about either.

    Using powerful supercomputers like IBM's Roadrunner at Los Alamos National Laboratory,researchers can run models that require upward of a thousand trillion calculations per second,

    allowing for the most realistic models of these cosmic mysteries yet.

    Understanding earthquakes

    Other supercomputer simulations hit closer to home. By modeling the three-dimensional

structure of the Earth, researchers can predict how earthquake waves will travel both locally

    and globally. It's a problem that seemed intractable two decades ago, says Princeton


    geophysicist Jeroen Tromp. But by using supercomputers, scientists can solve very complex

    equations that mirror real life.

    "We can basically say, if this is your best model of what the earth looks like in a 3-D sense,

    this is what the waves look like," Tromp said.

    By comparing any remaining differences between simulations and real data, Tromp and his

    team are perfecting their images of the earth's interior. The resulting techniques can be used

    to map the subsurface for oil exploration or carbon sequestration, and can help researchers

    understand the processes occurring deep in the Earth's mantle and core.

    Folding Proteins

    In 1999, IBM announced plans to build the fastest supercomputer the world had ever seen.

    The first challenge for this technological marvel, dubbed "Blue Gene"?

Unravelling the mysteries of protein folding. Proteins are made of long strands of amino acids folded into complex three-dimensional shapes. Their function is driven by their form. When a protein misfolds, there can be serious consequences, including disorders like cystic fibrosis, Mad Cow disease and Alzheimer's disease. Finding out how proteins fold, and how folding can go wrong, could be the first step in curing these diseases.

    Blue Gene isn't the only supercomputer to work on this problem, which requires massive

    amounts of power to simulate mere microseconds of folding time. Using simulations,

    researchers have uncovered the folding strategies of several proteins, including one found in

    the lining of the mammalian gut. Meanwhile, the Blue Gene project has expanded. As of

    November 2009, a Blue Gene system in Germany is ranked as the fourth-most powerful

    supercomputer in the world, with a maximum processing speed of a thousand trillion

    calculations per second.


    Mapping the blood stream

    Think you have a pretty good idea of how your blood flows? Think again. The total length of

    all of the veins, arteries and capillaries in the human body is between 60,000 and 100,000

    miles. To map blood flow through this complex system in real time, Brown University

    professor of applied mathematics George Karniadakis works with multiple laboratories and

    multiple computer clusters.

In a 2009 paper in the journal Philosophical Transactions of the Royal Society, Karniadakis and his team describe the flow of blood through the brain of a typical person compared with

    blood flow in the brain of a person with hydrocephalus, a condition in which cranial fluid

    builds up inside the skull. The results could help researchers better understand strokes,

    traumatic brain injury and other vascular brain diseases, the authors write.

    Modeling swine flu

    Potential pandemics like the H1N1 swine flu require a fast response on two fronts: First,

researchers have to figure out how the virus is spreading. Second, they have to find drugs to stop it.

    Supercomputers can help with both. During the recent H1N1 outbreak, researchers at

    Virginia Polytechnic Institute and State University in Blacksburg, Va., used an advanced

    model of disease spread called EpiSimdemics to predict the transmission of the flu. The

    program, which is designed to model populations up to 300 million strong, was used by the

    U.S. Department of Defense during the outbreak, according to a May 2009 report in IEEE

    Spectrum magazine.


    Meanwhile, researchers at the University of Illinois at Urbana-Champaign and the University

    of Utah were using supercomputers to peer into the virus itself. Using the Ranger

    supercomputer at the TACC in Austin, Texas, the scientists unraveled the structure of swine

    flu. They figured out how drugs would bind to the virus and simulated the mutations that

    might lead to drug resistance. The results showed that the virus was not yet resistant, but

    would be soon, according to a report by the TeraGrid computing resources center. Such

    simulations can help doctors prescribe drugs that won't promote resistance.

    Testing nuclear weapons

Since 1992, the United States has banned the testing of nuclear weapons. But that doesn't mean the nuclear arsenal is out of date.

The Stockpile Stewardship program uses non-nuclear lab tests and, yes, computer simulations to ensure that the country's cache of nuclear weapons is functional and safe. In 2012, IBM unveiled a new supercomputer, Sequoia, at Lawrence Livermore National Laboratory in California. According to IBM, Sequoia is a 20-petaflop machine, meaning it is capable of performing twenty thousand trillion calculations each second. Sequoia's prime directive is to create better simulations of nuclear explosions and to do away with real-world nuclear testing.

    Forecasting hurricanes

    With Hurricane Ike bearing down on the Gulf Coast in 2008, forecasters turned to Ranger for

    clues about the storm's path. This supercomputer, with its cowboy moniker and 579 trillion

    calculations per second processing power, resides at the TACC in Austin, Texas. Using data

directly from National Oceanic and Atmospheric Administration airplanes, Ranger calculated

    likely paths for the storm. According to a TACC report, Ranger improved the five-day

    hurricane forecast by 15 percent.

    Simulations are also useful after a storm. When Hurricane Rita hit Texas in 2005, Los

    Alamos National Laboratory in New Mexico lent manpower and computer power to model

    vulnerable electrical lines and power stations, helping officials make decisions about

    evacuation, power shutoff and repairs.

    Predicting climate change

    The challenge of predicting global climate is immense. There are hundreds of variables, from

    the reflectivity of the earth's surface (high for icy spots, low for dark forests) to the vagaries

of ocean currents. Dealing with these variables requires supercomputing capabilities. Computer power is so coveted by climate scientists that the U.S. Department of Energy gives

    out access to its most powerful machines as a prize.

    The resulting simulations both map out the past and look into the future. Models of the

    ancient past can be matched with fossil data to check for reliability, making future predictions

    stronger. New variables, such as the effect of cloud cover on climate, can be explored. One

    model, created in 2008 at Brookhaven National Laboratory in New York, mapped the aerosol

    particles and turbulence of clouds to a resolution of 30 square feet. These maps will have to


    become much more detailed before researchers truly understand how clouds affect climate

    over time.

    Building brains

So how do supercomputers stack up to the human brain? Well, they're really good at computation: it would take 120 billion people with 120 billion calculators 50 years to do what the Sequoia supercomputer will be able to do in a day. But when it comes to the brain's ability to process information in parallel by doing many calculations simultaneously, even supercomputers lag behind. Dawn, a supercomputer at Lawrence Livermore National Laboratory, can simulate the brain power of a cat, but 100 to 1,000 times slower than a real cat brain.

Nonetheless, supercomputers are useful for modeling the nervous system. In 2006, researchers at the École Polytechnique Fédérale de Lausanne in Switzerland successfully simulated a 10,000-neuron chunk of a rat brain called a neocortical unit. With enough of these units, the scientists on this so-called "Blue Brain" project hope to eventually build a complete model of the human brain.

The brain would not be an artificial intelligence system, but rather a working neural circuit that researchers could use to understand brain function and test virtual psychiatric treatments. But Blue Brain could be even better than artificial intelligence, lead researcher Henry Markram told The Guardian newspaper in 2007: "If we build it right, it should speak."


How is a supercomputer different from other computers?

Mainframe computers were introduced in 1975. A mainframe computer is a large computer in terms of price, power and speed. It is more powerful than a minicomputer. A mainframe computer can serve up to 50,000 users simultaneously, and its price ranges from $5,000 to $5 million. These computers can store large amounts of data, information and instructions. Users access a mainframe computer through a terminal or personal computer. A typical mainframe computer can execute 16 million instructions per second. Qualified operators and programmers are required to use these computers. Mainframe computers can accept all types of high-level languages, and different types of peripheral devices can be attached to them.

    Examples:

1- IBM 4381

    2- NEC 610

    3- DEC 10 etc.

Super Computer: Supercomputers were introduced in 1980. A supercomputer is the biggest in size and the most expensive of any computer. It is the most sophisticated, complex and advanced type of computer, with a very large storage capacity, and it can process trillions of instructions in one second. Its price ranges from $500,000 to $350 million. Supercomputers use high-speed facilities such as satellite links for online processing.

A supercomputer can handle large amounts of scientific computation. It is maintained in a special room and can be on the order of 50,000 times faster than the microcomputers that are very common nowadays. The cost associated with a typical supercomputer is roughly $20 million. Due to its high cost, it is not used for domestic or office-level work.

    Examples:

    1- CRAY-XP

    2- ETA-10 etc.


Supercomputers are used in areas such as defence and weaponry systems, weather forecasting and scientific research. They were first used for defence purposes, to keep records of war weapons and allied products.

For example, Gregory and David Chudnovsky broke the world record for pi calculation by using two supercomputers to calculate pi to 480 million decimal places (pi is a commonly used mathematical constant based on the relationship of a circle's circumference to its diameter). From there onwards, the value of pi became popular for geometry-related calculations. In the next few years, more and more large industries will start using supercomputers such as parallel computers, which have hundreds or even thousands of processors.

Mainframe computers are used in large-scale organizations, whereas a supercomputer is defined by its speed, regardless of its size.

A mainframe is a form of computer system that is generally more powerful than typical mini systems. Mainframes are used in large organizations for large-scale jobs, and they vary widely in cost and capability; they are traditionally used as the main record-keeper and data processor for large businesses and government facilities. "Supercomputer", by contrast, is a term used for very fast computers, regardless of their physical size. It used to be that a computer that could perform more than one gigaflop (one billion operations per second) was considered a supercomputer; now most high-end personal computers operate at that speed. The largest, fastest and most expensive computers in the world are supercomputers. They are used for biomedical research, weather forecasting, laboratory chemical analysis and so on. NEC's Earth Simulator in Japan was for a time the world's fastest computer.


Top 10 Supercomputers in the world

    The following table gives the Top 10 positions of the supercomputers on November 18, 2013.

Columns: rank; Rmax / Rpeak (PFLOPS); name; computer design; processor type and interconnect; vendor; site; country and year.

1. Tianhe-2: Rmax 33.863 / Rpeak 54.902. Design: NUDT; Xeon E5-2692 + Xeon Phi 31S1P, TH Express-2. Vendor: NUDT. Site: National Supercomputing Center in Guangzhou, China, 2013.

2. Titan: Rmax 17.590 / Rpeak 27.113. Design: Cray XK7; Opteron 6274 + Tesla K20X, Cray Gemini interconnect. Vendor: Cray. Site: Oak Ridge National Laboratory, United States, 2012.

3. Sequoia: Rmax 17.173 / Rpeak 20.133. Design: Blue Gene/Q; PowerPC A2, custom interconnect. Vendor: IBM. Site: Lawrence Livermore National Laboratory, United States, 2013.

4. K computer: Rmax 10.510 / Rpeak 11.280. Design: RIKEN; SPARC64 VIIIfx, Tofu interconnect. Vendor: Fujitsu. Site: RIKEN, Japan, 2011.

5. Mira: Rmax 8.586 / Rpeak 10.066. Design: Blue Gene/Q; PowerPC A2, custom interconnect. Vendor: IBM. Site: Argonne National Laboratory, United States, 2013.

6. Stampede: Rmax 5.168 / Rpeak 8.520. Design: PowerEdge C8220; Xeon E5-2680 + Xeon Phi, Infiniband. Vendor: Dell. Site: Texas Advanced Computing Center, United States, 2013.

7. JUQUEEN: Rmax 5.008 / Rpeak 5.872. Design: Blue Gene/Q; PowerPC A2, custom interconnect. Vendor: IBM. Site: Forschungszentrum Jülich, Germany, 2013.

8. Vulcan: Rmax 4.293 / Rpeak 5.033. Design: Blue Gene/Q; PowerPC A2, custom interconnect. Vendor: IBM. Site: Lawrence Livermore National Laboratory, United States, 2013.


The following table gives the Top 10 positions of the supercomputers on the TOP500 list of November 18, 2013 (performance figures in PFlop/s).

Rank | Name | Rmax | Rpeak | Vendor | Site | Country
1 | Tianhe-2 (MilkyWay-2) | 33.863 | 54.902 | NUDT | National University of Defence Technology | China
2 | Titan | 17.590 | 27.113 | Cray Inc. | DOE/SC/Oak Ridge National Laboratory | U.S.
3 | Sequoia | 17.173 | 20.133 | IBM | DOE/NNSA/LLNL | U.S.
4 | K computer | 10.510 | 11.280 | Fujitsu | RIKEN Advanced Institute for Computational Science | Japan
5 | Mira | 8.587 | 10.066 | IBM | DOE/SC/Argonne National Laboratory | U.S.
6 | Stampede | 5.168 | 8.520 | Dell | Texas Advanced Computing Center, Univ. of Texas | U.S.
7 | JUQUEEN | 5.009 | 5.872 | IBM | Forschungszentrum Juelich (FZJ) | Germany
8 | Vulcan | 4.293 | 5.033 | IBM | DOE/NNSA/LLNL | U.S.
9 | SuperMUC (iDataPlex DX360M4, Xeon E5-2680, Infiniband) | 2.897 | 3.185 | IBM | Leibniz-Rechenzentrum | Germany, 2012
10 | Tianhe-1A (Xeon E5-2692 + Xeon Phi 31S1P, TH Express-2) | 2.566 | 4.701 | NUDT | National Supercomputing Center in Tianjin | China

Rank: In the TOP500 list table, the computers are ordered first by their Rmax value. In the case of equal performance (Rmax value) for different computers, the order is determined by Rpeak.

Rmax: The highest score measured using the LINPACK benchmark suite. This is the number used to rank the computers. It is measured in quadrillions of floating-point operations per second, i.e. petaflops.

Rpeak: The theoretical peak performance of the system, also measured in PFlop/s.

Name: Some supercomputers are unique, at least at their location, and are therefore christened by their owners.

Computer: The computing platform as it is marketed.

Processor cores: The number of processor cores actively used while running LINPACK; the processor architecture of the cores is named after this figure.

Vendor: The manufacturer of the platform and hardware.

Site: The name of the facility operating the supercomputer.

Country: The country in which the computer is situated.
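
To make the relationship between Rmax and Rpeak concrete, the short sketch below (an illustrative calculation only, not part of the TOP500 methodology) derives a theoretical peak from core count, clock speed and an assumed number of floating-point operations per core per cycle, and then computes the LINPACK efficiency Rmax/Rpeak using the Tianhe-2 figures quoted later in this report.

```python
# Minimal sketch: theoretical peak (Rpeak) and LINPACK efficiency (Rmax / Rpeak).
# The flops_per_cycle value below is an illustrative assumption, not a quoted figure.

def rpeak_tflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak in TFlop/s: cores x clock (GHz) x flops per core per cycle."""
    return cores * clock_ghz * flops_per_cycle / 1000.0  # GFlop/s -> TFlop/s

def linpack_efficiency(rmax_tflops: float, rpeak: float) -> float:
    """Fraction of the theoretical peak actually achieved on LINPACK."""
    return rmax_tflops / rpeak

# Hypothetical machine: 10,000 cores at 2.5 GHz, 8 flops per core per cycle.
print(rpeak_tflops(10_000, 2.5, 8))            # 200.0 TFlop/s

# Tianhe-2 figures quoted later in this report (TFlop/s):
print(linpack_efficiency(33_862.7, 54_902.4))  # ~0.62, i.e. about 62% of peak
```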


    1. Tianhe-2 (MilkyWay-2)

Country: China
Site: National University of Defence Technology (NUDT)
Manufacturer: NUDT
Cores: 3,120,000
Linpack Performance (Rmax): 33,862.7 TFlop/s
Theoretical Peak (Rpeak): 54,902.4 TFlop/s
Power: 17,808.00 kW
Memory: 1,024,000 GB
Interconnect: TH Express-2
Operating System: Kylin Linux
Compiler: ICC
Math Library: Intel MKL-11.0.0
MPI: MPICH2 with a customized GLEX channel

    http://www.china.org.cn/top10/index.htm

    2. Titan

Country: U.S.
Site: DOE/SC/Oak Ridge National Laboratory
System URL: http://www.olcf.ornl.gov/titan/
Manufacturer: Cray Inc.
Cores: 560,640
Linpack Performance (Rmax): 17,590.0 TFlop/s
Theoretical Peak (Rpeak): 27,112.5 TFlop/s
Power: 8,209.00 kW
Memory: 710,144 GB
Interconnect: Cray Gemini interconnect
Operating System: Cray Linux Environment

http://www.china.org.cn/top10/2013-06/21/content_29187340_10.htm

    3. Sequoia

Country: U.S.
Site: DOE/NNSA/LLNL
Manufacturer: IBM
Cores: 1,572,864
Linpack Performance (Rmax): 17,173.2 TFlop/s
Theoretical Peak (Rpeak): 20,132.7 TFlop/s
Power: 7,890.00 kW
Memory: 1,572,864 GB
Interconnect: Custom Interconnect
Operating System: Linux

    http://www.china.org.cn/top10/2013-06/21/content_29187340_9.htm

    4. K computer

Country: Japan
Site: RIKEN Advanced Institute for Computational Science (AICS)
Manufacturer: Fujitsu
Cores: 705,024
Linpack Performance (Rmax): 10,510.0 TFlop/s
Theoretical Peak (Rpeak): 11,280.4 TFlop/s
Power: 12,659.89 kW
Memory: 1,410,048 GB
Interconnect: Custom Interconnect
Operating System: Linux

    http://www.china.org.cn/top10/2013-06/21/content_29187340_8.htm

    5. Mira

Country: U.S.
Site: DOE/SC/Argonne National Laboratory
Manufacturer: IBM
Cores: 786,432
Linpack Performance (Rmax): 8,586.6 TFlop/s
Theoretical Peak (Rpeak): 10,066.3 TFlop/s
Power: 3,945.00 kW
Interconnect: Custom Interconnect
Operating System: Linux

    http://www.china.org.cn/top10/2013-06/21/content_29187340_7.htm

    6. Stampede

Country: U.S.
Site: Texas Advanced Computing Center/Univ. of Texas, Austin
System URL: http://www.tacc.utexas.edu/stampede
Manufacturer: Dell
Cores: 462,462
Linpack Performance (Rmax): 5,168.1 TFlop/s
Theoretical Peak (Rpeak): 8,520.1 TFlop/s
Power: 4,510.00 kW
Memory: 192,192 GB
Interconnect: Infiniband FDR
Operating System: Linux
Compiler: Intel
Math Library: MKL

http://www.china.org.cn/top10/2013-06/21/content_29187340_6.htm

    7. JUQUEEN

Country: Germany
Site: Forschungszentrum Juelich (FZJ)
System URL: http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/JUQUEEN/JUQUEEN_node.html
Manufacturer: IBM
Cores: 458,752
Linpack Performance (Rmax): 5,008.9 TFlop/s
Theoretical Peak (Rpeak): 5,872.0 TFlop/s
Power: 2,301.00 kW
Memory: 458,752 GB
Interconnect: Custom Interconnect
Operating System: Linux

http://www.china.org.cn/top10/2013-06/21/content_29187340_5.htm

    8. Vulcan

Country: U.S.
Site: DOE/NNSA/LLNL
Manufacturer: IBM
Cores: 393,216
Linpack Performance (Rmax): 4,293.3 TFlop/s
Theoretical Peak (Rpeak): 5,033.2 TFlop/s
Power: 1,972.00 kW
Memory: 393,216 GB
Interconnect: Custom Interconnect
Operating System: Linux

    http://www.china.org.cn/top10/2013-06/21/content_29187340_4.htm

    9. SuperMUC

Country: Germany
Site: Leibniz Rechenzentrum
System URL: http://www.lrz.de/services/compute/supermuc/
Manufacturer: IBM
Cores: 147,456
Linpack Performance (Rmax): 2,897.0 TFlop/s
Theoretical Peak (Rpeak): 3,185.1 TFlop/s
Power: 3,422.67 kW
Interconnect: Infiniband FDR
Operating System: Linux

http://www.china.org.cn/top10/2013-06/21/content_29187340_3.htm

10. Tianhe-1A (MilkyWay-1A)

Country: China
Site: National Supercomputing Center in Tianjin
Manufacturer: NUDT
Cores: 186,368
Linpack Performance (Rmax): 2,566.0 TFlop/s
Theoretical Peak (Rpeak): 4,701.0 TFlop/s
Power: 4,040.00 kW
Memory: 229,376 GB
Interconnect: Proprietary
Operating System: Linux
Compiler: ICC
MPI: MPICH2 with a custom GLEX channel
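
Because both the Rmax and the power draw are quoted for most of the systems above, a rough performance-per-watt figure can be derived directly from those numbers. The sketch below is an illustration based only on the figures listed in this report; the Green500 ranking mentioned later in this report uses its own measurement rules.

```python
# Rough performance-per-watt from the Rmax and power figures quoted above.
# (Illustrative only; the Green500 list applies its own measurement methodology.)

def gflops_per_watt(rmax_tflops: float, power_kw: float) -> float:
    """Convert Rmax (TFlop/s) and power (kW) into GFlop/s per watt."""
    return (rmax_tflops * 1000.0) / (power_kw * 1000.0)

# (Rmax in TFlop/s, power in kW), taken from the entries above.
systems = {
    "Tianhe-2":  (33_862.7, 17_808.0),
    "Titan":     (17_590.0,  8_209.0),
    "Sequoia":   (17_173.2,  7_890.0),
    "Tianhe-1A": ( 2_566.0,  4_040.0),
}

for name, (rmax, power) in systems.items():
    print(f"{name:10s} {gflops_per_watt(rmax, power):.2f} GFlop/s per watt")
# Tianhe-2 ~1.90, Titan ~2.14, Sequoia ~2.18, Tianhe-1A ~0.64
```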

    http://www.china.org.cn/top10/2013-06/21/content_29187340_2.htm

    Supercomputing in India

India's supercomputer program was started in the late 1980s because Cray supercomputers were denied for import due to an arms embargo imposed on India, as supercomputing was a dual-use technology that could be used for developing nuclear weapons.

PARAM 8000 is considered India's first supercomputer. It was indigenously built in 1990 by the Centre for Development of Advanced Computing and was replicated and installed at ICAD Moscow in 1991 under Russian collaboration.

India's Rank in Top500 Supercomputers

As of June 2013, India had 11 systems on the Top500 list, ranked 36, 69, 89, 95, 174, 245, 291, 309, 310, 311 and 439.

Rank | Site | Name | Rmax (TFlop/s) | Rpeak (TFlop/s)
36 | Indian Institute of Tropical Meteorology | iDataPlex DX360M4 | 719.2 | 790.7
69 | Centre for Development of Advanced Computing | PARAM Yuva - II | 386.7 | 529.4
89 | National Centre for Medium Range Weather Forecasting | iDataPlex DX360M4 | 318.4 | 350.1
95 | CSIR Centre for Mathematical Modelling and Computer Simulation | Cluster Platform 3000 BL460c Gen8 | 303.9 | 360.8
174 | Vikram Sarabhai Space Centre, ISRO | SAGA - Z24XX/SL390s Cluster | 188.7 | 394.8
245 | Manufacturing Company, India | Cluster Platform 3000 BL460c Gen8 | 149.2 | 175.7
291 | Computational Research Laboratories | EKA - Cluster Platform 3000 BL460c | 132.8 | 172.6
309 | Semiconductor Company (F) | Cluster Platform 3000 BL460c Gen8 | 129.2 | 182.0
310 | Semiconductor Company (F) | Cluster Platform 3000 BL460c Gen8 | 129.2 | 182.0
311 | Network Company | Cluster Platform 3000 BL460c Gen8 | 128.8 | 179.7
439 | IT Services Provider (B) | Cluster Platform 3000 BL460c Gen8 | 104.2 | 199.7


    PARAM SERIES

After being denied Cray supercomputers as a result of a technology embargo, India started a program to develop indigenous supercomputers and supercomputing technologies.

Supercomputers were considered a double-edged weapon capable of assisting in the development of nuclear weapons.[5] To achieve self-sufficiency in the field, the Centre for Development of Advanced Computing (C-DAC) was set up in 1988 by the then Department of Electronics with Dr. Vijay Bhatkar as its Director. The project was given an initial run of 3 years and an initial funding of ₹300,000,000, since roughly the same amount of money and time was usually expended to purchase a supercomputer from the US. In 1990, a prototype was produced and was benchmarked at the 1990 Zurich Supercomputing Show. It surpassed most other systems, placing India second after the US.

    The final result of the effort was the PARAM 8000, which was installed in 1991. It is

    considered India's first supercomputer.

    PARAM 8000

Unveiled in 1991, PARAM 8000 used Inmos T800 transputers. Transputers were at the time a fairly new and innovative microprocessor architecture designed for parallel processing. PARAM 8000 was a distributed MIMD architecture with a reconfigurable interconnection network, and it had 64 CPUs.

    PARAM 8600

PARAM 8600 was an improvement over PARAM 8000. It was a 256-CPU computer: for every four Inmos T800 transputers, it employed an Intel i860 coprocessor. The result was over 5 GFLOPS at peak for vector processing. Several of these models were exported.
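
Taking the figures above at face value, the implied per-coprocessor rate can be backed out: 256 transputers with one i860 per four T800s gives 64 coprocessors, so a peak of just over 5 GFLOPS corresponds to roughly 80 MFLOPS per i860. A minimal sketch of that arithmetic:

```python
# Backing out the implied per-coprocessor rate from the PARAM 8600 figures above.
t800_count = 256          # transputers in the machine (quoted above)
t800_per_i860 = 4         # one i860 coprocessor for every four T800s (quoted above)
peak_gflops = 5.0         # "over 5 GFLOPS" at peak for vector processing (quoted above)

i860_count = t800_count // t800_per_i860             # 64 coprocessors
mflops_per_i860 = peak_gflops * 1000 / i860_count    # ~78 MFLOPS each
print(i860_count, round(mflops_per_i860, 1))          # 64 78.1
```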

    PARAM 9900/SS

PARAM 9900/SS was designed to be an MPP system. It used the SuperSPARC II processor. The design was made modular so that newer processors could be easily accommodated. Typically it used 32-40 processors, but it could be scaled up to 200 CPUs using a Clos network topology. PARAM 9900/US was the UltraSPARC variant and PARAM 9900/AA was the DEC variant.


    PARAM 10000

In 1998, the PARAM 10000 was unveiled. PARAM 10000 used several independent nodes, each based on the Sun Enterprise 250 server; each such server contained two 400 MHz UltraSPARC II processors. The base configuration had three compute nodes and a server node. The peak speed of this base system was 6.4 GFLOPS. A typical system would contain 160 CPUs and be capable of 100 GFLOPS, but it was easily scalable to the TFLOPS range.
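
The quoted base-system peak is consistent with simple arithmetic on the node configuration: four Enterprise 250 nodes with two 400 MHz UltraSPARC II CPUs each gives eight CPUs, which reach 6.4 GFLOPS if each CPU completes two floating-point operations per cycle (that per-cycle figure is an assumption used here for illustration, not a number given in this report). A minimal sketch:

```python
# Checking the PARAM 10000 base-system peak quoted above.
# Assumption (for illustration only): 2 floating-point operations per CPU per cycle.
nodes = 4                # three compute nodes + one server node (quoted above)
cpus_per_node = 2        # two UltraSPARC II CPUs per Enterprise 250 (quoted above)
clock_ghz = 0.4          # 400 MHz (quoted above)
flops_per_cycle = 2      # assumed

peak_gflops = nodes * cpus_per_node * clock_ghz * flops_per_cycle
print(peak_gflops)       # 6.4 GFLOPS, matching the figure quoted above
```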

    PARAM Padma

PARAM Padma (Padma means Lotus in Sanskrit) was introduced in April 2003. It had a peak speed of 1024 GFLOPS (about 1 TFLOPS) and peak storage of 1 TB. It used 248 IBM Power4 CPUs of 1 GHz each, running the IBM AIX 5.1L operating system, with PARAMnet II as its primary interconnect. It was the first Indian supercomputer to break the 1 TFLOPS barrier.

    PARAM Yuva

PARAM Yuva (Yuva means Youth in Sanskrit) was unveiled in November 2008. It has a maximum sustained speed (Rmax) of 38.1 TFLOPS and a peak speed (Rpeak) of 54 TFLOPS.[10] It has 4,608 cores, based on Intel 73XX processors of 2.9 GHz each. It has a storage capacity of 25 TB, expandable up to 200 TB, and uses PARAMnet 3 as its primary interconnect.

PARAM Yuva II

PARAM Yuva II was built by the Centre for Development of Advanced Computing in a period of three months, at a cost of ₹16 crore (US$2 million), and was unveiled on 8 February 2013. It performs at a peak of 524 teraflops and consumes 35% less energy compared to PARAM Yuva. It delivers a sustained performance of 360.8 teraflops on the community-standard LINPACK benchmark, and would have been ranked 62nd in the November 2012 TOP500 list. In terms of power efficiency, it would have been ranked 33rd in the November 2012 Green500 list of the world's most energy-efficient supercomputers. It is the first Indian supercomputer to achieve more than 500 teraflops.
