sc10 slide share

SC10, Guy Tel-Zur, November 2010


DESCRIPTION

SC10 Diary

TRANSCRIPT

Page 1: Sc10 slide share

SC10

Guy Tel-Zur, November 2010

Page 2: Sc10 slide share

My Own Diary

• A subjective impression from SC10

Page 3: Sc10 slide share

Outline

• The Tutorials
• Plenary Talks
• Papers & Panels
• The Top500 list
• The Exhibition

http://sc10.supercomputing.org/

Page 4: Sc10 slide share

Day 0 - Arrival
US Airways entertainment system is running Linux!

Pages 5-6: Sc10 slide share
Page 7: Sc10 slide share

A lecture by Prof. Rubin Landau (Computational Physics) at the Educational Track

3:30PM - 5:00PM, Communities, Education
Physics: Examples in Computational Physics, Part 2
Rubin Landau, Room 297

Although physics faculty are incorporating computers to enhance physics education, computation is often viewed as a black box whose inner workings need not be understood. We propose to open up the computational black box by providing Computational Physics (CP) curricula materials based on a problem-solving paradigm that can be incorporated into existing physics classes, or used in stand-alone CP classes. The curricula materials assume a computational science point of view, where understanding of the applied math and the CS is also important, and usually involve a compiled language in order for the students to get closer to the algorithms. The materials derive from a new CP eTextbook available from Compadre that includes video-based lectures, programs, applets, visualizations and animations.
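As a concrete, purely illustrative example of the problem-solving paradigm described above: a classic CP exercise, projectile motion with air drag, integrated with Euler's method. The curricula favor a compiled language; Python is used here only to keep the sketch short, and every name and parameter value is my own choice, not taken from the eTextbook.

# Illustrative sketch only (not from the CP eTextbook): projectile motion
# with linear air drag, integrated by Euler's method.
import numpy as np

def projectile_with_drag(v0=30.0, angle_deg=45.0, b_over_m=0.1, dt=0.001, g=9.81):
    """Return the (x, y) trajectory of a projectile with linear drag."""
    theta = np.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * np.cos(theta), v0 * np.sin(theta)
    xs, ys = [x], [y]
    while y >= 0.0:
        ax = -b_over_m * vx          # drag slows the horizontal motion
        ay = -g - b_over_m * vy      # gravity plus vertical drag
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

if __name__ == "__main__":
    xs, ys = projectile_with_drag()
    print(f"Range with drag: {xs[-1]:.2f} m after {len(xs)} Euler steps")

A student would typically compare the computed range against the drag-free analytic result and then study how the answer changes with the time step dt.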

Page 8: Sc10 slide share

Eclipse PTP

At last, FORTRAN has an advanced, free IDE!

PTP - Parallel Tools Platform
http://www.eclipse.org/ptp/

Page 9: Sc10 slide share

Elastic-R

Pages 10-12: Sc10 slide share
Page 13: Sc10 slide share

Visit

Page 14: Sc10 slide share

Python for Scientific Computing
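The slides do not reproduce the tutorial material; purely as a reminder of the flavor of such a tutorial, here is a small, self-contained NumPy example of my own (a 1D finite-difference Poisson solve), not taken from the SC10 course.

# Illustrative only: the kind of vectorized NumPy idiom a "Python for
# Scientific Computing" tutorial typically shows.
import numpy as np
from numpy.linalg import solve

# Finite differences for -u'' = f on [0, 1] with u(0) = u(1) = 0.
n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)          # chosen so the exact solution is sin(pi*x)

A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u = solve(A, f)                            # dense solve is fine at this size
print("max error vs exact solution:", np.max(np.abs(u - np.sin(np.pi * x))))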

Page 15: Sc10 slide share

Amazon Cluster GPU Instances provide 22 GB of memory, 33.5 EC2 Compute Units, and utilize the Amazon EC2 Cluster network, which provides high throughput and low latency for High Performance Computing (HPC) and data-intensive applications. Each GPU instance features two NVIDIA Tesla® M2050 GPUs, delivering peak performance of more than one trillion double-precision FLOPS. Many workloads can be greatly accelerated by taking advantage of the parallel processing power of hundreds of cores in the new GPU instances. Many industries, including oil and gas exploration, graphics rendering and engineering design, are using GPU processors to improve the performance of their critical applications.

Amazon Cluster GPU Instances extend the options for running HPC workloads in the AWS cloud. Cluster Compute Instances, launched earlier this year, provide the ability to create clusters of instances connected by a low latency, high throughput network. Cluster GPU Instances give customers with HPC workloads an additional option to further customize their high performance clusters in the cloud. For those customers who have applications that can benefit from the parallel computing power of GPUs, Amazon Cluster GPU Instances can often lead to even further efficiency gains over what can be achieved with traditional processors. By leveraging both instance types, HPC customers can tailor their compute cluster to best meet the performance needs of their workloads. For more information on HPC capabilities provided by Amazon EC2, visit aws.amazon.com/ec2/hpc-applications.

Amazon Cluster GPU Instances (not SC10 related)
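For context only, a hedged sketch of how such an instance would be requested programmatically with boto3 (a library that post-dates SC10). The AMI ID, key pair and placement group names are placeholders; 'cg1.4xlarge' was the Cluster GPU instance type announced at the time.

# Hedged sketch, not from the slides: launching a Cluster GPU instance
# with boto3. Placeholder values must be replaced with real ones.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-00000000",                        # placeholder HVM AMI
    InstanceType="cg1.4xlarge",                    # 2x NVIDIA Tesla M2050, 22 GB RAM
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                          # placeholder key pair
    Placement={"GroupName": "my-cluster-group"},   # low-latency cluster placement group
)
print(response["Instances"][0]["InstanceId"])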

Page 16: Sc10 slide share

The Top500

Pages 17-21: Sc10 slide share
Page 22: Sc10 slide share

World’s #1

China's National University of Defense Technology's Tianhe-1A supercomputer has taken the top ranking from Oak Ridge National Laboratory's Jaguar supercomputer on the latest Top500 ranking of the world's fastest supercomputers. The Tianhe-1A achieved a performance level of 2.67 petaflops, while Jaguar achieved 1.75 petaflops. The Nebulae, another Chinese-built supercomputer, came in third with a performance of 1.27 petaflops. "What the Chinese have done is they're exploiting the power of [graphics processing units], which are...awfully close to being uniquely suited to this particular benchmark," says University of Illinois Urbana-Champaign professor Bill Gropp. Tianhe-1A is a Linux computer built from components from Intel and NVIDIA. "What we should be focusing on is not losing our leadership and being able to apply computing to a broad range of science and engineering problems," Gropp says. Overall, China had five supercomputers ranked in the top 100, while 42 of the top 100 computers were U.S. systems.

Page 23: Sc10 slide share

The Top 10

Pages 24-25: Sc10 slide share
Page 26: Sc10 slide share

http://www.green500.org

Page 27: Sc10 slide share

Talks

Page 28: Sc10 slide share

SC10 Keynote Lecture
Clayton M. Christensen - Harvard Business School

Page 29: Sc10 slide share

Disruption is the mechanism by which great companies continue to succeed and new entrants displace the market leaders. Disruptive innovations either create new markets or reshape existing markets by delivering relatively simple, convenient, low-cost innovations to a set of customers who are ignored by industry leaders. One of the bedrock principles of Christensen's disruptive innovation theory is that companies innovate faster than customers' lives change. Because of this, most organizations end up producing products that are too good, too expensive, and too inconvenient for many customers. By pursuing only these "sustaining" innovations, companies unwittingly open the door to "disruptive" innovations, be it "low-end disruption" targeting overshot, less-demanding customers or "new-market disruption" targeting non-consumers.

1. Many of today's markets that appear to have little growth remaining actually have great growth potential through disruptive innovations that transform complicated, expensive products into simple, affordable ones.
2. Successful innovation seems unpredictable because innovators rely excessively on data, which is only available about the past, and have not been equipped with sound theories that would allow them to see the future perceptively. This problem has been solved.
3. Understanding the customer is the wrong unit of analysis for successful innovation; understanding the job that the customer is trying to do is the key.
4. Many innovations that have extraordinary growth potential fail not because of the product or service itself, but because the company forced it into an inappropriate business model instead of creating a new, optimal one.
5. Companies with disruptive products and business models are the ones whose share prices increase faster than the market over sustained periods.

How to Create New Growth in a Risk-Minimizing Environment

Page 30: Sc10 slide share

SC10 Keynote Speaker

Page 31: Sc10 slide share

High-End Computing and Climate Modeling: Future Trends and Prospects

SESSION: Big Science, Big Data II
Presenter(s): Phillip Colella

ABSTRACT: Over the past few years, there has been considerable discussion of the change in high-end computing, due to the change in the way increased processor performance will be obtained: heterogeneous processors with more cores per chip, deeper and more complex memory and communications hierarchies, and fewer bytes per flop. At the same time, the aggregate floating-point performance at the high end will continue to increase, to the point that we can expect exascale machines by the end of the decade. In this talk, we will discuss some of the consequences of these trends for scientific applications from a mathematical algorithm and software standpoint. We will use the specific example of climate modeling as a focus, based on discussions that have been going on in that community for the past two years.

Chair/Presenter Details:
Patricia Kovatch (Chair) - University of Tennessee, Knoxville
Phillip Colella - Lawrence Berkeley National Laboratory

Page 32: Sc10 slide share

Prediction of Earthquake Ground Motions Using Large-Scale Numerical Simulations

SESSION: Big Science, Big Data II
Presenter(s): Tom Jordan
ABSTRACT: Realistic earthquake simulations can now predict strong ground motions from the largest anticipated fault ruptures. Olsen et al. (this meeting) have simulated a M8 “wall-to-wall” earthquake on the southern San Andreas fault up to 2 Hz, sustaining 220 teraflops for 24 hours on 223K cores of the NCCS Jaguar. Large simulation ensembles (~10^6) have been combined with probabilistic rupture forecasts to create CyberShake, a physics-based hazard model for Southern California. In the highly populated sedimentary basins, CyberShake predicts long-period shaking intensities substantially higher than empirical models, primarily due to the strong coupling between rupture directivity and basin excitation. Simulations are improving operational earthquake forecasting, which provides short-term earthquake probabilities using seismic triggering models, and earthquake early warning, which attempts to predict imminent shaking during an event. These applications offer new and urgent computational challenges, including requirements for robust, on-demand supercomputing and rapid access to very large data sets.

Page 33: Sc10 slide share

Panel

Page 34: Sc10 slide share

Exascale Computing Will (Won't) Be Used by Scientists by the End of This Decade

EVENT TYPE: Panel
Panelists: Marc Snir, William Gropp, Peter Kogge, Burton Smith, Horst Simon, Bob Lucas, Allan Snavely, Steve Wallach
ABSTRACT: DOE has set a goal of Exascale performance by 2018. While not impossible, this will require radical innovations. A contrarian view may hold that technical obstacles, cost, limited need, and inadequate policies will delay exascale well beyond 2018. The magnitude of the required investments will lead to a public discussion for which we need to be well prepared. We propose to have a public debate on the proposition "Exascale computing will be used by the end of the decade", with one team arguing in favor and another team arguing against. The arguments should consider technical and non-technical obstacles and use cases. The proposed format is: (a) introductory statements by each team; (b) Q&A in which each team can put questions to the other team; (c) Q&A from the audience to either team. We shall push for a lively debate that is not only informative, but also entertaining.

Page 35: Sc10 slide share

GPU Computing: To ExaScale and Beyond

Bill Dally - NVIDIA/Stanford University

Page 36: Sc10 slide share

Dedicated High-End Computing to Revolutionize Climate Modeling: An International Collaboration

ABSTRACT: A collaboration of six institutions on three continents is investigating the use of dedicated HPC resources for global climate modeling. Two types of experiments were run using the entire 18,048-core Cray XT-4 at NICS from October 2009 to March 2010: (1) an experimental version of the ECMWF Integrated Forecast System, run at several resolutions down to 10 km grid spacing to evaluate high-impact and extreme events; and (2) the NICAM global atmospheric model from JAMSTEC, run at 7 km grid resolution to simulate the boreal summer climate over many years. The numerical experiments sought to determine whether increasing weather and climate model resolution to accurately resolve mesoscale phenomena in the atmosphere can improve the model fidelity in simulating the mean climate and the distribution of variances and covariances.

Chair/Presenter Details:
Robert Jacob (Chair) - Argonne National Laboratory
James Kinter - Institute of Global Environment and Society

Page 37: Sc10 slide share

Using GPUs for Weather and Climate Models

Presenter(s): Mark Govett
ABSTRACT: With the power, cooling, space, and performance restrictions facing large CPU-based systems, graphics processing units (GPUs) appear poised to become the next-generation supercomputers. GPU-based systems already account for two of the top ten fastest supercomputers on the Top500 list, with the potential to dominate this list in the future. While the hardware is highly scalable, achieving good parallel performance can be challenging. Language translation, code conversion and adaptation, and performance optimization will be required. This presentation will survey existing efforts to use GPUs for weather and climate applications. Two general parallelization approaches will be discussed. The most common approach is to run select routines on the GPU, but this requires data transfers between CPU and GPU. Another approach is to run everything on the GPU and avoid the data transfers, but this can require significant effort to parallelize and optimize the code.
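To make the trade-off between the two approaches concrete, here is a small CuPy sketch of my own (the weather codes surveyed in the talk are Fortran/CUDA, not Python): approach A copies the field between host and device on every step, while approach B keeps the model state resident on the GPU for the whole time loop.

# Illustration only: the two GPU porting strategies described above,
# using CuPy and a stand-in "timestep" (a cheap stencil-like average).
import numpy as np
import cupy as cp

def step(field):
    return 0.25 * (cp.roll(field, 1, 0) + cp.roll(field, -1, 0) +
                   cp.roll(field, 1, 1) + cp.roll(field, -1, 1))

state_cpu = np.random.rand(2048, 2048).astype(np.float32)

# Approach A: offload selected routines, paying a host<->device copy per step.
for _ in range(10):
    d = cp.asarray(state_cpu)        # CPU -> GPU transfer every step
    d = step(d)
    state_cpu = cp.asnumpy(d)        # GPU -> CPU transfer every step

# Approach B: keep the state resident on the GPU; copy back only at the end.
state_gpu = cp.asarray(state_cpu)
for _ in range(10):
    state_gpu = step(state_gpu)      # no transfers inside the time loop
state_cpu = cp.asnumpy(state_gpu)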

Page 38: Sc10 slide share

Global Arrays
Global Arrays Roadmap and Future Developments
SESSION: Global Arrays: Past, Present & Future
EVENT TYPE: Special and Invited Events
SESSION CHAIR: Moe Khaleel
Speaker(s): Daniel Chavarria
ABSTRACT: This talk will describe the current state of the Global Arrays toolkit and its underlying ARMCI communication layer, and how we believe they should evolve over the next few years. The research and development agenda is targeting expected architectural features and configurations on emerging extreme-scale and exascale systems.
Speaker Details:
Moe Khaleel (Chair) - Pacific Northwest National Laboratory
Daniel Chavarria - Pacific Northwest National Laboratory
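For readers unfamiliar with the programming model behind the talk: Global Arrays and ARMCI provide one-sided put/get access to a distributed array. The sketch below illustrates that idea with mpi4py RMA windows; it is my own illustration of the concept, not the Global Arrays API.

# Minimal one-sided (RMA) sketch of the GA/ARMCI-style model using mpi4py.
# Run with, e.g.:  mpiexec -n 4 python ga_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local_len = 4                                    # each rank owns one block of the "global" array
itemsize = MPI.DOUBLE.Get_size()
win = MPI.Win.Allocate(local_len * itemsize, itemsize, comm=comm)

local = np.frombuffer(win.tomemory(), dtype="d", count=local_len)
local[:] = rank                                   # fill the locally owned block

win.Fence()                                       # make all local blocks visible
remote = np.empty(local_len, dtype="d")
win.Get(remote, target_rank=0)                    # one-sided read of rank 0's block
win.Fence()

print(f"rank {rank} fetched rank 0's block: {remote}")
win.Free()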

Page 39: Sc10 slide share

http://www.emsl.pnl.gov/docs/global/

Page 40: Sc10 slide share

Enabling High Performance Cloud Computing Environments

SESSION LEADER(S): Jurrie Van Den Breekel
ABSTRACT: The cloud is the new “killer” service to bring service providers and enterprises into the age of network services capable of infinite scale. As an example, 5,000 servers with many cloud services could feasibly serve one billion users or end devices. The idea of services at this scale is now possible with multi-core processing, virtualization and high-speed Ethernet, but even today the mix of implementing these technologies requires careful consideration in public and private infrastructure design. While cloud computing offers tremendous possibilities, it is critical to understand the limitations of this framework across key network attributes such as performance, security, availability and scalability. Real-world testing of a cloud computing environment is a key step toward putting to rest any concerns around performance, security and availability. Spirent will share key findings from recent work with the European Advanced Networking Test Center (EANTC), including a close examination of how implementing a cloud approach within a public or private data center affects the firewall, data center bridging, virtualization, and WAN optimization.
Session Leader Details:
Jurrie Van Den Breekel (Primary Session Leader) - Spirent Communications

Page 41: Sc10 slide share

Cont’d

Speakers:
NEOVISE – Paul Burns
SPIRENT – Jurrie van den Breekel
BROCADE – Steve Smith

Paul:
• Single application – single server
• Single application – multiple servers (cluster computing)
• Multiple applications – single server (virtualization)
• Multiple applications – multiple servers (cloud computing)

3rd dimension: tenants; T1 and T2 on the same physical server (security)

Page 42: Sc10 slide share

Friday Panels (19-11-2010)

Page 43: Sc10 slide share

Future Supercomputing Centers
Thom Dunning, William Gropp, Thomas Lippert, Satoshi Matsuoka, Thomas Zacharia

This panel will discuss the nature of federal- and state-supported supercomputing centers, what is required to sustain them in the future, and how they will cope with the evolution of computing technology. Since the federally supported centers were created in the mid-1980s, they have fueled innovation and discovery, increasing the number of computational researchers, stimulating the use of HPC in industry, and pioneering new technologies. The future of supercomputing is exciting: sustained petascale systems are here, with planning for exascale systems now underway. But it is also challenging: disruptive technology changes will be needed to reach the exascale. How can supercomputing centers help ensure that today's petascale supercomputers are effectively used to advance science and engineering, and how can they help the research and industrial communities prepare for an exciting, if uncertain, future?

Page 44: Sc10 slide share

Advanced HPC Execution Models: Innovation or Disruption

Panelists: Thomas L. Sterling, William Carlson, Guang Gao, William Gropp, Vivek Sarkar, Kathy Yelick
ABSTRACT: An execution model is the underlying conceptual foundation that integrates the HPC system architecture, programming methods, and the intervening operating system and runtime system software. It is a set of principles that govern the co-design, operation, and interoperability of the system layers to achieve the most efficient scalable computing in terms of time and energy. Historically, HPC has been driven by five previous epochs of execution models, including the most recent CSP, exemplified by "Pax MPI" for almost two decades. HPC is now confronted by a severe barrier of parallelism, power, clock rate, and complexity, exemplified by multicore and GPU heterogeneity, that impedes progress between today's Petascale and the end of the decade's Exascale performance. The panel will address the key questions of requirements, form, impact, and programming of such future execution models, should they emerge from research in academia, industry, and government centers.

Page 45: Sc10 slide share
Page 46: Sc10 slide share

The Exhibition

Pages 47-54: Sc10 slide share