ORNL Computing Story
Arthur Maccabe, Director, Computer Science and Mathematics Division
May 2010
Managed by UT-Battelle for the Department of Energy

TRANSCRIPT

  • Slide 1
  • ORNL Computing Story. Arthur Maccabe, Director, Computer Science and Mathematics Division. May 2010. Managed by UT-Battelle for the Department of Energy.
  • Slide 2
  • Our vision for sustained leadership and scientific impact:
    - Provide the world's most powerful open resource for capability computing
    - Follow a well-defined path for maintaining world leadership in this critical area
    - Attract the brightest talent and partnerships from all over the world
    - Deliver leading-edge science relevant to the missions of DOE and key federal and state agencies
    - Invest in education and training
    - Pursue the unique opportunity for multi-agency collaboration based on requirements and technology
  • Slide 3
  • ORNL has a role in providing a healthy HPC ecosystem for several agencies, with breadth of access across three tiers:
    - Ubiquitous access to data and workstation-level computing (>10,000 users): mid-range computing (clouds or clusters) and data centers; software either commercially available or developed internally; knowledge discovery tools and problem-solving environment; high-speed LAN and WAN; 2009 jobs with ~1 to 10^2 CPU cores.
    - Capacity computing (>1,000 users): large hardware systems and mid-range computers (clusters); applications having some scalability developed and maintained, portals, user support; high-speed WAN; 2009 jobs with ~10^3 CPU cores.
    - Capability computing (>100 users): leadership computing at the exascale for the most urgent, challenging, and important problems; scientific computing support; scalable applications developed and maintained; computational endstations for community codes; high-speed WAN; 2009 jobs with ~10^5 CPU cores.
  • Slide 4
  • ORNL's computing resources:
    - Networks: Science Data Net, TeraGrid, Internet2, ESnet; classified network; network routers.
    - Scientific Visualization Lab: EVEREST powerwall (30 ft x 8 ft, 35 million pixels) and EVEREST cluster.
    - Data analysis: LENS (32 nodes), EWOK (81 nodes).
    - Experimental systems: Keeneland (NSF, GPU-based compute cluster), IBM BG/P, Power 7, Cray XMT.
    - Center-wide file system: Spider, 192 data servers, 10 PB of disk, 240 GB/s.
    - Archival storage: HPSS, 8 PB stored, 30 PB capacity.
    - Leadership capability: DOE Jaguar, 2.3 PF (224,256 cores, 300 TB memory); NSF Kraken, 1 PF (99,072 cores, 132 TB memory).
    - Capacity computing: DOE Jaguar XT4, 240 TF (31,328 cores, 62 TB memory); ORNL Frost (2,048 cores, 3 TB memory).
    - Climate: NSF Athena, 164 TF (18,060 cores, 18 TB memory); NOAA system, TBD, 1 PF+ (cores and memory TBD).
    - National security: Oak Ridge Institutional Clusters (LLNL model), multiprogrammatic clusters.
    - December 2009 summary: 6 supercomputers, 376,812 cores, 516 TB memory, 3.85 petaflops, 14,350 TB of disk.
  • Slide 5
  • Why ORNL? We are DOE's lead laboratory for open scientific computing, with the world's most capable complex for computational science: infrastructure, staff, and multiagency programs.
    - Strategy: provide the nation's most capable site for advancing through the petascale to the exascale and beyond; execute multiagency programs for advanced architecture, extreme-scale software R&D, and transformational science; attract top talent, deliver an outstanding user program, and educate and train next-generation researchers.
    - Leadership areas: delivering leading-edge computational science for DOE missions; deploying and operating the computational resources required to tackle national challenges in science, energy, and security; scaling applications through the petascale to the exascale.
    - Infrastructure: Multiprogram Computational Data Center with infrastructure for 100-250 petaflops and 1 exaflops systems. Jaguar performance today: 2,300 TF+.
    - Outcome: sustained world leadership in transformational research and scientific discovery using advanced computing.
  • Slide 6
  • Advancing scientific discovery (* marks 5 of the top 10 ASCR science accomplishments in the Breakthroughs report that used OLCF resources and staff):
    - Electron pairing in HTC cuprates*: the 2D Hubbard model exhibits a superconducting state for cuprates and an electron pairing mechanism most likely caused by spin-fluctuation exchange. PRL (2007, 2008).
    - Taming turbulent heat loss in fusion reactors*: advanced understanding of energy loss in tokamak fusion reactor plasmas. PRL (vol. 99) and Physics of Plasmas (vol. 14).
    - How does a pulsar get its spin?*: discovered the first plausible mechanism for a pulsar's spin that fits observations, namely the shock wave created when the star's massive iron core collapses. Nature, Jan. 4, 2007.
    - Carbon sequestration: simulations of carbon dioxide sequestration show where the greenhouse gas flows when pumped into underground aquifers.
    - Stabilizing a lifted flame*: elucidated the mechanisms that allow a flame to burn stably above burners, namely increasing the fuel or surrounding-air co-flow velocity. Combustion and Flame (2008).
    - Shining the light on dark matter*: a glimpse into the invisible world of dark matter, finding that dark matter evolving in a galaxy such as our Milky Way remains identifiable and clumpy. Nature, Aug. 7, 2008.
  • Slide 7
  • Science applications are scaling on Jaguar. (Figure: application scaling results; one entry is marked "Finalist.")
  • Slide 8
  • Utility infrastructure to support multiple data centers (a rough sizing check follows below):
    - Build a 280 MW substation on the ORNL campus
    - Upgrade building power to 25 MW
    - Deploy a 10,000+ ton chiller plant
    - Upgrade UPS and generator capability
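As a back-of-the-envelope sketch (not stated on the slide; only the standard refrigeration conversion factor is assumed), the chiller plant sizing is consistent with the planned building power:

```python
# A rough sizing check: one ton of refrigeration removes about 3.517 kW of heat,
# so a 10,000+ ton chiller plant comfortably covers a 25 MW building power upgrade.
TON_KW = 3.517                      # kW of heat removal per ton of refrigeration
chiller_tons = 10_000               # planned chiller plant capacity (slide value)
building_power_mw = 25              # planned building power (slide value)

cooling_capacity_mw = chiller_tons * TON_KW / 1000
print(f"chiller capacity ~{cooling_capacity_mw:.1f} MW of heat removal, "
      f"~{cooling_capacity_mw - building_power_mw:.1f} MW of headroom over the 25 MW load")
```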
  • Slide 9
  • ORNL's data center: designed for efficiency.
    - Flywheel-based UPS for highest efficiency
    - Variable-speed chillers save energy
    - Liquid cooling is 1,000 times more efficient than air cooling
    - 13,800-volt power into the building saves on transmission losses
    - 480-volt power to the computers saves $1M in installation costs and reduces losses
    - Vapor barriers and positive air pressure keep humidity out of the computer center
    - Result: with a PUE of 1.25, ORNL has one of the world's most efficient data centers
  • Slide 10
  • Innovative best practices are needed to increase computer center efficiency. ORNL's steps draw on "High Performance Buildings for High Tech Industries in the 21st Century" (Dale Sartor, Lawrence Berkeley National Laboratory), which groups the opportunities roughly as follows:
    - Equipment cooling: air management, air economizers, humidification control alternatives, centralized air handlers, direct liquid cooling, low-pressure-drop air distribution, fan efficiency
    - Chiller plant: cooling plant optimizations, free cooling, variable-speed pumping, variable-speed chillers, heat recovery
    - Facility electrical systems: UPS systems, self-generation, AC-DC distribution, standby generation, lighting
    - IT equipment: power supply efficiency, sleep/standby loads, IT equipment fans
    - Cross-cutting issues: motor efficiency, right sizing, variable-speed drives, maintenance, commissioning and continuous benchmarking, redundancies, charging for space and power, building envelope
  • Slide 11
  • Today, ORNL's facility is among the world's most efficient data centers. Power utilization efficiency (PUE) = total data center power / IT equipment power. ORNL's PUE = 1.25 (a worked example follows below).
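A minimal illustration of the PUE definition above. The 1.25 value is the slide's; the 7 MW IT load is borrowed from the Jaguar figure on slide 17 purely as an example input:

```python
# Power utilization efficiency = total data center power / IT equipment power.
def pue(total_facility_power_mw: float, it_equipment_power_mw: float) -> float:
    return total_facility_power_mw / it_equipment_power_mw

it_load_mw = 7.0                    # e.g., Jaguar's ~7 MW system power (slide 17)
total_mw = it_load_mw * 1.25        # what a PUE of 1.25 implies for total facility power
print(pue(total_mw, it_load_mw))    # -> 1.25, i.e. only ~25% overhead for cooling and power delivery
```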
  • Slide 12
  • ORNL's state-of-the-art, laboratory-owned network is directly connected to every major R&E network at multiple lambdas.
  • Slide 13
  • Center-wide file system: Spider provides a shared, parallel file system for all systems (a quick consistency check of these numbers follows below).
    - Based on the Lustre file system
    - Demonstrated bandwidth of >240 GB/s
    - Over 10 PB of RAID-6 capacity on 13,440 1-TB SATA drives
    - 192 storage servers
    - Available from all systems via our high-performance, scalable I/O network (InfiniBand)
    - Currently mounted on over 26,000 client nodes
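A back-of-the-envelope check of the Spider figures quoted above (a sketch, not official sizing):

```python
drives = 13_440                     # 1 TB SATA drives
servers = 192                       # Lustre storage servers
agg_bw_gbs = 240                    # demonstrated aggregate bandwidth, GB/s

raw_capacity_pb = drives * 1 / 1000          # ~13.4 PB raw, >10 PB usable after RAID-6 overhead
per_server_bw_gbs = agg_bw_gbs / servers     # ~1.25 GB/s each storage server must sustain
print(f"~{raw_capacity_pb:.1f} PB raw, ~{per_server_bw_gbs:.2f} GB/s per storage server")
```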
  • Slide 14
  • Completing the simulation environment to meet the science requirements: the XT5 and XT4 compute systems, login nodes, the Spider file system, the EVEREST powerwall, the remote visualization cluster, the end-to-end cluster, the application development cluster, and a 25 PB data archive, all connected by the Scalable I/O Network (SION), a 4x DDR InfiniBand backplane network.
  • Slide 15
  • We have increased system performance by 1,000 times since 2004 (the cumulative speedup is tallied in the sketch below):
    - 2005: Cray X1, 3 TF; Cray XT3 single-core, 26 TF
    - 2006: Cray XT3 dual-core, 54 TF
    - 2007: Cray XT4, 119 TF
    - 2008: Cray XT4 quad-core, 263 TF
    - 2009: Cray XT5 systems (dual-socket, 12-core SMP nodes), 2,000+ TF and 1,000 TF
    Hardware scaled from single-core through dual-core to quad-core and on to dual-socket, 12-core SMP nodes; scaling applications and system software is the biggest challenge. NNSA and DoD have funded much of the basic system architecture research (the Cray XT is based on Sandia's Red Storm, the IBM BG was designed with Livermore, and the Cray X1 was designed in collaboration with DoD). The DOE SciDAC and NSF PetaApps programs are funding scalable application work, advancing many apps, and DOE-SC and NSF have funded much of the library and applied math work as well as tools. Computational liaisons are key to using the deployed systems.
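The sketch below tallies the cumulative speedup implied by the timeline above (peak TF values from the slide; the year pairing of the earliest systems is approximate):

```python
# Peak performance growth, taking the 3 TF Cray X1 as the base.
systems = [
    ("2005 Cray X1",               3),
    ("2005 Cray XT3 single-core", 26),
    ("2006 Cray XT3 dual-core",   54),
    ("2007 Cray XT4",            119),
    ("2008 Cray XT4 quad-core",  263),
    ("2009 Cray XT5 (Jaguar)",  2300),
]
base_tf = systems[0][1]
for name, tf in systems:
    print(f"{name:27s} {tf:5d} TF  (~{tf / base_tf:4.0f}x over the Cray X1)")
# The 2,300 TF XT5 is roughly 770x the 3 TF Cray X1, i.e. on the order of 1,000x since 2004-2005.
```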
  • Slide 16
  • Science requires advanced computational capability: 1,000x over the next decade.
    - Mission: deploy and operate the computational resources required to tackle global challenges.
    - Vision: maximize scientific productivity and progress on the largest-scale computational problems; deliver transforming discoveries in climate, materials, biology, energy technologies, etc.; enable investigation of otherwise inaccessible systems, from regional climate impacts to energy grid dynamics; provide world-class computational resources and specialized services for the most computationally intensive problems; provide a stable hardware/software path of increasing scale to maximize productive application development.
    - Roadmap: 2009, Cray XT5, 2+ PF leadership system for science; 2011, OLCF-3, 10-20 PF leadership system with some HPCS technology; 2015, OLCF-4, 100-250 PF based on DARPA HPCS technology; 2018, OLCF-5, 1 EF.
  • Slide 17
  • Jaguar: the world's most powerful computer, designed for science from the ground up (derived ratios follow below).
    - Peak performance: 2+ petaflops
    - System memory: 300 terabytes
    - Disk space: 10 petabytes
    - Disk bandwidth: 240 gigabytes/second
    - System power: 7 MW
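A few rough ratios derived from the Jaguar numbers above (illustrative arithmetic only, using the slide's figures):

```python
peak_flops   = 2.0e15     # 2+ PF peak
memory_bytes = 300e12     # 300 TB
disk_bw_bps  = 240e9      # 240 GB/s
power_w      = 7e6        # 7 MW

bytes_per_flop  = memory_bytes / peak_flops        # ~0.15 bytes of memory per peak flop/s
full_dump_min   = memory_bytes / disk_bw_bps / 60  # ~21 minutes to write all of memory to disk
gigaflops_per_w = peak_flops / power_w / 1e9       # ~0.29 GF/W
print(f"~{bytes_per_flop:.2f} B/flop, full-memory write ~{full_dump_min:.0f} min, "
      f"~{gigaflops_per_w:.2f} GF/W")
```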
  • Slide 18
  • National Institute for Computational Sciences: a University of Tennessee and ORNL partnership.
  • Slide 19
  • An international, dedicated high-end computing project to revolutionize climate modeling.
    - Project: use dedicated HPC resources, the Cray XT4 (Athena) at NICS, to simulate global climate change at the highest resolution ever, with six months of dedicated access.
    - Collaborators: COLA (Center for Ocean-Land-Atmosphere Studies, USA), ECMWF (European Centre for Medium-Range Weather Forecasts), JAMSTEC (Japan Agency for Marine-Earth Science and Technology), UT (University of Tokyo), and NICS (National Institute for Computational Sciences, University of Tennessee).
    - Codes: NICAM (Nonhydrostatic Icosahedral Atmospheric Model) and IFS (ECMWF Integrated Forecast System).
    - Expected outcomes: better understand global mesoscale phenomena in the atmosphere and ocean; understand the impact of greenhouse gases on the regional aspects of climate; improve the fidelity of models simulating mean climate and extreme events.
  • Slide 20
  • ORNL / DoD HPC collaborations.
    - Peta/exa-scale HPC technology collaborations in support of national security: system design, performance, and benchmark studies; wide-area network investigations.
    - Extreme Scale Software Center: focused on widening the usage and improving the productivity of the next generation of extreme-scale supercomputers; systems software, tools, environments, and applications development; large-scale system reliability, availability, and serviceability (RAS) improvements.
    - Facility: 25 MW of power, 8,000 tons of cooling, 32,000 ft2 of raised floor.
  • Slide 21
  • We are partners in the $250M DARPA HPCS program; a prototype Cray system is to be deployed at ORNL. The program fills the critical technology and capability gap between today's HPC technology (rooted in the late 1980s) and future quantum/bio computing. HPCS program focus areas and impact (slide courtesy of DARPA):
    - Performance (time-to-solution): speed up critical national security applications by a factor of 10 to 40
    - Programmability (idea-to-first-solution): reduce the cost and time of developing application solutions
    - Portability (transparency): insulate research and operational application software from the system
    - Robustness (reliability): apply all known techniques to protect against outside attacks, hardware faults, and programming errors
    - Applications: intelligence/surveillance, reconnaissance, cryptanalysis, weapons analysis, airborne contaminant modeling, and biotechnology
  • Slide 22
  • The next big climate challenge (Nature, Vol. 453, Issue No. 7193, March 15, 2008): develop a strategy to revolutionize prediction of the climate through the 21st century to help address the threat of global climate change.
    - The current inadequacy in providing robust estimates of risk to society is strongly influenced by limitations in computer power.
    - A World Climate Research Facility (WCRF) for climate prediction should be established to enable the national centers to accelerate progress in improving operational climate prediction at decadal to multi-decadal lead times.
    - The central component of the WCRF will be one or more dedicated high-end computing facilities that enable the revolution in climate prediction; systems at least 10,000 times more powerful than currently available computers are vital for regional climate predictions to underpin mitigation policies and adaptation needs.
  • Slide 23
  • Oak Ridge Climate Change Science Institute: a new multi-disciplinary, multi-agency organization bringing together ORNL's climate change programs to promote a cohesive vision of climate change science and technology at ORNL.
    - >100 staff members matrixed from across ORNL
    - World's leading computing systems (>3 PF in 2009)
    - Specialized facilities and laboratories
    - Leadership: James Hack, Director; David Bader, Deputy Director
    - Program areas: computation, geologic sequestration, observation and experiment, synthesis science, data, and developing areas
  • Slide 24
  • Oak Ridge Climate Change Science Institute: integration of models, measurements, and analysis (http://climatechangescience.ornl.gov).
    - Facilities and infrastructure: high-performance computing (OLCF, NICS, NOAA); data systems, knowledge discovery, and networking; observation networks; experimental manipulation facilities.
    - Earth system models from local to global scales: atmosphere (aerosols, water vapor, clouds, atmosphere dynamics); ocean (ocean dynamics and biogeochemistry); ice (ice dynamics); terrestrial and marine biogeochemistry (terrestrial ecosystem feedbacks and response); land use (land-use trends and projections); hydrologic cycle (extreme events, hydrology, aquatic ecology).
    - Process understanding: observation, experiment, theory.
    - Integrated assessment: adaptation, mitigation, infrastructure, energy and economics.
    - Partnerships will be essential.
  • Slide 25
  • What we would like to be able to say about climate-related impacts within the next 5 years:
    - What specific changes will be experienced, and where?
    - When will the changes begin, and how will they evolve over the next two to three decades?
    - How severe will the changes be? How do they compare with historical trends and events?
    - What will be the impacts over space and time, for people (e.g., food, water, health, employment, social structure), nature (e.g., biodiversity, water, fisheries), and infrastructure (e.g., energy, water, buildings)?
    - What specific and effective adaptation tactics are possible? How might adverse impacts be avoided or mitigated?
  • Slide 26
  • High-resolution Earth system modeling: a necessary core capability.
    - Strategy: develop predictive global simulation capabilities for addressing climate change consequences.
    - Driver: higher-fidelity simulations with improved predictive skill on decadal time scales and regional space scales.
    - Objective: configurable, high-resolution, scalable atmospheric, ocean, terrestrial, cryospheric, and carbon component models to answer policy- and planning-relevant questions about climate change.
    - Impact: exploration of renewable energy resource deployment, carbon mitigation strategies, and climate adaptation scenarios (agriculture, energy and water resource management, protection of vulnerable infrastructure, national security).
    - Figures: mesoscale-resolved column-integrated water vapor (Jaguar XT5 simulation); eddy-resolved sea surface temperature (Jaguar XT5 simulation); net ecosystem exchange of CO2.
  • Slide 27
  • NOAA collaboration as an example.
    - Interagency Agreement with the Department of Energy Oak Ridge Operations Office, signed August 6, 2009: a five-year, $215M Work for Others agreement (initial investment of $73M) covering facilities and science activity; provides dedicated, specialized high-performance computing and collaborative services for climate modeling; builds on an existing three-year MOU with DOE; common research goal to develop, test, and apply state-of-the-science computer-based global climate simulation models based upon a strong scientific foundation.
    - Collaboration with Oak Ridge National Laboratory: synergy among research efforts across multiple disciplines and agencies; leverages substantial existing infrastructure on campus; access to world-class network connectivity; leverages the ORNL Extreme Scale Software Center; utilizes proven project management expertise.
  • Slide 28
  • Delivering science: effective speed increases come from faster hardware and improved algorithms.
    - "Science Prospects and Benefits with HPC in the Next Decade": speculative requirements for scientific applications on HPC platforms (2010-2020).
    - Architectures and applications can be co-designed to create synergy in their respective evolutions.
    - Having HPC facilities embedded in an R&D organization (CCSD) staffed with expertise in computer science, math, scalable application development, knowledge discovery, and computational science enables integrated approaches to delivering new science: domain science as a partnership of experiment, theory, and simulation working toward shared goals, drawing on applied math, computer science, theoretical and computational science, and areas such as nanoscience.
  • Slide 29
  • We have a unique opportunity to advance math and computer science critical to mission success through multi-agency partnership: two national centers of excellence in HPC architecture and software were established in 2008, a major breakthrough in recognition of our capabilities.
    - Institute for Advanced Architectures and Algorithms (IAA): funded by DOE and DoD; an ORNL-Sandia partnership with $7M in 2008; IAA is the medium through which architectures and applications can be co-designed to create synergy in their respective evolutions.
    - Extreme Scale Software Development Center: jointly funded by NNSA and SC in 2008 (~$7.4M); aligned with DOE-SC interests.
  • Slide 30
  • Preparing for the exascale by analyzing long-term science drivers and requirements: we have recently surveyed, analyzed, and documented the science drivers and application requirements envisioned for exascale leadership systems in the 2020 timeframe. These studies help to provide a roadmap for the ORNL Leadership Computing Facility, uncover application needs and requirements, and focus our efforts on those disruptive technologies and research areas in need of our and the HPC community's attention.
  • Slide 31
  • What will an EF system look like? All projections are daunting. They are based on projections of existing technology, both with and without disruptive technologies assumed to arrive in the 2016-2020 timeframe.
    - Example 1: 400 cabinets, 115K nodes at 10 TF per node, 50-100 PB of storage, an optical interconnect, 150-200 GB/s of injection bandwidth per node, and 50 MW of power (see the check below).
    - Examples 2-4: DOE Townhall report (www.er.doe.gov/ASCR/ProgramDocuments/TownHall.pdf).
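A quick check of the arithmetic behind "Example 1" above (a sketch; the inputs are the slide's figures):

```python
nodes         = 115_000
tf_per_node   = 10
power_mw      = 50
injection_gbs = 150        # low end of the quoted per-node injection bandwidth

peak_ef        = nodes * tf_per_node / 1e6                             # ~1.15 EF peak
gflops_per_w   = nodes * tf_per_node * 1e12 / (power_mw * 1e6) / 1e9   # ~23 GF/W required
inj_bytes_flop = injection_gbs * 1e9 / (tf_per_node * 1e12)            # ~0.015 B/flop injection
print(f"~{peak_ef:.2f} EF peak, ~{gflops_per_w:.0f} GF/W, ~{inj_bytes_flop:.3f} B/flop injection")
```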
  • Slide 32
  • Moving to the exascale: the U.S. Department of Energy requires exaflops computing by 2018 to meet the needs of the science communities that depend on leadership computing.
    - Our vision: provide a series of increasingly powerful computer systems and work with the user community to scale applications to each of the new computer systems.
    - Today: upgrade of Jaguar to 6-core processors in progress.
    - OLCF-3 project: a new 10-20 petaflops computer based on early DARPA HPCS technology.
    - OLCF roadmap from the 10-year plan (2008-2019): today's 2 PF (6-core) and 1 PF systems, OLCF-3 at 10-20 PF, then 100 PF, a 250 PF HPCS system, and 1 EF, housed across the ORNL Computational Sciences Building, the ORNL Multipurpose Research Facility, and the ORNL Multiprogram Computing and Data Center (140,000 ft2).
  • Slide 33
  • Multi-core era: a new paradigm in computing. We have always had inflection points where technology changed: the vector era (USA, Japan), the massively parallel era (USA, Japan, Europe), and now the multi-core era.
  • Slide 34
  • What do the science codes need? What system features do the applications need to deliver the science? 20 PF in the 2011-2012 time frame, with 1 EF by the end of the decade.
    - Applications want powerful nodes, not lots of weak nodes: lots of FLOPS and OPS; fast, low-latency memory; memory capacity of 2 GB/core; a strong interconnect.
    - Features cited: node peak FLOPS, memory bandwidth, interconnect latency, memory latency, interconnect bandwidth, node memory capacity, disk bandwidth, large storage capacity, disk latency, WAN bandwidth, MTTI, archival capacity.
    (A quick check of the memory target against Jaguar follows below.)
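A sanity check of the 2 GB/core target against today's Jaguar XT5 (224,256 cores, 300 TB memory, from slide 4). This is a sketch to show what the target implies, not a requirements statement:

```python
cores     = 224_256
memory_tb = 300

gb_per_core_today = memory_tb * 1000 / cores      # ~1.34 GB/core on Jaguar today
memory_needed_tb  = cores * 2 / 1000              # ~450 TB to reach 2 GB/core at this core count
print(f"~{gb_per_core_today:.2f} GB/core today; 2 GB/core would need ~{memory_needed_tb:.0f} TB")
```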
  • Slide 35
  • How will we deliver these features and address the power problem? The DARPA Exascale Computing Study (Kogge et al.) concludes that we can't get to the exascale without radical changes:
    - Clock rates have reached a plateau and have even gone down.
    - Power and thermal constraints restrict socket performance.
    - Multi-core sockets are driving up required parallelism and scalability.
    - Future systems will get performance by integrating accelerators on the socket (already happening with GPUs): AMD Fusion, Intel Larrabee, IBM Cell (Power core plus synergistic processing units).
    - This has happened before (3090 + array processor, 8086 + 8087, ...).
  • Slide 36
  • OLCF-3 system description (file system implications sketched below):
    - Same number of cabinets, cabinet design, and cooling as Jaguar
    - Operating system upgrade of today's Cray Linux Environment
    - New Gemini interconnect: 3-D torus, globally addressable memory, advanced synchronization features
    - New accelerated node design
    - 10-20 PF peak performance
    - Much larger memory
    - 3x larger and 4x faster file system
    - 10 MW of power
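A sketch of what "3x larger and 4x faster file system" implies if the baseline is today's Spider (10 PB, 240 GB/s, from slide 13). The choice of baseline is an assumption for illustration, not a stated OLCF-3 specification:

```python
spider_capacity_pb = 10
spider_bw_gbs      = 240

olcf3_capacity_pb = 3 * spider_capacity_pb      # ~30 PB
olcf3_bw_gbs      = 4 * spider_bw_gbs           # ~960 GB/s, approaching 1 TB/s
print(f"~{olcf3_capacity_pb} PB at ~{olcf3_bw_gbs} GB/s")
```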
  • Slide 37
  • OLCF-3 node description: an accelerated node design combining a next-generation interconnect, a next-generation AMD processor, and a future NVIDIA accelerator; fat nodes with 70 GB of memory, very high performance processors, and very high memory bandwidth.
  • Slide 38
  • NVIDIA's commitment to HPC: features for computing on GPUs.
    - Added high-performance 64-bit arithmetic
    - Adding ECC and parity that other GPU vendors have not added (critical for a large system)
    - Larger memories
    - Dual copy engines for simultaneous execution and copy
    - The S1070 has 4 GPUs exclusively for computing (no video-out cables)
    - Development of CUDA, and recently announced work with PGI on CUDA Fortran
    From "NVIDIA Shifts GPU Clusters Into Second Gear" by Michael Feldman, HPCwire Editor, May 4, 2009: "GPU-accelerated clusters are moving quickly from the 'kick the tires' stage into production systems, and NVIDIA has positioned itself as the principal driver for this emerging high performance computing segment. The company's Tesla S1070 hardware, along with the CUDA computing environment, are starting to deliver real results for commercial HPC workloads. For example Hess Corporation has a 128-GPU cluster that is performing seismic processing for the company. The 32 S1070s (4 GPUs per board) are paired with dual-socket quad-core CPU servers and are performing at the level of about 2,000 dual-socket CPU servers for some of their workloads. For Hess, that means it can get the same computing horsepower for 1/20 the price and for 1/27 the power consumption." (The ratios are restated in the sketch below.)
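The ratios from the HPCwire excerpt above, restated as illustrative arithmetic (all inputs are from the quoted article):

```python
s1070_boards      = 32
gpus_per_board    = 4
cpu_servers_equiv = 2_000      # dual-socket quad-core servers matched, per the article

gpus            = s1070_boards * gpus_per_board     # 128 GPUs in the Hess cluster
servers_per_gpu = cpu_servers_equiv / gpus          # ~16 server-equivalents per GPU
print(f"{gpus} GPUs ~ {cpu_servers_equiv} CPU servers (~{servers_per_gpu:.0f} per GPU), "
      f"at ~1/20 the price and ~1/27 the power")
```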
  • Slide 39
  • Our strengths in key areas will ensure success.
    - Exceptional expertise and experience: in-depth applications expertise in house, strong partners, and a proven management team.
    - Driving the architectural innovation needed for exascale: superior global and injection bandwidths; purpose-built for scientific computing; leverages DARPA HPCS technologies.
    - Broad system software development partnerships and experienced performance/optimization tool development teams.
    - Partnerships with vendors and agencies to lead the way, leveraging DoD and NSF investments.
    - Petaflops to exaflops: power (reliability, availability, cost), space (current and growth path), and global network access capable of 96 x 100 Gb/s.
    - Multidisciplinary application development teams and partnerships to drive application performance.
    - Science base and thought leadership.
  • Slide 40
  • Oak Ridge National Laboratory: meeting the challenges of the 21st century. www.ornl.gov (Managed by UT-Battelle for the U.S. Department of Energy.)