


High Performance Computing in the Max Planck Society

Cover figure: plasma density and plasma temperature, ASDEX Upgrade.


Garching Computing Centre of the Max Planck Society (RZG)

Max Planck Institute for Plasma Physics, Boltzmannstr. 2

D-85748 Garching, Germany

Phone: +49-89-3299-2176
Fax: +49-89-3299-1301
URL: http://www.rzg.mpg.de



Contents

Fritz Haber Institute of the Max Planck Society, Berlin
Modeling Heterogeneous Catalysis by First-Principles Statistical Mechanics . . . p. 06
Growth-Related Properties of Semiconductor Quantum Dots . . . p. 07
Simulations of Nanoporous Carbon... . . . p. 09
First-Principles Analysis of the Stability and Metastability of Helical Conformations in Proteins . . . p. 10

Max Planck Institute for Polymer Research, Mainz
Attraction between Like Charged Objects . . . p. 11
Macromolecules at Interfaces . . . p. 13
Hydrodynamic Interactions... . . . p. 14

Max Planck Institute for Metals Research (MPI-MF), Stuttgart
Large-Scale Atomistic Studies... . . . p. 16
Ultra-Large Scale Atomistic Studies... . . . p. 18
Mechanical Properties of Submicron... . . . p. 20
Is Smaller Always Harder? . . . p. 21
A Novel Self-Folded State of Carbon Nanotubes . . . p. 22
Biological Molecules Interacting... . . . p. 24
Flaw Tolerant Bulk and Surface Nanostructures... . . . p. 25
Fluid Flow on the Nanoscale:... . . . p. 27

Max Planck Institute for Coal Research, Mülheim a. d. Ruhr
Simulation of NMR Properties . . . p. 29

Max Planck Institute for Chemical Physics of Solids, Dresden
Electronic Correlation in Position Space . . . p. 31
Mechanisms of Reactions, Crystal Nucleation, Dissolution and Phase Transitions . . . p. 33

Max Planck Institute for the Physics of Complex Systems (PKS), Dresden
Grand Challenges in Light-Matter Interaction . . . p. 35

Max Planck Institute of Microstructure Physics, Halle
Simulation of Correlated Systems . . . p. 37
Molecular Design . . . p. 39

Max Planck Institute for Solid State Research, Stuttgart
Density-Functional Theory for Adsorbed Molecular Complexes . . . p. 41

Max Planck Institute of Biophysics
Molecular Dynamics Simulations... . . . p. 43

Max Planck Institute for Biochemistry, Martinsried
Molecular Visualization of Cells... . . . p. 45

Max Planck Institute for Astrophysics, Garching near Munich
The “Millennium” Simulation . . . p. 47
Thermonuclear Supernova Explosions . . . p. 49
Gravitational Waves . . . p. 50
Simulations of Massive Star Explosions . . . p. 51

Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Potsdam
Numerical Simulations of Black Hole Spacetimes . . . p. 53

Max Planck Institute for Extraterrestrial Physics (MPE), Garching near Munich
Imaging the Gamma-Ray Sky . . . p. 55

Max Planck Institute for Solar System Research (MPS), Katlenburg-Lindau
Radiative Magneto-Convection in the Solar Atmosphere . . . p. 57

Max Planck Institute for Plasma Physics, Garching and Greifswald . . . p. 59
Turbulence in Fusion Plasmas . . . p. 60
Fluctuation-Caused Transport in Stellarators . . . p. 62
3D Plasma Edge Modelling and Island Divertor Physics . . . p. 64
Nonlinear Energy Dynamics in MHD Turbulence . . . p. 66
Anisotropic Spatial Structure of MHD Turbulence . . . p. 67



Centre for Interdisciplinary Plasma Science – Plasma Theory
Interaction of Global Structures and Microscopic Plasma Turbulence... . . . p. 68
Magnetic Reconnection... . . . p. 70
MPE Collisionless Magnetic... . . . p. 71
Structure of Quasi-Perpendicular Collisionless Shocks . . . p. 72

Max Planck Institute for Quantum Optics, Garching near Munich
Laser Plasma Interactions . . . p. 73
Wavepacket Dynamics... . . . p. 76

Max Planck Institute for Chemistry, Mainz
Global Atmospheric Chemistry Modeling . . . p. 78

The Rechenzentrum Garching (RZG)
History, Main Duties, Organisation and Resources . . . p. 80
Actual Compute Systems at RZG . . . p. 81
Supercomputing History in the Max Planck Society . . . p. 82
Main Computer Installations at the RZG/IPP . . . p. 83
The DEISA Project . . . p. 84
MiGenAS: Bioinformatics at RZG . . . p. 85
Data Acquisition Systems . . . p. 86
Groups at RZG . . . p. 88
Advisory Boards . . . p. 92



This brochure aims ... at giving an overview of the high performance computing applications in the Max Planck Society, illustrated by detailed descriptions of scientific projects currently being undertaken at the central Max Planck supercomputing centre in Garching (RZG).

Expansion of the theoretically oriented sciences within the Max Planck Society in the last ten years has led to an increased role of supercomputing in many areas such as materials science, theoretical chemistry, polymer research, and biophysics, in addition to the more traditional supercomputing disciplines of astrophysics, climate research and plasma physics.

Accordingly, more than thirty projects from over twenty Max Planck Institutes give insight into the range of challenges which can be addressed and solved with supercomputing simulations.

In addition to the traditional scientific approaches of theory and experiment, numerical simulations offer a third methodology for the advancement of scientific discovery.

Finally, some background is provided about the central role and tasks of RZG in High Performance Computing for the Max Planck Society.

STEFAN HEINZEL, DIRECTOR OF RZG


Fritz Haber Institute of the Max Planck Society at Berlin, Theory Department (Director: Matthias Scheffler)

Novel First-Principles Approaches in Materials Science

The main research projects of the Theory Department are concerned with fundamental aspects (starting from the electronic structure) of catalysis and the chemical and physical properties of surfaces, interfaces, clusters and nanostructures. Some work is also done in the field of biophysics. The design of new materials, improving the capabilities of existing materials, and making production processes more efficient are major goals of materials science. While research and development of new materials often appears like a trial-and-error approach, first-principles studies (starting with the electronic structure) have enormously expanded our understanding of materials phenomena. Indeed, progress towards a truly predictive theory of materials is now well under way.

The description of materials properties and functions requires careful multi-scale modeling, where corresponding theories from the electronic, mesoscopic, and macroscopic regimes are linked appropriately. Therefore a typical aspect of studies and new developments in the Theory Department concerns the linkage of atomistic, electronic structure theory with concepts and techniques from statistical mechanics and thermodynamics. Most of our calculations start from density-functional theory (DFT). In “ab initio atomistic thermodynamics”, DFT-determined materials parameters are used to study surfaces and nanostructures at temperatures and pressures different from zero. For non-equilibrium phenomena, such as crystal growth and heterogeneous catalysis, scientists in the Theory Department are developing the concept of “ab initio statistical mechanics”. In particular they are implementing kinetic Monte Carlo simulations, which include the relevant elementary processes directly, as DFT-determined input, rather than relying on just a few effective parameters. Such a description, which treats the statistical interplay of a large number of microscopically well-described elementary processes, enables predictive materials science modeling with microscopic understanding. Emphasis is also put on extending the methodology beyond the standard approximations to DFT, and to excited-state calculations. Keywords along this line are optimized effective potentials (including exact exchange), the GW self-energy approach, and the quantum Monte Carlo approach. The new quality of, and the novel insights that can be gained by, such techniques are illustrated on the following 4.5 pages, which sketch some recent highlights. For more information see http://www.fhi-berlin.mpg.de/th/th.html

Karsten Reuter and Matthias Scheffler, Fritz Haber Institute, Berlin

Modeling Heterogeneous Catalysis by First-Principles Statistical Mechanics

Research objectives: Our research aims at a predictive materials modeling, particularly addressing topics from the area of heterogeneous catalysis at metal and oxide surfaces. Special emphasis is put on treating the influence of realistic environments (multi-component gas phase at realistic pressure and temperature). Methodologically, such a predictive modeling requires combining an accurate description of all involved atomic-scale processes with a treatment of their statistical interplay at meso- and macroscopic scales (e.g. modeling the time evolution from 10⁻¹³ s to 1 s). We therefore develop and employ approaches that combine many-body quantum mechanics, as contained in density-functional theory (DFT), with concepts from statistical mechanics and thermodynamics.

Computational approach: The DFT calculations for this project are based on the highly accurate full-potential linearized augmented plane wave (FP-LAPW and APW+lo) method, as implemented in the WIEN2k code. Several aspects were jointly developed with P. Blaha et al. (Vienna) and the parallelization of the code with R. Dohmen and J. Pichlmeier (Garching). The statistical approaches include ab initio molecular dynamics and (kinetic) Monte Carlo simulations.

Project description: Late transition metals like rhodium or platinum are widely used to catalyze oxidation reactions, with the CO oxidation in car catalytic converters being a prominent example. Despite this importance, it is still unclear what the active state of the catalyst surface really is in the oxygen-rich reactive environments typical of technological applications. Especially the likelihood of oxides forming at the metal surfaces under such conditions has lately received considerable attention. In fact, a model Ru catalyst for CO oxidation is the first system for which it could now be unambiguously shown that what is actuating the catalysis is a nanometer-thin oxide film that forms on top of the metal surface, and not the transition metal itself.

Aiming at a first-principles understanding of this oxide formation process and the ensuing catalytic activity, we employed a novel methodology combining the calculation of the involved microscopic processes using DFT with a proper treatment of the statistical interplay at the meso- and macroscopic scales by kinetic Monte Carlo simulations. This approach enables us to follow the development of the system over time scales up to seconds and longer, and it therefore allows for a first-principles computation of the steady-state catalytic activity. Whereas previous and present academic research in this area has widely focused on just the most likely reaction processes, i.e. ones with a low reaction barrier, we could show that the occurrence of the appropriate surface atomic configurations as a consequence of the statistical interplay of all processes is at least of similar importance. Indeed, we find the catalytic activity dominated by a process that does not exhibit the lowest barrier of all studied reaction paths. The computed catalytic activity is in unprecedented agreement with detailed experimental data, both in magnitude as well as in its pressure and temperature dependence. The detailed analyses possible on the basis of these new first-principles statistical mechanics simulations have provided a new level of insight into the concerted actions ruling heterogeneous catalysis in the Ru system, and will eventually advance our microscopic understanding of catalysis in general.
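The statistical interplay described above is evaluated with a rejection-free kinetic Monte Carlo event loop. The sketch below is only a minimal illustration of that event-selection scheme, assuming a hypothetical process list with made-up, fixed rate constants in place of the DFT-derived, configuration-dependent rates of the production code; the lattice bookkeeping is omitted.

```python
import math
import random

# Minimal rejection-free kinetic Monte Carlo (Gillespie-type) event loop.
# In the real study each rate comes from a DFT barrier via transition-state
# theory, k = nu * exp(-E_barrier / (kB * T)), and depends on the local
# surface configuration; here the rates are fixed, made-up placeholders.
random.seed(0)

rates = {
    "CO_adsorption": 10.0,                 # events per second (illustrative)
    "O2_dissociative_adsorption": 5.0,
    "CO_desorption": 2.0,
    "CO_oxidation_to_CO2": 1.0,
}

t, t_end, n_co2 = 0.0, 10.0, 0
while t < t_end:
    total = sum(rates.values())
    # exponentially distributed waiting time until the next event
    t += -math.log(1.0 - random.random()) / total
    # pick the next event with probability proportional to its rate
    r, acc = random.random() * total, 0.0
    for name, rate in rates.items():
        acc += rate
        if r <= acc:
            chosen = name
            break
    if chosen == "CO_oxidation_to_CO2":
        n_co2 += 1

print(f"simulated {t:.2f} s, CO2 formation events per second: {n_co2 / t:.2f}")
```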

Publications:
• R. Dohmen et al., “Parallel FP-LAPW for Distributed-Memory Machines”, Computing in Science & Engineering 3, 18 (2001).
• K. Reuter et al., “Atomistic Description of Oxide Formation on Metal Surfaces: the Example of Ru”, Chem. Phys. Lett. 352, 311 (2002).
• K. Reuter and M. Scheffler, “First-Principles Atomistic Thermodynamics for Oxidation Catalysis: Surface Phase Diagrams and Catalytically Interesting Regions”, Phys. Rev. Lett. 90, 046103 (2003).
• K. Reuter, D. Frenkel, and M. Scheffler, “The Steady State of Heterogeneous Catalysis, Studied by First-Principles Statistical Mechanics”, Phys. Rev. Lett. 93, 116105 (2004).

Calculated and measured CO2 formation rate, together with a snapshot of the surface population under optimum catalytic performance and the detailed atomic geometry of the process responsible for the high activity.

Peter Kratzer and Matthias Scheffler, Fritz Haber Institute, Berlin

Growth-Related Properties of Semiconductor Quantum Dots

Research objectives: This project is devoted to theoretical investigations of semiconductor nanostructures formed spontaneously during the deposition of thin semiconductor films. Nanometer-size islands with crystalline structure (usually called quantum dots) allow for the confinement of charge carriers (electrons and holes) in a very small volume, such that the quantum nature of the carriers is borne out. In this way a new type of material with unprecedented electronic and optical properties is created. The main objective of our research is to gain an understanding of how the electronic properties of the quantum dots are affected by their atomic structure, how this structure evolves by self-assembling growth, and how this process can be controlled. Only by learning about the interplay of the numerous, very distinct atomic processes involved in the self-assembly does one reach an understanding from which the properties of quantum dots can be predicted and systematically improved. As an important special case, we investigate the growth of InAs quantum dots on GaAs. For this materials system, novel devices on the laboratory scale have already been realized using quantum dots as the active part of light-emitting diodes and lasers.

Computational approach: The InAs quantum dots, although being nanostructures, consist of 5,000-30,000 atoms. Hence calculating their electronic structure represents a significant challenge. We use a multi-scale approach: On the largest, but yet atomic, scale we model the spatially inhomogeneous strain field induced by an InAs island on a GaAs substrate by means of force-field calculations. For calculating the effect of strain on the energy levels of the carriers confined in the quantum dot, we use an empirical tight-binding scheme. For a description of the growth process on the atomic scale, the most accurate, but also computationally most demanding tool is employed: first-principles electronic structure calculations using density functional theory (DFT). In order to bridge the gap between the atomistic time scale of vibrations and diffusion (picoseconds) and the time scale of self-assembly (seconds, up to minutes), kinetic Monte Carlo (kMC) simulations of island growth are performed.

Project description: From the viewpoint of thermodynamics, the spontaneous formation of the quantum dots is governed by the energetic balance between strain relief and the energy cost due to the formation of the quantum dot side facets and edges. We employ a hybrid approach: surface energies and surface stress are calculated by DFT, taking into account the specific atomic structure of each surface. The bulk elastic energy in both the islands and the substrate is calculated classically, e.g. using atomic force fields. By combining both energy contributions, we can predict the stability of islands (see Figure). The results show that a flat island shape is energetically preferable for small island volumes, while a steeper shape becomes lower in energy for larger islands, in agreement with recent experimental work.

Experiments have shown that the shape of the quantum dots is strongly affected by the different growth speed of their crystalline facets. With the help of kinetic Monte Carlo simulations, we were able to identify the role of As2 incorporation for the growth of arsenide compound materials from the gas phase for the case of GaAs. Based on this knowledge, we conjecture that the different growth speed of the facets is related to differences in the incorporation mechanism of the As2 molecules at the facet edges.
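The flat-versus-steep crossover described above can be illustrated with a toy energy balance in which elastic relaxation scales with island volume while the facet cost scales with surface area. The prefactors in the sketch below are invented and merely tuned so that the crossover lands near the value quoted in the figure caption; they are not the DFT or force-field results of the project.

```python
import numpy as np

# Toy energy balance behind the flat-vs-steep island crossover:
#   E(V) = -relief * V + facet_cost * V**(2/3)
# The volume term models elastic strain relief, the V^(2/3) term the cost of
# creating the side facets. All prefactors are hypothetical.
def island_energy(volume_nm3, relief, facet_cost):
    return -relief * volume_nm3 + facet_cost * volume_nm3 ** (2.0 / 3.0)

volumes = np.linspace(100.0, 4000.0, 400)                      # nm^3
flat = island_energy(volumes, relief=1.00, facet_cost=8.00)    # shallow facets
steep = island_energy(volumes, relief=1.20, facet_cost=10.43)  # steeper facets, better strain relief

crossover = volumes[np.argmax(steep < flat)]
print(f"steeper facets become energetically preferable above ~{crossover:.0f} nm^3")
```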

Publications:
• R. Santoprete, B. Koiller, R.B. Capaz, P. Kratzer, Q.K.K. Liu, and M. Scheffler, Phys. Rev. B 68, 235311 (2003).
• P. Kratzer and M. Scheffler, Comp. Sci. Eng. 3 (6), 16 (2001).
• P. Kratzer and M. Scheffler, Phys. Rev. Lett. 88, 036102 (2002).
• Y. Temko, T. Suzuki, P. Kratzer, and K. Jacobi, Phys. Rev. B 68, 165310 (2003).


Energy gain associated with quantum dot formation as a function of the quantum dot volume, for two types of InAs quantum dots on GaAs. If the dot grows larger than 1800 nm³, steeper facets are found to be energetically preferable, since they allow for a better relaxation of strain. The inset shows the atomic structure of one facet.




Johan M. Carlsson, Suljo Linic, and Matthias Scheffler, Fritz Haber Institute, Berlin

Simulations of Nanoporous Carbon

Research objectives: While the chemical activity of defect-free graphite, fullerenes, and nanotubes is very low, recent experiments have indicated that various carbon materials, in particular when containing strained nanostructures and broken bonds, can behave very differently. Even though nanoporous carbon was not known to be present in industrial catalysts, there are now strong indications that this novel material plays a crucial role in important catalytic processes. The mechanism and function are, however, obscure. Our computational first-principles studies analyze the properties of various defects that may be present in natural, graphite-like networks, in order to understand and explain the structure, stability, and physical and chemical properties of this material under realistic catalytic conditions.

Computational approach: We employ density-functional theory (DFT) using the generalized gradient approximation for the exchange-correlation potential. By minimizing the total energy, stable and metastable defect structures are identified. The DFT calculations are complemented with ab initio thermodynamics to capture the temperature and pressure effects that are important for the reactions taking place at elevated temperatures.

Project description: The catalytic dehydrogenation of ethylbenzene to styrene is a very important industrial process. Styrene is an important intermediate used in the production of most plastics. The commercial catalysts employ large quantities of iron oxide. However, examinations of iron-oxide surfaces, after they had been used in this catalytic reaction, have demonstrated that these materials are fully covered by carbon. These observations recently led to the suggestion that these carbon deposits are active in the catalytic process, and that iron oxide is “only” needed to create them in a proper, active form. The structure of these carbon materials appears to be graphite-like with defects or nanoporous properties, but the details remain unclear. Also the mechanism by which this novel material catalyzes this reaction is still unknown. These experimental observations have motivated us to systematically study defects in graphitic sheets, as they represent the motifs (or building blocks) of the new material.

Our calculations indicate that vacancies that are frozen into the material during its growth form large atom rings in the lattice due to extensive rebonding of the undercoordinated atoms surrounding the vacancy. Figure a) shows the atomic structure of a motif from a typical nanoporous carbon, where a four-atom vacancy forms a nine-atom ring with a diameter of 4 Å. The rebonding leads to significant strain and creates local curvature. Further calculations regarding oxidation of graphitic materials suggest that the edges of the vacancies get oxidized much more easily than the graphitic basal planes. As a result of the oxidation, two distinctly different functional oxygen groups can appear, the carbonyl (C=O) and the ether (C-O-C) groups shown in Figure b). Subsequent calculations of hydrogen adsorption on the C-O groups indicate that the C-O-C group is unfavourable for hydrogen adsorption, while the C=O group could be an active site for abstracting hydrogen from ethylbenzene. We have also identified a competing reaction in which the C=O group desorbs as CO. This suggests that there will be a delicate competition between the desired reaction converting ethylbenzene into styrene, deposition of carbon on the support, and the burning of the nanoporous carbon via CO desorption.
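The ab initio thermodynamics step mentioned in the computational approach compares DFT total energies of oxidized and clean structures through the oxygen chemical potential, which carries the temperature and pressure dependence. A minimal sketch of this bookkeeping is shown below; the DFT energies and the ideal-gas correction are made-up placeholders, not values computed in this project.

```python
import math

KB = 8.617333262e-5   # Boltzmann constant, eV/K

def mu_oxygen(T, p, p0=1.0, E_O2=-9.86, dmu_O2=-2.0):
    """Oxygen chemical potential per atom (eV).

    E_O2   : DFT total energy of the O2 molecule (placeholder value).
    dmu_O2 : ideal-gas shift of mu_O2 at (T, p0); in practice taken from
             thermochemical tables, here a single made-up number.
    """
    return 0.5 * (E_O2 + dmu_O2 + KB * T * math.log(p / p0))

# Hypothetical DFT total energies (eV) for a vacancy edge with and without an
# adsorbed O atom (carbonyl group); placeholders, not project results.
E_edge_O = -310.45
E_edge = -304.10

T, p = 870.0, 0.2   # illustrative elevated-temperature conditions: ~870 K, 0.2 atm O2
dG = E_edge_O - E_edge - mu_oxygen(T, p)
print(f"oxygen adsorption free energy at T={T:.0f} K, p={p} atm: {dG:.2f} eV")
```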

Publications:
• Johan M. Carlsson and Matthias Scheffler, “Structural, electronic and chemical properties of nanoporous carbon”, submitted to Phys. Rev. Lett., April 2005.
• Suljo Linic, Johan M. Carlsson, Robert Schlögl, and Matthias Scheffler, “Carbon nano materials as effective heterogeneous catalysts: The value of non-pristine sp² materials”, in manuscript.


a) A nine-atom ring with 4 Å diameter resulting from a four-atom vacancy. b) C-O groups in oxidized nanoporous carbon: the C-O-C group (left) and the C=O group (right).



Joel Ireta, Martin Fuchs, Matthias Scheffler, Fritz Haber Institute, Berlin

First-Principles Analysis of the Stability and Metastability of Helical Conformations in Proteins

Research objectives: Our primary interest is to predict the stability and structural transformations of the secondary structure of proteins based solely on the information of their atomic composition. We investigate the hydrogen-bond strength and vibrations in finite and infinite chains in diverse conformations.

Computational approach: Total-energy calculations employing: i) density-functional theory (DFT) with the generalized gradient approximation to the exchange-correlation functional, ab initio pseudopotentials, and plane waves for expanding the wave function; ii) the quantum Monte Carlo (QMC) approach to the electronic many-body problem.

Project description: Proteins are formed by hundreds or thousands of atoms. Their function in biology depends on their structural conformation. Only a specific conformation, the so-called native state, is biologically active and stable (or at least metastable with sufficient lifetime) under typical environmental conditions. In fact, some diseases in mammals, like the Creutzfeldt-Jakob or Alzheimer diseases, are related to changes in the helical structure (misfolding) of certain sections of proteins found in nerve cells. We are presently studying the stability and structural changes that helical conformations may assume under different (external) conditions.

The subtle interplay between different covalent and hydrogen bonding patterns in such structures calls for an analysis based on the electronic structure, i.e. starting from first principles, with accurate treatment of electronic many-body effects. At the same time the large scale of the systems, several hundred atoms, must be dealt with effectively. Modern approximations (for the exchange-correlation energy) used in DFT calculations are adequate for metallic and covalent bonds, yet their performance for interactions such as hydrogen bonding and van der Waals forces is uncertain. As a benchmark of our DFT data on hydrogen-bonded molecules we are carrying out QMC calculations that treat the many-body quantum mechanics exactly. This approach remains tractable for systems with several hundred electrons, i.e. where traditional quantum chemical methods become computationally intractable, even on high-performance computers. Our calculations on different model complexes show that modern gradient-corrected density functionals successfully describe, e.g., the cooperative strengthening of hydrogen bonds.

In fact, DFT predicts that in an infinite α-helix cooperativity strengthens the hydrogen bonds by a factor of two, which is crucial to stabilize the α-helix with respect to a non-helical, fully extended conformation. This stability is altered when the helix is mechanically distorted (see figure). We observe that the α-helix undergoes structural transitions to a π-helix and to a 3₁₀-helix under compressive and tensile strain, respectively.

Helices can be considered as one-dimensional crystals in cylindrical coordinates. We have taken advantage of this feature for calculating the specific heat of polyalanine in α-helical conformation based on a DFT harmonic vibrational analysis. The calculated phonon dispersion spectrum shows excellent agreement with available experimental data. A major advantage compared to previously performed empirical force-field studies is that long-range effects such as electrostatic interaction and polarization are fully taken into account. Our results indicate that these effects lead to a significantly better agreement with experiment for the specific heat in the low-temperature range.
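Given the harmonic phonon spectrum, the vibrational specific heat follows from the standard sum over Einstein oscillators. The sketch below evaluates that sum for a handful of made-up mode frequencies, which merely stand in for the calculated polyalanine phonon dispersion.

```python
import numpy as np

KB = 8.617333262e-5   # Boltzmann constant, eV/K
HC = 1.239841984e-4   # h*c in eV*cm, converts wavenumbers (cm^-1) to energies

def harmonic_cv(wavenumbers_cm, T):
    """Vibrational specific heat of a set of harmonic modes, in units of kB."""
    x = HC * np.asarray(wavenumbers_cm) / (KB * T)        # hbar*omega / (kB*T)
    # numerically stable Einstein function, summed over all modes
    return float(np.sum(x**2 * np.exp(-x) / (1.0 - np.exp(-x)) ** 2))

# Made-up sample of mode frequencies (cm^-1); the real calculation uses the
# full phonon dispersion of the polyalanine alpha-helix.
modes = [60.0, 120.0, 300.0, 900.0, 1650.0, 3300.0]

for T in (50.0, 150.0, 300.0):
    print(f"T = {T:5.1f} K   C_v = {harmonic_cv(modes, T):.3f} kB")
```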

These results have implications for understanding the intrinsic propensities of amino acids to form helical conformations, a key aspect for protein folding and misfolding studies.

Publications:
• J. Ireta, J. Neugebauer, M. Scheffler, A. Rojo and M. Galván, J. Phys. Chem. B 107, 1432 (2003).
• J. Ireta, J. Neugebauer and M. Scheffler, J. Phys. Chem. A 108, 5692 (2004).
• L. Ismer, J. Ireta, S. Boeck and J. Neugebauer, Phys. Rev. E 71, 031911 (2005).


Under compressive or tensile strain the twist of the polyalanine α-helix is changed, leading to thicker or thinner helices with different hydrogen bonding patterns. The figure shows the region of stability (with respect to a fully extended structure) for different helical structures.




Max Planck Institute for Polymer Research – Polymer Theory (Director: Kurt Kremer)

Simulating Soft Matter: Polymers and Colloids

It is the major research goal of the Theory Group to provide a rather complete and unified understanding of macromolecular systems or, as it is nowadays termed, “soft matter”. Our starting point usually is the molecular structure, which leads to certain properties and functions. Modern materials science seeks to understand, predict and then design material properties and functions based on a specific molecular design. For “soft matter” such properties can include mechanical or relaxational properties of everyday “commodities” such as “plastics” as well as high-tech electronic or optical properties or a specific biological function. In all cases the underlying molecular structure is the basis for the anticipated properties. From that point of view, at first glance it is not so relevant whether a material or a system under study is of synthetic or biological origin. In addition to this more general point of view, the institute provides a very broad experimental expertise, which is complemented by theoretical activities. This combination provides a basis for a fruitful interaction and collaboration with the experimental groups, which is needed to achieve the above-mentioned complete understanding.

The basis for our way of approaching the theory of soft matter is a specific aspect which distinguishes it from many other materials. Compared to “hard materials”, soft materials are governed by a very small energy density, and consequently fluctuations and thus entropic contributions play a crucially important role. These entropic contributions include both intra- and intermolecular parts. This also gives rise to a characteristic hierarchical structure of modern soft matter assemblies and functional materials. Therefore, neither microscopic nor mesoscopic nor macroscopic theoretical work alone can provide the necessary insight to understand these systems. That means not only that multiscale modeling is needed to provide the tools to understand such systems; what is also needed is systematic coarse graining together with a method to reintroduce higher levels of detail. Only the combination eventually provides the close link to experimental tests of theories and properties. This reasoning also outlines the activities of the theory group, which include both analytic as well as numerical work on scales ranging from the quantum level all the way to a macroscopic description. On the computer simulation side the approaches which are followed, and where we also participate actively in the development of improved methods, include (ranging from macroscopic methods to the quantum mechanical level):
• Dissipative Particle Dynamics
• Hybrid Approaches MD-MC-Lattice Boltzmann
• Monte Carlo (MC) statics/dynamics
• Molecular Dynamics (MD) methods, including non-equilibrium MD (NEMD)
• Classical Force Field Simulations (MD, MC)
• Embedded Atomistic-Quantum methods
• Path Integral MC
• Ab initio density functional theory (Car-Parrinello MD)
• Quantum Chemical Methods
These different levels of detail in the description reflect the typical hierarchical structure of soft materials. Individual topics investigated include atomistic and quantum chemical calculations for specific surface interactions, solubility studies to properly parameterize non-bonded interactions, studies on the structure and properties of biopolymers and synthetic polymers especially when charges are playing a major role, improved methods for calculation of dynamic and static properties of polyelectrolytes as well as charged colloids, up to analytical studies for disordered polymers and (ordered) complex fluids. In the following, three characteristic projects, which strongly depend in their success on the resources supplied by the RZG, and where method development also plays a crucial role, are discussed in more detail. The work should not only lead the path to larger systems and longer observation times by computer simulations, but also provide a step towards a methodology which allows approaching qualitatively new problems. For more information see: http://www.mpip-mainz.mpg.de/theory.html

Christian Holm and Kurt Kremer, Max Planck Institute for Polymer Research, Mainz

Attraction between Like Charged Objects

Research objectives: Many charged polymeric systems can, under strong Coulomb coupling, attract each other due to ion correlations. Typically these are interacting semi-flexible polyelectrolytes with hydrophobic side chains, which are known to form cylindrical micelles in aqueous solution, but also DNA and F-actin solutions in the presence of multivalent counterions. While the nature of the attractions is meanwhile well established, there is still considerable dispute whether the observed morphologies represent true equilibrium structures, or whether the aggregation kinetics determines the aggregate shape.


Computational approach: We employ a combination of molecular dynamics simulation, Monte Carlo simulation, and parallel tempering methods. As program package we use ESPResSo – an Extensible Simulation Package for Research on Soft Matter Systems. This is a newly written program package that was designed to perform MD/MC simulations for a broad class of soft matter systems in a parallel computing environment. It is not only fast, but also allows easy code modification due to its script-driven interface and modular structure.

Project description: We investigated the stability of such bundles with respect to hydrophobicity, the strength of the electrostatic interaction, and the bundle size. We have shown that charged semi-flexible polyelectrolytes interacting via two kinds of hydrophobic short-range interactions have parameter regions where finite aggregate sizes exist. For large values of the Bjerrum length the bundle size increased, and in the limit of large Bjerrum length we observed the expected trend to form infinite bundles due to counterion crystallization. Furthermore, we could demonstrate that the release of counterions during a bundle split can considerably lower the free energy of the total system.

The stability curve for different bundle sizes as a function of Coulomb coupling shows similarities to the Rayleigh instability in charged oil drops, i.e., for an increase in Bjerrum length one needs an increased short-range attraction. Another observation is that the stability line varies non-monotonically with Bjerrum length, which resembles the non-monotonic extension behavior linear polyelectrolytes show if the Bjerrum length is increased. The degree of polymerization also shows an effect on bundle size, which has been observed in experiments on Poly(p-phenylene) in the group of G. Wegner at our institute as well; however, so far the limited data do not allow us to draw definite conclusions about the underlying mechanism.

Our results show close similarities to trends observed in DNA condensation experiments, where finite aggregate sizes are also found when (usually) multivalent counterions are added to the solution. The applicability of our model to those experiments basically rests on two assumptions. First, we assume that the strength of the short-range attraction does not change with external parameters, and second, that the interaction strength originating from multivalent counterions can effectively be modelled by the strength of the interactions. Explicitly introducing multivalent counterions is left for future investigations which are presently underway. However, it appears plausible that at least for some parameter regions of biological charged polymers both assumptions are justified, so that our results can be used to provide a mechanism for finite bundle sizes.

Publications:
• A. Arnold, B.A. Mann, H.J. Limbach, C. Holm, “ESPResSo – An Extensible Simulation Package for Research on Soft Matter Systems”, paper for the Heinz Billing prize competition 2003, www.espresso.mpg.de.
• A. Naji, A. Arnold, C. Holm, R.R. Netz, “Attraction and unbinding of like-charged rods”, Europhys. Lett. 67, 130-136 (2004).
• H.J. Limbach, M. Sayar, and C. Holm, “Polyelectrolyte Bundles”, Journal of Physics: Condensed Matter 16, S2135-S2144 (2004).
• H.J. Limbach, C. Holm, K. Kremer, “Micelle Formation of Hairy Rods”, Macromolecular Chemistry and Physics 206, 77-82 (2005).


The snapshot of a polyelectrolyte bundle (left) consisting of eight single macromolecules, together with a cross section (right), is shown. One observes that some counterions go into the bundle and can therefore efficiently screen the strong repulsion of the charged rods.


Luigi Delle Site and Kurt Kremer, Max Planck Institute for Polymer Research, Mainz

Macromolecules at Interfaces

Research objectives: The aim of the project is to study interface properties of macromolecular systems, such as polymer melts or biomolecules out of melt or solution, in contact with metal surfaces. The focus is to properly describe the delicate interplay between adsorption properties, governed by quantum mechanics, and conformational properties, which are properly described by statistical mechanics.

Methodological approaches: The molecular adsorption is accurately described by the density functional (Car-Parrinello) technique. Results from these calculations are then used to parameterize the surface interactions for classical models, either bead-spring coarse-grained models for polymers or (when a solvent is present, for example) classical atomistic force fields, and eventually a combination of both.

Project details: The study of soft condensed matter systems requires a multilevel analysis which spans from quantum mechanics to macroscopic statistical approaches. In many cases, aspects related to different length and time scales can be separated and studied independently; however, there are several other relevant cases where the interplay between different scales determines the relevant properties of the system. The case of polymers interacting with metal surfaces is one example. In general, organic-inorganic (metal) interfaces occur in many problems of modern materials science as well as in many investigations and applications involving biological macromolecules. Which molecules adsorb on which surfaces, to which extent the nature of the metal surface determines the conformation and the morphology of the adsorbed macromolecule, and what its macroscopic consequences are, are questions of interest. In order to properly address these problems, it is mandatory to consider the quantum nature of the molecular adsorption in combination with the statistics of the (many) global polymer conformations. The interplay between adsorption energy and conformational entropy can be studied via multiscale modeling approaches which combine the two different aspects in a consistent, sequential way. The development and application of a novel, quantum-based, multiscale modeling approach is the main focus of our current research. This approach consists of using a density functional, Car-Parrinello, approach to describe the interaction of polymer subunits with a metal substrate. Next, the molecular adsorption study is combined with the statistics of polymer conformations at the surface. Finally, this information is employed to parameterize a bead-spring coarse-grained model for the polymer-surface interaction. Such a procedure has been successfully applied to study the properties of a melt of Bisphenol-A Polycarbonate (BPA-PC) near a nickel surface (see Figure).

In particular, we have shown that the introduction of specific polymer end groups can strongly support adhesion; this in turn can have rather strong effects on both dynamical and morphological aspects, thus providing a way to control the final material. We have also shown that by modifying the morphology of the surface (e.g. the presence of a step defect) it is possible to modify interfacial properties; in particular we observed chain localization and local ordering at the step edge. The application of this method has also been extended to small amino acids and, combined with the density functional study of the adsorption of water on metal surfaces, is currently used to model the adsorption of small oligopeptides out of solution.
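The sequential parameterization step, mapping computed adsorption energies onto a coarse-grained bead-wall potential, can be sketched as a simple least-squares fit. The Morse functional form and the "DFT" data points below are assumptions chosen for illustration, not the published BPA-PC/Ni parameterization.

```python
import numpy as np

# Hypothetical DFT adsorption energies (eV) of a polymer repeat unit at several
# heights z (Angstrom) above the metal surface; placeholders standing in for
# the Car-Parrinello results used in the project.
z_dft = np.array([1.8, 2.0, 2.2, 2.5, 3.0, 4.0, 6.0])
e_dft = np.array([0.80, -0.45, -0.95, -0.70, -0.30, -0.08, -0.01])

def morse_wall(z, depth, alpha, z0):
    """One possible coarse-grained bead-wall potential (Morse form)."""
    return depth * ((1.0 - np.exp(-alpha * (z - z0))) ** 2 - 1.0)

# Crude least-squares parameterization by grid search (good enough for a sketch).
best, best_err = None, np.inf
for depth in np.linspace(0.5, 1.5, 21):
    for alpha in np.linspace(1.0, 4.0, 31):
        for z0 in np.linspace(2.0, 2.6, 13):
            err = np.sum((morse_wall(z_dft, depth, alpha, z0) - e_dft) ** 2)
            if err < best_err:
                best, best_err = (depth, alpha, z0), err

depth, alpha, z0 = best
print(f"fitted wall potential: depth={depth:.2f} eV, alpha={alpha:.2f} 1/A, "
      f"z0={z0:.2f} A  (rms error {np.sqrt(best_err / len(z_dft)):.2f} eV)")
```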

Publications:
[1] L. Delle Site, C.F. Abrams, A. Alavi, K. Kremer, Phys. Rev. Lett. 89, 156103 (2002).
[2] Cameron F. Abrams, Luigi Delle Site and Kurt Kremer, Phys. Rev. E 67 (2), 021807 (2003).
[3] L. Delle Site, A. Alavi and C.F. Abrams, Phys. Rev. B 67 (19), 193406 (2003).
[4] Luigi Delle Site, Salvador Leon and Kurt Kremer, J. Am. Chem. Soc. 126, 2944 (2004).
[5] Luigi Delle Site and Daniel Sebastiani, Phys. Rev. B 70, 115401 (2004).



The multiscale model of BPA-PC on nickel. The ab initio study of molecular adsorption (c) is combined with a bead-spring coarse-grained model description of the polymer (a) to simulate the chain adsorption at the metal interface (b).



Burkhard Dünweg, Max Planck Institute for Polymer Research, Mainz

Hydrodynamic Interactions in Complex Fluids

Research objectives: The flow behavior of liquid polymers, and in particular their viscoelasticity, depends strongly on both composition and molecular weight. This has, for example, been exploited in the development of high-performance synthetic motor oils. In the last fifty years, theory has contributed a lot to explaining the macroscopic behavior in terms of molecular structure and molecular motion. In particular, the origin of simple scaling laws was elucidated. The dynamics of simple systems, like one-component polymer melts or solutions of polymers in low molecular weight solvent, is today viewed as essentially understood. The main molecular features which govern the microscopic motion are (i) flexibility, i.e. the fact that the polymer chains assume randomly coiled conformations in space; (ii) chain connectivity, i.e. the fact that a monomer cannot move without dragging its neighbors along; (iii) entanglements, i.e. the mutual hindrance of chains in their motions; (iv) noise, i.e. the fact that the dominant forces on the molecular scale are thermal fluctuations; (v) hydrodynamic interactions, i.e. fast momentum transport through the environment, leading to highly correlated monomer motions. The goal of the project is to study the dynamics of soft matter in solution, taking the hydrodynamic interaction fully into account.

Initially, we focused on a fundamental question of polymer dynamics: It was experimentally clear that (i) hydrodynamic interactions are important in dilute solutions (Zimm dynamics), that (ii) they do not play any role in concentrated solutions or melts (Rouse dynamics), and that (iii) the crossover between the two types of motion is governed by one single length scale, the “blob size” (the length at which the chains start to overlap and interact, like a mesh in a temporary network). De Gennes had pointed out already in 1976 that the blob size is the relevant length scale for both the static conformations and the dynamics (Zimm dynamics should hold on length scales smaller than the blob size, and Rouse dynamics above). Still, the underlying mechanism of the observed “hydrodynamic screening” remained unclear. Furthermore, neutron spin echo data, which indicated a mixed regime beyond the blob size, remained unexplained (“incomplete screening”). We attacked this problem with a new algorithm (see below), which by now has been extended to study the motion of charged colloids in electric fields, where first results have already been obtained. Furthermore, this approach might be useful to study the motion of polymer chains in strongly fluctuating (turbulent) flows.

Computational approach: To treat the dynamic crossover in polymer solutions, a novel simulation model and algorithm had to be constructed. For the polymer-specific features, a molecular model with explicit, very long chains is needed, in order to investigate the properties on scales below and above the blob size. The momentum transport through the solvent is simulated by solving the Navier-Stokes equation via the lattice Boltzmann approach, while the polymer chains, which are represented by a bead-spring model in the continuum, are propagated via molecular dynamics. The advantage of this approach is that the solvent is reduced to its minimal features: it is structureless, and its motion is taken into account only on the relevant large (hydrodynamic) time scale. The two parts are coupled by a friction force between the monomers and the surrounding lattice sites. Both parts are subject to thermal noise which drives the motion in thermal equilibrium. Since the algorithm is completely local, it has been parallelized via domain decomposition.
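The bead-fluid coupling mentioned above is a friction force between each monomer and the fluid velocity interpolated from the surrounding lattice sites, plus a random force whose variance is fixed by the fluctuation-dissipation theorem; the opposite momentum is handed back to the fluid. The snippet below sketches only this force, in reduced units and with illustrative parameter values, not the settings of the actual production code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dissipative coupling of one MD bead to the lattice-Boltzmann fluid:
#   F = -zeta * (v_bead - u_fluid) + random force,
# with the random-force variance set by the fluctuation-dissipation theorem.
# Reduced units; zeta, kT and dt are illustrative values only.
zeta, kT, dt = 20.0, 1.0, 0.01

def coupling_force(v_bead, u_fluid):
    friction = -zeta * (v_bead - u_fluid)
    noise = rng.normal(0.0, np.sqrt(2.0 * kT * zeta / dt), size=3)
    return friction + noise

v_bead = np.array([0.5, 0.0, 0.0])    # bead velocity
u_fluid = np.array([0.1, 0.0, 0.0])   # fluid velocity interpolated to the bead position
f = coupling_force(v_bead, u_fluid)
print("force on bead:", f)
print("momentum handed back to the fluid nodes:", -f * dt)
```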

Project description: For the Zimm-Rouse crossover we calculated the single-chain dynamic structure factor, S(k,t), where k is the momentum transfer and t the time, which gives information about both the static and the dynamic correlations within one single labeled chain. This is exactly the quantity that was measured by neutron spin echo experiments. However, the simulation results turned out to be considerably more accurate, such that a more detailed data analysis was possible. A mixture of Rouse- and Zimm-like signals on large length scales was found, too; however, it could be shown clearly that the former pertain to late, while the latter pertain to the earlier times. In other words, it was shown that the crossover is, strictly speaking, not governed by the blob length scale, but rather by the corresponding time scale. For short times, the hydrodynamic interaction remains unscreened, regardless of length scales. The screening occurs essentially through chain-chain collisions, preventing the momentum from spreading out in a well-defined fashion through the sample – but such collisions do, on average, only occur after the blob relaxation time has passed.

For colloidal spheres, we could show that the motion is exactly as predicted by hydrodynamics, verifying the applicability of our method. Preliminary results on electrophoresis show that there is a non-trivial interplay between hydrodynamic and electric forces.

Figure: Normalized single-chain dynamic structure factor using both Rouse and Zimm scaling, in the regime of length scales beyond the blob size. At short times, the data collapse for Zimm scaling (i.e. the dynamics is Zimm-like), while at later times Rouse dynamics prevails.
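The central observable, the single-chain dynamic structure factor S(k,t) = (1/N) sum_ij < exp(i k . (r_i(t) - r_j(0))) >, can be estimated from a stored chain trajectory as in the sketch below. The synthetic random-walk trajectory is only there to make the example self-contained; it is not simulation data.

```python
import numpy as np

def single_chain_skt(trajectory, k_vectors, lag):
    """Estimate S(k,t) = (1/N) sum_ij < exp(i k . (r_i(t0+lag) - r_j(t0))) >,
    averaged over time origins t0 and over the supplied k vectors.
    trajectory has shape (n_frames, n_monomers, 3)."""
    n_frames, n_mon, _ = trajectory.shape
    values = []
    for t0 in range(n_frames - lag):
        r0, rt = trajectory[t0], trajectory[t0 + lag]
        for k in k_vectors:
            a = np.exp(1j * (rt @ k))    # per-monomer phase factors at time t0+lag
            b = np.exp(-1j * (r0 @ k))   # conjugate phase factors at time t0
            values.append((a.sum() * b.sum()).real / n_mon)
    return float(np.mean(values))

# Tiny synthetic trajectory (random-walk chain plus per-monomer Brownian motion),
# only there to make the example runnable.
rng = np.random.default_rng(0)
chain = np.cumsum(rng.normal(scale=1.0, size=(1, 32, 3)), axis=1)
drift = np.cumsum(rng.normal(scale=0.1, size=(50, 32, 3)), axis=0)
traj = chain + drift

k_set = [np.array([0.5, 0.0, 0.0]), np.array([0.0, 0.5, 0.0]), np.array([0.0, 0.0, 0.5])]
print("S(k, t=10 frames) ~", round(single_chain_skt(traj, k_set, lag=10), 3))
```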

Publications:
• P. Ahlrichs and B. Dünweg, Int. J. Mod. Phys. C 9, 1429 (1998).
• P. Ahlrichs and B. Dünweg, J. Chem. Phys. 111, 8225 (1999).
• P. Ahlrichs, R. Everaers and B. Dünweg, Phys. Rev. E 64, 040501(R) (2001).
• B. Dünweg, P. Ahlrichs, and R. Everaers, in “Computer simulation studies in condensed matter physics XIV” (D. P. Landau, S. P. Lewis, H.-B. Schuettler, eds.), Springer-Verlag 2001.
• B. Dünweg, in “Computational soft matter: From synthetic polymers to proteins” (N. Attig, K. Binder, H. Grubmueller, and K. Kremer, eds.), NIC Series Volume 23, ISBN 3-00-012641-4, Juelich 2004.
• V. Lobaskin and B. Dünweg, New J. Phys. 6, 54 (2004).
• V. Lobaskin, B. Dünweg and C. Holm, J. Phys.: Condensed Matter 16, 4063-4073 (2004).



Max Planck Institute for Metals Research (MPI-MF) at Stuttgart – Dept. Prof. Huajian Gao (Theory of Mesoscopic Phenomena)

Nanomechanics of Crystalline Materials

Dr. rer. nat. Markus Buehler, Dipl.-Ing. Nils Broedling, Dr. rer. nat. Alexander Hartmaier, Dr. Yong Kong, Dr. Xinling Ma and Prof. Dr. Huajian Gao are conducting large-scale computer simulations of nano-structured crystalline and biological materials. The objective is to provide fundamental understanding and predictive descriptions of the mechanical, chemical and electrical properties of these materials across all relevant length and time scales. Our simulation tools are classical molecular dynamics, force-field methods as well as first-principles calculations. The bulk of our research, however, relies on classical molecular dynamics simulations. In this method, we calculate the collective behavior of a large number of atoms governed by their mutual interatomic interaction. The atomic interaction is based on empirical interatomic potential formulations, such as, for instance, the embedded atom method. A simulation study is defined by an atomistic model together with the boundary conditions, both incorporating the most important features of the physical system under investigation. The phenomena under investigation include dynamic crack propagation, deformation of ductile materials, creep in thin metal films, grain structure evolution under plastic deformation, mechanical properties of carbon nanotubes, interaction of carbon nanotubes with DNA, as well as properties of biological bulk and surface materials. This fundamental research helps to lay the foundation for tomorrow’s nanotechnologies.

Markus J. Buehler, Farid F. Abraham* and Huajian Gao, Max Planck Institute for Metals Research

Large-Scale Atomistic Studies of Brittle Fracture: How Fast Can Cracks Propagate?

Research objectives: This research project focuses on understanding the fundamental, atomistic mechanisms of brittle fracture. A fact that has been neglected in most theories of brittle fracture is that the elasticity of a solid clearly depends on its state of deformation. Metals will weaken, or soften, and polymers stiffen as the strain approaches the state of materials failure. It is only for infinitesimal deformation that the elastic moduli can be considered constant and the elasticity of the solid linear. However, most theories model fracture using linear elastic models. We hypothesize that several of the unresolved questions related to dynamic fracture can only be understood once hyperelasticity is incorporated into the modelling.

Computational approach: We use large-scale molecular dynamics in conjunction with continuum mechanics concepts. For that purpose, we employ a massively parallelized classical MD code adapted to match the desired boundary conditions and interatomic potentials. Based on this method, we can model fracture of materials from a very fundamental, atomistic perspective. Our models comprise up to 70 million atoms.
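A minimal way to see how hyperelasticity enters such models is to compare the local wave speed implied by a harmonic interatomic potential with that of an anharmonic (stiffening) one: for a one-dimensional chain the wave speed follows from the curvature of the potential at the current strain, c = a0 * sqrt(phi''(r) / m). The potential forms and numbers below are generic illustrations, not the interatomic potentials used in the project.

```python
import numpy as np

# Local wave speed of a 1D atomic chain, c = a0 * sqrt(phi''(r) / m), for a
# harmonic pair potential and for a simple stiffening (hyperelastic) one.
# Reduced units; all parameters are generic illustrations.
a0, m, k, kappa = 1.0, 1.0, 36.0, 60.0

def curvature_harmonic(r):
    # phi(r) = k/2 (r - a0)^2                        ->  phi''(r) = k
    return k

def curvature_stiffening(r):
    # phi(r) = k/2 (r - a0)^2 + kappa/4 (r - a0)^4   ->  phi''(r) = k + 3 kappa (r - a0)^2
    return k + 3.0 * kappa * (r - a0) ** 2

for strain in (0.00, 0.05, 0.10, 0.15):
    r = a0 * (1.0 + strain)
    c_lin = a0 * np.sqrt(curvature_harmonic(r) / m)
    c_hyp = a0 * np.sqrt(curvature_stiffening(r) / m)
    print(f"strain {strain:4.2f}:  linear-elastic wave speed {c_lin:.2f},  hyperelastic {c_hyp:.2f}")
```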

Photograph: Bernhard Heinze (MPI-MF)

Figure 1: Supersonic fracture. This phenomenon cannot be understood based on the classical theories of fracture, but can be explained by our new concept of a characteristic energy length scale χ. This newly discovered length scale χ describes when hyperelasticity is important.

* ALSO IBM ALMADEN RESEARCH CENTER, SAN JOSE, CA, USA


based on a hyperelastic continuum model of a local lim-iting speed.

The main conclusion of our research is that hyper-elasticity plays a crucial role in forming a complete pic-ture of rapid fracture: It plays a critical role in deter-mining the limiting speed of cracks, and also governsthe instability dynamics of dynamic fracture.

Another field of interest is shear dominated cracksat interfaces of dissimilar materials (schematic see Fig.3). Understanding such systems is important also ingeological applications such as earthquake dynamics.From the perspective of fundamental crack dynamics,an interesting question is how fast the interface crack

17H

IG

H P

ER

FO

RM

AN

CE

CO

MP

UT

IN

G

Project description: We show that hyperelasticity, theelasticity of large strains, can play a governing role in thedynamics of fracture and that linear theory is incapableof capturing all phenomena. We introduce a new con-cept of a characteristic length scale for the energy fluxnear the crack tip and demonstrate that the local hyper-elastic wave speed governs the crack speed when thehyperelastic zone approaches this energy length scale.This new length scale, heretofore missing in the existingtheories of dynamic fracture, helps to form a compre-hensive picture of crack dynamics; explaining super-Rayleigh and supersonic fracture (see Fig. 1). Thesephenomena are in clear contrast to the existing theoriesof fracture. Based on our theoretical predictions, inter-sonic mode I cracking has recently been verified exper-imentally.

We further investigate the stability of cracks. It isknown from experiment that cracks at low speeds travelthrough materials creating mirror-like, atomically flatcrack surfaces. For higher crack speed, this deformationmode becomes unstable, and the crack surface trans-forms via a mirror-mist-hackle mechanism (see Fig. 2).

Using large-scale atomistic modelling, we showagreement of the classical theories with the dynamics ofcracks in harmonic reference systems. In the next step,we introduce material nonlinearities and investigate theeffect of these on the instability dynamics. We find thatsoftening hyperelastic effects lead to a decrease in criti-cal instability speed, and stiffening hyperelastic effectsleads to an increase in critical speed, allowing forstraight crack motion up to super-Rayleigh crackspeeds. This observation contradicts many existing the-ories of fracture! Based on a series of computer simula-tions in which we change the strength of the hyperelas-tic effect, we describe the conditions under whichhyperelasticity governs the dynamic crack tip instability.The analysis is complemented by theoretical modeling

Figure 2: Atomistic study of dynamic crack tip instabilities. Upon acritical crack speed, the crack surface starts to roughen and showsa mirror-mist-hackle transition.

Figure 3: Atomistic model of fracture along interfaces of dissimilar materials under shear dominated loading.

Figure 4: Crack dynamics along interfaces of dissimilar materials. Subplot (a) shows snapshots of the stress field as the crack speed increases. Subplot (b) depicts the mother crack (A), the daughter crack (B) and the granddaughter crack (C).

Publications
1. M.J. Buehler, F.F. Abraham, H. Gao, Nature, Vol. 426, pp. 141-146, 2003
2. M.J. Buehler, H. Gao, Y. Huang, Computational Materials Science, Vol. 28, no. 3-4, pp. 385-408, 2003
3. M.J. Buehler and H. Gao, Physik in unserer Zeit, Vol. 35, no. 1, pp. 30-37, 2004

MPI for Metals Research (MF) / Theory of Mesoscopic Phenomena



Research objectives: The objective of this project is to investigate fundamental mechanisms of deformation hardening in ductile materials. Work-hardening is the underlying mechanism leading to the breaking of a paper clip that is bent several times (see Fig. 1).

It is known that the motion of dislocations is the most common mechanism of plastic deformation of crystals. Among the major issues in dislocation plasticity are the reactions and interactions of dislocations leading to work hardening, which can only be described by empirical rules in continuum mechanics. Such an empirical description is unreliable when materials are deformed under extreme conditions such as high strain rates. Atomistic studies provide an ideal tool to describe the deformation mechanisms from a very fundamental perspective.

Markus J. Buehler, Alexander Hartmaier, Farid F. Abraham* and Huajian Gao, Max Planck Institute for Metals Research, Stuttgart

Ultra-Large Scale Atomistic Studies of Work-Hardening: Ductile Materials Failure

Computational approach: Our computational method is a massively parallelized molecular dynamics (MD) code utilizing reliable semi-empirical interatomic potentials for metals based on the embedded-atom method. This approach is generally accepted to be a favorable method to describe the bonding of atoms in metals, and allows simulation of a large number of atoms at the same time. Making use of the new supercomputer system "Regatta" at the RZG, we can simulate systems that contain up to 250,000,000 atoms.
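As a minimal illustration of the embedded-atom energy expression mentioned above, the sketch below evaluates the generic EAM form with toy pair, density and embedding functions. It is only a conceptual sketch; the functional forms and the brute-force double loop are illustrative and not the fitted nickel potential or the parallel code used in the project.

```python
import numpy as np

def eam_energy(positions, pair, density, embed, cutoff):
    """Total EAM energy: E = sum_i F(rho_i) + 1/2 sum_{i != j} phi(r_ij)."""
    n = len(positions)
    energy = 0.0
    rho = np.zeros(n)                      # host electron density at each atom
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                energy += pair(r)          # pair term, counted once per pair
                rho[i] += density(r)       # density contribution seen by atom i
                rho[j] += density(r)       # ... and by atom j
    energy += sum(embed(r_i) for r_i in rho)   # many-body embedding term
    return energy

# Toy functional forms (purely illustrative, not a fitted metal potential):
phi = lambda r: (1.0 / r) ** 12 - (1.0 / r) ** 6
f   = lambda r: np.exp(-2.0 * r)
F   = lambda rho: -np.sqrt(rho)

atoms = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.1, 0.0]])
print(eam_energy(atoms, phi, f, F, cutoff=3.0))
```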

Figure 1: Bending a metal paper clip back and forth. The deformation leads to work-hardening, and eventually, the material breaks (right snapshot). We use large-scale atomistic computer simulations to model this phenomenon from a very fundamental perspective.

* ALSO IBM ALMADEN RESEARCH CENTER, SAN JOSE, CA, USA





Figure 2: Large-scale study of work-hardening in nickel. Once loading is applied to the system, thousands of dislocations (blue lines) are nucleated from the crack tips (yellow) and glide into the crystal.

Project description: In many previous atomistic studies of the deformation of materials, only a small number of dislocations has been considered, in relatively small samples containing up to about 35 million atoms. A group of researchers around Abraham and Gao have recently reported studies with one billion atoms featuring an extremely high dislocation density. However, the simulation was carried out based on a simplistic interatomic potential, the Lennard-Jones potential, which has shortcomings with respect to describing the interatomic bonding in metals. In the present work we study a three-dimensional single crystal approaching one billion atoms with a more accurate interatomic potential. Because of its interesting material properties (in particular its high stacking fault energy), nickel is the focus of this study. Of particular interest are the dislocation interactions and the hardening mechanism of complete dislocations. The simulation allows us to investigate the complex dynamics of thousands of dislocations in metals. The size of the simulation is pushing the frontier in computational materials science to new limits.

Figure 3: Detailed analysis of the dislocation structure. The upper picture on the left shows the defect structure near the crack. The upper picture on the right depicts the defect structure near the center of the crystal (yellow lines = point defects, blue lines = dislocations). The lower plot shows a so-called "centrosymmetry analysis" of the dislocation structure near the crack tip.


Figure 2 shows a time sequence of the simulation, indicating how dislocations nucleate from defects and glide into the center of the crystal. Figure 3 depicts a detailed analysis of the developing dislocation structure.
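The "centrosymmetry analysis" mentioned in Figure 3 refers to a standard defect-detection measure for fcc crystals (Kelchner et al.). The sketch below shows the idea for a single atom, assuming its twelve nearest neighbours have already been found; it is a generic helper, not the analysis tool actually used in the project.

```python
import numpy as np
from itertools import combinations

def centrosymmetry(neighbor_vectors):
    """Centrosymmetry parameter of one atom from its 12 nearest-neighbour bond
    vectors. In a perfect fcc environment every bond has an exact opposite
    partner and the parameter is ~0; near dislocations, stacking faults or
    surfaces it becomes large."""
    vecs = np.asarray(neighbor_vectors)
    # squared norm of the sum for every possible pairing of two bonds
    pair_values = sorted(
        np.dot(vecs[i] + vecs[j], vecs[i] + vecs[j])
        for i, j in combinations(range(len(vecs)), 2)
    )
    # the 6 smallest values correspond to the (approximately) opposite pairs
    return sum(pair_values[:6])

# Perfect fcc neighbour shell (lattice parameter a = 1): parameter ~ 0
a = 1.0
fcc_shell = np.array([[s1*a/2, s2*a/2, 0] for s1 in (1, -1) for s2 in (1, -1)] +
                     [[s1*a/2, 0, s2*a/2] for s1 in (1, -1) for s2 in (1, -1)] +
                     [[0, s1*a/2, s2*a/2] for s1 in (1, -1) for s2 in (1, -1)])
print(centrosymmetry(fcc_shell))  # ~ 0 for the undistorted crystal
```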

Publications
1. F.F. Abraham, R. Walkup, H. Gao, M. Duchaineau, T.D. De La Rubia, and M. Seager, Proceedings of the National Academy of Sciences of the USA, Vol. 99, pp. 5783-5787, 2002
2. M.J. Buehler, A. Hartmaier, H. Gao, M. Duchaineau, and F.F. Abraham, in press: Computer Methods in Applied Mechanics and Engineering (to appear 2004)



Research objectives: This research project focuses on the mechanical properties of ultra-thin copper films. An important aspect of the studies is to investigate how the deformation mechanisms change when the film thickness reaches the submicron regime. One of the hypotheses is that diffusional processes become increasingly important once the film thickness approaches the nanoscale [1-3].

Computational approach: In past years, reliable semi-empirical interatomic potentials for metals have been developed based on the embedded-atom method (EAM). We use Mishin's EAM potential fitted to copper within a standard molecular dynamics scheme (the ITAP-IMD code, developed at the Institute of Theoretical and Applied Physics at the University of Stuttgart). This code is particularly suitable for ultra-large scale parallelized studies of nano-structured materials over long time spans.


Project description: In a recent study of diffusional creep in polycrystalline thin films deposited on substrates, we have discovered a new class of defects called grain boundary diffusion wedges. These diffusion wedges are formed by stress-driven mass transport between the free surface of the film and the grain boundaries during the process of substrate-constrained grain boundary diffusion (Fig. 1). We find that the theoretical analysis successfully explains the difference in the mechanical behavior of passivated and unpassivated copper films during thermal cycling on a silicon substrate. An important implication of our theoretical analysis is that dislocations with Burgers vector parallel to the interface can be nucleated at the root of the grain boundary. This new dislocation mechanism in thin films contrasts with the well known mechanism of threading dislocation propagation.

Recent TEM experiments at the Max Planck Institute for Metals Research have shown that, while threading dislocations dominate in passivated metal films, parallel glide dislocations begin to dominate in unpassivated copper films with thicknesses below 200 nm. We have performed large-scale molecular dynamics simulations of grain boundary diffusion wedges to clarify the nucleation mechanisms of parallel glide in thin films (Fig. 2 and Fig. 3) [2]. Such atomic scale simulations of thin film diffusion not only show results which are consistent with both continuum theoretical and experimental studies, but also reveal the atomic processes of dislocation nucleation at grain boundaries.

The study should have far-reaching implications for modeling deformation and diffusion in micro- and nanostructured materials.

Figure 1: Continuum mechanics model of constrained diffusional creep. A crack-like stress field develops in the long-time limit, leading to large resolved shear stress on glide planes parallel to the film surface, causing nucleation of parallel glide dislocations [1].

Markus J. Buehler, Alexander Hartmaier and Huajian Gao, Max Planck Institute for Metals Research, Stuttgart

Mechanical Properties of Submicron Ultra Thin Copper Films Deposited on Substrates

Figure 2: View of the surface of the polycrystalline thin film. The color refers to the surface height (red = high, blue = low). The results reveal the existence of surface grooves and diffusion wedges at high-energy grain boundaries that are created by diffusional creep [2].

Figure 3: Details of the nucleation of novel parallel glide dislocations from grain boundaries. Due to relaxation of surface tractions by diffusional creep, such unexpected parallel glide dislocations are nucleated.

Publications
1. H. Gao, L. Zhang, W. Nix, C. Thompson, E. Arzt, Acta Mater., Vol. 47, pp. 2865-2878, 1999
2. M.J. Buehler, A. Hartmaier, H. Gao, J. Mech. Phys. Solids, Vol. 51, pp. 2105-2125, 2003
3. A. Hartmaier, M.J. Buehler, H. Gao, Defect and Diffusion Forum, Vol. 224-225, pp. 107-128, 2003





Research objectives: Nanocrystalline metals and thin metal films reveal an extraordinary strength and, at the same time, interesting deformation behaviour. To shed light on the question why smaller structures are harder, we study the deformation mechanisms of nano-structured metals with large scale computer simulations.

The high strength of these materials can theoretically be attributed to geometrical confinement of the movement of dislocations, i.e. defects in the material that transport plastic deformation. Computer simulations help to elucidate the question down to which dimensions this confinement effect contributes to material strengthening and at which scale other inelastic deformation mechanisms, like grain boundary accommodation, render the material softer again.

Computational approach: In our study, the elementary processes of plastic deformation are investigated with standard molecular dynamics (MD) simulations. The employed MD software is a tool developed at the University of Stuttgart [1]. The simulations are conducted with a semi-empirical embedded-atom (EAM) potential adapted to nickel [2]. The interaction of the atoms with the indenter occurs via a repulsive potential such that the indenter is effectively a hard sphere.
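A common way to realize such a purely repulsive, effectively rigid spherical indenter in MD is to push only on atoms that penetrate the sphere. The sketch below assumes a quadratic force law with stiffness k; the actual functional form used in the study is not specified in the text, so this is an illustration rather than the project's implementation.

```python
import numpy as np

def indenter_forces(positions, center, radius, k):
    """Repulsive force of a rigid spherical indenter on each atom.

    Atoms outside the sphere feel nothing; atoms that penetrate it are pushed
    out radially with a force growing quadratically with the penetration depth
    (one common choice; other forms are possible)."""
    forces = np.zeros_like(positions)
    for i, pos in enumerate(positions):
        d = pos - center
        r = np.linalg.norm(d)
        if 0.0 < r < radius:
            penetration = radius - r
            forces[i] = k * penetration**2 * (d / r)   # directed radially outward
    return forces

# Example: one atom just inside the indenter surface, one far away
atoms = np.array([[0.0, 0.0, 9.5], [0.0, 0.0, 0.0]])
print(indenter_forces(atoms, center=np.array([0.0, 0.0, 15.0]), radius=6.0, k=10.0))
```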

Project description: The analysis of the plastic zone in simulations of surface indentation processes reveals differences between single crystals (Figure 1) and polycrystals (Figure 2). The single crystal exhibits a primary plastic zone of partial dislocations connected with the surface and a secondary plastic zone of prismatic loops that move into the bulk of the material until they are stopped at the fixed lower boundary.

The plastic zone of the polycrystal, in contrast, is severely confined by the high energy grain boundaries enclosing the indented grain. As seen in Figure 2, no prismatic loops are produced in this simulation due to the geometrical confinement of the plastic zone. These simulations are in good qualitative agreement with work published in the literature; see for example [3-5].

The analysis of the indentation forces shows that the elastic properties and also the hardness of the single crystal and the polycrystal are not significantly different. This result is unexpected, because geometrical confinement of the plasticity usually yields stronger and harder materials. This confinement hardening and its breakdown due to grain boundary relaxation has also been investigated for polycrystalline thin films [6] (see also the previous project summary in this booklet).

Alexander Hartmaier, Xinling Ma and Huajian Gao, Max Planck Institute for Metals Research, Stuttgart

Nanostructured Metals: Is Smaller Always Harder?

Figure 1: Dislocation distribution underneath a nanoindentation. Only atoms with elevated potential energy are displayed, revealing the cores of the partial dislocations. Immediately underneath the indenter a zone with a high density of partial dislocations is located. The more remote dislocations are so-called prismatic dislocation loops. The length scale is in Angstrom.

Figure 2: Indentation in the middle of a grain in a nanocrystalline sample. The dislocation zone is confined by the high energy grain boundaries, which are visible due to their broad spectrum of atomic energies.

Publications
1. J. Stadler, R. Mikulla, H.-R. Trebin, Int. J. of Mod. Phys. C, Vol. 8, pp. 1131-1140, 1997
2. S.M. Foiles, M.I. Baskes, M.S. Daw, Phys. Rev. B, Vol. 33, pp. 7983-7991, 1986
3. J. Li, K.J. Van Vliet, T. Zhu, S. Yip, and S. Suresh, Nature, Vol. 418, pp. 307-310, 2002
4. X.L. Ma and W. Yang, Nanotechnology, Vol. 14, pp. 1208-1215, 2003
5. A. Hasnaoui, P.M. Derlet, and H. Van Swygenhoven, Acta Mater., Vol. 52, pp. 2251-2258, 2004
6. M.J. Buehler, A. Hartmaier, H. Gao, Modeling and Simulation in Materials Science and Engineering, Vol. 12, pp. S391-S413, 2004







Research objectives: This research project focuses on the mechanical properties of carbon nanotubes (CNTs). Due to their large surface to volume ratio, CNTs belong to the important class of interfacial materials, and thus the properties of CNTs are significantly different from those of any known bulk material. Since CNTs could potentially be heavily used in tomorrow's nanotechnologies, understanding their behaviour, in particular their mechanical properties, is critically important. The focus of this research is on investigating the mechanical properties of CNTs depending on their geometry.

Markus J. Buehler, Yong Kong and Huajian Gao, MPI for Metals Research, Stuttgart

Nano Materials: A Novel Self-Folded State of Carbon Nanotubes




Computational approach: The fact that we intend to study very long CNTs with up to 50,000 atoms dictates the usage of classical interatomic potentials, rather than a quantum mechanics based treatment of the interatomic interaction. Our computer model is based on a combination of the Tersoff potential for the C-C bonding within the graphite layers, and a Lennard-Jones (LJ) potential parameterized for the weak vdW interactions. The LJ potential parameters are fitted to experimental results to correctly reproduce the cohesion energy of aligned CNTs. We use the ITAP-IMD code developed at the Institute of Theoretical and Applied Physics at the University of Stuttgart. The calculation is computationally challenging because of the long-range vdW interaction (the cut-off radius is larger than 15 Ångstrom).




Project description: The focus of the research is on the mechanical properties of CNTs depending on their geometry. We have shown that very long tubes display significantly different mechanical behaviour than tubes with smaller aspect ratios. We distinguish three different classes of mechanical response to compressive loading: While the deformation mechanism is characterized by buckling of thin shells in nanotubes with small aspect ratios, it is replaced by a rod-like buckling mode above a critical aspect ratio, analogous to the Euler theory in continuum mechanics. For very large aspect ratios, carbon nanotubes are found to behave like a wire that can be deformed in a very flexible manner into various shapes. The different deformation modes are compared in Fig. 1. Figure 2 shows the deformation dynamics of a single CNT under compressive loading.

Figure 1: Shell-rod-wire transition of CNTs depending on the aspect ratio.

Figure 2: Deformation of a single wall CNT under compressive loading (see arrows). The simulation suggests that so-called "elastic defects" form upon application of loading.


The wire-like tubes are interesting, since they allow for completely new dynamical behaviour of CNTs. For example, if different parts of the tube come sufficiently close, attractive vdW forces between these parts are present and can lead to the formation of self-folded CNT structures. By forming vdW "bonds", the system gains energy. On the other hand, further bending the CNT into a shape with smaller radius costs additional elastic energy. This suggests a trade-off between the two energy contributions of binding energy versus bending energy, implying that there exists a critical length of CNTs above which such self-folded structures are energetically stable. The results of a computer simulation of self-folded CNTs are shown in Fig. 3. Our calculations show that the properties of nanoscale materials may indeed depend critically on the geometry. Carbon nanotubes with a low aspect ratio on the order of ten behave significantly differently than those with a large aspect ratio beyond several hundred.
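This trade-off can be made explicit with a simple estimate (our own back-of-the-envelope sketch, not the quantitative analysis of the publications): treating the tube as an elastic beam with bending stiffness \(EI\) and van der Waals adhesion energy \(\gamma\) per unit length of tube-tube contact, a folded configuration with fold radius \(R\) and contacted length \(L_c\) roughly balances

\[
  E_{\text{fold}} \;\approx\; \underbrace{\frac{c_1\,EI}{R}}_{\text{bending energy of the fold}} \;-\; \underbrace{\gamma\,L_c}_{\text{vdW energy gained}},
\]

with \(c_1\) a numerical factor of order unity. Self-folding becomes energetically favourable (\(E_{\text{fold}}<0\)) only if the tube is long enough to provide sufficient contact length, which is why a critical tube length appears.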

Publications
1. M.J. Buehler, Y. Kong, H. Gao and Y. Huang, submitted to: Journal of Engineering Materials and Technology
2. M.J. Buehler, Y. Kong, and H. Gao, Journal of Engineering Materials and Technology, Vol. 126, pp. 245-249, 2004

Figure 3: Dynamical evolution of the self-folded state of CNTs at thermal equilibrium, representing a new, heretofore unknown state of CNTs. The folded part oscillates around an equilibrium length.



Research objectives: Functional bio-nano-complexes, which are formed by functionalizing novel nanostructures (e.g. quantum dots, nanotubes and nanowires) with biomolecules such as nucleic acids and protein peptides, exhibit great potential for applications in bioengineering and nanotechnology. Fundamental understanding and description of such systems will ultimately lead to a new generation of integrated systems that combine the unique properties of the nanostructures with biological recognition capabilities.

The focus of this research is on investigating the dynamics and behaviour of carbon nanotube (CNT)-based bio-nano-complexes. Our primary objectives are to understand the interface and the interactions between CNTs and biomolecules.

Computational approach: The classical molecular dynamics (MD) method is applied to simulate the dynamics of CNT-based bio-nano-complexes. Classical MD describes interatomic interactions via semi-empirical molecular force fields and has a long and successful record in the study of biomolecular systems.

We use the AMBER-94 and GROMOS-96 force fields to model atomic interactions in nucleic acids and proteins, respectively, and their interactions with the solvent. The carbon atoms of the CNTs were treated as uncharged Lennard-Jones (LJ) particles and described by a CNT force field.

Combination rules were used for setting the LJ parameters between CNT and DNA/protein atoms. The whole system, including CNT, DNA/protein and water molecules, contains about 50,000 to 100,000 atoms depending upon the size of the CNT and the nucleic acids/proteins. With the IBM Regatta machines, the trajectory lengths of the simulations are on the order of nanoseconds.
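The text does not state which combination rule was applied; the sketch below illustrates the common Lorentz-Berthelot choice for deriving unlike-pair Lennard-Jones parameters (for example between a CNT carbon and a DNA atom type). The parameter values are hypothetical placeholders, not the force-field values used in the project.

```python
import math

def lorentz_berthelot(sigma_i, eps_i, sigma_j, eps_j):
    """Lorentz-Berthelot combination rules for unlike LJ pairs:
    arithmetic mean for the size parameter, geometric mean for the well depth."""
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    eps_ij = math.sqrt(eps_i * eps_j)
    return sigma_ij, eps_ij

def lj_energy(r, sigma, eps):
    """12-6 Lennard-Jones pair energy."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# Hypothetical example values (nm, kJ/mol) for a CNT carbon and a DNA atom type:
sigma_cnt, eps_cnt = 0.34, 0.36
sigma_dna, eps_dna = 0.33, 0.40
sigma_ij, eps_ij = lorentz_berthelot(sigma_cnt, eps_cnt, sigma_dna, eps_dna)
print(sigma_ij, eps_ij, lj_energy(0.38, sigma_ij, eps_ij))
```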


Fig. 1: A DNA molecule encapsulated inside a carbon nanotube in aqueous solution.

Yong Kong and Huajian Gao, Max Planck Institute for Metals Research, Stuttgart

Biological Molecules Interacting with Carbon Nanotubes

Fig. 2: Van der Waals attraction dominates the encapsulation process.


Project description: We use the classical MD package GROMACS to investigate the dynamics and behaviour of designed CNT-DNA oligonucleotide and CNT-polypeptide complexes in water. The simulation results indicate that an oligonucleotide or a polypeptide can be spontaneously encapsulated into a CNT, provided that the tube size is large enough and the oligonucleotide or polypeptide is appropriately aligned with the CNT. We focus on the dynamics and energetics of DNA/protein encapsulation inside nanotubes, and discuss the mechanism of encapsulation and the effects of CNT size, end-group, DNA/protein sequence and solvent temperature and pressure on the encapsulation process. The van der Waals attraction and hydrophobic forces were found to be important for the encapsulation process, with the former playing the more dominant role in the CNT-DNA and CNT-protein interaction.

This study has general implications for filling nanoporous materials with dissolved molecular clusters or nanoparticles, and offers the possibility to exploit encapsulated CNT-DNA/protein or other CNT-based bio-nano-systems for applications such as molecular electronics, sensors, electronic/optical gene sequencing, and the nanotechnology of gene/drug delivery.

Further studies will focus on the active and selective encapsulation or translocation of DNA/protein molecules in relation to DNA sequencing and DNA/protein delivery systems.

Publications
1. H. Gao and Y. Kong, Annual Review of Materials Research, Vol. 34, pp. 123-150, 2004
2. H. Gao, Y. Kong, D. Cui and C. S. Ozkan, Nano Letters, Vol. 3, pp. 471-473, 2003




Research objectives: This research project focuses on developing a fundamental understanding of the behaviour of biological materials. Particular emphasis is thereby on the mechanical properties of materials with nano-substructures. Such nanostructured materials are frequently found in nature, for instance in bone-like materials. Another field of interest is surface adhesion, where we aim at explaining nature's superior adhesion systems (see Fig. 1).

Computational approach: We have developed atomistic models of biomaterials. The models allow studying bulk properties such as failure mechanisms of biomaterials, as well as surface adhesion of biological materials on substrates. The models are based on a classical MD code with empirical force fields.

Project description: Bone-like biological materials have achieved superior mechanical properties through hierarchical composite structures of mineral and protein. Geckos and many insects have evolved hierarchical surface structures to achieve superior adhesion capabilities.

What is the underlying principle of achieving superior mechanical properties of materials? Using joint atomistic-continuum modeling, we show that the nanometer scale plays a key role in allowing these biological systems to achieve such properties, and suggest that the principle of flaw tolerance may have had an overarching influence on the evolution of the bulk nanostructure of bone-like materials and the surface nanostructure of gecko-like animal species.

We illustrate that if the characteristic dimension of materials is below a critical length scale on the order of several nanometers, the Griffith theory of fracture no longer holds. An important consequence of this finding is that materials with such nano-substructures become flaw-tolerant, as the stress concentration at crack tips disappears and failure always occurs at the theoretical strength of materials, regardless of defects. The atomistic simulations complement continuum analysis and reveal a smooth transition from the Griffith mode of failure via crack propagation to uniform bond rupture at the theoretical strength below a nanometer critical length. Below the critical length for flaw tolerance, the stress distribution becomes uniform near the crack tip. Our modelling resolves a long-standing paradox of fracture theories. These results may have far-reaching consequences for understanding the failure of small-scale materials.

In another area of interest, we focus on biological adhesion systems as found, for example, in geckos. It is well known that geckos can attach very effectively to a variety of surfaces, despite the very weak van der Waals interaction. What is the underlying principle of nature's design of such superior adhesion systems? Our research shows that reducing the relevant length scale of the terminal adhesion elements allows for superior adhesion force (see the schematic in Fig. 2). For instance, the nanoscale sizes allow the spatula nanoprotrusions of the gecko to achieve optimum adhesion strength. Based on similar concepts as found in biological bulk materials, adhesion strength optimization is achieved by restricting the relevant dimension to the nanometer scale so that crack-like flaws do not propagate to break the desired structural link. Our research also shows that reducing the dimensions is not the only route to optimal adhesion: alternatively, by optimizing the shape of the contact area, adhesion at the theoretical strength can be achieved at any length scale. We discuss these results in the perspective of adhesion systems found in biological systems. Our large-scale atomistic simulations provide fundamental insight into the tribological behaviour of nanostructured surfaces.
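The critical size below which flaw tolerance sets in can be motivated by comparing the Griffith strength of a cracked element with the theoretical strength. On purely dimensional grounds (our simplified sketch; prefactors of order unity depend on the exact geometry and are omitted), this gives

\[
  h_{\text{cr}} \;\sim\; \frac{\gamma\,E}{\sigma_{\text{th}}^{2}},
\]

where \(\gamma\) is the surface (fracture) energy, \(E\) the Young's modulus and \(\sigma_{\text{th}}\) the theoretical strength. For typical mineral-like parameters this combination falls in the nanometer range, and the same kind of argument applied to the adhesion problem favours terminal contact elements (spatulae) of nanometer size.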

Huajian Gao, Markus J. Buehler, Baohua Ji and Haimin Yao, Max Planck Institute for Metals Research, Stuttgart

Flaw Tolerant Bulk and Surface Nanostructures in Biological Systems

Figure 1: Different adhesion systems in nature (panels: spider, gecko, beetle, fly). Such systems are very effective, and often show substructures on the order of the nanoscale. Our modelling aims at understanding nature's secrets in developing extremely effective adhesion systems (Figure courtesy Dr. Gorb, MPI-MF).






Figure 2: Left: Schematic of the adhesion model of an insect spatula on a substrate. Right: Atomistic computer model of surface adhesion of such a spatula in a gecko.


Publications
1. H. Gao, B. Ji, I.L. Jaeger, E. Arzt, P. Fratzl, Proc. Natl. Acad. Sci. USA, Vol. 100, pp. 5597-5600, 2003
2. H. Gao, B. Ji, M.J. Buehler and H. Yao, Mechanics and Chemistry of Biosystems, Vol. 1, No. 1, pp. 37-52, 2004
3. H. Gao and H. Yao, "Shape insensitive optimal adhesion of nanoscale fibrillar structures," Proc. Natl. Acad. Sci. USA, Vol. 101, pp. 7851-7856, 2004




Max Planck Institute for Metals Research (MPI-MF) in Stuttgart, Department "Theory of Inhomogeneous Condensed Matter", S. Dietrich

Nanofluidics
In recent years substantial efforts have been invested in miniaturizing chemical processes by building microfluidic systems. The "lab on a chip" concept integrates a large variety of chemical and physical processes into a single device, in a similar way as an integrated circuit incorporates many electronic devices into a single chip. These microfluidic devices do not only allow for cheap mass production, but they can also operate with much smaller quantities of reactants and reaction products than standard laboratory equipment. This is particularly important for rare and expensive substances, such as certain biological substances, and for toxic or explosive materials. Even though most available microfluidic devices today have micron-sized channels, further miniaturization is leading towards the nano-scale. Besides meeting technical challenges, new theoretical concepts are needed to understand the basic physical processes underlying this new technology at that scale. Whereas the ultimate limits for the miniaturization of electronic devices are set by quantum fluctuations, in a chemical chip these limits are determined by thermal fluctuations and can be explored by methods of classical statistical mechanics.
One main line of development are so-called open microfluidic devices. The idea is that the liquid is guided by hydrophilic stripes on an otherwise hydrophobic substrate (in "chemical channels"). In this project we study fluid flow on chemically patterned substrates on the mesoscopic and on the atomistic scale, and we develop the theoretical foundations for further miniaturization of microfluidic devices.

Research objectives: Liquids in contact with chemically structured substrates are not only of technological use but also enjoy high interest in basic research. Wetting on chemically structured substrates has been studied extensively on all length scales. For example, adsorption on stripe patterns shows a rich morphology on the macro-scale as well as on the nano-scale.

Equilibrium wetting phenomena on chemically patterned substrates have been analyzed theoretically in great detail on both the macroscopic and the microscopic scale. Microscopic theories, such as the successful density functional theory, take into account the long range of inter-molecular attractions and the short-ranged repulsions explicitly. Density functional theories not only allow to study the order of wetting transitions and the equilibrium shape of the wetting film, but also the detailed microscopic structure of the liquid in the vicinity of the substrate and at the liquid-vapor interface. Unfortunately, at present density functional theory cannot describe the flow of fluids.

In macroscopic theories, however, the inter-molecular interactions are approximated by local descriptions.

Macroscopic theories have been used to describe the shape of droplets on homogeneous and structured substrates. The equilibrium droplet shape in chemically patterned slit-like pores has also been studied with the same technique. Flow can be described straightforwardly in macroscopic theories.

It is a great challenge to describe the intermediate scale between the microscopic and the macroscopic one. In most cases it is impossible to obtain analytical results from microscopic theories, and numerical calculations are prohibitive for large systems. Our current research focuses on extending the scope of macroscopic theories down to the meso-scale by incorporating microscopic effects. We also use atomistic theories, in particular molecular dynamics simulations, to test the scope of mesoscopic theories.

Computational approach: In this project we use mesoscopic descriptions for fluid flow on chemically structured substrates and compare predictions from these theories with results from large scale molecular dynamics simulations performed at the RZG.

S. Dietrich, A. Moosavi, M.N. Popescu, M. Rauscher; in collaboration with J. Koplik* and T.S. Lo*, Max Planck Institute for Metals Research, Stuttgart

Fluid Flow on the Nanoscale: From Microfluidics to Nanofluidics

* CITY COLLEGE OF THE CITY UNIVERSITY OF NEW YORK




Accomplishments: What we call mesoscale hydrodynamics is standard hydrodynamic equations augmented with long-ranged liquid-substrate interactions and hydrodynamic slip of the liquid at the substrate (with slip lengths on the nanometer scale). The effect of thermal fluctuations (originating from the Brownian motion of the molecules) can also be included.
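A typical mesoscopic starting point of this kind, written here in its standard lubrication form as an illustration (the precise model used in the project may differ in detail), is a thin-film equation for the local film height \(h(x,y,t)\), augmented with a disjoining pressure \(\Pi(h)\) encoding the long-ranged liquid-substrate interaction and a Navier slip length \(b\):

\[
  \partial_t h \;=\; \nabla\cdot\!\left[\left(\frac{h^{3}}{3\eta}+\frac{b\,h^{2}}{\eta}\right)\nabla\!\left(-\sigma\,\nabla^{2}h-\Pi(h)\right)\right],
\]

with \(\eta\) the viscosity and \(\sigma\) the liquid-vapor surface tension. Thermal fluctuations can be added as a conserved noise term on the flux.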

These mesoscopic additions to standard hydrodynamics lead to a qualitatively different picture of fluid flow in a chemical channel. One of the most critical issues in open microfluidic systems is to keep the liquid in the desired areas such as channels, reactors, and reservoirs. On a macroscopic scale the liquid will stay on the hydrophilic channels for low filling, and the three-phase contact line will lie on the channel area or it will be pinned at the chemical step. Spill-over onto the hydrophobic areas occurs once the contact angle of the liquid at the chemical step exceeds the advancing contact angle on the hydrophobic area (which is in general larger than the equilibrium contact angle due to surface defects).

On the nano-scale the situation is quite different. First, the concepts of a contact line and a contact angle have to be revised. A sharp contact line is replaced by a smooth transition from a mesoscopic wetting film to the precursor film, which is only a few molecular diameters thick and spreads ahead of the main portion of the moving liquid. Moreover, even an atomically sharp boundary between hydrophilic and hydrophobic areas on the substrate will lead to a smooth lateral variation of the interaction potential between the liquid particles and the substrate.

Spill-over on a micron-sized chemical strip (left) as compared to a nano-scale strip (right). Dark red marks the hydrophobic areas and green the hydrophilic areas. At the micron scale one can clearly distinguish between no spillage (a) and spillage (c), whereas for nano-channels one can only distinguish between small tails (b) and large tails (d) of the lateral liquid distribution.

Thus the macroscopic and sharp criterion for a liquid staying on a chemical channel, namely whether the triple line crosses the channel boundary or not, becomes fuzzy at the nano-scale. Since there is always a certain amount of liquid on the hydrophobic part of the substrate, one has to address the issue of which fraction of the liquid is outside the channels rather than whether there is liquid outside the channels. This is a serious issue if one wants to avoid cross talk (i.e., exchange of liquid) between two neighboring channels in a chemical chip.

The features of mesoscopic hydrodynamics discussed in the last paragraph are mostly based on quasi-static considerations. Transport mechanisms and dynamic properties have not been discussed. Experience tells that down to length scales of about 1-10 nm hydrodynamic theories provide a quite good description of liquid flow. This has been confirmed by our molecular dynamics simulations.

Publications:
Wetting on structured substrates, S. Dietrich, M. N. Popescu, M. Rauscher, J. Phys.: Cond. Mat., in press
Flow of thin liquid films on chemically structured substrates, M. Rauscher, S. Dietrich, proceedings of the TH-2002 (Supplement to Journal Henri Poincaré 4, 2003), cond-mat/0303639

Snapshot of a molecular dynamics simulation of a fluid flowing along a chemical nano-channel. The red substrate atoms are hydrophilic and the green ones hydrophobic. One can observe a layering of liquid molecules near the substrate, spill-over onto the hydrophobic part and a finite width of the liquid-vapor interface.





Max Planck Institute for Coal Research in Mülheim an der Ruhr, Department of Theory

Computational Chemistry
The department of theory, headed by Prof. Walter Thiel, is devoted to the use of theoretical and computational methods as means to address specific questions in chemistry in general, and catalysis in particular. To this end, a broad range of computational tools, ranging from molecular mechanics and semiempirical approaches to modern density functional and high-level ab initio methods, are maintained and developed. Typical applications comprise highly accurate predictions of spectroscopic properties of small molecules, computation of catalytic cycles in homogeneous transition metal chemistry, or modeling of enzymatic reactivities and selectivities, to name but a few. In most cases significant computational effort is involved, dictated by the desire to obtain the highest possible accuracy, by the need to properly describe systems of ever increasing size, or by the necessity to simulate molecular dynamics over sufficiently long time scales. Depending on the problem at hand, the computations are performed on local workstations and clusters, or on external massively parallel supercomputers.

Research objectives: Our research is focused on computational studies of reactivities and NMR chemical shifts of transition metal complexes in the context of homogeneous catalysis. We seek to increase the accuracy of conventional, static calculations by taking the actual experimental conditions (solvent, temperature) into account.

Computational approach: Our approach includes electronic structure methods based on density functional theory (DFT), either for static molecules, or for ensembles at finite temperatures employing the Car-Parrinello molecular dynamics (CPMD) method.

Project description: Nuclear magnetic resonance (NMR) spectroscopy is an indispensable tool for characterizing transition-metal compounds. When accessible experimentally, the NMR properties of the metal nuclei themselves can afford important information on the electronic and geometrical structure of the complexes, as well as on their reactivity. With the advent of DFT these properties, in particular the chemical shifts, have become amenable to theoretical calculations. In many cases static calculations, typically for isolated molecules in their equilibrium geometry, are sufficient to reproduce experimental trends semi-quantitatively. The present challenge is to increase the accuracy of the computations in order to widen the scope of possible applications.

To this end, we have been using molecular dynamics methods such as CPMD, thereby going beyond the static models. The systems are propagated at finite temperature over a sufficiently long time scale, and the NMR properties are computed as averages over the ensembles during that time. In the absence of a solvent in the simulations, such a procedure models the effect of thermal (vibrational) averaging on the property of interest. Transition metal chemical shifts can be quite sensitive to such effects, and it is envisaged that, in particular, the description of highly fluxional molecules will be improved by such a procedure.
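Conceptually, the averaging step amounts to nothing more than the following sketch, with a hypothetical placeholder for the property evaluation; in practice the CPMD trajectory is coupled to a separate quantum-chemical NMR calculation for each selected snapshot.

```python
def average_chemical_shift(trajectory, compute_shift, skip=10):
    """Thermal average of a chemical shift over snapshots of an MD trajectory.

    trajectory    : sequence of snapshots (atomic coordinates)
    compute_shift : function returning the chemical shift (ppm) for one
                    snapshot, e.g. a call to an NMR property code
                    (hypothetical placeholder here)
    skip          : take only every `skip`-th frame so that the sampled
                    configurations are approximately uncorrelated
    """
    samples = [compute_shift(frame) for frame in trajectory[::skip]]
    return sum(samples) / len(samples)

# Toy usage with a stand-in property function (illustrative numbers only):
fake_trajectory = list(range(1000))
print(average_chemical_shift(fake_trajectory, compute_shift=lambda s: -420.0 + 0.01 * (s % 7)))
```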

The MD simulations can readily be extended to include the solvent explicitly. This is particularly important in the case of highly polar and protic solvents, which can form strong, specific interactions with the solute. The prototypical example is water, which, due to its environmentally benign character, is also a very attractive reaction medium, for instance for so-called biomimetic systems, that is, for catalytic reactions modeled after nature. We have recently studied a corresponding model system for catalytic olefin epoxidation in some detail.

The presence of an aqueous medium can have a strong impact on the structure and dynamics of transition metal complexes and, hence, on their spectroscopic properties. This is especially pronounced for highly charged systems, where neglect of the solvent can lead to erroneous results, and where the MD-based protocol with explicit modeling of the solvent produces chemical shifts in very good accord with experiment.

These studies can be seen as part of a more general effort to devise a "virtual laboratory", in which chemical experiments – NMR spectroscopic studies in our case – can be reliably performed on a computer. Such simulations will not make actual experiments obsolete (at least not in the foreseeable future), but already now they can provide us with a wealth of information that would be difficult to obtain otherwise. Ultimately, studies of the kind pursued in our group could not only be helpful in practical applications, such as designing new or better catalysts, but also for gaining a deeper understanding of the dynamic nature of matter.

Michael Bühl, Max Planck Institute for Coal Research, Mülheim

Simulation of NMR Properties



MD simulations can furnish detailed information regarding the structure and dynamics of liquids and solutions. The snapshot of a prototypical vanadate complex in water illustrates how the solute is incorporated into the dynamical network of hydrogen bonds that is formed in the liquid.



Publications:
• M. Bühl, M. Parrinello, "Medium Effects on 51V NMR Chemical Shifts. A Density Functional Study." Chem. Eur. J. 2001, 7, 4487.
• M. Bühl, F. T. Mauschick, F. Terstegen, B. Wrackmeyer, "Remarkably Large Geometry Dependence of 57Fe NMR Chemical Shifts." Angew. Chem. Int. Ed. 2002, 41, 2312.
• M. Bühl, R. Schurhammer, P. Imhof, "Peroxovanadate Imidazole Complexes as Catalysts for Olefin Epoxidation: Density Functional Study of Dynamics, 51V NMR Chemical Shifts, and Mechanism." J. Am. Chem. Soc. 2004, 126, 3310.



Max Planck Institute for Chemical Physics of Solids in Dresden, Research Fields "Chemical Metals Science" (Prof. Yu. Grin) and "Inorganic Chemistry" (Prof. R. Kniep)

Chemical Bonding (Electronic Structure)
The scientific objective of the institute comprises the investigation of solid intermetallic phases with novel chemical and physical properties. This immediately relates to the still open question of chemical bonding in metallic systems and its direct implications for the chemical and physical properties of such materials. For this issue a common project group ("Chemical Bonding") has been established, which is engaged in various theoretical aspects of chemical bonding. Only a part of our activities is presented in this booklet. The detailed analysis of the electronic structure (i.e. electron dynamics) in this contribution, and of the dynamical behavior of the atoms in the next contribution ("Chemical Bonding (Dynamics)"), represent complementary approaches to the full problem, the accurate quantum mechanical description of chemical systems with all degrees of freedom they possess in nature. As each of the projects easily meets the criteria for high-performance computing, it can easily be understood that the treatment of the full problem with the same accuracy is out of scope for quite some time.

Chemical bonding itself is not a quantum mechanical observable. Moreover, it is not uniquely defined in a rigorous way. Nevertheless, it constitutes a basis for the systematics in chemical science, established in previous decades and centuries, and is still being extended by current experimental research. In contrast, properties are well-defined quantities, which can be computed using models at various levels of approximation. Especially difficult and computationally very demanding is the quantum mechanical treatment of materials containing highly correlated electrons, which are of special interest in our research. The adequate treatment of electron correlation is crucial for the understanding of the physical (magnetism, electrical conductivity, ...) and chemical properties (structure, phase transitions, ...) of these compounds. The unification of the corresponding chemical and physical model space is the central goal of the chemical bonding project. It is expected to give rise to quantum mechanically based chemical bonding concepts also for intermetallic materials, which would be of great benefit for the chemical tailoring of material properties.

Research objectives: There is no rigorous unique quantum mechanical definition of chemical bonding; instead, various definitions exist, most of which are based on non-physical, purely mathematical objects, e.g. atomic basis sets. The goal of this project is the development of quantum mechanically based chemical bonding descriptors which are derived from the electron pair density. This universal quantity explicitly contains all the information necessary for the complete characterization of the system in terms of observable properties. Thus, both the properties and the chemical bonding can be extracted at the same level of theory. From the systematic analysis of this kind of chemical bonding descriptors for a large number of systems, yet undiscovered chemical bonding features can be extracted.

Computational approach: For molecular systems a much higher level of quantum mechanical treatment is possible than for crystalline systems. For the sake of accuracy, in this project we confine ourselves to molecular systems, ideally small enough to be tractable by some of the most accurate ab initio methods available (MRCI, CCSD(T)) but large enough to be related to specific problems occurring in some interesting crystalline intermetallic phases.

Frank R. Wagner, Miroslav Kohout, Max Planck Institute for Chemical Physics of Solids, Dresden

Electronic Correlation in Position Space


Quantum mechanical wavefunction-based ab initio calculations of the electronic structure are performed using the GAUSSIAN, MOLPRO and MOLCAS program systems. The resulting wavefunctions are subsequently processed with program packages specifically developed within this project.

Project description: In 1916, about 10 years before the founding of quantum mechanics, Gilbert N. Lewis concluded that a chemical bond is formed by the pairing of two electrons. The Lewis model is still a valid and widely used model of the chemical bond in chemical science. However, there is a clear gap between this concept and accurate quantum mechanical state-of-the-art methods for electronic structure calculation. The Lewis concept is mostly interpreted on the basis of molecular orbitals, which are inappropriate approximations for the wavefunction in highly correlated systems. In contrast, the electronic pair density includes all the necessary and sufficient information from the many-electron wavefunction that is needed to exactly characterize the electronic structure in the framework of an explicit and exact Hamiltonian. It is an exciting project to investigate the Lewis model on the basis of a rigorously defined local correlation of the motion of electrons in position space, derived from the pair density.




We derived two different local correlation functions in 3D-space, one for parallel-spin electrons (ELI) [1], and another one for antiparallel-spin electrons (ELIA) [2].
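For reference, the quantity underlying both descriptors is the textbook two-electron distribution, shown here in its standard definition with the normalization \(N(N-1)\) (the specific ELI/ELIA functionals of Refs. [1,2] are built on top of it and are not reproduced here):

\[
  \rho_2(\mathbf{r}_1,\mathbf{r}_2) \;=\; N(N-1)\int |\Psi(\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_N)|^{2}\; d\sigma_1\, d\sigma_2\, d\mathbf{x}_3\cdots d\mathbf{x}_N ,
\]

i.e. the probability density of finding one electron at \(\mathbf{r}_1\) and another at \(\mathbf{r}_2\). Its same-spin component vanishes for \(\mathbf{r}_1\to\mathbf{r}_2\) (the Fermi hole enforced by the Pauli principle), which is the local avoidance of same-spin pairs exploited by ELI. Its antiparallel-spin component deviates from the simple product of the one-electron spin densities only when correlation beyond Hartree-Fock is included, which is why ELIA requires correlated wavefunctions.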

The correlation of parallel-spin electrons is strongly influenced by the Pauli principle, which leads to a local avoidance of same-spin electron pairs. The extent of such local avoidance of pairing is not constant over all space, but varies in a way that is characteristic for the chemical bonding in that system. It can be exploited (more than 260 publications on chemical bonding analysis world-wide, see [3]) already at the computationally simple Hartree-Fock (or Kohn-Sham) level of theory, where the Pauli principle is included via a single Slater-determinant wavefunction. On the other hand, there exist many systems for which a single Slater determinant ansatz is not sufficient. For systems containing transition metals, calculations using multi-configuration wavefunctions within the complete active space method yield more reliable descriptions of physical properties.

Obviously the avoidance of local same-spin electron pairs is just the opposite behaviour with respect to the Lewis electron pair picture. Conceptually, extending the investigations to antiparallel-spin electron pairs, i.e. ELIA, is the easy and natural solution. However, computationally it is very much more demanding. This can be immediately understood from the fact that antiparallel-spin electrons are completely uncorrelated at the Hartree-Fock level of theory (ELIA(r) = 1) in all space for all chemical systems. Only beyond the Hartree-Fock level can the correlation of antiparallel-spin electrons be investigated at all, and it still depends significantly on the method used. The problem is computationally so demanding that brute-force computation at chemical accuracy is not possible for many medium-sized molecules on any existing computer system. Thus the investigation of the energetic aspects of antiparallel-spin electron correlation is at the forefront of research of many theory groups in the world.

Correlation of motion of antiparallel-spin electrons for the F2 molecule. High correlation of motion is indicated by low ELIA values and vice versa. Surfaces of constant value ELIA(r) = 0.9999 have been drawn in blue. The bond midpoint constitutes a region of comparatively weakly correlated electrons. The same applies to the ring-shaped regions surrounding the F atoms.


In order to study the spatial aspects of ELIA for transition-metal containing molecules, we have to perform very large configuration interaction (CI) calculations with multi-reference wavefunctions (MRCI) or apply the single-reference coupled cluster method (CCSD(T)).

Some basic features can already be seen at the computationally simpler CASSCF (complete active space) level, which is only a pre-step for the MRCI calculations. In the figure we show ELIA(r) from a CASSCF calculation (14 electrons in 11 orbitals) of the pair density for the F2 molecule. In the bond midpoint region the pair population constitutes a local maximum (high values of ELIA) – displayed by the disk-shaped 0.9999-isosurface – and approaches its value at the Hartree-Fock level (ELIA = 1). Antiparallel-spin electrons are less correlated there, i.e. they avoid each other less than in neighbouring regions. This is a characteristic of covalent bonding. Additional local maxima of ELIA(r) occur in ring-shaped regions surrounding the F atoms. They can be interpreted as the lone-pair regions of the molecule. These features are in surprising agreement with the Lewis formula of this molecule, i.e. a single F–F bond with three lone pairs on each fluorine atom.

It will be exciting to investigate in detail the physical relevance of the Lewis picture, especially for highly correlated electron systems, e.g., those containing transition and/or rare-earth metals.

Publications:
1. M. Kohout, A measure of electron localizability, Int. J. Quantum Chem. 97, 651 (2004).
2. M. Kohout, K. Pernal, F. R. Wagner, Yu. Grin, Electron localizability indicator for correlated wavefunctions. I. Parallel-spin pairs, Theor. Chem. Acc. 112, 453 (2004).
3. http://www.cpfs.mpg.de/ELF




Max Planck Institute for Chemical Physics of Solids in Dresden, Research Fields "Inorganic Chemistry" (Prof. R. Kniep) and "Chemical Metals Science" (Prof. Yu. Grin)

Chemical Bonding (Dynamics)
The investigation of a single structure is often insufficient for the complete characterization of a material. Many important chemical properties result from a compound's dynamical behavior. These are addressed by the dynamics section of the chemical bonding group (see also the previous contribution: "Chemical Bonding (Electronic Structure)"). We focus on chemical reactions, crystal nucleation/dissolution and phase transitions. In many of our studies we face model systems of considerable complexity. These imply the need to choose a compromise between accuracy and computational efficiency. Depending on the compound under investigation, the applied simulation methods range from pure quantum mechanical, mixed quantum/classical to fully classical molecular dynamics simulations. Many of the processes we study occur very rarely. It is therefore crucial to tackle the time scale problem. For this we develop several techniques, which range from coarse grained to path sampling approaches. While the methodological progress is still continuing, we are already able to provide some fascinating new insights into the dynamics of bond formation and dissociation. Examples of our work, namely studies of crystal nucleation from solution and pressure-induced phase transitions, are described in the following paragraphs.

Research objectives: Processes like chemical reactions, crystal nucleation/dissolution and phase transitions often involve the crossing of rare intermediate states. As a consequence, their investigation is complicated by the need to scan a large number of possible arrangements in order to find the transition state(s). In molecular dynamics simulations, this leads to long 'waiting' times before the event of interest actually occurs. These waiting periods may easily exceed the scope even of sophisticated hardware by several orders of magnitude, hence rendering the observation of many processes by direct simulation impossible. Our aim is to develop methods which help to escape this limitation and to apply them to the investigation of bond breaking and formation dynamics.

Computational approach: In an attempt to circumvent the time-scale problem, two major approaches have emerged over the last decades. The most straightforward approach is to enhance the kinetics of rare events by applying elevated temperature, pressure or a strong over-concentration of a particular molecular species. While in principle this strategy helps crossing any reaction barrier, the stronger the artificial process acceleration is chosen, the more carefully the results have to be considered. Excessive driving of a process may easily lead to the skipping of important intermediates or even cause the system to follow completely different mechanistic routes. Similar limitations are related to the widely used approach of applying external driving forces. This method is based on the choice of a presumed reaction coordinate. The desired process is then induced by artificial potentials or constraints, which are functions of this coordinate. As a consequence, the mechanistic analysis may only be given in terms of the predefined model of the reaction coordinate. Sometimes several independent investigations based on various mechanistic models are performed. However, in complex systems the number of putative mechanistic routes is typically too large to account for all possibilities.

Recently, Chandler et al. introduced the transition path sampling method for the molecular dynamics simulation of rare events (http://www.pathsampling.org). This novel approach concentrates on the relatively short time interval in which the process of interest takes place and completely ignores the waiting period required for observation in direct simulation.
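The central move of transition path sampling can be sketched as follows; this is a strongly simplified, serial illustration rather than the scheme actually used in the studies described here. `propagate`, `in_A` and `in_B` are hypothetical placeholders for the MD integrator and the basin indicator functions, and the acceptance rule is reduced to the requirement that the perturbed trajectory still connects the two stable basins.

```python
import numpy as np

def shooting_move(path, propagate, in_A, in_B, dv_sigma=0.05, rng=np.random):
    """One 'shooting' move of transition path sampling (sketch).
    path: list of (positions, velocities) frames connecting basins A and B.
    propagate(state, n_steps, reverse=False) must return a list of frames.
    in_A(frame), in_B(frame) are indicator functions for the two basins."""
    i = rng.randint(1, len(path) - 1)                       # interior time slice
    pos, vel = path[i]
    vel = vel + dv_sigma * rng.standard_normal(vel.shape)   # perturb the momenta
    backward = propagate((pos, vel), i, reverse=True)[::-1]
    forward = propagate((pos, vel), len(path) - i - 1)
    trial = backward + [(pos, vel)] + forward
    # keep only reactive trial paths (A -> B); otherwise retain the old path
    if in_A(trial[0]) and in_B(trial[-1]):
        return trial, True
    return path, False
```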

Oliver Hochrein, Stefano Leoni and Dirk Zahn, Max Planck Institute for Chemical Physics of Solids, Dresden

Mechanisms of Reactions, Crystal Nucleation, Dissolution and Phase Transitions


Project Description: On the basis of the path sampling approach it is possible to investigate complex processes like crystal nucleation from solution with molecular dynamics simulations. In a first study of this kind, the aggregation of Na+ and Cl- ions in a slightly over-saturated solution was investigated. On the atomistic level of detail our knowledge in this field is still very limited. Using the example of NaCl nucleation, it is demonstrated how new insight into the initial steps of crystal formation may be obtained, which had so far remained elusive to both theory and experiment.
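Observations like the one highlighted in the snapshot below, a sodium ion stripped of its hydration shell and coordinated only by chloride, follow from analysing the first coordination shell of each cation along the harvested trajectories. A minimal sketch of such an analysis is given here; the shell radius of 3.2 Å and the array layout are illustrative assumptions, not parameters of the actual study.

```python
import numpy as np

def first_shell_composition(na_pos, cl_pos, o_pos, box, cutoff=3.2):
    """For every Na+ ion, count Cl- ions and water oxygens within `cutoff`
    (Angstrom, an assumed first-shell radius), using the minimum-image
    convention for a cubic periodic box of side length `box`."""
    def n_within(center, others):
        d = others - center
        d -= box * np.round(d / box)          # minimum-image convention
        return int(np.sum(np.linalg.norm(d, axis=1) < cutoff))
    return [(n_within(na, cl_pos), n_within(na, o_pos)) for na in na_pos]

# A sodium with (n_cl > 0, n_o == 0) is fully dehydrated and coordinated
# only by chloride, as for the central ion highlighted in the snapshot.
```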

We furthermore applied path sampling to the field of pressure-induced phase transitions. Rather than a collective transformation of the system from one phase to the other, path sampling revealed the preference for a substantially different class of transition routes. Therein the new structure is observed to emerge from a single ionic defect. The neighboring ions arranged themselves in accordance with the new structure, leading to subsequent growth of the high-pressure phase.

Snapshot taken from a path sampling molecular dynamics run of NaCl nucleation from aqueous solution. The sodium and chloride ions are shown as blue and green balls, respectively. The water molecules are represented by sticks. In the center of the aggregate, a sodium ion (purple) has no water molecules in its nearest neighborhood. It is instead coordinated by chloride ions, in a way very similar to the bulk NaCl crystal.

We consider this a major step forward in the mechanistic analysis of phase transitions. When using overcritical pressure or constraints on a reaction coordinate, the structural transformation is caused by both the self-organization of the ions into new structures and the artificial driving. For the structural transformations in some alkali halides the external driving was found to completely overrun the slower process of ionic self-organization, i.e. the nucleation and subsequent phase growth. With the path-sampling based simulation schemes presented in this work no external driving is needed, and the self-organization may be studied in the absence of such artifacts.

Publications:
• D. Zahn, Phys. Rev. Lett. 92 (2004), 040801.
  - Highlighted in D. Moore, "Birth of a Crystal", Phys. Rev. Focus, February 2004.
  - Max-Planck Highlights 09/2004, Festkörperforschung/Materialwissenschaften: "Geburt eines Kristalls".
• D. Zahn and S. Leoni, Phys. Rev. Lett. 92 (2004), 250201.
• D. Zahn and O. Hochrein, Phys. Chem. Chem. Phys. 5 (18) (2003), 4004.
• D. Zahn, J. Phys. Chem. B 107 (2003), 12303.
• S. Leoni and D. Zahn, Z. Kristallogr. 219 (2004), 339.
• D. Zahn and S. Leoni, Z. Kristallogr. 219 (2004), 345.
• D. Zahn, Chem. Phys. 300 (2004), 79.

Snapshots taken from a path sampling molecular dynamics run of the pressure-induced phase transition of sodium chloride from the NaCl to the CsCl type of structure. The sodium ions are colored according to the number of nearest chloride ions. This number changes from 6 to 8 in the course of the phase transformation.


Max Planck Institute for the Physics of Complex Systems (PKS), Dresden, Research Group Nonlinear Dynamics in Quantum Systems

Complex Quantum Dynamics

Dr. Javier Madroñero and Dr. Andreas Buchleitner conduct the computational physics activity in this group, with a focus on complex quantum dynamics in highly excited Rydberg systems subject to strong time-periodic or static fields. A large part of the intriguing quantum phenomena which arise under such highly nonperturbative conditions can be understood in terms of the nonintegrability of the corresponding classical equations of motion, tantamount to the destruction of good quantum numbers on the spectral level, as already remarked by Einstein. Such "quantum chaotic" systems often exhibit universal features reminiscent of disordered systems, which nowadays can be confirmed quantitatively thanks to high performance supercomputing.

Javier Madroñero and Andreas Buchleitner

Grand Challenges in Light-Matter Interaction

Research objectives: Our project aims at a detailed understanding of quantum transport in light-matter interaction, and at the elaboration of combined theoretical-numerical tools with predictive power, such as to guide experiments. Specifically, we study the spectral structures underlying the excitation and ionization process of multielectron atoms under intense electromagnetic driving by optical or radiofrequency fields, in close contact with state-of-the-art experiments. With the complete spectral information on the dressed atomic system at hand, universal features of quantum transport in complex systems can be identified, at very large spectral densities.

Computational approach: Our computational approach was tailored to the general features of the excitation and ionization process under study: an atomic system is exposed to a temporally periodic perturbation which induces decay on variable time scales, ranging from a few to several million field cycles. The periodicity of the drive is incorporated by employing Floquet theory, whilst the decay process due to the coupling to the atomic continuum is efficiently described by complex dilation of the total Hamiltonian. In a basis set which is optimally adapted (through group theoretical methods combined with symbolic calculus) to the symmetry of the problem, one finally arrives at a complex symmetric, sparse-banded generalized eigenvalue problem, which is solved by an efficient parallel implementation of the Lanczos algorithm (with typical storage requirements currently in the range of 500-800 GB). The resulting (complex) eigenvalues and eigenvectors provide full dynamical and spectral information.
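For orientation, the following sketch solves a small complex symmetric, sparse-banded generalized eigenvalue problem of this type with SciPy's shift-invert Arnoldi solver. It is a serial stand-in for the parallel Lanczos implementation mentioned above, and the matrix sizes, values and the helper name `resonances_near` are artificial.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def resonances_near(H, S, e0, n=10):
    """Eigenpairs of H c = E S c closest to the complex target energy e0.
    H is complex symmetric and sparse-banded, S is the overlap matrix."""
    return spla.eigs(H, k=n, M=S, sigma=e0, which='LM')

# toy example: a random banded, complex symmetric (non-Hermitian) matrix
rng = np.random.default_rng(0)
N = 2000
main = rng.normal(size=N) + 1j * rng.normal(size=N)
off1 = rng.normal(size=N - 1) + 1j * rng.normal(size=N - 1)
off2 = rng.normal(size=N - 2) + 1j * rng.normal(size=N - 2)
H = sp.diags([off2, off1, main, off1, off2], offsets=[-2, -1, 0, 1, 2], format='csc')
S = sp.identity(N, format='csc', dtype=complex)
energies, states = resonances_near(H, S, e0=0.1 - 0.05j)
# a resonance with Im(E) < 0 decays at the rate Gamma = -2 Im(E)
```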

Project description: The excitation and ionisation of single- or two-electron states of alkali, alkaline-earth or helium atoms by electromagnetic fields is under intense experimental and theoretical study by various research groups worldwide. Key issues range from quantum control and quantum information processing over quantum transport to the complex fragmentation dynamics of many-particle systems, typically under highly nonperturbative conditions. Accurate theoretical and/or numerical treatments of such strongly perturbed atomic systems are, however, scarce, since the spectral density increases rapidly with the electronic excitation and with the dominant order of the field-induced multiphoton processes which dominate the dynamics.

The central target of our project is to build a versatile numerical machinery for the accurate description of such systems, with a minimum of approximations, without adjustable parameters, and with quantitative predictive power. So far, we have addressed:

• The excitation and ionization of single-electron Rydberg states of atomic hydrogen and of the alkalis under radiofrequency driving, where we could establish the existence of a universal ionization threshold (as well as a universal decay rate distribution for the atomic eigenstates in the field) in a wide frequency regime of the driving field, thus solving a longstanding puzzle posed by experimental observations which had remained open for more than one decade.



• The creation and dynamics of nondispersive wave packets in near-resonantly driven one-electron Rydberg systems (or, more generally, in periodically driven, bounded quantum systems), which opens novel perspectives in coherent control. Our specific approach allowed the prediction not only of the mere existence of such objects, but in particular of their extremely long lifetimes, very recently confirmed in laboratory experiments.
• The autoionization rates of doubly excited states of planar helium, where we could establish a sensitive dependence on the dimension of the accessible configuration space: indeed, frozen planet states, which are a highly asymmetric though stable configuration of the three-body Coulomb problem, both classically and quantum mechanically, exhibit autoionization rates which are suppressed by several orders of magnitude when confined to a 1D configuration space, compared to 2D and 3D dynamics.
• The existence of nondispersive two-electron wave packets in periodically driven doubly excited Rydberg states of helium, as a highly nontrivial extension of the abovementioned objects in single-electron dynamics. Indeed, in planar helium (where configuration space is confined to a plane which contains the driving field polarization vector; currently we are content with this approximation, simply because of the rapidly growing dimension of the Hilbert space when angular momentum is no longer conserved) we have recently found eigenstates of the driven system which represent well-localized wave packets propagating along a frozen planet trajectory, without dispersion; see the Figure.

In the near future, we shall generalize our approach to describe the driven three-body Coulomb problem in full 3D configuration space. Furthermore, we wish to address the dynamics of (ultra-)cold atoms in optical lattices, where multiparticle interaction effects become important.

Publications:
• A. Krug and A. Buchleitner, 'Chaotic ionization of non-hydrogenic alkali Rydberg states', Phys. Rev. Lett. 86, 3538 (2001).
• A. Krug and A. Buchleitner, 'Chaotic ionization of non-classical alkali Rydberg states – computational physics beats experiment', Comp. Phys. Comm. 147, 394 (2002).
• A. Krug and A. Buchleitner, 'Microwave ionization of alkali-metal Rydberg states in a realistic numerical experiment', Phys. Rev. A 66, 053416 (2002).
• A. Buchleitner, D. Delande, and J. Zakrzewski, 'Non-dispersive wave packets in periodically driven quantum systems', Phys. Rep. 368, 409-547 (2002).
• S. Wimberger, A. Krug, and A. Buchleitner, 'Decay Rates and Survival Probabilities in Open Quantum Systems', Phys. Rev. Lett. 89, 263601 (2002).
• A. Krug, S. Wimberger and A. Buchleitner, 'Decay, interference and chaos: How simple atoms mimic disorder', Eur. Phys. J. D 26, 21 (2003).
• J. Madroñero and A. Buchleitner, 'Planar helium under periodic driving', in High Performance Computing in Science and Engineering, Munich 2004. Transactions of the Second Joint HLRB and KONWIHR Result and Reviewing Workshop, March 2nd and 3rd, 2004, Technical University of Munich (Springer, Berlin, ISBN 3-540-44326-6).
• J. Madroñero, 'Spectral properties of planar helium under periodic driving', PhD thesis, Ludwig-Maximilians-Universität München (2004).

Electronic density of a two-electron wave packet eigenstate of 2D helium under electromagnetic driving (top), as a Husimi representation in the phase space component spanned by χ1 and p1 (the position and momentum, respectively, of the outer electron in the classical collinear frozen planet configuration). Much as the associated classical island structure shown in the classical phase space portraits in the bottom part of the figure (for different driving field phases ωτ = 0 (left), π/2 (middle), π (right), with ω the frequency of the driving field), the quantum eigenstate propagates along a periodic frozen planet trajectory, without dispersion.


Max Planck Institute of Microstructure Physics, Halle, Theory Department

Computational Material Design

The ability to predict material properties is a fundamental requirement for scientific and technological advancement. Computational materials science is thereby an important and powerful tool in the development of advanced materials and their device applications. The properties of these materials, such as their electronic and magnetic behavior, can be changed in a controlled way by varying external parameters such as the pressure, the chemical composition or the temperature. In this manner one can learn how to improve the mechanical, optical and electronic properties of known materials, or one can predict properties of new materials which are not found in nature but are designed and synthesized in the laboratory. This is the main research field of the Theory Department at the Max Planck Institute of Microstructure Physics in Halle. Most theoretical efforts are concentrated on low-dimensional magnetic systems, such as magnetic films and nanostructures. More specifically, our research is currently focused on (i) exchange interactions in magnetic ultrathin films and nanostructures, (ii) magneto-electronics, and (iii) electron correlation spectroscopies of solid surfaces. For these purposes we develop ab-initio concepts that are put into state-of-the-art computer codes. Using efficient numerical algorithms and modern supercomputers we can simulate experiments and study the properties of realistic materials.

Arthur Ernst

Simulation of Correlated Systems

Research objectives: The longstanding goal of computer simulation of materials is to gain accurate knowledge of the structural and electronic properties of realistic condensed-matter systems. Since materials are complex in nature, past theoretical investigations were restricted mostly to simple models. Recent progress in computer technology has made a realistic description of a wide range of materials possible. Since then, computational materials science has rapidly developed and expanded into new fields of science and technology, and computer simulations have become an important tool in many areas of academic and industrial research, such as physics, chemistry, biology and nanotechnology.

Computational approach: Most of the methods used in computational physics are based on density functional theory (DFT). Density functional theory deals with inhomogeneous systems of electrons. The approach is based on the theorem of Hohenberg and Kohn, which states that the ground state properties of a many-particle system can be exactly represented in terms of the ground state density. This allows replacing the many-particle wave function by the particle density or current density of the system. The desired ground state quantities can be obtained by minimization of a unique energy functional, which is decomposed into one-electron contributions and the so-called exchange-correlation energy functional, which contains all many-body effects. In practical applications it is usually approximated by some model density functional. The variational problem can be reduced to an effective one-electron Kohn-Sham equation describing non-interacting electrons in an effective potential and in principle reproducing the exact ground-state density. One of the most popular functionals is the local density approximation (LDA), in which all many-body effects are included on the level of the homogeneous electron gas. Such an approach enables one to carry out so-called "first-principles" or "ab-initio" calculations (direct calculations of material properties from fundamental quantum mechanical theory).
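For reference, the Kohn-Sham scheme described above can be written in its standard textbook form (a generic statement, not a rendering of the specific Green-function formulation used by the group):

```latex
\left[ -\frac{\hbar^2}{2m}\nabla^2 + v_{\mathrm{ext}}(\mathbf{r})
  + e^2\!\int \frac{n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^3r'
  + \frac{\delta E_{\mathrm{xc}}[n]}{\delta n(\mathbf{r})} \right]
  \varphi_i(\mathbf{r}) = \varepsilon_i\,\varphi_i(\mathbf{r}),
\qquad
n(\mathbf{r}) = \sum_{i\,\in\,\mathrm{occ}} |\varphi_i(\mathbf{r})|^2 ,
\qquad
E_{\mathrm{xc}}^{\mathrm{LDA}}[n] = \int n(\mathbf{r})\,
  \varepsilon_{\mathrm{xc}}^{\mathrm{hom}}\bigl(n(\mathbf{r})\bigr)\,\mathrm{d}^3r .
```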


Many fundamental properties, for example bond strengths and reaction energies, can be estimated from first principles. However, the LDA fails to describe the ground state properties of systems with strongly correlated electrons. One of the reasons is that the LDA is not self-interaction free: in the LDA any state includes an unphysical interaction of the electron with itself. In most systems, where the electrons are truly delocalized, the self-interaction contribution is negligible and therefore the LDA is an excellent approximation. For systems with strongly localized electron states one can use another efficient computational scheme, which is able to solve the problem with the self-interaction appearing in the local density approximation. This self-interaction can be exactly subtracted if the state is sufficiently localized. This leads to the so-called self-interaction-corrected local spin-density approximation (SIC-LSD). For solids the practical implementation of the SIC-LSD entails great difficulties due to the requirement of a unified treatment of both localized and delocalized states. We have implemented the SIC-LSD within the framework of the Korringa-Kohn-Rostoker (KKR) Green function method, in which representation the SIC-LSD formalism takes a more elegant and simpler form. Moreover, the KKR method with screened structure constants provides O(N) treatment of large systems such as surfaces, interfaces, and real-space clusters. Another advantage of the multiple scattering implementation of the SIC-LSD formalism is that it can easily be generalized to include the coherent potential approximation (CPA), extending the range of applications to random alloys. In addition, one can use it to treat static correlations beyond the LDA by studying pseudo-alloys whose constituents are composed, e.g., of two different states of a given system: one delocalized, described by the LDA potential, and another localized, corresponding to the SIC-LSD potential.
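The structure of the corrected functional can be stated explicitly; in the commonly used Perdew-Zunger form it reads (again a generic expression, independent of the multiple-scattering implementation described above):

```latex
E^{\mathrm{SIC\text{-}LSD}}
  = E^{\mathrm{LSD}}[n_\uparrow, n_\downarrow]
  - \sum_{\alpha}^{\mathrm{occ.}}
      \Bigl( U[n_\alpha] + E_{\mathrm{xc}}^{\mathrm{LSD}}[n_\alpha, 0] \Bigr),
\qquad
U[n_\alpha] = \frac{e^2}{2} \iint
    \frac{n_\alpha(\mathbf{r})\, n_\alpha(\mathbf{r}')}
         {|\mathbf{r}-\mathbf{r}'|}\; \mathrm{d}^3 r\, \mathrm{d}^3 r' .
```

The sum runs over the orbital densities chosen to be treated as localized; for truly delocalized states the subtracted terms vanish and the LSD limit is recovered.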

Project description: We have applied the SIC multiple scattering scheme in its CPA extension to the α-γ phase transition in Ce. Ce, being the first element containing an f electron, shows an interesting phase diagram. In particular, it exhibits the isostructural (fcc-fcc) α-γ phase transition, which is associated with a 15-17% volume collapse and the quenching of the magnetic moment. The low-pressure γ-phase shows a local magnetic moment and is associated with a trivalent configuration of Ce. At the temperatures at which the γ-phase is accessible, it is in a paramagnetic disordered local moment state. By increasing the pressure, the material first transforms into the α-phase, which is indicated to be an intermediate valence state with quenched magnetic moment. At high pressure Ce eventually transforms into the tetravalent α'-phase. With increasing temperature, the α-γ phase transition shifts to higher pressures, ending in a critical point (600 K, 20 kbar), above which there is a continuous crossover between the two phases. In order to determine the ground state configuration of Ce at a given volume, we calculated the total energies for different volumes using the LDA for the α-phase and the SIC formalism, with one f-electron corrected and occupying in sequence all possible f-states, for the γ-phase. The calculated ground state properties of Ce are generally in good agreement with experiments, demonstrating the efficiency of the SIC formalism. Here we show the phase diagram obtained from the free energies of the α-γ pseudo-alloy. It can clearly be seen in the figure how the transition becomes continuous above the critical temperature. The slope of the phase separation line is in very good agreement with experiment. The critical temperature overestimates the experimental one by a factor of two, which is still reasonable considering that the critical temperature is very sensitive to various small details of the calculation.

The SIC multiple scattering formalism makes it possible to describe the ground state properties of a wide range of materials with localized electrons. In combination with the CPA it would allow us to develop in the future a new dynamical mean field approach, which might describe many-body effects such as the Kondo screening of local moments at low temperature.

Publications:
• M. Lüders, A. Ernst, M. Däne, Z. Szotek, A. Svane, D. Ködderitzsch, W. Hergert, B.L. Györffy, and W.M. Temmerman, "Self-interaction correction in multiple scattering theory", submitted to Phys. Rev. B; cond-mat/0406515.


Phase diagram obtained for the pseudo-alloy composed of α and γ Ce. Crosses indicate the calculated and experimental critical points, respectively. The colors indicate the degree of localization: red is delocalized (α-phase), blue is localized (γ-phase).


Max Planck Institute of Microstructure Physics, Weinberg 2, D-06120 Halle, Experimental Department II, Research Group

Interfaces and Material Systems

The basic research in Experimental Department II, headed by Prof. Gösele, is aimed at supplying the scientific understanding for the design and fabrication of improved or completely new materials, especially nano-structured systems. The group of Prof. Woltersdorf is concerned with theoretical and experimental investigations of interfaces and interlayers in metallic, ceramic, and glassy systems, in order to control their formation and effect, with the aim of influencing the macroscopic properties as desired. This comprises modelling and characterisation down to atomic dimensions, including transformation kinetics and the imaging of bonding states, the elucidation of solid state chemistry transport and exchange reactions in high temperature composites, and the optimisation of energy dispersive processes in reinforced composites with metallic, ceramic or glassy matrices.

Andreas Sundermann, group of Prof. Woltersdorf

Molecular Design of Preceramic Polymers

Research objectives: Research in two of our projects involves the molecular design of polymers and gels for the conversion into ceramic materials with controlled thermochemical and mechanical properties (stability, porosity, processability, etc.). In collaboration with experimental chemists we support this complex optimisation process by providing insight into the material system and its reaction kinetics down to the atomic level. Methods of high resolution and analytical electron microscopy (HRTEM, EDXS, EELS, ELNES) are applied to provide microstructural and nanochemical information, including band structure information. Correspondingly, quantum chemical methods at the semiempirical (AM1, PM3) and the density functional theory (DFT) level are utilised for (i) the interpretation of the spectroscopic data and (ii) the modelling of the reaction kinetics, especially with respect to the transformation behaviour. These types of simulations are rather CPU time intensive.

Computational approach: Methods of computational quantum chemistry are applied in two major fields:

a) Elaborating reaction mechanisms for the polymer formation and the subsequent pyrolysis process: This focusses on the identification of the relevant reactions, involving the formation or cleavage of crucial chemical bonds. Based on such knowledge, proper catalysts can be chosen, or chemical groups can be modified to give optimal performance.

b) Simulating the electronic structure of polymers and ceramics: DFT calculations for appropriate cluster models are utilised to interpret ELNES data. In principle, ELNES probes the density of states above the Fermi level. The energy loss spectrum is characteristic of a certain element in its local chemical environment. These data can be correlated with simulated electronic structures to identify the binding situation of the atoms in materials.
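Comparing a calculated density of states with a measured ELNES edge typically requires broadening the discrete cluster eigenvalues. A minimal sketch of such a comparison step is given below; the Gaussian width and the restriction to states above the Fermi level are illustrative assumptions, not parameters of the studies described here.

```python
import numpy as np

def broadened_dos(eigenvalues, e_grid, sigma=0.8):
    """Gaussian-broadened density of states from discrete cluster
    eigenvalues (eV); sigma mimics lifetime and instrumental broadening
    and is an assumed value, not a fitted parameter."""
    e = np.asarray(eigenvalues)[None, :]
    g = np.exp(-((e_grid[:, None] - e) ** 2) / (2 * sigma ** 2))
    return g.sum(axis=1) / (sigma * np.sqrt(2 * np.pi))

# e.g. restrict to states above the Fermi level before comparing with
# the measured C-K or N-K edge fine structure:
# dos = broadened_dos(eigs[eigs > e_fermi],
#                     np.linspace(e_fermi, e_fermi + 40.0, 400))
```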

Project description: The computational contributions are part of the following recent projects supported by the DFG:

a) "Molecular design of preceramic polymers by quantum chemical methods applied to the synthesis of new metallacarbodiimide polymers and gels". Within this project the major objective was to develop a wet-chemical process for the production of metal-carbide/metal-nitride composite materials. In collaboration with the group of Prof. Riedel (TU Darmstadt), metal carbodiimides were proposed. These polymers have the general composition R_xM(N=C=N)_(y-x)/2, where M can be Si or Ti. By comparison of ELNES data for the C-K and N-K edges with simulated densities of states we were able to identify the N=C=N group as an intact entity in the cross-linked polymer. Additionally, the role of pyridine as a catalyst for the polymerisation could be revealed by quantum chemical modelling, and optimum processing parameters were derived.

b) "New oxidation and creep resistant Si(Al,B)CO gradient fibres made of modified polysiloxanes". This project is a collaboration with Prof. Riedel (TU Darmstadt) and Dr. Clade (Fraunhofer-Institut f. Silicatforschung, Würzburg).


Nanochemistry of in situ formed Si2ON2 nanowires. The figure demonstrates the results of combined HREM/STEM, EELS, ELNES and EFTEM investigations of an in situ formed Si2ON2 nanowire in the system polysiloxane-Si after pyrolysis in a nitrogen atmosphere. The STEM image (upper left) reveals a core-shell structure of the nanowires; the corresponding EEL spectra (upper right) across an individual nanowire clearly show nitrogen to be present only in the core region. In addition, the fine structures of the Si-L23 edge (cf. the ELNES profiles, lower right) demonstrate that the shell region consists of SiO2, and the central region (core) of a mixture of SiO2 and Si2ON2, due to the simultaneous transmission of core and shell. The EFTEM image (lower left) again shows the distribution of oxygen and nitrogen. For the interpretation of the recorded EEL/ELNES spectra, quantum chemical modelling and band structure calculations were necessary.

Here, our simulation effort is focussed on identifying suitable silane precursors in order to control cross-linking during the polymer synthesis and to provide functional groups to harden the polymer after spinning. From a computational point of view, transition states for the silane coupling reaction will be optimised at the DFT level. The influence of several substituents on the silicon atoms as well as a variety of linking molecules will be screened "in silico" to guide the synthesis of polymers optimal for the spinning process. At a later stage, ELNES simulations of the measured spectra are required to analyse the chemical composition of the fibres in relation to the different processing steps.

Publications:
• O. Lichtenberger, J. Woltersdorf, N. Hering, and R. Riedel, Z. Anorg. Allg. Chemie 626, 1881-1891 (2000).
• N. Hering, K. Schreiber, R. Riedel, O. Lichtenberger, and J. Woltersdorf, Appl. Organometallic Chemistry 15, 879-886 (2001).
• O. Lichtenberger, J. Woltersdorf, R. Riedel, Z. Anorg. Allg. Chemie 628, 569-607 (2002).
• O. Lichtenberger, E. Pippel, J. Woltersdorf, R. Riedel, Materials Chemistry and Physics 81, 195-201 (2003).


Max Planck Institute for Solid State Research, Stuttgart, Department Nanoscale Science, Prof. Kern

Nanoscale Science

Research efforts in the Nanoscale Science Department of Prof. Klaus Kern at the Max Planck Institute for Solid State Research in Stuttgart are centered on nanometer-scale science and technology, primarily focussing on solid state phenomena that are determined by small dimensions and interfaces. Materials with controlled size, shape and dimension, ranging from clusters of a few atoms to nanostructures with several hundred or thousand atoms, to ultrathin films with nanometer thickness, are studied. A central scientific goal is the detailed understanding of interactions and processes on the atomic scale. Novel methods for the characterization and control of processes on the nanometer scale, as well as tools to manipulate and assemble nanoobjects, are developed. Of particular interest are: fundamentals of epitaxial growth and self-organization phenomena, atomic scale fabrication and characterization of metallic, semiconducting and molecular nanostructures, quantum electronic transport in nanostructures, atomic scale electron spectroscopy and optics on the nanometer scale. As surface phenomena play a key role in the understanding of nanosystems, the structure, dynamics and reactivity of surfaces in contact with gaseous or liquid phases are also in the focus of interest.

A. Seitsonen and K. Kern

Density-Functional Theory for Adsorbed Molecular Complexes

Research objectives: Modern surface science has achieved a microscopic understanding of catalytic processes at surfaces and interfaces due to the availability of, on the one hand, instruments offering atomic resolution like the scanning tunnelling microscope (STM) and, on the other hand, powerful computational techniques such as density functional theory (DFT). Combining both of them leads to a detailed picture of the interactions of atoms and small molecules with surfaces and provides insight into the influence of the substrate on the reactivity of the constituents. Both methods complement each other, since STM images do not only contain information about the underlying atomic structure but also about the electronic structure. Theoretical modelling and comparison of the calculations with the experimental data can help to separate these contributions.

We employ density functional theory (DFT) to calculate the electronic structure of the adsorption system and subsequently model the STM measurements. This allows us not only to reproduce the experimental assignments but also to explore the details of the electronic and atomic structure, which is often very difficult even with a combination of different experimental techniques. Density functional theory is currently the only reliable electronic-structure method which can be routinely applied to systems with 100-1000 atoms.

Computational approach: The methodology we use is based on the Kohn-Sham equations [1], which have to be solved self-consistently in Car-Parrinello [2] fashion due to their non-linear character. We employ the generalised gradient approximation (GGA) as the exchange-correlation functional to account for the complicated many-body interactions whose exact form is not known. We replace the interaction between the inert core electrons and the valence electrons with pseudopotentials and expand the wave functions in plane waves, as this leads to an efficient evaluation of the matrix elements needed in the method, which furthermore depends heavily on the usage of fast Fourier transforms (FFT). Since each of the about 100-1000 single-particle orbitals requires about 10^5-10^6 plane wave coefficients, we easily end up with of the order of 10^9 unknowns to be solved for. Thus high-performance computing is required to perform the calculations, and the solution of the electronic and atomic structure of a single configuration can easily take several days on 16-64 processors of the IBM Regatta. The method also yields a high numerical performance due to the large number of coefficients used, leading to long and contiguous memory accesses. The major obstacle to parallelisation, the three-dimensional FFT, has been solved efficiently, allowing parallel computing.
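The size estimate quoted above can be reproduced with a short back-of-envelope calculation; the specific numbers chosen below are simply the upper ends of the ranges given in the text.

```python
# Back-of-envelope size of the plane-wave problem quoted above.
n_orbitals = 1000            # upper end of the 100..1000 range quoted above
n_coeff_per_orbital = 10**6  # upper end of the 10^5..10^6 range quoted above
unknowns = n_orbitals * n_coeff_per_orbital
memory_gb = unknowns * 16 / 1e9   # complex double precision, 16 bytes each
print(f"{unknowns:.1e} unknowns, ~{memory_gb:.0f} GB for the wave functions alone")
# -> 1.0e+09 unknowns, ~16 GB: the orbitals (and the 3D FFTs acting on them)
#    therefore have to be distributed over many processors.
```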



To compare the results from the DFT-GGA calculations with the STM experiments we rely on the simple Tersoff-Hamann method [3], which basically states that the tunnelling current between the sample and the STM tip is proportional to the local density of states for small bias voltages. We model the simulated images in the constant current mode, where we map the isosurface of the electron density close to the Fermi energy at a given value.
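In practice, such a constant-current image amounts to tracing an isosurface of the computed density of states near the Fermi energy. The following sketch shows this step for a density given on a real-space grid; the grid layout, function name and isovalue are assumptions for illustration, not the actual post-processing used here.

```python
import numpy as np

def constant_current_image(rho, z, isovalue):
    """Tersoff-Hamann style constant-current STM image (sketch).
    rho[ix, iy, iz]: density of states near E_F on a real-space grid,
    z[iz]: heights above the surface (increasing).  For each lateral
    point the apparent tip height is the largest z at which rho still
    exceeds the chosen isovalue."""
    nx, ny, _ = rho.shape
    image = np.full((nx, ny), z[0])
    for ix in range(nx):
        for iy in range(ny):
            above = np.nonzero(rho[ix, iy, :] >= isovalue)[0]
            if above.size:
                image[ix, iy] = z[above.max()]
    return image
```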

We have also performed ab initio molecular dynamics runs in order to study the stability and vibrational properties of the carbonyl complexes.

Project description: We have been studying the adsorption and decomposition of transition metal carbonyls at the Cu(001) surface. These carbonyls are intermediates in the hydroformylation reaction. Furthermore, the modification of the magnetic properties of the central cobalt atom by the ligands is of interest. Here we concentrate on the cobalt tetracarbonyl molecule Co(CO)4. Cu(001) presents a highly symmetric, only slightly corrugated surface face. In the DFT calculations we find the preferred adsorption site to be the four-fold hollow position in between four substrate Cu atoms, as shown in Figure 1a. The diffusion of the molecular complex would proceed over the bridge sites. The carbon monoxide molecules are bound directly to the central cobalt atom with their carbon ends. The complex is relatively stable, but can vibrate tangentially around the cobalt adatom along a low frequency mode.

Figure 1b shows an experimentally acquired STM image. Several objects with four-fold symmetry are visible on the surface, corresponding to single, isolated tetracarbonyl molecules. The inset shows a magnification of the image produced by a single molecule. The molecule is imaged as four protrusions surrounded by a dark rim. The simulated STM image derived from the DFT calculations is shown in Figure 1c. The main characteristics, e.g. an increase above the molecule and the lateral dimensions, are well reproduced; however, we could not afford a large enough supercell to inspect whether the dark screening "ring" is also visible in the calculations. The brightest and thus highest areas are above the oxygen atoms of the CO molecules, as one would intuitively expect due to their height above the carbon and cobalt atoms. However, one has to be careful and try to support the experiments with theoretical investigations such as those presented here, as there are examples where the simple assumption of a correlation between the STM current and the height of the adsorbates does not hold.

We have found similar features in other carbonyl complexes adsorbed on Cu(100) and extended our studies to partially decomposed carbonyls, where one or more of the CO molecules have been removed from the complex. Both experimental and theoretical work are in progress.

Publications:
• [1] W. Kohn and L. J. Sham, Self-Consistent Equations Including Exchange and Correlation Effects, Phys. Rev. 140, A1133 (1965); see also R. M. Dreizler and E. K. U. Gross, Density Functional Theory: An Approach to the Quantum Many-Body Problem, Springer-Verlag, Berlin, Heidelberg, 1990.
• [2] R. Car and M. Parrinello, Unified Approach for Molecular Dynamics and Density-Functional Theory, Phys. Rev. Lett. 55, 2471 (1985).
• [3] J. Tersoff and D. R. Hamann, Theory of the scanning tunnelling microscope, Phys. Rev. B 31, 805 (1985).

Figure 1: Cobalt tetracarbonyl molecule Co(CO)4 on the Cu(001) surface. a: Preferred adsorption site, the four-fold hollow position in between four substrate Cu atoms. b: Experimentally acquired STM image. The inset shows a magnification of the image produced by a single molecule. c: Simulated STM image derived from the DFT calculations.


Max Planck Institute of Biophysics, Department of Molecular Membrane Biology, Hartmut Michel

Molecular Membrane Biology

The aim of the department is to understand the function and mechanism of membrane proteins based on accurately known structures. The latter, namely the structure determination of membrane proteins, is the problem, because it is very difficult to crystallize membrane proteins for either X-ray or electron crystallography, and because alternative methods of membrane protein structure determination still have to prove their usefulness. The strengths of the department lie in the crystallization of membrane proteins and the subsequent X-ray crystallographic analysis. In the department, well-ordered crystals of respiratory proteins were obtained with a bacterial cytochrome c oxidase, a bacterial succinate:quinone oxidoreductase (fumarate reductase) and the yeast mitochondrial cytochrome bc1 complex (see group reports). The structure determinations of these membrane protein complexes, which are also known as complex IV, complex II, and complex III, respectively, of the respiratory chain, have been completed. The structure of the yeast cytochrome bc1 complex is already the fifth, that of the succinate:quinone oxidoreductase the sixth membrane protein structure determined in the department, demonstrating its leading role worldwide.

Having determined the structure of a membrane protein, we aim to understand its mechanism of action. We address this question by site-directed mutagenesis, specific labeling and spectroscopic experiments. These experimental approaches are combined with theoretical work, namely electrostatic calculations and molecular dynamics simulation (in collaboration with the Center of Bioinformatics, University of Saarland). Elucidation of the atomic level processes, including long-range electron and proton transfer steps and long-time conformational protein dynamics, requires extended computational tools and techniques beyond what is currently available. The collaboration with RZG therefore provides us with the tools that enable the modeling and simulation of increasingly complex systems such as membrane proteins, with a reduced level of approximation and an increased level of accuracy of our simulations.

Elena Olkhova and Hartmut Michel

Molecular Dynamics Simulations and Hydrogen-Bonded Network Dynamics of Cytochrome c Oxidase from Paracoccus denitrificans

Research objectives: The major subject of our research is the investigation of the coupling of electron transfer and proton translocation in cytochrome c oxidase (COX) from Paracoccus denitrificans. Our group determined the atomic structure of this protein complex in 1995. The enzyme ultimately couples electron transfer from cytochrome c to an oxygen molecule with proton translocation across the inner mitochondrial or bacterial membrane. The reaction requires complicated chemical processes to occur at the catalytic site of the enzyme in coordination with proton translocation, the exact mechanism of which is not known at present.

Computational approach: Two main theoretical approaches have been used to investigate the coupling of electron and proton transfer: an internal water molecule prediction scheme and a molecular dynamics study of COX in the fully oxidized state, embedded in a hydrated dimyristoylphosphatidylcholine (DMPC) lipid bilayer membrane. Both systems consist of more than 100,000 atoms. Energy minimization, membrane modelling and molecular dynamics simulations were performed in parallel with 32 processors, using version c28b2 of the biomolecular simulation program CHARMM.

Project description: Two parallel molecular dynamics simulations with different levels of protein hydration, 1.125 ns each in length, were carried out under conditions of constant temperature and pressure, using three-dimensional periodic boundary conditions and full electrostatics, to investigate the distribution and dynamics of water molecules and their corresponding hydrogen-bonded networks inside COX. The average number of solvent sites in the proton conducting pathways was determined. The highly fluctuating hydrogen-bonded networks, combined with the significant diffusion of individual water molecules, provide a basis for the transfer of protons in COX, therefore leading to a better understanding of the mechanism of proton pumping. The importance of the hydrogen bonding network and the possible coupling of local structural changes to larger scale changes in the COX during the catalytic cycle have been shown.
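Analyses of the kind described above rest on identifying hydrogen bonds and water sites frame by frame. A minimal sketch of a purely geometric hydrogen-bond count is given below; the distance and angle cutoffs are common textbook choices and are not taken from the study itself, which was performed with CHARMM.

```python
import numpy as np

def hydrogen_bonds(donors, hydrogens, acceptors, d_max=3.5, angle_min=150.0):
    """Count hydrogen bonds with a simple geometric criterion:
    donor-acceptor distance below d_max Angstrom and a donor-H...acceptor
    angle above angle_min degrees (both cutoffs are assumed values).
    donors, hydrogens, acceptors: (n, 3) coordinate arrays; donors and
    hydrogens are paired row by row."""
    bonds = []
    for i, (d, h) in enumerate(zip(donors, hydrogens)):
        dist = np.linalg.norm(acceptors - d, axis=1)
        for j in np.nonzero(dist < d_max)[0]:
            v1, v2 = d - h, acceptors[j] - h
            cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            if np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) > angle_min:
                bonds.append((i, j))          # donor i bonded to acceptor j
    return bonds
```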


Significance: The MD simulations provide initial insights into the coupling between local changes in the active sites of the COX and large conformational changes in the whole protein, in particular conformational changes in the hydrogen bonded network, during the catalytic cycle. However, a few issues were not completely solved in the framework of our previous study and should be the topic of future research. Different simulation protocols and membrane systems should be tested in order to solve a problem concerning the stability of a simulated system. Our system is the first simulation of an odd-shaped membrane protein that substantially sticks out of the membrane. Therefore, we may be lacking the typical stabilization of parallel lipid bilayers by our use of somewhat artificial periodic boundary conditions and Ewald long-range electrostatics for systems of this size. However, due to the large computational costs involved, such studies are just beginning to become possible with the increasing power of new computers. Given the instability observed in the simulations, it was decided to perform some quantum chemical calculations to obtain more accurate heme charges for the reduced state of the enzyme, using the ESP module of NWChem 4.5.

Publications:
• Olkhova, E., M.C. Hutter, M.A. Lill, V. Helms and H. Michel, "Dynamic Water Networks in Cytochrome c Oxidase from Paracoccus denitrificans Investigated by Molecular Dynamics Simulations," Biophys. J. 86, 1873-1889 (2004).


Figure 1: Crystals of the co-complex consisting of an antibody Fv fragment and the cytochrome bc1 complex from S. cerevisiae. The crystals have an approximate size of 0.4 x 0.3 x 0.3 mm³. They belong to the space group C2 and diffract up to 2.4 Å resolution (research group Hunte).

Figure 2: The distribution of water molecules in the COX during the MD simulation (research group Michel).


Max Planck Institute for Biochemistry, Martinsried near Munich, Dept. Molecular Structural Biology

Three-Dimensional Imaging of Molecular Machines

The department is engaged in studying the three-dimensional architecture of molecular machines which have important functions in the cell, e.g. chaperonins and proteasomes. Chaperonins help other proteins to fold properly, and proteasomes remove misfolded proteins by selective degradation. A wide repertoire of biochemical and biophysical techniques is used to elucidate the structure of such machines. A long-term project in this context is the development of electron tomography, the most widely applicable, non-invasive approach for obtaining three-dimensional information by electron microscopy. It can depict unique structures and scenes, such as many large supramolecular assemblies, organelles and even whole cells, with a resolution of a few nanometers. This is high enough to identify macromolecules by their structural signature. It therefore allows the study of macromolecular architectures in their cellular context and thus helps to bridge the gap between molecular and cellular structural biology.

Friedrich Förster and Reiner Hegerl

Molecular Visualization of Cells by Cryo-Electron Tomograms

Identification of macromolecules in cryo-electron tomograms based on their structural signature. In this hybrid approach the structures of the macromolecules under scrutiny are known from X-ray crystallographic studies or other high-resolution approaches. By computing a suitable six-dimensional cross-correlation function CCF(x,y,z,ϕ,ψ,θ), the positions and orientations of macromolecules can be determined.

Research objectives: Our research focuses on the visualization of the macromolecular architecture of a cell. We use cryo-electron microscopy to image cells or organelles in situ. The experimental technique images the specimens in a vitrified state, which ensures excellent preservation. However, in the absence of staining materials the electron micrographs inevitably suffer from a very low signal-to-noise ratio (SNR), since the specimen can only be irradiated with very low electron doses. Nevertheless, the technique has proven to be able to resolve large macromolecules in the context of entire cells. In order to quantify the occurrence of complexes in cellular tomograms, we develop pattern recognition approaches which are able to derive a maximum amount of information from cryo-electron tomograms.

Computational approach: We use several methods for data mining from tomograms. The computational tools developed in our laboratory comprise 'denoising' techniques based on non-linear anisotropic diffusion and segmentation algorithms based on the statistical properties of the tomogram voxels. For the quantitative exploration of tomograms, suitable matched filtering algorithms were developed. The maxima of the filter output denote the position and orientation of macromolecules.

Project description: Particle detection in cryo-tomograms by means of locally normalized cross-correlation. Cryo-electron tomograms suffer from two principal obstacles: first, the SNR is extremely low and varies throughout a tomogram. Secondly, the tomograms contain an imaging artifact which is due to the limited angular tilt range of specimen holders in the electron microscope. Since the holder can only be tilted to ±70°, not all of the spatial information of the object can be gathered; in Fourier space a wedge-shaped region remains unsampled, which leads to the 'missing wedge' effect.

We developed and implemented a matched filtering procedure for the purpose of detecting macromolecules by their known structural signature (see Figure). The algorithm computes a cross-correlation function CCF as a function of the spatial variables x, y, and z and the Euler angles ϕ, ψ, and θ, which describe the orientation of the particles within the tomogram. In order to take care of the varying contrast within the tomogram, a local normalization is performed. This normalization is done in the vicinity of a macromolecule. Furthermore, the missing wedge is taken into account by constraining the correlation to the sampled data points.

Computing the CCF(x,y,z,ϕ,ψ,θ) is in general time-demanding, since the rotational part of the problem cannot be addressed efficiently: explicit scanning of the rotational degrees of freedom ϕ, ψ, and θ is inevitable. However, the computation of the CCF can easily be implemented in a parallel way, which reduces the time to an acceptable scale.

Currently, the developed procedure is applied to cryo-electron tomograms of various cells, in particular prokaryotic species. These tomograms are scanned for features that match the structures of large molecular machines such as ribosomes or proteasomes.
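The translational part of the six-dimensional search can be carried out with fast Fourier transforms for every sampled orientation. The following sketch illustrates this structure only; it omits the local normalization and the missing-wedge constraint that the actual procedure applies, and the Euler-angle convention, cutoffs and function names are illustrative assumptions.

```python
import numpy as np
from numpy.fft import fftn, ifftn
from scipy.ndimage import rotate

def scan_rotations(tomogram, template, angle_sets):
    """For every sampled orientation (phi, psi, theta) the translational
    search is done in one pass with FFTs (correlation theorem).  Returns
    the best correlation value and the best orientation index per voxel."""
    best_cc = np.full(tomogram.shape, -np.inf)
    best_idx = np.zeros(tomogram.shape, dtype=int)
    T = fftn(tomogram)
    for k, (phi, psi, theta) in enumerate(angle_sets):
        r = rotate(template, phi, axes=(1, 2), reshape=False)
        r = rotate(r, psi, axes=(0, 2), reshape=False)
        r = rotate(r, theta, axes=(0, 1), reshape=False)
        r = (r - r.mean()) / r.std()
        ref = np.zeros_like(tomogram)                     # template assumed
        ref[:r.shape[0], :r.shape[1], :r.shape[2]] = r    # smaller than tomogram
        cc = np.real(ifftn(T * np.conj(fftn(ref))))
        better = cc > best_cc
        best_cc[better], best_idx[better] = cc[better], k
    return best_cc, best_idx
```

Because each orientation is independent, the loop over `angle_sets` is trivially distributed over processors, which is the parallelization mentioned above.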

Publications:
• Frangakis AS, Bohm J, Förster F, Nickell S, Nicastro D, Typke D, Hegerl R, Baumeister W: "Identification of macromolecular complexes in cryoelectron tomograms of phantom cells", Proc. Natl. Acad. Sci. USA 99, 14153-14158 (2002).
• Medalia O, Weber I, Frangakis AS, Nicastro D, Gerisch G, Baumeister W: "Macromolecular architecture in eukaryotic cells visualized by cryoelectron tomography", Science 298, 1209-1213 (2002).




Max Planck Institute for Astrophysics, Garching near Munich, Dep. Cosmology

Simulating the Formation of Galaxies

Prof. Simon D.M. White and his research group are working on a number of topics in cosmology, ranging from studies of the cosmic microwave background radiation, to detailed modelling of large-scale structure and galaxy formation, and gravitational lensing. Particularly for the theoretical study of the galaxy formation process over the 13.7 billion year long history of the Universe, direct numerical simulations are an indispensable tool. They can be used to follow the non-linear growth of structure in the dark matter and gas components under the combined effects of gravitational, hydrodynamical and radiative interactions. As an illustrative example, a new, very large simulation of the dark matter dynamics is discussed below. Other numerical activities of the group include hydrodynamical simulations of galaxy clusters with non-trivial physics such as star formation, thermal conduction, magnetic fields, and chemical enrichment, and simulations of the reionization process of the Universe by the first cosmic sources.

Volker Springel, MPI for Astrophysics

The "Millennium" Simulation – A Representative Model for the Cosmic Galaxy Population

Research objectives: Recent progress in observational cosmology has established a standard model for the material content of the Universe and its initial conditions for structure formation 380,000 years after the Big Bang. This makes it one of the biggest challenges for theoretical cosmology to explain how the structures we see in the Universe today have formed out of these initial conditions. The aim of this project is to make detailed theoretical predictions for the hierarchical galaxy formation process, and to compare them with the large observational data sets that are presently obtained with deep redshift surveys.

Computational approach: We use a massively parallel N-body code for a collisionless fluid in an expanding background spacetime (the GADGET-2 code). Gravitational forces are computed with a hierarchical multipole expansion on small scales, and with a particle-mesh method based on Fourier techniques on large scales. Individual and adaptive time-steps are used for all particles.
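As an illustration of the long-range (particle-mesh) part of such a force computation, the sketch below assigns particles to a mesh and solves Poisson's equation in Fourier space. It is a didactic stand-in, not GADGET-2 itself: the assignment scheme is reduced to nearest-grid-point, units are arbitrary, and the short-range tree part is omitted.

```python
import numpy as np

def pm_potential(positions, box, n_grid, G=1.0):
    """Particle-mesh step (sketch): deposit particles on a grid, solve
    Poisson's equation nabla^2 phi = 4 pi G rho in Fourier space, and
    return the periodic potential on the mesh (forces would follow from
    finite differences of phi)."""
    cell = box / n_grid
    idx = np.floor(positions / cell).astype(int) % n_grid
    rho = np.zeros((n_grid,) * 3)
    np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    rho = rho / cell**3
    rho -= rho.mean()                       # periodic box: use the density contrast
    k = 2 * np.pi * np.fft.fftfreq(n_grid, d=cell)
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                       # avoid division by zero at k = 0
    phi_k = -4 * np.pi * G * np.fft.fftn(rho) / k2
    phi_k[0, 0, 0] = 0.0                    # zero-mean potential
    return np.real(np.fft.ifftn(phi_k))
```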

Project description: Most of the mass in the Universe (~85%) consists of dark matter, an as yet unidentified weakly interacting elementary particle. Initial fluctuations in this mass component, seeded by an early inflationary epoch, are amplified by gravity as the universe expands, and eventually collapse to form the galaxies we see today. In order to model this highly non-linear and intrinsically three-dimensional process, the matter fluid can be represented by a collisionless N-body system that evolves under self-gravity. However, in order to model the Universe faithfully, it is imperative to make the number N of particles used in the simulation as large as possible.

In a new computation of this kind, dubbed the "Millennium" simulation by researchers at MPA and their collaborators in the international Virgo consortium, an unprecedentedly large particle number of more than 10 billion has been used. This is about an order of magnitude larger than the largest computations carried out in the field thus far, and significantly exceeds the long-term growth rate of cosmological simulations, which itself roughly follows Moore's law. This progress has been made possible by important algorithmic improvements in the employed simulation code, and the high degree of parallelization reached with it, allowing the computation to be carried out efficiently on a 512-processor partition of the IBM p690 computer of the RZG. In addition, the very large particle number used in the simulation required aggressive memory optimizations to make the problem just barely fit into the aggregated 1 TB of physical memory available on the computer partition used. The total CPU-time requirement to completion of the computation was about 350,000 hours.

The simulation volume is a periodic box of 500 Mpc/h on a side, giving the particles a mass of 8.6x10^8/h solar masses, enough to represent dwarf galaxies by about 100 particles, galaxies like the Milky Way by about a thousand, and the richest clusters of galaxies with several million. The spatial resolution is 5 kpc/h, available everywhere in the simulation volume. The dynamic range of 10^5 per dimension in 3D allows extremely accurate statistical characterisations of the dark matter structure of the universe. It also gives a nearly complete inventory of all luminous galaxies above about a tenth of the characteristic galaxy luminosity. This aspect is particularly important for constructing a new generation of theoretical mock galaxy catalogues which for the first time allow a direct comparison to observational data sets on an "equal footing", because the Millennium simulation has a good enough mass resolution to yield a representative sample of galaxies despite covering a volume comparable to the large observational surveys. The large volume is also particularly crucial for studying rare objects of low space density, such as rich clusters of galaxies or the first luminous quasars at high redshift.
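The quoted particle mass follows directly from the box size and the mean matter density. The check below uses the particle number N = 2160³ and Ω_m ≈ 0.25 of the published Millennium simulation papers; these two values are not stated in the text above and are therefore assumptions here.

```python
# Reproducing the quoted particle mass from the box size (a consistency check).
rho_crit = 2.775e11          # critical density in h^2 M_sun / Mpc^3
omega_m = 0.25               # assumed, from the published simulation papers
box = 500.0                  # box side length in Mpc/h
n_part = 2160**3             # assumed particle number ("more than 10 billion")
m_p = rho_crit * omega_m * box**3 / n_part
print(f"{m_p:.2e} M_sun/h")  # -> about 8.6e+08, as quoted above
```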

We note that the postprocessing tasks involved in analysing the multi-TB data set produced by the Millennium simulation are a challenge in their own right. In order to track the formation history of close to 25 million galaxies in detail, special parallel algorithms had to be developed in addition to the simulation code itself. The final galaxy data for the model universe are of substantial size and information content as well. They will be organized into a "theoretical virtual observatory", allowing queries similar to those applied to the large observational databases.

Publications:
• A. Jenkins, C.S. Frenk, S.D.M. White, et al., "The mass function of dark matter halos", Mon. Not. R. Astron. Soc. 321, 372 (2001).
• V. Springel, N. Yoshida, S.D.M. White, "GADGET: a code for collisionless and gasdynamical cosmological simulations", New Astronomy 6, 79 (2001).
• V. Springel, et al., "Simulations of the formation, evolution and clustering of galaxies and quasars", Nature 435, 629 (2005).
• V. Springel, S.D.M. White, G. Tormen, G. Kauffmann, "Populating a cluster of galaxies: Results at z=0", Mon. Not. R. Astron. Soc. 328, 726 (2001).

Large-scale structure seen in the dark matter in a thin slice through the simulated Universe. Clearly visible is the 'Cosmic Web' which connects individual galaxies, groups and clusters by filaments of dark matter, surrounding large underdense 'voids' that have opened up. At the centre of each halo, luminous stars form out of baryons due to the dissipative effects of radiative cooling. The latter effects are not directly included in the first step of this simulation, but are treated in subsequent computations as part of the simulation analysis.


Max Planck Institute for Astrophysics at Garching near Munich, Dep. of Stellar Physics, Nuclear and Neutrino Astrophysics, and Numerical Hydrodynamics

Simulating Astrophysical Flows

The current research interests of the group, led by Professor Wolfgang Hillebrandt, are focused on stellar evolution, stellar atmospheres, nuclear and particle astrophysics, supernova explosions, γ-ray bursts, and astrophysical jets. Most of the work is part of the general field of astrophysical fluid dynamics, including relativistic and non-relativistic flows, and reactive and magnetic fluids and gases. Most of this work makes use of high-end supercomputers, a necessity imposed by the time dependence and the three-dimensional nature of most of the problems.

In general, the equations of fluid dynamics, non-linear partial differential equations, have to be solved subject to initial and boundary conditions, supplemented by equations describing the state variables and, possibly, equations for the transport of energy and momentum by means of radiation or particles. Typically, these equations are discretized in space and time, and are transformed into a set of non-linear algebraic equations which are then solved on computers. What makes them so 'expensive' is the fact that in order to obtain fair discrete representations of continuous functions or variables many grid points have to be used. In 3-dimensional simulations this means that the number of variables (and thus of equations) scales with the number of grid points. Moreover, the need to compute time evolutions requires many discrete time steps, and the size of the problem scales with this number as well. It is therefore not surprising that the group is one of the heaviest users of the Garching computing centre.

The aim of all simulations is either to interpret astronomical observations which do not supply enough information to understand their nature in an unambiguous way, or to study fundamental processes which cannot be observed at all. In this respect numerical simulations play a more important role in astrophysics than in any other branch of physics: astronomers are passive "observers" of what Nature decides to show them.
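As a minimal illustration of the discretization idea sketched above (and not of the group's actual solvers), the following Python fragment turns the one-dimensional linear advection equation du/dt + a du/dx = 0 into an explicit algebraic update on a grid; grid size, time step and initial condition are arbitrary placeholders.

import numpy as np

a, L, nx = 1.0, 1.0, 200              # advection speed, domain length, grid points
dx = L / nx
dt = 0.5 * dx / a                     # CFL-limited time step
x = np.linspace(0.0, L, nx, endpoint=False)
u = np.exp(-100.0 * (x - 0.3) ** 2)   # initial Gaussian pulse

for _ in range(200):                  # many small time steps, as noted in the text
    # first-order upwind difference on a periodic domain
    u = u - a * dt / dx * (u - np.roll(u, 1))

Every additional grid point adds one more unknown per variable and per time step, which is why realistic 3D, multi-physics versions of this update quickly require supercomputers.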

This holds for the evolution of stars, galaxies, and the Universe as a whole, for which the typical evolution times are orders of magnitude longer than the human life span. Therefore only snapshots can be observed, which have to be connected by simulations in order to obtain a complete picture. Similarly, the deep interiors of stars and of many other astronomical objects cannot be directly observed, but only after their light has been altered by diffusing through hundreds of thousands of kilometers of dense and hot gas. Again, computer simulations provide the tools to connect observable quantities with their physical causes.

In what follows, three examples will be discussed in some detail, demonstrating the potential of this approach, namely the ability of astrophysical simulations to reproduce observed properties of extremely complex processes. The examples reflect the broad spectrum of possible applications, ranging from turbulent thermonuclear combustion in dense stellar matter, leading to very spectacular disruptions of an entire star, to the collapse of the cores of massive stars to neutron stars and black holes, and the gravitational waves emitted after the neutron star is born.

Other activities of the group which will not be discussed here include the modeling of relativistic and non-relativistic outflows from stellar mass and super-massive black holes, of the evolution of single and double stars and, in particular, of convective flows in evolved stars, which are the cause of energy and angular momentum transport and chemical mixing. The transport of radiation in hot stars and stellar explosions, giving rise to their observed luminosity and spectra, is also modeled and, again, the computer resources required are large because of the complexity of the physics involved.

Wolfgang Hillebrandt, MPI for Astrophysics

Thermonuclear Supernova Explosions

Research objectives: This work aims at the understanding of a certain subclass of supernovae, commonly named 'Type Ia', which have become a standard tool to measure cosmic distances out to several billion light years, i.e., many of these supernovae are observed when the Universe was half its present age. Therefore the question arises whether these very distant stellar explosions are the same we observe in great detail in our cosmic neighborhood, and this question can only be answered once we understand them.

Computational approach: There is clear evidence that Type Ia supernovae are the thermonuclear disruptions of white dwarf stars with a mass slightly higher than that of the Sun and radii comparable to the radius of the Earth. Thermonuclear fusion of carbon and oxygen is thought to supply the necessary energy and, therefore, the computational problem is very similar to simulations of chemical combustion of turbulent pre-mixed flames. The equations that have to be solved are the reactive Euler equations of fluid dynamics in three dimensions, for a general equation of state, plus a system of ordinary differential equations to deal with nuclear reactions, self-gravity, and a model of small length-scale turbulence to compute the propagation of nuclear 'flames' into the unburned carbon-oxygen fuel. The latter is done by describing the flame by a level-set function moving with the turbulent velocity on the grid scale.

Project description: Over the past years the numerical tools have been developed at MPA to solve the problem outlined above. By default, the computations had to be carried out in three spatial dimensions because turbulence is inherently 3-dimensional. The largest set of computations done so far on the IBM supercomputer at the RZG used 512³ grid points, about 10 GBytes of memory and about 10,000 to 15,000 CPU-hours. The simulations were 'parameter free' in the sense that only physical degrees of freedom, i.e., the composition of the white dwarf and the ignition conditions, were varied, parameters that also differ from supernova to supernova. The models could reproduce the observed explosion energies, light curves, and element abundances well. They could also explain the observed correlations between the luminosity of supernovae and the form of their light curves, which is needed to calibrate them as distance indicators.
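The following toy Python sketch illustrates the level-set idea in two dimensions only: a scalar field G is advected with the flow and propagates normal to itself with a prescribed turbulent burning speed, its zero contour marking the flame front. Velocity field, burning speed and grid parameters are placeholders and are not taken from the actual supernova simulations.

import numpy as np

n, dx, dt, s_t = 128, 1.0, 0.2, 0.5
y, x = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
G = np.sqrt((x - n * dx / 2) ** 2 + (y - n * dx / 2) ** 2) - 5.0  # small ignited sphere
ux = uy = np.zeros((n, n))                                        # quiescent fuel in this toy setup

def grad(f):
    gy, gx = np.gradient(f, dx)
    return gx, gy

for _ in range(50):
    gx, gy = grad(G)
    # advection by the flow plus propagation normal to the front with speed s_t
    G = G - dt * (ux * gx + uy * gy + s_t * np.sqrt(gx ** 2 + gy ** 2))
# the burned region is where G < 0; its boundary is the (turbulent) flame front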


Figure 1: Position of the thermonuclear burning front in two white dwarf stars about 1.0 s after ignition. The white dwarf with high carbon mass fraction is shown in blue, the one with low carbon in yellow. The scale of the box shown is about 2000 km. Although the explosion with more carbon fuel obviously is more energetic, both supernovae would have the same peak luminosity.

Publications:
• M. Reinecke, W. Hillebrandt, J.C. Niemeyer, "Three-dimensional simulations of type Ia supernovae", Astron. Astrophys. 391, 1167 (2002)
• F. Röpke, W. Hillebrandt, "The case against the progenitor's carbon-to-oxygen ratio as source of peak-luminosity variations in type Ia supernovae", Astron. Astrophys. 420, L1 (2004)

Ewald Müller, MPI for Astrophysics

Gravitational Waves

Research objectives: Gravitational wave astronomy is a promising new tool to observe some of the most exotic processes in the universe. We focus on modeling the general relativistic formation of neutron stars and black holes in core-collapse supernovae and supermassive star collapse. The objective is to find generic properties of the collapsing systems, and gravitational waveforms associated with them.




Computational approach: To represent a general relativistic collapse in a simulation model, it is necessary to find a discrete solution to Einstein's field equations, or some suitable approximation to them. This requires the full apparatus of modern numerical vacuum relativity. Since we model the collapsing stars as fluids, the equations of general relativistic hydrodynamics need to be solved in addition. Further tools are horizon finders for black hole formation and numerical methods to extract gravitational waveforms from the simulations. The system of differential equations is, depending on the approximations used, either a set of coupled non-linear elliptic and hyperbolic equations, or a mixed elliptic-hyperbolic system. Adaptive techniques are used to better resolve specific parts of the evolutions. The most recent simulations are done in three spatial dimensions, and thus need a significant number of grid points and computational resources to allow for a reasonable level of accuracy.

Project description: Our group is leading the efforts in modeling gravitational collapse in the Transregio Sonderforschungsbereich 7 "Gravitational Wave Astronomy". We are in close collaboration with the Max Planck Institute for Gravitational Physics and other nodes of the Sonderforschungsbereich to develop techniques and tools, and exchange expertise. The core collapse to neutron stars, and the associated gravitational radiation signals, has already been modeled in axisymmetry and with an approximation to Einstein's field equations. We are now making use of large-scale parallel computers to improve on these aspects, since some of the most promising sources of gravitational radiation can only be modeled without restricting the number of spatial dimensions. With the help of gravitational wave detectors like GEO600 (Germany), LIGO (USA), VIRGO (Italy), TAMA300 (Japan) or the space-based LISA (NASA/ESA), and an increasingly accurate understanding of the production mechanism of gravitational waves on the theoretical side, we will soon be able to open up a completely new window into the universe.



Publications:
• Dimmelmeier, H., Font, J. A., and Müller, E., "Relativistic simulations of rotational core collapse. I. Methods, initial models, and code tests", Astron. Astrophys., 388, 917-935 (2002)
• Dimmelmeier, H., Font, J. A., and Müller, E., "Relativistic simulations of rotational core collapse. II. Collapse dynamics and gravitational radiation", Astron. Astrophys., 393, 523-542 (2002)

Figure 2: Axisymmetric simulation of rotational core collapse in general relativity. The plot shows the distribution of density in a slice through the star, right after formation of the bounce shock.


Hans-Thomas Janka, MPI for Astrophysics

Simulations of Massive Star Explosions

Research objectives: This research project attempts to investigate hydrodynamic instabilities in supernova explosions of massive stars. Convection and large-scale non-radial plasma flow are important for understanding the explosion mechanism of supernovae, for predicting neutrino and gravitational wave signals, for explaining the measured pulsar velocities, and for describing heavy element formation and mixing during the explosion.

Computational approach: We use newly developed neutrino-hydrodynamics codes that couple multi-dimensional fluid dynamics with a multi-frequency, multi-angle treatment of the neutrino transport. Particular emphasis is put on a detailed description of neutrino-matter interactions in the dense supernova core and on the implementation of state-of-the-art nuclear physics that is relevant for the supernova problem.
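A rough, purely illustrative estimate shows why this transport problem is so demanding; the zone, angle and energy-group counts below are assumptions made for the sake of the arithmetic, not the actual resolution of the MPA codes.

n_radius  = 400      # radial zones (illustrative)
n_theta   = 128      # lateral zones of a 2D simulation (illustrative)
n_energy  = 20       # neutrino energy groups (illustrative)
n_angle   = 100      # angular bins of the radiation field (illustrative)
n_species = 3        # nu_e, anti-nu_e, heavy-lepton neutrinos

unknowns = n_radius * n_theta * n_energy * n_angle * n_species
bytes_needed = unknowns * 8                         # double precision
print(f"{unknowns:.3e} transport unknowns, ~{bytes_needed / 1e9:.1f} GB per stored variable set")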


Project description: Neutrinos are believed to power supernova explosions. These elementary particles are copiously produced in the nascent, hot neutron star and deposit some of their energy in the surrounding matter before they escape from the dense interior of the star. This triggers hydrodynamic instabilities, which play a central role during all phases of the explosion. Due to the enormous complexity of the problem, which requires multi-dimensional hydrodynamics and neutrino transport, our understanding of the involved physical processes is still incomplete and predictions of observable supernova properties are highly uncertain. Progress depends crucially on more complete and more accurate numerical simulations. To this end we have developed new computational tools for treating the neutrino transport and interactions in supernova matter with unprecedented accuracy, in combination with higher-order, grid-based methods for solving the hydrodynamics problem. Applying these codes we have achieved a number of major breakthroughs:

• The world-wide first multi-dimensional simulations with multi-frequency, multi-angle neutrino transport including stellar rotation, which unmask serious deficiencies of previous, approximative approaches.

• The first modern supernova studies with varied input for the nuclear equation of state and improved neutrino-nuclei interactions, which reveal important differences in the core evolution.

• The first model sequences for a systematic investigation of neutrino-driven convection in two and three spatial dimensions, which demonstrate the importance of low-mode convection for explaining global asymmetries of observed supernovae and high recoil velocities of pulsars.

• The first predictions of gravitational wave signals from rotating, convective supernova cores with a realistic treatment of the microphysics.

Unravelling the basic principles of supernova explosions and fathoming the viability of the convectively supported neutrino-heating mechanism by more realistic numerical models poses a major computational challenge. Top-end computer performance as well as enduring availability of computing resources are indispensable to bridge the involved long evolutionary timescales and to scan the parameter space associated with remaining uncertainties of the input physics and initial conditions.

Publications:
• R. Buras, M. Rampp, H.-Th. Janka, K. Kifonidis, "Improved models of stellar core collapse and still no explosions: What is missing?", Phys. Rev. Lett. 90, 241101 (2003).
• K. Kifonidis, T. Plewa, H.-Th. Janka, E. Müller, "Non-spherical core collapse supernovae: I. Neutrino-driven convection, Rayleigh-Taylor instabilities, and the formation and propagation of metal clumps", Astron. Astrophys. 408, 621 (2003).
• L. Scheck, T. Plewa, H.-Th. Janka, K. Kifonidis, E. Müller, "Pulsar recoil by large-scale anisotropies in supernova explosions", Phys. Rev. Lett. 92, 011103 (2004).
• E. Müller, M. Rampp, R. Buras, H.-Th. Janka, D.H. Shoemaker, "Towards gravitational wave signals from realistic core collapse supernova models", Astrophys. Journal 603, 221 (2004).


Fig. 1: Early stages of the supernova evolution of an 11.2 solar mass star (from top left to bottom right for times 0.14, 0.18, 0.20 and 0.23 seconds after shock formation). Violent convective overturn in the neutrino-heated postshock layer has led to a dominance of low modes in the flow pattern at late times, which have pushed the highly deformed shock front to a radius of more than 600 km at the end of the simulated evolution. Continuing shock expansion is likely to trigger a weak, anisotropic explosion.





Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Potsdam, Department of Astrophysical Relativity, Numerical Relativity Group

Numerical Solutions for Einstein's Equations

Numerical relativity is primarily concerned with the computer simulation of extremely massive bodies, such as neutron stars and black holes. An accurate model of such systems requires a solution of the full set of Einstein's equations for general relativity – equations relating the curvature of spacetime to the energy distribution.
The work of the numerical relativity group is a multi-disciplinary effort, spanning the fields of relativistic astrophysics, differential geometry, the mathematics of nonlinear partial differential equations, and high performance computing. As a result, the numerical relativity effort at the Albert Einstein Institute (AEI) is carried out by members of both the 'Astrophysical Relativity' and 'Geometric Analysis and Gravitation' divisions. Our research interests cover all aspects of numerical relativity, with an emphasis on the numerical modelling of sources of gravitational radiation.
Many of our codes contribute to the development of community software such as the Cactus Computational Toolkit, and to active research in grid computing via the Gridlab project. Our research is carried out in conjunction with a number of active collaborations, in particular with the Louisiana State University's Center for Computation and Technology and Department of Physics, where a number of us hold cross-affiliations.

Research objectives: Our group is focussed on modelling massive binary systems (black holes and neutron stars), using the full Einstein equations of general relativity for the spacetime evolution. We aim to produce accurate gravitational wave signals for these systems as they inspiral and coalesce.

Computational approach: We use finite differences to model the partial differential equations which describe the spacetime and hydrodynamic evolution. Mesh refinement techniques allow increased resolution in strong-field regions. High-resolution shock capturing methods are used to accurately model hydrodynamic fields. The number of variables and the nature of the equations require that a large number of computations be carried out at each grid point, making these computations some of the most computationally demanding in physics.
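As a toy stand-in for such an evolution code (and in no way a model of the Einstein equations themselves), the sketch below advances a one-dimensional wave equation, written in first-order form, with a simple explicit finite-difference scheme. It only illustrates the "update every grid point at every time step" structure; all parameters are arbitrary, and the real codes add mesh refinement, shock capturing and many more evolved variables.

import numpy as np

nx, dx = 400, 0.01
dt = 0.5 * dx                            # Courant-limited time step
x = np.arange(nx) * dx
phi = np.exp(-((x - 2.0) / 0.1) ** 2)    # initial pulse
pi  = np.zeros(nx)                       # time derivative of phi

def d2_dx2(f):
    # centred second derivative on a periodic grid
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx ** 2

for _ in range(1000):
    # first-order form: d_t phi = pi,  d_t pi = d_x^2 phi (semi-implicit update order for stability)
    pi  = pi + dt * d2_dx2(phi)
    phi = phi + dt * pi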

Accomplishments: Our group is the first to carry out a systematic study of binary black hole mergers, starting from initial data corresponding to close quasi-circular orbits. These vacuum spacetimes are strong sources of gravitational waves. Using a newly developed formulation of the Einstein equations and gauge conditions, we were able to trace the individual bodies in their orbit until their horizons merged to form a single black hole. These simulations indicated that closely separated black hole binaries are likely to merge within about half an orbit.

Evolutions of the Einstein equations have long been plagued by problems with stability, with exponentially growing solutions of the finite difference equations destroying the accuracy of a simulation on a short timescale. Extensive studies of the field equations and their finite differencing, as well as the development of new gauge conditions, have allowed us to evolve black hole spacetimes for unprecedented lengths of time.

In order to measure physical quantities, such as gravitational wave signals or black hole horizon locations, we require a number of specialised computational tools. In the past year, we have developed an extremely efficient "apparent horizon" finder, which is capable of determining black hole shapes, masses and momenta. Asymptotically, gravitational wave forms are measured as perturbations of a fixed background. Our group has specialised in developing various techniques for extracting wave signals from dynamical spacetimes.

Denis Pollney

Numerical Simulations of Black Hole Spacetimes




We have coupled our vacuum spacetime code to a hydrodynamic code capable of evolving neutron star spacetimes. This code incorporates high resolution shock capturing techniques to accurately model the behaviour of shocks in the stellar fluid. In its first application, this code was used to study the collapse of rotating neutron stars to a black hole, determining physical properties of the final object and some indication of the gravitational wave emission.

Significance: The massive dynamical systems which we are studying, orbits and collapse of black holes and neutron stars, create dramatic fluctuations in their surrounding spacetimes, which propagate away as gravitational waves. In recent years, a number of large-scale interferometers have been constructed to measure these extremely weak signals, including the LIGO detectors in the US and the GEO600 detector operated by a German-English collaboration including the Max Planck Society. Gravitational waves have a number of properties interesting to astronomers. Notably, they are not obscured, for instance by intervening dust, so that if the signals can be interpreted we will have a new window on a variety of astrophysical phenomena, from the massive black hole at the centre of our galaxy, to distant supernova explosions.

An accurate model of the expected wave signal is required for a number of reasons. For the current first generation of detectors, even the strongest signals from nearby binary mergers are likely to be buried deep within the inherent detector noise. Some advanced knowledge of the expected signal can aid in producing filters in order to make a detection possible. Later generations of detectors will be sure to see signals. Then, models will be required to interpret them, in order to determine properties of the physical system being observed.

The numerical simulations we are carrying out will provide insight into Einstein's relativity in the strong-field and dynamical regime governing the behaviour of dense bodies. Such studies of dynamical spacetimes in the neighbourhood of black holes and neutron stars cannot be carried out in any other way.

Evolution of a pair of inspiralling black holes before merger, showing a local estimate of the gravitational radiation content in green.




Publications:
• Miguel Alcubierre, Bernd Bruegmann, Peter Diener, Michael Koppitz, Denis Pollney, Ed Seidel, Ryoji Takahashi, "Gauge Conditions for Long Term Numerical Black Hole Evolutions Without Excision", Phys. Rev. D67, 084023 (2003).
• Erik Schnetter, Scott H. Hawley, Ian Hawke, "Evolutions in 3D numerical relativity using fixed mesh refinement", Class. Quant. Grav. 21, 1465 (2004).
• Jonathan Thornburg, "A Fast Apparent-Horizon Finder for 3-Dimensional Cartesian Grids in Numerical Relativity", Class. Quantum Grav. 21, 743-766 (2004).
• L. Baiotti, I. Hawke, P.J. Montero, F. Loeffler, L. Rezzolla, N. Stergioulas, J.A. Font, E. Seidel, "Three-dimensional relativistic simulations of rotating neutron star collapse to a Kerr black hole", Phys. Rev. D (to be published).


Max Planck Institute for extraterrestrial Physics (MPE) in Garching near Munich, Dep. High-Energy Astrophysics

Gamma-Ray Astronomy

Volker Schönfelder and his colleagues explore the sky in gamma-ray emission, through special gamma-ray telescopes which are flown on satellite observatories launched by the NASA and ESA space agencies. Gamma-rays are very energetic electromagnetic radiation. Therefore they tell us about the most violent physical processes in the universe.
Objects of research are compact stars in binary systems, pulsars, novae, supernovae, and galaxies with an extraordinarily active nucleus. Typically, physical processes of relativistic-particle acceleration and of new-element formation are investigated with gamma-rays from such objects.
Gamma-rays come in low intensity, so observers need to combine months of sky exposures taken in many pointing directions to map the sky brightness in gamma-rays. Moreover, instrumental backgrounds are high from cosmic-ray bombardment of the instruments in space, so most of the recorded signal must be recognized as such background. Gamma-ray photons only rarely interact with telescope detectors, and when they do, each photon follows its individual trajectory and interaction sequence. The "focussing" therefore is done through pattern recognition and regularization algorithms in computers on the ground. Massive parallel computing is employed to make use of the known physics of high-energy photon interactions in the material, and of the variety of sky exposures under slightly different background conditions, simultaneously deconvolving the full database of measurements.

Project description: The Gamma-Ray Astronomy Group at the Max Planck Institute for extraterrestrial physics (MPE) in Garching has been studying the gamma-ray skies since suitable space-borne telescopes were established in the 1970s. Breakthrough science came with the NASA Compton Gamma-Ray Observatory, launched in 1991, which surveyed the sky over 9 years, collecting many TBytes of telescope trigger data from such high-energy photons. Since October 2002, ESA's INTEGRAL space observatory maps the gamma-ray sky with a different type of telescope, deepening the earlier observations and enriching them in particular with fine spectroscopy information.

Research objectives: At gamma-ray energies, materials are almost transparent, so telescopes function very differently than, e.g., optical telescopes. The interactions of high-energy photons in detectors similar to those which are common in high-energy physics laboratories (such as CERN or DESY) are recorded, "event by event". Subsequent data processing then needs to find specific patterns in these complex multi-detector event signatures, and relate those to the brightness distribution on the gamma-ray sky. Moreover, gamma-ray telescopes carry a huge intrinsic background brightness, from activation of telescope and spacecraft materials through energetic cosmic rays.

The celestial gamma-ray signal amounts to only a few percent of the total measured telescope event triggers; finding and discriminating such instrumental background signals is a major challenge. Exposure times on the sky are on the order of days to weeks for individual regions, and data from typically several months with many different telescope pointings are to be combined in the analysis for imaging the gamma-ray sky. Due to the construction of gamma-ray telescopes, an event trigger typically consists of some ten individual parameters, which encode energy and arrival direction information indirectly. The laws of high-energy photon interactions with material must be employed to efficiently locate the signatures of backgrounds and sky photons in the measured data space. Direct inversion is impossible.

Roland Diehl and Andrew W. Strong, MPI for extraterrestrial Physics

Imaging the Gamma-Ray Sky

This image shows how the pioneering instrument (COMPTEL) imaged the gamma-ray sky brightness in the energy regime 1-3 MeV. For this image, Maximum-Entropy deconvolution of 5 years of data with ~240 different sky exposures was employed. (Galactic sky coordinates)




Computational approach: Sophisticated imaging algorithms have been developed by Andrew Strong and his colleagues, which require large computing resources because of the large amount of data to be treated simultaneously. This implies parallel computing. The computation of sky images has been done on the RZG Cray and more recently on the Regatta system. The Maximum Entropy algorithm was chosen for imaging, as it is extremely powerful and has a well-established and logical foundation. For the application to data from the COMPTEL (Compton telescope) instrument, data from 9 years of observations have to be combined. So in our parallel-computing approach each slave processor is assigned a subset of the observations while the master processor assembles the image from this information. For the ESA INTEGRAL (coded-mask) instrument, imaging can still be done on normal workstations, but as the data volume increases (the mission extends at least to 2008) the supercomputer approach will also become essential.
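The parallelisation pattern described above can be sketched as follows. This is not the COMPTEL code: the per-observation "back-projection" is a random-number placeholder standing in for the actual instrument-response and Maximum-Entropy update, and the mpi4py module and the array sizes are assumptions made for the illustration.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_obs, npix = 240, 4096                  # illustrative numbers of sky exposures / image pixels
my_obs = range(rank, n_obs, size)        # each rank takes an interleaved subset of observations

local = np.zeros(npix)
for obs in my_obs:
    # placeholder: back-project this observation's events onto the sky model
    rng = np.random.default_rng(obs)
    local += rng.random(npix)

image = np.zeros(npix) if rank == 0 else None
comm.Reduce(local, image, op=MPI.SUM, root=0)   # the master assembles the full sky image
if rank == 0:
    print("assembled image from", n_obs, "observations on", size, "ranks")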

Publications:

• "COMPTEL skymapping: a new approach using parallel computing", A.W. Strong, H. Bloemen, R. Diehl, et al., Astrophysical Letters and Communications, 39, 689 (1999)
• "The COMPTEL 1.809 MeV Survey", Plüschke S., Diehl R., Schönfelder V., et al., In: Exploring the Gamma-Ray Universe (4th INTEGRAL Workshop), (Eds.) A. Gimenez, V. Reglero and C. Winkler, ESA SP-459, Noordwijk, The Netherlands, 55 (2001)
• "Maximum Entropy imaging with INTEGRAL/SPI data", A.W. Strong, Astronomy and Astrophysics, 411, L127 (2003)

These images from the inner Galaxy (50° > l > -50°, -25° < b < 25°) from the INTEGRAL Spectrometer instrument were derived for different energy bands around (from top) 50, 300, and 450 keV, and show the diffuse and point source contributions as they evolve with energy. Maximum-Entropy deconvolution from ~4000 telescope pointings is employed.



Max Planck Institute for Solar System Research (MPS) in Katlenburg-Lindau, Solar Magnetohydrodynamics Group

The Magnetic Sun

The fascinating variety of the phenomena of solar activity, reaching from huge dark sunspots to strings of tiny brilliant brightenings, from supersonic gas jets to giant coronal mass ejections, and from radio bursts due to electron beams to the acceleration of GeV particles in intense flares, to name just a few of the phenomena, are all caused by a single agent: the solar magnetic field. Understanding the origin, structure, and dynamics of this field and its interaction with the solar plasma is one of the foremost aims of solar physics, which also has practical implications in the framework of "space weather" affecting satellite systems in Earth orbit.
The Research Group for Solar Magnetohydrodynamics at MPS, led by Prof. Manfred Schüssler, studies a variety of aspects of the generation and transport of the magnetic field in the solar interior, its emergence at the solar surface, as well as its interaction with the convective flows and the radiation field in the solar atmosphere. The methods used range from analytical calculations to large-scale simulations and also include the analysis of observational data. The scientists carry out three-dimensional numerical ab-initio simulations to follow the evolution of the magnetic field and its effects on the photospheric plasma. By calculating diagnostic quantities like radiation intensities and spectral line profiles, the computational results can be compared with observational data.

Manfred Schüssler, MPI for Solar System Research

Radiative Magneto-Convection in the Solar Atmosphere

Research objectives: Our aim is to understand the basic physical processes governing the complex interaction of magnetic field, convective flows, and radiation in the solar atmosphere, which is the basis of most phenomena of solar magnetic activity and represents the prototype for magneto-convection in various astrophysical objects. Of particular interest is the role of the magnetic field for the heating of the million-degree solar corona and for the variability of the solar radiation output.

Computational approach: We carry out three-dimensional, time-dependent simulations based on the full set of equations of compressible magnetohydrodynamics, including the effects of partial ionization and radiative energy transport. We use high-order finite difference methods and non-local, multi-wavelength radiative transfer along a large number of rays. Our codes are fully parallelized for distributed-memory architectures by domain decomposition.
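The ray integration at the heart of the radiative transfer step can be illustrated with the standard formal solution of the transfer equation over one grid interval, I_{i+1} = I_i exp(-dtau) + S_i (1 - exp(-dtau)). The opacities, path lengths and source function below are invented placeholders; the real code repeats this for many rays, wavelength bins and a domain-decomposed 3D grid.

import numpy as np

n = 100
ds    = np.full(n, 10.0e5)          # path length per cell [cm], illustrative
kappa = np.logspace(-9, -7, n)      # absorption coefficient [1/cm], illustrative
S     = np.linspace(1.0, 5.0, n)    # source function along the ray, illustrative

I = 0.0                             # intensity entering the ray
for i in range(n):
    dtau = kappa[i] * ds[i]
    I = I * np.exp(-dtau) + S[i] * (1.0 - np.exp(-dtau))
print("emergent intensity:", I)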

Project description: The interaction of the vigorous convective motions of the plasma with the magnetic field leads to intermittent magnetic structure: strong concentrations of magnetic flux are embedded in almost field-free plasma. This magneto-convective interaction is most pronounced in the visible layers of the solar atmosphere, where the convective speeds become nearly sonic and the magnetic, kinetic, and thermal energy densities are all of the same order of magnitude.

The nonlinear interaction between magnetic field, convection and radiation is far too complex for analytical treatment, so we resort to numerical simulations based upon the fundamental equations of magnetohydrodynamics and radiative transfer. Realistic 3D time-dependent simulations of compressible magneto-convection in the visible layers of the solar surface represent a considerable computational challenge. There is a wide range of length scales between the dominant scale of the convective flow pattern and the dissipation scales. The plasma is strongly stratified, and even a restricted simulation has to cover a density ratio of about 1000. Solar convection is strongly affected by partial ionization effects, so that the ionization state of the most abundant species is included in the equation of state. Radiation is a major player in the energy budget and thus is treated non-locally by integration over rays, taking into account its dependence on wavelength. Synthetic brightness maps and spectral line profiles based on the simulations are calculated as diagnostic tools in order to be directly compared with observational data.

The figures below represent a snapshot from a simulation run. Shown are the distributions of brightness (left), vertical velocity (right, upper panel), and magnetic field (right, lower panel) on a 6000 km x 6000 km piece of the visible solar surface. The brightness and velocity images show the typical signature of convective energy transport with hot, bright upflows and a network of cool downflows with velocities of several km/s. The magnetic flux is concentrated in the downflow lanes of the convection pattern and reaches field strengths of up to 2000 Gauss. The strong magnetic field leads to a significant density reduction; the resulting increase of the plasma transparency enhances the light emission from deeper, hotter regions, so that the flux concentrations appear as bright features in the brightness picture. This effect leads to a slightly larger total solar radiance during times of high magnetic flux levels on the solar surface.

We carry out simulation runs to study various relevant physical processes of solar magneto-convection, e.g.:
• Dependence of the properties of the solar surface layers on the amount of magnetic flux
• Structure and dynamics of larger magnetic flux concentrations (pores and sunspots)
• Turbulent decay of magnetic field in regions of mixed polarity
• Emergence of rising magnetic flux tubes in the solar atmosphere
• Intensity signature of magnetic flux concentrations in continuum light and in wavelength bands dominated by molecular bands
• Origin and properties of large-scale convective patterns (mesogranulation, supergranulation) and their effect on the magnetic flux distribution

Publications:
• A. Vögler, M. Schüssler, Astron. Nachr./AN, 324, 399 (2003)
• M. Schüssler, S. Shelyag, S. Berdyugina, A. Vögler, S.K. Solanki, Astrophys. J., 597, L173 (2003)
• C.U. Keller, M. Schüssler, A. Vögler, V. Zakharov, Astrophys. J., 607, L59 (2004)
• A. Vögler, J.H.M.J. Bruls, M. Schüssler, Astron. Astrophys., 421, 741 (2004)

Snapshot from a 3D simulation of a magnetically active region with a resolution of 576x576x100 grid cells. Left: total brightness; right, upper panel: vertical velocity (red: downflow, blue: upflow); right, lower panel: magnetic field strength, increasing from blue (B < 100 Gauss) to red (B > 2000 Gauss), in a horizontal cut through the simulation box near the visible solar surface.




Max Planck Institute for Plasma Physics, Garching and Greifswald
Directors: S. Günter, K. Lackner

Theory and the Project "Theoretical Plasma Physics" at IPP

Research into high temperature plasma physics is viewed from the outside as mainly driven and financed by a utilitarian concern: to realize the energy generation process driving the sun – thermonuclear fusion – on earth, for use in a power plant. To follow this quest, however, we have to address and solve some of the most advanced problems of basic nonlinear physics. The distinguishing features of this theoretical and computational effort are the large range of simultaneously interacting space and corresponding time scales – ranging from the ion (and sometimes even the electron) gyro-radius (a few mm) to the dimensions of our confinement systems (m) – and the close coupling to the planning and execution of actual experiments.
We traditionally have resorted to a hierarchy of models. So, we have been treating by different models the large-scale instabilities capable of ejecting in a single event the total stored plasma energy out of the magnetic cage, and the small-scale turbulence causing the quasi-continuous and quasi-stationary losses balancing the heating of the plasma (by injected waves or particles or by fusion reactions). Reality, however, does not respect such a neat separation of scales. Macroscopic instabilities can depend in their growth rate critically on phenomena within a thin layer of the scale of particle orbits, or can non-linearly lead to the formation of near-discontinuities, limited by fine-scale dissipation. The so-called "small-scale turbulence", on the other hand, can take the form of streamers or bursts extending over a significant part of the confinement region, or can self-generate macroscopic shear flows ultimately partly suppressing it. A main thrust of our work is therefore to overcome this separation, including micro-scale – particle orbit – effects into gross-scale instability studies and developing multi-scale methods for the study of turbulent fluctuations.
A main complication arises as – except in a very limited, relatively "cold" region close to the plasma boundary – the requirements for a simple fluid description of the plasma are not rigorously satisfied. Many features of macroscopic instabilities or turbulence can nevertheless be captured by a fluid model, in particular if extensions are made to a so-called "gyro-fluid" description, which includes effects of finite particle gyro-radii and Landau damping. Some aspects of plasma behaviour are, however, determined essentially by a particular group of particles, which, e.g., either have particular orbits ("banana" or "superbanana" orbits) or velocities resonating with a certain wave type. Then a full kinetic description of the plasma in a five-dimensional phase space (the gyro-motion can typically be accounted for by averaging procedures) is needed.
A thrill is added to our theoretical work by the fact that our experimental colleagues have a large arsenal of methods to immediately verify or falsify theoretical predictions. Our results are, however, also required to be quantitative, and ultimately reliable enough to base financially grave decisions upon them. Major experiments – like ASDEX Upgrade and W7-X – have already been designed and decided upon essentially on the basis of theoretical predictions, albeit in the rather solidified areas of MHD equilibrium, stability and "neoclassical" (i.e. collisional) energy transport. Highly nonlinear phenomena – notably turbulent transport – have so far been treated in an empirical or semi-empirical way, deciding on the next step based on a "ladder" of smaller-scale devices. In the case of the planned ITER experiment these were mainly geometrically similar devices like ASDEX Upgrade and JET, and partly analogous experiments in Japan and the USA. It is gratifying that emerging results of first-principle based theories, and in particular turbulence simulations, have already succeeded in putting many of these predictions on a firm basis of understanding.
The capability to implement realistic first-principle based models in computer simulations, and the emphasis on nonlinear problems, has removed much of the rationale for the traditional distinction between tokamak- and stellarator-oriented research. This distinction was justified as long as largely empirical models – starting out from indeed partly differing observations – and linear theories – which could, e.g., fully exploit the rigorously axisymmetric geometry of the ideal tokamak – were state of the art. In the nonlinear phase, however, tokamak phenomena in general also become 3-d, and therefore allow little simplification in their modelling compared to stellarators, in particular if perturbations also acquire a magnetic component. Out of these observations a rather intensive collaboration between different groups has developed, which has allowed many intrinsic synergies to be realized. In 2003 we formalized these working contacts by the creation of a Project "Plasma Theory", combining the first-principle based model developments in the Garching and Greifswald based groups and headed, on a yearly rotating basis, by one of the theoreticians in the Wissenschaftliche Leitung of the IPP. At present the project is headed by Prof. S. Günter.


Max Planck Institute for Plasma Physics, Garching near Munich

Tokamak Physics Division

This division was initially created to support the tokamak line of magnetic confinement by dedicated theoretical and modelling activities. Today, most of its activities concern the development and application of first-principle based models, which are ultimately applicable to all types of toroidal magnetic confinement. This part of the work is firmly integrated into the project "Plasma Theory" described above. In the area of turbulence simulations we are leading the developments based on fluid and gyrofluid descriptions and on gyrokinetic continuum-type codes. More simplified, but still first-principle based, models are used for extensive comparisons with experimental results. Our efforts in the area of large-scale MHD phenomena concern in particular the effects of finite particle orbits and supra-thermal particles on such modes, and the effect of realistic walls (finite resistivity, 3-d structure) on the growth rate and on feedback stabilization. Wave-particle interaction, also studied in this division, is another computationally challenging field, in particular in the ion-cyclotron frequency range, where wave field structures are of device dimensions, but the actual absorption mechanisms, or the conversion into other wave types, can happen over narrow spatial regions. The research of our edge and divertor physics group still uses partly empirical models. However, due to the complexity of the involved processes – notably the importance of impurities – and the fact that in this region transport along and perpendicular to the magnetic field lines can truly compete with each other, these models also require the application of state-of-the-art computing methods and resources.

Frank Jenko and Bruce Scott

Turbulence in Fusion Plasmas

Research objectives: Our research focuses on the fundamental properties of turbulent fluctuations in high-temperature plasmas (ionized gases at up to about 400 million degrees) as they occur in fusion experiments. We examine the statistical character of the fully developed turbulence which controls the radial transport of particles and energy in magnetic confinement devices. This is a key problem on the way to a future fusion power plant.

Computational approach: We approach the plasma turbulence problem by means of direct numerical simulation (DNS). To this aim we solve a set of nonlinear reduced kinetic equations on a grid in five-dimensional phase space, employing numerical methods from computational fluid dynamics (CFD). Alternatively, one can use a fluid approach involving models to represent important kinetic effects. The latter approach is computationally less expensive but also more approximate. Presently, our main tools are the GENE code (kinetic) and the GEM code (fluid).
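A simple counting exercise indicates why such five-dimensional grids push even large machines; the resolutions below are illustrative assumptions, not the settings of actual GENE or GEM production runs.

nx, ny, nz = 128, 64, 16      # perpendicular and parallel spatial grid (illustrative)
nv, nmu    = 48, 16           # parallel-velocity and magnetic-moment grid (illustrative)
n_species  = 2                # ions and electrons

points = nx * ny * nz * nv * nmu * n_species
print(f"{points:.2e} phase-space points, "
      f"~{points * 2 * 16 / 1e9:.1f} GB for two complex double-precision fields")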

Project description: High-temperature plasmas embedded in strong toroidal magnetic fields (as they occur in fusion research) exhibit particle and heat loss rates which cannot be explained in terms of collision-induced effects. Instead, turbulent processes are held responsible for this 'anomalous' transport, which turns out to be one of the greatest problems for the development of fusion power plants. Therefore, substantial effort is put into understanding and trying to control turbulent transport. Since turbulence is an inherently nonlinear phenomenon involving a wide range of space-time scales, the problem must be attacked by means of high-performance computations. In the course of several years, we have developed a suite of codes which allows us to perform numerical experiments and study the statistical properties of fully developed plasma turbulence. This way, we are able to address some questions which are, at the same time, of great academic and practical interest.

One such example is the role of self-organization and structure formation in plasma turbulence. Under certain circumstances, the turbulent vortices are strongly anisotropic. If they are elongated in the radial direction (i.e. the direction of the background density/temperature gradients), the resulting transport can be significantly enhanced. On the other hand, shear flows can tear eddies into smaller pieces and thus reduce or even suppress the particle and heat fluxes. These shear flows can be self-generated by the turbulence or created by external means. They may lead to radially localized 'transport barriers' in the plasma edge and/or core. Such barriers have been found experimentally, and their theoretical investigation is ongoing. Through a series of carefully diagnosed runs, we were able to show that self-suppression of edge turbulence is unlikely. Instead, it is envisioned that external effects lead to the establishment of shear flows which in turn affect the longer-wavelength turbulence. We also found that residual transport is induced by turbulence on smaller space-time scales which has so far been neglected. Moreover, investigations of turbulence in the plasma core have shown that the nonlinear dynamics of those degrees of freedom that dominate the transport may be described by a balance of primary and secondary instabilities, driven, respectively, by gradients in the plasma background and in the primary instabilities.




Publications:
• F. Jenko, W. Dorland, M. Kotschenreuther, and B. N. Rogers, "Electron Temperature Gradient Driven Turbulence", Phys. Plasmas 7, 1904 (2000).
• F. Jenko and W. Dorland, "Prediction of Significant Tokamak Turbulence at Electron Gyroradius Scales", Phys. Rev. Lett. 89, 225001 (2002).
• B. Scott, "Computation of Electromagnetic Turbulence and Anomalous Transport Mechanisms in Tokamak Plasmas", Plasma Phys. Contr. Fusion 45, A385 (2003).

Turbulent fluctuations in fusion plasmas lead to substantial loss rates of particles and energy. We aim to understand and control this phenomenon with the help of carefully diagnosed supercomputer simulations.


This research aims at the understanding of linear and nonlinear (turbulent) fluctuations in the plasma core and close to the plasma edge in a truly three-dimensional plasma geometry. Turbulence in a fusion plasma is characterized by a large-scale structure parallel to the magnetic field and a small-scale structure perpendicular to it.

With the plasma temperature decreasing by two orders of magnitude between the core and the edge, the relevant first-principles based theories are the gyrokinetic plasma description in the core and a suitable fluid description near the boundary. Gyrokinetics ultimately requires understanding the nonlinear development of the ion and electron five-dimensional distribution functions and the electromagnetic fields, while the fluid description replaces the distribution functions by O(10) moment equations.

Computational approach: The gyrokinetic equations are solved globally by particle-in-cell simulations. In order to decrease the statistical noise a δf method is employed. The usage of B-splines of up to 3rd order provides a natural way for the charge assignment process and for solving the Helmholtz equations for the electromagnetic fields.
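The charge-assignment step can be sketched in one dimension as follows; linear (first-order) B-splines are used here for brevity, whereas the codes above employ splines of up to 3rd order, and the marker positions and δf weights are random placeholders.

import numpy as np

ngrid, length = 64, 1.0
dx = length / ngrid
rng = np.random.default_rng(0)
pos = rng.random(10_000) * length          # marker positions
w   = rng.normal(0.0, 1e-3, pos.size)      # delta-f weights (perturbation only)

rho = np.zeros(ngrid)
cell = np.floor(pos / dx).astype(int)
frac = pos / dx - cell                     # fractional distance into the cell
np.add.at(rho, cell % ngrid, w * (1.0 - frac))   # linear B-spline: share each weight
np.add.at(rho, (cell + 1) % ngrid, w * frac)     # between the two nearest grid nodes
rho /= dx                                  # convert accumulated weight to a density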

Linear electrostatic calculations of ion-temperature-gradient (ITG) driven instabilities are performed in 3-dim. real space for stellarators (EUTERPE code) [1]. Including kinetic electrons in the simulations has required, in a first attempt, a restriction to 2-dim. because of the increase in computing time caused by the small timestep implied by the electron dynamics (GYGLES code) [2].

The TORB code allows nonlinear simulations of ITG turbulence with sufficient conservation of energy (achieved by an optimized loading of the marker particles) in cylinder geometry [3].

The codes were developed in collaboration with CRPP, EPFL, Lausanne.

Max Planck Institute for Plasma Physics (IPP) Greifswald, Dep. Stellarator Theory

General Stellarator Theory

This department is devoted to general stellarator theory and – in a specialized working group – to plasma edge physics. The general theory effort comprises the development of the stellarator concept and of computational as well as analytical methods to investigate equilibrium, stability and transport problems in three-dimensional toroidal magnetic configurations. The edge physics effort is concentrated on developing computational tools for the three-dimensional aspects of a controlled plasma exhaust.
The general stellarator theory is guided by the goal of making stellarators viable for magnetic fusion. The key experiment being built at the Greifswald site, Wendelstein 7-X, was designed by stellarator optimization based on the first-principles based theories of magnetohydrodynamics and collisional drift-kinetics of charged particles. All requirements (imposed by these theories) necessary for fusion with stellarators were satisfied following the understanding of macroscopic (MHD) equilibrium and stability as well as microscopic (drift-kinetic) particle-orbit behavior. Now the effort is concentrated on understanding transport caused by small-scale fluctuations.

Figure 1: Linear ITG mode in W7-X (the electrostatic potential on a flux surface on one field period is shown).

Ralf Kleiber and Jürgen Nührenberg

Fluctuation-Caused Transport in Stellarators




Publications:
[1] Kornilov, V., R. Kleiber, R. Hatzky, L. Villard and G. Jost: Gyrokinetic global three-dimensional simulations of linear ion-temperature-gradient modes in Wendelstein 7-X. Physics of Plasmas 11, 3196-3202 (2004).
[2] Sorge, S. and R. Hatzky: Ion-Temperature-Gradient Driven Modes in Pinch Configurations within a Linear Gyrokinetic Particle-in-Cell Simulation of Ions and Electrons. Plasma Physics and Controlled Fusion 44, 2471-2481 (2002).
[3] Hatzky, R., T.M. Tran, A. Könies, R. Kleiber and S.J. Allfrey: Energy Conservation in a Nonlinear Gyrokinetic Particle-in-Cell Code for Ion-Temperature-Gradient-Driven (ITG) Modes in ϑ-Pinch Geometry. Physics of Plasmas 9, 898-912 (2002).
[4] Kleiber, R. and B.D. Scott: Turbulence Simulations for Axisymmetric Configurations and Stellarators. IAEA Technical Meeting on Innovative Concepts and Theory of Stellarators, Greifswald 2003.
[5] Sorge, S.: Investigation of ITG turbulence in cylinder geometry within a gyrokinetic global PIC simulation: influence of zonal flows and a magnetic well. Plasma Physics and Controlled Fusion 46, 535-549 (2004).

The fluid equations for electromagnetic drift wave turbulence are solved for stellarator geometry using the nonlinear flux tube code DALF-Ti based on 2nd order upwind methods. This work is done in collaboration with B. Scott, IPP, Garching [4].

On the REGATTA computer, typical runs of the above codes require 64-256 PEs and 3-10 CPU-days.

Projects: Gyrokinetic theory in its simplest form is applied to simulate ITG modes in W7-X and tokamaks (Fig. 1). The simulations are also used as a test for the behaviour of nonlinear calculations. Since trapped electrons may play a crucial role in fusion devices, they will be included into the simulations.

In the computations of ITG turbulence the influence of a magnetic well on the turbulence and on the generation of a zonal flow is studied [5] (Fig. 2). The computational behaviour of the code can be studied once rotational transform and shear have been included, which is a first step towards a generalization to 3-dim. equilibrium geometry.

The influence of rotational transform and shear on resistive edge turbulence in 2-dim. axisymmetric configurations as well as for stellarator equilibria is studied (Fig. 3). An extension to a full annulus simulation will overcome restrictions imposed by the flux-tube geometry.

Figure 2: Electrostatic potential from a gyrokinetic simulation of turbulence in a ϑ-pinch (R0=5.5, ra=0.5) (top: with zonal flow, bottom: zonal flow artificially suppressed). Note that the length scale in the z-direction (parallel to the magnetic field) is compressed by a factor of 60.

Figure 3: Perpendicular structure of edge turbulence (electrostatic potential) in W7-X (x corresponds to the radial direction, y to the perpendicular direction in the magnetic surface).


Max Planck Institute for Plasma Physics, Greifswald, "Experiment-Oriented Theory Group"

Experiment-Oriented Stellarator-Theory Group

This group was responsible for the modelling and interpretation of experiments at the W7-AS stellarator and is presently preparing the theoretical and numerical tools for the future W7-X experiments. The main efforts are the installation of a W7-X equilibria database (with magnetic coordinate transforms), modelling of the electron cyclotron and neutral beam injection heating, neoclassical transport, the development of a new predictive transport code, as well as 3D modelling of island divertor physics with the EMC3-EIRENE plasma fluid transport code.

3D Plasma Edge Modelling and Island Divertor Physics

Research objectives: This long-term project is aimed at understanding and predicting the 3D edge plasma physics (transport, recycling, radiation) for toroidal fusion devices of arbitrary magnetic topologies. It is based on modelling studies with the 3D edge fluid transport code EMC3-EIRENE [1-4] in close interaction with the experiment. After extensive applications to the divertor physics of W7-AS [5,6], the code has recently been implemented for the island divertor of W7-X [7], the ergodic divertor of TEXTOR-DED [8] and the local island divertor of LHD [9]. For running machines it serves as a support for the experimental program, as a complementary "3D numeric-diagnostic tool" and as a guideline for divertor optimization. All applications to actual experiments contribute to benchmarking and validating the code. A further central objective is the prediction of the divertor physics for W7-X based on the experimental and modeling experience gathered at W7-AS. Numerically, the code introduces a new advanced Monte Carlo technique for the treatment of highly anisotropic 3D fluid transport processes in arbitrary magnetic topologies.

Computational approach: The EMC3 code, which was developed in close interaction with the W7-AS experiment, solves the steady-state plasma fluid equations for mass, momentum and electron and ion energy transport in the boundary of a toroidal fusion device, for arbitrary magnetic topologies with coexisting closed flux surfaces, islands and open ergodic regions, and for arbitrary geometries of the plasma-facing structures. The code employs a Monte Carlo technique in real space using a field-aligned local orthogonal vector basis, which reduces the diffusion transport tensors to a diagonal form [2]. The recycling neutrals are taken into account by the EIRENE code, which describes the neutral transport kinetically by solving Boltzmann's equation. EIRENE provides the mass, momentum and energy source terms of the plasma transport equations associated with ionization and charge exchange processes and is coupled to EMC3 by self-consistent iteration. Both EMC3 and EIRENE are fully parallelized. A correct numerical distinction between the parallel and perpendicular transport is realized by integrating the parallel transport along field lines reconstructed by a new reversible field line mapping (RFLM) technique [6]. The reversibility inherently avoids artificial cross-field diffusion, which typically causes large errors in strongly anisotropic transport processes. In addition, the RFLM technique exhibits an intrinsic radial accuracy in the sense that radial deviations from a closed flux surface do not accumulate as the field lines circulate around the surface. This is particularly important in the core region just inside the separatrix, where higher parallel heat conductivity is expected.

Page 65: High Performance Computing in the Max Planck Society€¦ · puting applications in the Max Planck Society, illus-trated by detailed descriptions of scientific projects currently

increased role of the cross-field transport arising fromthe smaller internal field-line pitch, or larger connectionlength, Lc,, in the islands and from the smaller plasma-to-target distance, ∆χ, as compared to tokamaks. Furthercode predictions are the instability-related jumps of theradiation level and of the radial position of the radiationzone at detachment transition. All these predictions havebeen verified by the W7-AS divertor experiments [10].

Island-divertor experiments in the last operationalperiod of W7-AS showed that a stable detachmentrequires sufficiently large islands and field-line pitch[11]. Numerical studies performed after the recentimplementation of the RFLM-technique, which allowsa highly-accurate treatment of magnetic configurationsof any complexity, show that, under detachment condi-tions, the radiation distribution in the island scrape-off-layer (SOL) is sensitive to the island geometry. Oncedetachment occurs, the radiation layer detaches fromthe divertor plates and shifts towards the X-points locat-ed just in front of the targets, independent of the con-figurations selected. With increasing plasma density,however, the radiation distribution in the island SOL fordifferent configurations develops in two different ways.Two typical radiation patterns are identified, as shownin Fig. 1. For stable configurations (larger ∆χ and small-er Lc,), the radiation zone gradually shifts poloidallyaway from the divertor region to the X-points located onthe inboard side of the torus. When nes increases further,the radiation zone extends poloidally to form a radiationbelt on the high-field side. At the same time, the radia-

tion belt shifts inwards and finally moves into the con-finement region. In contrast, for unstable configurations(smaller ∆χ or larger Lc) an intensive and strongly local-ized radiation is established in the divertor region.Increasing the plasma density causes a much fasterinward shift of the radiation zone to touch the closedregion than for the inboard side radiation case. Once theradiation zone moves into the core, however, the radia-tion zone shifts to the inboard side to form a radiationpattern which is almost identical to that of the first case.Experimental data for stable detachments are consistentwith the described radiation picture [11-12]. It can beshown that the observed detachment instability is driv-en by the recycling neutrals [13].
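To make the Monte Carlo treatment of strongly anisotropic transport described above more concrete, the following minimal sketch performs a field-aligned anisotropic random walk: a large diffusive step along the local field direction and a small one perpendicular to it. The toy field and all parameters are assumptions for illustration; this is not the EMC3 algorithm or its RFLM mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

def b_hat(x):
    """Unit vector of an assumed, purely illustrative helical magnetic field."""
    b = np.array([-x[1], x[0], 2.0])       # toy field, not a real stellarator equilibrium
    return b / np.linalg.norm(b)

def mc_step(x, d_par, d_perp, dt):
    """One Monte Carlo step of strongly anisotropic diffusion.

    The step is split into a 1D random walk along the local field direction
    (diffusivity d_par) and an isotropic walk in the plane perpendicular to it
    (diffusivity d_perp << d_par).
    """
    b = b_hat(x)
    # parallel kick
    x = x + b * rng.normal(0.0, np.sqrt(2.0 * d_par * dt))
    # perpendicular kick: project a random vector onto the plane normal to b
    xi = rng.normal(size=3)
    xi_perp = xi - np.dot(xi, b) * b
    return x + xi_perp * np.sqrt(2.0 * d_perp * dt)

# follow one test particle; d_par/d_perp ~ 1e6 mimics the strong anisotropy
x = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    x = mc_step(x, d_par=1.0e4, d_perp=1.0e-2, dt=1.0e-4)
print("final position:", x)
```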

Publications:
[1] Y. Feng, F. Sardei, J. Kisslinger, 1999 J. Nucl. Mater. 266-269 812
[2] Y. Feng, J. Kisslinger and F. Sardei, 2000 27th EPS Conf. (Budapest)
[3] Y. Feng, F. Sardei, J. Kisslinger, D. Reiter, Y. Igitkhanov, 2001 28th EPS Conf. (Madeira)
[4] D. Reiter, 1984 Technical Report Jül-1947, KFA Jülich, Germany
[5] Y. Feng, J. Kisslinger, F. Sardei, 1999 26th EPS Conf. (Maastricht)
[6] Y. Feng et al., 2002 Plasma Phys. Contr. Fusion 44 611
[7] D. Sharma et al., 2004 J. Nucl. Mater. 337-339 471
[8] M. Kobayashi et al., 2004 Nucl. Fusion 44 S64
[9] T. Morisaki et al., 2004 J. Nucl. Mater. 337-339 154
[10] P. Grigull et al., 2001 Plasma Phys. Contr. Fusion 43 A175
[11] P. Grigull et al., 2003 J. Nucl. Mater. 313-316 1287
[12] H. Thomsen et al., 2004 Nucl. Fusion 44, 820
[13] Y. Feng et al., 2004 Contr. Plasma Phys. 44 1-3 57

Fig. 1: Evolution of the carbon-radiation zone through detachment for two different ∆x cases. Top: larger ∆x; the radiation moves gradually to the X-points on the inboard side with increasing nes. Bottom: smaller ∆x; the radiation zone stays poloidally in the divertor region while moving inwards.



Max Planck Institute for Plasma Physics (IPP) in Garching near Munich, Independent Junior Research Group "Computational Studies of Turbulence in Magnetized Plasmas"

The group investigates basic theoretical aspects of turbulence in plasmas, i.e. ionized gases. Model systems, for example turbulent magnetofluids in the framework of magnetohydrodynamics (MHD), are studied using direct numerical simulation of single-fluid equations. Special attention is paid to the inherent similarity properties of the turbulent velocity and magnetic fields, their spatial structure and the mutual dependence of macroscopic quantities characterizing the flow. In addition to its important role in magnetically confined fusion plasmas, turbulence is important for the dynamics of various naturally occurring plasma flows in the context of geo-, space- and astrophysics. Nonlinear processes leading to turbulent energy redistribution (spectral energy cascade), the self-generation of large-scale magnetic fields (turbulent dynamo), or the formation of spatially intermittent small-scale structures (star formation) can only be studied experimentally in a very limited parameter range and at high expense. Numerical simulations help to overcome experimental and observational barriers and allow examining the inherent properties of turbulent plasmas.

Wolf-Christian Mueller

Nonlinear Energy Dynamics in MHD Turbulence

Research objectives: Research focuses on the investigation of the nonlinear energy cascade in MHD turbulence, which is only partially understood. Direct numerical simulations computing the time evolution of systems represented by large ensembles of up to 1024³ spatial Fourier modes in the framework of incompressible MHD are used to obtain self-similar scaling exponents of the energy spectrum and more general two-point statistics. This allows verifying predictions of different competing cascade phenomenologies and supplies data necessary for the construction of improved theories, in particular concerning the interplay between kinetic and magnetic energy.

Magnetic field lines in globally isotropic MHD turbulence.

Magnetic field lines in globally anisotropic MHD turbulence.
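For orientation, the quantity analysed in these runs is the angle-integrated energy spectrum E(k), whose inertial-range slope distinguishes, e.g., an Iroshnikov-Kraichnan k^(-3/2) from a Kolmogorov-like k^(-5/3) scaling. The following minimal sketch shows how a shell-summed spectrum is obtained from a periodic 3D field; it is an illustrative routine with a random test field, not the group's production diagnostics.

```python
import numpy as np

def shell_spectrum(u):
    """Angle-integrated kinetic energy spectrum E(k) of a periodic 3D field.

    `u` has shape (3, n, n, n): the three velocity components on an n^3 grid.
    Returns E(k) on integer wavenumber shells, with sum_k E(k) = <|u|^2>/2.
    """
    n = u.shape[-1]
    uk = np.fft.fftn(u, axes=(1, 2, 3)) / n**3          # discrete Fourier amplitudes
    e_density = 0.5 * np.sum(np.abs(uk) ** 2, axis=0)   # spectral energy density
    k = np.fft.fftfreq(n, d=1.0 / n)                    # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    return np.bincount(kmag.ravel(), weights=e_density.ravel())

# random test field just to exercise the routine (assumed, not simulation data)
rng = np.random.default_rng(1)
u = rng.standard_normal((3, 64, 64, 64))
E = shell_spectrum(u)
print("total energy:", E.sum(), "vs", 0.5 * np.mean(np.sum(u**2, axis=0)))
```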



Computational approach: The incompressible MHD equations are solved by a parallel pseudospectral single-fluid code which is optimized for distributed as well as shared memory environments.

Project description: Currently three major phenomenologies propose different physical processes underlying the turbulent energy cascade. While the Iroshnikov-Kraichnan picture assumes nonlinear scattering of Alfvén waves propagating along magnetic field lines, the classical Kolmogorov model developed for hydrodynamic turbulence supposes that energy is redistributed by the successive breakup of turbulent structures into eddies of ever decreasing size.

The Goldreich-Sridhar approach postulates a balance between the dynamics perpendicular and parallel to the local magnetic field, which are meanwhile known to be anisotropic.

Direct numerical simulations of globally isotropic as well as strongly anisotropic turbulence at high Reynolds number are carried out to verify the different physical models. This is done by comparing their respective predictions for the self-similar inertial-range scaling exponent of the total energy spectrum with numerical results.

Wolf-Christian Mueller

Anisotropic Spatial Structure of MHD Turbulence

Research objectives: The small-scale structure of numerically generated high-Reynolds-number MHD turbulence is studied by using higher-order spatial two-point statistics. This allows quantifying the spatial anisotropy of the turbulence with respect to the local magnetic field and, via intermittency modelling, permits establishing a link between anisotropic energy dynamics and the characteristic distribution of dissipative structures in the turbulent flow.

Computational approach: A pseudospectral Fourier code (see above) is used.

Project description: Intermittency of small-scale turbulent structures, e.g. current sheets, leads to a special signature in the statistics of the turbulent fields. This signature is detected by regarding the self-similar scaling exponents of the associated two-point structure functions. The respective set of exponents can be linked to quantities characterizing the local geometry and the turbulent energy cascade by a phenomenological Log-Poisson approach. The model can be generalized to include direction-dependent cascades as observed in our large-scale numerical simulations. It is thus possible to connect spatial small-scale statistics with the nonlinear turbulent energy cascade.
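As a minimal illustration of the two-point statistics mentioned above, the sketch below computes structure functions S_p(l) = <|u(x+l) - u(x)|^p> for a 1D signal and extracts scaling exponents from a log-log fit over an assumed inertial range. The synthetic signal and fit range are chosen only to exercise the routines; this is not the group's analysis pipeline.

```python
import numpy as np

def structure_functions(u, orders, max_lag):
    """Structure functions S_p(l) = <|u(x+l) - u(x)|^p> for a periodic 1D signal."""
    lags = np.arange(1, max_lag + 1)
    S = np.empty((len(orders), len(lags)))
    for j, l in enumerate(lags):
        du = np.abs(np.roll(u, -l) - u)
        for i, p in enumerate(orders):
            S[i, j] = np.mean(du ** p)
    return lags, S

def scaling_exponents(lags, S, fit_range):
    """Least-squares fit of S_p ~ l^zeta_p inside an assumed inertial range."""
    sel = (lags >= fit_range[0]) & (lags <= fit_range[1])
    return [np.polyfit(np.log(lags[sel]), np.log(Sp[sel]), 1)[0] for Sp in S]

# synthetic Brownian-like test signal: expect zeta_p close to p/2
rng = np.random.default_rng(2)
u = np.cumsum(rng.standard_normal(2**16))
lags, S = structure_functions(u, orders=[1, 2, 3, 4], max_lag=256)
print(scaling_exponents(lags, S, fit_range=(4, 64)))
```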

Publications:
• W.-C. Müller, D. Biskamp: Scaling properties of three-dimensional isotropic magnetohydrodynamic turbulence, Physical Review Letters 84(3), 2000
• W.-C. Müller, D. Biskamp: The evolving phenomenological view on magnetohydrodynamic turbulence, Lecture Notes in Physics 614, Springer, 2003
• W.-C. Müller, D. Biskamp, R. Grappin: Statistical anisotropy of magnetohydrodynamic turbulence, Physical Review E 67, 066302, 2003

Dissipative current sheets in globally anisotropic MHD turbulence.

Dissipative current sheets in globally isotropic MHD turbulence.


Centre for Interdisciplinary Plasma Science – Plasma Theory / K. Hallatschek, C. H. Jaroschek, S. Matsukiyo, M. Scholer, I. Sidorenko, R. A. Treumann

Simulation of Plasma Physics Processes
The plasma theory group of CIPS is concerned with a wide variety of plasma physical questions, ranging from reconnection in astrophysical plasmas around active galactic nuclei to state transitions in nuclear fusion reactors.

K. Hallatschek, CIPS, IPP

Interaction of Global Structures and Microscopic Plasma Turbulence
Chaos, Order, and Physical Law ...

Research objectives: The turbulence in magnetically confined nuclear fusion plasmas transcends simple descriptions by overall scaling laws in the dimensionless parameters. The reason is its ability to spontaneously create large structures in the electromagnetic field or the plasma itself, which strongly act back on the turbulence, thereby dramatically improving the confinement. Objectives are the principles for the creation of such structures and their interaction with the background turbulence. A comprehensive understanding of these global structures would lead to more trustworthy predictions of future machine performance and could lead to schemes to more directly access the favorable confinement regimes. Apart from nuclear fusion, analogous interactions between microscopic and macroscopic scales generally occur in quasi-two-dimensional turbulence systems, e.g. between the convective turbulence and the zonal bands in the atmosphere of gas planets.

Computational approach: Nonlocal electromagnetic two-fluid equations (NLET code) and gyrokinetic equations (GS2 code) have been applied in 3D first-principles numerical studies of tokamak turbulence.

Project description: Nonlocal edge turbulence computations and comparison with GPI imaging (with S. Zweben, PPPL, USA, and J. L. Terry, MIT, USA). The principal nonlocal effect of the pedestal structure of density and temperature as pertaining to the edge of a tokamak has been studied for a generic profile in [1]. We have started to compare comprehensive electromagnetic edge turbulence simulations with the turbulence in Alcator C-Mod discharges, which has been measured with unprecedented detail by ultra-fast GPI imaging [2].

These comparisons are still in progress, owing to the uncertainty in the edge parameters and the sensitivity of the turbulence with regard to the background gradients.

Zonal flow studies (with K. Itoh, NIFS, Toki, Japan, and K. Molvig, MIT, USA). The poloidal spectra of the flows generated by the turbulence have been numerically computed for varying plasma parameters. Usually, the flows exhibit a continuous spectrum located at finite, although much larger, wavelengths than the turbulence. In certain regimes, however, the flows are observed to condense into a global zonal flow – independent of the system size – a process which is mathematically remarkably similar to a quantum mechanical Bose-Einstein condensation [3].

Detailed studies of the zonal flows under tokamak edge conditions reveal that they oscillate with a characteristic frequency and strongly modulate the turbulence.


As a result, the heat flux occurs in a series of transport fronts at the same frequency [4]. The oscillating GAMs (geodesic acoustic modes) have since been identified in the DIII-D [5], TEXT and ASDEX Upgrade tokamaks and in the CHS stellarator.
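For orientation only, the characteristic GAM oscillation frequency commonly quoted in the literature (and not taken from this report) is of the order of the sound speed over the major radius:

```latex
% Standard estimate of the geodesic-acoustic-mode frequency (quoted for
% orientation; symbols: R_0 major radius, q safety factor, c_s sound speed):
\omega_{\mathrm{GAM}} \;\simeq\; \frac{c_s}{R_0}\,\sqrt{2 + \frac{1}{q^{2}}},
\qquad c_s \simeq \sqrt{\frac{T_e + T_i}{m_i}} .
```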

The computational exploration of the zonal flows in the core regime of a tokamak (figure) helped to update the customary viewpoint on how the turbulence generates these flows: the flows are not solely controlled by the poloidal forces produced by the turbulence. Instead, they evolve by finely balancing the drive by the poloidal turbulence forces with a braking by toroidal turbulence forces, which sensitively depends on the magnetic geometry [6]. The novel toroidal turbulence force requires an augmentation of current comprehensive theoretical transport models.

Pinch effect (with W. Dorland, UMD, USA). High-resolution gyrokinetic computations of the turbulent particle transport in the tokamak core found that, contrary to current wisdom, it is not only due to the trapped electron fraction. Instead, it is carried to a great extent by the passing electrons, a mechanism which is boosted by the surprisingly pronounced extended structures along the magnetic field lines [7]. Especially important is the occurrence of an inward particle transport (pinch) for realistic parameters, as it can lead to a profile instability incurring a steepening of the density and resulting in a barrier for the turbulence.

Publications:
[1] K. Hallatschek, A. Zeiler, "Nonlocal simulation of the transition from ballooning to ηi-mode turbulence in the tokamak edge," Phys. Plasmas 7, 2554 (2000)
[2] J. L. Terry, S. J. Zweben, K. Hallatschek, B. LaBombard, R. J. Maqueda, et al., "Observations of the turbulence in the scrape-off-layer of Alcator C-Mod and comparisons with simulation," Phys. Plasmas 10, 1739 (2003)
[3] K. Hallatschek, "Condensation of microturbulence-generated shear flows into global modes," Phys. Rev. Lett. 84, 5145 (2000)
[4] K. Hallatschek, D. Biskamp, "Transport control by coherent zonal flows in the core/edge transitional regime," Phys. Rev. Lett. 86, 1223 (2001)
[5] G. R. McKee, R. J. Fonck, M. Jakubowski, et al., "Experimental characterization of coherent, radially-sheared zonal flows in the DIII-D tokamak," Phys. Plasmas 10, 1712 (2003)
[6] K. Hallatschek, "Turbulent saturation of tokamak core zonal flows," Phys. Rev. Lett. (in press)
[7] K. Hallatschek, W. Dorland, "Giant electron tails and passing electron pinch effects in tokamak core turbulence," Phys. Rev. Lett. (submitted)

Spin-up of global zonal flows in a turbulence computation. The torus cut shows the turbulent temperature fluctuations, which are just about to be sheared by the poloidal zonal flow velocity displayed in the inset.



Claus Jaroschek, Manfred Scholer, and Rudolf Treumann, MPE

Magnetic Reconnection in Relativistic Pair Plasmas: Acceleration and Radiation

Research objectives: The research involves the investigation of magnetic reconnection across very thin, highly relativistic pair-plasma current sheets composed of electrons and positrons, the acceleration of these particles in the self-consistent reconnection electric field, and the generation of radiation, in view of astrophysical applications to the vicinity of black holes and the intense synchrotron radiation from radio-loud active galactic nuclei.

Computational approach: We use a fully relativistic, self-consistent, electromagnetic three-dimensional full particle-in-cell (PIC) code implemented on the Max Planck Society Computing Center (RZG) supercomputer. This code allows recording the particle and field evolution in 3D. The data are subsequently used to obtain the final particle distribution functions and to calculate the synchrotron radiation emitted self-consistently by the accelerated particles.

Project description: Magnetic topology: Reconnection takes place here under extreme non-Hall conditions. Following a small initial localized disturbance of the collisionless current sheet, the magnetic topology changes rapidly into a multi-X-point configuration, as expected for fast magnetic reconnection, exhibiting current sheet disruption with X points moving outward at near Alfvén speed and coalescing when mutually interacting.

Electric field structure: Each X point develops an intense induction electric field in the current direction which is of finite spatial extension, thus rendering the reconnection intrinsically three-dimensional. In addition, a fluctuating electric wave field is generated with the electric field locally perpendicular to the magnetic field, which is, however, too small to provide anomalous collisions.

Particle acceleration: Strong particle acceleration up to relativistic saturation γ factors of about 100 is provided by the reconnection electric field in the magnetic X points. The energy limitation is due to the finite extension of the electric field. The accelerated particles are released from the X point along the separatrices parallel to the reconnected magnetic field. Acceleration proceeds in steps when the particles catch up with a second X point. The resulting energy distribution is a truncated power-law distribution.

Radiation: The accelerated particles emit intense synchrotron radiation in the reconnected magnetic field with a power-law spectrum which escapes from the site without self-absorption.

Significance: Synchrotron emission from radio-loud AGNs is poorly understood. On the basis of the present simulation this radiation is explained as resulting from a distribution of many small magnetic reconnection sites in hot relativistic pair plasma. The extension of each reconnection region is microscopic with respect to the dimension of the radiation source, each one being absolutely collisionless. The filling factor is of the order of 10⁶-10⁸, yielding correct emission intensities while still not suffering from self-absorption. Thus, for the first time, a self-consistent radiation mechanism is provided.

The absence of any Hall effect in pair plasma shows that fast collisionless reconnection takes place even without Hall currents and generates power-law particle distributions of high energy.

Time evolution of the particle distribution function in relativistic collisionless 3D reconnection in pair plasma, with a power-law energy distribution and high-energy cut-off.

Publications:
• C. H. Jaroschek, R. A. Treumann, H. Lesch, and M. Scholer, "Fast reconnection in relativistic pair plasmas: Analysis of particle acceleration in self-consistent full particle simulations", Phys. Plasmas 11, 1151 (2004).
• C. H. Jaroschek, H. Lesch, and R. A. Treumann, "Relativistic kinetic reconnection as the possible source mechanism for high variability and flat spectra in extragalactic radio sources", Astrophys. J. 605, L9 (2004).
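To make the particle-in-cell approach described above more tangible, the following minimal sketch shows the relativistic Boris push used generically in explicit PIC codes to advance a particle momentum in given electromagnetic fields. Units are normalized and the fields are invented for illustration; this is not the MPE group's production code.

```python
import numpy as np

def boris_push(p, E, B, q=1.0, m=1.0, dt=0.05):
    """One relativistic Boris step for the momentum p (normalized units, c = 1).

    Half an electric kick, a magnetic rotation, and another half electric kick:
    the standard explicit PIC particle push (illustrative sketch only).
    """
    p_minus = p + 0.5 * q * dt / m * E                 # first half electric kick
    gamma = np.sqrt(1.0 + np.dot(p_minus, p_minus))
    t = 0.5 * q * dt / (m * gamma) * B                 # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    p_prime = p_minus + np.cross(p_minus, t)
    p_plus = p_minus + np.cross(p_prime, s)            # magnetic rotation
    return p_plus + 0.5 * q * dt / m * E               # second half electric kick

# accelerate a positron-like particle in assumed, reconnection-type crossed fields
p = np.zeros(3)
E_field = np.array([0.0, 0.0, 0.2])                    # inductive field along the current
B_field = np.array([1.0, 0.0, 0.0])
for _ in range(2000):
    p = boris_push(p, E_field, B_field)
print("final Lorentz factor:", np.sqrt(1.0 + p @ p))
```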




Research objectives: The research involves the investigation of the onset of magnetic reconnection in thin current sheets in a collisionless plasma. This is of importance for the occurrence of reconnection at the Earth's magnetopause, in the geomagnetic tail, and in the solar corona.

Computational approach: We use a fully relativistic, self-consistent, electromagnetic three-dimensional full particle-in-cell (PIC) code implemented on the Max Planck Society Computing Center (RZG) supercomputer. This code allows recording the particle and field evolution in 3D.

Project description: In a three-dimensional current sheet with anti-parallel magnetic fields, the lower hybrid drift instability (LHDI) is excited at the boundaries of the current sheet. In the case of a sufficiently thin current sheet, the inductive field of the LHD waves penetrates to the center and accelerates the electrons in the current direction. A return current occurs at the outer edges of the current sheet. Since the ions are rather immobile, the total current increase results in a thinning of the current sheet and in a quick onset of reconnection within a few inverse ion gyrofrequencies. Initially, a number of reconnection patches develop in the current sheet which are short in the current direction, so that reconnection is three-dimensional. These patches extend with time in the direction opposite to the current, and a single X line eventually results.
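For orientation, the time scale of the LHDI is set by the lower-hybrid frequency; the standard textbook expression (quoted here as background, not taken from this report) is:

```latex
% Lower-hybrid frequency (standard expression; omega_ci, omega_ce gyrofrequencies,
% omega_pi, omega_pe plasma frequencies):
\frac{1}{\omega_{LH}^{2}} \;=\; \frac{1}{\omega_{ci}\,\omega_{ce}} \;+\; \frac{1}{\omega_{pi}^{2}},
\qquad\text{so that}\qquad
\omega_{LH} \;\simeq\; \sqrt{\omega_{ci}\,\omega_{ce}}
\quad\text{for}\quad \omega_{pe}\gg\omega_{ce}.
```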

In the case of sheared magnetic fields the growth rate of the LHDI is reduced. Nevertheless, after a considerably longer time the LHDI leads to a thinning of the layer and to reconnection along a single X line.

Significance: Magnetic reconnection is believed to be important for the conversion of magnetic energy into particle and bulk flow energy in eruptive processes in the solar atmosphere and during magnetic reconnection in the magnetotail. Furthermore, magnetic reconnection at the magnetopause seems to be responsible for the transfer of mass, energy, and momentum into the magnetosphere. In order for reconnection to proceed in a collisionless plasma, some non-ideal process has to provide the electric field along the neutral line. This non-ideality can be provided by electron inertia or by asymmetric components of the electron pressure near a neutral (X) line.

An important question is how reconnection evolves in a current sheet which is initially in equilibrium. The simulations have demonstrated that three-dimensional effects are essential in understanding the onset of reconnection, since the thinning is due to waves with k vectors in the current direction. In a two-dimensional approach such waves are excluded by design.

Publications:
• M. Scholer, I. Sidorenko, C. H. Jaroschek, and R. A. Treumann, "Onset of reconnection in thin current sheets: Three-dimensional particle simulations", Phys. Plasmas 10, 3521 (2003).

Manfred Scholer and Rudolf Treumann, MPE

Collisionless Magnetic Reconnection in Thin Current Sheets

Color-coded representation of the electron density (left) and the electric field component in the current direction (right) in the plane perpendicular to the anti-parallel magnetic field before the onset of reconnection, showing the evolution of the LHD instability. The anti-parallel magnetic field is directed into and out of the plane above and below the current sheet, respectively.


Research objectives: The research involves the investigation of the collective plasma processes occurring in collisionless shock waves.

Computational approach: We use a fully relativistic, self-consistent, electromagnetic one-dimensional full particle-in-cell (PIC) code implemented on the Max Planck Society Computing Center (RZG) supercomputer.

Project description: Using one-dimensional simulations of quasi-perpendicular shocks, i.e. shocks where the angle between the upstream magnetic field and the shock normal is larger than 45 degrees, it has been shown that the results are vastly different depending on the ion-to-electron mass ratio used in the simulations. Using the physical ion-to-electron mass ratio is computationally rather demanding, since the Debye length has to be resolved and the time step has to be small in order to resolve the electron motion.
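A rough back-of-the-envelope estimate illustrates why the realistic mass ratio is expensive: the grid must resolve the Debye length while the box spans ion inertial lengths, and the run must cover ion gyro-times while the step resolves electron plasma oscillations. All parameters below are assumptions for illustration, not the actual run settings.

```python
import math

def cells_and_steps(mass_ratio, vthe_over_c=0.05, wpe_over_wce=2.0,
                    box_ion_inertial_lengths=10.0, gyro_periods=5.0,
                    cells_per_debye=1.0, steps_per_wpe=10.0):
    """Rough 1D PIC cost estimate (illustrative assumptions only).

    Grid: (c/omega_pi) / lambda_De = (c/v_the) * sqrt(m_i/m_e) cells per
    ion inertial length.  Time: omega_pe / Omega_ci = (omega_pe/Omega_ce) * (m_i/m_e)
    electron-plasma periods per ion gyro-period.
    """
    cells = (box_ion_inertial_lengths * cells_per_debye
             * math.sqrt(mass_ratio) / vthe_over_c)
    steps = (gyro_periods * 2.0 * math.pi * steps_per_wpe
             * wpe_over_wce * mass_ratio)
    return int(cells), int(steps)

for mr in (100, 400, 1836):
    print(mr, cells_and_steps(mr))
# cost (cells x steps) grows roughly like (m_i/m_e)^(3/2)
```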

Quasi-perpendicular shocks are known to specularly reflect a fraction of the incoming ions. These reflected ions move upstream, where they are responsible for the so-called foot in the magnetic field profile, are turned around by the magnetic field, and eventually end up in the downstream medium. In simulations with a low ion-to-electron mass ratio the reflected ions can, under certain conditions, accumulate at the upstream edge of the foot and lead to shock reformation: a new shock develops at the upstream edge. Furthermore, an instability is excited in the foot between the electrons and the reflected ions (Buneman instability), which leads to electron heating in the shock normal direction. At higher ion-to-electron mass ratio this Buneman instability is stabilized due to the higher mobility of the electrons. At the realistic mass ratio a new instability occurs in the foot region: requiring zero current in the shock normal direction, the incoming ions have to slow down. This results in a velocity difference between incoming ions and incoming electrons. The free energy excites the so-called modified two-stream instability (MTSI), which leads via phase mixing to a hot ion distribution in the foot of the shock. Subsequently the upstream edge of the foot becomes the new shock ramp.

In the dilute plasmas in the Universe the usual Coulomb collisions are unimportant and the behavior of charged particles is governed by collective interactions through long-range electromagnetic forces. When some dynamic energy release occurs in these plasmas, collisionless shocks arise, where the magnetic flow is regulated by microscopic dissipation and where part of the thermal population is accelerated to high energies. Collisionless shocks are found in the corona of the Sun, in front of planetary magnetospheres, and in many other astrophysical settings. The present simulation work is important for understanding how dissipation is achieved at shocks in a collisionless plasma. Previous full particle simulations have used artificially small ion-to-electron mass ratios in order to be computationally feasible. The simulations with the physical mass ratio have shown that new instabilities become important, since their growth rates strongly depend on the mass ratio. Thus they are artificially suppressed in the low mass ratio simulations.

Publications:
• M. Scholer, I. Shinohara, and S. Matsukiyo, "Quasi-perpendicular shocks: Length scale of the cross-shock potential, shock reformation, and implication for shock surfing", J. Geophys. Res. 108, 1014, doi:10.1029/2002JA009515 (2003).


From top to bottom: magnetic field, ion density, and ion phase space (velocity component in the shock normal direction) versus distance in units of the electron inertial length. The shock is at about 500 electron inertial lengths. Upstream (x < 500) one can see the incoming ions and the reflected ions. The incoming ions exhibit vortex-like structures due to the modified two-stream instability.

Manfred Scholer, MPE

Structure of Quasi-Perpendicular Collisionless Shocks



Max Planck Institute for Quantum Optics

Laser Particle Accelerator

The research of this group is concerned with the interaction of ultra-short high-power laser pulses with solid and gaseous target materials in the context of applications to inertial confinement fusion and new table-top ultra-bright nuclear sources. The numerical work reported here is closely related to experiments with the MPQ ATLAS laser, which delivers laser pulses with focused intensities of several 10¹⁹ W/cm². At these intensities, target electrons are driven to the velocity of light and regimes of relativistic laser plasmas are accessed. A most outstanding feature is the generation of highly collimated relativistic electron beams with very high currents (10-100 kA) and associated magnetic fields of 10-100 kT. These beams may be sufficient to ignite compressed fusion fuel, and a major goal of the group has been to study the physics relevant for fast ignition of inertial fusion targets. Another goal has been to investigate laser wakefield acceleration with few-cycle laser pulses. New terawatt lasers with pulse durations as short as 5 femtoseconds are presently being built at MPQ. Interacting with millimeter-size gas targets, they turn the gas into plasma and drive high-amplitude plasma waves. The wakefields generated by these waves can accelerate particles very efficiently. The project described below is intended to design future experiments on wakefield acceleration at MPQ.

Michael Geissler and Juergen Meyer-ter-Vehn

Laser Plasma Interactions

Research objectives: The research reported here is focused on two different regimes of laser particle acceleration: the acceleration of electrons in an underdense plasma and the acceleration of ions from a cold target.

MPQ has played a pioneering role in identifying new regimes of laser wakefield acceleration in dense plasma on the basis of three-dimensional particle-in-cell (3D-PIC) simulations, using RZG parallel computers [1]. Recent experiments have confirmed the MPQ results [2]. The electron bunches can be converted into ultra-short sources of X-rays, ion beams, and other nuclear species of unprecedented brightness. Having these options in mind, 5 fs, 100 TW lasers are presently being developed at MPQ for novel applications in biology, chemistry, medicine, materials science, etc.

On the other hand, recent experiments with the ATLAS laser at MPQ and similar laser systems showed that highly collimated ion beams with energies of some MeV are generated via the interaction of high-power laser pulses with solid targets. Strong electric fields of the order of some TV/m are generated at the backside of the target, which ionize and accelerate ions. 3D-PIC simulations were performed to study the underlying acceleration mechanism. Such highly energetic ion pulses have a broad range of applications, from fast ignition, injection into a conventional accelerator and the diagnostics of electromagnetic fields in dense plasmas to cancer therapy.

Computational approach: The state-of-the-art method to simulate laser-plasma interaction in three dimensions is a so-called particle-in-cell code (3D PIC code), which solves Maxwell's equations simultaneously with the motion of charged particles on a three-dimensional grid. The computational challenge consists in tracing approximately 10⁸ particles on a grid of some 10⁷ cells. The simulations here were performed with the 3D-PIC code ILLUMINATION recently developed at MPQ [3]. The hardware demands are very high and are the ultimate limit of the applicability of the code. Several 10 GB of main memory are needed to simulate reasonable interaction volumes. The only chance to get results within some weeks of computing time is to fully parallelize the code.


Fig. 1: 3D-PIC simulation of a laser-generated wakefield. The colour scale denotes electron density. The laser pulse (5 fs, 115 mJ, 22 TW) has passed through 230 µm of plasma with density 10¹⁹ cm⁻³. A charge of 10⁹ electrons has been trapped and accelerated in the wakefield ("bubble") behind the laser pulse.

Fig. 3: Distribution of C³⁺, C⁴⁺, C⁵⁺ and C⁶⁺ ions inside the target after 100 fs. A cut along the laser axis is applied; the laser hits the target from below. C⁵⁺ and C⁶⁺ are generated at the front of the target.


Fig. 2: Spectrum of accelerated relativistic electrons corresponding to the snapshot of Fig. 1. The peak at 75 ± 10 MeV contains 10⁹ electrons and 10% of the initial laser energy.



Fig. 4: Spectra of C⁴⁺ (left) and C⁶⁺ (right) for simulation and experiment, corresponding to the snapshot of Fig. 3.

Due to the local character of the code, this can be done with high efficiency, which was tested for up to 32 nodes. Consequently, the IBM Power4 cluster Regatta is the appropriate machine for these kinds of problems. For electron acceleration, typical simulation volumes are 30×30×7 µm³; propagation distances over several 100 µm were achieved using a co-moving frame. Figs. 1 and 2 show a simulation of wakefield acceleration, corresponding to MPQ laser parameters in the near future. Simulations show that wakefield acceleration works very efficiently for sub-10 fs laser pulses, where almost 10% of the laser energy is converted into a mono-energetic electron bunch. This parameter regime can be perfectly simulated by ILLUMINATION.
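A rough estimate consistent with the figures quoted in the computational approach (about 10⁸ particles and 10⁷ cells) illustrates the memory demand; the per-particle and per-cell byte counts below are assumptions for illustration, not the actual ILLUMINATION data layout.

```python
def pic_memory_gb(n_particles=1e8, n_cells=1e7,
                  doubles_per_particle=10,   # position, momentum, weight, ... (assumed)
                  doubles_per_cell=12):      # E, B, J and scratch arrays (assumed)
    """Rough main-memory estimate for an explicit 3D PIC run, in GB."""
    bytes_total = 8 * (n_particles * doubles_per_particle
                       + n_cells * doubles_per_cell)
    return bytes_total / 1e9

# lower bound of roughly 9 GB; ghost layers, diagnostics and MPI buffers
# push this toward the several tens of GB quoted in the text
print(f"~{pic_memory_gb():.0f} GB")
```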

The second focus of this project, directed towards larger laser systems, is the support of ion acceleration experiments at MPQ. Typically, a 100 fs laser irradiates a 5 µm solid target. Due to the high mass of the ions, the whole acceleration process lasts some 1000 fs. The full simulation of such an experiment is currently out of reach: the high density of solid targets demands a high spatial and temporal resolution. Together with the dimensions of the target, some 100 GB of main memory would be needed, and the whole acceleration process would require some 10⁶ time steps, which would take some months to compute. Nevertheless, basic acceleration mechanisms can be studied in a reduced problem size. Figs. 3 and 4 show simulations of a 5 fs (instead of 100 fs) laser pulse with 8 TW, which is the same on-target power as in the experiment. The target consists of 1 µm thick carbon at 1% of solid density, placed inside a volume of typically 20×20×15 µm³. The simulations are compared with experiments performed with the ATLAS laser at MPQ [4]. Figure 4 shows the spectra obtained from the simulation and the experiment. For C⁴⁺ the numbers of fast ions and the maximum energy are in agreement with the experiment. The difference for C⁶⁺ is due to the fact that these ions are generated at the front of the target (see Fig. 3) and only a small part of them will reach the detector behind the target. The simulations indicate that table-top ultra-short laser pulses can also accelerate ions as long as the power is comparable to larger laser systems.

Perspectives: The development of these new high-power nuclear sources has just begun. An outstanding feature is that they have table-top dimensions. Interacting with thin solid foils, intense multi-MeV ion pulses can be created. Propagation through a plasma leads to the generation of mono-energetic electron beams. A particular challenge will be to further accelerate the electron bunches to multi-GeV energies in a sequence of wakefield acceleration stages. For all these acceleration regimes the most advanced computing facilities are required, such as those provided by RZG.

Publications:
[1] A. Pukhov and J. Meyer-ter-Vehn, Laser wakefield acceleration: the highly non-linear broken-wave regime, Appl. Phys. B 74, 355 (2002).
[2] J. Faure, Y. Glinec, A. Pukhov et al., A laser-plasma accelerator producing mono-energetic electron beams, Nature 431, 541 (2004).
[3] M. Geissler, J. Schreiber and J. Meyer-ter-Vehn, 3D-PIC Simulations of Laser Electron Acceleration, EPS 2004 Proceedings.
[4] M. Geissler, J. Schreiber, M. Kaluza and J. Meyer-ter-Vehn, 3D-PIC Simulations of Laser Ion Acceleration, EPS 2004 Proceedings.


Max Planck Institute for Quantum Optics (MPQ) in Garching, Theoretical Femtoscience Group

Theoretical Femtoscience

The group studies the quantum dynamics of molecular systems induced by femtosecond laser excitation. The primary goal is to elucidate the microscopic mechanism of fundamental chemical reactions, such as pericyclic reactions, proton and energy transfer, as well as more general molecular quantum processes like population inversion or the laser-assisted formation of cold molecules. Based on the understanding of the underlying mechanism, we develop control strategies to manipulate the outcome of individual molecular processes. Our control tool is again a femtosecond laser pulse, which is now shaped in an optimal way to guide the molecular system from its initial state to the selected final state. The formation of the selected objective is enhanced while competing side reactions are suppressed. We have applied optimal control theory to the formation of cold molecules starting from an atomic BEC, the implementation of global molecular quantum gates, the control of ultrafast reactions through conical intersections and the operation of molecular switches.

Regina de Vivie-Riedle and Dorothee Geppert, Max Planck Institute for Quantum Optics, Garching, and Department of Chemistry, LMU, Munich

Wavepacket Dynamics and Coherent Control in the Presence of Conical Intersections

Research Objectives: Our research concentrates on the quantum dynamics of molecular systems in the ultrafast regime. Molecules contain many internal degrees of freedom, which leads to a complex behaviour that complicates their theoretical treatment while at the same time offering scientifically interesting challenges.

We are primarily aiming at a microscopic understanding of fundamental molecular processes such as laser-induced reactions mediated by conical intersections. Based on the obtained knowledge concerning the static and dynamic molecular properties, we then select control objectives which can be achieved with high efficiency using shaped femtosecond laser pulses. By their application it is possible to manipulate the outcome of chemical reactions on the molecular time scale.

Computational Approach: The basis of our studies is the propagation of nuclear wavepackets on coupled multidimensional ab initio potential energy surfaces under the influence of laser interaction. The propagation program was developed in our group and is based on fast propagation schemes for time-dependent Hamiltonians. For high performance, Fast Fourier Transform methods are included, and the whole package is parallelized using the MPI standard. The quantum dynamics as well as the optimal control calculations were performed on the IBM eServer p690 supercomputer.

Project description: Chemical reactions are induced by molecular motions, thus the relevant time scale for observation and control is in the femtosecond (fs) regime. Going beyond the mere understanding of quantum dynamics, concepts for coherent control of the underlying processes are developed.

For this purpose, specifically shaped laser pulses are employed, which can be found using Optimal Control Theory (OCT). The optimized pulse drives the system from its initial state to the desired target, exploiting interference effects.

This concept of control is applied to the ring opening of cyclohexadiene (CHD), which constitutes the prototype for many electrocyclic reactions that are relevant in organic chemistry as well as in biological systems. The photoinduced ring opening, or the ring closure, respectively, is mediated by at least two conical intersections (CoIns) (see Fig. 1, bottom), which allow an ultrafast and radiationless return to the ground state. The relevant non-adiabatic coupling elements were calculated using highly correlated quantum chemical methods and are included in the quantum dynamics calculation for the nuclear wavepacket. In Fig. 2 two of the non-adiabatic coupling elements, located at either of the conical intersections, are shown as examples. Due to their spiky shape, an extremely narrow grid is necessary for the propagation code.



High computer power was necessary and was supplied by the Regatta system to perform these calculations on a reasonably fast time scale.

First, the evolution of the wavepacket is followed in real time after excitation with a fs laser pulse (Fig. 1, top). Thus a clear picture of the underlying microscopic mechanism was obtained.

Based on the quantum dynamical results, an efficient control scenario was developed. The wavepacket can be guided selectively through one of the conical intersections (Fig. 1), and control of the product yield and of the velocity of the reaction can be achieved.

The microscopic understanding of fundamental chemical reactions, such as the photoinduced ring opening of cyclohexadiene, is important, as the reaction mechanism of many related systems can be derived from them. Additionally, cyclohexadiene constitutes the reactive center and chromophore of fulgides, which are bistable systems. Special furyl fulgide derivatives have the potential to be utilized as molecular switches with possible applications to data storage and information processing. In our future studies we are exploring the potential to efficiently operate molecular switches with the help of shaped fs laser pulses.
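As a minimal illustration of the FFT-based propagation schemes mentioned in the Computational Approach, the sketch below propagates a 1D wavepacket on a single harmonic surface with the split-operator method. The potential and all parameters are assumptions for illustration; the group's code treats coupled multidimensional ab initio surfaces and laser interaction.

```python
import numpy as np

# toy 1D wavepacket propagation with the split-operator FFT method (atomic units)
n, L, m, dt = 512, 20.0, 1836.0, 1.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 0.5 * m * 0.01**2 * x**2                              # assumed potential surface
psi = np.exp(-((x - 1.0) ** 2)) * (2.0 / np.pi) ** 0.25   # displaced Gaussian

expV = np.exp(-0.5j * V * dt)                 # half potential step
expT = np.exp(-0.5j * k**2 / m * dt)          # full kinetic step (applied in k-space)

for _ in range(2000):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

norm = np.sum(np.abs(psi) ** 2) * (L / n)
print("norm after propagation:", norm)        # unitary scheme: stays ~1
```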

Publications:
• A. Hofmann and R. de Vivie-Riedle, "Adiabatic approach for ultrafast quantum dynamics mediated by simultaneously active conical intersections", Chem. Phys. Lett. 346 (2001) 299-304.
• D. Geppert, A. Hofmann, and R. de Vivie-Riedle, "Control of a collision complex via a conical intersection", J. Chem. Phys. 119 (2003) 5901-5906.
• D. Geppert, L. Seyfarth, and R. de Vivie-Riedle, "Laser control schemes for molecular switches", Appl. Phys. B 79 (2004) 987-992.

Fig. 2: Non-adiabatic coupling elements which enable the radiationless transfer at the conical intersections.

Fig. 1: Top: wavepacket dynamics on the ground (S0) and excited state (S1) after laser excitation (1). The wavepacket evolves on the S1 state (2) and returns to the S0 state via the conical intersection (C2-CoIn) (3). Bottom: ab initio potential energy surfaces for the ring-opening reaction. The two ground-state minima correspond to cyclohexadiene (CHD) and cZc-hexatriene (cZc-HT), respectively.


MPI for Chemistry, Atmospheric Chemistry Department

J. Lelieveld, M. van Aalst, C. Bruehl, P. Joeckel and B. Steil

Global Atmospheric Chemistry Modeling

Our research focuses on ozone and the role of radicals in photo-oxidation mechanisms, which play a central role in the self-cleansing capacity of the atmosphere. We develop highly sensitive instrumentation to measure trace gases and uncover the photochemical reaction chains. We have specialized in the construction of instrumentation for application on aircraft. Laser-optical, mass spectrometric and gas chromatographic techniques, for example, are used to determine the key breakdown products of hydrocarbons and radicals. Our studies include laboratory investigations, field measurements on aircraft and ships, and the use of satellite observations. We develop computer models to simulate the interactions of chemical and meteorological processes and investigate the influences of atmospheric composition changes on climate. The models are an important aid in the analysis of field and satellite remote sensing measurements, and they also serve to study feedback mechanisms.

Research Objectives: The earth's atmosphere contains 21% oxygen and is therefore uniquely oxidizing. The oxidation processes transform natural and anthropogenic gases into products that can be more easily removed from the atmosphere through wet and dry deposition. This mechanism removes a multitude of gases that would otherwise accumulate and create a hothouse effect – rather than a greenhouse effect – or be toxic for life. This self-cleansing capacity of the atmosphere is regulated by radical reaction chains that have some resemblance to those in combustion processes, in which hydrocarbons are oxidized to carbon dioxide and water vapor. On a global scale many gases, notably reactive carbon and nitrogen compounds, can have profound effects on the abundance of atmospheric oxidants. We investigate to what extent natural and anthropogenic emissions influence the self-cleaning capacity, and how they contribute to regional and global changes of our atmosphere and climate.

Computational Approach: Numerical modeling is important in the study of feedbacks in the atmosphere-climate system and in the assessment of global environmental change. We aim to advance the theoretical understanding of atmospheric transport, photochemistry and links with climate. We develop models ranging from zero-dimensional "box" models, which describe large sets of chemical reactions specific to a particular location or problem, to intermediate-complexity column models, to high-resolution (1x1 degree) global three-dimensional models. The global models describe the sources, reactions, transport and removal of gases and aerosols.

Figure 1: Our global model subdivides the atmosphere into grid cells at a horizontal and vertical resolution that can be selected depending on the application.

Project Description: Modular Earth Submodel System (MESSy): Recent developments towards earth system modeling follow a "top-down" approach, coupling existing models of different domains (e.g. land, ocean, atmosphere, biosphere) by means of a universal coupler. Yet, in order to study the interactions between bio-physico-chemical processes, the domain-specific model itself must be controllable in a transparent and user-friendly way. We have developed – not as an alternative, but rather as a complement – a "bottom-up" approach, providing a generalized interface structure for the standardized control of submodels and their interconnections. It has been successfully implemented into the general circulation model ECHAM5 (European Centre Model Hamburg, version 5), thus extending it into a fully coupled chemistry-climate model. ECHAM5/MESSy provides unique new possibilities to study feedback mechanisms. Examples include stratosphere-troposphere coupling, atmosphere-biosphere interactions, multi-component aerosol and multiphase chemistry processes.

Global ozone modeling: Ozone not only protects life on earth by absorbing ultraviolet radiation in the stratosphere, it also plays a key role in tropospheric oxidation processes. In some industrial regions it can be a noxious pollutant in photochemical smog. Its large-scale increase in the troposphere, caused by anthropogenic emissions, contributes to climate change. The ECHAM5/MESSy model simulates global meteorology, atmospheric chemistry and climate. We have included representations of natural and man-made sources of gases and aerosols, and of photochemical and deposition processes. It appears that in the "background" troposphere ozone transport from the stratosphere is quite important (stratosphere-troposphere exchange). On a global scale, however, in situ photochemical formation is the dominant ozone source in the troposphere.

Atmosphere-surface exchange processes: Soils, water surfaces and the vegetation represent important sources and sinks of atmospheric water and trace species. We develop model representations of these exchanges and micrometeorological processes for global models. These include natural sources of hydrocarbons and nitrogen oxides, and dry deposition of reactive gases and aerosols. It appears that the vegetation plays an important role in the control of hydrocarbons and in exchanges between the local (canopy layer) and regional boundary layer, especially over tropical rainforests.
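To illustrate the zero-dimensional "box model" idea mentioned under Computational Approach, the sketch below integrates a deliberately tiny, hypothetical two-reaction mechanism with a stiff ODE solver. The species, rate constants and reactions are invented for illustration and are not part of the MESSy system, which treats hundreds of species in the same manner.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 0D box model with an assumed mechanism:
#   A -> B        (k1, photolysis-like loss)
#   B + C -> A    (k2, recycling reaction)
k1, k2 = 1.0e-2, 5.0e-3          # assumed rate constants (arbitrary units)

def rhs(t, y):
    a, b, c = y
    r1 = k1 * a
    r2 = k2 * b * c
    return [-r1 + r2, r1 - r2, -r2]

y0 = [1.0, 0.0, 2.0]             # assumed initial mixing ratios
sol = solve_ivp(rhs, (0.0, 3600.0), y0, method="LSODA", rtol=1e-8)
print("final concentrations:", sol.y[:, -1])
```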

Publications:
• Traub, M. and J. Lelieveld, Cross-tropopause transport over the eastern Mediterranean, J. Geophys. Res. 108, 4712, doi:10.1029/2003JD003754 (2003).
• Lelieveld, J., F. J. Dentener, W. Peters and M. C. Krol, Hydroxyl radicals maintain the self-cleansing capacity of the troposphere, Atmos. Chem. Phys. Disc. 4, 3699-3720 (2004).
• Ganzeveld, L. N. and J. Lelieveld, Impact of Amazonian deforestation on atmospheric chemistry, Geophys. Res. Lett. 31, L06105, doi:10.1029/2003GL019205 (2004).


Figure 2: Atmospheric column ozone in April. The structures indicate the influence of synoptic weather systems, important in stratosphere-troposphere exchange.



Internal OrganisationThe RZG is organized in teams for• operation of supercomputers, robot attached

archive servers, general compute servers, data man-agement and operation of file servers, general usersupport, user administration,

• network planning, operation and support at IPP andGarching campus,

• application support in high-end parallel computing,• development of mass storage solutions for data from

fusion and satellite experiments,• application support for visualization in high-end com-

puting and experiments.

Normally RZG staff members are active in more thanone of the listed operational fields.

The IPP with two major fusion experiments on itscampus has also needs for a central IT center providingstandard tasks. Therefore the RZG hosts and supportsalso IPP-wide printer servers and special printers, ter-minal servers, license servers and other services.

Part of the RZG staff contributes to the work of aproject team responsible for the development of thedata acquisition system for the upcoming new Europeanfusion experiment W7-X in Greifswald, as well as forcompatible further developments at the ASDEXUpgrade experiment in Garching.

Collaborations inside and outside the Max Planck Society A close cooperation with the Max Planck Society’s „cus-tomers“ is considered the most important task. For RZGit is essential to provide services and resources accordingto the specific needs of its customers and to give out-standing support for optimum use of the resources.Especially in the field of high-end parallel computing, theapplication group has parallelized and optimized manyimportant codes in close collaboration with scientistsfrom astrophysics, plasma physics, and materials science.

In the field of data management initially a closecooperation with the Pittsburgh Supercomputer Center(PSC) had been started with main emphasis on multi-ple-resident AFS (MR-AFS). After the PSC droppeddevelopment and support for MR-AFS, the RZG tookthe lead for further developments. In the meantime newcollaborations with the Naval Research Lab/WashingtonDC and the TU Chemnitz have emerged in this field.

The Rechenzentrum Garching (RZG)

History, Main Duties, Organisationand Resources The Garching Computing Centre

Rechenzentrum Garching (RZG) is funded by the MaxPlanck Society and the Max Planck Institute for PlasmaPhysics (IPP) in Garching where it is located and alsoembedded in the IPP infrastructure. It has originallybeen established in the early nineteen sixties.

The RZG traditionally provides supercomputingpower and archival services for Max Planck InstitutesGermany-wide. Besides the operation of systems, appli-cation support is provided for all Max Planck Instituteswith high-end computing needs in the areas of materi-als science (such as solid state and polymer physics),theoretical chemistry, astrophysics, fusion and laserresearch (e.g. MPI for Solid State Physics, Fritz-Haber-Institute, MPI for Polymer Research, MPI forAstrophysics, MPI for Quantum Optics, MPI forGravitational Physics, Institute for Plasma Physics andothers). Large amounts of experiment data from thefusion experiments of the IPP (ASDUP, W7-AS andlater on W7-X) and satellite data of the MPI forExtraterrestrial Physics on the Garching campus areadministered and stored with high life-times.

The supervising organs of the RZG are the „Beirat“and the BAR („Beratender Ausschuss für Rechen-anlagen der MPG“) which are constituted by scientificdirectors of the Max Planck Society and external mem-bers in Universities. They provide independent advicein a variety of scientific and technical issues, especiallyin long-range plans and strategies to address the scien-tific aspects for the Max Planck Society. Beside thesetwo panels there is a Supercomputing Allocation Com-mittee responsible for policy definition and coordinationof the utilization of the supercomputing systems andadherent data storage.

Although the RZG covers most of the centrally needed services for high-performance computing within the Max Planck Society, there is another specialized central institution with partial funding by the Max Planck Society: the DKRZ (Deutsches Klimarechenzentrum) in Hamburg, which takes care of all the scientific computing related to climate research at the MPI for Meteorology.


The MiGenAS project is a joint activity of several Max Planck Institutes, the University of Bielefeld and the RZG in the area of bioinformatics. Based on the requirements formulated by the other partners, RZG staff is developing a workflow execution engine. This software greatly simplifies the researcher's task of interacting with a multitude of bioinformatics tools.

Currently, the RZG is deeply involved in the realization of the European DEISA (Distributed European Infrastructure for Supercomputing Applications) project, which started on 1 May 2004 and will last for 5 years as a Research Infrastructure project promoted by the European Union in the context of FP6. DEISA itself is a consortium of leading national supercomputing centres across Europe and aims at a jointly built and operated distributed tera-scale supercomputing facility.

A variety of different compute systems is operated at RZG, ranging from supercomputer systems for capability computing to mid-range systems for capacity computing, among them many dedicated institute or department servers.

Current Compute Systems at RZG (System / Size / Interconnect / Peak Performance and Main Memory / Purpose):

IBM pSeries 690 Cluster / 30 nodes, 32-way Power4 @ 1.3 GHz / IBM High Performance Switch / 5 Tflops, 2.25 TB main memory / Capability computing

IBM pSeries 575 Cluster / 86 nodes, 8-way Power5 @ 1.9 GHz / Gigabit Ethernet / 5 Tflops, 2.75 TB main memory / Capacity computing

Intel Xeon 32-bit Linux Cluster / 300 nodes, 2-way Intel Xeon @ 2.4, 2.8 and 3.06 GHz / Gigabit Ethernet / 3.3 Tflops, 0.63 TB main memory / Capacity computing

SUN Opteron 64-bit Linux Cluster / 112 nodes, 2-way Opteron @ 2.2 GHz / Infiniband / 0.5 Tflops, 0.4 TB main memory / Capacity computing

SUN Opteron 64-bit Linux Cluster / 5 nodes, 4-way Opteron @ 2.4 GHz / Infiniband / 0.1 Tflops, 64 GB main memory / Capacity computing

SGI Altix 3700 Bx2 / 64 processors, Itanium 2 @ 1.6 GHz / SGI NUMAlink 4 interconnect / 0.2 Tflops, 128 GB main memory / Capability computing


The supercomputing tradition reaches back more than four decades to the early nineteen-sixties, when an IBM 7090 system was installed in 1962 at the Institute for Plasma Physics (IPP).

With a peak performance of 100 kflop/s and 128 kB of main memory it was one of the best in the world. The current supercomputer of the Max Planck Society, an IBM pSeries cluster with a peak performance of 10 Tflop/s and with 5 TB of main memory, outperforms the IBM 7090 by a factor of one hundred million.

Other Max Planck Institutes also had access to the computer back in 1962. There was only one more computer of the same type in the Federal Republic, at the "German Computing Center" in Darmstadt. Seven years later an IBM 360/91 computer achieved an increase in performance by two orders of magnitude, the new pipelining concept being largely responsible for the improvement. This computer, which was installed at the IPP in 1969, was also used by other Max Planck Institutes, especially the Institute for Astrophysics. There was one other computer of the same type in Europe, namely at the Atomic Energy Commission in France. Another ten years passed before vector computer technology commenced its triumphant march in 1979, with the very first vector computer in continental Europe, a Cray-1 system, being installed at the Garching Computing Center (RZG) at the IPP. The advent of vector computer technology enabled plasma-, particle-, and astrophysicists to take great leaps forward, and the new technology began to establish itself in materials research. This Cray-1 was the first vector computer available for general research. An American guest researcher at the Max Planck Institute for Astrophysics, Larry Smarr, made use of the excellent computing conditions with great enthusiasm in 1982, and lamented the inadequate computer hardware available to civilian researchers in the USA. In 1983 he initiated the foundation of national supercomputer centres by the National Science Foundation. Vector computer technology then dominated supercomputing worldwide for almost two decades. With a Cray XMP/2 system in 1986 and a Cray YMP/4 system in 1991 (with two and four processors, respectively, and shared main memory), moderately higher-performance vector computers followed at the RZG. At the German Climate Computing Centre (DKRZ) in Hamburg, which was established in 1985 as an offshoot of the computing centre at the Max Planck Institute for Meteorology, a Cray-2 system was installed in 1987; this system was one of the most powerful in the world at that time. Then, at the beginning of the nineteen-nineties, came the breakthrough to the new technology of massively parallel computing. Not only was computing power increasing very significantly, but advantage could also be taken for the first time of extremely large amounts of main memory. The price to be paid for this progress was that appropriate parallel programs, far more complex than the vector computer programs, needed to be developed at great expense. Once again, the Max Planck Society was quick to face up to the new challenges: from 1991 on with a 64-processor nCUBE2 parallel computer at the RZG, and from 1992 with a 32-processor KSR1 parallel computer at the GWDG in Göttingen. The new technology was made available in the Max Planck Society from 1995 with a Cray T3D/128 system, and from 1996 with a Cray T3E system (816 processors following the final upgrade). Since then, the use of parallel computing in the theoretically oriented disciplines has increased considerably. Due to the continuing huge advances in computer technologies, this T3E system, which in June 1998 could boast seventh position in the world ranking list, was replaced in 2002 with an IBM pSeries system with an initial peak performance of 1 Teraflop/s, which was gradually expanded to 10 Teraflop/s, twenty times the power of the Cray T3E system. In 2003 a new NEC supercomputer for the climate researchers was also installed at the DKRZ. Advances in computer technology have thus given the scientists an increase in performance by eight orders of magnitude over the past forty-three years. Comparable "leaps forward" have also been made in certain fields as a result of new and more efficient algorithms. The increasingly complex simulation calculations and "virtual experiments" that these developments have made possible have proved extremely fruitful in terms of scientific advance.

Supercomputing History in the Max Planck Society


Main Computer Installations at the RZG/IPP

1962-1969: IBM 7090, 0.1 Mflop/s, 128 kB RAM
1969-1980: IBM 360/91, 15 Mflop/s, 2 MB RAM
1979-1986: Cray-1, 80 Mflop/s, 8 MB RAM
1986-1991: Cray-XMP/2, 256 Mflop/s, 32 MB RAM
1991-1997: Cray-YMP/4-64, 1.3 Gflop/s, 512 MB RAM
1991-1998: nCube2/64, 128 Mflop/s, 256 MB RAM
1995-1997: Cray T3D/128, 19.2 Gflop/s, 8 GB RAM
1996-2002: Cray T3E/816, 0.47 Tflop/s, 104 GB RAM
1999-2005: NEC SX-5/3C, 24 Gflop/s, 12 GB RAM
Since 2002: IBM p690 cluster, 5 Tflop/s, 2 TB RAM


In May 2004 the FP6 EU Integrated Infrastructure Initiative (I3) project DEISA started. DEISA stands for Distributed European Infrastructure for Supercomputing Applications (see www.deisa.org). The goal of the project is the advancement of supercomputing in Europe. Major supercomputer centres are coordinating their actions in the computational sciences and jointly deploy and operate a distributed terascale supercomputing infrastructure. In the first step DEISA is constituted from four homogeneous platforms at CINECA (Bologna, Italy), FZJ (Juelich, Germany), IDRIS (Orsay, France) and RZG (Garching, Germany). RZG belongs to the initiators of the project. Soon afterwards, platforms from CSC (Espoo, Finland), ECMWF (Reading, UK), EPCC (Edinburgh, UK), SARA (Amsterdam, NL), BSC (Barcelona, Spain), HLRS (Stuttgart, Germany) and LRZ (Munich, Germany) will be integrated. Service activities are carried out to integrate leading grid technologies. Joint research activities support the inclusion of leading applications in computational sciences and the involvement of leading computational scientists in Europe. A virtual private network connection has been realized through GEANT, with national access via the NRENs, on the basis of premium IP services and priority routing. The current 1 Gb/s connection per connected site will be extended to 10 Gb/s in phase with the forthcoming GEANT upgrade. RZG leads the DEISA service activity for global file systems and the joint research activities for materials science and plasma physics.

The DEISA Project

Initial DEISA members


Project Overview
In July 2002 several Max Planck groups(1) with a research focus on the analysis of microbial genomes, together with RZG, launched a joint project which has meanwhile received the name "MiGenAS" (Microbial Genome Analysis System). In an interdisciplinary approach this consortium is establishing an integrated software platform for bioinformatics tools and databases which is tailored to research with microbial genomes. Besides hosting dedicated computing and storage facilities for the consortium and providing user support on various levels, RZG contributes original software development, some aspects of which will be highlighted in this article.

Motivation
The amount of biological data stored in public databases has been growing exponentially for years. Numerous pieces of this vast source of information need to be considered and interrelated for almost any kind of analysis of biological systems, ranging from the assembly and functional annotation of a newly sequenced genome up to dynamical modelling on the system level (e.g., a biochemical reaction network, a biological cell or ultimately entire organisms). Besides computationally tackling the ubiquitous data-mining problems, predictive computational methods are also employed in order to advance theoretical insight, e.g. into problems which are not accessible by experiments with reasonable effort. Not surprisingly, the development and application of powerful software tools has become indispensable for biological research. But while the number and sophistication of bioinformatics software tools is rapidly increasing, scientists trying to utilize them still suffer from a number of general problems: while useful from an algorithmic point of view, many tools are difficult to handle, and not only for the non-expert. Frequently this is caused, for example, by non-standard user interfaces and the existence of many proprietary input/output formats. In those cases additional technical knowledge and tools for data conversion are needed. Working in such a heterogeneous environment can become enormously tedious and is also inherently error-prone. Moreover, rather than employing a single monolithic tool, many problems in bioinformatics (e.g., the construction of an evolutionary tree based on genomic information) require the setup of a whole processing pipeline by chaining a number of different tools and defining appropriate filters.

Obviously, the highlighted problems can severely hamper the productivity of the scientist by drawing the focus from his or her original field of expertise to programming tasks which he or she might not be adequately trained for. One is tempted to speculate that in the worst case decisive questions might not even be asked due to the presence of seemingly insurmountable technical prerequisites.

Software development at RZG
Driven by the specific demands of the scientists of the MiGenAS consortium and motivated by the lack of existing solutions, we have developed a framework for a workflow engine which integrates a growing selection of state-of-the-art bioinformatics software tools (taken from the public domain or contributed by scientists of the MiGenAS consortium) and databases into a single system. Based on this framework we provide a powerful web application (www.migenas.mpg.de) with a coherent and easy-to-use interface to a variety of tools and databases, tailored to the analysis of microbial genomes. Since the beginning of 2004 access to the services is provided to all scientists of the MPG. The central aim is to give researchers the possibility to efficiently investigate new and more complex scientific questions with a minimum of technical effort and to obtain structured answers within a reasonable amount of time.

In its present state the MiGenAS workflow engine offers the classic bioinformatics tasks, such as homology searches in databases of nucleic-acid or amino-acid sequences and the computation and validation of multiple sequence alignments.

Markus Rampp, Thomas Soddemann, Hermann Lederer

MiGenAS: Bioinformatics at RZG

Figure 1: Overview of the fundamental architecture and design of the MiGenAS workflow engine. Interaction of the different components is indicated by arrows. Only a limited, well-defined set of interfaces is directly exposed to client applications (i.e., a web browser or a custom, web-service oriented client application). The native interfaces of "legacy" software components, such as the batch system or stand-alone bioinformatics tools, are shielded from the other software tiers by wrapping them in appropriate interfaces.


Data Acquisition Systems
The XDV group at the Garching computing centre supports the experiments of the IPP with respect to data acquisition, data archiving and data processing. For the active experiment ASDEX Upgrade, mainly enhancements of the existing data acquisition system are carried out. The experiment W7-X is at the moment under construction in Greifswald. The XDV group is responsible for the development of a new data acquisition, processing and archiving system for W7-X. The concept of this system is strongly influenced by the design goals of the experiment, which include long-duration discharges of up to 30 minutes. To investigate the behavior of the machine as well as the plasma, numerous diagnostic systems will be implemented. In contrast to the existing experiments, which were only operated with short discharges, W7-X requires a new operation mode with continuous acquisition and archiving of extracted data. With the duration of the discharge, the amount of measured data grows enormously. This makes it necessary to implement novel methods for classification, compression and analysis of measured data.

In a continuously sampling data acquisition system, time plays an important role for the comparison of data from different sources. To solve this problem, all data items will be tagged with time stamps. A global system timer generates time and a master clock and distributes these values via a star-like network. All diagnostic and control systems are connected to this network and extract time and clock through a local timer. Experiment time is represented by a 64-bit counter whose lowest bit corresponds to 1 nanosecond. The resolution of the time is at the moment 10-20 nanoseconds, depending on the implemented hardware devices. For the transmission of the time values an accurate 25 MHz signal is used, which also serves as the master clock for the slave devices. On the local systems, time stamps are generated by a so-called "Time to Digital Converter" (UTDC/TTE) with the same acquisition clock rate used for the sampling of data. The UTDC was developed as a PCI card and tested in several prototype systems. To use the boards under various operating systems, drivers for VxWorks, Solaris, Windows NT/XP and Linux were developed. The central timing system will be designed and built by the control group of W7-X.
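The relation between the 25 MHz master clock and the 64-bit nanosecond counter described above can be illustrated with a small sketch. This is not W7-X software; the 25 MHz clock and the 1 ns least significant bit are taken from the text, while the function names and the notion of a trigger tick count are assumptions made purely for illustration.

```python
# Hypothetical sketch of the W7-X-style time base described above:
# experiment time is a 64-bit integer whose least significant bit is 1 ns,
# and the master clock distributed to the stations runs at 25 MHz.

MASTER_CLOCK_HZ = 25_000_000                      # 25 MHz master clock (from the text)
NS_PER_TICK = 1_000_000_000 // MASTER_CLOCK_HZ    # 40 ns per master-clock tick

def ticks_to_experiment_time(ticks: int) -> int:
    """Convert a master-clock tick count into the 64-bit nanosecond counter."""
    return (ticks * NS_PER_TICK) & 0xFFFFFFFFFFFFFFFF   # keep it a 64-bit value

def sample_time(ticks_at_trigger: int, sample_index: int, sample_rate_hz: float) -> int:
    """Time stamp (ns) of the n-th sample of a channel digitized at sample_rate_hz."""
    offset_ns = round(sample_index * 1e9 / sample_rate_hz)
    return ticks_to_experiment_time(ticks_at_trigger) + offset_ns

if __name__ == "__main__":
    # e.g. the 1000th sample of a 2 MHz channel, trigger seen at tick 10_000_000
    print(sample_time(10_000_000, 1000, 2e6))   # -> 400_000_000 + 500_000 ns
```

Note that one master-clock tick corresponds to 40 ns; the finer 10-20 ns resolution quoted above presumably comes from the timing hardware itself rather than from the clock period.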

In addition, it covers phylogenetic analysis as well as the modeling of the secondary and tertiary structure and biochemical function of proteins or parts of them. In particular, the user can seamlessly chain individual tools without having to take care of any format conversions or the tedious collection and interpretation ("parsing") of intermediate results. An outstanding feature is the ability to conveniently process large datasets (up to the scale of a complete microbial genome) with a minimum of user interaction. Importantly, sessions with the web application can be made persistent, allowing the scientist to easily and reliably reproduce, re-examine and reprocess obtained results. Compute-intensive tasks assembled by the workflow engine are dispatched to a MiGenAS-owned 56-CPU Xeon-based Linux cluster, allowing high-throughput analysis (most applications fall into the class of so-called "capacity computing").
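To make the notion of "chaining tools without manual format handling" concrete, the following fragment sketches the idea in a few lines. It is emphatically not the MiGenAS code (which is a J2EE web application, see below); the tool names and wrapper functions are invented for illustration only.

```python
# Hypothetical sketch of tool chaining in a workflow engine (not the MiGenAS API).
# Each step wraps an external tool, passes its output on in a common text
# representation, and hands it to the next step, so the user never parses files.
import subprocess

def run_tool(cmd: list[str], stdin_text: str = "") -> str:
    """Run an external bioinformatics tool and return its standard output."""
    result = subprocess.run(cmd, input=stdin_text, capture_output=True,
                            text=True, check=True)
    return result.stdout

def pipeline(query_fasta: str) -> str:
    """Toy three-step pipeline: homology search -> alignment -> tree building.
    The tool names ('search_db', 'align', 'build_tree') are placeholders."""
    hits = run_tool(["search_db", "--format", "fasta"], stdin_text=query_fasta)
    alignment = run_tool(["align", "--in-format", "fasta"], stdin_text=hits)
    tree = run_tool(["build_tree"], stdin_text=alignment)
    return tree   # e.g. a Newick string, stored with the persistent session
```

In the real engine, each such step additionally runs as a managed, persistent job on the cluster; the sketch only conveys why the user no longer has to convert formats by hand.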

The software architecture of the workflow engine employs a highly modular, object-oriented design (see Figure 1). This enhances the maintainability of such a system and gives the necessary flexibility for continually adapting and extending the functionality of the framework in response to the growing and changing demands of the scientists working with it. In fact, we regularly deploy novel features in order to support newly emerging use cases which are worked out in close collaboration with the end users.

The actual implementation of the workflow engine is based on the Java 2 Platform, Enterprise Edition (J2EE), which guarantees seamless portability across different operating systems. Compliance with well-established software standards and attention to major worldwide technological developments allow us to immediately embark on promising new trends such as the upcoming computing grids. By this means the scope and functionality of the provided services can be substantially extended in the most flexible and efficient way. In particular, the general framework architecture is not limited to the class of applications discussed so far. The system can also serve as the basis for a more general workflow engine, which, according to the emerging grid-computing community, is a central challenge of "the Grid".

1. The project members are: MPI of Biochemistry, Dept. Oesterhelt; MPI for Developmental Biology, Dept. Lupas and AG Schuster; MPI for Marine Microbiology, Dept. Amann; MPI for Computer Science, Dept. Lengauer. The Center for Biotechnology (CeBiTec) at the University of Bielefeld joins as a partner.


Schematic representation of the data acquisition system of Wendelstein 7-X.

An object database (Objectivity) will be used to permanently store measured data as well as configuration parameters needed for the setup of discharges, data acquisition and data analysis. To simplify the access to the object database, a graphical user application was developed to satisfy the urgent needs of the XDV group for working with setup parameters. The general user of the experiment will need a more sophisticated tool to work with the large number of setup parameters in the database. This tool is at the moment in an early development state. It will also be needed to define discharge programs. A program is a list of sequential discharge segments that have to be defined in advance and stored in the database. The control system is responsible for the time sequence of the segments.

The continuous storage of data was achieved by implementing a novel data stream model, where data and time are stored in packets in temporal order. The scalability of the database allows many parallel streams of data to be transferred to the storage. The enormous amount of data to be processed concurrently certainly requires the use of several database servers.
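A minimal sketch of such a time-ordered packet might look as follows; the packet layout (stream id, first time stamp, sample count, samples) is an assumption for illustration and not the actual W7-X stream format.

```python
# Hypothetical sketch of a time-ordered data stream packet (not the W7-X format).
# Each packet carries a stream id, the 64-bit time stamp of its first sample and
# the samples themselves, so packets from parallel streams can be merged by time.
import struct

HEADER = struct.Struct("<HQI")   # stream id (16 bit), t0 in ns (64 bit), n samples

def pack_packet(stream_id: int, t0_ns: int, samples: list[float]) -> bytes:
    header = HEADER.pack(stream_id, t0_ns, len(samples))
    payload = struct.pack(f"<{len(samples)}f", *samples)
    return header + payload

def unpack_packet(buf: bytes) -> tuple[int, int, list[float]]:
    stream_id, t0_ns, n = HEADER.unpack_from(buf, 0)
    samples = list(struct.unpack_from(f"<{n}f", buf, HEADER.size))
    return stream_id, t0_ns, samples
```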

Since W7-X is designed for long discharges on the order of 30 minutes, important discharge parameters as well as important measured signals have to be supervised during the measurement phase. The display of these signals should be near real time. The "monitoring" system designed for W7-X is a three-tier model. The sampled data in the diagnostic systems will be reduced (in time and channels) synchronously and sent to one or more monitor server systems. Transmission of data is accomplished by the IP multicast protocol, to avoid sending the same data several times. The monitor server will analyze the incoming signals and produce physically meaningful values, which are again sent by IP multicast to the network. Every workstation on the network can be used to display these signals (monitor client). To view various kinds of signals, such as time traces, combined plots or video streams, different "visualizers" will be made available.

Sometimes it is necessary to exchange measured data for feedback purposes between data acquisition stations and control stations. To accomplish this, a specialized network with a specially designed real-time protocol on an Ethernet basis will be implemented. The items exchanged consist of a time stamp and a structured data part. There is always only one producer of a signal and one or more consumers, which again leads to a multicast protocol on the network. In the same manner this method is used for the transport of events that signal various states in the whole system. The real-time protocol is implemented for VxWorks, Windows NT/XP and Linux.
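The one-producer, many-consumers pattern maps naturally onto UDP multicast, which the following sketch illustrates with standard sockets. The group address, port and message layout are invented; the actual W7-X real-time protocol is a dedicated Ethernet-based protocol and is not reproduced here.

```python
# Hypothetical multicast sketch (illustrative only, not the W7-X real-time protocol).
# One producer sends a signal sample tagged with a 64-bit ns time stamp; any number
# of consumers can join the multicast group and receive the same datagram.
import socket
import struct

GROUP, PORT = "239.1.2.3", 50000          # invented multicast group and port
MSG = struct.Struct("<QHf")               # time stamp (ns), signal id, value

def send_sample(t_ns: int, signal_id: int, value: float) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(MSG.pack(t_ns, signal_id, value), (GROUP, PORT))
    sock.close()

def receive_samples():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = sock.recvfrom(MSG.size)
        yield MSG.unpack(data)            # (t_ns, signal_id, value)
```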

When W7-X starts operation there will be more than 100 data acquisition and control systems at work, all running different applications. Since most of the stations are located near the experiment, there is no possibility to operate the stations by means of a console. It is therefore absolutely necessary to provide an interface to remotely control the systems from a central position. This is also true for messages originating from the stations. The message logging system designed for this purpose is based on a network logging mechanism.


Application Support
For high-performance computing, qualified application support is of key importance. When massively parallel computer architectures started to provide unprecedented computing power more than a decade ago, this paradigm transition implied a real challenge for application support. To meet this challenge, a new application support group was established at the RZG, first parallelizing or helping to parallelize codes for the nCUBE2 system, then for the Cray T3D and T3E systems.

Nowadays mainly the large IBM pSeries 690 based supercomputer and the Linux clusters have to be supported. Tasks of the application support team include project consulting, code parallelization, code optimization, code restructuring and adaptation, performance tests, debugging and bug reporting, and contacts to software developers of various companies.
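As an illustration of what "code parallelization" typically means in practice, the fragment below distributes a simple domain-decomposed computation over MPI ranks using mpi4py; it is a generic textbook-style pattern, not one of the production codes maintained at RZG.

```python
# Generic domain-decomposition sketch with mpi4py (not an actual RZG code).
# Each rank works on its own slice of the global index range; a reduction
# combines the partial results - the basic pattern behind many parallelized codes.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 10_000_000                       # global problem size
lo = rank * N // size                # this rank's slice of the index range
hi = (rank + 1) * N // size

x = np.arange(lo, hi, dtype=np.float64)
local_sum = np.sum(np.sin(x))        # purely local work, no communication

total = comm.allreduce(local_sum, op=MPI.SUM)   # one collective at the end
if rank == 0:
    print(f"sum over {N} terms computed on {size} ranks: {total:.6f}")
```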

The team consists of scientists (physicists, mathematicians and computational scientists) with an academic background (PhDs) in materials science, astrophysics, plasma physics, biophysics and computational biology, and with experience in high-performance computing.

Groups at RZG

All messages are sent by IP multicast to the network and collected by one or more message servers for appropriate handling, viewing and storage.

The above-mentioned segment control delivers, according to the physics program, sequential information to change the behavior of the control devices. The information distributed consists of a time mark and a special segment id. One segment is always valid until a new segment is activated. The change of a segment is executed by all stations synchronously. The timing is synchronized by the central timer system. The interpretation of what has to be done on a segment change is accomplished by the station itself. For this purpose the whole list of segments that are necessary during the experimental phase has to be loaded in advance. Every loaded segment has a special identification and is activated on the reception of a segment-change command with this id. The description of a segment typically consists of a set of parameters that are needed to set up the hardware equipment for a special measurement task.
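The segment mechanism, i.e. parameter sets that are pre-loaded and later activated synchronously by id, can be pictured with the following sketch; the class, the parameter contents and the callback are invented for illustration and do not reflect the actual W7-X control software.

```python
# Hypothetical sketch of pre-loaded discharge segments activated by id
# (illustration of the mechanism described above, not W7-X software).

class SegmentTable:
    def __init__(self):
        self._segments = {}      # segment id -> parameter set for the hardware
        self._active_id = None

    def load(self, segment_id: int, parameters: dict) -> None:
        """Load a segment definition before the experimental phase starts."""
        self._segments[segment_id] = parameters

    def on_segment_change(self, segment_id: int) -> None:
        """Called when the central timer distributes a segment-change command.
        The station itself decides how to reconfigure its hardware."""
        params = self._segments[segment_id]     # must have been loaded in advance
        self._active_id = segment_id
        self._apply(params)

    def _apply(self, params: dict) -> None:
        # placeholder: would program digitizers, amplifiers, etc.
        print(f"segment {self._active_id} active, parameters: {params}")

# usage: preload all segments, then switch synchronously on command
table = SegmentTable()
table.load(1, {"sample_rate_hz": 2e6, "gain": 10})
table.load(2, {"sample_rate_hz": 1e5, "gain": 1})
table.on_segment_change(1)
```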

The whole system is still under development and has been verified several times by prototype implementations. Recently the system was demonstrated in its main features, with nearly all essential components, in a presentation given to a larger audience in Greifswald.

The members of the application support group are Hermann Lederer, Renate Dohmen, Roman Hatzky, Werner Nagel, Markus Rampp, Thomas Soddemann and Reinhard Tisma.

Group members often work in close cooperation with scientists from different institutes in small teams for code parallelization and optimization, contributing complementary skills to the problem solution. The scientific end user knows best the functionality a code delivers or should deliver, which restrictions have to be obeyed to allow for further functionality enhancements, and the expected growth rate of the problems to be solved with the code in terms of main memory and CPU time consumption. An application support group member knows best the target computer architectures and their programming. With this approach several important codes have been accompanied from the beginning through several functional generations and computer generations.

Current general tasks include support for code portability without performance degradation, uniformity of code package usage on different architectures, and code usage in grid-like environments such as the one being built up in the EU FP6 project DEISA (www.deisa.org).


Application Support Group


Within DEISA, RZG takes the task leadership for applications in materials science and plasma physics.

Other tasks arise from computational biology: by supporting automated workflows for sets of functionally related codes, their usage efficiency can be substantially improved. This allows for qualitatively and quantitatively new approaches (www.migenas.mpg.de). Details are described in the respective article about bioinformatics.

Data Management
Data management in a modern research environment poses some special challenges: the data should be accessible world-wide on all platforms, but with fine-grained access permissions. This requires a world-wide secure file system with personal authentication of each user. At the same time data access should perform very well on local batch HPC clusters, and the data location should be fully transparent to the user, keeping file paths constant world-wide over the lifetime of the data. These requirements, along with scalability and fast-growing data volumes, led the RZG to a rather unique solution of a world-wide shared file system combined with hierarchical storage management. The data management group of the RZG provides the following storage and backup solutions:
• On all workstations and HPC clusters the user has the same home directory, which is located in AFS. AFS is a secure, global, distributed, and scalable file system which allows access on nearly all platforms (Unix, Linux, Windows, MacOS). The RZG plays an active role in AFS development: large file support in the OpenAFS client and the OpenAFS port for AIX 5.2 were done here, and acceleration of AFS access in trusted clusters is being developed. Future developments to use object storage for AFS are under way in a collaboration with CERN and CASPUR.
• Even if disk space is getting cheaper all the time, good reasons remain to use Hierarchical Storage Management (HSM) systems. In some areas data growth is still too large to accommodate all data on disk, but the more important point is data security: double tape copies at separate locations are still much more secure than any RAID system. HSM systems provide transparent data migration between disk and tape in robot systems. The RZG currently uses two HSM systems in parallel: DMF on an SGI Origin 2000 along with StorageTek robot systems, and TSM-HSM on an IBM Regatta system with an ADIC robot system. The HSM systems are used either directly (TSM-HSM on the Regatta systems) or indirectly via multi-resident AFS (MR-AFS). The additional mapping of AFS file space to HSM file space done by MR-AFS allows paths to be kept constant over very long periods of time, because the underlying HSM system can be replaced transparently by a new one.
• The data stored in the HSM systems at the RZG are either large files from experiments or HPC computing, or, increasingly, data with "eternal life" which represent a cultural heritage, coming from the phototecs in Florence and Rome and from the MPI for Psycholinguistics in Nijmegen. The Leibniz-Rechenzentrum (LRZ) and the RZG exchange their second tape copies of TSM archives.
• AFS and local file systems on PCs and workstations are backed up into TSM. TSM allows users to restore old versions of their files, but also whole file systems in the case of broken disks.

Compute Systems
The high-performance systems group is responsible for the operation and administration of the supercomputers at the Computing Centre Garching (RZG). Currently there is an IBM p690 Regatta system installed at the RZG with a peak performance of 5 Tflop/s and a total main memory of 2.25 TB. The system is built of 29 compute nodes with 32 processors each and 2 I/O nodes with 16 processors each. The nodes are connected to the High Performance ("Federation") Switch of IBM, which allows fast communication between the nodes. Additionally, an IBM p575 based cluster has been put into operation for capacity computing. This system is composed of 8-way Power5 nodes (5 Tflops peak, 2.75 TB main memory). About 100 TB of disk space are available on the HPC systems. In addition, HSM (Hierarchical Storage Management) is in operation, which automatically migrates or retrieves files to or from tape.

An NEC SX-5 vector supercomputer, the first one installed in Europe, was taken out of service in spring 2005 after nearly 6 years of operation.


The members of the HPC group are Dr. Ingeborg Weidl and Gisela Bacmeister, supported by Andreas Schott.

Capacity compute systems
Besides the high-performance systems, the RZG operates several Linux clusters which are partly owned by and dedicated to individual Max Planck Institutes, and partly open to all users of the RZG. Most of the systems consist of Xeon-based double-processor nodes with Gigabit Ethernet connection. They are especially suitable as high-throughput machines for sequential codes, but likewise adequate for parallel applications with moderate communication requirements. For communication-intensive parallel programs which use only a small number of processors, a 64-bit Opteron cluster with the rather strong Infiniband communication network has recently been installed at the RZG.

The mandatory services for managing the backbone and for providing fast, solid and secure access to the internet are deployed and realized by the network group. In addition, all MPIs in the metropolitan area of Munich use the internet access equipment of the RZG. This is accomplished by being part of the powerful Munich University Network ("Muenchner Hochschulnetz"), a network connecting all the universities and research institutes in and around Munich with speeds of up to 10 Gbit/s. The two branches of the Max Planck Institute for Plasma Physics (IPP) in Garching and Greifswald need to be tightly coupled via the German research network GWiN. Network functions like tunnelling and address translation are used in order to build a single intranet with minor or no differences for applications like data access and backup.

Task overview:
The Local Area Network (LAN) is the core of data communications; therefore many tasks are mandatory to keep it running permanently:
• Planning (active and passive components)
• Implementing network protocols (e.g. routing)
• Management (troubleshooting, extending)
• Maintaining Internet services (servers for Web, email, FTP, DNS, DHCP, ...)
• WLAN integration (implementing wireless network access)
• Voice communications
• Deploying new technologies

System overview:
Our LAN is based on a collapsed backbone consisting of very powerful switches/routers (Foundry BigIron), omitting third-level switches and thereby optimising throughput, management, flexibility and voice readiness of the network and avoiding bottlenecks.

Videoconferencing (VC)
System overview:
The Videoconferencing (VC) group of the RZG supports the IPP, numerous other Max Planck Institutes, the General Administration of the MPG, the Programme Management of the Helmholtz Association and EFDA associates (European Fusion Development Agreement). This includes about 50 VC group and room systems and some 50 desktop systems. About half of these are registered with the RZG H.323 Gatekeeper and Proxy (GNU open source) in the RZG's DMZ. The data volume is ~200 GB/month with ~20 conferences/day. For dialling, GDS/E.164 numbering is obligatory, such that international (Internet2, ViDeNet) partners can be called without problems. The dialling scheme is supported by the German DFN videoconferencing service (DFNVC), with DFN being the provider of Germany's scientific research gigabit network (GWiN).

Data Management Group

90 T H E R E C H E N Z E N T R U M G A R C H I N G ( R Z G )H

IG

HP

ER

FO

RM

AN

CE

CO

MP

UT

IN

G

Software environment
For all compute systems, software maintenance is a demanding task. Besides licensed software, which has to be installed and configured, there are a lot of public domain products which have to be compiled and installed for all sorts of architectures and compilers supported by the RZG. Especially on the Linux clusters, MPI communication libraries have to be provided. For all systems the RZG offers common sequential and parallel numerical libraries, such as NAG, LAPACK, ScaLAPACK and BLAS, possibly in addition to proprietary libraries, to facilitate portability between architectures.
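Relying on the standard library interfaces rather than on vendor-specific calls is what makes user codes portable across these architectures. As a minimal illustration, and assuming a Python environment with NumPy/SciPy as wrappers around the system's BLAS/LAPACK, solving a dense linear system looks the same on every platform:

```python
# Minimal illustration of calling LAPACK/BLAS through a standard interface
# (here via NumPy/SciPy, which wrap the tuned BLAS/LAPACK of the host system).
# The point is portability: the same call runs unchanged on any of the clusters.
import numpy as np
from scipy.linalg import lu_factor, lu_solve   # LAPACK getrf/getrs underneath

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 1000))
b = rng.standard_normal(1000)

lu, piv = lu_factor(A)        # LU factorization (dgetrf)
x = lu_solve((lu, piv), b)    # triangular solves (dgetrs)

print("residual:", np.linalg.norm(A @ x - b))
```

On each cluster such a wrapper would be built against the locally tuned BLAS/LAPACK, so the same script benefits from platform-specific optimizations without source changes.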

Network
At the research site in Garching all four Max Planck Institutes (MPIs) utilize one IP-based network, which is further divided into subnetworks.



For multi-point conferences the MCU pool of DFNVC is used.

Task overview:
The VC group runs 3 big lecture halls and 10 meeting rooms of various sizes on both sites of the IPP, namely Garching (the RZG site) and Greifswald (branch institute of the IPP). Currently all systems are H.323 standard compatible, thus guaranteeing simple and stable connectivity. New standards are tested thoroughly before being recommended to the users, and new hardware and software systems from all major providers are investigated. New and non-standard approaches are explored (Access Grid, etc.). The members of the VC group, headed by Dr. Ulrich Schwenn, come from different departments of the RZG and the IPP.

Experimental Data Processing (XDV)
The XDV group supports the experiments of the IPP (Institute for Plasma Physics) with respect to data acquisition, data archiving and data processing. At the moment there is a collaboration with the existing experiment ASDEX Upgrade in Garching, mainly for the discussion of enhancements of the existing data acquisition system. On the other hand, the group is responsible for the development of the data processing system for Wendelstein 7-X. W7-X is under construction and is being built up in Greifswald. The members of the group come from the computer group in Greifswald (3 persons) and from the computing centre in Garching (4 persons). The group is headed by Dr. Peter Heimann from the computing centre Garching. Details are described in the corresponding article about data acquisition systems.

Desktop Systems
Task overview:
The following services are provided by the desktop system group:
• Setting up a team of experts for (MS Windows) PC hardware and software (technical support in case of problems)
• Centralising the ordering of PCs and defining a standard configuration
• Planning, testing and implementing central services like Active Directory, securing data, electronic mail, printer support, distributing software and OS migration.
Besides these basic services, four to five apprentices per year are also instructed and trained as "Fachinformatiker".

System overview:
For the IPP and its about 1000 Windows-based PCs we adopted the Microsoft concept of an Active Directory, which gives us numerous possibilities for administering and securing users, data and resources.

The Greifswald branch of the RZG
The Greifswald branch of the Garching Computing Centre (RZG) has a staff of ten persons and is active in two main fields:

About one half of the team is responsible for basic IT support on site, like server installations, network services, office computers (MS Windows, UNIX), distribution of standard software and the organization of remote participation of other research groups.

The other part of the team is involved in the development, support and maintenance of software and services for experimental data acquisition, data analysis and simulation. Most of this work is dedicated to the W7-X experiment, where continuous operation and data taking is planned and high data rates are expected, which have to be processed and stored while the experiment is running.

Multi-site video conference scenario

Page 92: High Performance Computing in the Max Planck Society€¦ · puting applications in the Max Planck Society, illus-trated by detailed descriptions of scientific projects currently

Advisory Boards

The RZG is regularly supervised by a scientific steering council. Members are appointed by the President of the Max Planck Society. For major investments in new compute resources, the Advisory Board of the Max Planck Society for compute infrastructure investments (BAR) additionally has to be consulted. This board also reports directly to the President of the Max Planck Society. Furthermore, all significant HPC investments in Germany are centrally coordinated by a subcommittee of the Scientific Advisory Council of the German Government (Wissenschaftsrat).

The director of the RZG is directly supervised by the chairman of the directorate of the Max Planck Institute for Plasma Physics, Prof. A. Bradshaw. Additionally, the User Group Executive Committee advises the RZG twice a year.

Scientific Steering Council of the RZG
The Scientific Steering Council advises the RZG in bi-annual meetings on long-range plans, general strategies, the task profile, budget priorities, and technical directions.

Members
Prof. Dr. Siegfried Bethke, Munich
Prof. Dr. Wolfgang Hillebrandt, Garching
Prof. Dr. Karl L. Kompa, Garching
Prof. Dr. Kurt Kremer, Mainz
Prof. Dr. Karl Lackner, Garching
Prof. Dr. Eugen Morfill, Garching
Prof. Dr. Jürgen Nührenberg, Greifswald
Prof. Dr. Matthias Scheffler, Berlin
Prof. Dr. Walter Thiel, Mülheim/Ruhr
Prof. Dr. Huajian Gao, Stuttgart

Scientific Computing Advisory Board of the Max Planck Society (BAR)
The BAR provides advice and dedicated funding to all institutes of the Max Planck Society on a variety of complex scientific and technical computing issues. The recommendations of the BAR include advice on long-range plans, priorities, and strategies to address the scientific aspects of advanced scientific computing more effectively. Moreover, the BAR helps to keep the balance between locally and centrally provided resources.

Members
Prof. Dr. Hans-Wolfgang Spiess, Mainz (Chairman)
Prof. Dr. Arndt Bode, TU Munich
Prof. Dr. Heinrich Bülthoff, Tübingen
Dr. Armin Burkhardt, Stuttgart
Dr. Helmut Hayd, Leipzig
Prof. Dr. Heinz-Gerd Hegering, Munich
Prof. Dr. Bernhard Neumair, Göttingen
Prof. Dr. Jürgen Renn, Berlin
Prof. Dr. Jan-Michael Rost, Dresden
Prof. Dr. Walter Stühmer, Göttingen
Prof. Dr. Walter Thiel, Mülheim/Ruhr
Dr. Manuela Urban, Berlin
Prof. Dr. Martin Vingron, Berlin
Prof. Dr.-Ing. Gerhard Weikum, Saarbrücken
Dipl.-Ing. Peter Wittenburg, Nijmegen

Supercomputing Allocation Committee of RZG
The Supercomputing Allocation Committee is responsible for setting the policies associated with the utilization of the supercomputing resources available to Max Planck researchers and for coordinating challenging computational projects.

Members
Prof. Dr. Wolfgang Hillebrandt, Garching
Prof. Dr. Kurt Kremer, Mainz
Prof. Dr. Karl Lackner, Garching
Prof. Dr. Eugen Morfill, Garching
Prof. Dr. Matthias Scheffler, Berlin

User Group Executive Committee of RZG
This Committee provides valuable user feedback on the usage of the various systems and services provided.

Members
Dr. Andreas Bergmann, Garching
Harald Bopp, Mainz
Dr. Wolfgang Brinkmann, Garching
Dr. Alexander Hartmaier, Stuttgart
Dr. Michael Drevlak, Greifswald
Günther Franz, Martinsried
Dr. Christa Hausmann-Jamin, Potsdam
Peter Martin, Garching
Dr. Ewald Mueller, Garching
Wolfgang Mueller, Heidelberg
Albrecht Preusser, Berlin
Peter Schacht, Munich
Dr. Lutz Schaefer, Mainz
Dr. Ruben Schattevoy, Martinsried
Dr. Wolfgang Schneider, Garching
Ute Schneider-Maxon, Garching
Antje Schuele, Martinsried
Torsten Stuehn, Mainz
Dr. Reinhard Volk, Garching
