

web.eecs.umich.edu/~qstout/pap/CISE04.pdf

Numerical simulation and modeling are increasingly essential to basic and applied space-physics research for two primary reasons. First, the heliosphere and magnetosphere are vast regions of space from which we have relatively few in situ measurements. Numerical simulations let us “stitch together” observations from different regions and provide data-interpretation insight to help us understand this complex system’s global behavior. Second, models have evolved to where their physical content and numerical robustness, flexibility, and improving ease of use inspire researchers to apply them to intriguing scenarios with new measures of confidence.

Indeed, many shortcomings and questions remain for even the most advanced models in terms of inclusion of important physical mechanisms, the spatial and temporal domains they can address, and thorny technical numerical issues to be dispatched. Nonetheless, over the last several years modeling has crossed a threshold, making the transition from the arcane preserves of specialists to practical tools with widespread applications.

Global computational models based on first-principles mathematical physics descriptions are essential to understanding the solar system’s plasma phenomena, including the large-scale solar corona, the solar wind’s interaction with planetary magnetospheres, comets, and interstellar medium, and the initiation, structure, and evolution of solar eruptive events. Today, and for the foreseeable future, numerical models based on magnetohydrodynamics (MHD) equations are the only self-consistent mathematical descriptions that can span the enormous distances associated with large-scale space phenomena. Although providing only a relatively low-order approximation to actual plasma behavior, MHD models have successfully simulated many important space-plasma processes and provide a powerful means for significantly advancing process understanding.

SOLUTION-ADAPTIVE MAGNETOHYDRODYNAMICS FOR SPACE PLASMAS: SUN-TO-EARTH SIMULATIONS

TAMAS I. GOMBOSI, KENNETH G. POWELL, DARREN L. DE ZEEUW, C. ROBERT CLAUER, KENNETH C. HANSEN, WARD B. MANCHESTER, AARON J. RIDLEY, ILIA I. ROUSSEV, IGOR V. SOKOLOV, AND QUENTIN F. STOUT, University of Michigan
GÁBOR TÓTH, University of Michigan and Eötvös University, Budapest, Hungary

1521-9615/04/$20.00 © 2004 IEEE
Copublished by the IEEE CS and the AIP
COMPUTING IN SCIENCE & ENGINEERING, MARCH/APRIL 2004
FRONTIERS OF SIMULATION

Space-environment simulations, particularly those involving space plasma, present significant computational challenges. Global computational models based on magnetohydrodynamics equations are essential to understanding the solar system’s plasma phenomena, including the large-scale solar corona, the solar wind’s interaction with planetary magnetospheres, comets, and interstellar medium, and the initiation, structure, and evolution of solar eruptive events.

Space scientists have used global MHD simulations for over 25 years to simulate space plasmas. Early global-scale 3D MHD simulations focused on simulating the solar wind–magnetosphere system. Since then, researchers have used MHD models to study a range of solar system plasma environments. During the past 25 years, global MHD model numerical approaches have evolved in several ways. Early models were based on relatively simple central differencing methods, in which physical quantities are expressed as centered finite differences. Later models take advantage of the so-called high-resolution approach, in which a nonlinear switch, or limiter, blends a high-order scheme with a first-order scheme.1 In more advanced models, the limited approximation is combined with an approximate Riemann solver. Some models use an approximate Riemann solver based on the five waves associated with the fluid dynamics system, and treat the electromagnetic effects using the constrained-transport technique.2 The latest models use approximate Riemann solvers based on the waves associated with the full MHD system.3

In this article we outline elements of a modern, solution-adaptive MHD code that we use at the University of Michigan for space-plasma simulations.

MHD Equations

MHD describes the time evolution of conducting fluids. The basic MHD equations combine Maxwell’s equations to describe electromagnetic fields and the conservation laws of hydrodynamics. The sources of the electromagnetic fields (electric charge and current densities) are calculated self-consistently with the fluid motion.

Classical MHD

We can write the governing equations for an ideal, nonrelativistic, compressible plasma in many forms. While the MHD equations’ different forms describe the same physics at the differential equation level, there are important practical differences when we solve the various formulations’ discretized forms.

According to the Lax–Wendroff theorem,4 we can expect only conservative schemes to get the correct jump conditions and propagation speed for a discontinuous solution. This fact is emphasized much less in global magnetosphere simulation literature than the more-controversial divergence of B issue. In some test problems, the nonconservative discretization of the MHD equations can lead to significant errors, which do not diminish with increased grid resolution.

Primitive variable form. In primitive variables, we can write the governing equations of ideal MHD, which represent a combination of Euler’s gasdynamics equations and Maxwell’s electromagnetics equations:

\[ \frac{\partial \rho}{\partial t} + \mathbf{u}\cdot\nabla\rho + \rho\,\nabla\cdot\mathbf{u} = 0 \qquad (1) \]

\[ \rho\,\frac{\partial \mathbf{u}}{\partial t} + \rho\,\mathbf{u}\cdot\nabla\mathbf{u} + \nabla p - \frac{1}{\mu_0}\,\mathbf{j}\times\mathbf{B} = 0 \qquad (2) \]

\[ \frac{\partial p}{\partial t} + \mathbf{u}\cdot\nabla p + \gamma p\,\nabla\cdot\mathbf{u} = 0 \qquad (3) \]

\[ \frac{\partial \mathbf{B}}{\partial t} + \nabla\times\mathbf{E} = 0, \qquad (4) \]

where µ0 and γ represent the magnetic permeability of a vacuum and the specific heat ratio of the gas. In addition, current density j and electric field vector E are related to magnetic field B by Ampère’s law and Ohm’s law:

\[ \mathbf{j} = \nabla\times\mathbf{B} \qquad (5) \]

E = –u × B. (6)
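On a uniform Cartesian grid, these two relations are straightforward to evaluate with central differences. The sketch below is ours, not from the article: `curl` and `electric_field` are hypothetical helper names, and NumPy's `gradient` supplies the second-order differences.

```python
import numpy as np

def curl(Fx, Fy, Fz, dx, dy, dz):
    """Central-difference curl of a vector field sampled on a uniform grid,
    array axes ordered (x, y, z)."""
    dFz_dy = np.gradient(Fz, dy, axis=1)
    dFy_dz = np.gradient(Fy, dz, axis=2)
    dFx_dz = np.gradient(Fx, dz, axis=2)
    dFz_dx = np.gradient(Fz, dx, axis=0)
    dFy_dx = np.gradient(Fy, dx, axis=0)
    dFx_dy = np.gradient(Fx, dy, axis=1)
    return dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy

def electric_field(u, B):
    """Ideal Ohm's law, E = -u x B (Equation 6)."""
    return -np.cross(u, B)

# For u along x and B along y, E points along -z:
E = electric_field(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```

Because `np.gradient` is exact for linear fields, the discrete curl of B = (−y, x, 0) recovers the analytic value (0, 0, 2) everywhere on the grid.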

Gasdynamics conservation form. For one popular class of schemes, we write the equations in a form in which the gasdynamic terms are in divergence form, and the momentum and energy equations’ electromagnetic terms are source terms. This gives

\[ \frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0 \qquad (7) \]

\[ \frac{\partial (\rho\mathbf{u})}{\partial t} + \nabla\cdot(\rho\mathbf{u}\mathbf{u} + p\mathbf{I}) = \frac{1}{\mu_0}\,\mathbf{j}\times\mathbf{B} \qquad (8) \]

\[ \frac{\partial E_{gd}}{\partial t} + \nabla\cdot\left[\mathbf{u}\left(E_{gd} + p\right)\right] = \frac{1}{\mu_0}\,\mathbf{u}\cdot(\mathbf{j}\times\mathbf{B}) \qquad (9) \]

for the hydrodynamic flow, and

\[ \frac{\partial \mathbf{B}}{\partial t} + \nabla\times\mathbf{E} = 0 \qquad (10) \]

for the magnetic field’s time evolution. In these equations, I is the identity matrix and Egd is the gasdynamic total energy, given by

\[ E_{gd} = \frac{1}{2}\rho u^2 + \frac{p}{\gamma - 1}. \qquad (11) \]

You can see that the source terms in these equations are entropy preserving.

Fully conservative form. The fully conservative form of the equations is

\[ \frac{\partial \mathbf{U}}{\partial t} + (\nabla\cdot\mathbf{F})^{T} = 0, \qquad (12) \]

where U is the vector of conserved quantities, and F is a flux diad,

\[ \mathbf{U} = \begin{pmatrix} \rho \\ \rho\mathbf{u} \\ \mathbf{B} \\ E_{mhd} \end{pmatrix} \qquad (13) \]

\[ \mathbf{F} = \begin{pmatrix} \rho\mathbf{u} \\ \rho\mathbf{u}\mathbf{u} + \left(p + \dfrac{1}{2\mu_0}B^2\right)\mathbf{I} - \dfrac{1}{\mu_0}\mathbf{B}\mathbf{B} \\ \mathbf{u}\mathbf{B} - \mathbf{B}\mathbf{u} \\ \mathbf{u}\left(E_{mhd} + p + \dfrac{1}{2\mu_0}B^2\right) - \dfrac{1}{\mu_0}(\mathbf{u}\cdot\mathbf{B})\mathbf{B} \end{pmatrix}^{T}, \qquad (14) \]

where Emhd is the magnetohydrodynamic energy, given by

\[ E_{mhd} = E_{gd} + \frac{1}{2\mu_0}B^2. \qquad (15) \]

Symmetrizable formulation. Sergei Godunov5 and many others have studied symmetrizable systems of conservation laws. One property of the symmetrizable form of a system of conservation laws is that we can derive an added conservation law

\[ \frac{\partial (\rho s)}{\partial t} + \frac{\partial (\rho s u_x)}{\partial x} + \frac{\partial (\rho s u_y)}{\partial y} + \frac{\partial (\rho s u_z)}{\partial z} = 0 \]

for entropy s by linearly combining the system of equations. For the ideal MHD equations, as for the gasdynamic equations, the entropy is s = log(p/ρ^γ). Another property is that the system is Galilean invariant;5 all waves in the system propagate at speeds u ± cw (for MHD, the possible values of cw are the Alfvén, magnetofast, and magnetoslow speeds). Neither of these properties holds for the fully conservative form of the MHD equations.

Godunov showed that the fully conservative form of the MHD equations (Equation 12) is not symmetrizable.5 We can write the symmetrizable form as

\[ \frac{\partial \mathbf{U}}{\partial t} + (\nabla\cdot\mathbf{F})^{T} = \mathbf{Q}, \qquad (16) \]

where

\[ \mathbf{Q} = -\nabla\cdot\mathbf{B} \begin{pmatrix} 0 \\ \dfrac{1}{\mu_0}\mathbf{B} \\ \mathbf{u} \\ \dfrac{1}{\mu_0}\mathbf{u}\cdot\mathbf{B} \end{pmatrix}. \qquad (17) \]

Marcel Vinokur separately showed that we could derive Equation 16 starting from the primitive form, if no stipulation is made about ∇ • B in the derivation. Kenneth Powell showed that we could use this symmetrizable form to derive a Roe-type approximate Riemann solver for solving the MHD equations in multiple dimensions.3

The MHD eigensystem arising from Equations 12 or 16 leads to eight eigenvalue–eigenvector pairs. The eigenvalues and associated eigenvectors correspond to an entropy wave, two Alfvén waves, two magnetofast waves, two magnetoslow waves, and an eighth eigenvalue–eigenvector pair that depends on which form of the equations we are solving. This last wave (which describes the jump in the normal component of the magnetic field at discontinuities) has a zero eigenvalue in the fully conservative case, and an eigenvalue equal to the normal component of the velocity un in the symmetrizable case. The eigenvector expressions and scaling are more intricate than in gasdynamics.

While Equation 12 is fully conservative, the symmetrizable formulation (given by Equation 16) is formally not fully conservative. Terms of order ∇ • B are added to what would otherwise be a divergence form. The danger of this is that shock-jump conditions might not be correctly met, unless the added terms are small and/or they alternate in sign so that the errors are local and, in a global sense, cancel in some way with neighboring terms. We must weigh this downside, however, against the alternative: a system (the one without the source term) that, while conservative, is not Galilean invariant, has a zero eigenvalue in the Jacobian matrix, and is not symmetrizable.

Semirelativistic MHD

While the solar-wind speed remains nonrelativistic in the solar system, the intrinsic magnetic fields of several planets in the solar system are strong enough, and the density of the plasma low enough, that the Alfvén speed

\[ V_A = \frac{B}{\sqrt{\mu_0\,\rho}} \qquad (18) \]

can reach appreciable fractions of the speed of light. In the case of Jupiter, the Alfvén speed in the vicinity of the poles is on the order of 10 times the speed of light. Earth has a strong enough intrinsic magnetic field that the Alfvén speed reaches twice the speed of light in Earth’s near-auroral regions.
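Equation 18 is easy to evaluate for representative conditions. The numbers in this sketch are illustrative values of our own choosing, not taken from the article; even a modest field over a tenuous proton plasma pushes the formal Alfvén speed past c.

```python
import math

MU0 = 4.0e-7 * math.pi    # vacuum permeability [H/m]
C = 2.998e8               # speed of light [m/s]
M_P = 1.673e-27           # proton mass [kg]

def alfven_speed(B, n):
    """Nonrelativistic Alfven speed V_A = B / sqrt(mu0 * rho) (Equation 18)
    for a proton plasma with field strength B [T] and number density n [1/m^3]."""
    rho = n * M_P
    return B / math.sqrt(MU0 * rho)

# Assumed near-auroral values: B ~ 5e-5 T, n ~ 1 proton/cm^3 = 1e6 m^-3.
va = alfven_speed(5e-5, 1e6)
print(f"V_A = {va:.3e} m/s, V_A/c = {va / C:.1f}")  # formally well above c
```

The superluminal result is of course nonphysical; it signals exactly the regime where the semirelativistic correction described next is required.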


Limiting the Alfvén speed. For these regions, solving the nonrelativistic ideal MHD equations does not make sense. Having waves in the system propagating faster than the speed of light—besides being nonphysical—causes several numerical difficulties. However, solving the fully relativistic MHD equations is overkill. What is called for is a semirelativistic form of the equations, in which the flow speed and acoustic speed are nonrelativistic, but the Alfvén speed can be relativistic. A derivation of these semirelativistic equations from the fully relativistic equations is given elsewhere;6 we present the final result here. The essence of the derivation is that we must keep the displacement current and, thus, limit the Alfvén speed by the speed of light.

The semirelativistic ideal MHD equations are of the form

\[ \frac{\partial \mathbf{U}_{sr}}{\partial t} + (\nabla\cdot\mathbf{F}_{sr})^{T} = 0, \qquad (19) \]

where the state vector Usr and the flux diad Fsr are

\[ \mathbf{U}_{sr} = \begin{pmatrix} \rho \\ \rho\mathbf{u} + \dfrac{1}{c^2}\mathbf{S}_A \\ \mathbf{B} \\ \dfrac{1}{2}\rho u^2 + \dfrac{p}{\gamma - 1} + e_A \end{pmatrix} \qquad (20) \]

\[ \mathbf{F}_{sr} = \begin{pmatrix} \rho\mathbf{u} \\ \rho\mathbf{u}\mathbf{u} + p\mathbf{I} + \mathbf{P}_A \\ \mathbf{u}\mathbf{B} - \mathbf{B}\mathbf{u} \\ \mathbf{u}\left(\dfrac{1}{2}\rho u^2 + \dfrac{\gamma}{\gamma - 1}p\right) + \mathbf{S}_A \end{pmatrix}^{T}. \qquad (21) \]

In the preceding equations,

\[ \mathbf{S}_A = \frac{1}{\mu_0}\,\mathbf{E}\times\mathbf{B} \qquad (22) \]

\[ e_A = \frac{1}{2\mu_0}\left(B^2 + \frac{1}{c^2}E^2\right) \qquad (23) \]

\[ \mathbf{P}_A = e_A\,\mathbf{I} - \frac{1}{\mu_0}\mathbf{B}\mathbf{B} - \frac{1}{\mu_0 c^2}\mathbf{E}\mathbf{E} \qquad (24) \]

are the Poynting vector, the electromagnetic energy density, and the electromagnetic pressure tensor, respectively. The electric field E is related to the magnetic field B by Ohm’s law (Equation 6).

Lowering the speed of light. This new system of equations has wave speeds limited by the speed of light; for strong magnetic fields, the modified Alfvén speed (and the modified magnetofast speed) asymptote to c. The modified magnetoslow speed asymptotes to a, the acoustic speed. This property offers the possibility of a tricky convergence-acceleration technique for explicit time-stepping schemes, first suggested by Jay Boris;7 the wave speeds can be lowered, and the stable time step thereby raised, by artificially lowering the value taken for the speed of light. This method is known as the Boris correction.

In the next section, Equations 19 through 24 are valid in physical situations in which VA > c. A slight modification yields a set of equations, the steady-state solutions of which are independent of the value taken for the speed of light. Defining the true value of the speed of light to be c0, to distinguish it from the artificially lowered speed of light c, the equations are

\[ \frac{\partial \mathbf{U}_{sr}}{\partial t} + (\nabla\cdot\mathbf{F}_{sr})^{T} = \mathbf{Q}_{c_0}, \qquad (25) \]

where the state vector Usr and the flux diad Fsr are as defined in Equations 20 and 21, and the new source term in the momentum equation is

\[ \mathbf{Q}_{c_0} = \frac{1}{\mu_0}\left(\frac{1}{c_0^2} - \frac{1}{c^2}\right)\mathbf{E}\,\nabla\cdot\mathbf{E}. \qquad (26) \]

Numerical Solution Techniques

Numerical solution of the MHD equations starts with discretization of the system of equations to be solved. All discretization schemes introduce errors and other undesirable effects. Modern numerical solution techniques minimize discretization errors and optimize the efficiency of the solution.

Finite-Volume Schemes for Systems of Conservation Laws

We can write a coupled system of conservation laws in the form

\[ \frac{\partial \mathbf{U}}{\partial t} + \nabla\cdot\mathbf{F}_{conv} = \mathbf{S}, \qquad (27) \]

where U is the vector of conserved quantities (for example, mass, x-momentum, mass fraction of a particular species, magnetic field, and so on), Fconv is the convective flux, and S is the source term modeling diffusion, chemical reactions, and other effects. If necessary, we also can include the transport of radiation in this framework.

Systems of conservation laws lend themselves well to finite-volume discretization. We divide the computational domain into “cells,” typically hexahedra or tetrahedra, and integrate the system of partial differential equations given in Equation 27 over each cell in the resulting grid. This leads to a set of coupled ordinary differential equations in time, with the conserved quantities’ cell-averaged values as the unknowns. A conserved physical quantity’s rate of change is the sum of all fluxes through the faces defining the cell plus the volume integral of the source terms. This leads to the following ordinary differential equation for the cell-volume-averaged vector of conserved physical quantities \(\bar{\mathbf{U}}\):

\[ \frac{d\bar{\mathbf{U}}}{dt} = -\frac{1}{V}\sum_{faces}\mathbf{F}_{conv}\cdot\mathbf{A} + \bar{\mathbf{S}}, \qquad (28) \]

where V is the volume of the cell, A is the surface area of a given cell face multiplied by the normal vector of the face (the normal vector always points outward of the cell), and \(\bar{\mathbf{S}}\) is the volume average of all source terms. Equation 28 provides an inherently 3D update of \(\bar{\mathbf{U}}\) and does not separate different directions into different steps (as is done in operator-splitting methods).

The result is a very physical one: each cell in the grid is a small control volume, in which the integral form of the conservation laws hold. For example, the time rate of change of the average mass in the cell is expressed in terms of flux of mass through the faces of the cell. In this approach, the sophistication level used to compute the fluxes across cell boundaries fundamentally determines the solution’s quality.

One distinct advantage of this conservation-law-based finite-volume approach is that we can achieve discontinuous solutions, while obeying proper jump conditions, even at the discrete level. For example, any shocks in a flow will satisfy the Rankine–Hugoniot conditions. While we can achieve this property using a scheme derived from a finite-difference approach, it is a natural consequence of adopting a finite-volume point of view.
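In one dimension, the cell-update logic of Equation 28 reduces to a flux difference across each cell. The following sketch is ours (the helper name `fv_update` is hypothetical); it applies one explicit step and illustrates the conservation property just described: with matching boundary fluxes, the interface fluxes telescope and the total conserved quantity is unchanged.

```python
import numpy as np

def fv_update(U, F, dx, dt, S=None):
    """One explicit 1D finite-volume step of Equation 28:
    dU_i/dt = -(F_{i+1/2} - F_{i-1/2}) / dx + S_i.
    F holds one flux per cell face, so len(F) == len(U) + 1."""
    S = np.zeros_like(U) if S is None else S
    return U - dt / dx * (F[1:] - F[:-1]) + dt * S

# With equal fluxes on the two outer faces (periodic boundary) and no
# sources, total "mass" is conserved exactly:
U = np.array([1.0, 2.0, 4.0, 2.0])
F = np.array([0.5, 1.0, 2.0, 1.0, 0.5])   # F[0] == F[-1]
U_new = fv_update(U, F, dx=0.1, dt=0.01)
print(U_new.sum(), U.sum())  # the two totals agree
```

Whatever scheme supplies the interface fluxes (Roe, Rusanov, and so on), this update step is the same; the flux computation is where the sophistication, and the solution quality, lives.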

High-Resolution Upwind Schemes

Early work in numerical methods for convection-dominated problems showed that results strongly depended on how the spatial derivatives were numerically calculated. The most straightforward methods, obtained by using symmetric centered differences, led to numerically unstable schemes.

Before the development of modern high-resolution upwind schemes, researchers solving hyperbolic systems of conservation laws had a choice between schemes such as Lax–Friedrichs or Rusanov, which were extremely dissipative, or Lax–Wendroff, which was much less dissipative but could not capture even weakly discontinuous solutions (for example, shock waves) without nonphysical and potentially destabilizing oscillations in the solutions.

Over the past half a century, a rich class of schemes became available for numerical solutions of conservation laws. The basic building blocks were

• Godunov’s concept of using the solution to Riemann’s initial-value problem as a building block for a first-order numerical method;

• Bram van Leer’s insight that Godunov’s original scheme could be extended to a higher order by making the scheme nonlinear; and

• work by Philip Roe, van Leer, Stanley Osher, and others on “approximate Riemann solvers,” which led to a wide array of schemes that were much less computationally expensive than Godunov’s original scheme.

These methods revolutionized computational fluid dynamics and led to the development of modern numerical methods for the solution of MHD equations.

Upwind differencing. The most successful schemes were those that used the convection direction to bias the derivatives’ numerical representation. These biased schemes are called upwind schemes, because the data in the update step is biased toward the upwind direction. The simplest upwind scheme for the convection equation

\[ \frac{\partial u}{\partial t} + a\frac{\partial u}{\partial x} = 0 \qquad (29) \]

is

\[ \frac{u_i^{n+1} - u_i^{n}}{\Delta t} = \begin{cases} -a\,\dfrac{u_i^{n} - u_{i-1}^{n}}{\Delta x} & a > 0 \\[1ex] -a\,\dfrac{u_{i+1}^{n} - u_i^{n}}{\Delta x} & a < 0, \end{cases} \qquad (30) \]

where i is an index denoting discrete spatial location, and n is an index denoting discrete temporal location.
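Equation 30 takes only a few lines to implement. This sketch is ours (the helper name `upwind_step` is hypothetical) and uses periodic boundaries via `np.roll`; note how each branch reaches for the neighbor on the side the wave comes from.

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One step of the first-order upwind scheme (Equation 30) for
    u_t + a u_x = 0 with periodic boundaries."""
    if a > 0:
        return u - a * dt / dx * (u - np.roll(u, 1))    # biased toward u_{i-1}
    else:
        return u - a * dt / dx * (np.roll(u, -1) - u)   # biased toward u_{i+1}

# At a*dt/dx = 1 a square pulse translates exactly one cell per step:
u0 = np.array([0.0, 1.0, 1.0, 0.0, 0.0])
u1 = upwind_step(u0, a=1.0, dx=0.1, dt=0.1)
print(u1)  # [0. 0. 1. 1. 0.]
```

For Courant numbers below one the scheme remains stable but smears the pulse; using the centered difference instead of the one-sided ones is what produces the instability mentioned above.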

For systems of conservation laws, use of the upwinding idea relies on

• doing some type of characteristic decomposition to determine which way is upwind for each of the waves of the system and

• constructing an interface flux based on this characteristic decomposition, using upwind-biased data.

The first step makes the scheme stable; the second makes it conservative. There are many ways to carry out the two steps; various approaches lead to a variety of upwind schemes.

Approximate Riemann solvers. The original Godunov scheme8 was a finite-volume scheme for solution of inviscid, compressible gas flow equations. This scheme’s seminal idea was that at each time step, the fluxes of mass, momentum, and energy through the face connecting two cells of the grid were computed by a solution to Riemann’s initial value problem. Specifically, the early interaction between the fluid states in two neighboring cells was computed numerically from the nonlinear, self-similar problem of the wave interactions between the two fluids. This procedure was carried out for a time ∆t at each cell–cell interface in the grid, which constituted one iteration.

This scheme, though computationally expensive and only first-order accurate, had a huge impact on computational methods for conservation laws. First, the scheme was extremely robust, even for very strong shocks. It was also much more accurate than the few other schemes that were similarly robust. Finally, researchers soon realized that they could carry the concept to other systems of conservation laws.

Several researchers refined Godunov’s scheme by replacing the exact solution of the Riemann problem with approximate solutions that were cheaper and had certain nice properties.

One of these approximate Riemann solvers that is particularly known for its high accuracy is Roe’s scheme.9 We will briefly describe it here for a one-dimensional system of conservation laws; its extension to multiple dimensions is relatively straightforward.

Roe’s scheme computes the fluxes at a cell interface based on the states to the left and right of the interface. It looks for simple wave solutions to the system of conservation laws and constructs a numerical flux that treats each of these waves in an upwind manner. If we substitute the relation

U(x, t) = U(x – λt) (31)

into the conservation law

\[ \frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}}{\partial x} = 0, \qquad (32) \]

the eigenvalue problem

\[ \left(\frac{\partial \mathbf{F}}{\partial \mathbf{U}} - \lambda\mathbf{I}\right)\cdot\delta\mathbf{U} = 0 \qquad (33) \]

results, where I is an identity matrix. Roe’s scheme is based on the eigenvalues λk and right and left eigenvectors Rk and Lk that arise from this eigenvalue problem. In general, for a system of n conservation laws, there will be n eigenvalues, each with a corresponding left and right eigenvector. The Roe flux is expressed in terms of the states UL and UR just to the left and right of the interface.

We can write it as

• the flux calculated based just on the left state, plus a correction due to waves traveling leftward from the right cell,

• the flux calculated based just on the right state, plus a correction due to waves traveling rightward from the left cell, or

• a symmetric form that arises from averaging the previous two expressions, given by

\[ \mathbf{F}_{interface} = \frac{1}{2}\left[\mathbf{F}(\mathbf{U}_L) + \mathbf{F}(\mathbf{U}_R)\right] - \frac{1}{2}\sum_{k=1}^{n} \left|\lambda_k\right| \left[\mathbf{L}_k\cdot(\mathbf{U}_R - \mathbf{U}_L)\right]\mathbf{R}_k. \qquad (34) \]

Research into approximate Riemann solvers led to robust and low-dissipation schemes. These algorithmic advances yielded methods that had the minimum dissipation necessary to provide stability—they provided robustness nearly equal to that of the Lax–Friedrichs scheme in conjunction with accuracy near that of the Lax–Wendroff scheme. When coupled with the limited-reconstruction techniques, these schemes provided the accurate, robust, efficient approaches that we generally classify as high-resolution methods.
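For a linear system with a constant Jacobian, Equation 34 can be assembled directly from the eigendecomposition. The sketch below is ours (the helper name `roe_flux_linear` is hypothetical); for a nonlinear system such as MHD, the Jacobian would instead be evaluated at a suitable average of the two states.

```python
import numpy as np

def roe_flux_linear(A, UL, UR):
    """Equation 34 for the linear system U_t + A U_x = 0:
    F = (F(UL) + F(UR))/2 - (1/2) sum_k |lam_k| [L_k . (UR - UL)] R_k."""
    lam, R = np.linalg.eig(A)    # right eigenvectors are the columns of R
    L = np.linalg.inv(R)         # rows of L are the left eigenvectors
    upwind = R @ (np.abs(lam) * (L @ (UR - UL)))
    return 0.5 * (A @ UL + A @ UR) - 0.5 * upwind

# For a scalar "system" A = [[a]] with a > 0, this reduces to the pure
# upwind flux a*UL:
F = roe_flux_linear(np.array([[2.0]]), np.array([1.0]), np.array([3.0]))
print(F)  # [2.]
```

The dissipation term is exactly |A|(UR − UL)/2 written in characteristic variables, which is why the scheme adds the minimum dissipation needed to upwind each wave.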

Limited reconstruction. Our approach takes advantage of these advances in approximate Riemann solvers and limited reconstruction. The limited-reconstruction approach ensures second-order accuracy away from discontinuities, while simultaneously providing the stability that ensures nonoscillatory solutions. We use modern limiters to ensure these properties. The approximate Riemann solver approach provides the correct capturing of discontinuous solutions and a robustness across a wide range of flow parameters.

To compute the interface flux, we need interpolated values of the flow states at the interfaces between cells. Since the interface lies halfway between the cell centers on a Cartesian grid, a simple averaging of the two cell center states might seem appropriate. However, we need a more sophisticated interpolation to yield schemes that meet our accuracy and stability criteria.

van Leer proposed a family of limited interpolation schemes, in what is now known as the monotone upstream-centered schemes for conservation laws (MUSCL) approach.10 To interpolate a variable q to the interface i + 1/2 between cell centers i and i + 1, we can use

\[ q_{i+1/2} = q_i + \frac{1}{4}\Big[(1+\kappa)\,\phi(r_{i+1/2})\,(u_{i+1} - u_i) + (1-\kappa)\,\phi(r_{i+1/2})\,(u_i - u_{i-1})\Big]. \qquad (35) \]

In the preceding, κ is a parameter that determines the relative weighting of the upwind and downwind cells in the reconstruction, and φ is a limiter function. If φ were taken to be zero, the interpolation would be the first-order one—that is, the interface value is the same as the cell-centered value. If φ were taken to be one, the reconstruction would be unlimited. For stability’s sake, φ is a function of the ratio

\[ r_{i+1/2} = \frac{u_{i+1} - u_i}{u_i - u_{i-1}}. \qquad (36) \]

Popular choices for κ are zero (Fromm’s scheme) and minus one (second-order upwind). Popular choices for φ include

van Albada:
\[ \phi(r) = \frac{r^2 + r}{r^2 + 1}, \qquad (37) \]

Minmod:
\[ \phi(r) = \min\!\left(r, \frac{3-\kappa}{1-\kappa}\right), \qquad (38) \]

Superbee:
\[ \phi(r) = \max\big(0, \min(2r, 1), \min(r, 2)\big). \qquad (39) \]
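The limiters are one-liners. This sketch is ours; it implements Equations 37 through 39 and checks the consistency condition φ(1) = 1 that second-order accuracy on smooth data requires. The max(0, ·) clipping on the minmod form is our added convention for negative r (extrema), where a limiter should shut the correction off.

```python
def van_albada(r):
    """Equation 37."""
    return (r * r + r) / (r * r + 1.0)

def minmod(r, kappa=0.0):
    """Equation 38, clipped at zero for negative r (our convention)."""
    return max(0.0, min(r, (3.0 - kappa) / (1.0 - kappa)))

def superbee(r):
    """Equation 39."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

# All three return 1 for smooth data (r = 1) and 0 at an extremum (r < 0):
for phi in (van_albada, minmod, superbee):
    print(phi(1.0), phi(-1.0))  # each line: 1.0 0.0
```

The differences show up between those extremes: superbee is the most compressive (it steepens discontinuities aggressively), while van Albada varies smoothly and is gentler on smooth solution features.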

∇ • B Control Techniques

Another way in which the numerical solution of the MHD equations differs from that of the gasdynamic equations is the constraint that ∇ • B = 0. We can enforce this constraint numerically, particularly in shock-capturing codes, in a number of ways, but each way has its particular strengths and weaknesses. Next, we provide a brief overview of the schemes. (For more complete explanations, refer to the cited references. Gábor Tóth has a numerical comparison of many of the approaches for a suite of test cases.11)

Eight-wave scheme. Kenneth Powell3,12 first proposed an approach based on the symmetrizable form of the MHD equations (Equation 16). In this approach, the source term on the right-hand side of Equation 16 is computed at each time step and included in the update scheme. Discretizing this form of the equations leads to enhanced stability and accuracy. However, there is no stencil on which the divergence is identically zero. In most regions of the flow, the divergence source term is small, but it is not guaranteed to be small near discontinuities. In essence, the inclusion of the source term changes what would be a system zero eigenvalue to one whose value is un, the component of velocity normal to the interface through which the flux is computed. We typically refer to the scheme as the eight-wave scheme; the eighth wave corresponds to propagation of jumps in the normal component of the magnetic field.

We can think of the eight-wave scheme as a hyperbolic or advective approach to controlling ∇ • B; the symmetrizable form of the equations (Equation 16) is consistent with the passive advection of ∇ • B/ρ. The eight-wave scheme is computationally inexpensive, easy to add to an existing code, and quite robust. However, if there are regions in the flow in which the ∇ • B source term (Equation 17) is large, the numerical errors can create problems such as the generation of spurious magnetic fields.

Projection scheme. Jeremiah Brackbill and Daniel Barnes13 proposed applying a Hodge-type projection to the magnetic field. This approach leads to a Poisson equation to solve each time the projection occurs:

∇²φ = ∇ • B (40)

Bprojected = B – ∇φ. (41)

The resulting projected magnetic field is divergence-free on a particular numerical stencil, to the level of error of the Poisson equation’s solution. While it is not immediately obvious that using the projection scheme in conjunction with the fully conservative form of the MHD equations gives the correct weak solutions, Tóth proved this to be the case.11 The projection scheme has several advantages, including the ability to use standard software libraries for the Poisson solution, its relatively straightforward extension to general unstructured grids, and its robustness. It does, however, require solving an elliptic equation at each projection step, which can be expensive, particularly on distributed-memory machines.

Diffusive control. Some of the most recent work on the ∇ • B = 0 constraint relates to modifying the eight-wave approach by adding a source term proportional to the gradient of ∇ • B so that the divergence satisfies an advection-diffusion equation, rather than a pure advection equation. This technique, referred to as diffusive control of ∇ • B, has the same advantages and disadvantages as the eight-wave approach. It is not strictly conservative, but appears to keep the level of ∇ • B lower than the eight-wave approach does.

Constrained transport. Several approaches exist that combine a Riemann-solver-based scheme with a constrained-transport approach. Evans and Hawley’s constrained-transport approach treated the MHD equations in the gasdynamics–electromagnetic-split form of Equations 7 through 9.2 The grid was staggered, and the ∇ • B = 0 constraint met identically, on a particular numerical stencil.

The advantages of the conservative constrained-transport schemes are that they are strictly conservative and that they meet the ∇ • B = 0 constraint to machine accuracy, on a particular stencil. The primary disadvantage is the difficulty of extending them to general grids. Tóth and Roe14 made some progress on this front; they developed divergence-preserving prolongation and restriction operators, allowing the use of conservative constrained-transport schemes on h-refined meshes.

However, conservative constrained-transport techniques also lose their ∇ • B-preserving properties if different cells advance at different physical time rates. This precludes using local time-stepping. Thus, while for unsteady calculations the cost of the conservative constrained-transport approach is comparable to the eight-wave scheme's, for steady-state calculations (where we typically would use local time-stepping), the cost can be prohibitive.
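The staggered bookkeeping can be sketched in two dimensions (a simplified Yee-type arrangement for illustration, not the production implementation): face-centered Bx and By are updated from corner electromotive forces (EMFs), and the discrete divergence stays at machine accuracy regardless of the EMF values.

```python
# Constrained transport on a 2D staggered periodic grid: Bx lives on
# x-faces, By on y-faces, and the EMF Ez on cell corners. Updating B from
# corner EMFs leaves the discrete div B unchanged to machine accuracy.
import random

N, h, dt = 8, 1.0, 0.1
rng = random.Random(1)

# build an exactly divergence-free initial field from a vector potential Az
Az = [[rng.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
Bx = [[(Az[i][(j+1) % N] - Az[i][j]) / h for j in range(N)] for i in range(N)]
By = [[-(Az[(i+1) % N][j] - Az[i][j]) / h for j in range(N)] for i in range(N)]

def max_div():
    return max(abs((Bx[(i+1) % N][j] - Bx[i][j]
                  + By[i][(j+1) % N] - By[i][j]) / h)
               for i in range(N) for j in range(N))

for _ in range(10):                         # ten update steps, random EMFs
    Ez = [[rng.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
    for i in range(N):
        for j in range(N):
            Bx[i][j] -= dt * (Ez[i][(j+1) % N] - Ez[i][j]) / h
    for i in range(N):
        for j in range(N):
            By[i][j] += dt * (Ez[(i+1) % N][j] - Ez[i][j]) / h

print(max_div())                            # stays at machine accuracy
```

The per-cell divergence change telescopes to exactly zero because each corner EMF enters the four surrounding face updates with canceling signs, which is the "particular numerical stencil" property described above.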

Time-Stepping

Because a major goal of global space-plasma simulations is creating a predictive space-weather tool, solution time is a paramount issue; a predictive model must run substantially faster than real time. From the starting point—observing a solar event—to the ending point—postprocessing the data from a simulation based on the initial observational conditions—the entire process must be completed rapidly for the simulation to be useful.

The principal limitation of the present generation of global space-plasma codes is the explicit time-stepping algorithm. Explicit time steps are limited by the Courant-Friedrichs-Lewy (CFL) condition, which ensures that no information travels more than one cell size during a time step. This condition imposes a nonlinear penalty on highly resolved calculations, because finer grid resolution results not only in more computational cells, but also in smaller time steps.

In global MHD space-plasma simulations, two factors control the CFL condition: the smallest cell size in the simulation and the fast magnetosonic speed in high-magnetic-field, low-plasma-density regions. In a typical magnetosphere simulation, in which the smallest cell size is about 0.25 RE, the CFL condition limits the time step to about 10^-2 s. The high fast magnetosonic speed (due to the high Alfvén speed) in the near-Earth region primarily controls this small step.
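A quick consistency check of these quoted numbers (assuming a Courant number near unity and RE = 6,371 km, both assumptions for this estimate):

```python
# Consistency check: a 0.25 R_E cell and a ~10^-2 s explicit time step
# imply a fast magnetosonic speed of order 10^5 km/s near Earth.
R_E = 6.371e6            # Earth radius in meters (assumed value)
dx = 0.25 * R_E          # smallest cell size quoted above
dt = 1e-2                # quoted explicit time step, seconds
v_fast = dx / dt         # implied wave speed for a Courant number of ~1
print(v_fast / 1e3, "km/s")   # ~1.6e5 km/s
```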

Local Time-Stepping

In the local time-stepping approach, a local stability condition determines the time step for each cell in a computational domain. The flow variables in cell i are advanced from time step n to time step n + 1 as

Ui^(n+1) = Ui^n + ∆ti^n (–∇ • F + Q)i , (42)

where the stability condition determines the local time step. Here, U represents the conservative state vector, F is the flux dyad, and Q is the source term. In the case of ideal MHD, the time step is determined by the CFL condition

∆ti^n = C ∆xi / (ci^fast + |ui|) , (43)

where C < 1 is the Courant number and ci^fast is the fast speed in cell i. In more than one dimension, we use the sum of the speeds in all directions in the denominator.

This technique is different from subcycling, in which cells advance at the same physical time rate but the number of time steps individual cells take varies. For example, in adaptive grids we usually set the time step to be inversely proportional to cell size, so that a finer cell typically takes two half time steps while the coarser cell takes only one full time step. In subcycling, a global stability condition determines the time steps, in contrast with local time-stepping, in which time steps are set on a cell-by-cell basis.
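A minimal sketch of the per-cell step selection of Equation 43 (the cell sizes and wave speeds below are made-up illustrative values):

```python
# Local time-stepping sketch: each cell gets its own step from the local
# CFL condition (Equation 43), instead of every cell taking the global
# minimum as in ordinary explicit time-stepping.
def local_steps(dx, c_fast, u, C=0.8):
    # per-cell time step: dt_i = C * dx_i / (c_fast_i + |u_i|)
    return [C * dxi / (ci + abs(ui)) for dxi, ci, ui in zip(dx, c_fast, u)]

dx     = [1.0, 0.5, 0.25, 0.25]      # cells refined toward the right
c_fast = [10.0, 10.0, 400.0, 800.0]  # fast speed rises near the body
u      = [5.0, 5.0, 2.0, 1.0]
dts = local_steps(dx, c_fast, u)
print(dts)
# with global time-stepping, every cell would be forced to take min(dts)
print(min(dts))
```

The example shows the payoff for steady-state runs: the large, slow cells advance thousands of times faster than the global minimum step would allow.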

Equation 42 shows that if a steady-state solutionexists, it satisfies

0 = (–∇ • F + Q)i (44)

because in steady state, Ui^(n+1) = Ui^n, and we can divide by the time step ∆ti^n, which is always a positive number. Consequently, the steady-state solution is independent of the time step, so it does not matter whether the time step is local or global.

The preceding proof assumes that the boundary conditions fully determine the steady state. This is a nontrivial assumption because the MHD equations are nonlinear. Initial boundary-value problems might or might not asymptote to steady states independent of the initial conditions; the outcome depends on the imposed boundary conditions, which are problem dependent. In practice, magnetosphere simulations seem to converge to the same solution independent of the initial conditions or the time-integration scheme.

22 COMPUTING IN SCIENCE & ENGINEERING

The applicability of the local time-stepping technique in a given scheme depends primarily on the evolution of ∇ • B. In some methods, even if ∇ • B = 0 initially, the numerical transients toward steady state will destroy this property when local time-stepping is applied. For instance, we can show that the constrained-transport scheme can't be combined with local time-stepping. In our code, we use constrained transport only in time-accurate simulations, and local time-stepping with the eight-wave method.

Implicit Time-Stepping

The simplest and least expensive time-stepping scheme is multistage explicit time-stepping, in which the CFL stability condition limits the time step. We also have an unconditionally stable, fully implicit time-stepping scheme. This second-order implicit time discretization requires solving a system of nonlinear equations for all flow variables at each time step. We achieve this with the Newton–Krylov–Schwarz approach: applying a Newton iteration to the nonlinear equations, solving them with a parallel Krylov-type iterative scheme, and accelerating the Krylov solver's convergence with Schwarz-type preconditioning. Because every block has a simple Cartesian geometry, we can implement the preconditioner very efficiently. The resulting implicit scheme requires about 20 to 30 times more CPU time per time step than the explicit method, but the physical time step can be 1,000 to 10,000 times larger. The implicit algorithm has very good parallel scaling because of the Krylov scheme and the block-by-block preconditioner application; its scaling is not quite as close to perfect as the explicit scheme's, however, because of the larger interprocessor communication overhead.
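The payoff of implicit time-stepping shows up already in a scalar stiff-equation analogue (a sketch only; the real scheme applies the Newton–Krylov–Schwarz machinery to the full nonlinear MHD system, with a Krylov solver in place of the scalar division below):

```python
# Why implicit pays off for stiff problems: for du/dt = -k*u with k = 1000,
# explicit Euler requires dt < 2/k, while backward Euler is stable for any
# dt. The implicit update is solved here with a Newton iteration, the
# scalar analogue of the Newton-Krylov approach described above.
k, dt, u0 = 1000.0, 0.1, 1.0        # dt is 50x the explicit stability limit

def explicit(u):
    return u + dt * (-k * u)        # forward Euler

def implicit(u_old):                # backward Euler via Newton iteration
    u = u_old
    for _ in range(20):             # solve g(u) = u - u_old + dt*k*u = 0
        g = u - u_old + dt * k * u
        dg = 1.0 + dt * k           # Jacobian (a Krylov solve in the real code)
        u -= g / dg
    return u

ue, ui = u0, u0
for _ in range(10):
    ue, ui = explicit(ue), implicit(ui)
print(ue, ui)   # explicit blows up; implicit decays toward zero
```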

We also can combine explicit and implicit time-stepping. Magnetosphere simulations include large volumes in which the Alfvén speed is quite low (tens of km/s) and the local CFL number allows large explicit time steps (tens of seconds to several minutes). In these regions, implicit time-stepping is a waste of computational resources. Because the parallel implicit technique we use is fundamentally block based, we treat implicitly only those blocks where the CFL condition would limit the explicit time step to less than the selected time step (typically, approximately 10 s). This combined explicit–implicit time-stepping presents more computational challenges (such as separate load balancing of explicit and implicit blocks). Overall, it seems to be a very promising option, but we need to explore other potential avenues before we make a final decision about the most efficient time-stepping algorithm for space MHD simulations.
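The block-selection rule can be sketched as follows (the block names, sizes, and speeds below are invented for illustration):

```python
# Sketch of the combined explicit-implicit block selection: a block is
# treated implicitly only if its local explicit CFL step would fall below
# the selected global step (the ~10 s mentioned above).
def partition_blocks(blocks, dt_selected=10.0, C=0.8):
    explicit_b, implicit_b = [], []
    for name, dx, v_max in blocks:       # v_max: fastest wave speed in block
        dt_cfl = C * dx / v_max          # local explicit stability limit
        (implicit_b if dt_cfl < dt_selected else explicit_b).append(name)
    return explicit_b, implicit_b

blocks = [("tail lobe", 2.0e6, 5.0e4),   # dx [m], speed [m/s]: made-up values
          ("near-Earth", 1.6e6, 8.0e6),
          ("solar wind", 1.0e7, 6.0e5)]
print(partition_blocks(blocks))          # only the near-Earth block is implicit
```

The separate explicit and implicit block lists are exactly what makes the separate load balancing mentioned above necessary.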

Data Structure and Adaptive Mesh Refinement

Adaptive mesh refinement (AMR) techniques that automatically adapt the computational grid to the solution of the governing PDEs are very effective in treating problems with disparate length scales. They avoid underresolving the solution in regions of interest (for example, high-gradient regions) and, conversely, avoid overresolving the solution in less interesting (low-gradient) regions, thereby saving orders of magnitude in computing resources for many problems. For typical solar-wind flows, length scales can range from tens of kilometers in the near-Earth region to the Earth–Sun distance (1.5 × 10^11 m), and timescales can range from a few seconds near the Sun to the expansion time of the solar wind from the Sun to Earth (~10^5 s). The use of AMR is extremely beneficial, and almost a necessity, for solving problems with such disparate spatial and temporal scales.

Block-Adaptive AMR

We developed a simple and effective block-based AMR technique used with the finite-volume scheme previously described. We integrate the governing equations to obtain volume-averaged solution quantities within rectangular Cartesian computational cells. The computational cells are embedded in regular structured blocks of equal-sized cells. The blocks are geometrically self-similar. Typically, the blocks we use consist of anywhere between 4 × 4 × 4 = 64 and 12 × 12 × 12 = 1,728 cells (see Figure 1). We store solution data associated with each block in standard indexed array data structures, making it straightforward to obtain solution information from neighboring cells within a block.

Computational grids are composed of many self-similar blocks. Although each block within a grid has the same data-storage requirements, blocks can be of different sizes in terms of the volume of physical space they occupy. Starting with an initial mesh consisting of blocks of equal size (that is, equal resolution), we accomplish adaptation by dividing and coarsening appropriate solution blocks. In regions requiring increased cell resolution, a parent block is refined by dividing itself into eight children, or offspring. Each of the eight octants of a parent block becomes a new block with the same number of cells as the parent, which doubles the cell resolution in the region of interest. Conversely, in overresolved regions, the refinement process reverses; eight children coarsen and coalesce into a single parent block, reducing the cell resolution by a factor of 2. We use standard multigrid-type restriction and prolongation operators to evaluate the solution on all blocks created by the coarsening and division processes, respectively.

Figure 1 shows two neighboring blocks, one refined and one that isn't. Any block can be refined successively, leading to finer blocks. In the present method, we constrain the mesh refinement such that the cell resolution changes by only a factor of 2 between adjacent blocks, and the minimum resolution is not less than that of the initial mesh.
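The octant split can be sketched as follows (a hypothetical block representation for illustration, not the BATS-R-US data structure):

```python
# Sketch of block-based refinement: a parent block splits into eight
# children (octants), each with the same number of cells as the parent,
# so cell resolution doubles in the refined region.
def refine(block):
    x0, y0, z0, size, ncells = block     # corner, edge length, cells per edge
    half = size / 2
    return [(x0 + dx*half, y0 + dy*half, z0 + dz*half, half, ncells)
            for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]

parent = (0.0, 0.0, 0.0, 8.0, (8, 8, 8))   # an 8x8x8-cell block of edge 8.0
children = refine(parent)
print(len(children))                        # 8 octants
# parent cell size 1.0 -> child cell size 0.5: resolution doubles
print(parent[3] / parent[4][0], children[0][3] / children[0][4][0])
```

Coarsening is the inverse operation: the eight children coalesce back into a block of twice the edge length, halving the resolution.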

To independently apply the update scheme for a given iteration or time step directly to all blocks, adjacent blocks—those with common interfaces—share some additional solution information. This information resides in an additional two layers of overlapping ghost cells associated with each block. At interfaces between blocks of equal resolution, these ghost cells simply take on the solution values associated with the appropriate interior cells of the adjacent blocks. At resolution changes, restriction and prolongation operators, similar to those used in block coarsening and division, evaluate the ghost-cell solution values. After each stage of the multistage time-stepping algorithm, ghost-cell values are reevaluated to reflect the updated solution values of neighboring blocks. The AMR approach also requires additional interblock communication at interfaces with resolution changes to strictly enforce the flux conservation properties of the finite-volume scheme. In particular, we use the interface fluxes computed on more refined blocks to correct the interface fluxes computed on coarser neighboring blocks, ensuring that the fluxes are conserved across block interfaces.

Hierarchical Tree Data Structure

We use a hierarchical tree-like data structure with multiple roots and trees, and additional interconnects between the tree leaves, to track mesh refinement and the connectivity between solution blocks. Figure 2 depicts this interconnected data-structure "forest." The blocks of the initial mesh are the forest's roots, which reside in an indexed array data structure. Associated with each root is a separate octree data structure that contains all the blocks making up the tree leaves created from the original parent blocks during mesh refinement. Each grid block corresponds to a tree node. To determine block connectivity, we can traverse the multitree structure by recursively visiting the parent and children solution blocks. However, to reduce the overhead associated with accessing solution information from adjacent blocks, we compute and store the neighbors of each block directly, providing interconnects between blocks in the hierarchical data structure that are neighbors in physical space.

An advantage of this hierarchical data structure is that it is relatively easy to carry out local mesh refinement at any time during a calculation. For example, if a particular flow region becomes sufficiently interesting at some point in a computation, we can obtain better resolution of that region by refining its solution blocks, without affecting the grid structure in other regions of the flow. Reducing grid resolution in a region is equally easy. There is no need to completely remesh the entire grid and recalculate block connectivity with each mesh refinement. Although other approaches are possible, ours directs block coarsening and division via multiple physics-based refinement criteria. In particular, we base decisions about when to refine or coarsen blocks on comparisons of the maximum values of various local flow quantities determined in each block to specified refinement thresholds. Note that we dynamically adjust the refinement thresholds to control a calculation's total number of blocks and cells. We can also use other refinement criteria, such as a combination of estimated numerical errors.
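The neighbor bookkeeping can be sketched with integer block coordinates (a simplified stand-in for the actual multiroot octree with stored interconnects; real codes precompute and store these lookups rather than searching):

```python
# Sketch of multiroot-tree bookkeeping: a block is keyed by integer
# coordinates (level, ix, iy, iz). A face neighbor is found at the same
# level, at one level coarser, or as the four finer blocks on that face,
# mirroring the interconnects described above.
active = {(0, 0, 0, 0), (0, 1, 0, 0)}          # two root blocks, side by side

def refine(key):
    """Replace a block with its eight children (octants)."""
    lev, i, j, k = key
    active.discard(key)
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                active.add((lev + 1, 2*i + di, 2*j + dj, 2*k + dk))

def neighbor(key, axis, sign):
    """Active blocks sharing the face of `key` on side `sign` of `axis`."""
    lev, *ijk = key
    shifted = list(ijk)
    shifted[axis] += sign
    same = (lev, *shifted)
    if same in active:
        return [same]                          # equal-resolution neighbor
    coarse = (lev - 1, *(c // 2 for c in shifted))
    if coarse in active:
        return [coarse]                        # neighbor one level coarser
    fine = [k2 for k2 in active               # four finer face neighbors
            if k2[0] == lev + 1
            and [c // 2 for c in k2[1:]] == shifted
            and k2[1 + axis] % 2 == (0 if sign > 0 else 1)]
    return sorted(fine)

refine((0, 1, 0, 0))                           # second root becomes 8 children
print(neighbor((1, 2, 0, 0), 0, -1))           # a child sees the coarse root
print(neighbor((0, 0, 0, 0), 0, +1))           # the root sees 4 finer blocks
```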

Figure 3 illustrates the adaptation of the block-based Cartesian mesh to an evolving solution. It shows the grid at four time instances for an unsteady calculation, with the solution blocks (thick lines) and computational cells (thin lines) of the evolving grid. As previously noted, each grid refinement level introduces cells smaller in dimension by a factor of 2 than those one level higher in the grid. Typically, calculations might have 10 to 15 refinement levels; some calculations could have more than 20. With 20 levels, the finest cells on the mesh are more than one million times (2^20) smaller in each dimension than the coarsest cells. The block-based AMR approach previously described has many similarities to the cell-based method proposed by De Zeeuw and Powell.12 Although the block-based approach is somewhat less flexible and incurs some solution-resolution inefficiencies compared with a cell-based approach, it offers many advantages when we consider parallel algorithm implementations and performance issues. Next, we look at how block adaptation readily enables domain decomposition and effective load balancing, and leads to low communication overhead between solution cells within the same block.

Figure 1. Self-similar blocks. (a) Those used in parallel block-based adaptive mesh refinement schemes, and (b) the double layer of ghost cells for both coarse and fine blocks.

Parallel Implementation

We designed the parallel block-based AMR solver from the ground up, aiming for very high performance on massively parallel architectures. The underlying upwind finite-volume solution algorithm, with explicit time-stepping, has a very compact stencil, making it highly local. The hierarchical data structure and self-similar blocks readily simplify domain decomposition and enable good load balancing, a crucial element for truly scalable computing. Natural load balancing occurs by distributing the blocks equally among the processors. We achieve additional optimization by ordering the blocks along a Peano–Hilbert space-filling curve to minimize interprocessor communication. The self-similar nature of the solution blocks also means that serial performance enhancements apply to all blocks and that fine-grained parallelization of the algorithm is possible. The algorithm's parallel implementation is so pervasive that even the grid adaptation is performed in parallel.
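In two dimensions, the space-filling-curve ordering can be sketched as follows (a 2D illustration only; the production code uses the 3D Peano–Hilbert analogue):

```python
# Ordering blocks along a Hilbert curve: consecutive blocks along the
# curve are spatial neighbors, so contiguous chunks of the ordered list
# map to compact per-processor regions with short shared boundaries.
def hilbert_index(order, x, y):
    """Position of cell (x, y) along the Hilbert curve on a 2^order grid."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                  # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

blocks = [(x, y) for x in range(4) for y in range(4)]
blocks.sort(key=lambda b: hilbert_index(2, *b))
print(blocks[:4])    # curve starts (0, 0), (1, 0), (1, 1), (0, 1)
```

Distributing equal-length runs of this ordered list to processors gives the natural load balancing described above while keeping each processor's blocks spatially clustered.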

Other features of the parallel implementation include the use of Fortran 90 as the programming language and the Message Passing Interface (MPI) library for interprocessor communication. Use of these standards greatly enhances code portability and leads to very good serial and parallel performance. Message passing occurs asynchronously, with gathered wait states and message consolidation.

We implemented these new methods in the block-adaptive-tree solar-wind Roe-type upwind scheme (BATS-R-US) code developed at the University of Michigan.6,12 BATS-R-US solves the relativistic MHD equations using block-based AMR technology and a finite-volume methodology with four approximate Riemann solvers (Roe,9 Linde,16 artificial wind,17 and Lax-Friedrichs/Rusanov), four ∇ • B control techniques (eight-wave, constrained transport, projection, and ∇ • B diffusion), and a choice of five limiters. You also can choose among different time-stepping methods (local, explicit, implicit, and combined explicit–implicit) depending on the type of problem you want to solve.

We’ve implemented the algorithm on Cray T3Esupercomputers, SGI and Sun workstations, Be-owulf-type PC clusters, SGI shared-memory ma-chines, a Cray T3D, several IBM SP2s, and Com-paq supercomputers. BATS-R-US nearly perfectlyscales to 1,500 processors and a sustained speed of342 GFlops has been attained on a Cray T3E-1200using 1,490 processors. For each target architec-ture, we use simple single-processor measurementsto set the adaptive block size. Figure 4 shows thescaling of BATS-R-US on various architectures.

Applications to Space Weather Simulations

Researchers have applied BATS-R-US to global numerical simulations of the inner heliosphere, including coronal mass ejection (CME) propagation,18 the coupled terrestrial magnetosphere–ionosphere,19,20 and the interaction of the heliosphere with the interstellar medium.21 Other applications include a host of planetary problems ranging from comets,22,23 to the planets Mercury,24 Venus,25 Mars,26 and Saturn,27 to planetary satellites.28,29 Next, we present a selection of space-weather-related simulations that are most relevant for practical applications.

Figure 2. Solution blocks of a computational mesh with three refinement levels originating from two initial blocks, and the associated hierarchical multiroot octree data structure. The figure omits interconnects to neighbors.

Space Weather

“Space weather” refers to conditions on the Sun and in the solar wind, magnetosphere, ionosphere, and thermosphere that can influence the performance and reliability of space-borne and ground-based technological systems; it can affect human life or health as well. Adverse conditions in the space environment can disrupt satellite operations, communications, navigation, and electric power distribution grids, leading to broad socioeconomic losses.

The solar corona is so hot (> 10^6 K) that in open magnetic field regions it expands transonically, filling all of interplanetary space with a supersonic magnetized plasma flowing radially outward from the Sun. As this flowing plasma—the solar wind—passes the Earth, it interacts strongly with the geomagnetic field, severely compressing the field on the Earth's dayside and drawing it out into a long, comet-like tail on the nightside. The confined region of geomagnetic field is called the Earth's magnetosphere.

The solar wind not only confines the terrestrial magnetic field within the magnetospheric cavity, but also transfers significant mass, momentum, and energy to the magnetosphere, as well as to the ionosphere and upper atmosphere. One dramatic consequence of this solar wind–magnetosphere interaction is the production of a variety of complex electric current systems, ranging from a sheet of current flowing on the solar wind–magnetosphere boundary, to an enormous current ring flowing around Earth in the inner magnetosphere, to currents flowing throughout the ionosphere and connecting along magnetic field lines to magnetospheric current systems. The solar wind–magnetosphere interaction also produces populations of very energetic particles stored in the magnetosphere and precipitated into the upper atmosphere. The electric currents and the energetic particles can have severe consequences for many human activities, from the ground to space. The variation over time of these electric current systems and energetic-particle populations in the geospace environment modulates those consequences.

Figure 3. Evolution of a computational mesh illustrating grid adaptation in response to numerical solution changes. The cross-sectional cuts through a 3D grid are for a solar-wind calculation at four different time instances. The computational cells are not shown for the smaller blocks.

Space-weather timescales range from minutes to decades. The longest timescales usually considered important to space weather are the 11-year solar-activity cycle and the 22-year solar magnetic cycle (see Figure 5). Near the solar-activity minimum, the solar wind is nearly completely dominated by a quasi-steady outflow.

Significant temporal variations of the solar-wind speed at Earth's orbit routinely occur in response to the rotation with the Sun of quasi-steady solar-wind structures. The wind's large-amplitude Alfvén waves produce large fluctuations in the southward component of the interplanetary magnetic field (IMF). CMEs—the transient ejection of mass and magnetic field from the solar corona—also produce solar-wind speed and magnetic field variations. Indeed, the most severe storms experienced in the Earth's space environment are driven by exceptionally fast CMEs that exhibit a strong southward magnetic field component throughout a significant fraction of their volume. These very fast CMEs, which are ejected from the corona at speeds of more than 1,000 km/s, also drive strong hydromagnetic shocks. These shocks are efficient producers of energetic particles, which can impact the geospace environment. Of course, a very fast CME is only effective at producing a severe geomagnetic storm when it travels toward Earth, which presents a problem for those attempting to give forewarning of such storms.

Figure 6 illustrates this interaction. Figure 6a depicts the CME-generated magnetic cloud approaching the quiet magnetosphere. In Figure 6b, the cloud initiates a stronger interaction that generates stronger magnetospheric current systems and larger, more energetic magnetospheric particle populations—a geomagnetic storm. As solar activity increases, the frequency of CMEs increases substantially, and the severity of space weather concomitantly increases.

Magnetosphere Simulations

The magnetosphere's steady-state topology for due south- and due north-pointing IMF conditions is of great theoretical interest for magnetospheric physics. In an idealized situation (assuming a nonrotating planet with a strictly southward-pointing magnetic dipole moment), such configurations exhibit two planes of symmetry (the equatorial plane and the noon–midnight meridian). Scientists have researched the due south and due north IMF configurations countless times.

Figure 4. Parallel speedup of BATS-R-US on various architectures. Black dashed lines represent perfect scaling from single-node performance.

Figure 5. Twelve X-ray images of the Sun obtained by the Yohkoh satellite at 90-day increments provide a dramatic view of how the solar corona changes from solar maximum to minimum. As we approach solar maximum, the reverse progression will occur. (Image courtesy of Lockheed Martin.)

Our group used BATS-R-US simulations to investigate the solar wind–terrestrial magnetosphere interaction under northward IMF conditions.20 Figure 7 shows a 3D representation of the simulated magnetospheric topology for northward IMF conditions. White solid lines represent the last closed magnetic field lines in the Northern hemisphere, in effect tracing the magnetopause (the discontinuity separating the shocked solar wind from the region dominated by the terrestrial magnetic field). The color code in the equatorial plane represents the thermal pressure distribution of the magnetospheric particles.

Figure 8a shows simulation results in the noon–midnight meridian for the Northern hemisphere. The upstream conditions are n = 5 cm^-3, u = 400 km/s, acoustic Mach number = 8, specific heat ratio = 5/3, and IMF magnitude = 5 nT. The white lines with arrowheads are magnetic field lines. The color coding represents the logarithm of the thermal pressure. The thick red lines indicate topological boundaries that separate flows of distinct characteristics. Figure 8b shows the equatorial plane for the morning side. White lines with arrowheads are streamlines, and the color coding represents the sonic Mach number.

Figure 8 shows that a bow shock forms in front of the magnetosphere. This fast magnetosonic shock decelerates, deflects, and heats the solar-wind plasma. The magnetosphere is essentially closed except for a small region near the cusp where the IMF and the magnetospheric field reconnect.

In the noon–midnight meridian plane, both the magnetic field and the velocity have no component normal to the plane. In ideal MHD, the magnetic field lines and streamlines are equipotentials because the electric field is perpendicular to both the magnetic field and the velocity. The magnetic field lines in the noon–midnight meridian plane can be at the same potential if they are on the same streamline. The thick red lines in Figure 8a separate different flow regions and, hence, regions with different potentials.

For symmetry reasons, the magnetic field has no component in the equatorial plane; the magnetic field is always perpendicular to the Z = 0 plane and, in our case, points northward everywhere. In contrast, plasma streamlines do not leave the equatorial plane, meaning that the motional electric field is in the equatorial plane and normal to the streamlines. The potential difference between two given streamlines in the equatorial plane remains the same throughout the plane. Magnetospheric electric currents are associated with the distortion of the magnetic field from the dipole field. Stretching the magnetic field generates currents in a clockwise direction, while compressing it produces a counterclockwise current in the equatorial plane. The Lorentz force associated with these currents always tends to restore the dipole field geometry in the closed magnetic field line region. The divergence or convergence of these currents within the closed magnetic field line region generates the field-aligned currents flowing out of the equatorial plane. These field-aligned currents map into the ionosphere. However, the major current component of the distant tail's equatorial region does not converge or diverge within the magnetosphere; it closes via the magnetopause current in the tail region.

Figure 6. The interaction of the magnetosphere with an expanding magnetic cloud. The coronal mass ejection releases a huge amount of magnetized plasma that propagates to Earth orbit in a few days. When such a cloud hits Earth, intense geomagnetic storms are generated. (Illustration courtesy of the NASA International Solar Terrestrial Program.)

Figure 7. 3D representation of the simulated magnetospheric topology for northward interplanetary magnetic field conditions. White solid lines represent the last closed magnetic field lines in the Northern hemisphere. The color code in the equatorial plane represents the magnetospheric particles' thermal pressure distribution.

Figure 8 also reveals three topologically distinct regions in the magnetosphere: the inner core or plasmasphere, the outer magnetosphere, and the low-latitude boundary layer (LLBL)–tail region. These regions are separated by the boundaries marked by thick red lines.

Figure 9 shows the Y component of the electric currents in the noon–midnight meridian plane. On the day side, most of the currents are generated at the bow shock. These currents' Lorentz force decelerates the solar-wind flow. Most of the magnetopause current appears in the region upstream of the first closed magnetospheric field line. Observations of the sheath transition layer, or plasma depletion layer, are consistent with this region of current. The topological boundary itself contains little current and marks the inner edge of the magnetopause current. Observations show that for northward IMF, the topological change occurs at the inner edge of the sheath transition layer.

Figure 8. Simulation results for northward interplanetary magnetic field. (a) Noon–midnight meridian for the Northern hemisphere. The white lines are magnetic field lines. The color coding represents the logarithm of the thermal pressure. The thick red lines indicate the topological boundaries that separate flows of distinct characteristics. (b) Equatorial plane for the morning side. White lines are streamlines, and the color coding represents the Mach number. The two red lines indicate the boundaries between distinct flow regions.

Figure 9. Currents in the noon–midnight meridian plane. The color coding shows the Y component of the current density.

On the night side, the magnetopause current and the magnetosphere's topological boundary become completely different. In the noon–midnight meridian plane, most of the current is associated with magnetic field kinks located around 20 to 30 RE above and below the equator, a region few satellites have visited. If we used this current to define the boundary of the magnetosphere, the magnetosphere would appear to be open. Between the last closed field lines and the magnetopause current, the magnetic field and particle characteristics are similar to (and observationally difficult to distinguish from) those in the closed magnetic field line region of the magnetotail. This is quite natural, because these field lines have only recently lost their topological connection with Earth. It takes an Alfvén travel time for the flux tube's equatorial region to learn that the magnetic field line has been disconnected at the cusp, and even longer before the plasma on these flux tubes assimilates to the solar wind.

Solar Simulations

Traditionally, we've defined CMEs as large-scale expulsions of plasma from the corona seen as bright arcs in coronagraphs that record Thomson-scattered light. These events are the most stunning activity of the solar corona, in which typically 10^15 to 10^16 g of plasma is hurled into interplanetary space with a kinetic energy of the order of 10^31 to 10^32 ergs. Observations show that most CMEs originate from the disruption of large-scale coronal structures known as helmet streamers, arcade-like structures commonly found in coronagraph images.
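As a sanity check, the quoted mass and kinetic-energy ranges imply ejection speeds of a few hundred to a few thousand km/s, consistent with observed CMEs. A short sketch of the arithmetic (the pairing of mass and energy endpoints is our assumption):

```python
import math

def implied_speed_km_s(mass_g, energy_erg):
    """Speed from E = (1/2) m v^2 in CGS units, converted to km/s."""
    return math.sqrt(2.0 * energy_erg / mass_g) / 1e5   # cm/s -> km/s

# Bracket the quoted ranges (the pairing of endpoints is our assumption):
slow = implied_speed_km_s(1e16, 1e31)   # heavy ejection, low kinetic energy
fast = implied_speed_km_s(1e15, 1e32)   # light ejection, high kinetic energy
print(f"implied CME speeds: {slow:.0f} to {fast:.0f} km/s")
```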

Central to understanding CME dynamics is the pre-event magnetic field's nature. For a helmet streamer to be in static equilibrium, the underlying magnetic field must be in a closed configuration to confine the dense plasma that would otherwise be carried out with the solar wind. Observations show that the photospheric magnetic field associated with helmet streamers is in a bipolar configuration, with a neutral line separating opposite magnetic polarities. The magnetic field configuration of pre-event helmet streamers is a sheared arcade, possibly containing a flux rope coinciding with the plasma cavity. CMEs could represent a significant restructuring of the global coronal magnetic field.

We carried out a set of numerical simulations^30 to test an analytical model for the initiation of CMEs developed by Vyacheslav Titov and Pascal Démoulin.^31 They derived their model from a long line of previous analytical models containing flux ropes suspended in the corona by a balance between magnetic compression and tension forces. Figure 10 shows the initial state of the simulation.

The flux rope undergoes an increasing acceleration, with its upward motion eventually forming a current sheet at the pre-existing X-line's location. Once the current sheet starts forming, the flux rope begins to decelerate. The line-tying at the ends of the flux rope also might contribute to this deceleration. Another effect that we see at this time (an entirely 3D effect) is that torsional Alfvén waves transport most of the magnetic helicity from the flux rope's footprints toward its top. As a result, the restraining effect caused by the line-tying of the flux rope's feet becomes important.

Figure 11 shows a 3D view of the magnetic field configuration at t = 35 min. A close look at the flux-rope footprints reveals closed loops connecting the two flux regions. The most plausible explanation of this structure is an interchange reconnection between the highly twisted flux-rope field lines and the overlying closed field lines from the dipole region. As a result of this process, the newly created closed field lines connect the two flux regions, while the highly twisted field lines originate from the dipole field region. The iso-surface of Bz = 0 (shaded in gray) in Figure 10 indicates where this process preferentially takes place.

Figure 10. 3D view of the magnetic field configuration for the initial state. Solid lines are magnetic field lines, where the color code visualizes the magnetic field strength in [T]. The surface shaded in gray is an iso-surface of Bz = 0.


30 COMPUTING IN SCIENCE & ENGINEERING

Simulations of Solar Eruptions

Only in the last few years have CME models been produced that allow for 3D spatial structures. The approach we take to modeling CMEs is to start with a system that is initially out of equilibrium and simulate its subsequent time evolution. We begin by numerically forming a steady-state corona model along with a bimodal solar wind (slow solar wind near the equator and fast wind at high solar latitudes). The model coronal magnetic field is representative of a solar-minimum configuration with open polar field lines and low-latitude closed field lines forming a streamer belt. Having attained this steady state, we superimpose a 3D magnetic flux rope and its entrained plasma into the streamer belt of our steady-state coronal model (see Figure 12).^32 The flux rope we employ comes from the Sarah Gibson and Boon Chye Low^33 family of analytic solutions of the ideal MHD equations describing an idealized, self-similar expansion of a magnetized cloud resembling a CME. This configuration allows the system to contain substantial free energy. In the subsequent time evolution of the system, we find that the flux rope expands rapidly, driving a strong shock ahead of it as it is expelled from the corona along with large amounts of plasma, mimicking a CME. Including this flux rope in a numerical, steady-state corona and solar-wind model extends the Gibson–Low model to address its interaction with the background solar wind.

Figure 13 displays the CME's time evolution, with a time series of figures showing the system at t = 1.0, 2.0, 3.0, and 4.0 hours. The figure depicts the system in 2D equatorial slices. The panels show false-color images of the plasma velocity magnitude on which solid white lines representing the magnetic field are superimposed.

We find the flux rope rapidly expanding and being expelled from the corona while decelerating. An MHD shock front moves ahead of the flux rope, traveling at nearly the same speed as the rope on the y-axis while propagating far ahead to the sides of the rope. In effect, the shock front moves at a relatively uniform speed, initially forming a spherical bubble, while the flux rope inside the front moves forward faster than it expands to the sides. The ambient solar wind's structure has a profound influence on the shock front. The wind and magnetosonic speeds are minimal in the heliospheric current sheet, and both grow with heliospheric latitude. As a result, the shock travels at higher latitude in the fast solar wind with a lower Mach number than found at low latitude.
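This latitude dependence follows from the fast magnetosonic speed, which for propagation perpendicular to B is v_f = sqrt(c_s^2 + v_A^2). A hedged sketch with purely illustrative wind parameters (none taken from the simulation):

```python
import math

def fast_magnetosonic(cs_km_s, va_km_s):
    """Fast magnetosonic speed for propagation perpendicular to B."""
    return math.sqrt(cs_km_s**2 + va_km_s**2)

def shock_mach(shock_km_s, wind_km_s, cs_km_s, va_km_s):
    """Shock Mach number measured in the frame of the ambient wind."""
    return (shock_km_s - wind_km_s) / fast_magnetosonic(cs_km_s, va_km_s)

# Purely illustrative parameters (assumed): the same 1,000 km/s front in
# slow wind near the current sheet and in high-latitude fast wind.
m_low = shock_mach(1000.0, 350.0, 60.0, 50.0)     # low latitude: high Mach
m_high = shock_mach(1000.0, 700.0, 100.0, 200.0)  # high latitude: lower Mach
print(f"M(low lat) = {m_low:.1f}, M(high lat) = {m_high:.1f}")
```

The same front is strongly supermagnetosonic near the current sheet but only marginally so in the fast wind, which is the behavior described above.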

Our 3D numerical MHD model of a high-speed CME possesses many of the observed bulk characteristics, such as total mass and energy, and also is representative of basic features such as shock formation and related deceleration. Our model's success in capturing many properties of CMEs, including pre-event structures and background solar

Figure 11. 3D view of the magnetic field configuration at t = 35 min. As in Figure 10, the solid lines are magnetic field lines. The color scale (reversed relative to that in Figure 10) shows the magnetic field strength. The lower surface shaded in purple is an iso-surface of electric current density of magnitude 0.0015 A/m^2. The upper surface shaded in maroon is an iso-surface of flow velocity of magnitude 200 km/s.

Figure 12. 3D representation of the coronal magnetic field drawn as solid colored lines at t = 0 hours. Red and blue lines represent the flux rope. Orange and yellow lines show the poloidal field of the steady-state equatorial streamer belt. On the x-z plane, the computational mesh is represented by thin black lines superimposed on a false-color image of the velocity magnitude.



wind, suggests its value for studying CME propagation and space-weather phenomena.

Sun-to-Earth Simulation

In our most ambitious project to date, we used BATS-R-US to simulate an entire space-weather event, from its generation at the Sun through the formation and evolution of a CME, to its interaction with the magnetosphere–ionosphere system. An earlier simulation successfully demonstrated the feasibility of such end-to-end simulations, but it suffered from relatively low resolution. In the original simulations, we used a couple of million cells to describe the entire Sun–solar wind–magnetosphere–ionosphere system, with a density pulse near the Sun generating the CME.

In the present end-to-end simulation, we use the CME generation mechanism described in the "Simulations of Solar Eruptions" section and run the simulation all the way from the solar surface to beyond 1 AU. We used up to 14 million computational cells with AMR. The smallest cell size was 1/32 solar radius (21,750 km), while the largest cell size was 4 solar radii (2.785 × 10^6 km). We carried out the grid refinement so that the CME evolution along the Sun–Earth line was well resolved, while far away from this line the grid remained relatively coarse.
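The quoted cell sizes are consistent with a standard factor-of-two AMR hierarchy. A quick check, taking the solar radius as 696,000 km (our assumption, consistent with the quoted 21,750 km):

```python
import math

R_SUN_KM = 696_000.0               # solar radius (assumed value)

smallest = R_SUN_KM / 32           # finest AMR cell on the Sun-Earth line
largest = 4.0 * R_SUN_KM           # coarsest cell far from that line
levels = math.log2(largest / smallest)

print(f"smallest cell: {smallest:,.0f} km")    # 21,750 km, as quoted
print(f"largest cell:  {largest:.3e} km")      # ~2.78e6 km
print(f"{levels:.0f} factor-of-two refinement levels between them")
```

The ratio of 128 between the coarsest and finest cells corresponds to seven levels of factor-of-two refinement.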

Figure 13. Time sequence of the coronal mass ejection in the equatorial plane at (a) t = 1 hour, (b) t = 2 hours, (c) t = 3 hours, and (d) t = 4 hours. Solid white lines display magnetic streamlines (two-dimensional projections of 3D magnetic field lines) superimposed on a color image of the velocity magnitude.



This method enabled us to have good spatial and time resolution in simulating the interaction of the CME with the magnetosphere, while still employing a manageable total number of computational cells.

Figure 14 shows a 2D cut through the CME in the meridional plane going through the expanding magnetic flux rope. White lines represent magnetic streamlines in the meridional plane, while the color code shows the plasma density. The accumulation of material behind the leading shock and in front of the leading edge of the propagating flux rope is called the snowplow effect. You can see the closed-loop magnetic structure of the flux rope surrounded by open field lines of the solar wind, and a significantly depleted density structure inside the magnetic flux rope that drives the CME.

The resolution along the Sun–Earth axis near the leading edge of the CME (near the location of Earth) is 1/8 solar radius (approximately 14 Earth radii). This resolution is unprecedented in Sun–Earth simulations, and it enables us to achieve approximately 4-minute temporal resolution in describing the interaction between the CME structure and the magnetosphere–ionosphere system.
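Both figures quoted here follow from simple arithmetic. A sketch, assuming a solar radius of 696,000 km and a representative 400 km/s propagation speed (both our assumptions):

```python
R_SUN_KM = 696_000.0    # solar radius (assumed value)
R_EARTH_KM = 6_371.0    # Earth radius

cell_km = R_SUN_KM / 8.0                           # finest cell near Earth
print(f"{cell_km / R_EARTH_KM:.1f} Earth radii")   # ~13.7, i.e. ~14 RE

# A front sweeping past at a typical CME speed (400 km/s, our assumption)
# crosses one such cell in roughly the quoted cadence:
crossing_min = cell_km / 400.0 / 60.0
print(f"cell crossing time: {crossing_min:.1f} min")   # ~3.6 min
```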

Figure 15 shows the temporal evolution of the solar-wind parameters just upstream of Earth as the CME moves by. From top to bottom, the panels show the three components of the magnetic field vector and the solar-wind density and radial velocity, respectively. We initiated the CME on 21 March 2000 at 1210 UT. The first signature of the CME arrived about 69 hours later in the form of the shock. Note that the solar-wind radial velocity jumps by about 100 km/s when the CME arrives and that efficient application of AMR makes the shock's leading edge very well resolved. The solar-wind velocity remains nearly constant throughout the density pile-up preceding the magnetic bubble structure associated with the expanding flux rope. When Earth enters the magnetic bubble at around 80 hours after event initiation, the velocity exhibits a slow decrease while the density suddenly drops by nearly an order of magnitude. At this time, the Bz component of the interplanetary magnetic field exhibits its most important variation from the point of view of geo-effective interaction: first it increases to about +20 nT (northward IMF) and then it rotates to –20 nT (southward IMF) in about three hours. This rotation is highly geo-effective, and it generates all kinds of geomagnetic activity, such as reconfiguration of magnetospheric topology.
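The 69-hour transit time fixes the CME's average Sun-to-Earth speed, a useful cross-check on the velocities in Figure 15. Taking 1 AU as 1.496 × 10^8 km:

```python
AU_KM = 1.496e8                 # one astronomical unit
transit_hours = 69.0            # shock arrival time quoted in the text

avg_speed = AU_KM / (transit_hours * 3600.0)   # mean Sun-to-Earth speed
print(f"average transit speed: {avg_speed:.0f} km/s")   # ~600 km/s
```

The result, around 600 km/s, is consistent with a moderate-speed CME that has decelerated into the ambient wind.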

Figure 16 illustrates the geo-effectiveness of the Bz rotation. The four panels represent noon–midnight meridional cuts through the 3D magnetosphere solutions at 1800, 1900, 2000, and 2100 UT. The peak of the northward IMF is at 1800 UT with Bz ≈ 20 nT. At 1900 UT, the Bz still is northward (≈ 10 nT), and it changes sign shortly before 2000 UT. At 2000 UT, the Bz already is southward, and the magnetosphere exhibits a transitional state. The magnetosphere shows dayside reconnection characterizing southward IMF conditions, while at the same time the long magnetotail looks very much like a northward IMF configuration. Clearly, the effect of the IMF southward turning had not yet propagated all the way down the magnetotail. The last panel shows a snapshot at 2100 UT, well over an hour after the IMF turned south at the nose of the magnetosphere. At this time, the magnetosphere shows a fundamentally southward configuration, but the complex and long tail is a clear indicator that the configuration is still changing and far from equilibrium.

The ionospheric convection pattern also changes dramatically during the IMF southward turning. The cross-polar-cap potential changes from 75 kV at 1800 UT to 190 kV at 2100 UT, consistent with the reconfiguration of the magnetosphere.

This simulation is the first successful attempt to use first-principles-based global simulation codes to launch and follow a CME near the Sun and describe its highly geo-effective interaction with the

Figure 14. Meridional cut through the coronal mass ejection as it approaches Earth at around 69 hours after the initiation. The color code represents the solar-wind plasma density, while the white lines represent the projection of magnetic field lines onto the meridional plane.



terrestrial magnetosphere. We are confident that researchers will routinely carry out similar space-weather simulations in the future and use the results to forecast space-weather events.

Space-plasma simulations greatly matured in the last decade. New simulation codes employ modern and efficient numerical methods and are able to run much faster than real time on parallel computers. Our simulation capabilities also are rapidly evolving. In the next few years, model coupling will be at the center of development. A few years from now, we will be able to simulate the individual elements of the complex Sun–Earth system with the help of a software framework that automatically takes care of the coupling of the various components. Our goal is to accomplish validated predictive capabilities for space-weather events and hazards.

Acknowledgments

US Department of Defense Multidisciplinary University Research Initiative grant F49620-01-1-0359, National Science Foundation Knowledge and Distributed Intelligence grant NSF ATM-9980078, NSF Computer and Information for Science and Engineering grant ACI-9876943, NASA Earth Science and Technology Office/Computational Technologies Cooperative Agreement NCC5-614, and NASA Applied Information Systems Research Program grant NAG5-9406 at the University of Michigan supported research for this manuscript. The Hungarian Science Foundation (OTKA grant T037548) partially supported Gábor Tóth.

References

1. B. van Leer, "Towards the Ultimate Conservative Difference Scheme. V. A Second-Order Sequel to Godunov's Method," J. Computational Physics, vol. 32, no. 1, 1979, pp. 101–136.

2. C.R. Evans and J.F. Hawley, "Simulation of Magnetohydrodynamic Flows: A Constrained Transport Method," Astrophysical J., vol. 332, 1988, pp. 659–677.

3. K.G. Powell, An Approximate Riemann Solver for Magnetohydrodynamics (That Works in More than One Dimension), tech. report 94-24, Inst. for Computer Applications in Science and Eng., NASA Langley Research Center, 1994.

4. P.D. Lax and B. Wendroff, "Systems of Conservation Laws," Comm. Pure and Applied Mathematics, vol. 13, 1960, pp. 217–237.

5. S.K. Godunov, "Symmetric Form of the Equations of Magnetohydrodynamics," Numerical Methods for Mechanics of Continuum Medium, Siberian Branch of USSR Academy of Science, vol. 1, 1972, pp. 26–34 (in Russian).

6. T.I. Gombosi et al., "Semi-Relativistic Magnetohydrodynamics and Physics-Based Convergence Acceleration," J. Computational Physics, vol. 177, 2002, pp. 176–205.

7. J.P. Boris, A Physically Motivated Solution of the Alfvén Problem, tech. report, NRL Memorandum Report 2167, Naval Research Lab., 1970.

8. S.K. Godunov, "A Difference Scheme for Numerical Computation of Discontinuous Solutions of Hydrodynamic Equations," Sbornik: Math., vol. 47, no. 3, 1959, pp. 271–306 (in Russian).

9. P.L. Roe, "Approximate Riemann Solvers, Parameter Vectors, and Difference Schemes," J. Computational Physics, vol. 43, 1981, pp. 357–372.

10. B. van Leer, "Towards the Ultimate Conservative Difference Scheme. II. Monotonicity and Conservation Combined in a Second-Order Scheme," J. Computational Physics, vol. 14, 1974, pp. 361–370.

11. G. Tóth, "The ∇ • B Constraint in Shock Capturing Magnetohydrodynamic Codes," J. Computational Physics, vol. 161, 2000, pp. 605–652.

12. K.G. Powell et al., "A Solution-Adaptive Upwind Scheme for Ideal Magnetohydrodynamics," J. Computational Physics, vol. 154, no. 2, 1999, pp. 284–309.

13. J.U. Brackbill and D.C. Barnes, "The Effect of Nonzero ∇ • B on the Numerical Solution of the Magnetohydrodynamic Equations," J. Computational Physics, vol. 35, 1980, pp. 426–430.

14. G. Tóth and P.L. Roe, "Divergence- and Curl-Preserving Prolongation Formulas," J. Computational Physics, vol. 180, 2002, pp. 736–750.

15. D.L. De Zeeuw and K.G. Powell, "An Adaptively-Refined Cartesian Mesh Solver for the Euler Equations," J. Computational Physics, vol. 104, 1993, pp. 55–68.

Figure 15. Temporal evolution of the solar-wind parameters just upstream of Earth as the CME moves by. The panels show the three components of the magnetic field vector, the solar-wind density, and the radial velocity, respectively. The CME was initiated on 21 March 2000 at 1210 UT.



16. T.J. Linde, A Three-Dimensional Adaptive Multifluid MHD Model of the Heliosphere, doctoral thesis, Dept. of Aerospace Engineering, Univ. of Mich., 1998.

17. I. Sokolov et al., "Artificial Wind—A New Framework to Construct Simple and Efficient Upwind Shock-Capturing Schemes," J. Computational Physics, vol. 181, 2002, pp. 354–393.

18. C.P.T. Groth et al., "Global 3D MHD Simulation of a Space Weather Event: CME Formation, Interplanetary Propagation, and Interaction with the Magnetosphere," J. Geophysical Research, vol. 105, 2000, pp. 25053–25078.

19. T.I. Gombosi et al., "The Length of the Magnetotail for Northward IMF: Results of 3D MHD Simulations," Physics of Space Plasmas, T. Chang and J.R. Jasperse, eds., vol. 15, 1998, pp. 121–128.

20. P. Song et al., "A Numerical Study of Solar Wind–Magnetosphere Interaction for Northward IMF," J. Geophysical Research, vol. 104, 1999, pp. 28361–28378.

21. T.J. Linde et al., "The Heliosphere in the Magnetized Local Interstellar Medium: Results of a 3D MHD Simulation," J. Geophysical Research, vol. 103, 1998, pp. 1889–1904.

22. T.I. Gombosi et al., "Three-Dimensional Multiscale MHD Model of Cometary Plasma Environments," J. Geophysical Research, vol. 101, 1996, pp. 15233–15253.

23. R.M. Häberli et al., "Modeling of Cometary X-Rays Caused by Solar Wind Minor Ions," Science, vol. 276, 1997, pp. 939–942.

24. K. Kabin et al., "Interaction of Mercury with the Solar Wind," Icarus, vol. 143, 2000, pp. 397–406.

25. R. Bauske et al., "A Three-Dimensional MHD Study of Solar Wind Mass Loading Processes at Venus: Effects of Photoionization, Electron Impact Ionization, and Charge Exchange," J. Geophysical Research, vol. 103, 1998, pp. 23625–23638.

26. Y. Liu et al., "3D Multi-Fluid MHD Studies of the Solar Wind Interaction with Mars," Geophysical Research Letters, vol. 26, 1999, pp. 2689–2692.

27. K.C. Hansen et al., "A 3D Global MHD Simulation of Saturn's Magnetosphere," Advances in Space Research, vol. 26, 2000, pp. 1681–1690.

28. K. Kabin et al., "Interaction of the Saturnian Magnetosphere with Titan: Results of a 3D MHD Simulation," J. Geophysical Research, vol. 104, 1999, pp. 2451–2458.

29. K. Kabin et al., "Io's Magnetospheric Interaction: An MHD Model with Day-Night Asymmetry," Planetary and Space Science, vol. 49, 2001, pp. 337–344.

Figure 16. Noon–midnight meridional cuts through the 3D magnetosphere solutions at (a) 1800, (b) 1900, (c) 2000, and (d) 2100 UT on 24 March 2000. The peak of the northward interplanetary magnetic field is at 1800 UT with Bz ≈ 20 nT. The color code represents pressure, while the white lines are projections of the magnetic field lines onto the meridional plane.

30. I.I. Roussev et al., "A Three-Dimensional Flux Rope Model for Coronal Mass Ejections Based on a Loss of Equilibrium," Astrophysical J., vol. 588, 2003, pp. L45–L48.

31. V.S. Titov and P. Démoulin, "Basic Topology of Twisted Magnetic Configurations in Solar Flares," Astronomy and Astrophysics, vol. 351, 1999, pp. 701–720.

32. W.B. Manchester et al., "Three-Dimensional MHD Simulation of a Flux-Rope Driven CME," J. Geophysical Research, vol. 109, 2004, A01102, doi:10.1029/2002JA009672.

33. S. Gibson and B.C. Low, "A Time-Dependent Three-Dimensional Magnetohydrodynamic Model of the Coronal Mass Ejection," Astrophysical J., vol. 493, 1998, pp. 460–473.

Tamas I. Gombosi is professor and chair of the Department of Atmospheric, Oceanic, and Space Sciences at the University of Michigan. His research interests include space-plasma physics, planetary science, and space weather. He has an MSc and PhD from the Lóránd Eötvös University, Budapest, Hungary. He is a fellow of the American Geophysical Union and a member of the International Academy of Astronautics, the American Physical Society, and the American Astronomical Society. Contact him at [email protected].

Kenneth G. Powell is professor of aerospace engineering at the University of Michigan. His research interests include computational space physics and computational aerodynamics. He has an ScD from MIT. He is an associate fellow of the American Institute of Aeronautics and Astronautics. Contact him at [email protected].

Darren L. De Zeeuw is an associate research scientist in the Department of Atmospheric, Oceanic, and Space Sciences at the University of Michigan. His research interests include space-plasma physics, space weather, and large research code development. He has a BSc from Calvin College and an MSc and a PhD from the University of Michigan. He is a member of the American Geophysical Union and a senior member of the American Institute of Aeronautics and Astronautics. Contact him at [email protected].

C. Robert Clauer is research professor and co-director of the Center for Space Environment Modeling within the Department of Atmospheric, Oceanic, and Space Sciences at the University of Michigan. His research interests include solar wind–magnetosphere–ionosphere coupling, magnetospheric electrodynamics, space weather, and computer networks in support of scientific activities. He has an MSc and a PhD from the University of California, Los Angeles. He is a member of the American Geophysical Union. Contact him at [email protected].

Kenneth C. Hansen is an assistant research scientist at the University of Michigan. His research interests include magnetospheric science and the plasma environment of comets, Jupiter, and Saturn. He has a BSc and an MSc from Brigham Young University and a PhD from the University of Michigan. He is a member of the American Geophysical Union. Contact him at [email protected].

Ward B. Manchester is an assistant research scientist in the Department of Atmospheric, Oceanic, and Space Sciences at the University of Michigan. His research interests include solar and heliospheric physics with an emphasis on coronal mass ejections and magnetic flux emergence. He has an MSc and a PhD from the University of Illinois. He is a member of the American Astronomical Society and the American Geophysical Union. Contact him at [email protected].

Aaron J. Ridley is an associate research scientist at the University of Michigan. His research interests include ionospheric and thermospheric physics, ionosphere–magnetosphere coupling, and data assimilation. He has an MSc and a PhD from the University of Michigan. He is a member of the American Geophysical Union. Contact him at [email protected].

Ilia Roussev is an assistant research scientist at the Department of Atmospheric, Oceanic, and Space Sciences at the University of Michigan. His research interests include solar and heliospheric physics, space-plasma physics, and space weather. He has an MSc from Sofia University, Sofia, Bulgaria, and a PhD from Queen's University, Belfast, Northern Ireland. He is a member of the American Geophysical Union and the American Astronomical Society. Contact him at [email protected].

Igor V. Sokolov is an associate research scientist at the University of Michigan. His research interests include computational, plasma, and laser physics. He has a PhD from the General Physics Institute of the Russian Academy of Sciences, Moscow. He is a member of the American Physical Society and the American Geophysical Union. Contact him at [email protected].

Quentin F. Stout is professor of electrical engineering and computer science, and co-director of the Center for Space Environment Modeling, at the University of Michigan. His research interests include parallel and scientific computing, adaptive statistical designs, and algorithms. He has a BA from Centre College and a PhD in mathematics from Indiana University. Contact him at [email protected].

Gábor Tóth is an associate research scientist at the University of Michigan and an associate professor at Eötvös University, Budapest, Hungary. His research interests include computational magnetohydrodynamics and its applications in space physics and astrophysics. He has a PhD in the astrophysical sciences from Princeton University. He is a member of the American Geophysical Union. Contact him at [email protected].