
WRF Model: Software Architecture

WRF Tutorial, 8/16/01

John Michalakes, NCAR

Overview

• Purpose
  – Familiarization with the concepts and structure behind the WRF software from the point of view of a scientific user/developer

• Outline
  – Code organization
  – Framework goals and design aspects
  – Software hierarchy, model interface, and memory model
  – Parallelism
  – Data structures
  – The WRF Registry

Directory Structure (WRF 1.1)

clean*       clean script
compile*     compile script
configure*   configure script
Registry/    Registry            registry file
arch/        configure.defaults  arch-specific compile options
external/    IOAPI               WRF I/O API specification
             RSL/                external comm package: RSL
             io_netcdf/          external I/O package: NetCDF
inc/         holds intermediate include files
src/         source directory (.F files)
test/        directory containing test cases
             b_wave/  hill2d_x/  quarter_ss/  real/  squall2d_x/  squall2d_y/
tools/       use_registry        Perl scripts implementing Registry

Directory Structure (WRF 1.2)

clean*       clean script
compile*     compile script
configure*   configure script
Registry/    Registry            registry file
arch/        configure.defaults  arch-specific compile options
dyn_eh/      Eulerian-height dyncore source files
dyn_em/      Eulerian-mass dyncore source files
dyn_slt/     semi-implicit semi-Lagrangian dyncore
external/    same as 1.1
frame/       driver layer (framework) source files
inc/         holds intermediate include files
main/        wrf.F               main WRF source file
phys/        physics directory
share/       files shared across dyncores
test/        test cases
tools/       registry*           new implementation of registry mechanism

WRF Model Software

Goals
• Good performance
• Portable across a range of architectures
• Flexible, maintainable, understandable
• Facilitate code reuse
• Multiple dynamics/physics options
• Run-time configurability
• Package independence

Aspects of Design
• Single-source code
• Fortran90 modules, dynamic memory, structures, recursion
• Hierarchical design
• Multi-level parallelism
• CASE: Registry
• Package APIs

Code structure

(diagram: hierarchy of software layers — driver / mediation / model)

Driver layer
• WRF main:
  – Top-level flow of control: initialize packages, input config info, allocate/initialize/decompose main domain
  – Start integration
  – Shutdown
• Integrate:
  – Time loop
  – Nesting (recursive)
  – Calls to I/O

Mediation layer
• Solve_interface:
  – 1 step on 1 domain
  – Select dyncore
  – Dereference ADT
• Solve_eh:
  – Sequence through tile loops calling model-layer routines for dynamics and physics
  – Multi-threading (OpenMP)
  – Interprocessor communication

Model layer
• Top-level model-layer subroutines:
  – One tile computation
  – Boundary conditions/avoidance
  – May select physics option
  – May dereference 4D fields
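The call chain implied by this diagram can be illustrated with a small, self-contained Fortran sketch. Everything below is illustrative only (routine names, argument lists, and the trivial "field" stand in for the real WRF interfaces); it just shows the driver -> mediation -> model layering and the direction of calls.

! Minimal, self-contained sketch of the WRF layering (driver -> mediation -> model).
! All names and argument lists here are illustrative, not the real WRF interfaces.
MODULE layers_sketch
  IMPLICIT NONE
CONTAINS
  SUBROUTINE model_layer_routine ( field, its, ite )       ! model layer: one tile computation
    INTEGER, INTENT(IN) :: its, ite
    REAL, INTENT(INOUT) :: field(its:ite)
    field = field + 1.0
  END SUBROUTINE model_layer_routine

  SUBROUTINE solve_interface ( field, n )                  ! mediation layer: pick dyncore, call its solver
    INTEGER, INTENT(IN) :: n
    REAL, INTENT(INOUT) :: field(n)
    CALL model_layer_routine ( field, 1, n )
  END SUBROUTINE solve_interface

  RECURSIVE SUBROUTINE integrate ( field, n, nsteps )      ! driver layer: time loop (nesting omitted)
    INTEGER, INTENT(IN) :: n, nsteps
    REAL, INTENT(INOUT) :: field(n)
    INTEGER :: step
    DO step = 1, nsteps
      CALL solve_interface ( field, n )                    ! one step on one domain
    END DO
  END SUBROUTINE integrate
END MODULE layers_sketch

PROGRAM wrf_sketch                                         ! driver layer: top-level flow of control
  USE layers_sketch
  IMPLICIT NONE
  REAL :: field(10)
  field = 0.0                                              ! "allocate, initialize main domain"
  CALL integrate ( field, 10, 5 )                          ! "start integration"
  PRINT *, 'field(1) after 5 steps:', field(1)             ! "shutdown"
END PROGRAM wrf_sketch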


WRF domain derived data type (frame/module_domain.F)

MODULE module_domain

  TYPE domain
    REAL, DIMENSION(:,:,:), POINTER :: eh_ru_1
    REAL, DIMENSION(:,:,:), POINTER :: eh_ru_2
    REAL, DIMENSION(:,:,:), POINTER :: eh_rv_1
    . . .
  END TYPE domain

CONTAINS

  SUBROUTINE allocate_space_field ( grid , . . . )
    TYPE (domain), POINTER :: grid
    IF ( dyn_opt == DYN_EH ) THEN
      ALLOCATE( grid%eh_ru_1(ims:ime,kms:kme,jms:jme) )
      ALLOCATE( grid%eh_ru_2(ims:ime,kms:kme,jms:jme) )
      . . .
    ELSE
      . . .

(generated by Registry)
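To make the allocation pattern concrete, here is a compilable toy version of the same idea: a derived type with a pointer field, allocated over memory dimensions inside an allocate routine. The type, the field, the argument list, and the bounds are simplified stand-ins, not the real module_domain interface.

! Illustrative only: a simplified domain type and allocation in the same style.
! The real module_domain has many more fields and a different interface.
MODULE domain_sketch
  IMPLICIT NONE
  TYPE domain
    REAL, DIMENSION(:,:,:), POINTER :: eh_ru_1 => NULL()
  END TYPE domain
CONTAINS
  SUBROUTINE allocate_space_field ( grid, ims, ime, kms, kme, jms, jme )
    TYPE (domain), POINTER :: grid
    INTEGER, INTENT(IN) :: ims, ime, kms, kme, jms, jme
    ALLOCATE( grid%eh_ru_1(ims:ime,kms:kme,jms:jme) )      ! memory-dimension bounds
    grid%eh_ru_1 = 0.0
  END SUBROUTINE allocate_space_field
END MODULE domain_sketch

PROGRAM test_domain
  USE domain_sketch
  IMPLICIT NONE
  TYPE (domain), POINTER :: head_grid
  ALLOCATE( head_grid )                                    ! the domain object itself
  CALL allocate_space_field ( head_grid, 1, 10, 1, 5, 1, 10 )
  PRINT *, 'field shape:', SHAPE( head_grid%eh_ru_1 )      ! expect 10 5 10
END PROGRAM test_domain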


pseudo code for integration (frame/module_integrate.F)

RECURSIVE SUBROUTINE integrate ( grid, start_step, end_step )
  DO step = start_step , end_step
    CALL solver ( grid )                                   ! Advance 1 step
    WHILE ( active nests of grid )
      CALL force_nest( grid, grid->child(i), mapping, subset, interpolator )
      CALL integrate ( grid->child(i), (step-1)*(nest_ratio**nest_level),   &
                                       (step  )*(nest_ratio**nest_level) )
      CALL fdbck_nest( grid->child(i), grid, mapping, subset, interpolator )
    END WHILE
  END DO
END SUBROUTINE
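The nesting recursion can be exercised with a self-contained toy: for every step a parent takes, its child integrates nest_ratio finer steps. All names and the simplified step accounting below are illustrative; forcing, feedback, and the nest_level exponent from the slide are omitted.

! Self-contained sketch of recursive nest integration.
! Names (node, nest_ratio, ...) are illustrative, not WRF's.
MODULE nest_sketch
  IMPLICIT NONE
  INTEGER, PARAMETER :: nest_ratio = 3
  TYPE node
    INTEGER :: level = 0
    INTEGER :: steps_taken = 0
    TYPE (node), POINTER :: child => NULL()
  END TYPE node
CONTAINS
  RECURSIVE SUBROUTINE integrate ( grid, start_step, end_step )
    TYPE (node), INTENT(INOUT) :: grid
    INTEGER, INTENT(IN) :: start_step, end_step
    INTEGER :: step
    DO step = start_step, end_step
      grid%steps_taken = grid%steps_taken + 1              ! stands in for "CALL solver ( grid )"
      IF ( ASSOCIATED( grid%child ) ) THEN                 ! force, integrate, feed back the nest
        CALL integrate ( grid%child, (step-1)*nest_ratio + 1, step*nest_ratio )
      END IF
    END DO
  END SUBROUTINE integrate
END MODULE nest_sketch

PROGRAM test_nesting
  USE nest_sketch
  IMPLICIT NONE
  TYPE (node) :: parent
  ALLOCATE( parent%child )
  parent%child%level = 1
  CALL integrate ( parent, 1, 4 )
  PRINT *, 'parent steps:', parent%steps_taken, ' child steps:', parent%child%steps_taken   ! 4 and 12
END PROGRAM test_nesting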


pseudo code for solve interface (share/solve_interface.F)

SUBROUTINE solve_interface ( grid )
  IF ( dyn_opt == DYN_EH ) THEN
    CALL solve_eh ( grid->eh_ru, grid->eh_rv, grid->eh_rtb, grid->eh_rtb, grid->eh_rw, . . . )
  ELSE IF ( dyn_opt == DYN_EM ) THEN
    CALL solve_em ( . . . )
  ELSE IF ( dyn_opt == DYN_SL ) THEN
    CALL solve_sl ( . . . )
  ELSE IF ( dyn_opt . . .
  ENDIF
END SUBROUTINE

(argument dereference lists generated by Registry)

pseudo code for eh solver (dyn_eh/solve_eh.F)

SUBROUTINE solve_eh ( ru, rv, rtb, rtb, rw, . . . )

  ! dummy argument declarations
  ! i1 data declarations
  INTEGER, DIMENSION(max_tiles) :: i_start, i_end, j_start, j_end

  CALL get_tiles ( numtiles, i_start, i_end, j_start, j_end )
  . . .
#include "HALO_EH_A.inc"

!$OMP PARALLEL DO
  DO ij = 1, numtiles
    its = i_start(ij) ; ite = i_end(ij)
    jts = j_start(ij) ; jte = j_end(ij)
    CALL model_subroutine( arg1, arg2, . . .              &
                           ids, ide, jds, jde, kds, kde,  &
                           ims, ime, jms, jme, kms, kme,  &
                           its, ite, jts, jte, kts, kte )
  END DO
  . . .

END SUBROUTINE

(HALO_EH_A.inc generated by Registry)

template for model layer subroutine

SUBROUTINE model ( arg1, arg2, arg3, . . . , argn,        &
                   ids, ide, jds, jde, kds, kde,          &  ! Domain dims
                   ims, ime, jms, jme, kms, kme,          &  ! Memory dims
                   its, ite, jts, jte, kts, kte )            ! Tile dims
  IMPLICIT NONE

  ! Define Arguments (S and I1) data
  REAL, DIMENSION (kms:kme,ims:ime,jms:jme) :: arg1, . . .
  REAL, DIMENSION (ims:ime,jms:jme)         :: arg7, . . .
  . . .
  ! Define Local Data (I2)
  REAL, DIMENSION (kts:kte,its:ite,jts:jte) :: loc1, . . .
  . . .
  ! Executable code; loops run over tile dimensions
  DO j = jts, jte
    DO i = its, ite
      DO k = kts, kte
        IF ( i > ids .AND. i < ide ) THEN
          loc1(k,i,j) = arg1(k,i,j) + . . .
        ENDIF
      END DO
    END DO
  END DO

• Domain dimensions
  – Size of logical domain
  – Used for bdy tests, etc.
• Memory dimensions
  – Used to dimension dummy arguments
  – Do not use for local arrays
• Tile dimensions
  – Local loop ranges
  – Local array dimensions
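A concrete (and entirely hypothetical) instance of this template is sketched below: the routine smooth_tile takes arguments dimensioned with memory dims, declares an I2 local with tile dims, loops over tile dims, and uses the domain dims only for a boundary test. The routine name and the arithmetic are invented; only the dimension discipline follows the template.

! Hypothetical model-layer routine following the WRF template:
! memory dims for dummy args, tile dims for local arrays and loops.
SUBROUTINE smooth_tile ( field, tendency,                 &
                         ids, ide, jds, jde, kds, kde,    &  ! Domain dims
                         ims, ime, jms, jme, kms, kme,    &  ! Memory dims
                         its, ite, jts, jte, kts, kte )      ! Tile dims
  IMPLICIT NONE
  INTEGER, INTENT(IN) :: ids, ide, jds, jde, kds, kde
  INTEGER, INTENT(IN) :: ims, ime, jms, jme, kms, kme
  INTEGER, INTENT(IN) :: its, ite, jts, jte, kts, kte
  REAL, DIMENSION (kms:kme,ims:ime,jms:jme), INTENT(IN)    :: field
  REAL, DIMENSION (kms:kme,ims:ime,jms:jme), INTENT(INOUT) :: tendency

  ! Local (I2) data dimensioned with tile dims only
  REAL, DIMENSION (kts:kte,its:ite,jts:jte) :: loc1
  INTEGER :: i, j, k

  DO j = jts, jte
    DO i = its, ite
      DO k = kts, kte
        IF ( i > ids .AND. i < ide ) THEN                  ! domain dims used only for boundary tests
          loc1(k,i,j) = 0.5 * ( field(k,i-1,j) + field(k,i+1,j) )
          tendency(k,i,j) = tendency(k,i,j) + loc1(k,i,j)
        ENDIF
      END DO
    END DO
  END DO
END SUBROUTINE smooth_tile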

Model domains are decomposed for parallelism on two levels

Patch: section of model domain allocated to a distributed-memory node.
Tile: section of a patch allocated to a shared-memory processor within a node; this is also the scope of a model layer subroutine.

Distributed-memory parallelism is over patches; shared-memory parallelism is over tiles within patches.

• Single version of code for efficient execution on:
  – Distributed-memory
  – Shared-memory
  – Clusters of SMPs
  – Vector and microprocessors

WRF Multi-Layer Domain Decomposition
(figure: logical domain decomposed into patches; 1 patch, divided into multiple tiles)
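To illustrate the tiling level only, here is a minimal sketch that splits a patch's j-range into numtiles contiguous tiles. The splitting rule is an assumption for illustration; WRF's actual tiling lives in the framework (module_tiles / get_tiles) and supports the strategies described later.

! Illustrative only: split a patch's j-range into numtiles tiles.
SUBROUTINE simple_get_tiles ( jps, jpe, numtiles, j_start, j_end )
  IMPLICIT NONE
  INTEGER, INTENT(IN)  :: jps, jpe                        ! patch start/end in j
  INTEGER, INTENT(IN)  :: numtiles
  INTEGER, INTENT(OUT) :: j_start(numtiles), j_end(numtiles)
  INTEGER :: ij, npts, base, rem
  npts = jpe - jps + 1
  base = npts / numtiles                                  ! points per tile
  rem  = MOD( npts, numtiles )                            ! first 'rem' tiles get one extra point
  j_start(1) = jps
  DO ij = 1, numtiles
    j_end(ij) = j_start(ij) + base - 1
    IF ( ij <= rem ) j_end(ij) = j_end(ij) + 1
    IF ( ij < numtiles ) j_start(ij+1) = j_end(ij) + 1
  END DO
END SUBROUTINE simple_get_tiles

PROGRAM test_tiles
  IMPLICIT NONE
  INTEGER :: js(3), je(3), ij
  CALL simple_get_tiles ( 1, 10, 3, js, je )
  DO ij = 1, 3
    PRINT *, 'tile', ij, ':', js(ij), '..', je(ij)        ! 1..4, 5..7, 8..10
  END DO
END PROGRAM test_tiles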

Inter-processor communication

• Halo updates

• Periodic boundary updates

• Parallel transposes

• Interface to code through the mediation layer

Distributed Memory Communications

dyn_eh/solve_eh.F

SUBROUTINE solve_eh ( . . . )
  IMPLICIT NONE
  . . .
  ( code before communication )
#include "HALO_EH_A.incl"
  ( code after communication )
  . . .

(the HALO_EH_A.incl include is generated by the Registry)
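In WRF the communication itself is delegated to the external package (RSL) through these Registry-generated includes. Purely as a conceptual illustration of what a halo update accomplishes, a hand-coded 1-D exchange in plain MPI might look like the following; this is not WRF or RSL code.

! Conceptual 1-D halo exchange with plain MPI; not WRF/RSL code.
! Each rank owns u(1:n) plus one ghost cell on each side (u(0), u(n+1)).
PROGRAM halo_demo
  USE mpi
  IMPLICIT NONE
  INTEGER, PARAMETER :: n = 4
  REAL :: u(0:n+1)
  INTEGER :: ierr, rank, nprocs, left, right
  INTEGER :: status(MPI_STATUS_SIZE)

  CALL MPI_Init ( ierr )
  CALL MPI_Comm_rank ( MPI_COMM_WORLD, rank,   ierr )
  CALL MPI_Comm_size ( MPI_COMM_WORLD, nprocs, ierr )

  left  = rank - 1 ; IF ( left  < 0       ) left  = MPI_PROC_NULL
  right = rank + 1 ; IF ( right >= nprocs ) right = MPI_PROC_NULL

  u = REAL( rank )                                        ! fill owned cells with the rank id

  ! send rightmost owned cell to the right neighbor, receive left ghost cell
  CALL MPI_Sendrecv ( u(n), 1, MPI_REAL, right, 1,   &
                      u(0), 1, MPI_REAL, left,  1,   &
                      MPI_COMM_WORLD, status, ierr )
  ! send leftmost owned cell to the left neighbor, receive right ghost cell
  CALL MPI_Sendrecv ( u(1),   1, MPI_REAL, left,  2, &
                      u(n+1), 1, MPI_REAL, right, 2, &
                      MPI_COMM_WORLD, status, ierr )

  PRINT *, 'rank', rank, 'ghost cells:', u(0), u(n+1)
  CALL MPI_Finalize ( ierr )
END PROGRAM halo_demo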

Shared Memory Parallelism

pseudo code for eh solver (dyn_eh/solve_eh.F)

SUBROUTINE solve_eh ( . . . )
  USE module_tiles
  . . .
  INTEGER, DIMENSION(max_tiles) :: i_start, i_end, j_start, j_end

  CALL get_tiles ( numtiles, i_start, i_end, j_start, j_end )
  . . .
!$OMP PARALLEL DO
  DO ij = 1, numtiles
    its = i_start(ij) ; ite = i_end(ij)
    jts = j_start(ij) ; jte = j_end(ij)
    CALL model_subroutine( arg1, arg2, . . .              &
                           ids, ide, jds, jde, kds, kde,  &
                           ims, ime, jms, jme, kms, kme,  &
                           its, ite, jts, jte, kts, kte )
  END DO
  . . .
END SUBROUTINE
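A compilable toy version of this tile loop is shown below (array, routine, and variable names are invented). One detail the pseudo code leaves implicit: the per-tile bounds set inside the loop must be private to each OpenMP thread.

! Toy OpenMP tile loop in the style of solve_eh; names are invented.
PROGRAM tile_loop_demo
  IMPLICIT NONE
  INTEGER, PARAMETER :: numtiles = 4, n = 100
  INTEGER :: i_start(numtiles), i_end(numtiles)
  REAL    :: field(n)
  INTEGER :: ij, its, ite

  DO ij = 1, numtiles                                     ! trivial even split into tiles
    i_start(ij) = (ij-1)*(n/numtiles) + 1
    i_end(ij)   = ij*(n/numtiles)
  END DO
  field = 0.0

!$OMP PARALLEL DO PRIVATE(ij, its, ite)
  DO ij = 1, numtiles
    its = i_start(ij) ; ite = i_end(ij)                   ! tile bounds, private per thread
    CALL do_tile ( field, n, its, ite )
  END DO
!$OMP END PARALLEL DO

  PRINT *, 'sum =', SUM(field)                            ! expect 100.0
END PROGRAM tile_loop_demo

SUBROUTINE do_tile ( field, n, its, ite )                 ! stands in for a model-layer routine
  IMPLICIT NONE
  INTEGER, INTENT(IN) :: n, its, ite
  REAL, INTENT(INOUT) :: field(n)
  field(its:ite) = 1.0
END SUBROUTINE do_tile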

Controlling parallelism at runtime

• Shared memory: number of tiles
  – 1 tile if not compiled for OpenMP or if there is only one thread available
  – unless: numtiles > 1 in namelist
  – unless: tile size is specified in namelist as tile_sz_x > 0 and tile_sz_y > 0
  – The tiling dimension is specified at compile time in frame/module_machine.F as either TILE_Y, TILE_X, or TILE_XY (this is overridden by tile_sz_x and tile_sz_y)
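As a sketch of these runtime controls, a namelist fragment might look like this. The variables numtiles, tile_sz_x, and tile_sz_y are the ones named on this slide; placing them in the namelist_01 block is only an assumption for illustration.

! Hypothetical namelist fragment; the enclosing block name is assumed.
&namelist_01
 numtiles  = 4,       ! ask for 4 tiles per patch for OpenMP threading
 tile_sz_x = 0,       ! or set explicit tile sizes instead
 tile_sz_y = 0,       ! (nonzero tile sizes override numtiles)
/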

Controlling parallelism at runtime

• Distributed memory: patches
  – The number of patches is always the same as the number of MPI processes, which WRF learns by interrogating the particular comm package (e.g. RSL, which returns the value of MPI_Comm_size)
  – The patching algorithm is built in and chooses a decomposition that is closest to square (should be more controllable)

WRF model data structures

• State data
  – Fields in domain data type, defined in Registry
    • Decomposed 2-D and 3-D arrays; dimensions correspond to physical domain dimensions
    • 4-D "scalar" arrays (e.g. moist); accessible individually or en masse
    • Boundary arrays
    • Misc. un-decomposed arrays and 0-dimensional variables
  – Dimensioned using "memory" dimensions
  – Allocated using F90 ALLOCATE

• I1 (local to solver) data
  – Also defined in Registry but not in domain data type
  – Dimensioned using memory dimensions
  – Automatic allocation (usually on main program stack)

• I2 (local to model layer subroutine) data
  – Defined only in subroutine
  – Defined using "tile" dimensions
  – Automatic allocation (usually on thread-local stacks)
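The three memory classes can be summarized schematically in code. Everything below is a stand-in (names and bounds invented): State data lives in the Registry-generated domain type and is ALLOCATEd over memory dimensions, I1 data is automatic in the solver with memory dimensions, and I2 data is automatic in the model-layer routine with tile dimensions.

! Schematic of the three memory classes (names invented, not WRF source).

! State data: pointer fields in the Registry-generated domain type,
! dimensioned with memory dims and allocated with ALLOCATE, e.g.
!   REAL, DIMENSION(:,:,:), POINTER :: some_state_field
!   ALLOCATE( grid%some_state_field(ims:ime,kms:kme,jms:jme) )

SUBROUTINE solver_sketch ( ims, ime, kms, kme, jms, jme,  &
                           its, ite, kts, kte, jts, jte )
  IMPLICIT NONE
  INTEGER, INTENT(IN) :: ims, ime, kms, kme, jms, jme
  INTEGER, INTENT(IN) :: its, ite, kts, kte, jts, jte

  ! I1 data: local to the solver, memory dims, automatic allocation
  REAL, DIMENSION (ims:ime,kms:kme,jms:jme) :: i1_scratch

  i1_scratch = 0.0
  CALL model_layer_sketch ( its, ite, kts, kte, jts, jte )
END SUBROUTINE solver_sketch

SUBROUTINE model_layer_sketch ( its, ite, kts, kte, jts, jte )
  IMPLICIT NONE
  INTEGER, INTENT(IN) :: its, ite, kts, kte, jts, jte

  ! I2 data: local to the model-layer routine, tile dims, automatic allocation
  REAL, DIMENSION (kts:kte,its:ite,jts:jte) :: i2_local

  i2_local = 0.0
END SUBROUTINE model_layer_sketch

PROGRAM memory_classes_demo
  IMPLICIT NONE
  CALL solver_sketch ( 1, 8, 1, 4, 1, 8,  2, 7, 1, 4, 2, 7 )
  PRINT *, 'ok'
END PROGRAM memory_classes_demo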

WRF Registry

• CASE mechanism for managing complexity of WRF code:
  – Data base (ASCII flat file) of information about code
    • State data and attributes
      – Dimensionality, time levels, core association, other metadata
      – I/O dataset membership
    • Configuration data
    • Packages (including multiple dyncores) with data associations
    • Communication definitions
  – Mechanism for compile-time generation of
    • Data definitions, allocations
    • Driver/mediation layer interfaces
    • I/O mechanisms
    • Communication mechanisms

• WRF 1.2 Registry rewritten
  – Added Abstract Data Types (3DVAR)
  – Additional communication options (e.g. transposes)
  – More general data definition semantics

Registry Data Base

• ASCII flat file: Registry/Registry

• Types of entry:
  – Dimspec – Describes dimensions that are used to define arrays in the model
  – State – Describes state variables and arrays in the domain DDT
  – I1 – Describes local variables and arrays in solve
  – Typedef – Describes derived types that are subtypes of the domain DDT
  – Rconfig – Describes a configuration (e.g. namelist) variable or array
  – Package – Describes attributes of a package (e.g. physics)
  – Halo – Describes halo update interprocessor communications
  – Period – Describes communications for periodic boundary updates
  – Xpose – Describes communications for parallel matrix transposes

• Mechanism in tools directory; program name is "registry"; used by WRF build procedure

Dimspec entry

• Elements
  – Entry: The keyword "dimspec"
  – DimName: The name of the dimension (single character)
  – Order: The order of the dimension in the WRF framework (1, 2, 3, or '-')
  – HowDefined: specification of how the range of the dimension is defined
  – CoordAxis: which axis the dimension corresponds to, if any (X, Y, Z, or C)
  – DatName: metadata name of dimension

• Example

#<Table>  <Dim>  <Order>  <How defined>             <Coord-axis>  <DatName>
dimspec    i      1        standard_domain           x             west_east
dimspec    j      3        standard_domain           y             south_north
dimspec    k      2        standard_domain           z             bottom_top
dimspec    l      2        namelist=num_soil_layers  z             soil_layers

State entry

• Elements
  – Entry: The keyword "state"
  – Type: The type of the state variable or array (real, double, integer, logical, character, or derived)
  – Sym: The symbolic name of the variable or array
  – Dims: A string denoting the dimensionality of the array, or a hyphen (-)
  – Use: A string denoting association with a solver or 4D scalar array, or a hyphen
  – NumTLev: An integer indicating the number of time levels (for arrays) or hyphen (for variables)
  – Stagger: String indicating staggered dimensions of the variable (X, Y, Z, or hyphen for no staggering)
  – IO: String indicating whether and how the variable is subject to I/O
  – DName: Metadata name for the variable
  – Units: Metadata units of the variable
  – Descrip: Metadata description of the variable

• Example

# Type   Sym   Dims   Use      Tlev  Stag  IO   Dname      Descrip
# definition of a 3D, two-time level, staggered state array
state    real  ru     ikj      dyn_eh  2    X    irh  "RHO_U"    "X WIND COMPONENT"

# definition of fields in 4D scalar array "moist"
state    real  qv     ikjft    moist   2    -    irh  "QVAPOR"   "Water vapor mix ratio"
state    real  qc     ikjft    moist   2    -    irh  "QCLOUD"   "Cloud water mixing ratio"

Rconfig entry

• Elements
  – Entry: the keyword "rconfig"
  – Type: the type of the namelist variable (integer, real, logical – no strings yet)
  – Sym: the name of the namelist variable or array
  – How set: indicates how the variable is set, e.g. namelist or derived, and if namelist, which block of the namelist it is set in
  – Nentries: specifies the dimensionality of the namelist variable or array. If 1 (one) it is a variable and applies domain-wide; otherwise specify max_domains (an integer parameter defined in module_driver_constants.F)
  – Default: the default value of the variable to be used if none is specified in the namelist; hyphen (-) for no default

• Example

#         Type     Sym      How set               Nentries  Default
rconfig   integer  dyn_opt  namelist,namelist_01  1         1

Package entry

• Elements
  – Entry: the keyword "package"
  – Package name: the name of the package, e.g. "kesslerscheme"
  – Associated rconfig choice: the name of an rconfig variable and the value of that variable that chooses this package
  – Package state vars: unused at present; specify hyphen (-)
  – Associated 4D scalars: the names of 4D scalar arrays and the fields within those arrays this package uses

• Example

# namelist entry that controls microphysics option
rconfig  integer  mp_physics  namelist,namelist_04  max_domains  0

# specification of microphysics options
package  passiveqv      mp_physics==0  -  moist:qv
package  kesslerscheme  mp_physics==1  -  moist:qv,qc,qr
package  linscheme      mp_physics==2  -  moist:qv,qc,qr,qi,qs,qg
package  ncepcloud3     mp_physics==3  -  moist:qv,qc,qr
package  ncepcloud5     mp_physics==4  -  moist:qv,qc,qr,qi,qs

Comm entries: halo and period

• Elements
  – Entry: keywords "halo" or "period"
  – Commname: name of comm operation
  – Description: defines the halo or period operation
    • For halo: npts:f1,f2,...[;npts:f1,f2,...]*
    • For period: width:f1,f2,...[;width:f1,f2,...]*

• Example

# first exchange in eh solver
halo    HALO_EH_A    24:u_2,v_2,ru_1,ru_2,rv_1,rv_2,w_2,t_2;4:pp,pip

# a periodic boundary update
period  PERIOD_EH_A  2:u_1,u_2,ru_1,ru_2,v_1,v_2,rv_1,rv_2,rw_1,rw_2
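Following the syntax above, a developer adding a new state field that needs its own halo update could define a new halo operation and include the generated communication where it is needed in the solver, mirroring the HALO_EH_A example. The entry and field names below are hypothetical; only the pattern comes from the examples on these slides.

# hypothetical entry in Registry/Registry for a user-added field
halo    HALO_EH_MYVAR   4:my_new_field

! and in the solver (dyn_eh/solve_eh.F), at the point where the update is needed:
#include "HALO_EH_MYVAR.inc"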

Additional Information

• [email protected]
• www.wrf-model.org
• WRF Design and Implementation (draft)
• Tomorrow:
  – How to make changes in WRF code