
REGIONAL AND LOCAL-SCALE EVALUATION OF MM5 METEOROLOGICAL FIELDS FOR VARIOUS AIR QUALITY MODELING APPLICATIONS

1 Jason Brewer, 2* Pat Dolwick, and 3* Rob Gilliam

1 Department of Marine, Earth, and Atmospheric Sciences, North Carolina State University, Raleigh, North Carolina

2 Air Quality Modeling Group, Office of Air Quality Planning and Standards (OAQPS), USEPA, Research Triangle Park, North Carolina

3 Atmospheric Modeling Division, National Exposure Research Laboratory (NERL), USEPA, Research Triangle Park, North Carolina

* On assignment from the Air Resources Laboratory, NOAA

1. Introduction / Background

Prognostic meteorological models are often used in a retrospective mode to provide inputs to the air quality models used for environmental planning. These inputs govern the advection, diffusion, chemical transformation, and eventual deposition of pollutants within regional air quality models such as the Community Multiscale Air Quality (CMAQ) modeling system1 and are being investigated for use in local-scale assessments such as AERMOD2. The air quality models have consistently been subjected to rigorous performance assessments, but in many cases the meteorological inputs to these models are accepted as is, even though this component of the modeling arguably contains more uncertainty, which could significantly affect the results of the analysis3. Before initiating the air quality simulations, it is therefore important to identify the biases and errors associated with the meteorological modeling.

The goal of the meteorological evaluation4 is to move toward an understanding of how the bias and error in the meteorological input data affect the resultant air quality modeling. Typically, there are two specific objectives:

1) determine if the meteorological model output fields represent a reasonable approximation of the actual meteorology that occurred during the modeling period (i.e., the “operational” evaluation), and

2) identify and quantify how the existing biases and errors in the meteorological predictions may affect the air quality modeling results (i.e., the “phenomenological” evaluation).

This analysis examines the performance of the Penn State University / National Center for Atmospheric Research mesoscale model5, known as MM5, for two separate years (2001 and 2002) at two model resolutions (36 km and 12 km). The model evaluation is summarized for the entire domain, for individual subregions within the domain, and for certain individual sites, to assess the suitability of the data to drive regional-scale photochemical models (e.g., CMAQ) versus local-scale dispersion models (e.g., AERMOD). The operational evaluation includes statistical comparisons of model/observed pairs (e.g., bias, index of agreement, root mean square error) for multiple meteorological parameters (e.g., temperature, water vapor mixing ratio, winds). The phenomenological evaluation is based on existing air quality conceptual models; it assesses performance for varied phenomena such as trajectories, low-level jets, frontal passages, and air mass residence time, and it uses a different universe of statistics, such as false alarm rates and probabilities of detection. This poster can show only a small subset of all the completed analyses on which the conclusions are based.
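To make the operational metrics concrete, here is a minimal Python sketch of the paired model/observation statistics named above: mean bias, mean absolute (gross) error, root mean square error, and Willmott's index of agreement. This is not the AMET implementation, and the sample numbers are invented for illustration.

```python
import math

def operational_stats(model, obs):
    """Paired model/observation statistics for one site and one variable."""
    n = len(obs)
    obs_mean = sum(obs) / n
    diffs = [m - o for m, o in zip(model, obs)]
    bias = sum(diffs) / n                               # mean bias
    mae = sum(abs(d) for d in diffs) / n                # mean absolute (gross) error
    rmse = math.sqrt(sum(d * d for d in diffs) / n)     # root mean square error
    # Willmott's index of agreement: 1 = perfect agreement, 0 = none
    denom = sum((abs(m - obs_mean) + abs(o - obs_mean)) ** 2
                for m, o in zip(model, obs))
    ioa = 1.0 - sum(d * d for d in diffs) / denom
    return {"bias": bias, "error": mae, "rmse": rmse, "ioa": ioa}

# Invented hourly 2-m temperatures (K): MM5 predictions vs. NWS observations
mm5 = [271.2, 272.0, 273.5, 275.1, 276.0]
nws = [273.0, 273.4, 274.2, 275.0, 275.6]
print(operational_stats(mm5, nws))  # negative bias illustrates a "cold bias"
```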

2. 2001 and 2002 MM5 Configuration

Model version:

2001 (36): 3.6.1

2001 (12): 3.6.3 (with minor fixes to KF2 and Reisner 2)

2002 (36): 3.6.3

2002 (12): 3.7.2

Domain Size:

2001/2002 (36): 165 * 129 * 34

2001/2002 (12): 290 * 251 * 34

Major Physics Options:

Radiation: RRTM Long-wave Radiation

Cumulus Parameterization: Kain-Fritsch 1 (2001 36 km run only); Kain-Fritsch 2 (all other runs)

Microphysics: Reisner 2 (2001), Reisner 1 (2002)

Land Surface Model / PBL Scheme: Pleim-Xiu / Asymmetric Convective Model (ACM1)

Analysis Nudging (12 km; coefficients in s^-1; e-folding equivalents are sketched below this list):

winds (aloft): 1.0E-4; winds (surface): 1.0E-4

temperature (aloft): 1.0E-4; temperature (surface): N/A

moisture (aloft): 1.0E-5; moisture (surface): N/A

Run Durations: 5.5-day individual runs, within 7 two-month simulations

Evaluation software: Atmospheric Model Evaluation Tool (AMET)
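For orientation, each analysis-nudging coefficient G (s^-1) implies an e-folding timescale of 1/G over which the model state is relaxed toward the analyses. The short sketch below simply converts the coefficients listed above; the conversion itself is standard arithmetic, not part of the poster.

```python
# Convert nudging coefficients (s^-1) to e-folding relaxation times (hours).
coefficients = {
    "winds (aloft and surface)": 1.0e-4,
    "temperature (aloft)": 1.0e-4,
    "moisture (aloft)": 1.0e-5,
}
for field, g in coefficients.items():
    tau_h = (1.0 / g) / 3600.0  # e-folding time, seconds -> hours
    print(f"{field}: G = {g:.0e} s^-1 -> tau = {tau_h:.1f} h")
# winds/temperature: ~2.8 h; moisture: ~27.8 h (weaker nudging)
```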

3. Sample Regional-Scale Operational Evaluation (2002 12km MM5)

Mean error and mean bias for 2-meter temperature (T, K), mixing ratio (Q, g/kg), wind speed (WS, m/s), and wind direction (WD, deg).

By month:

                 Mean Error              Mean Bias
               T    Q    WS    WD      T     Q    WS    WD
January       3.0  0.6  1.3  24.1    -2.4  -0.2  -0.3   7.4
February      2.5  0.6  1.3  23.6    -1.3  -0.2  -0.4   7.8
March         2.7  0.7  1.4  24.9    -1.5  -0.1  -0.7   7.5
April         2.3  0.9  1.3  24.9    -1.5  -0.1  -0.4   6.2
May           1.8  1.0  1.3  25.8    -0.5  -0.1  -0.3   6.8
June          1.5  1.2  1.2  29.2    -0.5  -0.2  -0.4   6.8
July          1.5  1.4  1.1  31.4    -0.1  -0.1  -0.4   7.8
August        1.5  1.3  1.1  30.5    -0.1  -0.1  -0.4   7.1
September     1.5  1.1  1.1  28.1     0.0  -0.3  -0.3   7.8
October       1.6  0.8  1.1  26.7     0.4  -0.2  -0.2   8.3
November      2.0  0.7  1.2  24.3    -0.2   0.0  -0.2   8.4
December      2.5  0.6  1.3  24.5    -0.6   0.1  -0.5   8.1

By subregion:

                 Mean Error              Mean Bias
               T    Q    WS    WD      T     Q    WS    WD
Florida       1.5  1.3  1.2  29.7    -0.5  -0.1  -0.4   5.9
Mid-Atlantic  1.4  1.2  1.0  30.5    -0.1  -0.1  -0.2  11.5
Northeast     1.6  1.1  1.1  32.0    -0.2  -0.1  -0.2  18.3
South         1.5  1.4  1.1  24.4    -0.3  -0.3  -0.4   1.6
Great Lakes   1.5  1.1  1.0  26.7     0.0   0.1  -0.2   8.9
Midwest       1.6  1.2  1.2  23.8    -0.1  -0.2  -0.4   2.0
Lower Rockies 2.2  1.3  1.7  43.7    -1.1  -0.2  -1.0   0.1
Upper Rockies 2.0  1.1  1.6  40.6    -0.7  -0.3  -0.9  -0.5

By land use:

                 Mean Error              Mean Bias
               T    Q    WS    WD      T     Q    WS    WD
Urban         1.5  1.2  1.2  26.0    -0.5  -0.3  -0.6   6.0
Agricultural  1.5  1.2  1.1  25.8    -0.1  -0.1  -0.3   6.6
Plains        1.9  1.3  1.5  30.9    -0.7  -0.3  -0.8  -0.4
Forest        1.6  1.2  1.1  33.1    -0.1  -0.1  -0.2  12.6
Desert        1.5  1.4  1.1  25.4     0.2  -0.4  -0.3   2.2
Water         1.4  1.2  1.4  28.7    -0.2  -0.2   0.2  11.9

By terrain:

                 Mean Error              Mean Bias
               T    Q    WS    WD      T     Q    WS    WD
Coastal       1.5  1.2  1.2  28.1    -0.4  -0.2  -0.2   8.7
Inland        1.5  1.2  1.0  28.3    -0.1  -0.1  -0.2   9.1
Mountains     1.8  1.2  1.4  30.8    -0.5  -0.2  -0.7   2.6
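A caveat on the wind direction (WD) statistics above: model-minus-observed direction differences must be wrapped onto the circle before averaging, or a 355-degree observation paired with a 5-degree prediction would count as a 350-degree miss. A minimal sketch, with invented pairs:

```python
def wd_difference(model_deg, obs_deg):
    """Model-minus-observed wind direction wrapped into [-180, 180] degrees."""
    d = (model_deg - obs_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

pairs = [(5.0, 355.0), (180.0, 170.0), (90.0, 120.0)]  # (model, observed)
diffs = [wd_difference(m, o) for m, o in pairs]        # [10.0, 10.0, -30.0]
bias = sum(diffs) / len(diffs)                         # mean bias: -3.3 deg
error = sum(abs(d) for d in diffs) / len(diffs)        # mean error: 16.7 deg
print(bias, error)
```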


4. Sample Local-Scale Operational Evaluation Results (Birmingham (BHM), Detroit (DET), Washington, DC (DCA), and Seattle (SEA))

[Figure: MM5 vs. NWS temperature; error (K) vs. bias (K), by quarter, at DET, BHM, and DCA (2001 12 km MM5)]

[Figure: MM5 vs. NWS mixing ratio; error (g/kg) vs. bias (g/kg), by quarter, at DET, BHM, and DCA (2001 12 km MM5)]

[Figure: MM5 vs. NWS wind speed; error (m/s) vs. bias (m/s), by quarter, at DET, BHM, and DCA (2001 12 km MM5)]

[Figure: MM5 vs. NWS wind direction; error (deg) vs. bias (deg), by quarter, at DET, BHM, and DCA (2001 12 km MM5)]

[Figure: MM5 vs. NWS temperature; error (K) vs. bias (K), by quarter, at DET, BHM, and SEA (2001 36 km MM5)]

[Figure: MM5 vs. NWS mixing ratio; error (g/kg) vs. bias (g/kg), by quarter, at DET, BHM, and SEA (2001 36 km MM5)]

[Figure: MM5 vs. NWS wind speed; error (m/s) vs. bias (m/s), by quarter, at DET, BHM, and SEA (2001 36 km MM5)]

[Figure: MM5 vs. NWS wind direction; error (deg) vs. bias (deg), by quarter, at DET, BHM, and SEA (2001 36 km MM5)]

[Figure: MM5 vs. NWS temperature; error (K) vs. bias (K), by quarter, at DET, BHM, and DCA (2002 12 km MM5)]

[Figure: MM5 vs. NWS mixing ratio; error (g/kg) vs. bias (g/kg), by quarter, at DET, BHM, and DCA (2002 12 km MM5)]

[Figure: MM5 vs. NWS wind speed; error (m/s) vs. bias (m/s), by quarter, at DET, BHM, and DCA (2002 12 km MM5)]

[Figure: MM5 vs. NWS wind direction; error (deg) vs. bias (deg), by quarter, at DET, BHM, and DCA (2002 12 km MM5)]

[Figure: MM5 vs. NWS mixing ratio; error (g/kg) vs. bias (g/kg), by quarter, at DET, BHM, and SEA (2002 36 km MM5)]

[Figure: MM5 vs. NWS wind speed; error (m/s) vs. bias (m/s), by quarter, at DET, BHM, and SEA (2002 36 km MM5)]

[Figure: MM5 vs. NWS wind direction; error (deg) vs. bias (deg), by quarter, at DET, BHM, and SEA (2002 36 km MM5)]




[Figure: MM5 vs. NWS temperature; error (K) vs. bias (K), by quarter, at DET, BHM, and SEA (2002 36 km MM5)]

5. Sample Phenomenological Evaluation Results (2002 12km MM5)

a) Observed vs. Modeled Precipitation (May 2002)

b) Seasonally-averaged vertical profiles: Model vs. Obs (GSO)

c) Detailed temperature performance

d) Diurnal temperature performance (winter and summer)
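The event-based scores used in the phenomenological evaluation (probability of detection, false alarm rate) come from a 2x2 contingency table of predicted versus observed events, such as frontal passages or measurable precipitation. A minimal sketch with invented counts; the poster does not specify which false-alarm formulation it uses, so the common hits-based ratio is shown here:

```python
# Invented contingency-table counts for one phenomenon (e.g., frontal passage)
hits = 42          # event predicted and observed
false_alarms = 13  # event predicted but not observed
misses = 9         # event observed but not predicted

pod = hits / (hits + misses)                  # probability of detection: 0.82
far = false_alarms / (hits + false_alarms)    # false alarm ratio: 0.24
csi = hits / (hits + misses + false_alarms)   # critical success index: 0.66
print(f"POD = {pod:.2f}, FAR = {far:.2f}, CSI = {csi:.2f}")
```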

6. “Key Site” Evaluation Results

Northern Indiana – August 3, 2002 (12 km MM5)

Cincinnati, OH – Summer 2002 (12km MM5)

Note: 2-meter temperature (T), mixing ratio (Q), wind speed (WS), and wind direction (WD) are in units of K, g/kg, m/s, and degrees, respectively.

7. Conclusions

All four sets of meteorological model output fields represent a reasonable approximation of the actual meteorology that occurred during the modeling period (see panel 3). Qualitative comparisons of synoptic patterns (not shown) indicate that the model captures large-scale features such as high pressure domes and upper-level troughs.

The most troublesome aspect of meteorological model performance is the surface temperature "cold bias" during the winter, especially in January. Across the four MM5 simulations, the January cold bias typically averaged around 2-3 deg C, and the effect is largest overnight (panel 5d). The resulting tendency is to overestimate stability in the lowest layers, which could have a significant impact on the air quality results because pollutants emitted at the surface may not be properly mixed.

Generally, bias and error do not appear to be a function of region; however, individual model/observation comparisons in space and time can show large deviations. Caution should be exercised when using these meteorological data for air quality modeling in the Rocky Mountain region, where the model errors and biases are much larger than in the other regions analyzed (see panel 3).

Care will also have to be exercised when using these MM5 results at the local scale. When averaged regionally, there is little to no bias in wind direction, but as shown in panel 4, local variances can be considerably higher. Users of Gaussian plume based models should scrutinize the MM5 performance closely over their areas of interest.

The model is generally unbiased for precipitation at large scales (panel 5a), though the 2001 results appear to match the observations better than 2002, perhaps indicating that use of the Reisner 2 microphysics scheme was justified.

The "key site" analysis shown in panel 6 examined MM5 performance over a specific ozone event in the Ohio Valley. These evaluations can be time-consuming but are important for identifying appropriate modeling episodes.

This evaluation is not entirely complete. We would like to do further analysis of cloud cover and PBL heights, as well as model performance as a function of meteorological regime (clusters) and wind field comparisons against trajectory models.

Note: All four of these data sets are available. If you are interested in acquiring the data, please e-mail Pat Dolwick ([email protected]). The transfer process requires the user to provide USB drives.

Acknowledgements / References

The MM5 runs evaluated as part of this study were completed by Alpine Geophysics (2001 simulations) and Computer Sciences Corporation (2002 simulations). The authors would like to thank Dennis McNally, Lara Reynolds, and Allan Huffman for the effort they put into completing the meteorological modeling.

1 Byun, D.W., and K.L. Schere, 2006: Review of the Governing Equations, Computational Algorithms, and Other Components of the Models-3 Community Multiscale Air Quality (CMAQ) Modeling System. Applied Mechanics Reviews, 59(2), 51-77.

2 U.S. Environmental Protection Agency, 2004: User's Guide for the AMS/EPA Regulatory Model AERMOD. EPA-454/B-03-001, September 2004.

3 Tesche, T.W., D.E. McNally, and C. Tremback, 2002: Operational Evaluation of the MM5 Meteorological Model over the Continental United States: Protocol for Annual and Episodic Evaluation. Submitted to USEPA as part of Task Order 4TCG-68027015, July 2002.

4 U.S. Environmental Protection Agency, 2006: Guidance on the Use of Models and Other Analyses for Demonstrating Attainment of Air Quality Goals for Ozone, PM2.5, and Regional Haze. Draft 3.2, September 2006.

5 Grell, G.A., J. Dudhia, and D.R. Stauffer, 1994: A Description of the Fifth-Generation Penn State/NCAR Mesoscale Model (MM5). NCAR/TN-398+STR, 138 pp.