
Probability Forecasts from Ensembles and their Application at the SPC

David Bright
NOAA/NWS/Storm Prediction Center

Norman, OK

AMS Short Course on Probabilistic Forecasting

January 9, 2005
San Diego, CA

Where America's Climate and Weather Services Begin

Outline

• Motivation for ensemble forecasting

• Ensemble products and applications
  – Emphasis on probabilistic products

• Ensemble calibration (verification)

• Decision making using ensembles


• Daily weather forecasts begin as an initial-value problem on large supercomputers

• To produce a skillful weather forecast requires:
  – An accurate initial state of the atmosphere to begin the model forecast
  – Computer models that realistically represent the evolution of the atmosphere (in a timely manner)

• "With a reasonably accurate initial analysis of the atmosphere, the state of the atmosphere at any subsequent time can be determined by a super-mathematician." (Bjerknes 1919)

Example: Determinism

60h Eta Forecast valid 00 UTC 27 Dec 2004
PMSL (solid); 10 m Wind; 1000-500 mb thickness (dashed)

Precip amount (in) and type (blue=snow; green=rain)

Example: Determinism

60h Eta Forecast valid 00 UTC 27 Dec 2004

“Truth” 00 UTC 27 Dec 2004

Example: Determinism

60h Eta Forecast valid 00 UTC 27 Dec 2004

“Truth” 00 UTC 27 Dec 2004

• Ignores forecast uncertainty
• Potentially misleading
• Oversells forecast capability

Example: Determinism

Ensemble forecasting can be traced back to the discovery of the "Butterfly Effect" (Lorenz 1963, 1965)…

The atmosphere is a non-linear, non-periodic dynamical system in which even tiny errors grow upscale, resulting in forecast uncertainty and eventually chaos

The Butterfly Effect

• Discovery of the “butterfly effect” (Lorenz 1963)

• Simplified climate model… When the integration was restarted with 3 (vs 6) digit accuracy, everything was going fine until…

• Solutions began to diverge

The Butterfly Effect

• Soon, two "similar" but clearly distinct solutions

The Butterfly Effect

• Eventually, results revealed two uncorrelated and completely different solutions (i.e., chaos)

The Butterfly Effect

• Ensembles can be used to provide information on forecast uncertainty

• Information from the ensemble typically consists of…

(1) Mean
(2) Spread
(3) Probability

Ensembles useful in this range (where solutions diverge but before chaos)!

The Butterfly Effect

• Ensembles extend predictability…

• A deterministic solution is no longer skillful when its error variance exceeds climatic variance
• An ensemble remains skillful until error saturation (i.e., until chaos occurs)

Ensembles extend predictability; they are especially useful in this range!

The Butterfly Effect

• NWP models…
  – Doubling time of small initial errors: ~1 to 2 days
  – Maximum large-scale (synoptic to planetary) predictability: ~10 to 14 days

It’s hard to get it right the first time!

Example: Synoptic Scale Variability
7 day forecast – NCEP MREF 500 MB Height

[Panels: GFS "Control" forecast; GFS -12h "Control"; GFS perturbation members; European model]

• Reveals forecast uncertainty, e.g., SE U.S. precip
• Sensible weather is often mesoscale dominated

Example: Mesoscale Variability
1.5 day forecast – NCEP SREF Precipitation

Error Growth with Time: GFS
[Chart: RMSE (20 to 120 m) of 500 mb height (Dec. 2004; greater U.S. area) vs. forecast day (1-12), for the GFS and ensemble means, with climate SD and 1.41 x climate SD reference lines. Limit of deterministic skill ~7.5 days; limit of ensemble skill ~10.5 days]

Ensembles vs. Determinism
[Schematic contrasting a single deterministic forecast with an ensemble of forecasts]

Evaluating Weather Forecasts

Outline

• Motivation for ensemble forecasting

• Ensemble products and applications
  – Emphasis on probabilistic products
• Ensemble calibration (verification)

• Decision making using ensembles

SPC Approach to Ensembles

• Develop customized products based on a particular application (severe, fire wx, etc.)

• Design operational guidance products that…
  – Help blend deterministic and ensemble approaches
  – Facilitate transition toward probabilistic thinking
  – Aid in critical decision making
    • Increase confidence
    • Alert for rare but significant events

F15 SREF MEAN: 500 MB HGHT, TEMP, WIND

Ensemble Means

Synoptic-Statistical Relationships: Mean + Spread

• Simple relationships between dispersion patterns and their synoptic interpretation can be defined

• Ensemble statistics give a quick overview of the range of possible weather situations

[Schematics: uncertainty in wave amplitude vs. uncertainty in feature location]

F15 SREF MEAN/SD: 500 MB HGHT

Ensemble Mean + Spread

F000 F048

F096 F144

500 mb Mean Height (solid) and Standard Deviation (dashed/filled)

Increased spread → less predictability → less forecast confidence

Ensemble Mean + Spread

F000 F048

F096 F144

500 mb Mean Height and Normalized Variance
Normalize the ensemble variance by climatic variance

Values approaching 2 (dark color fill) => ensemble variance saturated relative to climatology

Ensemble Mean + Normalized Spread
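The normalized spread above is a simple calculation. Here is a minimal sketch in Python/NumPy, assuming hypothetical arrays of member 500 mb heights and a climatological variance field; the operational product details may differ:

```python
import numpy as np

def normalized_variance(members, climo_var):
    """Ensemble variance divided by climatological variance at each
    grid point. Per the slide above, values approaching 2 indicate
    ensemble variance that has saturated relative to climatology.

    members   : array (n_members, ny, nx) of, e.g., 500 mb heights
    climo_var : array (ny, nx) of climatological height variance
    """
    return members.var(axis=0, ddof=1) / climo_var
```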

F000 F048

F096 F144

500 mb Member Height “Spaghetti” - 5640 meter contour

Another way to view uncertainty: Spaghetti

F63 SREF POSTAGE STAMP VIEW: PMSL, HURRICANE FRANCES

Red = EtaBMJ
Yellow = EtaKF
Blue = RSM
White = OpEta

SREF Member

F15 SREF MEDIAN/RANGE: CAPE

At least 1 member has >= 500 J/kg
All 16 members have >= 500 J/kg CAPE

Median

Spatial Variability: Median + Range

• Arithmetic mean…
  – Easy to compute and understand
  – Tends to increase coverage of light pcpn and decrease max values

3-hr Total Pcpn NCEP SREF
F63 Valid 09 Oct 2003 00 UTC

Ways to view central value: Mean

• Median…
  – If the majority of members do not produce precip, the median shows large areas of no precip; thus, it is often limited in areal extent

3-hr Total Pcpn NCEP SREF

Ways to view central value: Median

• The blending of two PDFs, when one provides better spatial representation [e.g., ensemble mean QPF] and the other greater accuracy [e.g., QPF from all members]. See Ebert (MWR 2001) for more info.

Rank (Ens Mean)    Rank (Member QPF)
1                  1
2                  16
3                  32

Ways to view central value: Probability Matching

• Probability matching…
  – Ebert (2001): found to be the best ensemble-averaged QPF
  – Max values restored; pattern from ens mean

3-hr Total Pcpn NCEP SREF

Ways to view central value: Probability Matching
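A minimal sketch of probability matching as described above (after Ebert 2001): the spatial pattern comes from the ensemble mean, while the amplitudes are restored from the pooled member QPF, thinned to every N-th ranked value for N members. Array names are hypothetical, and the rank convention here differs from the slide's table by one (ranks 1, 17, 33, … rather than 1, 16, 32).

```python
import numpy as np

def probability_match(ens_mean, member_qpf, n_members):
    """Probability-matched QPF: keep the ensemble-mean pattern but
    restore the amplitude distribution of the pooled member QPF."""
    n = ens_mean.size
    # Pool all member values, sort largest first, and keep every
    # n_members-th value so the pooled sample matches the grid size.
    pooled = np.sort(member_qpf.ravel())[::-1][::n_members][:n]
    # Assign the k-th largest pooled value to the grid point holding
    # the k-th largest ensemble-mean value.
    matched = np.empty(n)
    order = np.argsort(ens_mean.ravel())[::-1]
    matched[order] = pooled
    return matched.reshape(ens_mean.shape)
```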

Probability 144h 2 meter Td <= 25 degF

Probabilistic Output of Basic Products: 2 m Dewpoint

Probability 144h Haines Index > 5

Probabilistic Output of Derived Products: Haines Index

Probability Convective Pcpn > .01"

Prob Conv Pcpn > .01"
Valid 00 UTC 20 Sept 2003

Probability Convective Pcpn > .01"

Pcpn probs due to physics - No EtaBMJ members?!

Red = EtaBMJ
Yellow = EtaKF
Blue = RSM

Spaghetti: Different physics

Note clustering by model

Spaghetti: Outliers

F39 SREF SPAGHETTI (1000 J/KG)

Red = EtaBMJ
Yellow = EtaKF
Blue = RSM
White solid = 12 KM OpEta (12 UTC)

12 UTC operational Eta clearly an outlier from 09 UTC SREF
  – Is this the result of ICs or resolution?
  – Is this a better fcst (updated info) or an outlier?

F15 SREF MINIMUM: 2 METER RH

F15 SREF MAXIMUM: FOSBERG FIREWX INDEX

• Any member can contribute to the max or min value at a grid point

Extreme Values

Combined Probability Charts

• Probability surface CAPE >= 1000 J/kg
  – Generally low in this case
  – Ensemble mean < 1000 J/kg (no gold dashed line)

CAPE (J/kg): Green solid = percent of members >= 1000 J/kg; shading >= 50%
Gold dashed = ensemble mean (1000 J/kg)
F036: Valid 21 UTC 28 May 2003

• Probability deep layer shear >= 30 kts
  – Strong mid-level jet through Iowa

10 m – 6 km Shear (kts): Green solid = percent of members >= 30 kts; shading >= 50%
Gold dashed = ensemble mean (30 kts)
F036: Valid 21 UTC 28 May 2003

Combined Probability Charts

• Convection likely WI/IL/IN
  – Will the convection become severe?

3 Hour Convective Precipitation >= 0.01 (in): Green solid = percent of members >= 0.01 in; shading >= 50%
Gold dashed = ensemble mean (0.01 in)
F036: Valid 21 UTC 28 May 2003

Combined Probability Charts

• Combined probabilities very useful

• Quick way to determine juxtaposition of key parameters

• Not a true probability
  – Not independent
  – Different members contribute

Prob Cape >= 1000 X Prob Shear >= 30 kts X Prob Conv Pcpn >= .01"
F036: Valid 21 UTC 28 May 2003

Combined Probability Charts
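A minimal sketch of how such a combined chart can be computed, using hypothetical stand-in fields. As the slide stresses, the product of the three exceedance frequencies is guidance for the juxtaposition of ingredients, not a true joint probability:

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, ny, nx = 16, 50, 60  # hypothetical SREF-like grid

# Hypothetical member fields standing in for real SREF output
cape = rng.gamma(2.0, 600.0, (n_members, ny, nx))       # J/kg
shear = rng.normal(25.0, 10.0, (n_members, ny, nx))     # kt
conv_pcpn = rng.exponential(0.02, (n_members, ny, nx))  # in

def exceedance_prob(members, threshold):
    """Fraction of members >= threshold at each grid point
    (an uncalibrated ensemble relative frequency)."""
    return (members >= threshold).mean(axis=0)

# Combined chart: product of the three exceedance probabilities.
# The ingredients are not independent, so this only highlights
# where all three overlap.
p_combined = (exceedance_prob(cape, 1000.0)
              * exceedance_prob(shear, 30.0)
              * exceedance_prob(conv_pcpn, 0.01))
```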

Severe Reports: Red = Tor; Blue = Wind; Green = Hail

Prob Cape >= 1000 X Prob Shear >= 30 kts X Prob Conv Pcpn >= .01"
F036: Valid 21 UTC 28 May 2003

• Combined probabilities a quick way to determine juxtaposition of key parameters

• Not a true probability

  – Not independent
  – Different members contribute

• Fosters an ingredients-based approach on-the-fly

Combined Probability Charts

F15 SREF PROBABILITY: P12I x RH x WIND x TMPF (< .01" x < 10% x > 30 mph x > 60 F)

Ingredients for extreme fire weather conditions over the Great Basin

Combined or Joint Probabilities
  – Not a true probability
  – An ingredients-based, probabilistic approach
  – Useful for identifying key areas

Combined Probability Charts

F15 SREF PROBABILITY: TPCP x RH x WIND x TMPF (< .01" x < 10% x > 30 mph x > 60 F)

Ingredients for extreme fire weather conditions over the Great Basin

Combined Probability Charts

Elevated Instability – General Thunder
NCEP SREF 30 Sept 2003 09 UTC F12

Mean MUCAPE/CIN (Sfc to 500 mb AGL) Mean LPL (Sfc to 500 mb AGL)

Parcel Equilibrium Level
NCEP SREF 30 Sept 2003 09 UTC F12

Mean Temp (degC) MUEquilLvl (Sfc to 500 mb AGL)

Prob Temp MUEquilLvl < -20 degC (Sfc to 500 mb AGL)

Lightning Verification

Gridded Lightning Strikes 18-21 UTC 30 Sept 2003(40 km grid boxes)

Microphysical Example
Probability cloud top temps > -8 degC | Probability cloud top temps < -12 degC

Ice Crystals Unlikely Ice Crystals Likely

NCEP SREF 7 Oct 2003 21 UTC F15

Mode

Most Common Precip Type (Snow = Blue); Mean Precip (in); Mean 32o F Isotherm

F015 SREF Valid: 00 UTC 21 December 2004

Probability Dendritic Layer > 50 mb

F015 SREF Valid: 00 UTC 21 December 2004

F015 SREF Valid: 00 UTC 21 December 2004

Probability of Banded Precipitation Potential

Probability MPV < .05 PVU (saturated; 900 to 650 mb layer) x Probability Deep Layer FG > 1

Probability Omega <= -3 microbar/s

F015 SREF Valid: 00 UTC 21 December 2004

Probability 6h Precip >= .25"

F015 SREF Valid: 00 UTC 21 December 2004

Outline

• Motivation for ensemble forecasting

• Ensemble products and applications
  – Emphasis on probabilistic products
• Ensemble calibration (verification)
• Decision making using ensembles

Combine Thunderstorm Ingredients into Single Parameter

• Three first-order ingredients (readily available from NWP models):
  – Lifting condensation level > -10 degC
  – Sufficient CAPE in the 0 to -20 degC layer
  – Equilibrium level temperature < -20 degC

• Cloud Physics Thunder Parameter (CPTP):

  CPTP = [(-19 degC - T_EL)(CAPE_-20 - K)] / K

  where K = 100 J kg^-1, T_EL is the equilibrium level temperature, and CAPE_-20 is MUCAPE in the 0 degC to -20 degC layer
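A direct transcription of the slide's formula into code (a sketch, with units as stated above):

```python
def cptp(t_el_degc, cape_0_to_m20):
    """Cloud Physics Thunder Parameter from the formula above:
    CPTP = (-19 degC - T_EL) * (CAPE_-20 - K) / K, with K = 100 J/kg.

    t_el_degc     : equilibrium level temperature (deg C)
    cape_0_to_m20 : MUCAPE in the 0 degC to -20 degC layer (J/kg)
    """
    K = 100.0  # J/kg
    return (-19.0 - t_el_degc) * (cape_0_to_m20 - K) / K

# Hypothetical sounding: an EL of -45 degC with 300 J/kg of
# mixed-phase CAPE gives CPTP = 26 * 200 / 100 = 52, well above 1.
print(cptp(t_el_degc=-45.0, cape_0_to_m20=300.0))  # 52.0
```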

Example CPTP: One Member

18h Eta Forecast Valid 03 UTC 4 June 2003

Plan view chart showing where grid point soundings support lightning (given a convective updraft)

SREF Probability CPTP > 1

15h Forecast Ending: 00 UTC 01 Sept 2004
Uncalibrated probability: Solid/Filled; Mean CPTP = 1 (Thick dashed)

3 hr valid period: 21 UTC 31 Aug to 00 UTC 01 Sept 2004

SREF Probability Precip > .01"

15h Forecast Ending: 00 UTC 01 Sept 2004
Uncalibrated probability: Solid/Filled; Mean precip = 0.01" (Thick dashed)

3 hr valid period: 21 UTC 31 Aug to 00 UTC 01 Sept 2004

Joint Probability (Assumed Independence)

15h Forecast Ending: 00 UTC 01 Sept 2004
Uncalibrated probability: Solid/Filled

P(CPTP > 1) x P(Precip > .01")
3 hr valid period: 21 UTC 31 Aug to 00 UTC 01 Sept 2004

Uncalibrated Reliability (5 Aug to 5 Nov 2004)
[Reliability diagram for P(CPTP > 1) x P(P03I > .01"), forecast frequency bins 0%, 5%, …, 100%, with perfect-forecast, no-skill, and climatology reference lines]

Adjusting Probabilities

1) Calibrate based on the observed frequency of occurrence

– Very useful, but may not provide information concerning rare or extreme (i.e., low probability) events

2) Use statistical techniques to estimate probabilities in the tails of the distribution (e.g., Hamill and Colucci 1998; Stensrud and Yussouf 2003)

Ensemble Calibration
1) Bin separately P(CPTP > 1) and P(P03M > 0.01") into 11 bins (0-5%; 5-15%; …; 85-95%; 95-100%)
2) Combine the two binned probabilistic forecasts into one of 121 possible combinations: (0%, 0%); (0%, 10%); … (100%, 100%)
3) Use NLDN CG data over the previous 60 days to calculate the frequency of occurrence of CG strikes for each of the 121 binned combinations
4) Bin ensemble forecasts as described in steps 1 and 2 and assign the observed CG frequency (step 3) as the calibrated probability of a CG strike (see the sketch after this list)
5) Calibration is performed for each forecast cycle (09 and 21 UTC) and each forecast hour; the domain is the entire U.S. on a 40 km grid
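A minimal sketch of the binning-and-lookup calibration in steps 1-4, with hypothetical 1-D array inputs; the operational bookkeeping in step 5 (per cycle, per forecast hour, 40 km grid) is omitted:

```python
import numpy as np

# Edges of the 11 probability bins: 0-5%, 5-15%, ..., 85-95%, 95-100%
BIN_EDGES = np.array([0, 5, 15, 25, 35, 45, 55, 65, 75, 85, 95, 100])

def prob_bin(p_percent):
    """Map probabilities (in percent) to bin indices 0..10."""
    return np.clip(np.digitize(p_percent, BIN_EDGES[1:-1]), 0, 10)

def build_lookup(p_cptp, p_precip, cg_observed):
    """Observed CG frequency for each of the 121 bin combinations,
    from ~60 days of forecasts and NLDN observations (cg_observed is
    1 where a CG strike occurred in the grid box, else 0)."""
    hits = np.zeros((11, 11))
    counts = np.zeros((11, 11))
    for i, j, obs in zip(prob_bin(p_cptp), prob_bin(p_precip), cg_observed):
        hits[i, j] += obs
        counts[i, j] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(counts > 0, hits / counts, np.nan)

def calibrate(p_cptp, p_precip, lookup):
    """Calibrated thunder probability for new forecasts: the observed
    frequency of the matching bin combination."""
    return lookup[prob_bin(p_cptp), prob_bin(p_precip)]
```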

Before Calibration

Joint Probability (Assumed Independence)

P(CPTP > 1) x P(Precip > .01")
3 hr valid period: 21 UTC 31 Aug to 00 UTC 01 Sept 2004
15h Forecast Ending: 00 UTC 01 Sept 2004
Uncorrected probability: Solid/Filled

After Calibration

Calibrated Ensemble Thunder Probability

15h Forecast Ending: 00 UTC 01 Sept 2004
Calibrated probability: Solid/Filled

3 hr valid period: 21 UTC 31 Aug to 00 UTC 01 Sept 2004

Calibrated Ensemble Thunder Probability

15h Forecast Ending: 00 UTC 01 Sept 2004
Calibrated probability: Solid/Filled; NLDN CG Strikes (Yellow +)

3 hr valid period: 21 UTC 31 Aug to 00 UTC 01 Sept 2004

Calibrated Reliability (5 Aug to 5 Nov 2004)
[Reliability diagram for the calibrated thunder probability, forecast frequency bins 0%, 5%, …, 100%, with perfect-forecast, no-skill, and climatology reference lines]

Adjusting Probabilities

1) Calibrate based on the observed frequency of occurrence

– Very useful, but may not provide information concerning extreme (i.e., low probability) events

2) Use statistical techniques to estimate probabilities in the “tails” of the distribution (e.g., Hamill and Colucci 1998; Stensrud and Yussouf 2003)

• Consider 2 meter temperature prediction from NCEP SREF
  – Construct a "rank histogram" of the ensemble members (also called a Talagrand diagram); a sketch follows below
    • Rank individual members from lowest to highest
    • Find the verifying rank position of "truth" (RUC 2 meter analysis temperature)
    • Record the frequency with which truth falls in that position (for a 15 member ensemble there are 16 rank positions)

Adjusting Probabilities
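As noted above, here is a minimal sketch of building a rank histogram, using synthetic stand-in data (a real application would loop over grid points and verification times against the RUC analysis; ties between truth and a member are ignored here):

```python
import numpy as np

def rank_histogram(members, truth):
    """Rank histogram (Talagrand diagram) counts.

    members : array (n_members, n_cases) of ensemble forecasts
    truth   : array (n_cases,) of verifying analyses
    Returns counts over n_members + 1 rank positions; a flat
    histogram indicates a well-dispersed, unbiased ensemble.
    """
    n_members = members.shape[0]
    # Rank of truth = number of members below the verifying value
    ranks = (members < truth).sum(axis=0)
    return np.bincount(ranks, minlength=n_members + 1)

# Synthetic demo: 15 members, 1000 cases; "truth" is warmer and more
# variable than the members, so counts pile up at the extreme ranks,
# especially the highest.
rng = np.random.default_rng(1)
fcst = rng.normal(0.0, 1.0, (15, 1000))
obs = rng.normal(0.5, 1.3, 1000)
print(rank_histogram(fcst, obs))
```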

Adjusting Probabilities: Uncorrected Talagrand Diagram

Warm bias in 15h fcst of 12 UTC NCEP SREF

Uniform Distribution

2m temperature ending 27 December 2004

Under-dispersive

Truth is colder than all SREF members a disproportionate amount of time

Clustering by model

Use 14-day bias to account for bias in forecast

Members 1 through 15 of NCEP SREF

Adjusting Probabilities: Bias Adjusted Talagrand Diagram

Near neutral bias in 15h fcst of 12 UTC NCEP SREF

Large bias eliminated but remains under-dispersive

Uniform Distribution

2m temperature ending 27 December 2004
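The slides do not spell out the 14-day bias adjustment, so the following is one plausible minimal sketch: each member's mean error over a trailing 14-day training window is subtracted from today's forecast. The array names and the use of a single domain-mean bias per member are assumptions.

```python
import numpy as np

def bias_adjust(fcst_today, fcst_past, obs_past):
    """Subtract each member's trailing-window mean error.

    fcst_today : (n_members, n_points) today's raw forecasts
    fcst_past  : (n_days, n_members, n_points) past ~14 days of forecasts
    obs_past   : (n_days, n_points) matching verifying analyses
    """
    # Mean error per member over the window (obs broadcast over members)
    bias = (fcst_past - obs_past[:, None, :]).mean(axis=(0, 2))
    return fcst_today - bias[:, None]
```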

• Build the PDF by using observed data to fit a statistical distribution (Gamma, Gumbel, or Gaussian) to the tails (see the sketch below)
• This produces a calibrated PDF based on past performance
  – "Past performance does not guarantee future results."

Adjusting Probabilities
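A heavily simplified sketch of the tail-fitting idea: fit a parametric distribution to past "truth minus coldest member" errors and use it to assign probability beyond the ensemble minimum. The Gaussian choice and function names are illustrative assumptions; Hamill and Colucci (1998) and Stensrud and Yussouf (2003) describe the actual rank-histogram and distribution-fitting details.

```python
import numpy as np
from scipy import stats

def prob_below_coldest(members, threshold, past_tail_errors):
    """P(truth < threshold) when threshold is colder than every member,
    from a Gaussian fit to past (truth - coldest member) errors; a
    Gamma or Gumbel fit could be substituted.

    members          : today's member forecasts at one point
    threshold        : value below the coldest member
    past_tail_errors : training sample of truth - coldest member
    """
    coldest = np.min(members)
    mu, sigma = stats.norm.fit(past_tail_errors)
    return stats.norm.cdf(threshold - coldest, loc=mu, scale=sigma)
```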

Adjusting Probabilities: Corrected Talagrand Diagram

~Uniform distribution in 15h fcst of 12 Z SREF

Uniform Distribution

SREF probabilities now reflect the expected occurrence of the event, even in the "tails"

2m temperature ending 27 December 2004

Adjusted Temperature Fcst

Max temp (50%) valid 12 UTC 5 Jan to 00 UTC 6 Jan 2004

Probabilistic Temperature Forecast
Norman, OK (95% Confidence)

Norman, OK Temp Forecast from SREF
[Meteogram: Temp (degF) vs. local time, Dec 27 to Dec 29, with the median (50.0%) and the upper and lower 2.5% bounds of the 95% confidence interval; actual mins & maxes indicated by red dots]

Probabilistic Meteogram
Probability of severe thunderstorm ingredients: OUN; Runtime: 09 UTC 21 April

• Information on how ingredients are evolving
• Viewing ingredients via probabilistic thinking


Outline

• Motivation for ensemble forecasting

• Ensemble products and applications
  – Emphasis on probabilistic products

• Ensemble calibration (verification)

• Decision making using ensembles

Decision Making

• Probabilities from an uncalibrated, under-dispersive ensemble system are still useful in quantifying uncertainty

• Trends in probabilities (dprog/dt) may indicate less spread among members as t → 0

[Panels: 12h Prob Thunder and 12h Prob Severe at Day 6, Day 5, Day 4, Day 3, and Day 2]

• Increased probabilistic resolution as event approaches
• Run-to-run consistency
• Time-lagged members (weighted) add continuity to forecast

Trend over 5 days from NCEP MREF (Valid: 22 Dec 2004)

Results…

Decision Making

• Probabilities from an uncalibrated, under-dispersive ensemble system are still useful to quantify uncertainty

• Trends in probabilities (dprog/dt) may indicate less spread among members as t → 0

• Decision theory can be used with or without separate calibration

Decision Theory Example

• Consider the calibrated thunderstorm forecasts presented earlier [see Mylne (2002) for C/L model]…

User: Electronics store
Critical Event: Lightning strike/surge
Cost to protect: $300
Expense of a loss: $10,000

Cost matrix (rows = forecast; columns = observed):

                  Observed: Yes     Observed: No
Forecast: Yes     Hit ($300)        F.A. ($300)
Forecast: No      Miss ($10,000)    C.R. ($0)

a = (F.A. - C.R.) / (Miss + F.A. - Hit - C.R.)

C/L = a = 0.03

If no forecast information is available, the user will always protect if a < o, and never protect if a > o, where o is the climatological frequency of the event

Decision Theory Example
• If the calibration were perfect, the user would take protective action whenever the forecast probability is > a.

But the forecast is not perfectly reliable…

• Apply a cost-loss model to assist in the decision (prior calibration is unnecessary)

• Define a cost-loss model as in Murphy (1977); Legg and Mylne (2004); Mylne (2002)
  – This can be done without probabilistic calibration, as the technique implicitly calibrates based on past performance

• V = (Eclimate - Eforecast) / (Eclimate - Eperfect)

Decision Theory Example

V = (Eclimate - Eforecast) / (Eclimate - Eperfect)

V is a general assessment of forecast value relative to the perfect forecast (i.e., basically a skill score).

V = 1 indicates a perfect forecast system (i.e., action is taken only when necessary)

V <= 0 indicates a system of equal or lesser value than climatology

V = (Eclimate - Eforecast) / (Eclimate - Eperfect)

Eclimate = min[ (1 - o) F.A. + o Hit; (1 - o) C.R. + o Miss ]
Eperfect = o Hit
Eforecast = h Hit + m Miss + f F.A. + r C.R.
o = climatological freq = h + m

Decision Theory Example

Costs (rows = forecast; columns = observed):

                  Observed: Yes     Observed: No
Forecast: Yes     Hit ($300)        F.A. ($300)
Forecast: No      Miss ($10,000)    C.R. ($0)

Performance (relative frequencies; rows = forecast; columns = observed):

                  Observed: Yes     Observed: No
Forecast: Yes     Hit (h)           F.A. (f)
Forecast: No      Miss (m)          C.R. (r)
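Putting the definitions and tables above into code, a minimal sketch of computing V. The contingency frequencies below are hypothetical (the talk does not give them) but are chosen so that o = 0.03; they happen to reproduce a value near the V = .64 quoted on the next slide.

```python
def forecast_value(h, m, f, r, hit, fa, miss, cr):
    """Potential economic value V = (Ec - Ef) / (Ec - Ep) from the
    cost-loss model above (Murphy 1977; Mylne 2002).

    h, m, f, r : relative frequencies of hits, misses, false alarms,
                 and correct rejections (h + m + f + r = 1)
    hit, fa, miss, cr : the corresponding costs/losses in dollars
    """
    o = h + m                                   # climatological frequency
    e_climate = min((1 - o) * fa + o * hit,     # always protect
                    (1 - o) * cr + o * miss)    # never protect
    e_perfect = o * hit
    e_forecast = h * hit + m * miss + f * fa + r * cr
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Electronics-store example with hypothetical performance frequencies:
print(forecast_value(h=0.02, m=0.01, f=0.03, r=0.94,
                     hit=300, fa=300, miss=10000, cr=0))  # ~0.64
```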

Decision Theory Example

[Chart: Maximum potential value of the forecast and its associated action probability vs. a = cost/loss ratio (log scale, 0.001 to 1.00), bounded by the "never protect" and "always protect" curves. Action probability for a = .03 is 7% with V = .64]

Summary

• Ensembles provide information on mean, spread, and forecast uncertainty (probabilities)

• Derived products viewed in probability space have proven useful at the SPC

• Combined or joint probabilities are very useful
• When necessary, ensembles can be calibrated to provide reliable estimates of probability and/or aid in decision making

SPC SREF Products on the Web

http://www.spc.noaa.gov/exper/sref/

References

Bright, D.R., M. Wandishin, R. Jewell, and S. Weiss, 2005: A physically based parameter for lightning prediction and its calibration in ensemble forecasts. Preprints, Conference on Meteor. Appl. of Lightning Data, AMS, San Diego, CA (CD-ROM 4.3).

Cheung, K.K.W., 2001: A review of ensemble forecasting techniques with a focus on tropical cyclone forecasting. Meteor. Appl., 8, 315-332.

Ebert, E.E., 2001: Ability of a poor man's ensemble to predict the probability and distribution of precipitation. Mon. Wea. Rev., 129, 2461-2480.

Hamill, T.M. and S.J. Colucci, 1998: Evaluation of Eta-RSM ensemble probabilistic precipitation forecasts. Mon. Wea. Rev., 126, 711-724.

Legg, T.P. and K.R. Mylne, 2004: Early warnings of severe weather from ensemble forecast information. Wea. Forecasting, 19, 891-906.

Mylne, K.R. 2002: Decision-making from probability forecasts based on forecast value. Meteor. Appl., 9, 307-315.

Sivillo, J.K. and J.E. Ahlquist, 1997: An ensemble forecasting primer. Wea. Forecasting, 12, 809-818.

Stensrud, D.J. and N. Yussouf, 2003: Short-range ensemble predictions of 2-m temperature and dewpoint temperature over New England. Mon. Wea. Rev., 131, 2510-2524.

Wilks, D.S., 1995: Statistical Methods in the Atmospheric Sciences. International Geophysics Series, Vol. 59, Academic Press, 467 pp.
