
Rank Histograms – measuring the reliability of an ensemble forecast

• You cannot verify an ensemble forecast with a single observation.

• The more data you have for verification, the more certain you are (as is true in general for other statistical measures).

• Rare events (low probability) require more data to verify => as do systems with many ensemble members.

From Barb Brown

From Tom Hamill
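As a minimal sketch of how such a rank histogram is built (the names ens and obs and the synthetic data are assumptions for illustration, not code from the slides): for each forecast case, count how many ensemble members fall below the verifying observation, tally the resulting ranks, and check whether the histogram is roughly flat.

# Rank histogram sketch in base R
# ens: matrix of ensemble forecasts (rows = forecast cases, cols = members)
# obs: vector of verifying observations, one per forecast case
rank_histogram <- function(ens, obs) {
  n_members <- ncol(ens)
  # rank of each observation among its ensemble members (1 .. n_members + 1)
  ranks <- sapply(seq_along(obs), function(i) sum(ens[i, ] < obs[i]) + 1)
  # tally the ranks; a reliable ensemble gives a roughly flat histogram
  tabulate(ranks, nbins = n_members + 1)
}

# Synthetic example: 500 cases, 10-member ensemble drawn from the same
# distribution as the observations, so the histogram should be nearly flat
set.seed(1)
ens <- matrix(rnorm(500 * 10), nrow = 500)
obs <- rnorm(500)
counts <- rank_histogram(ens, obs)
barplot(counts, names.arg = seq_along(counts),
        xlab = "Rank of observation", ylab = "Counts")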

Troubled Rank Histograms

Slide from Matt Pocernic

[Figure: two example rank histograms; x-axis: Ensemble #, 1-10; y-axis: Counts, 0-30]

From Tom Hamill

Example of Quantile Regression (QR)

Our application

Fitting T quantiles using QR conditioned on:

1) Ranked forecast ens

2) ensemble mean

3) ensemble median

4) ensemble stdev

5) Persistence

R package: quantreg
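As a hedged illustration of this fitting step, here is a minimal sketch using the quantreg package; the synthetic data frame and the regressor names ens_mean, ens_sd and persist are stand-ins for the regressor set listed above, not the authors' code.

library(quantreg)

# Synthetic stand-in data (assumed names, not the authors' dataset)
set.seed(1)
n <- 300
ens_mean <- 280 + 10 * runif(n)               # ensemble-mean temperature [K]
ens_sd   <- 0.5 + rexp(n, rate = 2)           # ensemble standard deviation [K]
persist  <- ens_mean + rnorm(n, sd = 2)       # persistence forecast [K]
obs_T    <- ens_mean + rnorm(n, sd = ens_sd)  # "observed" temperature [K]
df <- data.frame(obs_T, ens_mean, ens_sd, persist)

taus <- c(0.05, 0.25, 0.50, 0.75, 0.95)       # T quantiles to fit

# One quantile regression per tau, conditioned on the chosen regressors
fits <- rq(obs_T ~ ens_mean + ens_sd + persist, tau = taus, data = df)

# Predicted conditional quantiles: one row per forecast case, one column per tau
pred <- predict(fits, newdata = df)
head(pred)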

[Figure: ensemble forecasts and observations; y-axis: T [K], x-axis: Time]

Regressor set: 1. reforecast ens, 2. ens mean, 3. ens stdev, 4. persistence, 5. LR quantile (not shown)

[Figure: climatological PDF; y-axis: Probability/°K, x-axis: Temperature [K]]

Step 1: Determine climatological quantiles

Step 2: For each quantile, use "forward step-wise cross-validation" to iteratively select the best regressor subset. Selection requirements: a) QR cost function minimum (sketched in code after Step 3), b) satisfy the binomial distribution at 95% confidence. If the requirements are not met, retain the climatological "prior".


Step 3: Segregate forecasts into differing ranges of ensemble dispersion and refit the models (Step 2) uniquely for each range
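The "QR cost function" of Step 2 is the standard check (pinball) loss. The sketch below, reusing the synthetic df from the quantreg example above, shows how one candidate regressor subset could be scored by cross-validation; the helper names check_loss and cv_cost are illustrative, and the binomial-consistency test of requirement b) is not shown.

# Check (pinball) loss: the cost function minimised by quantile regression
check_loss <- function(obs, pred, tau) {
  err <- obs - pred
  mean(ifelse(err >= 0, tau * err, (tau - 1) * err))
}

# Cross-validated cost of one candidate regressor subset (illustrative helper)
cv_cost <- function(formula, data, tau, k = 5) {
  folds <- sample(rep(1:k, length.out = nrow(data)))
  costs <- sapply(1:k, function(j) {
    fit  <- quantreg::rq(formula, tau = tau, data = data[folds != j, ])
    pred <- predict(fit, newdata = data[folds == j, ])
    check_loss(data[folds == j, all.vars(formula)[1]], pred, tau)
  })
  mean(costs)
}

# Forward step-wise idea: keep the regressor set that lowers the CV cost
cv_cost(obs_T ~ ens_mean, df, tau = 0.9)
cv_cost(obs_T ~ ens_mean + ens_sd, df, tau = 0.9)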

[Figure: forecast time series (y-axis: T [K], x-axis: Time) divided into dispersion ranges I, II, III, II, I; forecast PDF (y-axis: Probability/°K, x-axis: Temperature [K]) showing prior and posterior]

Final result: "sharper" posterior PDF represented by interpolated quantiles

RPS = \frac{1}{n-1}\sum_{i=1}^{n}\left(\mathrm{CDF}_{fc,i}-\mathrm{CDF}_{obs,i}\right)^{2}

Ranked Probability Score (RPS), for multi-categorical or continuous variables
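The RPS formula above translates directly into R; the vector names below are illustrative, with cdf_fc holding the forecast cumulative probabilities over the n ordered categories and cdf_obs the observed step-function CDF.

# Ranked Probability Score for one forecast-observation pair
# cdf_fc:  forecast cumulative probabilities over the n ordered categories
# cdf_obs: observed CDF (0 below the observed category, 1 from it upward)
rps <- function(cdf_fc, cdf_obs) {
  n <- length(cdf_fc)
  sum((cdf_fc - cdf_obs)^2) / (n - 1)
}

# Example: 5 categories, observation falls in category 3; 0 is a perfect score
cdf_fc  <- c(0.10, 0.35, 0.70, 0.90, 1.00)
cdf_obs <- c(0, 0, 1, 1, 1)
rps(cdf_fc, cdf_obs)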

Scatter-plot and Contingency Table

Does the forecast correctly detect temperatures above 18 degrees?

Slide from Barbara Casati
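For the exceedance question above, a minimal sketch of the corresponding 2x2 contingency table; the 18-degree threshold is from the slide, while the synthetic fcst and obs vectors are assumptions for illustration.

# Contingency table for the event "temperature above 18 degrees"
set.seed(1)
obs  <- rnorm(200, mean = 17, sd = 3)   # synthetic observed temperatures
fcst <- obs + rnorm(200, sd = 2)        # synthetic forecasts

event_fc  <- fcst > 18
event_obs <- obs  > 18

# Rows: forecast yes/no, columns: observed yes/no
ctab <- table(forecast = event_fc, observed = event_obs)
print(ctab)

hits         <- ctab["TRUE", "TRUE"]
misses       <- ctab["FALSE", "TRUE"]
false_alarms <- ctab["TRUE", "FALSE"]
hit_rate <- hits / (hits + misses)                 # probability of detection
far      <- false_alarms / (hits + false_alarms)   # false alarm ratio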

BS = \frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-o_{i}\right)^{2}

Brier Score

y = forecast probability of event occurrence
o = observed occurrence (0 or 1)
i = sample index out of n total samples

=> Note similarity to MSE
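The Brier Score formula above also translates directly into R; the probabilities and outcomes below are made up for illustration.

# Brier Score: mean squared difference between forecast probability and
# observed occurrence -- note the similarity to MSE
brier_score <- function(y, o) mean((y - o)^2)

# Example: forecast probabilities vs. observed occurrences (0/1)
y <- c(0.9, 0.1, 0.7, 0.3, 0.8)
o <- c(1,   0,   1,   0,   0)
brier_score(y, o)   # 0 is a perfect score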

Other post-processing approaches …
1) Bayesian Model Averaging (BMA) – Raftery et al (1997)
2) Analogue approaches – Hopson and Webster, J. Hydromet (2010)
3) Kalman Filter with analogues – Delle Monache et al (2010)
4) Quantile regression – Hopson and Hacker, MWR (under review)
5) Quantile-to-quantile (quantile matching) approach – Hopson and Webster, J. Hydromet (2010)

… many others

Quantile Matching: another approach when matched forecast-observation pairs are not available => useful for climate change studies

2004 Brahmaputra Catchment-averaged Forecasts
- black line: satellite observations
- colored lines: ensemble forecasts
- Basic structure of catchment rainfall similar for both forecasts and observations
- But large relative over-bias in forecasts

ECMWF 51-member Ensemble Precipitation Forecasts compared to observations

[Figure: two precipitation-quantile curves, Pfcst and Padj vs. Quantile (25th, 50th, 75th, 100th); y-axis: Precipitation up to Pmax]

Forecast Bias Adjustment - done independently for each forecast grid

(bias-correct the whole PDF, not just the median)

Model Climatology CDF
"Observed" Climatology CDF

In practical terms …

[Figure: ranked forecasts and ranked observations; Precipitation axis from 0 to 1 m]

Hopson and Webster (2010)
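A minimal sketch of the quantile-to-quantile adjustment described above, done independently for each forecast grid point: each forecast value is mapped to its quantile in the model climatology CDF and then to the same quantile of the "observed" climatology CDF. The function name, the synthetic gamma-distributed climatologies, and the choice of quantile type are assumptions for illustration, not the authors' implementation.

# Quantile matching (quantile-to-quantile) bias adjustment for one grid point
# fcst_clim: model-climatology precipitation values at this grid point
# obs_clim:  "observed" (e.g. satellite) climatology at the same grid point
# fcst_new:  forecast values to be adjusted
quantile_match <- function(fcst_new, fcst_clim, obs_clim) {
  q <- ecdf(fcst_clim)(fcst_new)                       # quantile within model climatology
  as.numeric(quantile(obs_clim, probs = q, type = 6))  # same quantile of obs climatology
}

# Synthetic example: model climatology with a large wet bias
set.seed(1)
fcst_clim <- rgamma(1000, shape = 2, scale = 8)   # biased model climatology [mm]
obs_clim  <- rgamma(1000, shape = 2, scale = 5)   # observed climatology [mm]
fcst_new  <- rgamma(51, shape = 2, scale = 8)     # one 51-member forecast
adj <- quantile_match(fcst_new, fcst_clim, obs_clim)
summary(fcst_new); summary(adj)                   # adjusted values show reduced bias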

Brahmaputra Corrected Forecasts
[Figure: Original Forecast vs. Corrected Forecast]

=> Now the observed precipitation falls within the "ensemble bundle"

Bias-corrected Precipitation Forecasts