allen harvard 2008
TRANSCRIPT
-
8/14/2019 Allen Harvard 2008
1/61
Oxford University
What can be said about future climate?
Quantifying uncertainty in multi-decade climate forecasting
Myles Allen, Department of Physics, University of Oxford
[email protected]
It is wrong to think that the task of physics
is to find out how nature is. Physics
concerns what we can say about nature.
Niels Bohr
A recent failure of climate modelling
99% of the effort
99% of the impact
Sources of uncertainty in climate forecasts
Initial conditions: only relevant, for most variables, on seasonal to inter-annual
timescales.
Boundary conditions:
Natural forcing is a major and irreducible source of uncertainty on all timescales.
Anthropogenic forcing is a source of uncertainty on >50-year
timescales: scenarios predict similar forcing out to the 2030s.
Response uncertainty, or model error:
Dominant source of uncertainty on the critical 30- to 50-year
time-frame.
The focus of this talk.
Why didn't the IPCC just use the inter-model spread
from the CMIP-3 ensemble of opportunity?
Global temperatures from the AR4 models: is
this too good to be true?
Evidence that the fit to 20th century warming may
be misleadingly good (Kiehl, 2007)
More evidence that ensembles of opportunity do not
span responses consistent with observations
Diamonds show models used in IPCC (2001)
TCR = warming after 70 years of 1%-per-year increasing CO2
Box-whisker bar shows range of TCRs of AR4 models
(only one model has TCR greater than 2.2K)
More evidence that ensembles of opportunity do not
span responses consistent with observations
Identifying the origin of the discrepancy:
Andrews & Allen (2007) after Forest et al (2006)
Diagnose effective climate sensitivity (ECS) and
effective heat capacity (EHC) for AR4 models.
Compare models with
observations of 20th-century
temperature changes and
ocean heat uptake.

C dT/dt = F − λT
ECS = F_2xCO2 / λ
EHC = C
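The diagnostic model here is the standard one-box energy balance equation; a minimal numerical sketch (all parameter values are illustrative assumptions, not the AR4-fitted ones) shows how ECS and EHC enter:

```python
import numpy as np

def integrate_ebm(F, C, lam, dt=0.1):
    """Euler integration of the energy balance model C dT/dt = F(t) - lam*T."""
    T = np.zeros(len(F))
    for i in range(1, len(F)):
        T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / C
    return T

F_2x = 3.7           # forcing for doubled CO2 (W m^-2), illustrative
lam, C = 1.2, 30.0   # feedback parameter and effective heat capacity (assumed)
ECS = F_2x / lam     # effective climate sensitivity = F_2xCO2 / lambda
dt = 0.1
t = np.arange(0.0, 500.0, dt)
T = integrate_ebm(np.full_like(t, F_2x), C, lam, dt)
# the response relaxes toward ECS with time constant C/lam (the FRT)
assert abs(T[-1] - ECS) < 0.01
```

In this framework the same two numbers, ECS and EHC = C, summarise each AR4 model's transient behaviour.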
Identifying the origin of the discrepancy:
Andrews & Allen (2007) after Forest et al (2006)
New coordinates: transient climate response (TCR)
& feedback response time (FRT)
TCR distribution is biased low, FRT
under-dispersed.

Forcing for 1%/yr CO2: F(t) = F_2xCO2 × t/70
T(t) = ECS × [t/70 − (FRT/70)(1 − e^(−t/FRT))]
TCR = T(70 y) = ECS × [1 − (FRT/70)(1 − e^(−70/FRT))], FRT = C/λ
Note that transient observable quantities are
inherently non-linear in ECS
For small ECS (large λ) or long timescales, the transient
response varies as ECS:
TCR ≈ ECS.
For large ECS (small λ) or short timescales, the response
depends on ECS only through an (ECS)^-1 term:
TCR ≈ T_0 (1 − 2T_0 / (3 ECS)),
where T_0 is the limiting transient response.
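Both limits can be checked numerically from the one-box solution for 1%/yr CO2 (the C and F_2x values are illustrative assumptions):

```python
import numpy as np

def tcr(ecs, C=30.0, F_2x=3.7):
    """TCR from the one-box model; tau = C*ecs/F_2x is the FRT."""
    tau = C * ecs / F_2x
    return ecs * (1.0 - (tau / 70.0) * (1.0 - np.exp(-70.0 / tau)))

# small-ECS limit: TCR tracks ECS nearly linearly
assert abs(tcr(0.5) / 0.5 - 1.0) < 0.1
# large-ECS limit: doubling ECS from 8 K to 16 K changes TCR by < 20%
assert tcr(16.0) / tcr(8.0) < 1.2
```

This saturation is why observations of transient warming constrain the lower bound on ECS much more tightly than the upper bound.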
Moving on from ensembles of opportunity
If modelling groups, either consciously or by natural
selection, are tuning their flagship models
to fit the same observations, the spread of predictions
becomes meaningless: eventually they will all
converge to a delta-function.
Three approaches to the treatment of model
error in ensemble forecasting
1. Subjectivist (e.g. Rougier, 2006; Murphy et al, 2004): use expert priors on model parameters
weighted by goodness-of-fit to observations.
2. Realist (Allen et al, 2000; Forest et al, 2002): focus
on relative goodness-of-fit to observations, minimizing the influence of prior assumptions.
3. Nihilist (Smith, 2002; Stainforth et al, 2005, 2007):
focus on spread of model results, irrespective of
goodness-of-fit to observations.
The problems with option 3: Climate sensitivities
from climateprediction.net
Stainforth et al, 2005
Stainforth et al, 2005, updated: raw distribution
based on ~50,000 45-year GCM simulations
Traditional range
Distribution close to Gaussian in (ECS)-1, so high-
ECS models are inevitable: see Roe & Baker (2007)
Many of these high sensitivity models will prove
significantly less realistic than the original
[Figure: climate sensitivity vs. global top-of-atmosphere energy imbalance;
colours denote values of the entrainment coefficient.]
Rejected by Rodwell & Palmer
But not all
[Figure: climate sensitivity vs. global top-of-atmosphere energy imbalance.]
If you just report the spread of results, some people
(deliberately?) get the wrong end of the stick
Option 2: The standard Bayesian approach to
probabilistic climate forecasting
P(S|y) = ∫ P(S|θ) P(θ|y) dθ
       = ∫ P(S|θ) P(y|θ) P(θ) dθ / ∫ P(y|θ) P(θ) dθ

S: quantity predicted by the model, e.g. climate sensitivity
θ: model parameters, e.g. diffusivity, entrainment coefficient etc.
y: observations of model-simulated quantities, e.g. recent warming
P(y|θ): likelihood of observations y given parameters θ
P(θ): prior distribution of parameters
Simple models: P(S|θ) = 1 if parameters θ give sensitivity S,
P(S|θ) = 0 otherwise
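A minimal grid sketch of this marginalization; the forward model, the observable, and the error scale are all invented for illustration:

```python
import numpy as np

# hypothetical forward model: one parameter theta, sensitivity S = g(theta)
def sensitivity(theta):
    return 1.5 / theta                 # assumed inverse relation (monotonic)

theta = np.linspace(0.3, 3.0, 2001)    # uniform prior P(theta) on a grid
S = sensitivity(theta)

# toy Gaussian likelihood: the observed quantity is modelled as 0.5*S
y_obs, sigma = 0.8, 0.3
w = np.exp(-0.5 * ((0.5 * S - y_obs) / sigma) ** 2)
w /= w.sum()                           # posterior weights P(theta|y)

# P(S|y): histogram the implied sensitivities with posterior weights,
# i.e. integrate P(S|theta) P(theta|y) dtheta with delta-function P(S|theta)
bins = np.linspace(0.4, 5.1, 24)
post, _ = np.histogram(S, bins=bins, weights=w)
assert abs(post.sum() - 1.0) < 1e-9
```

Re-running with the grid uniform in S itself rather than in theta changes `post`: that prior-dependence is exactly the problem discussed on the next slide.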
The problem with Bayesian approaches:
sensitivity of results to prior assumptions
Solid line: Forest et al (2002)
distribution if you start with uniform
sampling of model parameters
Dashed line: distribution obtained if
you start with uniform sampling of
climate sensitivity
More recent example: Murphy et al (2004), using
a perturbed physics ensemble & the essential
method for UKCIP08
The distribution P(ECS) Murphy et al would have
found, if other perturbations were ineffective,
through their sampling of the entrainment coefficient
Discontinuity at unperturbed value
Weighting by (ECS)^-2
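The (ECS)^-2 factor is just the change-of-variables Jacobian: if a coefficient c is sampled uniformly and ECS varies inversely with c, the implied density is P(ECS) = P(c) |dc/dECS| ∝ (ECS)^-2. A toy check (the inverse relation and the ranges are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# sample a stand-in 'entrainment coefficient' uniformly
c = rng.uniform(0.1, 1.0, size=1_000_000)
ecs = 3.0 / c                     # assume ECS varies inversely with c

# the implied density should fall off as ECS**-2
hist, edges = np.histogram(ecs, bins=np.linspace(4.0, 20.0, 33), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
ratio = hist * centres**2         # roughly constant if P(ECS) ~ ECS**-2
assert ratio.max() / ratio.min() < 1.2
```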
Impact of removing (ECS)-2 weighting (solid) and
discontinuities in P(ECS) (dashed)
Why the standard Bayesian approach won't ever
work
Sampling a distribution of possible models requires us to define a distance between two models
in terms of their input parameters & structure: a
metric for model error.
As long as models contain nuisance parameters that do not correspond to any observable quantity,
this is impossible in principle: should Forest et al
have sampled ECS, log(ECS) or (ECS)-1?
Trying different priors doesn't help, because users
want one answer: e.g. Stern used Murphy et al with
discontinuities and (ECS)-2 weighting.
So what is the alternative?
Forest et al (2001), Knutti et al (2002), Frame et al (2005),
Hegerl et al (2006): sample parameters to give a uniform
distribution in sensitivity, then weight by likelihood
[Figure: sensitivity on the x-axis; IGG = Isopleth of Global Goodness-of-fit.]
Equivalently, compute average likelihood over all
models that predict a given S
L_0(S|y) = ∫ P(y|θ) P(S|θ) P(θ) dθ / ∫ P(S|θ) P(θ) dθ

S: quantity predicted by the model, e.g. climate sensitivity.
θ: model parameters, e.g. diffusivity, entrainment coefficient etc.
y: observations of model-simulated quantities, e.g. recent warming.
Denominator: the prior predictive distribution, or P(S) given by
parameter sampling with P(y|θ) = constant.
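On a sampled ensemble, L_0 amounts to binning models by their predicted S and averaging the likelihood within each bin. A toy sketch with two invented parameters and an invented observable, so that models predicting the same S can have different likelihoods:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy perturbed-parameter ensemble (names and ranges are illustrative)
theta1 = rng.uniform(0.3, 3.0, 200_000)
theta2 = rng.uniform(0.5, 1.5, 200_000)
S = 1.5 * theta2 / theta1              # hypothetical predicted sensitivity
y_obs, sigma = 0.8, 0.3
like = np.exp(-0.5 * ((0.5 * S * theta2 - y_obs) / sigma) ** 2)

# L0(S|y): mean likelihood of the sampled models predicting S in each bin
bins = np.linspace(0.5, 5.0, 19)
idx = np.digitize(S, bins)
L0 = np.array([like[idx == k].mean() for k in range(1, len(bins))
               if np.any(idx == k)])
assert 0.0 <= L0.min() and L0.max() <= 1.0
```

Note that L_0 still depends on how densely each S-bin was sampled in (theta1, theta2), i.e. on the implicit prior.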
Simple case in which all models that predict a
given S have the same likelihood (e.g. a single
observational constraint, monotonic in S):

L_0(S|y) = P(y|θ) evaluated at θ = θ(S)

No need to average over parameters, so P(θ) does not feature.
Only applies in this very simple case.
If the model doesn't have just one parameter affecting S, or if
different parameter-choices predicting the same S have
different likelihoods, what should we do?
Impact of sampling nuisance parameters
Impact of sampling nuisance parameters
A more robust approach: compute maximum
likelihood over all models that predict a given S
L_1(S|y) = max_θ P(S|θ) P(y|θ)

P(S|θ) picks out models that predict a given value of the forecast quantity of interest, e.g. climate sensitivity.
P(y|θ) evaluates their likelihoods.
The likelihood profile, L_1(S|y), is proportional to the relative likelihood of the
most likely available model as a function of the forecast quantity.
Likelihood profiles follow parameter combinations that cause the
likelihood to fall off as slowly as possible with S: the least-favourable sub-model approach.
P(θ) does not matter: use any sampling design you like, as long as you find the likelihood maxima.
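On a sampled ensemble, the profile replaces the within-bin average with a within-bin maximum, and confidence intervals come from a likelihood-ratio threshold (3.84 is the usual 95%, 1-d.o.f. chi-squared value; the parameters and observable are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# toy perturbed-parameter ensemble (names and ranges are illustrative)
theta1 = rng.uniform(0.3, 3.0, 200_000)
theta2 = rng.uniform(0.5, 1.5, 200_000)
S = 1.5 * theta2 / theta1              # hypothetical predicted sensitivity
y_obs, sigma = 0.8, 0.15
like = np.exp(-0.5 * ((0.5 * S * theta2 - y_obs) / sigma) ** 2)

# L1(S|y): likelihood of the *most likely* model predicting each S
bins = np.linspace(0.5, 5.0, 46)
idx = np.digitize(S, bins)
L1 = np.array([like[idx == k].max() if np.any(idx == k) else 0.0
               for k in range(1, len(bins))])
L1 /= L1.max()

# ~95% confidence region: keep S where -2*log(L1) <= 3.84
inside = L1 >= np.exp(-0.5 * 3.84)
assert inside.any() and not inside.all()
```

No prior over (theta1, theta2) appears anywhere: any sampling design that locates the within-bin maxima gives the same profile.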
Likelihood profiling
Find the relative likelihood of the most likely (most realistic, or least unlikely) model as a function of the
forecast variable of interest.
Evaluate likelihood only over observable quantities that are:
relevant to the forecast;
adequately simulated by the model (including variability).
Ignore the (meaningless) number of less realistic
models that you find that give a similar prediction.
Evaluate confidence intervals from likelihood
thresholds.
Generating models consistent with quantities we can
observe
and mapping their implications for quantities
we wish to forecast.
Note: only the outline (likelihood profile) matters, not the density of
models. Hence we avoid the metric-of-model-error problem.
Why you can't interpret the area under the
likelihood profile as a probability
Blue line: area-conserving map of likelihood
profile for climate sensitivity onto C2K
Implications for 21st century warming under a
given scenario
Bayesian approach (e.g. incorporating a prior
from Murphy et al, 2004) gives a tighter forecast
But also a tighter hindcast: do we really believe
our models this much?
And if some people believe models more than
others…
IPCC attribution methodology
IPCC projection methodology
Some objections to a likelihood profiling
approach to probabilistic forecasting
Ignoring prior expectations gives misleadingly large uncertainty ranges:
Attribution statements have consistently had more impact
than climate predictions.
Could this be because the attribution community has taken
such a cautious, data-driven approach?
Decision-makers need probability distribution
functions, not just confidence intervals.
Not at all clear this is true of real decision-makers.
Likelihood profiles require such large ensembles that you can't generate them with a GCM unless you
resort to crude pattern-scaling approaches.
Likelihood profiling with full-complexity climate
models
Exploring even a low-dimensional parameter space requires large, multi-thousand-member ensembles:
Varying parameters to generate multiple model versions,
allowing for non-linear interactions.
Varying initial conditions to quantify likelihood of each
model version with respect to observations.
Varying forcings to allow for forcing uncertainty in
likelihood and forecast.
How could we possibly do this with a full-complexity
coupled AOGCM?
Climateprediction.net: the world's largest
climate modelling facility
>300,000 volunteers (50,000 active), 23M model-years
The climateprediction.net BBC Climate Change
Experiment
HadCM3L coupled GCM: flux-corrected, atmospheric
resolution of HadCM3, lower
ocean resolution, no Iceland.
Spin-up with standard
atmosphere, 10 perturbed oceans.
Switch to perturbed
atmosphere in 1920 and re-
adjust down-welling fluxes.
Transient (A1B) and control simulations to 2080.
10 future volcanic and 5
solar scenarios.
23,000 runs completed to
date (7M model-years).
Global temperatures from CMIP-3:
transient-control simulations
Global temperatures from CPDN BBC climate
change experiment: first 500 runs
Evaluating a likelihood profile for 2050
temperature from CPDN-BBC results
Start with 5-year-averaged temperature time-series over 1961-2005 for Giorgi regions plus ocean
basins (29 regions, 9 pentads).
Project onto EOFs of the CPDN ensemble.
Compute weighted r^2 (Mahalanobis distance) using CMIP-3 control integrations to estimate the expected
model-data discrepancy due to internal variability:

r_i^2 = (y − x_i)^T C_N^-1 (y − x_i)

y: observations
x_i: i-th model simulation
C_N: covariance of the control simulations
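A sketch of the weighted r^2 in the truncated EOF space; the dimensions, the control-segment count, and the random stand-ins for observed and modelled projections are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

n_trunc = 10   # retained EOFs (29 regions x 9 pentads, truncated)
# segments of unforced control runs define the internal-variability covariance
control = rng.standard_normal((200, n_trunc))
C_N = np.cov(control, rowvar=False)
C_inv = np.linalg.inv(C_N)

def mahalanobis_r2(y, x):
    """Weighted r^2 between observed and simulated EOF projections."""
    d = y - x
    return float(d @ C_inv @ d)

y = rng.standard_normal(n_trunc)   # stand-in observed projection
x = rng.standard_normal(n_trunc)   # stand-in model projection
r2 = mahalanobis_r2(y, x)

# consistency test: compare r2 with the 95th percentile of the control spread
threshold = np.percentile(
    [mahalanobis_r2(c, np.zeros(n_trunc)) for c in control], 95)
consistent = r2 <= threshold
assert r2 >= 0.0 and threshold > 0.0
```

Models passing this test are the ones retained in the extraction step on the following slides.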
Extracting models in which r^2 ≤ the 95th percentile of the
control distribution
Fingerprinting future climate
Weight elements of y and x by information content: correlation with 2050 temperature in the CPDN ensemble
Discrepancy against observed change 1961-2005
versus T-2050, weighting by information content
Extracting models in which r^2 ≤ the 95th percentile of the
control distribution, weighting by IC
Information content in input parameter values?
Shades denote values of entrainment coefficient
Methodological conclusions
Ensembles of opportunity are too good: fit to observed changes is consistent with pure internal
variability (no errors in forcing or response).
Deliberately perturbed ensembles give a much larger
spread of behaviour, particularly if sulphur-cycle parameters are included.
The standard Bayesian approach faces fundamental
problems with the selection of priors. Open to pressure,
e.g. to keep PDFs tight to maximise utility.
Likelihood profiling seems to be the only viable
option: we just aren't yet incorruptible enough for
the Reverend Bayes.
the Reverend Bayes.
Conclusions of the BBC Climate Change
Experiment so far
The range of predictions consistent with recent climate change suggests a lower bound (of the 95% C.I.)
similar to the minimum of the CMIP3 ensemble.
The upper bound is closer to the CMIP3 mean plus 60%,
consistent with the AR4 expert assessment.
Next step: extension to regional forecasts.
Interesting methodological question: should the
definition of the likelihood function (weights in r^2)
depend on the forecast quantity of interest?
And the people who actually did the work
Simple modelling: Dave Frame, Chris Forest, Ben Booth & conversations with many others.
BBC Climate Change Experiment: Carl Christensen,
Nick Faull, Tolu Aina, Dave Frame, Frances
McNamara & the rest of the team at cpdn & the BBC.
Distributed computing: David Anderson (Berkeley
and SETI@home) & the BOINC developers.
Paying the bills: NERC (COAPEC, e-Science, KT &
core), EC (ENSEMBLES & WATCH), UK DfT, Microsoft Research.
And most important of all, the endlessly patient and
enthusiastic participants, volunteer board monitors
etc. of the climateprediction.net global community.
Things didn't work out entirely according to plan
But it's an ill wind… sulphate emissions in the
first (dashed) and corrected (solid) ensembles
Using the altered aerosol experiment to explore the
impact on Sahel rainfall (Ackerley et al, in prep)
With the corrected forcing files: global
temperatures from the first completed runs