
Automatic Weather Stations and Artificial Neural Networks: Improving the Instrumental Record in West Antarctica

David B. Reusch1 and Richard B. Alley

Department of Geosciences and EMS Environment Institute

The Pennsylvania State University

University Park, PA 16802 USA

1. Corresponding author, e-mail [email protected]

Submitted January 8, 2002


Abstract

Automatic weather stations (AWS) currently provide the only year-round, continuous

direct measurements of near-surface weather on the West Antarctic ice sheet away from

the coastal manned stations. Improved interpretation of the ever-growing body of ice-

core-based paleoclimate records from this region requires a deeper understanding of

Antarctic meteorology. As the spatial coverage of the AWS network has expanded year

to year, so has our meteorological database. Unfortunately, many of the records are

relatively short (less than 10 years) and/or incomplete (to varying degrees) due to the

vagaries of the harsh environment. Climate downscaling work in temperate latitudes

suggests that it is possible to use GCM-scale meteorological data sets (e.g., ECMWF

reanalysis products) to address these problems in the AWS record and create a uniform

and complete database of West Antarctic surface meteorology (at AWS sites). Such

records are highly relevant to the improved interpretation of the expanding library of

snow-pit and ice-core data sets.

Artificial neural network (ANN) techniques are used to predict AWS surface data

(temperature, pressure) using large-scale features of the atmosphere (e.g., 500 mb

geopotential height) from a region around the AWS. ANNs are trained with a calendar

year of observed AWS data (possibly incomplete) and corresponding GCM-scale data.

This methodology is sufficient both for high-quality predictions within the training set

and for predictions outside the training set that are at least comparable to the state-of-the-

art. For example, our results for temperature prediction are approximately equal to those

from a satellite-based methodology but with no exposure to problems from surface melt

events or sensor changes. Similarly, the significant biases seen in ECMWF surface


temperatures are absent from our predictions, resulting in an RMS error half as large with

respect to the original AWS observations.

These results support high confidence in the ANN-based predictions from the GCM-scale

data for periods when AWS data are unavailable, e.g., before installation. ANNs thus

provide a means to expand our surface meteorological records significantly in West

Antarctica.


1. Introduction

To advance our knowledge of paleoclimate, we must improve our calibration of the ice-

core-based proxies to the modern climate. This will improve our interpretive skill and

deepen our confidence in climate reconstructions. Because the conditions that make an ice

sheet a good recorder of climate also make it inhospitable for humans and their weather

instruments, meteorological records from these regions are sparse and suffer greatly in

comparison to more temperate regions. Yet research in the temperate world has

suggested a new solution to this problem of short, interrupted, polar meteorological

records: artificial neural networks (ANNs). Similar to traditional climate downscaling

(e.g., Crane and Hewitson 1998), our ANN-based approach uses GCM-scale data to

predict surface meteorology based on the available surface record. But unlike most

climate downscaling work, the surface data from polar ice sheets are very limited.

Automatic weather stations (AWS) currently provide the only year-round, direct

measurements of weather away from the coast in West Antarctica (Figure 1). As the

spatial coverage of the network has expanded year to year, so has our meteorological

database, thus adding to our calibration data. Unfortunately, many of the records are

relatively short (less than 10 years) and/or incomplete (to varying degrees) due to the

vagaries of the harsh environment. Presuming that current AWS remain active, the

records will lengthen over time and eventually solve the shortness-of-record problem.

Equipment problems may also decline as improved instruments are deployed and existing

components upgraded. Nonetheless, for progress to occur in the near term, these

problems need to be addressed. Our ANN-based approach provides a means to both fill

gaps from instrument failures (and thereby improve the overall record quality) and to


extend records into time periods prior to AWS installation and after station

relocation/removal. In particular, we have used our ANN-based methodology to generate

complete records of pressure and temperature for Ferrell AWS (77.91° S, 170.82° E,

Figure 1) for the period 1979-1993. The new records are a merger of AWS observations

and ANN predictions for periods when observations were unavailable.

Our methodology is based on artificial neural networks (ANNs, Figure 2). An ANN is

composed of a large and highly-connected network of simple processing nodes organized

into layers and loosely modeled after neurons in the nervous system (e.g., Haykin 1999).

Nodes have multiple, weighted inputs and a single output (Figure 3). The weighted

inputs are combined and passed through a non-linear, often sigmoidal, activation function

to produce the output value. Multilayer feed-forward networks, the basic architecture of

our work, divide nodes into input, output, and hidden layers ("hidden" because it is

internal to the network). In practice, feed-forward ANNs typically use three layers and

anywhere from just a few to hundreds of nodes per layer. Our methodology trains an

ANN with pairs of AWS observations and corresponding ECMWF variables from a

calendar year. The trained ANN is then used to predict missing AWS observations from

available ECMWF data, e.g., into years before the AWS was operational.

In Section 2 we describe the data used in our ANN-based prediction system. Further

details on the ANN architectures and training methods used are given in Section 3.

Section 4 presents analyses of our results and the new synthesized temperature and

pressure records for AWS Ferrell. Section 5 compares the ANN-based results to a

satellite-based temperature prediction technique and to ECMWF surface data.


2. Data

AWS Data

The main source of direct meteorological data in West Antarctica is the network of AWS

maintained by the University of Wisconsin-Madison since 1980 (Lazzara 2000). All

stations provide near-surface air temperature, pressure and wind speed and direction;

some stations also report relative humidity and multiple vertical temperatures (e.g., for

vertical temperature differences). The main instrument cluster is nominally within 3 m

above the snow surface. This distance changes with snow accumulation and removal.

Pressure is calibrated to ±0.2 hPa with a resolution of approximately 0.05 hPa.

Temperature accuracy is 0.25-0.5 °C with lowest accuracy at -70 °C, i.e., accuracy

decreases with decreasing temperature (M. Lazzara, pers. comm.). The data used here

are from the three-hourly quality-controlled data sets available at the University of

Wisconsin-Madison FTP site (ice.ssec.wisc.edu). A subset of these data (for 0, 6, 12 and

18 UTC) is used to match ECMWF time-steps (see below).
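
For illustration, a minimal sketch (Python/NumPy, not the processing code actually used in this work; the array names are hypothetical) of extracting the 0, 6, 12 and 18 UTC records from a three-hourly series:

```python
import numpy as np

# Hypothetical two days of 3-hourly AWS observations: hour of day (UTC) and temperature.
rng = np.random.default_rng(0)
hours = np.arange(0, 48, 3) % 24
temps = -30.0 + rng.standard_normal(hours.size)

# Keep only the 0, 6, 12 and 18 UTC records to match the ERA-15 time steps.
keep = np.isin(hours, [0, 6, 12, 18])
hours_6h, temps_6h = hours[keep], temps[keep]
```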

Ferrell AWS was installed in December 1980 on the Ross Ice Shelf (77.91° S, 170.82°

E), approximately 100 km east of McMurdo Station (Figure 1). Ferrell was selected here

because it has a longer and more continuous record (Figure 4) than most AWS, which are

generally more remote and harder to service. The 18-year (1981-1998) average

availability for temperature and pressure data, at three-hourly resolution, is approximately

88%. Nine years exceed 95% availability while six years range from 65% to 75%. The

largest gaps in the study period (1979-1993), presumably from long-term equipment

problems, occur in the late austral winter/spring during 1983-1985 and 1991-1992

(Figure 4).


ECMWF Data

The ECMWF 15-year reanalysis data product (ERA-15) provided GCM-scale

meteorological data for the period 1979-1993 (ECMWF 2000). The original ERA-15

production system used spectral T106 resolution with 31 vertical hybrid levels. A lower

resolution product (used here) derived from those data provides 2.5° horizontal resolution

for the surface and 17 upper air pressure levels. Data are available at 0, 6, 12 and 18

UTC. A subset of the available variables (Table 1) and grid points was used at each time

step. Each grid point included all selected variables.

ERA-15 variables and grid selection

Table 1 summarizes ERA-15 variables used to predict AWS pressure and temperature.

Briefly, these variables were chosen because of their physical relationship to the

quantities being predicted. An exception to this guideline is the Julian decimal date used

in predicting temperature. This was added as a proxy for the strong annual signal seen in

temperature. The pressure levels selected represent the lower atmosphere over the station

and capture a substantial fraction of the regional circulation. ECMWF surface data have

not been used as a compromise between local and general predictive skill and to test the

utility of upper air data as a predictor for surface meteorology. This also allows us to use

the ECMWF surface data as a reference for predictive skill.
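
As an aside, the Julian decimal date predictor (defined in the Table 1 caption as the day of the year divided by the total days in the year) is trivial to compute; a sketch, with the function name ours rather than from the original work:

```python
import datetime

def julian_decimal_date(date: datetime.date) -> float:
    """Day of year divided by the number of days in that year; a proxy for the annual cycle."""
    days_in_year = datetime.date(date.year, 12, 31).timetuple().tm_yday
    return date.timetuple().tm_yday / days_in_year

print(julian_decimal_date(datetime.date(1987, 7, 2)))  # about 0.5 at mid-year
```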

Several different configurations of grid points have been tried. The goal was to select a

subset of the lower atmosphere in the AWS region that is well-related to the surface

meteorology at the station itself and thus supports the predictive skill of the selected

predictor variables. Finding the best group of grid points is an exponentially hard

problem, which we have not attempted to solve. Instead, the focus has been on adjacent


points plus points from the corners of a square area centered approximately on the station

(Figure 5). Ferrell is located fortuitously close to an ECMWF grid point. Testing

showed that factors other than the grid point configuration have a substantially larger

influence on performance.

ERA-15 validity

Potential problems have been noted with the ECMWF (re)analysis data over Antarctica,

stemming in part from the flawed surface elevations used in these models (Genthon and

Braun 1995). Elevation errors exceeding 1000 m exist in some areas of Queen Maud

Land and the Antarctic Peninsula (e.g., Figure 3, Genthon and Braun 1995). Topography

in West Antarctica is generally much better but errors from outside our study area will

still have an influence on the reanalysis data (for example, an elevation error for Vostok

station has broad effects on geopotential heights). Evaluations of several operational

products (e.g., Bromwich et al. 1995; Bromwich et al. 2000; Cullather et al. 1998) and

discussions with experienced polar meteorologists (D. Bromwich, J. Turner, pers. comm.)

suggest that the ECMWF analyses are the best data sets currently available for Antarctica

(see also Bromwich et al. 1998). This is expected to remain true until such time as the

currently-in-progress ECMWF 40-year reanalysis is readily available (ECMWF 2001).

3. Methods

At the simplest level, artificial neural networks (ANNs) are a computer-based problem

solving tool inspired by the original, biological neural network — the brain. Because of

their ability to generate non-linear mappings during training, ANNs are particularly well-

suited to complex, real-world problems such as understanding climate (Elsner and Tsonis

1992; Tarassenko 1998). Meteorological examples include an improved understanding


of controls on precipitation in southern Mexico (Hewitson and Crane 1994), prediction of

summer rainfall over South Africa (Hastenrath et al. 1995) and northeast Brazil

(Hastenrath and Greischar 1993), and extreme event analysis in the Texas/Mexico border

region (Cavazos 1999). Our ANNs were implemented with the MATLAB Neural

Network Toolbox (Demuth and Beale 2000; Haykin 1999). Separate ANNs are currently

used for each AWS variable due to the different physical controls involved.

ANN Architectures

Three ANN types were used, all variants of the basic multilayer feed-forward ANN

(Figure 2). All share the same general form of processing node (Figure 3) but use

differing connectivity and activation functions. The multilayer feed-forward (FF) ANN

was selected because of its widespread use in predictive tasks and to follow previous

work with climate downscaling in the literature (e.g., Cavazos 1999). The three variants,

radial basis, general regression (GRNN) and Elman, offer different approaches to the

prediction problem. FF ANNs consist of a large number of highly interconnected, simple

processing nodes (a.k.a. neurons) organized into at least three layers (Figure 2). The

input layer serves to receive input data, with one node for each input variable. The output

layer receives intermediate results from the hidden layer and translates them to the

desired output format. The intermediate, or hidden, layer nodes take inputs from the

preceding layer, usually nodes of the input layer, and pass output to the subsequent layer,

usually nodes of the output layer. The number of hidden nodes is both problem- and

architecture-dependent and is a significant factor in how well the ANN works. Too many

nodes can lead to overfitting while too few will result in the network not learning the

problem effectively. Processing within each node consists of three steps: 1) each input is


multiplied by an input-dependent weight, 2) the weighted values and a node-dependent

bias (possibly zero) are summed, and 3) the result is passed to a non-linear, often

sigmoidal (e.g., tanh), activation function. The output of the activation function

determines the output of the node.
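
For illustration, a minimal sketch (Python/NumPy rather than the MATLAB Neural Network Toolbox used in this work; the dimensions are arbitrary) of this three-step node processing applied layer-wise in a three-layer feed-forward ANN:

```python
import numpy as np

def feedforward(x, W_hid, b_hid, W_out, b_out):
    """Forward pass: weight each input, add the node bias, apply a tanh activation
    in the hidden layer, then a linear output layer for a continuous AWS variable."""
    hidden = np.tanh(W_hid @ x + b_hid)   # steps 1-3 for every hidden node at once
    return W_out @ hidden + b_out

# Toy example: 8 ECMWF predictors -> 5 hidden nodes -> 1 output (e.g., temperature).
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
y = feedforward(x, rng.standard_normal((5, 8)), rng.standard_normal(5),
                rng.standard_normal((1, 5)), rng.standard_normal(1))
```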

Elman networks add to the FF ANN a feedback from the hidden layer output to the

hidden layer input. Adding this recurrent connection allows this type of ANN to detect

(and generate) time-varying patterns (Demuth and Beale 2000). Our experience suggests

that this feature was of no particular benefit to our problem, though this is likely due to

our algorithm for selecting the training records. With our algorithm, the Elman network

appears to behave like a slightly improved FF ANN.
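
A sketch of the recurrent step (again illustrative Python with hypothetical names), showing how the previous hidden-layer output is fed back as an additional input:

```python
import numpy as np

def elman_step(x, h_prev, W_in, W_rec, b_hid, W_out, b_out):
    """One Elman time step: the hidden output from the previous step re-enters the hidden layer."""
    h = np.tanh(W_in @ x + W_rec @ h_prev + b_hid)  # recurrent feedback term W_rec @ h_prev
    y = W_out @ h + b_out
    return y, h  # h is carried forward to the next time step
```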

Radial basis ANNs make a number of changes to the FF ANN design. First, only one

hidden layer is ever used. Second, multiplication and summation are replaced by

calculation of the vector distance between an input vector and the weight vector

associated with each hidden layer node. This yields a vector of distances between the

input pattern and each node's weight vector. The distance vector and the bias are then

multiplied element-wise to adjust the sensitivity of each node. Third, the sigmoid

activation function is replaced by a radial basis function of the form exp(-n2), where n is

the result of the preceding computational steps. The net result of these changes is that a

node will only activate for input patterns closely matching its weight vector (Demuth and

Beale 2000). This means that radial basis nodes only respond to relatively small areas of

the input space, unlike the sigmoidal nodes of FF ANNs. Because of these differences,

radial basis ANNs typically require more hidden layer nodes than FF ANNs, but they

can, in theory, be trained more quickly (Demuth and Beale 2000).
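
The hidden-layer computation just described can be sketched as follows (illustrative Python; the bias plays the sensitivity-scaling role noted above):

```python
import numpy as np

def radial_basis_hidden(x, centers, biases):
    """Radial basis hidden layer: distance to each node's weight (center) vector,
    scaled element-wise by the node bias, then passed through exp(-n**2)."""
    dist = np.linalg.norm(centers - x, axis=1)  # vector distance to each weight vector
    n = dist * biases                           # element-wise sensitivity adjustment
    return np.exp(-n ** 2)                      # near 1 only for inputs close to a center

x = np.array([0.2, -1.0, 0.5])
centers = np.array([[0.2, -1.0, 0.5],   # matches x, so activation near 1
                    [2.0,  2.0, 2.0]])  # far from x, so activation near 0
print(radial_basis_hidden(x, centers, np.ones(2)))
```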


General regression neural networks are a variant of radial basis ANNs and are often used

for function approximation (Demuth and Beale 2000). As before, the hidden layer uses

radial basis functions but with one node for each input vector (this is also sometimes

done for standard radial basis ANNs). The GRNN also modifies the computations in

each output layer node. First, the node weights are fixed during training to be the target

vectors associated with the input vectors. Second, when the ANN is processing input, the

output nodes first compute the dot product of the hidden layer output vector and the

output node weight vector. This value is normalized by the sum of the elements of the

hidden layer output vector before being passed to the linear activation function to

produce the final output value. Thus an input closely matching an input/target pair used

in training will first produce a hidden layer node with an output close to 1. The output

layer then translates that node to the closest original target from training. Outputs for

input values not seen in training depend on the sensitivity of the radial basis nodes.
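
A sketch of the GRNN output computation (illustrative Python; with one hidden node per training input, the prediction is a normalized, activation-weighted combination of the training targets):

```python
import numpy as np

def grnn_predict(x, train_inputs, train_targets, biases):
    """GRNN: radial basis activations against every training input, then a dot
    product with the fixed target weights, normalized by the summed activations."""
    dist = np.linalg.norm(train_inputs - x, axis=1)
    hidden = np.exp(-(dist * biases) ** 2)
    return hidden @ train_targets / hidden.sum()

X = np.array([[0.0], [1.0], [2.0]])   # training inputs
t = np.array([-30.0, -25.0, -20.0])   # training targets (e.g., temperatures)
print(grnn_predict(np.array([1.0]), X, t, biases=np.ones(3)))  # close to -25
```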

All results presented here are derived from the best-performing Elman ANNs for

temperature and pressure. The general regression and radial basis ANNs produced

comparable, but slightly poorer performance. That the three techniques performed

comparably supports the suitability of the ANN approach.

ANN Training and Testing

Our methodology revolves around finding an ANN best suited to predicting an AWS

variable using some set of ECMWF variables as input. This task can be broken down

into three nested/overlapping subtasks: training individual ANNs, creating ensembles of

ANNs with the same inputs, and searching for the best set of input predictors and non-

data-dependent ANN parameters.


ANN Training

FF ANNs need to be taught to produce the desired outputs (AWS observations) from the

inputs (ECMWF data) before they can be used for predictions, a task done iteratively in

three main phases: training, testing and validation (a step dependent on the ANN

architecture). The training phase adjusts the connection weights using an optimization

function that reduces the error in the network's results. Training records are selected

randomly from the set of input observations (covering one calendar year) and represented

between 30% and 70% of the input records. The training error is calculated by

comparing the network's output prediction to the AWS observations for all input/target

pairs. Weights in each layer are then adjusted with a backpropagation algorithm using

the cumulative error from one pass through the complete training set. Testing uses a

second subset (typically 20%) of the input data to evaluate training performance at the

end of each training iteration. Validation is used to avoid overfitting the training data and

tests the network with data distinct from the training and testing samples. Depending on

the architecture being trained, validation used 10% of the input or was done outside the

training/testing cycle with observations from different calendar years. The cycle then

repeats until the desired output is achieved, the error cannot be further reduced, or the error

begins to increase significantly. Details of the training process vary between architectures.
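
For illustration, a sketch of the record partitioning described above (Python; the fractions follow the ranges quoted in the text, and the weight-update step itself, handled by the MATLAB toolbox in our work, is not reproduced):

```python
import numpy as np

def split_records(n_records, train_frac=0.5, test_frac=0.2, val_frac=0.1, seed=0):
    """Randomly partition one calendar year of input/target pairs into
    training, testing and validation subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_records)
    n_train, n_test, n_val = (int(f * n_records) for f in (train_frac, test_frac, val_frac))
    return (idx[:n_train],                                   # used to adjust the weights
            idx[n_train:n_train + n_test],                   # used to evaluate each iteration
            idx[n_train + n_test:n_train + n_test + n_val])  # used to detect overfitting

train_idx, test_idx, val_idx = split_records(n_records=1460)  # ~one year of 6-hourly data
```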

Ensembles

Wrapped metaphorically around individual ANN training is a loop for training from

different initial conditions and training records. The extraordinary number of parameters

involved in ANNs (each weight, bias and input combines multiplicatively) leads to a

highly complex, multidimensional error surface with numerous local minima. Because


of this, it is very important to train multiple versions of the same configuration using

different initial weights and/or training records. We achieve this by running a large

number of iterations (typically 50) of the same ANN configuration. Each instance of the

ANN starts from different randomly initialized weights and is trained with a different

randomly selected set of input data. The top 10 networks (by RMS error) were saved for

further testing. While it has not yet been implemented, some performance improvement

might be gained by stacking the results from, for example, the top 5 ANNs from the

overall best-performing configuration.
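
A sketch of this ensemble loop (illustrative Python; train_fn and predict_fn are placeholders for whatever training and prediction routines are used, and inputs/targets are assumed to be arrays):

```python
import numpy as np

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def train_ensemble(train_fn, predict_fn, inputs, targets, n_runs=50, n_keep=10, seed=0):
    """Train n_runs instances of one ANN configuration, each from different random
    initial weights and a different random training subset; keep the n_keep best by RMSE."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(n_runs):
        subset = rng.choice(len(inputs), size=len(inputs) // 2, replace=False)
        net = train_fn(inputs[subset], targets[subset], rng)       # random init inside train_fn
        runs.append((rmse(predict_fn(net, inputs), targets), net))
    runs.sort(key=lambda pair: pair[0])
    return runs[:n_keep]
```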

Experimental Design

Selection of a best ANN involves numerous dimensions of possible parameters. In the

physical domain, a variety of predictor variables are available (e.g., geopotential height,

thickness) as well as multiple pressure levels. Selection of appropriate grid points adds a

second physical dimension, though results have not been particularly sensitive to our

choices. We have explored many of these dimensions by wrapping an experimental

design loop around the above training/testing process for individual ANNs. Using this

logical loop, we were able to identify the most useful pressure levels and variables.

Optimal grid point selection would also happen in this loop. With the exponential nature

of that task, we have opted to work with a "useful" set of points rather than the "optimal"

set. There are also a number of logical "dials" that can be adjusted to optimize ANN

performance, such as learning rate and momentum, and testing of these variables was

done at this level. Table 1 summarizes the ECMWF variables used for the best ANNs.


4. Results

The nature of our current ANN training methodology, where ANNs are trained with one

calendar year of AWS observations, leads to excellent performance within the training

year but a diminished skill for all other years. This approach is used to demonstrate the

extreme case of only one year's worth of AWS data being available for ANN training, a

situation likely to be true for the most recently installed AWSs. Few of the existing AWS

have the record length available at Ferrell, so it is useful to test the methodology under

worst-case conditions. We also initially believed that one year of training data would be

sufficient to obtain acceptable predictive skill. This is not entirely false, but it is also

clear that performance would likely improve by taking advantage of more training data if

available.

Statistics

Figure 6 graphically summarizes surface temperature prediction results for a training year

(1987) and an arbitrary non-training year (1983). Figure 7 does the same for surface

pressure. In each case, the training year results are noticeably better than the non-training

year results.

Prediction results for all years are summarized in Figure 8. Although most non-training

years have lower performance than the training year results, there are some non-training

years that do nearly as well predicting with the training year ANN. Also shown in Figure

8 are results from training with a different calendar year. Again, the training year has the

best performance and other years do worse. We have tried a number of different training

years (1982, 1987, 1990 and 1993) but have not seen any distinct patterns of performance


in the other years or any particular benefit to any given year. In short, all training years

seem to give roughly the same results for non-training years.

Table 2 summarizes seasonal statistics for the ANN predictions. Results were analyzed

on a seasonal basis to determine if the ANN performance had any relationship to the time

of year. The statistics show a small, possibly negligible effect for pressure with spring

and summer having slightly better results than fall and winter. The results for

temperature appear more compelling with an apparently significant difference between

higher summer and lower winter predictive skill. Spring and fall are nearly identical to

the average RMS error. Thus there may be some relationship to season for temperature

but it is not year-round.
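
For reference, a sketch of the seasonal error calculation (illustrative Python, using the Southern Hemisphere season definitions given in the Table 2 caption):

```python
import numpy as np

SEASONS = {"summer": (12, 1, 2), "fall": (3, 4, 5),
           "winter": (6, 7, 8), "spring": (9, 10, 11)}

def seasonal_rmse(months, predictions, observations):
    """RMS prediction error for each Southern Hemisphere season (months are 1-12)."""
    out = {}
    for name, season_months in SEASONS.items():
        mask = np.isin(months, season_months)
        if mask.any():
            err = predictions[mask] - observations[mask]
            out[name] = float(np.sqrt(np.mean(err ** 2)))
    return out
```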

New Records

After completing the ANN training process, the best ANNs were used to synthesize 15-

year records of pressure and temperature for this AWS. As with Shuman and Stearns

(2001), the final records are a merger of AWS observations and ANN predictions for

those periods where observations are unavailable. A measure of uncertainty for the

predictions was generated by using the ANN to predict available AWS observations for

each year and calculating the RMS error of the predictions. For those years where no

observations are available (e.g., before AWS installation), the average RMS error was

used. This provides the basis for the error bars in Figure 9.
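
A sketch of the merging and uncertainty bookkeeping just described (illustrative Python; gaps in the AWS record are marked with NaN):

```python
import numpy as np

def merge_record(observations, predictions, years):
    """Merge AWS observations with ANN predictions for gap periods, and attach an
    RMS-error bar to each prediction (per-year RMSE, or the multi-year mean RMSE
    for years with no observations at all, e.g., before installation)."""
    merged = np.where(np.isnan(observations), predictions, observations)

    yearly_rmse = {}
    for yr in np.unique(years):
        valid = (years == yr) & ~np.isnan(observations)
        if valid.any():
            err = predictions[valid] - observations[valid]
            yearly_rmse[yr] = float(np.sqrt(np.mean(err ** 2)))
    mean_rmse = float(np.mean(list(yearly_rmse.values())))

    error_bars = np.array([yearly_rmse.get(yr, mean_rmse) for yr in years])
    return merged, error_bars
```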

5. Discussion

To further assess the quality of our methodology, we would like to compare our results

with the best available results from similar attempts to improve the AWS record.


Unfortunately, very little has been published on this subject in Antarctica. One

alternative to our ANN-based technique uses satellite passive-microwave brightness

temperatures (Shuman and Stearns 2001). In lieu of other alternate techniques, it is also

reasonable to compare our performance to available model results, such as the ECMWF

surface data.

Comparison to a satellite-based technique

The recent work by Shuman and Stearns (2001, hereafter SS) used satellite passive-

microwave brightness temperatures and approximate surface emissivity to reconstruct

surface temperatures at a number of AWS in West Antarctica. In the satellite-based

methodology, three-hourly AWS observations were first averaged to daily values before

comparison with the daily passive-microwave brightness temperatures. Our technique

produces a calculated surface temperature for all available six-hourly AWS observations

(for ECMWF time steps at 0, 6, 12 and 18 UTC), thus yielding up to four predictions each

day (for those days with no missing six-hourly AWS observations). Thus both our

observed and calculated daily means are based on up to four six-hourly values whereas

only one daily calculated value is available from the SS technique.
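
A sketch of the daily averaging used on our side of the comparison (illustrative Python; only days with all four 6-hourly values present are kept):

```python
import numpy as np

def daily_means(values, day_index, n_per_day=4):
    """Average 6-hourly values (0, 6, 12, 18 UTC) to daily means, skipping any day
    with fewer than n_per_day valid values (NaN marks missing observations)."""
    means = {}
    for day in np.unique(day_index):
        sample = values[day_index == day]
        if len(sample) == n_per_day and not np.isnan(sample).any():
            means[day] = float(sample.mean())
    return means
```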

Error analyses documented in SS include comparisons of calculated and observed surface

temperatures on a daily and annual basis. Figure 10 shows our mean daily surface

temperatures. There is a hint of improved predictive accuracy at higher temperatures but

this does not hold up to closer examination (e.g., a probability density function did not

show a strong difference for higher temperature predictions). There are otherwise no

distinct artifacts such as the curvature (from some calculated temperatures being too low

in spring and fall or too high in summer and winter) seen in some SS results (SS, Figure


10). Table 3 summarizes statistics based on differences between the calculated and

observed daily surface temperatures. The values in the first three columns reproduce SS

Table 4. The column headed "Training" shows results from the best network trained on

1987 observations. The remaining columns summarize performance of the same network

on all other years. ANN performance is clearly comparable to the SS methodology for

the training year. For the remaining years, the mean error in our results is larger (0.56 °C

versus 0.14 °C) but the standard deviation (σn-1) is nearly identical (5.48 °C versus 5.52

°C). As the training year results demonstrate, the errors from the ANN-based

methodology could be greatly reduced by using one ANN per year at the expense of

greater complexity. Improvement may also be possible by refining the method used for

selection of training records.

The transfer function used in the SS methodology depends on a modeled emissivity to

convert passive-microwave brightness temperatures to surface air temperature. The

accuracy of this transfer function is thus a significant contributor to the overall accuracy

of the calculated temperature values. The transfer function is based on temporally

overlapping brightness temperatures and AWS observations that are used to generate a

modeled emissivity time series. Thereafter, surface temperature is estimated from the

brightness temperatures via the emissivity time series. Significant departures in

microwave brightness temperatures can arise due to melt events and associated liquid

water in the snowpack, and to the density contrast remaining when the liquid water

refreezes. This may lead to incorrect calculated surface air temperatures if the transfer

function does not adjust for the changed relationship between brightness temperature and

air temperature (e.g., SS, Figure 9). This type of error might be reduced by including


surface temperature data associated with melt events in the transfer function calibration

process. Suitable data may not always be available, however. Our methodology should

be immune to errors due to melt events since it does not rely on characteristics of the

snow surface. A site, such as Lettau, with observed temperatures near or above freezing

could be used to confirm this assumption (Ferrell observations are all below freezing).

As pointed out in SS, however, only merged records with substantial missing summer

temperature observations are likely to be susceptible to melt event related errors.

Annual averages of calculated and observed daily surface temperatures were also

analyzed in SS. Figure 11a compares annual means at Ferrell AWS for those years with

at least 340 days of observations; Figure 11b shows all years (1981-1993). Differences

between annual averages of calculated and observed values are all less than 1.5 °C. Table

4 summarizes the statistics of this comparison. The mean error in the ANN-based

methodology is directly comparable to the errors in the SS technique (their Table 5). Our

standard deviation (σn-1) is at the high end of the SS range but still reasonably low.

Adding in years with fewer than 340 days of observations (the lowest being 1992 with

~236 valid days) does not change the mean or standard deviation significantly.

The ANN-based technique compares well with the satellite-based approach. Our

approach is also immune to melt event-related problems, has minimal exposure to

changes in sensors, and is based on data (ECMWF) with no gaps. Furthermore, our

technique is also applicable to surface pressure.


Evaluation of ECMWF surface data

ECMWF surface data could also be used to fill the gaps in the AWS records and would

appear to be just as reasonable as any data from an empirical methodology. They also

provide an alternative benchmark since these data were not used in our ANN training.

The nearest ECMWF grid point to Ferrell AWS is at 77.5° S, 170° E, approximately 50

km away (Figure 5). A comparison of 2-meter temperatures from this grid point to the 2-

m temperatures observed at the AWS (Figure 12) reveals the flaws in the ECMWF data.

While the correlation (0.91) and standard deviation (5.2 °C) are similar to our ANN-

based results (Figure 10), the RMS error is approximately twice as large (10.3 °C versus

5.4 °C). There are also clear biases in the ECMWF data not present in our predictions.

Thus our ANN-based predictions of surface temperature are also superior to the ECMWF

model data.

The ECMWF data fare better for surface pressure (Figure 13) and are comparable to our

ANN-based predictions (Figure 14). Our RMS error (2.9 mbar versus 4.7 mbar) and

mean error (0.41 mbar versus 4.13 mbar) are better, suggesting a benefit, albeit possibly

slight, to our methodology for this variable.

6. Conclusion

This work has shown the utility of an ANN-based approach to predicting AWS

observations of near-surface temperature and pressure using variables derived from

GCM-scale numerical forecast models. With the current methodology, skill within the

training year is high while predictions outside the training year are of moderately lower

quality. This is not seen as a major issue since there are still alternative training methods

and approaches remaining to be explored.


The ANN-based technique also compares well with the satellite-based approach. Our

approach should be immune to melt event-related problems, has minimal exposure to

changes in sensors, and is based on data (ECMWF) with no gaps. Our results also do not

appear to be strongly seasonally biased, although there may be a minor seasonal

dependence for temperature. Furthermore, our technique is also applicable to surface

pressure. Lastly, we will be able to extend our methodology into the pre-satellite era

once the ECMWF 40-year reanalysis data sets become available.

Our results also compare well to the ECMWF surface data. These data are not used in

our methodology, so independent comparisons can be made to the AWS observations.

Our temperature predictions have an RMS error approximately one-half that of the

ECMWF surface data without the biases present in the latter. This suggests that while the

upper air data may have similar imperfections, the ANN technique is not sensitive to

them. While this may be true in a sense, it is also possible that improvements in the

quality of the upper air data will require revisiting the ANN training process so that the

relationship to the AWS observations can be relearned.

By using one calendar year of training data, we have shown what can be expected from

applying this technique to AWS with short observational records. This should also be the

worst case for those AWS with longer records. Further research will explore using more

of the available record in training for those sites where this is an option, including Ferrell

AWS itself.


References

Bromwich, D. H., R. I. Cullather, et al., 1998: Antarctic precipitation and its contribution

to the global sea-level budget. Ann. Glaciol., 27, 220-226.

Bromwich, D. H., F. M. Robasky, et al., 1995: The atmospheric hydrologic cycle over the

Southern Ocean and Antarctica from operational numerical analyses. Mon. Wea.

Rev., 123, 3518-3539.

Bromwich, D. H., A. N. Rogers, et al., 2000: ECMWF Analyses and Reanalyses

Depiction of ENSO Signal in Antarctic Precipitation. J. Climate, 13, 1406-1420.

Cavazos, T., 1999: Large-scale circulation anomalies conducive to extreme events and

simulation of daily rainfall in northeastern Mexico and southeastern Texas. J.

Climate, 12, 1506-1523.

Crane, R. G. and B. C. Hewitson, 1998: Doubled CO2 precipitation changes for the

Susquehanna basin: Down-scaling from the GENESIS general circulation model.

Inter. J. Climatol., 18, 65-76.

Cullather, R. I., D. H. Bromwich, et al., 1998: Spatial and temporal variability of

Antarctic precipitation from atmospheric methods. J. Climate, 11, 334-367.

Demuth, H. and M. Beale, 2000: Neural Network Toolbox. Mathworks, Inc., 844 pp.

ECMWF, cited 2001: ERA-15. [Available online from

http://wms.ecmwf.int/research/era/Era-15.html.]

ECMWF, cited 2001: ERA-40 Project Plan. [Available online from

http://wms.ecmwf.int/research/era/Project_plan.html.]


Elsner, J. B. and A. A. Tsonis, 1992: Nonlinear Prediction, Chaos, and Noise. Bull. Amer.

Meteor. Soc., 73, 49-60.

Genthon, C. and A. Braun, 1995: ECMWF Analyses and Predictions of the Surface

Climate of Greenland and Antarctica. J. Climate, 8, 2324-2332.

Hastenrath, S. and L. Greischar, 1993: Further Work on the Prediction of Northeast

Brazil Rainfall Anomalies. J. Climate, 6, 743-758.

Hastenrath, S., L. Greischar, et al., 1995: Prediction of the Summer Rainfall over South

Africa. J. Climate, 8, 1511-1518.

Haykin, S. S., 1999: Neural Networks: A Comprehensive Foundation. Prentice Hall, 842

pp.

Hewitson, B. C. and R. G. Crane, 1994: Precipitation controls in southern Mexico.

Neural Nets: Applications in Geography, B. C. Hewitson and R. G. Crane, Eds.,

Kluwer Academic, 121-143.

Lazzara, M. A., cited 2000: Antarctic Automatic Weather Stations Web Site Home Page.

[Available online from http://uwamrc.ssec.wisc.edu/aws/.]

Shuman, C. A. and C. R. Stearns, 2001: Decadal-length composite inland West Antarctic

temperature records. J. Climate, 14, 1977-1988.

Tarassenko, L., 1998: A Guide to Neural Computing Applications. John Wiley & Sons,

Inc., 139 pp.


Tables

Table 1. Variables used to predict AWS near-surface observations of pressure and

temperature. Geopotential height (m), wind speed (m/s) and wind direction (°) are from

ECMWF datasets. Thickness (m) and temperature advection (°C/km) are derived from

ECMWF data. Julian decimal date is simply the day of the year divided by the total days

in the year.

Table 2. Seasonal statistics for temperature and pressure prediction errors (1987 training

year). Seasons are defined for the Southern Hemisphere as Dec-Feb (summer), Mar-May

(fall), Jun-Aug (winter), and Sep-Nov (spring). n refers to the number of data points in

each time period. r refers to a simple linear correlation.

Table 3. Prediction error (difference between calculated and observed) statistics for daily

mean temperatures (°C). Values in first three data columns from Table 4 of Shuman and

Stearns (2001). The mean is the average of the error for their four AWS (Byrd, Lettau,

Lynn and Siple). The minimum and maximum are the extrema of the published values.

Remaining columns are from our work with AWS Ferrell. Training values are from the

ANN training year (1987) using the best ANN. Values in the remaining columns are for

all other years between 1981 and 1993. The 1987 ANN was used to calculate

temperatures for these years. Each year was predicted individually with statistics

calculated by year.

Table 4. Statistics for differences between calculated and observed daily mean

temperatures (°C) on an annual average basis. Values in first two columns from Table 5

of Shuman and Stearns (2001) and are based on results from four AWS (Byrd, Lettau,


Lynn and Siple). Remaining values based on AWS Ferrell. Only years with at least 340

days of observations were included in data columns one to three (seven years for our

results). All 13 years were included in data column four (1979 and 1980 were omitted

because they had no observations).


Figures

Figure 1. Location map for Antarctic automatic weather stations (AWS) described in

text. Ferrell AWS is the subject of this study. Siple, Byrd, Lettau and Lynn AWS were

studied in Shuman and Stearns (2001). Remaining labels for reference. Ferrell AWS

was installed on the Ross Ice Shelf (77.91° S, 170.82° E) in December 1980 (no 1980

observations have been used in this work).

Figure 2. Generalized multi-layer feed-forward artificial neural network (ANN).

Figure 3. Sample artificial neural network processing node with three inputs, a sigmoidal

activation function and no bias.

Figure 4. Ferrell AWS 6-hourly observations of (a) 2-m pressure and (b) 2-m air

temperature, both for 1981-1993. Data extracted from the three-hourly quality-controlled

AWS archive data sets available at the University of Wisconsin-Madison FTP site

(ice.ssec.wisc.edu). Time steps 0, 6, 12 and 18 UTC selected to match ECMWF data.

Figure 5. ECMWF grid points around Ferrell AWS (central black square). Light lines

show ECMWF 2.5° x 2.5° horizontal grid. Black circles show sample grid point

locations used for training and prediction. Base AVHRR image by Matt Lazzara,

University of Wisconsin.

Figure 6. Temperature prediction results for a training year (1987: a, b) and a non-

training year (1983: c, d). In the scatter plots (a, c), the thin solid line from lower left to

upper right is the ideal 1:1 line where all points would fall with perfect predictive skill.

The thicker solid line is a linear regression through the data points (the equation of this


line is shown in the legend box). Thin dashed lines are offset one RMS error from the

ideal 1:1 line to help show spread in the error. Standard statistics related to the difference

between observed and predicted are shown in the upper left corner. The absolute

prediction error (predicted - observed) is summarized in the error distribution plots (b, d).

The thin sloping line represents a normal distribution. Offsets from this line are offsets

from a normal distribution in the error. The vertical line is placed at the RMS error.

Figure 7. As in Figure 6 but for pressure predictions.

Figure 8. Summary of RMS errors for all years: (a) best temperature ANN trained with

1987 observations, (b) best pressure ANN also trained with 1987 observations, (c) best

pressure ANN trained with 1982 observations. Dashed horizontal line is the mean RMS

for all years excluding the training year.

Figure 9. Reconstructed six-hourly (a) surface pressure and (b) temperature at Ferrell

AWS for 1979-1993 (original observations as thin line, ANN-modeled values as points

with error bars). The ANN was trained with 1987 data. ECMWF data were used to fill

gaps and extend record back to 1979. Error bars on predictions are based on the RMS

error for the calendar year of the predictions (for 1981-1993) or the average RMS error

for 1981-1993 (for 1979-1980).

Figure 10. Daily mean calculated surface temperatures versus observations, 1981-1993.

Lines as in previous scatter plots.


Figure 11. Annual mean calculated surface temperatures versus observations: (a) only

years with at least 340 days of observations; (b) all years, 1981-1993. Lines as in

previous scatter plots.

Figure 12. ECMWF 2-m temperatures from grid point 77.5° S, 170° E compared to

observed 2-m temperatures at AWS Ferrell (77.91° S, 170.82° E). Both data sets are

daily averages of 6-hourly data. Lines as in previous scatter plots.

Figure 13. ECMWF surface pressures from grid point 77.5° S, 170° E compared to

observed surface pressures at AWS Ferrell (77.91° S, 170.82° E). Both data sets are

daily averages of 6-hourly data. Lines as in previous scatter plots.

Figure 14. Daily mean calculated surface pressures versus observations, 1981-1993.

Lines as in previous scatter plots.


Table 1.

Pressure                          Temperature
850 mb geopotential height        850 mb geopotential height
700 mb geopotential height        850 mb temperature advection
700-850 mb thickness              700-850 mb thickness
850 mb wind speed, direction      Julian decimal date


Table 2.

                     Temperature                    Pressure
Period       n       RMSE (°C)   r          n       RMSE (mbar)   r
All data     16281   6.4         0.85       16305   3.3           0.96
Winter       4252    7.7         0.51       4260    3.7           0.96
Spring       3424    6.6         0.78       3424    2.9           0.96
Summer       3935    4.6         0.79       3948    2.5           0.94
Fall         4670    6.3         0.67       4673    3.7           0.94


Table 3.

                 Shuman and Stearns                This Work
                                                   Training    Non-Training Years
Statistics       Minimum    Mean      Maximum      Year        Mean    σn-1    Minimum   Maximum
Mean             0.094373   0.14198   0.19882      0.01        0.56    1.11    -1.95     1.78
σn-1             4.0027     5.5249    6.2450       3.0         5.48    0.44    4.8       6.1


Table 4.

                 Shuman and Stearns        This Work
Statistics       Minimum     Maximum       Best Years    All Years
Mean             0.076       0.489         0.45          0.50
σn-1             0.209       1.001         1.3           1.1

Figure 1. (graphic: AWS location map; see caption above)

Figure 2. (graphic: generalized feed-forward ANN schematic)

Figure 3. (graphic: sample ANN processing node)

Figure 4a. (graphic) Panel title: "Ferrell AWS 6-hourly Pressure 1979-1993"; y-axes in mbar.

Figure 4b. (graphic) Panel title: "Ferrell AWS 6-hourly Temperature 1979-1993"; y-axes in deg C.

Figure 5. (graphic: ECMWF grid points around Ferrell AWS)

Figure 6. (graphic) Panel statistics: training year 1987, RMSE = 4.1 °C, r = 0.94, mean = -0.07 °C, std dev = 4.1 °C, fit y = 0.88x - 3.02; non-training year 1983, RMSE = 6.2 °C, r = 0.86, mean = 0.78 °C, std dev = 6.1 °C, fit y = 0.77x - 4.79.

Figure 7. (graphic) Panel statistics: training year 1987, RMSE = 2.3 mbar, r = 0.98, mean = -0.05 mbar, std dev = 2.3 mbar, fit y = 0.95x + 46.73; non-training year 1983, RMSE = 3.1 mbar, r = 0.96, mean = 0.36 mbar, std dev = 3.0 mbar, fit y = 0.94x + 60.65.

Figure 8. (graphic) Non-training-year mean RMS errors: (a) temperature, 1987 ANN, 6.6 °C; (b) pressure, 1987 ANN, 3.3 mbar; (c) pressure, 1982 ANN, 3.6 mbar.

Figure 9a. (graphic) Panel title: "Ferrell AWS 6-hourly Pressure 1979-1993"; y-axes in mbar.

Figure 9b. (graphic) Panel title: "Ferrell AWS 6-hourly Temperature 1979-1993"; y-axes in deg C.

Figure 10. (graphic) Statistics: RMSE = 5.4 °C, r = 0.89, mean = 0.55 °C, std dev = 5.4 °C, fit y = 0.81x - 4.22.

Figure 11a. (graphic) Statistics (years with at least 93% data): RMSE = 1.3 °C, r = 0.59, mean = 0.45 °C, std dev = 1.3 °C, fit y = 0.57x - 10.29.

Figure 11b. (graphic) Statistics (all years): RMSE = 1.2 °C, r = 0.90, mean = 0.50 °C, std dev = 1.1 °C, fit y = 0.75x - 5.79.

Figure 12. (graphic) Statistics: RMSE = 10.3 °C, r = 0.91, mean = 8.86 °C, std dev = 5.2 °C, fit y = 0.70x + 1.31.

Figure 13. (graphic) Statistics: RMSE = 4.7 mbar, r = 0.98, mean = 4.13 mbar, std dev = 2.3 mbar, fit y = 0.95x + 53.94.

Figure 14. (graphic) Statistics: RMSE = 2.9 mbar, r = 0.96, mean = 0.41 mbar, std dev = 2.9 mbar, fit y = 0.95x + 50.99.