
184 Research on forecasting / Journal of Forecasting 12 (1996) 183-187

3. Forecasts can provide good benchmarks for the evaluation of experiments. 'Before and after' experimental designs are often deficient because other things change. Evaluations that use ex ante forecasts offer advantages in that they take account of the other things that are expected to change. Winston conducted what seems to be a thorough search for studies with ex ante forecasts (although he does not say how the search was conducted). He found almost 30 studies that had predicted the effects of deregulation. He then compared the ex ante predictions with the outcomes. The ex post analysis is informal. For example, none of the forecasters of the effects of banking deregulation had examined the impact of the government's policy of insuring the investments made by savings and loan institutions.

4. The use of complex methods led to no obvious gains in forecast accuracy.

5. The use of complex theories seems to have been associated with less accurate prediction. The simple economic prediction that regulation is harmful to consumers proved to be robust. Also, wages typically went down, as predicted. Employment predictions varied depending upon the impacts of improved efficiency (which would reduce employment) and increased consumer demand (due to improvements in price and service).

6. The biggest errors in the economists' predictions came about when they failed to consider important causal factors. In particular, the omitted factors usually involved technological change. My speculation is that this is likely to occur when factors that have been constant in the past change significantly over the forecast horizon. While these factors cannot be identified by formal analyses of historical data, they can sometimes be anticipated by asking a number of experts to brainstorm the factors that might possibly change in the future. When one looks at the problem retrospectively, it seems surprising that certain factors were overlooked. Surely someone would suggest that the government's policy of insuring against loss in investments by savings and loan institutions might have an influence on the S&Ls' investment policies, or that the failure to develop an efficient pricing scheme for airports would create service restrictions.

Winston's paper deals with an important subject. The design, which compares actuals against forecasted results, represents a marked improvement over the commonly used 'before-after' studies in economics. Finally, the writing is a model of clarity.


J. Scott Armstrong

[Clifford Winston, The Brookings Institution, 1775 Massachusetts Ave. NW, Washington, DC 20036, USA]

SSDI 0169-2070(95)00649-4

J.S. Lim and M. O'Connor, 1995, Judgemental adjustment of initial forecasts: its effectiveness and biases, Journal of Behavioral Decision Making, 8, 149-168.

The integration of judgemental and statistical forecasts is common practice in many organizations, and research into this process can have important practical implications. There are, of course, a number of ways in which the integration can be carried out, including simple averaging of the two forecasts, ad hoc judgemental adjustment of statistical forecasts, and Bayesian methods.

Lim and O'Connor's paper examines another approach: after making an initial judgemental forecast, subjects in three laboratory experiments were supplied with a statistical forecast and invited to revise their estimate in the light of this new information. The experiments were designed to discover (i) whether people can make effective use of statistical forecasts to improve their original judgements and (ii) the conditions that are most likely to facilitate effective integration. The results are both interesting and depressing.

The central message of the paper is that the subjects were far too conservative in revising their initial estimates. When supplied with highly reliable statistical forecasts, together with an indication of how reliable these forecasts were, they attached too great a weight to their original judgements. Even when they were forced to look at a computer screen which told them that their initial forecasts were far less accurate than those of the statistical method, they persisted in making insufficient adjustments to these initial forecasts. Indeed, as time progressed this tendency became more marked.

There was some evidence that subjects could take into account the reliability of the statistical forecasts: revision was greater when the forecast was more reliable, but it still tended to be insufficient. The authors hypothesised that this poor performance resulted from the inability of subjects to perform the mental arithmetic required to carry out the integration effectively. However, when subjects were merely asked to provide weights for their own and the statistical forecasts, with the computer calculating the weighted average forecast, performance actually deteriorated.
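The conservatism the paper reports can be expressed as the weight a subject implicitly keeps on their own forecast: if the revised forecast is treated as a weighted average of the initial and statistical forecasts, that weight can be recovered by simple algebra. A minimal sketch, with hypothetical numbers that are not taken from the paper:

```python
def implied_self_weight(initial, statistical, revised):
    """Solve revised = w*initial + (1 - w)*statistical for w,
    the weight implicitly placed on the initial judgemental forecast."""
    if initial == statistical:
        raise ValueError("weight is undefined when the two forecasts agree")
    return (revised - statistical) / (initial - statistical)

# Hypothetical case: a subject forecasts 100, the statistical model
# says 140, and the subject revises only as far as 110.
w = implied_self_weight(100, 140, 110)
print(w)  # 0.75 -- three quarters of the weight stays on own judgement
```

On this reading, "insufficient adjustment" simply means w stays well above the value the relative accuracy of the two forecasts would justify.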

It is, of course, wise to be cautious before drawing inferences about forecasting in a practical context from a laboratory-based study involving students as subjects (Goodwin and Wright, 1993), and the authors acknowledge this. However, the subjects were apparently well motivated (monetary incentives were offered for accurate forecasts), they were part-time students with full-time employment in business, and they had completed a short course in forecasting techniques. Moreover, the three experiments appear to have been well designed and founded on a thorough review of the relevant forecasting and psychology literatures.

If these results are a reliable indication of the performance of judgemental forecasters in practice, then the implications are clear. Judgemental and statistical forecasts should be made independently, and a mechanical process (e.g. simple averaging) should be used to combine them.
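The mechanical combination recommended here is trivial to automate. The sketch below assumes a simple unweighted average of two independently produced forecast series; the function name is my own:

```python
def combine_forecasts(judgemental, statistical):
    """Combine two independently produced forecast series by
    simple averaging, period by period."""
    if len(judgemental) != len(statistical):
        raise ValueError("forecast series must be the same length")
    return [(j + s) / 2 for j, s in zip(judgemental, statistical)]

# Hypothetical forecasts for three periods from a judge and a model.
print(combine_forecasts([100, 120, 115], [140, 130, 125]))
# [120.0, 125.0, 120.0]
```

The point of routing the combination through code rather than through the forecaster is precisely to remove the opportunity for the insufficient adjustment the experiments document.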

Reference

Goodwin, P. and G. Wright, 1993, Improving judgmental time series forecasting: a review of the guidance provided by research, International Journal of Forecasting, 9, 147-161.

Paul Goodwin

[Joa Sang Lim and Marcus O'Connor, School of Information Systems, The University of New South Wales, P.O. Box 1, Kensington, NSW 2033, Australia]

SSDI 0169-2070(95)00650-8

John Hanke and Pam Weigand, 1994, What are business schools doing to educate forecasters?, Journal of Business Forecasting, Fall, 10-12.

Those who are teaching forecasting should find something interesting in Hanke and Weigand's survey of forecasting courses. This is the third time that this survey has been done (Hanke, 1984, 1989). (Incidentally, Hanke, 1989 is incorrectly cited as Hanke, 1988 in their paper.) This survey was sent to the 743 business schools in the American Assembly of Collegiate Schools of Business, and there were 317 responding institutions (43%).

Hanke and Weigand compare some of their results with the survey done 10 years earlier (Hanke, 1984). For example, in 1983, 58% of the schools said that they offered a forecasting course at either the undergraduate or graduate level. This figure had increased to 62% in the 1987 survey (although this might be attributable to a sharp decrease in the response rate, from 52% to 34%). In this latest survey, conducted in 1993, only 47% of the schools offered a course in forecasting. As to why they do not offer such courses, the answers were quite diverse. The most frequently mentioned (at