
Forecasting: Academia versus Business
SUJIT SINGH
Reprinted from Foresight, Issue 41 (Spring 2016)

In a recent discussion on LinkedIn, the question was raised: Is there a disconnect between the academic and the business world when it comes to forecasting?

As you can imagine, this is a matter of checking the record and reporting on what one finds in the data. Since I have neither the data nor the resources to conduct this research, though, a comment from me perhaps amounts to overreaching. Still, I’ll try to present what I personally see in the marketplace and do so (hopefully) without any biases.

TYPES OF FORECASTING RESEARCH

In my mind, there are four types of research on forecasting:

• Statistics/mathematics-focused
  - With many subfields, such as time-series forecasting and causal modeling
  - For the most part, using computer software

• Behavioral-focused
  - Judgmental forecasts and judgmental adjustments to statistical forecasts
  - Behavioral biases and silo mentalities

• Big-data-based
  - A relatively new line of research
  - Applies to both structured and unstructured data

• Business-performance-focused
  - The impact of better (or worse) forecasting practices on the performance of the business
  - Forecast performance assessment and benchmarking

Statistical Research

For the most part, statistical research is ahead of actual business practice. There appears to be a long gap between the publication of research and its first implementation in commercial software. I believe the gap here is not only wide but growing wider as more and more factors are brought into the forecasting model, resulting in greater complexity and confusion about the meaning of the results.

Very relevant, then, is the question of whether more complex methods actually perform any better than simpler alternatives. There is a body of evidence to the contrary, including the very recent Foresight article by Stephan Kolassa (2016), “Sometimes It’s Better to Be Simple than Correct.”

A proposed new method should demonstrably improve upon existing (and simpler) alternatives, not just on one specific sample but across a broader range of data sets.

Behavioral Research

Similarly, behavioral research is ahead of the curve compared to what is being used in the business world. It has reached a critical mass whereby almost every other book seems to be touching on how the human brain works and how it affects the ways we do things. With this awareness, I see much discussion and some accommodation in the process of forecasting.

Big Data

Big-data-based research has come into its own in the last decade. The essential idea is to incorporate more detailed (and larger) data sets into forecasting models. A newer variant involves the use of unstructured data such as comments in tweets, blogs, and sections of websites.

Business Performance

This fourth type of research is the most relevant to our discussion. You would think—and probably hope—that this type of research makes the connection between theory and practice and points out the benefits of good forecasting to the business user. It is in this area that research can provide the most value to the business user.

Forecasting deals with the future, and the future is inherently uncertain. Forecasters with any reasonable training and/or experience recognize that one cannot codify uncertainty. As a result, no one really buys into the idea that a particular forecast method is perfect. The focus is put on reducing forecast error as much as possible.

I think that traditionally this has been done by comparing forecast accuracy before and after a project or across other comparative situations. Forecast-accuracy improvements are fine as far as they go. However, the business world is always interested in the return on investment (ROI).

ROI is important for many reasons. During project approval stages we need to calculate the anticipated ROI in order to win management’s approval for the project. And it’s often the case that multiple projects are vying to get access to money from the same pool. Similarly, at the end of the project we need to show the gains made and the percentage of them that can be attributed to the forecast improvements.
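To make the arithmetic behind such approvals concrete, here is a minimal sketch of the kind of ROI calculation a project proposal might contain. All figures are invented for illustration and are not drawn from the article.

```python
def simple_roi(total_benefit, project_cost):
    """ROI in its common project-approval form: net gain divided by cost."""
    return (total_benefit - project_cost) / project_cost

# Invented figures: a $200k forecasting project expected to free
# $150k/year in inventory carrying costs over a three-year horizon.
annual_saving, years, cost = 150_000, 3, 200_000
print(f"ROI = {simple_roi(annual_saving * years, cost):.0%}")  # -> 125%
```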

THE GAPS

It is here that I see the biggest gap. The types of questions that I see practitioners asking include these:

• What is a 10% improvement in forecast accuracy worth to my business?
  - At what level is this 10% accuracy improvement taking place – the overall business level or the SKU-Customer level?
  - At what time lag is this 10% accuracy improvement? One month in advance or three months in advance for companies with long lead times?

• Is this the same if the starting forecast accuracy is 20%? 40%, 60%, 80%, 90%? Is the impact easier to achieve if the starting forecast accuracy is very low?

• What is the minimum/average/maximum improvement one should expect using a commercial software package relative to (a) what the organization is doing currently and (b) a naïve (no change) model?

• What is the minimum/average/maximum improvement one should expect using bottom-up/top-down/middle-out approaches?

• Can forecast accuracy be correlated with a forecastability score such as the Coefficient of Variation? Theoretically, more-forecastable data should make it easier to get to more accurate results. What are the appropriate cutoffs of the said forecastability scores to determine different levels of forecastability? For each of these levels, what is an achievable degree of accuracy in the forecast? (A rough sketch of such a classification appears after this list.)

• What is the expected improvement from collaborative input supplied by people in the field, such as sales reps?

• If I measure forecast accuracy at the top level of my business, are there reliable ways to estimate forecast accuracy at the lower levels of my forecast hierarchy?
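As a rough illustration of the forecastability question above, here is a minimal sketch of a Coefficient of Variation classification. The 0.5 and 1.0 cutoffs are arbitrary placeholders, not established thresholds; choosing defensible cutoffs is precisely the research question being posed.

```python
import statistics

def coefficient_of_variation(history):
    """CoV: standard deviation of demand divided by mean demand.
    Higher values suggest a series that is harder to forecast."""
    mean = statistics.mean(history)
    return statistics.stdev(history) / mean if mean else float("inf")

def forecastability_bucket(cov, smooth_cutoff=0.5, erratic_cutoff=1.0):
    """Bucket a series by CoV; the cutoffs are illustrative only."""
    if cov < smooth_cutoff:
        return "smooth (more forecastable)"
    if cov < erratic_cutoff:
        return "erratic"
    return "lumpy (least forecastable)"

monthly_demand = [120, 95, 130, 110, 240, 105, 90, 125]  # toy SKU history
cov = coefficient_of_variation(monthly_demand)
print(f"CoV = {cov:.2f} -> {forecastability_bucket(cov)}")
```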

These questions elicit great interest from business folks. Even when practitioners have no interest in the details behind the science, they still need to know the answers to these types of questions in order to gain project approval from their management offices.

I see limited work along these lines coming out of academia. On the other hand, a lot of the work in this area is done by software vendors – but can this work be considered trustworthy? It is my hope that forecasting researchers will give more attention to these practical aspects of forecasting research.

Sujit Singh, CFPIM, CSCP, Chief Operating Officer of Arkieva (www.arkieva.com), is responsible for managing the delivery of software and implementation services as well as customer relationships and the day-to-day operations of the corporation.

[email protected]


Commentary: It Takes Two to Tango
JOHN E. BOYLAN AND ARIS A. SYNTETOS

INTRODUCTION

In an interesting article published in the last issue of Foresight, Sujit Singh (2016) comments on the widening gap between academic forecasting and business forecasting. Sujit argues that there are two main reasons for the gap. First, academics are not spending enough time researching the questions that practitioners are asking. Second, where new methods have been proposed by academics, they are often overly complex and under-evaluated.

We agree with Sujit that the gap between academics and practitioners of business forecasting is too wide. However, we do not agree that the fault for the gap should be laid entirely at the door of academia. “It takes two to tango,” as the saying goes. There is much that academics need to do and we outline our own views on this. But there are also things that practitioners need to do. Moreover, many of these things can be done much better working together rather than apart.

DEFINING AND REFINING THE GAP

While acknowledging that the academia/business practitioner forecasting gap is real, we’d like to tighten our focus on three particular areas:

• Knowledge Gap. Many practitioners know little forecasting theory, even relating to the most basic of forecasting methods. Similarly, many researchers are conducting their work with little knowledge of the practical settings in which forecasting is conducted. Both of these issues need to be addressed if there is to be any real hope of closing the remaining gaps.

• Research Gap. We agree with Sujit that there are topics that interest practitioners but are underrepresented in academic research. In fact, we would go further and say that there are whole areas of research that receive too little attention in academia. Our own area of specialization, supply-chain forecasting, is an example of this. Unfortunately, some of the research that is published is rather remote from real life, as they say – a consequence of the authors’ lack of interest in or understanding of the business context. For progress to be made, stronger research partnerships need to be established between academia and business.

• Implementation Gap. We also agree that there is a wide gap between new research findings being published and their implementation in practice. We strongly support Sujit’s view that methods should be tested on a wide variety of data sets and compared with simpler benchmark methods. However, there are examples of innovations in forecasting methods that have performed well on real data sets but have yet to be implemented by the main software providers. In these instances, software developers could do more to translate theory into practice for the benefit of their clients.

THE KNOWLEDGE GAP

There is a clear need for better training of those involved in business forecasting. In our experience, practitioners may have received some training in the basic operation of a forecasting or Enterprise Resource Planning (ERP) system, but hardly any training in the forecasting methods used by the software. This is a false economy. Companies may have invested large sums in the software itself. Their return on that investment could be increased substantially if the software were used by well-trained forecasters.

Another factor that may inhibit practitioners’ training is concern on their part that they have too little mathematical background to benefit from it. Here, the onus is on training providers, including universities, to design courses that genuinely enhance understanding of forecasting but make minimal assumptions of previous knowledge.

There is also a need for more research on forecasting education. This is a neglected aspect of research, with the potential to enhance teaching methods to the benefit of those learning them for the first time.

The second knowledge gap rests with academics; most have never worked in business organisations and have had little if any contact with companies during their academic careers. There are those academic researchers who test their theories using company data but without any further contact with the business world. This goes only so far towards addressing the knowledge gap. Greater involvement in the real problems faced by organisations is needed. This may be achieved in a variety of ways, including business secondment, applied research projects, and supervision of student projects in industry.

THE RESEARCH GAP

From a research perspective, “forecasting in organisations” is quite distinct from “time-series analysis.” While the latter thrives, the former is somewhat neglected. This is regrettable because without research on forecasting in organisations, we are not likely to understand the impact of forecasting on business performance.

Take the topic of supply-chain forecasting. There have been many papers written on this subject (see the review article by Aris Syntetos and colleagues, 2016), but relatively few papers consider the business context in any depth. For example, there is flourishing research on the benefits of information sharing for supply-chain forecasting. However, this research has been predominantly theoretical in nature and subject to very specific assumptions and demand-process formulations.

This is, in a sense, inevitable – but as research progresses, these assumptions should become less restrictive. Unfortunately, there has been hardly any case-based research investigating the business performance benefits from demand information sharing in practice and how such an evaluation should be conducted. This would not necessarily give a magic formula for a business to estimate benefits, but could yield insights into how an organisation should make its own assessments.

Sujit mentions the potential for improvement in forecast accuracy from using inputs by people in the field, such as sales reps. This is a good example of judgmental forecasting, an area that’s attracted the interest of researchers but with relatively little actual research conducted on real business practice.

A notable exception is the research of Robert Fildes and colleagues (2009).

Key Points

■ Academics should pay greater attention to forecasting in organisations to complement the more theoretical work they are conducting in time-series analysis.

■ Researchers should provide more academic, case-based studies to offer frameworks for evaluating forecasting and its impact on business performance.

■ Companies should share data more readily with universities, and academics should put greater emphasis on the empirical testing of methods in industry.

■ Software providers should be more proactive in collaborating with universities in the implementation of new methods, particularly those that have performed well in empirical tests.

Commentary: Academic Frameworks for Practitioner Use
PAUL GOODWIN

In his article in the Spring 2016 issue of Foresight, Sujit Singh raises an important issue: Are academic researchers in forecasting addressing the questions that practitioners want answers to?

Sujit’s perception is that the picture is mixed. In some cases, academic work is having a beneficial influence on practice. For example, there is an increased awareness in organizations of behavioural issues that impinge on the forecasting process. But there is, he argues, one important gap. People in businesses want to know what benefits they can expect from accurate forecasts – particularly in improving their bottom line. They also want to know what potential there is for accuracy improvements in their specific situation. On these key issues, Sujit asserts that the academic forecasting literature is largely mute.

THE ACADEMIC WORLD

Anyone attending an academic conference on forecasting and then visiting forecasting practitioners working in their companies is likely to perceive a disconnection between the two worlds. Academic conferences belong to a world where quantitative models, some of which are highly sophisticated, battle it out to achieve statistically significant improvements in MAPEs and RMSEs. It’s where we hear that management students, participating in clever experiments, are able to reduce their judgmental forecasting biases through better elicitation processes.

It’s predominantly a reductionist world where forecasting is stripped down to its purest essentials – a world that’s free from company politics, senior-management pressures, complaining customers, or unreliable suppliers; a world where there are no information silos, no rumours about competitors’ intentions, no wishful thinking, and no resistance to statistical algorithms. Above all, it’s a world where generic accuracy metrics—as opposed to profits, losses, costs, and payoffs—reign supreme.

Reductionism can be a powerful approach to the progress of knowledge and is essentially how Western science has advanced since the Renaissance. In forecasting, it has led to the development of powerful analytic methods and improved ways of confronting uncertainty.

But it does need to be complemented by other approaches, such as the application of qualitative and holistic research methods in live forecasting situations. Unfortunately, such approaches can be difficult and messy. Academics need access to companies and their data, which isn’t always easy. They also need to make sense of ill-defined phenomena like company cultures, corporate relationships, and modus operandi. And they need to be able to argue that valid inferences about forecasting in general can be drawn from the small sample of willing companies that they have had time to study in depth. It’s far less risky to create some artificial data or to download some secondary data and to evaluate how well the latest cutting-edge method succeeds in forecasting that data using neatly defined accuracy metrics.

BARRIERS TO ENTERING THE PRACTITIONER WORLD

But why aren’t the specific gaps that Sujit highlights being addressed? One reason is that academics need to know what practitioners want. Surveys have repeatedly indicated that achieving high accuracy is at the top of forecasters’ objectives (e.g., see Fildes & Goodwin, 2007).

What Do Practitioners Want?

In a 2006 survey of U.S.-based forecasting executives (McCarthy & colleagues, 2006), 82% of respondents indicated that accuracy was either important or extremely important. However, only 27% thought that ROI was important or very important when judging the effectiveness of sales forecasting. Of course, the views of forecasting executives may differ from those of senior managers who, at the end of the day, make the decisions on whether to invest in enhanced forecasting processes. But survey results such as these likely motivate academics to focus on accuracy improvements per se, and not their broader ramifications.

What’s Good for the Goose May Not Be….

A second and obvious problem is that companies may be different. A 10% accuracy enhancement in your company might be worth a lot more, in monetary terms, than a 10% enhancement in mine – even if we currently have similar levels of accuracy. The same applies to many of the other questions raised by Sujit. For example, the greater involvement of sales reps in the forecasting process may be more beneficial in one company than in another. In some cases, it may even lead to deterioration of accuracy. Mike Gilliland’s article “Role of the Sales Force in Forecasting” (Gilliland, 2014) provides both sides of the picture.

In an ideal world, researchers might perhaps create a formula into which you could enter the characteristics of a given forecasting situation—the nature of the company and its products, the features of its market, and so on—and out would pop an estimate of potential accuracy improvements and the associated monetary benefits and ROI. The problem is that obtaining a reliable formula would require analysis of data from a very large and representative sample of companies. Without this information, we just don’t know how far general rules can be established for translating improvements in accuracy measures into dollars in specific situations.

FRAMEWORKS, NOT SPECIFICS

Given these difficulties, academics might be better employed trying to develop a framework that could be used to address these issues, rather than attempting to provide specific answers in particular situations. After all, I don’t know which computer system is best for your business, but I can supply you with a decision-analysis framework for making that choice, along with ways to identify the attributes you want your new system to have, ways to determine the relative importance of these attributes, and so on. As in decision analysis, the framework would need to be underpinned by a solid body of theory. Such a theory could take into account not only how forecast errors directly relate to hard cash, but also how customers are likely to behave in the future when they find that their favorite product is absent from the supermarket shelves.


A useful framework could guide the forecaster toward the information he or she should seek to determine the benefits of, say, a 10% improvement in accuracy and how the collected information might be aggregated. Some of this information may need to come from managers’ judgments or even market research data. For example, it’s often difficult to put an exact monetary cost on the loss of customer goodwill resulting from stock-outs caused by forecast errors. Such a structured approach to forecast system evaluation could be documented and hence auditable and defensible. After all, most ROI analyses and most strategic decisions rely on management judgments to a considerable extent.

IMPACT ASSESSMENT

There’s little point in carrying out academic forecasting research if it has no chance of ultimately yielding practical benefits in organizations or in society as a whole. Clever and sophisticated forecasting methods developed in academia with proven accuracy benefits are a waste of time if they end up unused because no one understands or trusts them, or appreciates their benefits. Consultants’ recommendations, based on academic research, are a waste of money if they end up collecting dust on office shelves. Yet when the relationship between academia and business clicks, there can be significant gains. For example, a research project I was involved in has subsequently led to an improved role for management judgment in forecasting in two of the companies that participated (Fildes & colleagues, 2009).

In the UK, the government recently introduced a system of impact assessment for research funding: you now need to demonstrate that the research outcomes are likely to have a positive impact on the world. Similarly, funding for British business schools is now partly based on the impact that their past research has had. For academic forecasters, at least those in the UK, these developments may act as a fillip to encourage a greater alignment of research with the needs of organizations and, better still, to develop ways of measuring and demonstrating the impact of improved forecasting.

But, as John Boylan and Aris Syntetos note in their commentary here, “it takes two to tango.” Academics need more companies to open their doors and their data files (which can be heavily disguised to preserve confidentiality) to researchers. When and if this happens, it can only be to the mutual benefit of both academia and business.

REFERENCES

Fildes, R. & Goodwin, P. (2007). Against Your Better Judgment? How Organizations Can Improve Their Use of Management Judgment in Forecasting, Interfaces, 37, 570-576.

Fildes, R., Goodwin, P., Lawrence, M. & Nikolopoulos, K. (2009). Effective Forecasting and Judgmental Adjustments: An Empirical Evaluation and Strategies for Improvement in Supply-Chain Planning, International Journal of Forecasting, 25, 3-23.

Gilliland, M. (2014). Role of the Sales Force in Forecasting, Foresight, Issue 35 (Fall 2014), 8-13.

McCarthy, T.M., Davis, D.F., Golicic, S.L. & Mentzer, J.T. (2006). The Evolution of Sales Forecasting Management: A 20-year Longitudinal Study of Forecasting Practices, Journal of Forecasting, 25, 303.

Paul Goodwin is Emeritus Professor of Management Science, University of Bath and Foresight’s Editor for Hot New Research.

[email protected]

Commentary: Refocusing Forecasting Research
MICHAEL GILLILAND

There is much to like and agree with in Sujit Singh’s examination of the gaps between forecasting research and application. And it befits Foresight—perhaps our best hope for bridging those gaps—to bring these issues to the forefront.

The quantity and variety of available data grow every year, as does our computational power; software sophistication expands with each new commercial release (or R package). Under the circumstances, we might expect a continuous progression of improved forecasting performance by businesses, but this doesn’t seem to be happening. Is this failure due to misguided research by academics, or the sloth and hubris of business managers and executives loath to implement new methods?

MORE COMPLEX METHODS

Singh identifies four types of forecasting research: statistical, behavioral, big-data-based, and performance focused (although I might put “big-data-based” as a subset of “statistical”). He notes that there will always be some gap between research publication and its implementation in commercial software, and the time gap may be small or nonexistent in open source. Yet the more relevant issue is whether newer (and frequently more complex) methods actually perform better at forecasting than existing and simpler alternatives.

Stephan Kolassa (2016) and Paul Goodwin (2011) have considered this question. Defining a correct model as one that includes all the important demand-influencing factors, Kolassa found such a model may actually forecast worse than a simpler incorrect model. Goodwin illustrated, with many examples, that proposed new methods often have scant empirical support and rightly questions whether they even merit publication. Singh is correct to assert that proposed new methods “should demonstrably improve upon existing (and simpler) alternatives, not just on one specific sample but across a broader range of data sets.”

Even granting a new method will reduce forecast error, it is fair to question whether small improvements have any real business benefit. This is because the value of more accurate forecasting lies in better decision making that results in actions that improve company financial performance. Small error reductions (say from 40% to 39.6% – a 1% improvement) will likely go unnoticed and not change any decisions. Calculations purporting to show big dollar value for small forecast-accuracy improvements should be viewed with suspicion.

MORE AND BIGGER DATA

Just as more complex methods do not guarantee better forecasts, it has not yet been demonstrated that the use of “big data” will help matters either. But this hasn’t stopped the cheerleading from the industry analyst community.

In a long and incisive LinkedIn Pulse post, Shaun Snapp (2016) delivered what one commenter described as a “beat down” of Gartner’s claims about the value of big data in forecasting (made in their “Magic Quadrant on Supply Chain Planning” for 2016). Just because we can now access unprecedented variety, granularity, and quantity of demand-related data doesn’t prove it will help us forecast. This is a question for research. And if the results turn out negative—that big data doesn’t improve the forecast—that could save companies a lot of wasted effort.

SOFTWARE UPGRADES

Sujit brings up some extremely fundamental issues for research, such as what improvement can we realistically expect from statistical forecasting software compared to a company’s current practices and to a naïve model. (This is a key consideration in the software buying decision, yet companies shouldn’t have to rely on vendor marketing materials for the answer!) Without independent research, we can’t get trustworthy answers to questions like this – but the research is difficult. Companies are reluctant to share the necessary data.

MISBEHAVIORS

In the behavioral realm, anyone who has held positions in business forecasting knows the political element of the forecasting process. In order to thrive, it may be less important to provide the most accurate forecasts than to provide forecasts acceptable to management. John Mello (2016) has characterized many sales forecasting “misbehaviors” – actions which are perfectly rational by those committing them, yet can be detrimental to the company. This is an important area of research that, we can hope, will be recognized and applied by business managers.

Recent work by Steve Morlidge (2014) has exposed the primitive state of real-life business forecasting. In his sample of eight companies and over 300,000 actual forecasts, Morlidge found over half of their forecasts were worse than a naïve “no change” model. We don’t know whether it was lousy software or ill-advised manual adjustments that created this negative “value add” of their forecasting processes – we just know there is a serious problem that needs to be addressed.
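In the spirit of Morlidge’s comparison against a naïve benchmark, here is a minimal sketch of a relative error check (a generic ratio, not necessarily Morlidge’s exact metric). It assumes one-step-ahead forecasts, and the sample numbers are invented.

```python
def mae(actuals, forecasts):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def relative_mae_vs_naive(actuals, forecasts):
    """Ratio of the process forecast's MAE to the MAE of a naive
    'no change' forecast (each period predicted by the prior actual).
    A ratio above 1.0 means the process was worse than doing nothing."""
    naive = actuals[:-1]                  # naive: forecast_t = actual_{t-1}
    return mae(actuals[1:], forecasts[1:]) / mae(actuals[1:], naive)

actuals   = [100, 110, 105, 120, 115, 130]
forecasts = [102, 95, 125, 100, 140, 110]   # hypothetical process forecasts
ratio = relative_mae_vs_naive(actuals, forecasts)
print(f"relative MAE = {ratio:.2f}",
      "(worse than naive)" if ratio > 1 else "(adds value)")
```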

Morlidge’s findings, and earlier results from Robert Fildes and Paul Goodwin (2007), beg for more academic research into the misbehaviors and “worst practices” that are causing such real-life grief. The Foresight Practitioner Conference in October 2016 will delve deeply into these causes. The event theme is “Worst Practices in Forecasting: Today’s Mistakes to Tomorrow’s Breakthroughs,” and Goodwin and Morlidge are among the confirmed speakers.

Perhaps the focus of forecasting research should be less about moving performance from good to great – squeezing out every fraction of a percent of accuracy. Instead, it might be more valuable to have research that helps us improve from “shooting ourselves in the foot” to “doing no worse than the naïve model.” Now if we could just figure that one out!

REFERENCES

Fildes, R. & Goodwin, P. (2007). Good and Bad Judgment in Forecasting: Lessons from Four Companies, Foresight, Issue 8 (Fall 2007), 5-10.

Goodwin, P. (2011). High on Complexity, Low on Evidence: Are Advanced Forecasting Methods Always as Good as They Seem? Foresight, Issue 23 (Fall 2011), 10-12.

Kolassa, S. (2016). Sometimes It’s Better to Be Simple than Correct, Foresight, Issue 40 (Winter 2016), 20-26.

Mello, J. (2016). Toward a More Rational Forecasting Process: Eliminating Sales-Forecasting Misbehaviors, Foresight, Issue 41 (Spring 2016), 14-17.

Morlidge, S. (2014). Using Relative Error Metrics to Improve Forecast Performance in the Supply Chain, Foresight, Issue 34 (Summer 2014), 39-46.

Snapp, S. (2016). Why Gartner Is Mistaken about Big Data’s Effect on Forecasting, https://www.linkedin.com/pulse/why-gartner-mistaken-big-datas-effect-forecasting-shaun-snapp

Michael Gilliland is Marketing Manager for forecasting software at SAS and Foresight Editor for Forecasting Practice. Mike is the author of The Business Forecasting Deal (2010), a book conceived to expose myths and eliminate bad practices in forecasting, and principal editor of Business Forecasting: Practical Problems and Solutions (2016).

[email protected]

Commentary: Research Needed on Advisory Forecasts
JOHN MELLO AND JOSEPH ROY

Sujit Singh asked this interesting question (originating from a discussion on LinkedIn) in the Spring 2016 issue of Foresight: “Is there a disconnect between the academic and the business world when it comes to forecasting?” We’ve given this question some thought and decided to add our input, since both of us have extensive experience in business as well as academia. Coming from a manufacturing and supply-chain management background, we will frame our comments within the confines of what academia could provide manufacturing and supply-chain managers in terms of beneficial research.

Both of us worked for a company that built its master schedules on sales forecasts and judged the performance of manufacturing facilities on “attainment”—production of SKUs based on what the master schedule required. Informed by our experience with this method, we feel that forecasts alone should not drive manufacturing, but rather should serve as general guidelines for establishing machine capacity, available resources, and raw and packaging materials.

Incorrect forecasts greatly impact “available time” in manufacturing, whether at the supplier or the producer. Once time is misallocated in production, it is lost forever. In manufacturing, time equates to machine availability, people availability, and material availability; in short, when forecasts are off, the business consumes more resources than it should, which is the classic definition of waste. Moreover, lost production time for one product means that the business cannot respond to any corresponding oversells of another forecasted product. We therefore take the position that what manufacturing companies could benefit from are timely and frequent forecasts, based on the most current information, that plant schedulers and supply-chain managers can react to quickly.

What we propose is academic research on the potential benefits of what we term advisory forecasts − forecasts that give plant schedulers and supply-chain managers updated information on sales and how these might translate into future sales in the short term. This idea came from our observations in the consumer packaged-goods industry that certain types of products could be predicted to over- or undersell the forecast based on existing sales partway through the month. We expect that adjustments could be made to the production schedule and stock-allocation plan if one could get a short-term forecast predicting actual sales based on SKU-level patterns within the month and other timely, pertinent data. We are talking here of forecasts that project out only a few weeks and predict what the sales will likely be, delivered in weekly increments, based on the most current information available.
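As one hypothetical illustration of how such an advisory forecast might be computed (our reading of the idea, not a method the authors specify), month-to-date sales can be projected to a month-end total using the share of a typical month’s volume historically booked by each day. The profile and figures below are invented.

```python
def month_end_projection(mtd_sales, day, cumulative_profile):
    """Project the month's total sales from month-to-date (MTD) sales,
    given the historical cumulative share of monthly volume booked
    by each day of the month."""
    return mtd_sales / cumulative_profile[day - 1]

# Invented 30-day profile: 45% of a typical month's volume by day 15,
# estimated from past months of SKU-level history.
profile = [0.03 * d for d in range(1, 21)] + [0.60 + 0.04 * d for d in range(1, 11)]

# An advisory update on day 15, rerun weekly as new sales post:
print(round(month_end_projection(mtd_sales=450, day=15, cumulative_profile=profile)))
# -> 1000 units projected for the full month
```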

At a CPG company where one of the authors worked, the entire merchant (marketing in nonretail businesses) and operations teams met every Monday night to review store/product performance across the business. The focus was on “what’s selling and what’s not selling.” By Wednesday, there were new store plans for some products, including promotions/discounts. These plans would be sent to stores Wednesday evening and implemented that weekend. For the supply-chain team, that meant a particular product would be allocated to all 1,600 stores and shipped on Thursday or Friday. No forecast can react within this time frame.

We took the most recent forecast and compared it to the manufacturing plan that was established, a plan that included length of production runs, changeover time and frequency, store sales, and anticipated promotions. The most important information coming from the planning department was the rate at which store sales (build) would grow as a result of the promotion or through special selling seasons (Holiday 1 and Holiday 2, Valentine’s, Mother’s Day, Back to School). The forecast and build rates were used to establish a responsive manufacturing plan, which only changed the frequency of SKU production. As a result, forecast shortfalls or oversells did not impact product availability at the store. In fact, total company inventories decreased significantly, while in-stock service levels rose to 99.1% (as measured every Monday morning).

Research into advisory forecasts could investigate whether sales predictions can be made for certain products or product families based on 1) previous sales patterns, and 2) where the sales are at any given time period within the month. This would fall under what Singh classifies as “statistics/mathematics-focused” research and include previous work on forecasting with partially known demands, such as the publication by Sunder Kekre and colleagues (1990).

Computer models could potentially be built to research the impact of such forecasts and subsequent production and stock deployment decisions on inventory carrying costs, customer service, return on investment, and other metrics. This research would fall under the “business-performance focused” heading.

Research could also be conducted into how competitive information on promotions, new products, or other deals – plus social-media data and point-of-sale information – might be incorporated into an advisory forecast, and whether these kinds of information would improve such a forecast. This research would fall under “big-data-based” research. Behavioral research could also be conducted into the impact of judgmental adjustments to short-term advisory forecasts, and the types of persons best positioned and equipped to provide such a forecast.

What we propose would not eliminate but rather supplement the existing sales-forecasting process within a company. An advisory forecast could potentially enable manufacturing and supply-chain managers to improve resource allocation (time, people, machines, materials, finished goods) while maintaining or even improving customer service. We see this as one important way in which academia could bridge the gap between sales-forecasting research and the needs of businesses.

REFERENCE

Kekre, S., Morton, T., & Smunt, T. (1990). Forecasting with Partially Known Demands, International Journal of Forecasting, 6(1), 115-125.

John E. Mello is Professor of Supply Chain Management and Director of the Center for Supply Chain Management at Arkansas State University. He brings to academia almost three decades of experience in the CPG industries. For the past three years he has served as Foresight Editor for S&OP.

[email protected]

Joseph Roy is a senior supply-chain executive with plant management and global supply-chain experience in consumer products and retail. He is a graduate of Norwich University, received an MA from North Carolina State and an MBA from the University of New Haven. Joe has held positions of Chief Operating Officer (Hernon Manufacturing), Senior Vice President Supply Chain (Bath & Body Works), and Director of Manufacturing and Plant Manager (Unilever Home & Personal Care North America). For the past three years he has taught supply-chain management as an Adjunct Professor at Arkansas State University.

[email protected]

Commentary: Two Sides of the Same Coin
FOTIOS PETROPOULOS AND NIKOLAOS KOURENTZES

Issue 41 of Foresight featured an article by Sujit Singh on the gaps between academia and business. Since we happen to be academicians whose focus is on producing and disseminating research that is directly applicable to commercial practices, we feel we’re in a good position to tackle the points raised by Sujit. In this commentary, we present our views on some of these very useful and interesting points and conclude with our vision for enhanced communication between the two worlds.

ON TRANSLATING ACCURACY TO MONEY

It is true that the majority of traditional error measures (along with the MAPE, very widely used in practice) focus on the performance of point forecasts and their respective accuracy. These are convenient as summary statistics that are context-free but hardly relate to the costs of real business decisions. A critical question, then, is how these are translated into business value and how improving forecasting affects utility metrics, such as inventory and backlog costs, customer-service levels (CSL), and the bullwhip effect. Fortunately, there is a good bit of research that focuses on such links. Here are two examples.

A very recent study by Barrow and Kourentzes (2016) explored the impact of forecast combinations—combining forecasts from different methods—on safety stocks and found that combinations can lead to reductions compared to using a single “best” forecast. Another current study, by Wang and Petropoulos (2016), evaluated the impact on inventory of base-statistical and judgmentally revised forecasts. Both of these works show that there is a strong connection between the variance of forecast errors and improved inventory performance.
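The link between forecast error variance and inventory that both studies exploit can be sketched with the textbook safety-stock approximation below. This is a generic illustration, not the models used in the cited papers, and all numbers are invented.

```python
import math
import statistics

def safety_stock(forecast_errors, lead_time_periods, z=1.645):
    """Classic approximation: a z-multiple (here ~95% cycle service
    level) times the standard deviation of per-period forecast errors,
    scaled by the square root of the replenishment lead time."""
    return z * statistics.stdev(forecast_errors) * math.sqrt(lead_time_periods)

errors_single = [12, -8, 15, -20, 9, -14]   # errors from a single 'best' method
errors_combo  = [7, -5, 9, -11, 6, -8]      # errors from a combined forecast
for name, errs in [("single method", errors_single), ("combination", errors_combo)]:
    print(f"{name:13s} -> safety stock ~ {safety_stock(errs, 4):.0f} units")
```

Lower error variance feeds directly into less safety stock, which is the economic channel the commentary describes.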

However, one important point has to be emphasised here: there is limited transparency in how forecasts produced by demand planners are translated into ordering decisions by inventory managers. Research typically looks at idealized cases, ignoring the targets and politics that drive inventory decisions. In such cases, the economic benefit of improved forecasts may not reflect organizational realities: forecasting research should pay more attention to the organisational aspects of forecasting.

ON WHAT IS “GOOD” ACCURACY

Forecast accuracy levels vary across different industries and horizons. For example, a 20% forecast error would be sensible in certain retailing setups but disastrous in aggregate electricity-load forecasting. Short-term forecasting is typically easier, while long-term is more challenging. The nature of the available data is also relevant: fast- versus slow-moving items, presence of trend and/or seasonality, promotional frequency, and so on.

Our approach would be always to benchmark against (i) simple methods, such as naïve or seasonal naïve, and (ii) industry-specific (“best practices”) benchmarks. Reporting the improvements in accuracy relative to these benchmarks helps identify specific problems with the forecasting function and can lead to further refinements. Using relative metrics also overcomes the misplaced focus on what is a good target for percentage accuracy, since these targets do not appreciate the data intricacies that the forecast has to represent.

ON AVAILABLE SOFTWARE PACKAGES

Different software packages offer different core features, with some of them specialising in specific families of methods and/or industries. Previously, software vendors were invited to participate in large-scale forecasting exercises (see the M3 competition), with the relative rankings of the participating software being available through the original (Makridakis & Hibon, 2000) and subsequent research reports.

In any case, the expected benefits from adopting a software package are a function of data availability, the forecast objective (what needs to be forecast and how long into the future), and the need for automation. Nonetheless, there is need for an up-to-date review and benchmarking of available commercial and non-commercial software packages. Differences exist even in the various implementations of the simplest methods (such as Simple Exponential Smoothing), with often unknown effects on accuracy. But while software packages are important in structuring the forecasting process, vendors often impose their own visions of what is important, and these are not often backed up by research. How should one explore the time series at hand? Can we support model selection and specification? How best to incorporate judgemental adjustments?
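To see how implementation details alone can move the numbers, here is a minimal sketch of Simple Exponential Smoothing with two common seeding choices for the initial level. The data and the particular seeds are illustrative assumptions, not drawn from any specific package.

```python
def ses_forecast(history, alpha=0.2, init="first"):
    """Simple Exponential Smoothing one-step-ahead forecast. The seed
    for the level ('first' observation vs. the mean of the first three)
    is the kind of detail that varies silently across packages."""
    level = history[0] if init == "first" else sum(history[:3]) / 3
    for y in history:
        level = alpha * y + (1 - alpha) * level
    return level

demand = [100, 92, 110, 105, 98, 120]
for init in ("first", "mean3"):
    print(f"seed={init:5s} -> forecast {ses_forecast(demand, init=init):.2f}")
```

Even on this short series the two seeds yield different forecasts, and such differences compound when forecasts feed ordering decisions.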

Our view is that software vendors should provide the tools for users of varying degrees of expertise to solve their problems (see comments on customisability by Petropoulos, 2015), but also be explicit about the risks of a solution. Training users is regarded as an important dimension of improving the forecast quality (Fildes & Petropoulos, 2015), as demand planners cannot be replaced by an algorithm. We should not aim for a single solution that will magically do everything, and there are always “horses for courses.”

ON HIERARCHICAL FORECASTS

Organisations often look at their inventory of data in hierarchies. These can be across products, across markets, or across any other classification that is meaningful from a decision-making or reporting point of view. Data at different hierarchical levels reveal different attributes of the product history. Although forecasts produced at different hierarchical levels can be translated to forecasts of other levels via aggregation or disaggregation (top-down and bottom-up), the level at which the forecasts are produced will influence the quality of the final forecasts at all the various levels.

Can we know a priori what the best level is to produce forecasts? Unfortunately, it’s just not possible: data have different properties, resulting in different “ideal levels.” More importantly, companies have different objectives, and each objec-tive may require different setups.

We believe that the greatest benefit from implementation of hierarchical approaches to forecasting is the resulting reconciliation of forecasts at different decision-making levels. The importance of aligning decision making across levels cannot be overstated. More novel techniques allow hierarchies to be forecast and reconciled across different forecast horizons (Petropoulos & Kourentzes, 2014). Recent research (Hyndman & Athanasopoulos, 2014) has demonstrated that approaches focusing on a single level of the hierarchy, such as top-down or bottom-up, should be replaced with approaches that appropriately combine forecasts (and subsequently information) from all aggregation levels.

It’s important to remember that forecasts calculated from data at any level of the hierarchy can be evaluated at all other required levels. You first have to produce the aggregated/disaggregated forecasts, and then compare with the actual data points at the respective level.
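A minimal sketch of that evaluation logic, using an invented two-SKU family: the same SKU-level forecasts are summed bottom-up to the family level and then scored at both levels.

```python
def mae(actuals, forecasts):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

sku_actuals   = {"sku_a": [50, 60, 55], "sku_b": [30, 25, 35]}
sku_forecasts = {"sku_a": [52, 57, 58], "sku_b": [28, 30, 31]}

# Bottom-up: the family series is the period-by-period sum of its SKUs.
family_actuals  = [sum(v) for v in zip(*sku_actuals.values())]
family_forecast = [sum(v) for v in zip(*sku_forecasts.values())]

for sku in sku_actuals:
    print(f"{sku} MAE: {mae(sku_actuals[sku], sku_forecasts[sku]):.1f}")
print(f"family (bottom-up) MAE: {mae(family_actuals, family_forecast):.1f}")
# SKU errors partly cancel on aggregation; a top-down scheme would
# instead forecast the family total and split it by historical shares.
```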

FORECASTS ARE USED BY COMPANIES

Research often considers forecasting as an abstract function that is not part of a company or its ecosystem. At the same time, there is ample evidence of the benefits of collaborative forecasting and information sharing, both within the different departments of a company and across the supply chain.

A recent example is provided by Trapero and colleagues (2012), who analyse retail data and show that information sharing between retailer and supplier can significantly improve forecasting accuracy (up to 8 percentage points in terms of MAPE). This research is useful both for modeling in the context of how forecasts are generated and how they are used in organizations.

A CALL FOR MORE DATA AND CASE STUDIES

Sujit urges production of evidence of “minimum/average/maximum” benefits in different contexts. But current forecasting research has analysed very few data sets, and very few company cases are publicly available. The M1 and M3 competition data sets have been utilised time and again in subsequent studies, so that the results and solutions they derived are susceptible to “over-fitting” and hence not generalisable. Most papers on intermittent demand forecasting make use only of automotive-sales data and data sets from the Royal Air Force in the UK. It would be valuable to test our theories and methods on more diverse data sets, but researchers find these are hard to acquire.

We call on practitioners and on vendors to share (after anonymising) empirical data with researchers. The availability of a large number of time series and/or cross-sectional data across a number of industries will increase our understanding of the advantages, disadvantages, and limitations of existing and new forecasting methods, models, frameworks, and approaches.

Researchers are hungry for data while practitioners hunger for solutions to their problems; reducing the barriers will benefit both sides. Still, researchers must appreciate the constraints that limit a company’s willingness to make its data public, and practitioners need to be more proactive in facilitating forecasting research.

REFERENCES

Barrow, D. & Kourentzes, N. (in press). Distributions of Forecasting Errors of Forecast Combinations: Implications for Inventory Management, International Journal of Production Economics.

Fildes, R. & Petropoulos, F. (2015). Improving Forecast Quality in Practice, Foresight, Issue 36 (Winter 2015), 5–12.

Hyndman R. & Athanasopoulos, G. (2014). Optimally Reconciling Forecasts in a Hierarchy, Foresight, Issue 35 (Fall 2014), 42–48.

Makridakis, S. & Hibon, M. (2000). The M3-Competition: Results, Conclusions and Implications, International Journal of Forecasting, 16, 451-476.

Petropoulos, F. & Kourentzes, N. (2014). Improving Forecasting via Multiple Temporal Aggregation, Foresight, Issue 34 (Summer 2014), 12-17.

Petropoulos, F. (2015). Forecasting Support Systems: Ways Forward, Foresight, Issue 39 (Fall 2015), 5-11.

Trapero, J.R., Kourentzes, N. & Fildes, R. (2012). Impact of Information Exchange on Supplier Forecasting Performance, Omega, 40, 738-747.

Wang, X. & Petropoulos, F. (in press). To Select or to Combine? The Inventory Performance of Model and Expert Forecasts, International Journal of Production Research.

Nikolaos Kourentzes is an Associate Professor (Senior Lecturer) at Lancaster University and a researcher at the Lancaster Centre for Forecasting, UK. Nikos’s research addresses forecasting issues of temporal aggregation and hierarchies, model selection and combination, intermittent demand, promotional modeling, and supply-chain collaboration.

[email protected]

Fotios Petropoulos is an Assistant Professor (Lecturer) at Cardiff Business School of Cardiff University and the Forecasting Support Systems Editor of Foresight. His research expertise lies in behavioral aspects of forecasting and improving the forecasting process, applied in the context of business and supply chain. He is the cofounder with Nikos Kourentzes of the Forecasting Society (www.forsoc.net).

[email protected]


Commentary: The End vs. the Means
STEVE MORLIDGE

As Sujit Singh rightly comments, this is a topic on which there is little research; but, as anyone who operates on the divide between these two worlds is aware, there is an abundance of experiential and anecdotal evidence of a disconnect between academic forecasting and business.

THE DISCONNECT

In my experience, the vast majority of businesspeople are not interested in forecasting. Forecasting is a means to the end—better service, lower costs, less stock—and the less they need to know about forecasting to meet those goals, the happier they are.

This is misguided, however, not least because the business is then prey to people who promise to make the problem go away through clever software – which, in fact, does nothing of the sort.

Very early on in my days of selling services to business, I was strongly coached never to refer to “academic evidence,” as this was likely to be seen as proof that what we were offering was probably overly theoretical and of little real-world value.

But for academics in this field, forecasting is the end. Academics tend to have too little interest in what forecasts are used for or how effective they are in practice. I often find my academic friends behaving like zoologists who, having only seen occasional stuffed specimens of business people, are often stunned by my tales of the bizarre forecasting behaviours I have observed in the wild on my business travels.

The absence of meaningful dialogue between academics and practitioners is not unique to the field of forecasting, in part because the reward systems in these two domains of activity militate against it. In business, the level of complexity is such that it is never easy to definitively measure “real” performance and assign it to a cause, so many of the judgements made about the efficacy of processes and the performance of managers responsible for them are highly unscientific.

In academia, there is a bias toward studying problems that are “interesting,” mathematically tractable, within your specific academic subdiscipline, and recognised as worthy by academic peers.

WHAT NEEDS TO BE DONE?

The big question is: does the disconnect matter?

It would not if there were intermediaries willing and able to convert academic knowledge into implementable technologies, or if there were a rich source of reliable, independent information on the performance of real-world processes, information that identified shortcomings in practice and highlighted fruitful areas for useful new research.

Unfortunately, neither exists.

As a result, innovation in this area is driven by marketing claims that are taken to be true by virtue of being repeated over and over, and academics are often left as puzzled and frustrated bystanders.

So what needs to be done?

First, there is a need to educate business managers about forecasting—not just forecasting practitioners. Managers need to appreciate the nature and scale of the contribution that forecasting can make to their business. For example, what is the value of excess inventory that businesses hold as a result of suboptimal forecasts, and what is the potential impact on service levels?




Managers must be made to understand that forecasting is not something that can be left to experts. There is no software silver bullet that can solve their problems. And outsourcing forecasting processes does not make them go away.

But neither is forecasting just “common sense.” The route to good forecasting often runs in counterintuitive directions. For example:

• Unlike many other performance metrics, forecast errors are not inherently meaningful – they cannot be targeted or compared.

• Forecast methods with a good fit to history often produce poor forecasts, and a simple (naïve) extrapolation may provide a better forecast than one that “looks like” the pattern of demand (see the sketch after this list).

• More sophisticated algorithms often fail to beat simple ones, and many well-intentioned interventions and process steps degrade rather than improve performance.
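A minimal simulation illustrates the second and third points (the data and model here are invented for illustration, not drawn from any study cited above): on a random walk, a high-order polynomial fits the history far more closely than the naïve forecast, yet typically extrapolates far worse.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated random walk: for such data the naive forecast
# (a flat line at the last observation) is hard to beat.
y = np.cumsum(rng.normal(size=60))
train, test = y[:48], y[48:]

# "Sophisticated" model: a degree-8 polynomial trend fitted to history.
t = np.arange(60) / 59.0                 # rescaled time index
coefs = np.polyfit(t[:48], train, deg=8)
fit_in_sample = np.polyval(coefs, t[:48])
poly_forecast = np.polyval(coefs, t[48:])

naive_forecast = np.full(12, train[-1])

print("In-sample MAE, polynomial:    ", np.mean(np.abs(train - fit_in_sample)))
print("Out-of-sample MAE, polynomial:", np.mean(np.abs(test - poly_forecast)))
print("Out-of-sample MAE, naive:     ", np.mean(np.abs(test - naive_forecast)))
# The polynomial "looks like" the history yet forecasts much worse than naive.
```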

In a nutshell, there is a need to create more intelligent consumers of forecasts and forecasting technology.

Second, academics need to acknowledge that the merit of any work stands or falls by its ability to improve forecast performance in a way that is meaningful to business.

This means there has to be a demonstrable benefit in real-world performance that can be expressed in terms of money or customer service. These benefits will vary depending on how the techniques are applied and the nature of the industry in which they are used.

It also demands an understanding of how forecasts are used, so forecasting in a product supply chain cannot be studied in isolation from inventory management, since this is where forecasts are consumed.

This will also involve academics recognising that forecasts are one possible means to an end and not an end in themselves, so evaluating their performance involves comparing them to alternatives, which might not involve forecasting at all.

For example, through a combination of poor method selection, overfitting, and inappropriate judgemental interventions, it is very easy for a forecast process to destroy value by failing to beat a simple naïve forecast, meaning that a simple kanban-style replenishment strategy is superior to forecasting. In my review of the results of the M3 competition, conducted by forecasting experts, 30% of forecasts failed to beat the naïve, which shows that this phenomenon is not merely the product of bad practice but a consequence of using forecasts in circumstances where other methods may be superior.
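One practical check, in the spirit of this argument, is to express a forecast's error relative to the naïve. The helper below is a hypothetical sketch (the function name and numbers are invented); values above 1.0 flag a process that failed to beat the naïve and so destroyed value.

```python
import numpy as np

def relative_abs_error(actuals, forecasts):
    """MAE of the forecasts divided by MAE of the one-step naive forecast.

    Values above 1.0 mean the forecasting process failed to beat the
    naive, i.e. it destroyed value.
    """
    a = np.asarray(actuals, dtype=float)
    f = np.asarray(forecasts, dtype=float)
    mae_forecast = np.mean(np.abs(a[1:] - f[1:]))
    mae_naive = np.mean(np.abs(a[1:] - a[:-1]))  # forecast = previous actual
    return mae_forecast / mae_naive

# Invented example: this forecast is worse than doing nothing clever.
print(relative_abs_error([100, 108, 95, 110, 102],
                         [ 99, 120, 80, 125,  85]))  # ~1.34, i.e. > 1.0
```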

What all of these challenges for academics have in common is measurement.

It is, of course, not easy to compare the performance of different methods in different circumstances. But just because it is difficult doesn’t mean that it should not be done. The complexity of the problems tackled by meteorologists means that they face much greater challenges in attempting to measure the quality of their forecasts, but they recognise that they would have no claim on the public purse if they weren’t able to measure the contribution that they make. By the same token, for academic forecasters to command the respect of business and to exercise the influence that they deserve, they need to be able to demonstrate the value that they can bring.

Steve Morlidge is founder of CatchBull, whose ForecastQT™ application is used as a Forecast Performance Management system; coauthor of Future Ready: How to Master Business Forecasting; and author of numerous Foresight articles on forecasting processes and performance metrics.

[email protected]


Commentary: The Incentives Gap
ALEX HANCOCK

Sujit gives us much to think about on the apparent disconnect between academia and business forecasting. His commentary and, indeed, the LinkedIn discussion on which it was based make for a good conversation to pick up periodically.

His suggestion is that academic research outpaces the business world and, to close the gap, the benefits of forecasting must be rephrased in terms with which the business community can identify: academic treatises on statistical forecasting methods cannot easily be applied in practice.

ACADEMIC INCENTIVES

From a personal perspective, I’d like to suggest a third hypothesis (while generalising wildly). Perhaps our friends in academia are simply more communicative about their research. Or perhaps these two communities communicate in different ways. Publications in peer-reviewed journals are central to the career progression of many academics. This very naturally creates a strong incentive to publish – and to publish, you have to present something that is original if not unique.

BUSINESS IMPEDIMENTS

The business environment offers up quite different incentives. Day-to-day operations and delivery take primacy over communication and publication. But also, the importance of delivering results creates conservatism towards reliable, proven techniques. As a frequent presenter at and attendee of various conferences, I very much enjoy academic presentations, but a fellow business practitioner always draws special attention.

To present at a conference, business people must be able to justify (to themselves and/or their bosses) their absence from the workplace. They will also have to secure clearance for external communication, a process that invariably strips out anything deemed to give a commercial advantage or too much of an insight into the inner workings of their company.

FAILURE TO COMMUNICATE

These obstacles often mean that any content from the business world is several years old, the latest innovations being too sensitive to share. They can also mean that the only communication about the latest business practices comes from those business practitioners who have joined consultancies or from software vendors. And, as Sujit suggests, can this work be considered trustworthy? After all, any vendor that permits too close an examination leaks commercial advantage.

On balance, I think that Sujit is correct in suggesting that academia is “further ahead” than business practitioners – but I speculate that the gap is narrower and more uneven than might be thought. Academia is incentivised to innovate while business is incentivised to be conservative, and these incentives drive behaviours.




I often say to my team, “Customers don’t pay for your analysis or PowerPoint. They pay for product.” In a way, we all need to be salespeople of our own work. The best techniques in the world have no value unless we can implement them, and this is a challenge both for the academic and the business practitioner. As such, we need to present our intellectual output in a way that gathers support, and we need to be very sure that we can deliver value in the “real world.”

Sujit’s questions offer a great basis for making that pitch. If the stakeholder doesn’t think to ask “What’s a 10% improvement in forecast accuracy worth to my business?” we should ask—and answer—it for them.

In his closing remarks, Sujit invites forecasting researchers to give more attention to the practical issues of interest to business practitioners. I would echo that – but also, isn’t it time business practitioners stepped up their thought leadership by communicating more about their needs?

Alex Hancock is Head of Treasury Analytics at Royal Dutch Shell. In his career with Shell he has been closely connected with the problems of forecasting products in the Lubricants Supply Chain as well as cash forecasting in Treasury. He holds a PhD in psychology.

[email protected]



Commentary: That Feeling for Randomness
STEPHAN KOLASSA

Sujit Singh has started a very interesting discussion about whether there is a disconnect between academia and business when it comes to forecasting. I’ll offer a few comments on this from the point of view of someone whose main focus is creating forecasting software for business applications, and who dabbles in academic forecasting every now and then.

I agree with Sujit that there is a gap between forecasting as researched (and taught) in academia and as practiced in business. I further agree that this gap is growing. However, I see a few recent trends that may help in narrowing the space between the two.

WHY THE DISCONNECT?

So why do I see this disconnect? It really comes down to the kinds of people who do forecasting in academia and business, the kinds of people who make decisions about forecasts in either setting, and how the key characteristics of a successful forecaster fit into the academic and business world.

I had better first explain what I see as those key characteristics. In my opinion, the most important attribute of a forecaster is a feeling for randomness. If you forecast, you are faced with data that are essentially random processes, and your aim is to understand the structure of this randomness well enough to forecast the data. This also involves understanding the residual randomness with enough thoroughness to allow you to set safety stocks that actually reach target service levels, or to do realistic scenario planning.

I have argued elsewhere that we should be paying more attention to this kind of probabilistic forecasting, also known as predictive distributions (Kolassa, 2016).

On the other hand, this aim at getting randomness under control can easily get out of hand. If you look hard enough at your time series, sift through enough other data, and perhaps talk to people out in the field, you will start seeing patterns – or have them pointed out to you by the people at the front lines. These patterns will come with stories that motivate them. We humans are storytelling animals, and it’s very hard for us to resist a well-told story (I’ll refrain from drawing political implications in a U.S. election year). The problem is that, while stories may actually be true, following the story down the rabbit hole (to mix metaphors) may leave us deeply in the dark indeed. Yes, a particular effect may drive our sales. However, if we need to estimate this effect, estimation uncertainty may counterintuitively mean that incorporating this (true!) effect yields worse forecasts than if we used a simpler model that does not include this effect (Kolassa, 2016a)!
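A back-of-the-envelope simulation makes the claim tangible (the setup below is invented for illustration and is not the simulation from Kolassa, 2016a): a promotion effect that genuinely exists, but is small relative to the noise and to the history available for estimating it, so the model that estimates the true effect typically forecasts worse than the model that ignores it.

```python
import numpy as np

rng = np.random.default_rng(1)

n_series, n_obs = 5000, 12
true_effect, noise_sd = 0.5, 5.0   # a real effect, but small vs. the noise
mse_with, mse_without = [], []

for _ in range(n_series):
    promo = rng.integers(0, 2, size=n_obs + 1)   # 0/1 promotion flag
    y = 100 + true_effect * promo + rng.normal(0, noise_sd, size=n_obs + 1)
    hist_y, hist_promo = y[:-1], promo[:-1]
    if hist_promo.min() == hist_promo.max():     # need both states to estimate
        continue

    # Model A: estimate the (true!) promotion effect from history.
    base = hist_y[hist_promo == 0].mean()
    uplift = hist_y[hist_promo == 1].mean() - base
    fc_with = base + uplift * promo[-1]

    # Model B: a simpler model that ignores the effect entirely.
    fc_without = hist_y.mean()

    mse_with.append((y[-1] - fc_with) ** 2)
    mse_without.append((y[-1] - fc_without) ** 2)

print("MSE, true effect estimated:", np.mean(mse_with))    # typically higher
print("MSE, effect ignored:       ", np.mean(mse_without))
```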

CANNIBALIZATION AND STATISTICAL THINKING

An example is in order, particularly since my earlier illustration (Kolassa, 2016a) used simulated data, and you might reasonably wonder whether simulations have anything to tell us in the real world. Over the years, a number of topics have come up over and over again from my clients. One of these is cannibalization.



Retailers are deathly afraid of cannibalization, i.e., one product on promotion “cannibalizing” sales of a substitute product. To estimate the actual effect of a planned promotion, they need to take the cannibalization effect into account. So far, so good, and I am certain that this effect is strong and important on an aggregate level.

However, I am often asked to incorporate a cannibalization effect into day-by-day store × SKU-level replenishment forecasts, which is about as finely granular as you can get. You will typically expect the cannibalization effect of a promotion on SKU A to be different for SKUs B, C, and D, and also to differ between stores X, Y, and Z. The logical consequence is to estimate cannibalization effects for all combinations of affected SKUs × stores. And in my experience, this finely granular estimation procedure introduces so much variability that the net effect on forecast quality is zero or even negative. Understanding this and similar effects is what I mean by having a feeling for randomness.

Now, what does all this have to do with a possible divide between academic and business forecasting? Well, what sort of people will exhibit this kind of feeling for randomness? In my experience, they’re not business students. Or economists. Or programmers or computer scientists. Or mathematicians. The only course of study that draws people attracted to randomness (and subsequently nurtures this predilection) is statistics.

BUSINESS VS. STATISTICAL MENTALITY

How many business forecasters have a degree in statistics?

Few, if any. The typical business forecaster is a business major, or possibly an IT person who was offered the position either because he or she “understands software, like this forecasting software here,” or has a reputation as a “numbers person.” Both qualities are helpful for forecasters, but, as I tried to illustrate above, neither one is the necessary core competency. Plus, of course, even if a few forecasters in a company are statisticians by training, their manager will be even less likely to have such a background. At best, the manager will come from finance, and finance people are maybe even less comfortable with randomness than business people.

At the same time, academic forecasting is becoming more and more statistical. Fifty years ago, people would run through a Box-Jenkins procedure, and even if ARIMA does have rigorous statistical underpinnings, it’s a univariate time-series method and can be mastered with only a little feeling for randomness. Nowadays, even supply-chain forecasters are looking at predictive distributions (Kolassa, 2016), and they are incorporating predictors whose effects are estimated, so they must deal with estimation uncertainty and its effects. Plus, forecasters are starting to see the implications of their statistics on topics like the appropriateness of their error metrics (Morlidge, 2015) – would you have thought that the shape of a probability distribution would have an impact on whether the mean absolute error (MAE) is a suitable error measure?
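A minimal illustration of that last point, with invented intermittent-style data (a sketch of the general phenomenon, not Morlidge's analysis): MAE is minimized by the median of the demand distribution, so for skewed demand a flat forecast of zero can "win" on MAE.

```python
import numpy as np

rng = np.random.default_rng(7)

# Skewed, intermittent-style demand: mostly zeros, occasional units sold.
demand = rng.poisson(0.6, size=100_000)

mean_fc = demand.mean()          # the "unbiased" forecast, about 0.6
median_fc = np.median(demand)    # here: 0.0

print("MAE of mean forecast:  ", np.mean(np.abs(demand - mean_fc)))    # ~0.66
print("MAE of median forecast:", np.mean(np.abs(demand - median_fc)))  # ~0.60
# The permanent zero forecast "wins" under MAE, yet it is useless for
# stocking decisions -- the shape of the distribution matters.
```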

Where does this leave us? On the one hand, you have business forecasting, which keeps on being dominated by business or computer-science majors. On the other hand, you have academic forecasting, which more and more embraces a probabilistic view of forecasting. The two are on divergent tracks. Little wonder that there is a gap, and that this gap is getting wider!



DATA SCIENCE AND HOPEFUL TRENDS

However, there are a few encouraging trends. Several years ago, Hal Varian, the Chief Economist at Google, expressed the opinion that statistics would be the sexiest job in the next decade. Sometimes I think he was referring to my job – other times, not so much. In any case, it seems to me like statisticians are leaving their image as boring number-crunchers and chart-drawers behind. And in the long run, this can only be good (a) for the student statistician pipeline, and (b) for the standing of and the respect accorded to established statisticians and their recommendations.

In addition, there is of course the current buzz phrase of “data science.” I have argued that a “data scientist” is a generalist who has four key competencies—statistics, programming, a particular subject matter, and communication (Kolassa, 2014)—which interact in complex ways, as shown in Figure 1.

I could have written the same above about forecasters. Why didn’t I? Simple: because statistics, or a feeling for randomness, is the least obvious and the least well understood of these four aspects. Everyone will doubtless agree that a quality forecaster should be good with software, understand the business he or she is forecasting, and be a top-notch communicator; but as I have argued above, the “statistical touch” is less obvious. That data science is so prevalent nowadays may help propagate statistical thinking in the business community and thus help to narrow the gap.

[Figure 1. The Data Scientist Venn Diagram: four overlapping circles – Communication, Statistics, Programming, and Business – whose intersections label archetypes such as The Accountant, The Data Nerd, The Hacker, The Stats Prof, The IT Guy, the R Core Team, The Good Consultant, and Drew Conway’s Data Scientist, with the perfect Data Scientist at the centre.]


Nevertheless, much remains to be done to repair the disconnect between academia and business in forecasting. Academics need to inculcate statistical thinking in their students, or nurture whatever seeds are already there. Forecasting should not be seen as an add-on to “real” MBA classes, but as a specialization in its own right. Finance people and accountants rightly point out that not everyone is qualified to gainsay them on the differences between closing their books according to US-GAAP or according to IFRS. Similarly, trained forecasters should be able to block “stories” from “hobby forecasters” that actively harm forecasting accuracy.

Conversely, business forecasters must accept their need to master statistics, at least to a certain extent. They may make the decision to go with a normal-distribution approximation to their error distribution – but this should be an informed decision, not one that is made because the normal distribution is the only one the forecaster has ever seen or, worse, because they’ve only ever seen safety-stock formulas that implicitly invoke a normal distribution but never clarified their preconditions.
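As a sketch of what such an informed decision involves, here is the textbook normal-theory safety-stock calculation with its precondition spelled out (the function below is hypothetical and assumes SciPy is available):

```python
from scipy.stats import norm

def normal_safety_stock(error_sd, service_level=0.95):
    """Textbook safety stock: z-quantile times the forecast-error sigma.

    Implicit precondition: forecast errors over the lead time are at
    least approximately normal. If the empirical error distribution is
    skewed or fat-tailed, this number will miss the target service level.
    """
    return norm.ppf(service_level) * error_sd

print(normal_safety_stock(error_sd=40.0))                      # ~65.8
print(normal_safety_stock(error_sd=40.0, service_level=0.99))  # ~93.1
```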

Thankfully, readers of Foresight are already ahead of the curve in this respect.


REFERENCES

Kolassa, S. (2016). Evaluating Predictive Count Data Distributions in Retail Sales Forecasting, International Journal of Forecasting, 32, 788–803.

Kolassa, S. (2016a). Sometimes It’s Better to Be Simple than Correct, Foresight, Issue 40 (Winter 2016), 20–26.

Kolassa, S. (2014). Data Science Without Knowledge of a Specific Topic, Is It Worth Pursuing as a Career? http://datascience.stackexchange.com/a/2406/2853

Morlidge, S. (2015). Measuring the Quality of Intermittent Demand Forecasts: It’s Worse than We’ve Thought! Foresight, Issue 37 (Spring 2015), 37–42.

Stephan Kolassa is a Research Expert at SAP Switzerland AG and Foresight’s Associate Editor. Together with Enno Siemsen, he is the author of Demand Forecasting for Managers (Business Expert Press, forthcoming, Fall 2016).

[email protected]