

General rights Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.
• You may freely distribute the URL identifying the publication in the public portal.

If you believe that this document breaches copyright, please contact us providing details, and we will remove access to the work immediately and investigate your claim.

Downloaded from orbit.dtu.dk on: Apr 09, 2018

The Statistical Value Chain - a Benchmarking Checklist for Decision Makers to Evaluate Decision Support Seen from a Statistical Point-Of-View

Herrmann, Ivan Tengbjerg; Henningsen, Geraldine; Wood, Christian D.; Blake, James I. R.; Mortensen, Jørgen Birk; Spliid, Henrik

Published in: International Journal of Decision Sciences

Publication date: 2013

Document Version: Publisher's PDF, also known as Version of record

Link back to DTU Orbit

Citation (APA): Herrmann, I. T., Henningsen, G., Wood, C. D., Blake, J. I. R., Mortensen, J. B., & Spliid, H. (2013). The Statistical Value Chain - a Benchmarking Checklist for Decision Makers to Evaluate Decision Support Seen from a Statistical Point-Of-View. International Journal of Decision Sciences, 4(2), 71-83.


The Statistical Value Chain – A Benchmarking Checklist for Decision Makers... 71

THE STATISTICAL VALUE CHAIN – A BENCHMARKING CHECKLIST FOR DECISION MAKERS TO EVALUATE DECISION SUPPORT SEEN FROM A STATISTICAL POINT-OF-VIEW

Ivan T. Herrmann*, Geraldine Henningsen*, Christian D. Wood**, James I. R. Blake**, Jørgen B. Mortensen*** and Henrik Spliid****

When decisions are made by decision makers (DMs) in private and public organizations, the DMs are supported by analysts (ANs) who provide decision support. The quality of the decision support provided by the AN therefore directly affects the quality of a DM’s decision. At present, many quantitative methods exist for evaluating uncertainty (for example, Monte Carlo simulation), and such methods work very well when the AN is in full control of the data collection and model-building processes. In many cases, however, the AN is not in control of these processes. In this article we develop a simple method that a DM can employ in order to evaluate the process of decision support from a statistical point-of-view. We call this approach the “Statistical Value Chain” (SVC): a consecutive benchmarking checklist with eight steps that can be used to evaluate decision support seen from a statistical point-of-view.

Keywords: Statistical Value Chain; Decision Theory; Benchmarking Checklist; Decision Makers; Evaluate; Decision Support; Statistics; Data Analysis; Uncertainty; Quality

IJDS: Vol. 4, No. 2, July-December 2013

* Climate Change and Sustainable Development, System Analysis Division, Department of Management Engineering, Technical University of Denmark, Frederiksborgvej 399, 4000 Roskilde, Denmark, corresponding author E-mail: [email protected]

** University of Southampton, SO17 1BJ, UK, E-mails: [email protected]; [email protected]
*** Department of Economics, Copenhagen University, Øster Farimagsgade 5, 1353 Copenhagen, Denmark, E-mail: [email protected]
**** Department of Informatics and Mathematical Modelling, Technical University of Denmark, Building 305, DK-2800, Kgs. Lyngby, Denmark, E-mail: [email protected]

INTRODUCTION

When decisions are made by managers and top managers (the “decision makers”, DMs), they are usually heavily supported by analysts (ANs) who provide decision support. This article addresses the need for a pragmatic checklist with which to gauge the quality of information supplied to DMs by ANs. Because the quality of the information supplied by the AN directly affects the quality of a DM’s decision, a DM should evaluate the quality of decision support based on well-defined criteria. Criteria of significance include the level of uncertainty in the decision-support information, the cost of the analysis, and the time taken to produce the decision support. Improvements in just one of these criteria can lead to improvements in others, since there is a dependency between them. The focus of this article is on the reduction of uncertainty for DMs.

Uncertainty in decision support is often evaluated through the use of simulation tools, such as the Monte Carlo approach or similar methods. Such methods work well when data, model parameters, and models are correctly defined, extracted, and (if necessary) adjusted. However, these approaches cannot compensate for corrupted data, bad data extraction, incorrectly adjusted parameters, or unclear definitions in models.
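As a minimal sketch of the simulation approach referred to above, the following propagates assumed input distributions through a toy emissions model with Monte Carlo draws. The model, emission factors, parameter values, and variable names are hypothetical illustrations, not figures from the biodiesel case.

```python
import random
import statistics

random.seed(42)  # reproducible draws for illustration

def emissions_model(fertiliser_kg, diesel_l):
    # Toy linear model; the emission factors are illustrative, not case data.
    return 5.5 * fertiliser_kg + 2.7 * diesel_l

# Step (1): assumed (expert-guess) input distributions, hypothetical values.
draws = sorted(
    emissions_model(random.gauss(10.0, 1.5), random.gauss(40.0, 4.0))
    for _ in range(10_000)
)

# Step (2): Monte Carlo summary of the propagated uncertainty.
mean_emissions = statistics.fmean(draws)
low, high = draws[250], draws[9750]  # approximate 95% interval
print(f"mean ~ {mean_emissions:.1f}, 95% interval ~ [{low:.1f}, {high:.1f}]")
```

Note that such a simulation quantifies only the uncertainty that is put into it: if the inputs or the model itself are wrong, the interval is wrong too, which is exactly the limitation the text describes.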

This article therefore provides a methodology for evaluating uncertainty in the decision-support process from a statistical point-of-view. This approach has been named “the Statistical Value Chain” (SVC). The SVC can be considered the “correct” way of handling data in a cradle-to-grave perspective, that is, the process from the extraction of raw data to its use for decision support.


72 Ivan T. Herrmann, Geraldine Henningsen, Christian D. Wood, James I. R. Blake et al.

It is the goal of this article to define and develop the SVC, which in addition to allowing uncertainty evaluation of decision support can also be considered a best practice for an AN in providing that decision support. By benchmarking real-life decision-support processes using the SVC, we can trace the consequences of practices that, from a statistical point-of-view, lead to uncertainty, bias, and errors in the analysis used for decision support. The steps of the SVC are developed and explained here.

The authors of the present article are not aware of a similar method developed for decision support; however, each step in the SVC is based on methods and approaches that are well-defined in the statistical literature. As such, the value of the SVC is that aggregated knowledge from different areas of the statistical literature is compiled into eight simple steps that can be employed as a benchmarking checklist for evaluating the quality of decision-support information.

We demonstrate the SVC through an applied decision-support case, which took place in Denmark from 2009-2012, when three large organizations requested a sustainability assessment of the production and use of biodiesel. This case is described in much greater detail by Herrmann et al. (2012). In the rest of the article, this decision-support case is referred to as “the biodiesel case”.

CASE STUDY AND BACKGROUND

This article builds on experiences from a large biodiesel project initiated in 2008 by three organizations (Novozymes A/S, Emmelev A/S, and the Danish National Advanced Technology Foundation). The goal of the project was to develop a new enzymatic biodiesel transesterification process that theoretically should be superior to conventional transesterification. In the present article we consider these three organizations as being the DMs, or just a single DM. The Technical University of Denmark (DTU) Division of Quantitative Sustainability Assessment (QSA) was asked to conduct a sustainability assessment of the biodiesel, and is considered the AN providing decision support.

From the beginning of the project, it was stipulated that the sustainability assessment should be based on life cycle assessment (LCA) methodology. LCA is a quantitative approach used to assess the sustainability of products, technologies, and services based on different environmental impacts (Wenzel et al., 1997; Finnveden et al., 2009; EC-JRC, 2010). In LCA, all inputs of energy and raw materials and outputs of emissions in a cradle-to-grave perspective of a product are quantified and summarized into one or a number of environmental impact scores, for example greenhouse gas emissions. As suggested by several researchers (including Weidema and Wesnæs, 1996; Heijungs and Frischknecht, 2005; and Hung and Ma, 2009), the typical recommendation for assessing uncertainty in an LCA is based on a two-step procedure: (1) an expert guess on a distribution/uncertainty range, and (2) applying the Monte Carlo uncertainty simulation tool or similar tools. This works well when the statistical analysis is correctly carried out; however, expert guesses, Monte Carlo, and similar uncertainty simulation tools do not reveal when mistakes have been introduced into the analytical process (Gy, 1998; Petersen et al., 2005). In the biodiesel case, it has been demonstrated that there was considerable uncertainty involved prior to decision support because of errors in the software tools used for the analysis (Herrmann et al., 2013a). Many other similar examples of erroneous and uncertain analyses are presented in the literature, for example by Makridakis (1998), Bezdek et al. (2002), Nielsen et al. (2007), and Mathiesen et al. (2009).

These cases support the need for a robust methodology designed to evaluate and reduce the level of uncertainty in decision support, and this is the motivation for developing the Statistical Value Chain. Here, we define the level of uncertainty in decision support with the following symbolic expression (1):

f (A, B, C) = U (1)

“U” is the uncertainty level of the statement delivered by the AN. “A” represents the resources available to the AN with regard to both time and capital. By increasing the resources available to the AN, we can reduce the uncertainty level (and reducing the resources has the opposite effect, of course). “B” represents the size of the space (or scope) that is investigated. Given that there is a fixed amount of resources available to the AN, the uncertainty level can therefore be reduced by decreasing the size of the space that is investigated.



Finally, “C” represents the capability of the AN, and by increasing the capability of the AN the uncertainty level is also reduced. The uncertainty level of a given decision support can then be higher than, equal to, or lower than what the DM will accept, as outlined in symbolic expression (2):

U > α ∨ U = α ∨ U < α (2)

“α” is the accepted uncertainty level set by the DM. It is well known from the literature that different DMs have different risk attitudes, and hence different accepted uncertainty levels (Royal Society, 1992; Farmer et al., 1997; Simonet et al., 1997; Estrin et al., 2008). By keeping “A” and “B” constant, we can evaluate the uncertainty of an AN’s decision support. The relation given in symbolic expression (1) is also given in the statistical literature, by Cochran (1977), Gy (1998), Crawley (2005), and Petersen et al. (2005), among others. As a theoretical example we can consider two different questions concerning biofuels, keeping “A” and “C” constant:

Q1. What is the environmental impact of producing 10 tons of bioethanol in a specific company in Brazil today, based on sugar cane?

Q2. What is the environmental impact from the total Brazilian production of bioethanol today?

The two questions differ significantly in terms of the size of the space under investigation. Q1 focuses on one specific company in Brazil, while Q2 looks at all companies in Brazil that produce bioethanol (>100). The scope of Q2 is therefore much larger than that of Q1. If the DM wants these questions to be answered with the same level of certainty, then the AN requires significantly more resources for data gathering for Q2 than for Q1. On the other hand, if the resources for data gathering are fixed, then the uncertainty of the answer to Q2 will increase significantly compared to Q1. The key assumption of this article is that the three variables A, B, and C basically drive the uncertainty. “B” is elaborated in Herrmann et al. (2013b), where a taxonomy has been developed that segregates and ranks the size of the space (“B”) into 64 classes. “A” is given by the DM (or paying party), while “C” is the quality of the process that the AN uses for decision support (the SVC). In general we use the expression ceteris paribus to keep everything else constant except the quality of “C”. By benchmarking real-life decision-support cases of “C” with the SVC, we can evaluate the uncertainty (or reliability) of the given decision support.
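Because the article leaves the function f in expression (1) unspecified, the sketch below uses a deliberately crude stand-in, U = B / (A·C), chosen only because it reproduces the qualitative behaviour described above: U falls as resources A or capability C grow, and rises with scope B. The functional form, the numeric inputs, and the acceptance check against the DM's accepted level (called alpha here) are all illustrative assumptions.

```python
def uncertainty_level(a: float, b: float, c: float) -> float:
    """Toy stand-in for f(A, B, C) = U: scope B inflates uncertainty,
    while resources A and analyst capability C deflate it."""
    return b / (a * c)

def acceptable(u: float, alpha: float) -> bool:
    """Is the uncertainty level U at or below the DM's accepted level?"""
    return u <= alpha

# Q1 (one company) versus Q2 (>100 companies), with the same resources
# and capability; the scope values are arbitrary stand-ins.
u_q1 = uncertainty_level(a=1.0, b=1.0, c=1.0)
u_q2 = uncertainty_level(a=1.0, b=100.0, c=1.0)
print(u_q1, u_q2, acceptable(u_q1, alpha=2.0), acceptable(u_q2, alpha=2.0))
```

Under this toy form, holding A and C fixed while widening the scope from Q1 to Q2 drives U above the accepted level, mirroring the trade-off the paragraph describes.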

The inspiration for naming our concept the “Statistical Value Chain” comes from both the statistical literature and the business literature. The concept of an analytical chain is used in the statistical literature by Petersen et al. (2005) in “Representative Sampling for Reliable Data Analysis: Theory of Sampling”. In the business literature, the concept of a value chain is frequently used; it was developed by Michael Porter in 1985 in his bestselling book “Competitive Advantage: Creating and Sustaining Superior Performance”. From these concepts the idea of the Statistical Value Chain arose.

THE STATISTICAL VALUE CHAIN (SVC), A BENCHMARKING CHECKLIST FOR EVALUATING DECISION SUPPORT

The SVC is derived from Gy (1998) and Petersen et al. (2005), and also from more classical statistical and probability theory, such as Pitman (1993), Johnson (2005), and Montgomery (2005). It is not the goal of this article to describe the steps of the statistical value chain in detail; each step is described thoroughly in the literature, and relevant references are provided. The SVC can be operated by the DM as a benchmarking checklist (from a statistical perspective) of the process that the AN uses to provide decision support. If the AN did not apply the SVC, then the DM might ask what method was used instead to derive data, and how the data were aggregated to a final decision-support level.

From a decision-making point of view, decision support makes sense only when there are options to choose between. With no options, decision making and decision support are pointless (Lindley, 1985). Describing the likely impacts or consequences of different options or choices can be done only by collecting retrospective data and then interpreting these data in a prospective manner, and a reduction in the quality of retrospective data collection will in general lead to a troublesome prospective assessment. In other words, in a decision-support context there can be no sound scientific method of compensating for poor retrospective data collection and analysis, and a deterioration in quality in each step of the SVC is likely to accumulate through the statistical value chain in terms of increased uncertainty and bias. Ultimately, this can make the final decision support problematic. The SVC is illustrated in Figure 1.



Based on the SVC, the DM can ask for each step in the SVC whether the AN has followed suitable statistical guidelines. If not, what has the AN done instead? Each step in the SVC is essentially a check-box. The criterion for checking “yes” is simply that the DM considers the given step in the SVC, based on the outline in this article, to be sufficiently fulfilled (as given in symbolic expression 2).

We assume that it is a matter of fact what the state of the world, here denoted by S_t, is at any given point in time, where t is an index in time. It is also assumed that the state of the physical world can be described as the location and quantity of matter and energy in time and space.

The state of the world at S_{t-1}, ..., S_{t-m} (retrospective) is unchangeable, although prospectively (S_{t+1}, ..., S_{t+n}) it is possible to influence the state of the world. However, stringent rules for induction, deduction, and abduction must be applied in order to achieve the clearest picture of the state of the world and to understand how we can affect this state, that is, change it to a more desirable state (at a later point in time). Statistics is the (applied) science of deduction, induction, and abduction, and therefore we assume that statistical analysis offers an acceptable benchmark for evaluating the decision-support process.

Initially, we distinguish between (1) the physical world, which is the location and quantity of matter and energy in time and space, and (2) value, the worth placed on that same physical entity by one or more DMs. In the following statistical value chain, steps 1 to 7 are concerned only with the physical properties of the world.

Step 1 (R-DITASP): Defining in Time and Spacethe Population(s) that is/are Under Investigation

For information about the world we must collect empirical data. Obviously we cannot collect data on the entire world, so we need to collect data on the population(s) about which we are making inquiries. The starting point for this data-collection procedure is to define (or outline) these populations with regard to both space and time: for example, a specific corn field in the present year, all soybean fields in a given country in 2006, or a batch of printed circuit boards in 2028. In most decision-support contexts there are many populations from which to collect data, and we refer to this collection as a system.
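A step-1 population must be pinned down in both space and time before any data are collected. One minimal way to make that delimitation explicit is sketched below; the field names and the Danish locations are illustrative assumptions, not part of the paper's method.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Population:
    """A step-1 population, delimited in space and time (names illustrative)."""
    name: str
    location: str  # spatial delimitation
    year: int      # temporal delimitation

# A "system" is simply the collection of populations under investigation.
system = [
    Population("Rapeseed production in the field", "Denmark", 2010),
    Population("Rapeseed meal", "Denmark", 2010),
    Population("Alcohol", "Denmark", 2010),
]
print(len(system), "populations defined for reference year", system[0].year)
```

Writing the populations down in this form forces the time and space boundaries to be stated explicitly, which is exactly the checklist question the DM asks in step 1.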

Figure 2 shows a simplified model of the biodiesel case system that was investigated. The system consists of different populations, including “Rapeseed production in the field”, “Rapeseed meal”, and “Alcohol”, with a reference year of 2010.

Figure 1: The Statistical Value Chain (SVC). Based on the SVC, the DM can ask for each step in the SVC whether the AN has followed suitable statistical guidelines. If not, what has the AN done instead? Each step in the SVC is essentially a check-box. The criterion for checking “yes” is simply that the DM considers the given step in the SVC, based on the outline in this article, to be sufficiently fulfilled (as given in symbolic expression 2)



To see the full biodiesel system, refer to Herrmannet al. (2012).

Based on step 1 in the SVC, the DM could ask, “Who has defined the populations to be investigated? Which populations are considered with regard to time and space? If these are not well-defined, what then?”

Step 2 (R-TOSOFI): Theory of Sampling (TOS) orFull Investigation

Once we have defined the system about which we want to make inquiries, there are two options for seeking information regarding these populations: (a) seeking full information (i.e., examining all units of the populations in the entire system), or (b) using representative sampling for each population in the system. The latter method, based on the Theory of Sampling (TOS), is a statistical option that, if conducted properly, in principle conserves resources compared to the first procedure, where all units of the given population must be investigated fully (Cochran, 1977). Only properly applied sampling procedures, as described by TOS, can lead to representative sampling of the different populations in the product system. The starting point of any sampling procedure is outlined in step 1 of the SVC, and the sample size (and hence the resources needed) depends on (1) how accurate the DM needs the results to be, (2) the size of the population, and (3) the true variation of the population. To achieve representativeness (“unbiasedness” and accuracy), it is important that all items of the sample be randomly chosen from the population/system, meaning that they have equal probability of being sampled. For example, when sampling from a batch of printed circuit boards in (e.g., four full) containers, it is not a correct sampling procedure to pick the ten circuit boards closest to the doors of the first container. One possible correct procedure for sampling from such a batch would be to label all the circuit boards with consecutive numbers and then to draw randomly from these numbers. Correct sampling is not a trivial task, and both Gy (1998) and Petersen et al. (2005) conclude that there can be grave errors in applied sampling. As noted by Petersen et al., using incorrect sampling procedures can potentially corrupt the rest of the statistical value chain used for decision support: “Without representativity in this first stage in the entire analytical chain, there is no way of ever evaluating the degree of sampling bias and sampling errors embedded in the final analytical results subjected to data analysis. It has been known for more than 50 years that the combined sampling errors typically amount to 10-100, or even as much as 100-1000 times the specific analytical errors”.
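The circuit-board procedure described above can be sketched directly: number every board in the batch and draw the sample uniformly at random, so that each board has an equal probability of selection. The batch size and sample size below are arbitrary illustrations.

```python
import random

random.seed(7)  # reproducible here; drop the seed for a real draw

batch_size = 4 * 500  # e.g. four containers of 500 boards (illustrative)
boards = range(1, batch_size + 1)  # label all boards with consecutive numbers

# Simple random sampling without replacement: every board equally likely.
sample = random.sample(boards, k=10)
print(sorted(sample))
```

Contrast this with grabbing the ten boards nearest the first container's doors: that convenience sample gives some boards a selection probability of one and most boards a probability of zero, which is precisely the bias TOS is designed to avoid.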

For the biodiesel case, the AN was never in control of the sampling process and so had no way to ensure that the principle of randomization had been applied correctly when extracting data from the different populations. The primary data source was various databases with environmental data on the different populations identified in step 1. Some of the data extracted from these databases were adequate (based on a specific accepted uncertainty level, according to symbolic expression 2); however, some of the data were rather old, and clearly not representative of the reference year (2010). Given the resources available to the AN, there was no other option than to use these data as a “best guess” and accept increased uncertainty in the final results.

Based on step 2 in the SVC, the DM could ask, “By whom and how have the data extractions been undertaken? Has full investigation been used, or has TOS been applied with a proper randomization process? If TOS has been applied, what percentage of the population has been investigated? If neither full investigation nor TOS, what then?”

Step 3 (R-DES): Descriptive Statistics

Descriptive statistics is about computing averages, estimating variation, minima and maxima, distributions, confidence intervals, etc. for each population investigated. See Johnson (2005) for further information. This step is to some degree trivial, and its quality is closely linked with the AN’s capability to undertake these computations (Gy, 1998).

Figure 2: A simplified model of the analysed system for production and combustion of biodiesel for passenger-car transport in Denmark based on rapeseed oil. The reference year is 2010. To see the full system, refer to Herrmann et al. (2012)
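The step-3 computations need nothing beyond the standard library; a minimal sketch is below. The data values are invented purely to show the mechanics, and the normal-approximation confidence interval is one simple choice among several.

```python
import statistics
from math import sqrt

# Hypothetical measurements from one population (units arbitrary).
data = [52.1, 57.3, 55.0, 58.2, 54.7, 56.1, 53.9, 55.8]

n = len(data)
mean = statistics.fmean(data)
sd = statistics.stdev(data)       # sample standard deviation
half_width = 1.96 * sd / sqrt(n)  # normal-approximation 95% CI half-width

print(f"mean={mean:.2f} sd={sd:.2f} min={min(data)} max={max(data)}")
print(f"95% CI ~ [{mean - half_width:.2f}, {mean + half_width:.2f}]")
```

For a sample this small, a t-based interval would be slightly wider and more defensible; the point here is only that the step produces explicit summary numbers the DM can inspect.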

In the biodiesel case, the AN did not do such calculations, since all data in the databases were already aggregated. In the LCA software tool “SimaPro” (Pre-sustainability, 2013), an uncertainty range for the different data points is given, and five different distributions are available: “No distribution”, “Range”, “Triangular”, “Normal distribution”, and “Lognormal distribution”. According to Pre-sustainability (2013), these uncertainty distribution estimates are based largely on literature studies or expert estimations. For the final decision support to the DM in the biodiesel case, the AN took the approach that the estimate of the biodiesel environmental impact was at the lowest possible uncertainty level, given the resources available for the project and the capability of the AN.

Based on step 3 in the SVC, the DM could ask, “How and by whom was the descriptive statistics step undertaken?”

Step 4 (R-SAP): The Retrospective Summary ofPopulation(s) Investigated

“Prior to determining where we are going, we must first ascertain from whence we came” (A. Lincoln). As long as a given assessment can be categorized as retrospective, we assume that the assessment is just a matter of accounting, and that based on the previous steps this accounting is more or less straightforward and covers the full system, that is, all populations. This process is analogous to a company’s financial statement. Gowthorpe (2003) and Andersen et al. (2005) describe the process of making a financial statement and its basic assumptions. We assume that the better (more accurate and unbiased) the accounting, the better it can serve as a starting point for prospective assessments of a system. We also assume that the better the AN is equipped to investigate the retrospective system, the better the AN can provide prospective assessments, analogous to issues treated in “Financial Statement Analysis” (Wild, 2007). In the biodiesel case, the AN calculated that the environmental impact from driving 1000 km in a diesel-engine car running on biodiesel in 2010 (retrospective) resulted in greenhouse gas emissions of 57 kg (Herrmann et al., 2012).

Based on step 4 in the SVC, the DM could ask, “Who performed the retrospective investigation of the population(s) investigated? What are the ‘retrospective’ numbers that support the estimations of prospective events in the following steps of the SVC?”

Step 5 (P-ISMF): Inferential Statistics Model forForecasting

The prospective assessment should be based on a clear model, and inferential statistics is useful for developing such a model. Both the model and the prospective assessment must be based on the information and data gathered in the past (retrospectively). Three prospective assessment methods are outlined here.

(a) Naïve forecast method. The simplest method for making a forecast is the naïve forecast method (Makridakis, 1998), which assumes that the best forecast for the future is the current value (of a given time series). However, in many cases it is unlikely that a system will remain static over a (longer) time period. Hence, using the naïve forecast method can lead to inaccuracy and bias compared to the other methods described below. Different forces can affect the system, and these can be divided into exogenous and endogenous forces. Exogenous forces are those that the DM cannot (or at least, not easily) influence; they are imposed from “the outside”. Endogenous forces are controlled by the DM by making various alterations to the system. In the biodiesel case we regarded the choice of alcohol type as an endogenous force on the system, since the DM (the company owner) could choose to use different types of alcohol, such as bioethanol or petrochemical methanol. However, we regarded political forces in the biodiesel system as exogenous; political changes have affected the prices, demand, and production methods for biodiesel in Europe since 2010. As such, assuming that the environmental impact of the illustrated production system (Figure 2) is the same today in 2013 as it was in 2010 is probably incorrect.
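The naïve forecast method amounts to a single rule: the forecast for the next period is the last observed value. In the sketch below, the 57 kg figure for 2010 is the retrospective result reported in the case; the earlier years are invented for illustration.

```python
def naive_forecast(series):
    """Naive forecast: the best guess for the next period is the latest value."""
    if not series:
        raise ValueError("need at least one observation")
    return series[-1]

# Greenhouse gas emissions, kg per 1000 km of driving, by year.
# 2010 is the case's retrospective value; 2008-2009 are hypothetical.
emissions = {2008: 61.0, 2009: 59.0, 2010: 57.0}
forecast_2011 = naive_forecast(list(emissions.values()))
print(forecast_2011)
```

The method's weakness is visible in the same sketch: it ignores the downward trend in the (hypothetical) earlier years and every exogenous or endogenous force, which is why the text treats it as valid only while the system stays effectively static.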

(b) Time series. It can be possible to deduce how exogenous and endogenous forces impact the system by studying time series, at least if time series have been recorded for both the system and the forces that might impact the system. Based on this information, we can attempt to make forecasts and trend analyses (Makridakis, 1998). However, the study of time series can be dangerously misleading. As an example, ten different exogenous forces might affect a system, but five of these forces might be unknown to the AN, and only two of the “known forces” might have reliable time series available. If the AN makes a correlation analysis based on a response variable (such as greenhouse gas emissions) for the system and the time series for only the two known exogenous forces, the resulting forecast can be biased and misleading; after all, it omits eight of the ten potential forces affecting the system. Also, an observed correlation between different time series does not necessarily indicate that there is a causal relation. We see no way to evaluate whether an observed correlation is also an expression of an actual causality other than sound human judgment.
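The hazard of modelling only two of ten forces can be shown with a toy simulation: if a response is driven equally by ten independent forces and the analyst's model uses just two of them, roughly 80% of the variance stays unexplained. All the numbers below are synthetic, and the equal-weight setup is an illustrative assumption.

```python
import random
import statistics

random.seed(1)
n_obs = 500

# Ten independent, equally important exogenous forces (synthetic series).
forces = [[random.gauss(0.0, 1.0) for _ in range(n_obs)] for _ in range(10)]
response = [sum(f[t] for f in forces) for t in range(n_obs)]

# The AN has reliable time series for only two of the known forces.
prediction = [forces[0][t] + forces[1][t] for t in range(n_obs)]
residuals = [response[t] - prediction[t] for t in range(n_obs)]

unexplained = statistics.variance(residuals) / statistics.variance(response)
print(f"share of variance left unexplained ~ {unexplained:.2f}")
```

Even with the two included forces modelled with exactly the right coefficients, the omitted eight forces dominate the residual, so any forecast built on this model carries most of the system's variability as unquantified error.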

(c) Explanatory model. Information on the different forces affecting a system might (in many cases) already be summarized and available through the literature or in the mind as a memory, although both can be biased and inaccurate, as pointed out by Kahneman (2013). Given this, we do not (necessarily) need to undertake time series studies ourselves to investigate the impact of different forces on the product system. In the following, we use “explanatory variables” interchangeably with “forces”. As an example, take the endogenous force “alcohol type” in the biodiesel system shown in Figure 2. For this endogenous force, information (stoichiometry) was already available in the literature, which we used to assess how this force could impact the response variable (greenhouse gas emissions) by changing the alcohol-type input to the biodiesel system. Hence, we can (also) produce forecasts through the use of explanatory models, which consist of explanatory variables and response variables (Montgomery, 2005). A breakdown of the explanatory variables can be useful for achieving a better result from the forecasting process. The following breakdown is not necessarily a complete list of possibilities, but rather a suggestion for what can be considered at least a starting point.

(i) Explanatory variables can be separated into four categories, the “(un)knowns”: “The known knowns, the known unknowns, the unknown knowns, and the unknown unknowns” (Herrmann et al., 2013b). This distinction between different explanatory variables is partly also discussed by Walker et al. (2003) and Montgomery (2005), who outline an uncertainty continuum going from “statistical uncertainty” to “total ignorance”.

(ii) Both endogenous and exogenous explanatory variables can affect a system, so it is important to consider both types when defining the baseline (and alternatives) while forecasting. In other words, not considering important explanatory variables potentially leads to increased uncertainty in the prospective assessment, as these variables do affect and change the system.

(iii) The PESTEL framework (Johnson et al., 2005) can be used as a further breakdown of explanatory forces or variables in the explanatory model. PESTEL is an acronym for the political, economic, sociocultural, technological, environmental, and legal variables. For further information regarding the PESTEL framework, see Johnson et al. (2005).

One of the most important factors when forecasting for decision support is that the explanatory variables must be adjusted correctly, and those that impact an investigated system must not be missed. If variables that induce changes in a response variable describing a system are missed, the result can be bias or too much “weight” on the applied explanatory variables. As an example, take the debate on indirect land use change (ILUC). It can be misleading when land use changes are explained as driven solely by the increased production of biofuels in other countries, as seems to be the case in articles by Searchinger et al. (2008) and Schmidt (2010). Other variables could also drive land use changes. Kline et al. (2008) list a range of other possible explanatory variables beyond a single-crop market that can potentially impact this


analysis. Cultural, technological, biophysical, and economic forces could, for example, also explain changes in the land use response variable. Some of these explanatory variables can also change drastically over time (that is, the coefficient used for characterizing a given explanatory variable can change), for example the already-mentioned “political” explanatory variable. In the biodiesel case a very simple inferential statistical model was employed; for details see Herrmann et al. (2013c).
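The omitted-variable problem described above can be made concrete with a toy calculation (all numbers invented): if the response truly depends on two correlated explanatory variables and one is left out, the coefficient of the remaining variable absorbs part of the omitted effect and carries too much “weight”.

```python
# Toy illustration (invented numbers): the response depends on two
# correlated explanatory variables, but only one is included in the model.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]   # included driver (e.g. biofuel production)
x2 = [1.0, 1.0, 2.0, 2.0, 3.0]   # omitted driver, correlated with x1
y = [2.0 * a + 3.0 * b for a, b in zip(x1, x2)]   # true model: y = 2*x1 + 3*x2

# Simple regression of y on x1 alone (x2 omitted).
m1 = sum(x1) / len(x1)
my = sum(y) / len(y)
slope = (sum((a - m1) * (c - my) for a, c in zip(x1, y))
         / sum((a - m1) ** 2 for a in x1))
print(round(slope, 6))  # 3.5, although the true coefficient on x1 is 2.0
```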

Based on step 5 in the SVC the DM could ask, “Who performed the prospective assessment? What kind of model was employed to perform the prospective assessment?”

Step 6 (P-Baseline): Developing the Baseline for Prospective Systems

The first step for constructing a baseline should be characterized by one question: “What will happen if the change under consideration is not introduced?” No forces that could potentially affect the baseline should be ignored.

In the biodiesel case, the forecast of the baseline was created by assuming that the latest retrospective data point (57 kg of greenhouse gas emissions per 1000 km of driving in an ordinary diesel-engine car) would offer the best estimate of this response variable for a similar prospective event, given the resources that were available to the AN (the naïve forecast method). However, it was also assumed that the forecast would be valid only for a relatively short period of time, in order to avoid the likely increased uncertainty from prospective changes in the biodiesel system as time progresses.
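The naïve forecast method mentioned here amounts to carrying the latest retrospective observation forward as the short-horizon forecast. A minimal sketch (the earlier values are invented; 57.0 is the figure from the case):

```python
# The naïve forecast: carry the latest retrospective observation forward.
history = [61.0, 59.0, 58.0, 57.0]   # kg GHG per 1000 km, most recent last

def naive_forecast(series):
    """Use the last observed value as the forecast for the next period."""
    return series[-1]

print(naive_forecast(history))  # 57.0
```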

Based on step 6 in the SVC the DM could ask, “Who developed the baseline study? Have all forces that could affect the system been included? How has the baseline for prospective events been developed?”

Step 7 (P-Alternatives): Developing Alternatives to the Baseline, Rooted in the P-ISMF Model

Any relevant alternatives to the baseline study are developed in step 7 of the SVC. As with the baseline study (step 6), important exogenous and endogenous forces influencing the alternatives should not be ignored. The forces that affected the baseline might not affect the alternatives in the same

way. For example, in the biodiesel case the prospective baseline would be based on petrochemical alcohol used in the transesterification process, while an alternative to the baseline would be to use biomass-based alcohol, such as bioethanol. A tax on bioethanol would then affect the alternative differently. The difference between the baseline study and the alternatives describes the potential for change.

In the biodiesel case, different alternatives were applied in addition to the baseline. These alternatives were based on a change to (1) the type of alcohol, (2) the ratio of fertilizers used (given by fertilizer/manure and assuming a fixed amount of “NPK” applied to the agricultural soil), and (3) the transport distance. Based on these changes in the explanatory variables for the biodiesel system, changes in the response variables could be observed, including changes in the greenhouse gas emissions from the biodiesel system. One of these alternatives gave the highest potential improvement for the response variable of the biodiesel system that was modeled, dropping from the baseline of 57 kg of CO2 emission per 1000 km driven in an ordinary diesel-engine car to only 31 kg per 1000 km. Whether this change is considered positive or negative in the biodiesel system depends on the value that the DM places on CO2 emission. Putting a value on such factors is the final step (step 8, below) in the SVC.
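Comparing alternatives against the baseline response can be sketched as follows. The 57 and 31 kg figures come from the case above; the other alternative values, and the labeling of which alternative achieves the 31 kg result, are invented placeholders.

```python
# Comparing alternatives to the baseline (57.0 and 31.0 are from the case;
# the other alternative values are invented placeholders).
baseline = 57.0   # kg CO2 per 1000 km for the prospective baseline
alternatives = {"alternative 1": 31.0, "alternative 2": 48.0, "alternative 3": 54.0}

# Potential for change = baseline response minus the alternative's response.
potential = {name: baseline - value for name, value in alternatives.items()}
best = max(potential, key=potential.get)
print(best, potential[best])  # alternative 1 26.0
```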

Based on step 7 in the SVC the DM could ask, “Who developed the different alternatives? Have all forces that could affect the alternatives been included? How were the different alternatives developed?”

Step 8 (Valuation): Putting a Value on the Physical Properties Given in Steps 1-7 in the SVC

Steps 1 to 7 are concerned only with strictly physical properties of the world. In step 8, “valuation” is considered. Valuation in this article is understood as the process of the DM placing a value on the physical entities treated in steps 1-7 above. We assume in this article that valuation takes place in collaboration with all DMs in a society that is democratic. Further, we assume that these entities can be both tangible and intangible. In the following section we refer to such tangible and intangible entities simply as “goods”. It is beyond the scope of this article to compare different methods to assign values to these goods; however, three problems


(points I, II, and III below) recognized in the economic literature regarding the valuation of goods indicate why valuation from an economic perspective is not trivial, and why without due diligence this step can lead to (increased) uncertainty if it is included by the AN and used for decision support.

The starting point of valuation from an economic perspective is an economy where no market failures take place, and as a result resources/goods are allocated in a Pareto Optimal (PO) way. Pareto optimality means that resources are allocated such that it is not possible to reallocate them in a way that makes someone better off without making someone else worse off (Lindeneg, 1998).

I. When a transaction in a perfect economy (without market failures) takes place, then a price is established on a good, and this is the real price of the good. Before this transaction takes place (and potentially afterwards) the owner (or any agent in the market) might, for strategic reasons, claim that the good is worth much more to the owner (or to other agents in the market) than it was actually traded for (Lindeneg, 1998; Johnson et al., 2005). Such strategic claims are of little interest in this article. Values not adopted from actual transactions have a significant risk of being biased and uncertain.

II. Transactions of non-market goods fall victim to market failures. For example, environmental problems can be considered transactions that happen outside of a perfect market (Hanley et al., 2007). Many methods exist to determine the valuation of non-market goods, but such methods will be inaccurate and biased if sufficient resources and care are not taken when applying them.

III. Values placed on different physical entities change repeatedly over time. If such changes are not reflected when the value of a particular good is given, then this can also lead to increased uncertainty and bias.

Based on these factors, we find it reasonable to assume that valuations which are not adopted from a perfect market are resource-intensive (given that a “reasonable”6 low uncertainty level is intended), and can potentially lead to bias, and hence to incorrect decision support. Cost-benefit analysis

(CBA) can be used to assess the value of a given project even when the market fails. Different methods for CBA are available, such as avoided-cost analysis, social cost-benefit analysis (SCBA), cost-effectiveness analysis (CEA), and scoring methods. Concepts such as willingness-to-pay (WTP), willingness-to-accept-compensation (WTAC), and similar measures are used in the valuation of non-market goods in a CBA. For further information, see Møller (1996), Lindeneg (1998), and Hanley et al. (2007).
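As a purely illustrative sketch of the WTP idea (all numbers invented, and not a method prescribed by the cited literature beyond the basic concept), a WTP-based cost-benefit comparison might aggregate stated willingness-to-pay over the relevant population and subtract the project cost:

```python
# Hypothetical willingness-to-pay (WTP) cost-benefit sketch: aggregate mean
# stated WTP over the affected population and compare with the project cost.
wtp_sample = [12.0, 8.0, 15.0, 0.0, 10.0]   # stated WTP per person (EUR)
population = 1_000_000                       # persons represented by the sample
project_cost = 7_000_000.0                   # EUR

mean_wtp = sum(wtp_sample) / len(wtp_sample)   # 9.0 EUR per person
total_benefit = mean_wtp * population          # 9,000,000 EUR
net_benefit = total_benefit - project_cost
print(net_benefit)  # 2000000.0 (positive net benefit in this toy example)
```

As points I-III above warn, such an estimate inherits the bias and uncertainty of the stated WTP figures and of how representative the sample is.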

In the biodiesel case, the AN refrained from performing a valuation of the greenhouse gas emission as one of the response variables that we used for the system. Refraining from putting a value on the physical property (greenhouse gas emission) clearly has a downside in that the DM must explicitly (or implicitly) make the valuation of the changes proposed in order to consider trade-offs between other response variables (such as respiratory inorganics emissions or the cost of developing and marketing new processes) and the potential reduction of greenhouse gas emission from the biodiesel system.

Based on step 8 in the SVC the DM could ask, “Who has undertaken the valuation step? Has the valuation been investigated in a representative way (if the society is democratic)? In general, how have the data been obtained for the valuation step? What percentage of society has been asked or investigated? If asked, how have the participants been asked?”

DEMONSTRATION OF THE SVC IN DIFFERENT DECISION-SUPPORT CONTEXTS

In the following we include three concrete cases of decision support that did not follow the statistical guidelines given above, and how that decision support resulted in significant consequences for the DM or DMs.

Exploitation of Mining Blocks, Based on Gy (1998)

This case took place in 1982 in Australia, at what was the world’s second-largest copper mine. Extremely biased samples were taken by hand from cones of blast-hole cuttings, with a consequent annual loss of some $8 million. In this case, the identified error seems to be in step 2 (R-TOSOFI) in the SVC. That $8 million marks the difference


between “business-as-usual” and the alternative (and, seen from a statistical point-of-view, improved) way of handling data in steps 6 (P-Baseline), 7 (P-Alternatives), and 8 (Valuation) in the SVC in a retrospective perspective.

Forecasting IBM’s Sales, Based on Makridakis (1998)

In 1984, IBM chairman John Opel announced that sales would double to $100 billion by 1990, while profits would continue to exhibit exponential growth. Based on this forecast, IBM hired more than 100,000 new personnel. In this article we interpret this as happening at step 7 (P-Alternatives) in the SVC, since hiring 100,000 new personnel is a change compared to the 1984 level of employees. However, things did not turn out as expected. In 1996, IBM’s sales were only $72 billion, while it incurred losses of more than $13 billion in 1991, 1992, and 1993. Moreover, IBM’s work force was, by the end of 1996, at about half its 1986/87 peak of 430,000. Figure 3 shows the difference between the forecast and actual developments. Figure 3 also shows the retrospective numbers that led to Opel’s forecast. In this article, we assume that these numbers were statistically identified correctly. However, the mistake that Opel seemed to make was in not considering changes in the business environment, that is, the exogenous variables at both step 6 (P-Baseline) and step 7 (P-Alternatives) in the SVC. In this case it could seem that Opel underestimated the exogenous forces imposed externally by

competitors to IBM; the competitors also wanted a bigger slice of the growing pie.

Yearly Evaluation of the Energy Sector in a European Country

In this case, a European Country Governmental Body (ECGB) was in charge of the yearly evaluation of a new energy-saving agreement that targeted the energy sector by increasing the number of energy-saving activities. The first problem the ECGB encountered was that neither the total population of all energy-saving activities nor the characteristics of all energy-saving activities was known to them; data on these activities were available only in aggregated form. This corresponds to a deviation from the benchmark described in step 1 (R-DTSP) in the SVC. Ignoring the problem of not having identified the population that they were sampling from, the ECGB used a two-step sampling procedure. In the first step they sampled from the population of energy producers, and in the second from producer-specific projects. However, in each year the sampling was not representative of the energy producers in the population, because the distribution in the sample deviated from the distribution of the population by the type and size of the producer: large-scale energy producers or energy producers of a certain branch were highly overrepresented in the sample. In some cases, the selection probability for units of a certain type was 25 times as high as for units of another type. Also, a large-scale energy producer had a 5 times higher

Figure 3: IBM chairman John Opel’s forecast for IBM in 1984, from Makridakis (1998)


chance of being selected than a smaller energy producer, despite the fact that small-scale energy producers made up a larger share of the population (a highly right-skewed population). Clearly the sampling procedure used by the ECGB is not in alignment with the TOS rule of equal probability for each element of the population to be sampled. The consequence of the procedure employed by the ECGB is therefore likely to be a rather biased evaluation of the energy-saving agreement, as large-scale energy producers with newer technologies were disproportionately represented in the evaluation scheme compared to smaller and older energy producers.

CONCLUDING REMARKS

In this article we have developed a benchmarking checklist, the SVC, that the DM can employ to evaluate uncertainty in decision support from a statistical perspective. Based on the SVC, the DM can for each step ask whether the AN has followed suitable statistical guidelines. If not, what has the AN done instead? The DM can then, based on the DM’s own accepted uncertainty level, decide either to reject the decision support or to continue with it. Each step in the SVC is essentially a check-box. The criterion for checking “yes” is simply that the DM considers the given step in the SVC to be sufficiently fulfilled. The value of the SVC is that it aggregates the knowledge from the statistical literature and puts it into a simple checklist that can be used by most DMs. Deviation from the statistical value chain will, at any step of the SVC, lead to increased uncertainty in the final decision support.

A challenge for the statistical value chain is that it might be relatively cost-intensive. However, both Gy (1998) and Petersen et al. (2005) argue that in the long run it pays off to employ proper statistical approaches when performing decision support, since it can be (and usually is) even more expensive to not use proper statistical approaches, as demonstrated with the IBM case.

The statistical value chain should not necessarily be thought of, or used as, a rigid procedure for employing statistics in decision support. As is recognized by Collins (1998), projects can rarely be put on a chain with a certain and correctly-defined number of steps before the project comes to an end. How a project develops is often

better described as being an ex-ante “N-step” process, meaning that carrying out a project for decision support is an iterative activity with an unknown number of N steps, going back and forth between the different steps. This is also our recommendation when using the SVC, where the fundamental principle of the Deming Circle approach (“Plan → Do → Check → Act”, repeatedly) should also be used for a sound decision-support process.

ACKNOWLEDGEMENTS

We would like to thank Matthew Ritchie for proofreading a draft of this article; Dominic Roberton (Southampton University), Dan Svenstrup (Risk Analyst, Danske Bank), Jens Schmidt Antonsen (Group Financial Controller, ISS), Jørgen Lindgaard Petersen (Department of Management Engineering, Technical University of Denmark), and Andreas Jørgensen (Department of Management Engineering, Technical University of Denmark) for their valuable and helpful comments; and the editors and reviewers of the International Journal of Decision Sciences. Funding was provided by the Technical University of Denmark, Lawrence Berkeley National Laboratory, Novozymes, and The Danish National Advanced Technology Foundation.

NOTES

1. In general this would be the expected uncertainty level,

since of course there is no guarantee that the uncertainty level will always follow this model. There can be (very few) cases for which the uncertainty level will not be as expected. To reduce the use of technical language, we use only “the uncertainty level” and not “the expected uncertainty level”.

2. This would also include resources used by other parties for data gathering who then make these data free and available to the AN.

3. In the economic literature this corresponds to the ceteris paribus expression used to clarify when everything else is held constant. However, in general the society of tomorrow will have accumulated more information than that of today.

4. For stratified populations, this applies to all units within a stratum.

5. In fact, the product system investigated in Herrmann et al. (2012) has changed significantly since 2010.

6. This is, naturally, the authors’ perception of what is a reasonable uncertainty level.

REFERENCES

[1] Andersen M., Rohde C., Worre Z. (2005), Introduktion til Virksomhedens Bogføring og Regnskab [Introduction to the Company’s Accounting and Financial Statement]. 2. Edition. Samfundslitteratur, Copenhagen, Denmark.

[2] Bezdek R. H., Wendling R. M. (2002), A Half Century of Long-Range Energy Forecasts: Errors Made, Lessons Learned, and Implications for Forecasting. Journal of Fusion Energy. 21(3-4): 155-172.

[3] Cochran W. (1977), Sampling Techniques. 3. Edition. Wiley, New York, London.

[4] Collins D. (1998), Organizational Change: Sociological Perspectives. Routledge, New York.

[5] Crawley M. J. (2005), Statistics: An Introduction Using R. Wiley, Chichester, West Sussex, England.

[6] Estrin S., David L., Dietrich M. (2008), Microeconomics. 5. Edition. FT Prentice Hall, Harlow.

[7] European Commission (2010), International Reference Life Cycle Data System (ILCD) Handbook - General Guide for Life Cycle Assessment - Detailed Guidance. EUR 24708 EN. Luxembourg: Publications Office of the European Union, Joint Research Centre - Institute for Environment and Sustainability.

[8] Farmer C. M., Lund A. K., Trempel R. E., Braver E. R. (1997), Fatal Crashes of Passenger Vehicles Before and After Adding Antilock Braking Systems. Accident Analysis & Prevention. 29(6): 745-757.

[9] Finnveden G., Hauschild M. Z., Ekvall T., Guinée J., Heijungs R., Hellweg S., Koehler A., Pennington D., Suh S. (2009), Recent Developments in Life Cycle Assessment. Journal of Environmental Management. 91(1): 1-21.

[10] Gowthorpe C. (2003), Business Accounting and Finance: For Non-specialists. Thomson Learning, London.

[11] Gy P. (1998), Sampling for Analytical Purposes. John Wiley, Chichester, New York.

[12] Hanley N., Shogren J. F., White B. (2007), Environmental Economics: In Theory and Practice. 2. Edition. Palgrave Macmillan, Basingstoke, England, New York.

[13] Hauschild M. Z. (2005), Assessing Environmental Impacts in a Life-cycle Perspective. Environmental Science and Technology. 39(4): 81A-88A.

[14] Heijungs R., Frischknecht R. (2005), Representing Statistical Distributions for Uncertain Parameters in LCA. Relationships Between Mathematical Forms, Their Representation in EcoSpold, and Their Representation in CMLCA. The International Journal of Life Cycle Assessment. 10(4): 248-254.

[15] Herrmann I. T., Jørgensen A., Bruun S., Hauschild M. Z. (2012), Potential for Optimized Production and Use of Rapeseed Biodiesel. Based on a Comprehensive Real-time LCA Case Study in Denmark with Multiple Pathways. The International Journal of Life Cycle Assessment. 18(2): 418-430.

[16] Herrmann I. T., Jørgensen A., Birkved M., Hauschild M. Z. (2013a), Does it Matter Which LCA Tool You Choose? Comparative Assessment of SimaPro and GaBi. Working paper.

[17] Herrmann I. T., Hauschild M. Z., Sohn M., McKone T. (2013b), Confronting Uncertainty in Life Cycle Assessment Used for Decision Support - Developing a Taxonomy for LCA Studies. Journal of Industrial Ecology. In press.

[18] Herrmann I. T., Lundberg-Jensen M., Jørgensen A., Spliid H., Stidsen T., Hauschild M. Z. (2013c), Enabling Optimization in LCA: From “Ad hoc” to “Structural” LCA Approach - Based on a Biodiesel Well-to-Wheel Case Study. The International Journal of Life Cycle Assessment. DOI 10.1007/s11367-013-0615-z.

[19] Hertwich E. G., Hammitt J. K., Pease W. S. (2000), A Theoretical Foundation for Life-Cycle Assessment. Journal of Industrial Ecology. 4(1): 13-28.

[20] Huijbregts M. A. J., Ragas A. M. J., Reijnders L., Gilijamse W. (2003), Evaluating Uncertainty in Environmental Life-Cycle Assessment. A Case Study Comparing Two Insulation Options for a Dutch One-family Dwelling. Environmental Science and Technology. 37(11): 2600-2608.

[21] Hume D. (1896), A Treatise of Human Nature. Clarendon Press, Oxford.

[22] Hung M. L., Ma H. (2009), Quantifying System Uncertainty of Life Cycle Assessment Based on Monte Carlo Simulation. The International Journal of Life Cycle Assessment. 14(1): 19-27.

[23] Johnson G., Scholes K., Whittington R. (2005), Exploring Corporate Strategy. 7. Edition. FT/Prentice Hall, Harlow, Essex, England, New York.

[24] Johnson R. A. (2005), Miller & Freund’s Probability and Statistics for Engineers. 7. Edition. Pearson Prentice Hall, Upper Saddle River, NJ.

[25] Kahneman D. (2013), Thinking, Fast and Slow. 1. paperback edn. Farrar, Straus and Giroux, New York.

[26] Kline K. L., Dale V. H. (2008), Biofuels: Effects on Land and Fire. Science. 321(5886).

[27] Kotter J. P. (1999), I Spidsen for Forandringer [At the Forefront of Changes]. Industriens Forlag, Peter Asschenfeldt’s Nye Forlag, København.

[28] Lindeneg K. (1998), Prioritering og Styring [Prioritization and Regulation]. 2. Edition. Akademisk Forlag, Copenhagen, Denmark.

[29] Lindley D. V. (1985), Making Decisions. 2. Edition. Wiley, London.

[30] Loève M. (1963), Probability Theory. 3. Edition. Van Nostrand, Princeton, NJ.

[31] Makridakis S. (1998), Forecasting: Methods and Applications. 3. Edition. Wiley, New York, Chichester.

[32] Mathiesen B. V., Münster M., Fruergaard T. (2009), Uncertainties Related to the Identification of the Marginal Energy Technology in Consequential Life Cycle Assessments. Journal of Cleaner Production. 17(15): 1331-1338.

[33] Montgomery D. C. (2005), Design and Analysis of Experiments. 6. Edition. John Wiley & Sons, Hoboken, NJ.

[34] Møller F. (1996), Værdisætning af Miljøgoder [Valuation of Environmental Goods]. Jurist- og Økonomforbundets Forlag, København.

[35] Nielsen S. K., Karlsson K. (2007), Energy Scenarios: A Review of Methods, Uses and Suggestions for Improvement. International Journal of Global Energy Issues. 27(3): 302.

[36] Petersen L., Minkkinen P., Esbensen K. H. (2005), Representative Sampling for Reliable Data Analysis: Theory of Sampling. Chemometrics and Intelligent Laboratory Systems. 77(1-2): 261-277.

[37] Pitman J. (1993), Probability. Springer-Verlag, New York.

[38] Pre-sustainability (2013), http://www.pre-sustainability.com/simapro-lca-software. Accessed 11 January 2013.

[39] Royal Society Great Britain (1992), Risk: Analysis, Perception and Management. Royal Society, London.

[40] Schmidt J. (2010), Comparative Life Cycle Assessment of Rapeseed Oil and Palm Oil. The International Journal of Life Cycle Assessment. 15(2): 183-197.

[41] Searchinger T., Heimlich R., Houghton R. A., Dong F., Elobeid A., Fabiosa J., Tokgoz S., Hayes D., Yu T. (2008), Use of US Croplands for Biofuels Increases Greenhouse Gases Through Emissions from Land-use Change. Science. 319(5867): 1238-1240.

[42] Simonet S., Wilde G. J. S. (1997), Risk: Perception, Acceptance and Homeostasis. Applied Psychology - An International Review. 46(3): 235-252.

[43] Walker W. E., Harremoës P., Rotmans J., van der Sluijs J. P., van Asselt M. B. A., Janssen P., Krayer von Krauss M. P. (2003), Defining Uncertainty: A Conceptual Basis for Uncertainty Management in Model-Based Decision Support. Integrated Assessment. 4(1): 5.

[44] Weick K. E. (2001), Making Sense of the Organization. Blackwell Publishers, Oxford, UK, Malden, MA.

[45] Weidema B., Wesnæs M. S. (1996), Data Quality Management for Life Cycle Inventories - An Example of Using Data Quality Indicators. Journal of Cleaner Production. 4(3-4): 167-174.

[46] Wenzel H., Hauschild M. Z., Alting L. (1997), Environmental Assessment of Products. Chapman & Hall, London, New York.

[47] Wild J. J. (2007), Financial Statement Analysis. 9. Edition. McGraw-Hill/Irwin, Boston, Mass., London.