TFA for Tech Mgmt

Upload: brahmi-despaired

Posted on 04-Apr-2018


  • 7/29/2019 TFA for tech mgmt


Forecasting can be broadly considered as a method or a technique for estimating many future aspects of a business or other operation. There are numerous techniques that can be used to accomplish the goal of forecasting. For example, a retailing firm that has been in business for 25 years can forecast its volume of sales in the coming year based on its experience over the 25-year period. Such a forecasting technique bases the future forecast on past data.

While the term "forecasting" may appear to be rather technical, planning for the future is a critical aspect of managing any organization, whether business, nonprofit, or other. In fact, the long-term success of any organization is closely tied to how well its management is able to foresee the future and to develop appropriate strategies to deal with likely future scenarios. Intuition, good judgment, and an awareness of how well the economy is doing may give the manager of a business firm a rough idea (or "feeling") of what is likely to happen in the future. Nevertheless, it is not easy to convert a feeling about the future into a precise and useful number, such as next year's sales volume or the raw material cost per unit of output. Forecasting methods can help estimate many such future aspects of a business operation.

Suppose that a forecast expert has been asked to provide estimates of the sales volume for a particular product for the next four quarters. One can easily see that a number of other decisions will be affected by the forecasts or estimates of sales volumes provided by the forecaster. Clearly, production schedules, raw material purchasing plans, policies regarding inventories, and sales quotas will be affected by such forecasts. As a result, poor forecasts or estimates may lead to poor planning and thus result in increased costs to the business.

How should one go about preparing the quarterly sales volume forecasts? One will certainly want to review the actual sales data for the product in question for past periods. Suppose that the forecaster has access to actual sales data for each quarter over the 25-year period the firm has been in business. Using these historical data, the forecaster can identify the general level of sales. He or she can also determine whether there is a pattern or trend, such as an increase or decrease in sales volume over time. A further review of the data may reveal some type of seasonal pattern, such as peak sales occurring before a holiday. Thus, by reviewing historical data over time, the forecaster can often develop a good understanding of the previous pattern of sales. Understanding such a pattern can often lead to better forecasts of future sales of the product. In addition, if the forecaster is able to identify the factors that influence sales, historical data on these factors (or variables) can also be used to generate forecasts of future sales volumes.

    FORECASTING METHODS

All forecasting methods can be divided into two broad categories: qualitative and quantitative. Many forecasting techniques use past or historical data in the form of time series. A time series is simply a set of observations measured at successive points in time or over successive periods of time. Forecasts essentially provide future values of the time series on a specific variable such as sales volume. Division of forecasting methods into qualitative and quantitative categories is based on the availability of historical time series data.

QUALITATIVE FORECASTING METHODS

Qualitative forecasting techniques generally employ the judgment of experts in the appropriate field to generate forecasts. A key advantage of these procedures is that they can be applied in situations where historical data are simply not available. Moreover, even when historical data are available, significant changes in the environmental conditions affecting the relevant time series may make the use of past data irrelevant and questionable in forecasting future values of the time series. Consider, for example, that historical data on gasoline sales are available. If the government then implemented a gasoline rationing program, changing the way gasoline is sold, one would have to question the validity of a gasoline sales forecast based on the past data.


Qualitative forecasting methods offer a way to generate forecasts in such cases. Three important qualitative forecasting methods are the Delphi technique, scenario writing, and the subjective approach.

    DELPHI TECHNIQUE.

In the Delphi technique, an attempt is made to develop forecasts through "group consensus." Usually, a panel of experts is asked to respond to a series of questionnaires. The experts, physically separated from and unknown to each other, are asked to respond to an initial questionnaire (a set of questions). Then, a second questionnaire is prepared incorporating information and opinions of the whole group. Each expert is asked to reconsider and to revise his or her initial response to the questions. This process is continued until some degree of consensus among experts is reached. It should be noted that the objective of the Delphi technique is not to produce a single answer at the end. Instead, it attempts to produce a relatively narrow spread of opinions: the range in which the opinions of the majority of experts lie.

    SCENARIO WRITING.

Under this approach, the forecaster starts with different sets of assumptions. For each set of assumptions, a likely scenario of the business outcome is charted out. Thus, the forecaster would be able to generate many different future scenarios (corresponding to the different sets of assumptions). The decision maker or businessperson is presented with the different scenarios and has to decide which scenario is most likely to prevail.

    SUBJECTIVE APPROACH.

The subjective approach allows individuals participating in the forecasting decision to arrive at a forecast based on their subjective feelings and ideas. This approach is based on the premise that a human mind can arrive at a decision based on factors that are often very difficult to quantify. "Brainstorming sessions" are frequently used as a way to develop new ideas or to solve complex problems. In loosely organized sessions, participants feel free from peer pressure and, more importantly, can express their views and ideas without fear of criticism. Many corporations in the United States have increasingly started to use the subjective approach.

QUANTITATIVE FORECASTING METHODS

Quantitative forecasting methods are used when historical data on the variables of interest are available. These methods are based on an analysis of historical data concerning the time series of the specific variable of interest and possibly other related time series. There are two major categories of quantitative forecasting methods. The first type uses the past trend of a particular variable as the basis for the future forecast of that variable. As this category of forecasting methods simply uses time series on past data of the variable that is being forecasted, these techniques are called time series methods.

The second category of quantitative forecasting techniques also uses historical data. But in forecasting future values of a variable, the forecaster examines the cause-and-effect relationships of the variable with other relevant variables, such as the level of consumer confidence, changes in consumers' disposable incomes, the interest rate at which consumers can finance their spending through borrowing, and the state of the economy represented by such variables as the unemployment rate. Thus, this category of forecasting techniques uses past time series on many relevant variables to produce the forecast for the variable of interest. Forecasting techniques falling under this category are called causal methods, as the basis of such


forecasting is the cause-and-effect relationship between the variable forecasted and the other time series selected to help in generating the forecasts.

    TIME SERIES METHODS OF FORECASTING.

Before discussing time series methods, it is helpful to understand the behavior of time series in general terms. Time series are composed of four separate components: the trend component, the cyclical component, the seasonal component, and the irregular component. These four components are viewed as providing specific values for the time series when combined.

In a time series, measurements are taken at successive points or over successive periods. The measurements may be taken every hour, day, week, month, or year, or at any other regular (or irregular) interval. While most time series data generally display some random fluctuations, the time series may still show gradual shifts to relatively higher or lower values over an extended period. The gradual shifting of the time series is often referred to by professional forecasters as the trend in the time series. A trend emerges due to one or more long-term factors, such as changes in population size, changes in the demographic characteristics of the population, and changes in the tastes and preferences of consumers. For example, manufacturers of automobiles in the United States may see substantial variations in automobile sales from one month to the next. But, in reviewing auto sales over the past 15 to 20 years, the automobile manufacturers may discover a gradual increase in annual sales volume. In this case, the trend for auto sales is increasing over time. In another example, the trend may be decreasing over time. Professional forecasters often describe an increasing trend by an upward sloping straight line and a decreasing trend by a downward sloping straight line. Using a straight line to represent a trend, however, is a mere simplification; in many situations, nonlinear trends may more accurately represent the true trend in the time series.

Although a time series may often exhibit a trend over a long period, it may also display alternating sequences of points that lie above and below the trend line. Any recurring sequence of points above and below the trend line that lasts more than a year is considered to constitute the cyclical component of the time series; that is, these observations in the time series deviate from the trend due to cyclical fluctuations (fluctuations that repeat at intervals of more than one year). The time series of the aggregate output in the economy (called the real gross domestic product) provides a good example of a time series that displays cyclical behavior. While the trend line for gross domestic product (GDP) is upward sloping, output growth displays a cyclical behavior around the trend line. This cyclical behavior of GDP has been dubbed business cycles by economists.

The seasonal component is similar to the cyclical component in that they both refer to some regular fluctuations in a time series. There is one key difference, however. While cyclical components of a time series are identified by analyzing multiyear movements in historical data, seasonal components capture the regular pattern of variability in the time series within one-year periods. Many economic variables display seasonal patterns. For example, manufacturers of swimming pools experience low sales in the fall and winter months, but they witness peak sales of swimming pools during the spring and summer months. Manufacturers of snow removal equipment, on the other hand, experience exactly the opposite yearly sales pattern. The component of the time series that captures the variability in the data due to seasonal fluctuations is called the seasonal component.

The irregular component of the time series represents the residual left in an observation of the time series once the effects due to the trend, cyclical, and seasonal components are extracted. The trend, cyclical, and seasonal components are considered to account for systematic variations in the time series. The irregular component thus accounts for the random variability in the time series. The random variations in the time series are, in turn, caused by short-term, unanticipated, and nonrecurring factors that affect the time series. The irregular component of the time series, by nature, cannot be predicted in advance.
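As a rough sketch of how these components combine, the following builds a synthetic monthly series additively from a trend, a seasonal pattern, and an irregular term (all figures are invented for illustration; the cyclical component is omitted for brevity):

```python
import math
import random

random.seed(0)
n = 48  # four years of monthly observations

trend     = [100 + 2 * t for t in range(n)]                          # gradual long-term shift
seasonal  = [15 * math.sin(2 * math.pi * t / 12) for t in range(n)]  # within-year pattern
irregular = [random.gauss(0, 3) for _ in range(n)]                   # unpredictable residual

# The observed series is the sum of its systematic and random components
series = [tr + s + e for tr, s, e in zip(trend, seasonal, irregular)]
```

Note that the seasonal term sums to roughly zero over any full year, so it shifts observations within the year without changing the trend.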


    TIME SERIES FORECASTING USING SMOOTHING METHODS.

Smoothing methods are appropriate when a time series displays no significant effects of trend, cyclical, or seasonal components (often called a stable time series). In such a case, the goal is to smooth out the irregular component of the time series by using an averaging process. Once the time series is smoothed, it is used to generate forecasts.

The moving averages method is probably the most widely used smoothing technique. In order to smooth the time series, this method uses the average of a number of adjoining data points or periods. This averaging process uses overlapping observations to generate averages. Suppose a forecaster wants to generate three-period moving averages. The forecaster would take the first three observations of the time series and calculate the average. Then, the forecaster would drop the first observation and calculate the average of the next three observations. This process would continue until three-period averages are calculated based on the data available from the entire time series. The term "moving" refers to the way the averages are calculated: the forecaster moves up or down the time series to pick observations to calculate an average of a fixed number of observations. In the three-period example, the moving averages method would use the average of the most recent three observations of data in the time series as the forecast for the next period. This forecasted value for the next period, in conjunction with the last two observations of the historical time series, would yield an average that can be used as the forecast for the second period in the future.

The calculation of a three-period moving average can be illustrated as follows. Suppose a forecaster wants to forecast the sales volume for American-made automobiles in the United States for the next year. The sales of American-made cars in the United States during the previous three years were 1.3 million, 900,000, and 1.1 million (the most recent observation is reported first). The three-period moving average in this case is 1.1 million cars (that is, [(1.3 + 0.90 + 1.1)/3 = 1.1]). Based on the three-period moving average, the forecast may predict that 1.1 million American-made cars are most likely to be sold in the United States the next year.

In calculating moving averages to generate forecasts, the forecaster may experiment with moving averages of different lengths. The forecaster will choose the length that yields the highest accuracy for the forecasts generated.
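The three-period calculation above can be sketched in Python; the function name and oldest-first list ordering are illustrative choices, not from the source:

```python
def moving_average_forecast(series, window):
    """Forecast the next period as the average of the last `window` observations."""
    if len(series) < window:
        raise ValueError("need at least `window` observations")
    return sum(series[-window:]) / window

# Sales of American-made cars in millions, oldest observation first
sales = [1.1, 0.90, 1.3]
forecast = moving_average_forecast(sales, window=3)
print(round(forecast, 1))  # 1.1 million cars, matching the worked example
```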

    " It is important that forecasts generated not be too far from the actual future outcomes. Inorder to examine the accuracy of forecasts generated, forecasters generally devise a measure ofthe forecasting error (that is, the difference between the forecasted value for a period and theassociated actual value of the variable of interest). Suppose retail sales volume for American-made automobiles in the United States is forecast to be 1.1 million cars for a given year, but onlyI million cars are actually sold that year. The forecast error in this case is equal 100,000 cars. Inother words, the forecaster overestimated the sales volume for the year by 100,000. Of course,forecast errors will sometimes be positive, and at other times be negative. Thus, taking a simpleaverage of forecast errors over time will not capture the true magnitude of forecast errors; largepositive errors may simply cancel out large negative errors, giving a misleading impressionabout the accuracy of forecasts generated. As a result, forecasters commonly use the meansquares error to measure the forecast error. The mean squares error, or the MSE, is the average

    of the sum of squared forecasting errors. This measure, by taking the squares of forecastingerrors, eliminates the chance of negative and positive errors canceling out.In selecting the length of the moving averages, a forecaster can employ the MSE measureto determine the number of values to be included in calculating the moving averages. Theforecaster experiments with different lengths to generate moving averages and then calculatesforecast errors (and the associated mean squares errors) for each length used in calculatingmoving averages. Then, the forecaster can pick the length that minimizes the mean squarederror of forecasts generated.
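A minimal sketch of this selection procedure follows; the data and helper names are invented for illustration:

```python
def mse(actual, forecast):
    """Mean squares error: the average of the squared forecast errors."""
    errors = [a - f for a, f in zip(actual, forecast)]
    return sum(e * e for e in errors) / len(errors)

def one_step_ma_forecasts(series, window):
    """One-step-ahead moving-average forecasts for every period after the first `window`."""
    return [sum(series[i - window:i]) / window for i in range(window, len(series))]

series = [10, 12, 11, 13, 12, 14, 13]   # hypothetical sales figures
candidate_lengths = range(2, 5)
best = min(candidate_lengths,
           key=lambda w: mse(series[w:], one_step_ma_forecasts(series, w)))
print(best)  # the window length with the smallest MSE on this data
```

Squaring the errors is what keeps the positive and negative errors in `mse` from canceling out.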


Weighted moving averages are a variant of moving averages. In the moving averages method, each observation of data receives the same weight. In the weighted moving averages method, different weights are assigned to the observations that are used in calculating the moving averages. Suppose, once again, that a forecaster wants to generate three-period moving averages. Under the weighted moving averages method, the three data points would receive different weights before the average is calculated. Generally, the most recent observation receives the maximum weight, with the assigned weight decreasing for older data values.

The calculation of a three-period weighted moving average can be illustrated as follows. Suppose, once again, that a forecaster wants to forecast the sales volume for American-made automobiles in the United States for the next year. The sales of American-made cars in the United States during the previous three years were 1.3 million, 900,000, and 1.1 million (the most recent observation is reported first). One estimate of the weighted three-period moving average in this example is 1.133 million cars (that is, [(3/6) x 1.3 + (2/6) x 0.90 + (1/6) x 1.1 = 1.133]). Based on the three-period weighted moving average, the forecast may predict that 1.133 million American-made cars are most likely to be sold in the United States in the next year. The accuracy of weighted moving average forecasts is determined in a manner similar to that for simple moving averages.

Exponential smoothing is somewhat more difficult mathematically. In essence, however, exponential smoothing also uses the weighted average concept, in the form of the weighted average of all past observations contained in the relevant time series, to generate forecasts for the next period. The term "exponential smoothing" comes from the fact that this method employs a weighting scheme for the historical values of the data that is exponential in nature. In ordinary terms, an exponential weighting scheme assigns the maximum weight to the most recent observation, and the weights decline in a systematic manner as older and older observations are included. The accuracy of forecasts using exponential smoothing is determined in a manner similar to that for the moving averages method.
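Both techniques can be sketched as follows; the weights and the smoothing constant `alpha` are illustrative choices, not prescribed by the source:

```python
def weighted_moving_average(series, weights):
    """Weighted average of the last len(weights) observations (weights listed oldest-first)."""
    recent = series[-len(weights):]
    return sum(w * x for w, x in zip(weights, recent)) / sum(weights)

def exponential_smoothing_forecast(series, alpha):
    """Single exponential smoothing: the forecast is a weighted average of all past data,
    with exponentially declining weights on older observations."""
    smoothed = series[0]
    for x in series[1:]:
        smoothed = alpha * x + (1 - alpha) * smoothed
    return smoothed

sales = [1.1, 0.90, 1.3]   # oldest first, millions of cars
wma = weighted_moving_average(sales, weights=[1, 2, 3])   # most recent year gets weight 3/6
print(round(wma, 3))  # 1.133, matching the worked example
es = exponential_smoothing_forecast(sales, alpha=0.5)
```

A larger `alpha` makes the forecast respond faster to recent observations; a smaller one smooths more aggressively.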

    TIME SERIES FORECASTING USING TREND PROJECTION.

This method uses the underlying long-term trend of a time series of data to forecast its future values. Suppose a forecaster has data on sales of American-made automobiles in the United States for the last 25 years. The time series data on U.S. auto sales can be plotted and examined visually. Most likely, the auto sales time series would display a gradual growth in the sales volume, despite the "up" and "down" movements from year to year. The trend may be linear (approximated by a straight line) or nonlinear (approximated by a curve or a nonlinear line). Most often, forecasters assume a linear trend; of course, if a linear trend is assumed when, in fact, a nonlinear trend is present, this misrepresentation can lead to grossly inaccurate forecasts.

Assume that the time series on American-made auto sales is actually linear and thus can be represented by a straight line. Mathematical techniques are used to find the straight line that most accurately represents the time series on auto sales. This line relates sales to different points over time. If we further assume that the past trend will continue in the future, future values of the time series (forecasts) can be inferred from the straight line based on the past data. One should remember that forecasts based on this method should also be judged on the basis of a measure of forecast errors. One can continue to assume that the forecaster uses the mean squares error discussed earlier.
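One common mathematical technique for finding that straight line is a least-squares fit against a time index; this sketch (with hypothetical sales figures) is one way to do it:

```python
def linear_trend(series):
    """Least-squares straight line y = a + b*t fitted to the series (t = 0, 1, 2, ...)."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series)) / \
            sum((t - t_mean) ** 2 for t in range(n))
    intercept = y_mean - slope * t_mean
    return intercept, slope

# Hypothetical annual auto sales, millions of cars, oldest first
sales = [1.00, 1.08, 1.05, 1.15, 1.22]
a, b = linear_trend(sales)
next_year_forecast = a + b * len(sales)   # project the fitted line one period ahead
```

Projecting further ahead simply means evaluating the fitted line at larger values of `t`, under the assumption that the past trend continues.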

    TIME SERIES FORECASTING USING TREND AND SEASONAL COMPONENTS.

This method is a variant of the trend projection method, making use of the seasonal component of a time series in addition to the trend component. This method removes the seasonal effect, or the seasonal component, from the time series. This step is often referred to as de-seasonalizing the time series.


Once a time series has been de-seasonalized, it will have only a trend component. The trend projection method can then be employed to identify a straight-line trend that represents the time series data well. Then, using this trend line, forecasts for future periods are generated. The final step under this method is to reincorporate the seasonal component of the time series (using what is known as the seasonal index) to adjust the forecasts based on trend alone. In this manner, the forecasts generated are composed of both the trend and seasonal components. One will normally expect these forecasts to be more accurate than those that are based purely on the trend projection.
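A simple multiplicative version of this procedure can be sketched as follows; the index calculation and the quarterly figures are illustrative (real seasonal indices are often computed from centered moving averages rather than this crude season-average ratio):

```python
def seasonal_indices(series, period):
    """A crude seasonal index: each season's average relative to the overall average."""
    overall = sum(series) / len(series)
    return [(sum(series[s::period]) / len(series[s::period])) / overall
            for s in range(period)]

def deseasonalize(series, period):
    """Divide each observation by its season's index, leaving trend plus irregularity."""
    idx = seasonal_indices(series, period)
    return [x / idx[i % period] for i, x in enumerate(series)]

# Two years of hypothetical quarterly swimming-pool sales with a summer peak
quarterly = [80, 120, 160, 100, 88, 132, 176, 110]
adjusted = deseasonalize(quarterly, period=4)
# A trend-only forecast computed from `adjusted` is then multiplied
# back by the appropriate seasonal index to produce the final forecast.
```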

    CAUSAL METHOD OF FORECASTING.

As mentioned earlier, causal methods use the cause-and-effect relationship between the variable whose future values are being forecasted and other related variables or factors. The most widely known causal method is called regression analysis, a statistical technique used to develop a mathematical model showing how a set of variables is related. This mathematical relationship can be used to generate forecasts. In the terminology used in regression analysis contexts, the variable that is being forecasted is called the dependent or response variable. The variable or variables that help in forecasting the values of the dependent variable are called the independent or predictor variables. Regression analysis that employs one dependent variable and one independent variable and approximates the relationship between these two variables by a straight line is called simple linear regression. Regression analysis that uses two or more independent variables to forecast values of the dependent variable is called multiple regression analysis. Below, the forecasting technique using regression analysis for the simple linear regression case is briefly introduced.

Suppose a forecaster has data on sales of American-made automobiles in the United States for the last 25 years. The forecaster has also identified that the sale of automobiles is related to individuals' real disposable income (roughly speaking, income after income taxes are paid, adjusted for the inflation rate). The forecaster also has available the time series (for the last 25 years) on real disposable income. The time series data on U.S. auto sales can be plotted against the time series data on real disposable income, so the relationship can be examined visually. Most likely, the auto sales time series would display a gradual growth in sales volume as real disposable income increases, despite the occasional lack of consistency; that is, at times, auto sales may fall even when real disposable income rises. The relationship between the two variables (auto sales as the dependent variable and real disposable income as the independent variable) may be linear (approximated by a straight line) or nonlinear (approximated by a curve or a nonlinear line). Assume that the relationship between the time series on sales of American-made automobiles and the real disposable income of consumers is actually linear and can thus be represented by a straight line.
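Under that linearity assumption, a least-squares straight line relating the two variables can be sketched as follows; the income and sales figures are hypothetical:

```python
def simple_linear_regression(x, y):
    """Least-squares fit of the line y = a + b*x."""
    n = len(x)
    x_mean = sum(x) / n
    y_mean = sum(y) / n
    b = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) / \
        sum((xi - x_mean) ** 2 for xi in x)
    a = y_mean - b * x_mean
    return a, b

# Hypothetical data: real disposable income (trillions) vs. auto sales (millions)
income = [3.0, 3.2, 3.5, 3.8, 4.0]
auto_sales = [0.90, 1.00, 1.10, 1.25, 1.30]
a, b = simple_linear_regression(income, auto_sales)
forecast = a + b * 4.2   # predicted sales if next year's income were 4.2 trillion
```

Unlike the trend projection sketch, the horizontal axis here is the independent variable (income), not time, so a forecast requires an estimate of the independent variable's future value.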

A fairly rigorous mathematical technique is used to find the straight line that most accurately represents the relationship between the time series on auto sales and disposable income. The intuition behind the mathematical technique employed in arriving at the appropriate straight line is as follows. Imagine that the relationship between the two time series has been plotted on paper. The plot will consist of a scatter (or cloud) of points. Each point in the plot represents a pair of observations on auto sales and disposable income (that is, auto sales corresponding to the given level of real disposable income in any year). The scatter of points (similar to the time series method discussed above) may have an upward or a downward drift. That is, the relationship between auto sales and real disposable income may be approximated by an upward or downward sloping straight line. In all likelihood, the regression analysis in the present example will yield an upward sloping straight line: as disposable income increases, so does the volume of automobile sales.

Arriving at the most accurate straight line is the key. Presumably, one can draw many straight lines through the scatter of points in the plot. Not all of them, however, will equally represent


the relationship; some will be closer to most points, and others will be way off from most points in the scatter. Regression analysis then employs a mathematical technique. Different straight lines are drawn through the data. Deviations of the actual values of the data points in the plot from the corresponding values indicated by the straight line chosen in any instance are examined. The sum of the squares of these deviations captures the essence of how close a straight line is to the data points. The line with the minimum sum of squared deviations (called the "least squares" regression line) is considered the line of best fit.

Having identified the regression line, and assuming that the relationship based on the past data will continue, future values of the dependent variable (forecasts) can be inferred from the straight line based on the past data. If the forecaster has an idea of what real disposable income may be in the coming year, a forecast for future auto sales can be generated. One should remember that forecasts based on this method should also be judged on the basis of a measure of forecast errors. One can continue to assume that the forecaster uses the mean squares error discussed earlier. In addition to using forecast errors, regression analysis offers additional ways of analyzing the effectiveness of the estimated regression line in forecasting.

Box-Jenkins Forecasting Method: The univariate version of this methodology is a self-projecting time series forecasting method. The underlying goal is to find an appropriate formula so that the residuals are as small as possible and exhibit no pattern. The model-building process involves four steps, repeated as necessary, to end up with a specific formula that replicates the patterns in the series as closely as possible and also produces accurate forecasts.

Box-Jenkins Methodology: Box-Jenkins forecasting models are based on statistical concepts and principles and are able to model a wide spectrum of time series behavior. The methodology offers a large class of models to choose from and a systematic approach for identifying the correct model form. There are both statistical tests for verifying model validity and statistical measures of forecast uncertainty. In contrast, traditional forecasting models offer a limited number of models relative to the complex behavior of many time series, with little in the way of guidelines and statistical tests for verifying the validity of the selected model.

Data: The misuse, misunderstanding, and inaccuracy of forecasts are often the result of not appreciating the nature of the data in hand. The consistency of the data must be ensured, and it must be clear what the data represent and how they were gathered or calculated. As a rule of thumb, Box-Jenkins requires at least 40 or 50 equally spaced periods of data. The data must also be edited to deal with extreme or missing values or other distortions, through the use of functions such as log or inverse to achieve stabilization.

Preliminary Model Identification Procedure: A preliminary Box-Jenkins analysis with a plot of the initial data should be run as the starting point in determining an appropriate model. The input data must be adjusted to form a stationary series, one whose values vary more or less uniformly about a fixed level over time. Apparent trends can be adjusted by having the model apply a technique of "regular differencing," a process of computing the difference between every two successive values, producing a differenced series from which the overall trend behavior has been removed. If a single differencing does not achieve stationarity, it may be repeated, although rarely, if ever, are more than two regular differencings required. Where irregularities in the differenced series continue to be displayed, log or inverse functions can be specified to stabilize the series, such that the remaining residual plot displays values approaching zero and without any pattern. This is the error term, equivalent to pure white noise.

Pure Random Series: On the other hand, if the initial data series displays neither trend nor seasonality, and the residual plot shows essentially zero values within a 95% confidence level and these residual values display no pattern, then there is no real-world statistical problem to solve and we go on to other things.

Model Identification Background


Basic Model: With a stationary series in place, a basic model can now be identified. Three basic models exist: AR (autoregressive), MA (moving average), and a combined ARMA. These, together with the previously specified RD (regular differencing), provide the available tools. When regular differencing is applied together with AR and MA, the result is referred to as ARIMA, with the I indicating "integrated" and referencing the differencing procedure.

Seasonality: In addition to trend, which has now been provided for, stationary series quite

    commonly display seasonal behavior where a certain basic pattern tends to be repeated atregular seasonal intervals. The seasonal pattern may additionally frequently display constantchange over time as well. Just as regular differencing was applied to the overall trending series,seasonal differencing (SD) is applied to seasonal nonstationarity as well. And as autoregressiveand moving average tools are available with the overall series, so too, are they available forseasonal phenomena using seasonal autoregressive parameters (SAR) and seasonal movingaverage parameters (SMA).Establishing Seasonality: The need for seasonal autoregression (SAR) and seasonalmoving average (SMA) parameters is established by examining the autocorrelation and partialautocorrelation patterns of a stationary series at lags that are multiples of the number of periodsper season. These parameters are required if the values at lags s, 2s, etc. are nonzero and displaypatterns associated with the theoretical patterns for such models. Seasonal differencing is

    indicated if the autocorrelations at the seasonal lags do not decrease rapidly.Referring to the above chart, know that, the variance of the errors of the underlying model must

    be invariant (i.e. constant). This means that the variance for each subgroup of data is the sameand does not depend on the level or the point in time. If this is violated then one can remedy this

    by stabilizing the variance. Make sure that, that there are no deterministic patterns in the data.Also one must not have any pulses or one-time unusual values. Additionally there should be nolevel or step shifts. Also no seasonal pulses should be present.The reason for all of this is that if they do exist then the sample autocorrelation and partialautocorrelation will seem to imply ARIMA structure. Also the presence of these kind of modelcomponents can obfuscate or hide structure. For example a single outlier or pulse can create aneffect where the structure is masked by the outlier.
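The differencing transformations described above can be sketched in a few lines of Python; the series below is hypothetical, chosen so the effect of each pass is obvious:

```python
def difference(series, lag=1):
    """Lag-d differencing: z[t] = y[t] - y[t-lag].
    lag=1 is regular differencing (removes trend); lag = periods per
    season gives seasonal differencing."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

# Hypothetical series with a steady trend of +2 per period:
y = [10 + 2 * t for t in range(8)]
print(difference(y))              # → [2, 2, 2, 2, 2, 2, 2]  (trend removed)
# A second pass removes what remains; more than two are rarely needed:
print(difference(difference(y)))  # → [0, 0, 0, 0, 0, 0]
```

Calling `difference(series, lag=4)` on quarterly data would likewise remove a repeating four-period seasonal pattern.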

Improved Quantitative Identification Method

Relieved Analysis Requirements: A substantially improved procedure is now available for conducting Box-Jenkins ARIMA analysis. It relieves the requirement for a seasoned perspective when evaluating the sometimes ambiguous autocorrelation and partial autocorrelation residual patterns in order to determine an appropriate Box-Jenkins model for use in developing a forecast model.

ARMA (1, 0): The first model to be tested on the stationary series consists solely of an autoregressive term with lag 1. The autocorrelation and partial autocorrelation patterns are examined for significant autocorrelation in the early terms and to see whether the residual coefficients are uncorrelated, that is, whether the coefficient values are zero within 95% confidence limits and without apparent pattern. When fitted values as close as possible to the original series values are obtained, the sum of the squared residuals is minimized, a technique called least squares estimation. The residual mean and the mean percent error should not be significantly nonzero. Alternative models are examined by comparing the progress of these factors, favoring models which use as few parameters as possible. Correlation between parameters should not be significantly large, and confidence limits should not bracket zero. When a satisfactory model has been established, a forecast procedure is applied.

ARMA (2, 1): Absent a satisfactory ARMA (1, 0) condition with residual coefficients approximating zero, the improved model identification procedure proceeds to examine the residual pattern when autoregressive terms of order 1 and 2 are applied together with a moving average term of order 1.
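The least squares estimation mentioned for the ARMA (1, 0) case can be sketched for a pure AR(1) model; the series below is constructed from hypothetical fixed shocks with a true coefficient of 0.5:

```python
def fit_ar1(series):
    """Least-squares estimate of phi in y[t] = phi * y[t-1] + e[t],
    for a mean-zero stationary series: the value of phi that minimizes
    the sum of squared residuals."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

# Build a short AR(1) series with phi = 0.5 and fixed hypothetical shocks:
y = [1.0]
for e in [0.2, -0.1, 0.3, -0.2, 0.1, 0.2, -0.3, 0.1]:
    y.append(0.5 * y[-1] + e)

phi_hat = fit_ar1(y)   # close to the true value 0.5
```

With so few observations the estimate is only approximate, which is exactly why the text's rule of thumb asks for 40 to 50 periods of data.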


Subsequent Procedure: To the extent that the residual conditions described above remain unsatisfied, the Box-Jenkins analysis is continued with ARMA (n, n-1) until a satisfactory model is arrived at. In the course of this iteration, when an autoregressive coefficient (phi) approaches zero, the model is reexamined with parameters ARMA (n-1, n-1). In like manner, whenever a moving average coefficient (theta) approaches zero, the model is reduced to ARMA (n, n-2). At some point, either the autoregressive term or the moving average term may fall away completely, and the examination of the stationary series is continued with only the remaining term until the residual coefficients approach zero within the specified confidence levels.

Morphological Analysis

Morphological analysis is actually a group of methods that share the same structure. The method breaks down a system, product, or process into its essential sub-concepts, each concept representing a dimension in a multi-dimensional matrix. Thus, every product is considered as a bundle of attributes. New ideas are found by searching the matrix for new combinations of attributes that do not yet exist. The method does not provide any specific guidelines for combining the parameters, and it tends to produce a large number of ideas.

Morphological analysis has several advantages over less structured approaches:

- "It may help us to discover new relationships or configurations, which may not be so evident, or which we might have overlooked by other less structured methods.
- It encourages the identification and investigation of boundary conditions, i.e. the limits and extremes of different contexts and factors.
- It also has definite advantages for scientific communication and notably for group work." [source: www.swemorph.com]
- It allows us to find possible solutions to complex problems characterised by several parameters.
- Richness of data: it can provide a multitude of combinations and permutations not yet explored.
- Systematic analysis: this technique allows for a systematic analysis of the future structure of an industry (or system) and identification of key gaps.

How to Use Morphological Analysis
Many problems challenge us with too many possible solutions, as yet uncovered, only some of which may be new and useful. This process "drains the swamp," so to speak, by systematically arranging appropriate and promising aspects of the situation and combining them just as systematically in order to identify new and suitable combinations.

The object is to break down the system, product, or process problem at hand into its essential parameters or dimensions, place them in a multi-dimensional matrix, and then find new ideas by searching the matrix for creative and useful combinations. Some combinations may already exist; others may not be possible or appropriate. The rest may represent prospective new ideas.

If you can describe a problem situation in terms of its aspects or dimensions, morphological analysis will uncover original and often innovative solutions.

    Morphological Analysis Steps


1. Determine suitable problem characteristics. The individual problem solver or a facilitated group brainstorms to define problem characteristics, also referred to as parameters.

2. Make all the suggestions visible to everyone and group them in various ways until consensus is reached regarding the groupings.

3. Label the groups and reduce them to a manageable number. Rather than reaching for a recommended number, consider the capabilities of the group and the time available. Consider also that there are computer applications and other tools that can assist the process.

    When working with the tangible aspects of something like a consumer product, for example,the labels gleaned from the groupings might include parameters such as product ingredients,color, textures, temperature, and flavor as well as package size, shape, function, and graphics.In the case of manufacturing issues, parameters might include material, function, process,construction, maintenance, and the like.

4. The next step is to fill a grid or grids with lists of parameters arranged along the axes. Now combinations can be identified within the grid. Depending on the number of items in play, great numbers of combinations may be available.

    5. Eliminate those combinations that are impossible or undesirable to execute, put aside thosethat you do not want to eliminate but do not want to execute, and develop as many of the rest aspossible.
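Steps 4 and 5 can be sketched programmatically: enumerate every cell of the grid, then filter out the impossible combinations. The parameters and the elimination rule below are purely illustrative, not taken from the text:

```python
from itertools import product

# Hypothetical parameters for a consumer beverage (names are illustrative):
parameters = {
    "temperature": ["hot", "cold"],
    "flavor": ["citrus", "vanilla", "mint"],
    "package": ["can", "carton", "pouch"],
}

# Step 4: every cell of the morphological grid is one combination.
combinations = list(product(*parameters.values()))

# Step 5: eliminate combinations judged impossible or undesirable.
def is_impossible(combo):
    temperature, flavor, package = combo
    return temperature == "hot" and package == "pouch"  # e.g. no hot pouch drink

candidates = [c for c in combinations if not is_impossible(c)]
print(len(combinations), len(candidates))   # → 18 15
```

Even this toy grid yields 18 cells from only seven parameter values, which illustrates how quickly the space grows and why systematic elimination matters.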

Morphological analysis was first applied to the aerospace industry by F. Zwicky, a professor at the California Institute of Technology. Zwicky chose to analyze the structure of jet engine technology. His first task was to define the important parameters of jet engine technology, which include thrust mechanism, oxidizer, and fuel type. He continued, in turn, to break each of these technologies down into its component parts. Having exhausted the possibilities under each parameter heading, the alternative approaches were assembled in all possible permutations: for example, a ramjet that used atmospheric oxygen and a solid fuel. For some permutations, a jet engine system already existed; for others, no systems or products were available. Zwicky viewed the permutations representing "empty cells" as stimuli for creativity, and for each asked, "Why not?" For example, "Why not a nuclear-powered ceramic fan-jet?"

Morphological analysis is a proven ideation method that leads to "organized invention." The technique allows for two key elements:

- a systematic analysis of the current and future structure of an industry area (or domain), as well as key gaps in that structure;
- a strong stimulus for the invention of new alternatives that fill these gaps and meet any imposed requirements.

"Essentially, morphological analysis is a method for identifying and investigating the total set of possible relationships contained in any given, multi-dimensional problem complex that can be parameterized." [source: www.swemorph.com]

In his main work on the subject, Discovery, Invention, Research through the Morphological Approach (Zwicky, 1966), Zwicky summarises the five (iterative) steps of the process:

First step
The problem to be solved must be very concisely formulated.

Second step
All of the parameters that might be of importance for the solution of the given problem must be localized and analysed.

Third step


    The morphological box or multidimensional matrix, which contains all of the potential solutionsof the given problem, is constructed.

Fourth step
All solutions contained in the morphological box are closely scrutinized and evaluated with respect to the purposes that are to be achieved.

    Fifth step

The optimally suitable solutions are selected and practically applied, provided the necessary means are available. This reduction to practice generally requires a supplemental morphological study.

Steps 2 and 3 form the heart of morphological analysis, since Steps 1, 4, and 5 are often involved in other forms of analysis. Step 2, identification of parameters, involves studying the problem and present solutions to develop a framework. This step is also useful for developing a relevance tree to help define a given topic. Once parameters are identified, a morphological box can be constructed that lists parameters along one dimension; the second dimension is determined by the nature of the problem.

Morphological box

"The approach begins by identifying and defining the parameters (or dimensions) of the problem complex to be investigated, and assigning each parameter a range of relevant values or conditions. A morphological box, also fittingly known as a Zwicky box, is constructed by setting the parameters against each other in an n-dimensional matrix. Each cell of the n-dimensional box contains one particular value or condition from each of the parameters, and thus marks out a particular state or configuration of the problem complex.

This is the point: to examine all of the configurations in the field, in order to establish which of them are possible, viable, practical, interesting, etc., and which are not. In doing this, we mark out in the field what might be called a solution space. The solution space of a Zwickian morphological field consists of the subset of configurations which satisfy some criteria. However, a typical morphological field can contain between 50,000 and 5,000,000 formal configurations, far too many to inspect by hand. Thus, the next step in the analysis-synthesis process is to examine the internal relationships between the field parameters and "reduce" the field by weeding out all mutually contradictory conditions.

This is achieved by a process of cross-consistency assessment: all of the parameter values in the morphological field are compared with one another, pair-wise, in the manner of a cross-impact matrix. As each pair of conditions is examined, a judgment is made as to whether, or to what extent, the pair can coexist, i.e. represent a consistent relationship. Note that there is no reference here to causality, but only to internal consistency."
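The cross-consistency reduction can be sketched as a pairwise filter over the field; the parameters and the set of contradictory pairs below are hypothetical:

```python
from itertools import product, combinations

# Hypothetical field: three parameters with a few values each.
field = {
    "power": ["battery", "mains", "solar"],
    "use": ["indoor", "outdoor"],
    "size": ["handheld", "desktop"],
}

# Cross-consistency assessment: pairs of values judged mutually contradictory.
inconsistent = {("mains", "outdoor"), ("solar", "indoor"), ("desktop", "outdoor")}

def is_consistent(config):
    """A configuration survives only if none of its value pairs is contradictory."""
    return all((a, b) not in inconsistent and (b, a) not in inconsistent
               for a, b in combinations(config, 2))

solution_space = [c for c in product(*field.values()) if is_consistent(c)]
print(len(solution_space))   # → 6 of the 12 formal configurations survive
```

Note that a configuration is discarded as soon as any one of its pairs fails, which is what lets pairwise judgments prune a field far too large to inspect cell by cell.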

Autocorrelation
The autocorrelation function (Box and Jenkins, 1976) can be used for the following two purposes:

1. To detect non-randomness in data.
2. To identify an appropriate time series model if the data are not random.
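A minimal sketch of the sample autocorrelation computation, illustrating purpose 1 (detecting non-randomness) on two contrived series:

```python
def autocorrelation(series, lag):
    """Sample autocorrelation at the given lag: the covariance of the series
    with its lagged self, divided by the variance of the series."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t - lag] - mean) for t in range(lag, n))
    return cov / var

# A strongly trending (non-random) series shows autocorrelation near 1 at lag 1,
# while a strictly alternating series shows strong negative autocorrelation:
print(round(autocorrelation(list(range(50)), 1), 2))   # → 0.94
print(autocorrelation([1, -1] * 25, 1))                # → -0.98
```

A genuinely random series would instead give values near zero at every lag, which is the "no real-world statistical problem" case described earlier.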


Randomness is one of the key assumptions in determining whether a univariate statistical process is in control. If the assumptions of constant location and scale, randomness, and fixed distribution are reasonable, then the univariate process can be modeled as a constant plus error:

Y_i = A_0 + E_i

where E_i is an error term. If the randomness assumption is not valid, then a different model needs to be used. This will typically be either a time series model or a non-linear model (with time as the independent variable).

PACF
In time series analysis, the partial autocorrelation function (PACF) plays an important role in data analyses aimed at identifying the extent of the lag in an autoregressive model. The use of this function was introduced as part of the Box-Jenkins approach to time series modelling, where by plotting the partial autocorrelation function one can determine the appropriate lag p in an AR(p) model or in an extended ARIMA(p,d,q) model.

In general, a partial correlation is a conditional correlation. It is the correlation between two variables under the assumption that we know and take into account the values of some other set of variables. For instance, consider a regression context in which y is the response variable and x1, x2, and x3 are predictor variables. The partial correlation between y and x3 is the correlation between the two variables determined while taking into account how both y and x3 are related to x1 and x2.

In regression, this partial correlation can be found by correlating the residuals from two different regressions: (1) a regression in which we predict y from x1 and x2, and (2) a regression in which we predict x3 from x1 and x2. Basically, we correlate the parts of y and x3 that are not predicted by x1 and x2.
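The two-regression recipe just described can be sketched directly. For brevity this uses a single control variable and fabricated data in which y and x3 both follow x1, so their raw correlation is high but the partial correlation is near zero:

```python
def mean(v):
    return sum(v) / len(v)

def correlate(a, b):
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def residuals(y, x):
    """Residuals of the simple least-squares regression of y on x."""
    mx, my = mean(x), mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

def partial_correlation(y, x3, control):
    """Correlate the parts of y and x3 not predicted by the control variable."""
    return correlate(residuals(y, control), residuals(x3, control))

# Hypothetical data: both y and x3 are driven by x1 plus unrelated wiggles.
x1 = [1, 2, 3, 4, 5, 6, 7, 8]
y  = [2 * a + b for a, b in zip(x1, [1, -1, 1, -1, 1, -1, 1, -1])]
x3 = [a + c for a, c in zip(x1, [1, 1, -1, -1, 1, 1, -1, -1])]

r_raw = correlate(y, x3)                      # high: both follow x1
r_partial = partial_correlation(y, x3, x1)    # near zero once x1 is removed
```

The gap between `r_raw` and `r_partial` is the whole point of the PACF: the direct association left over after the intervening lags have been accounted for.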

Relevance Trees

Most major technological development projects are complex. Their fulfillment is likely to depend on the accomplishment of substantial improvements to existing technologies. These advances are not usually coordinated, and many products result from technological changes that were not originally intended to provide them assistance. The planner must be able to distinguish a large number of potentially supporting technologies and to forecast their futures. Relevance trees, a slight variant of the network analysis discussed earlier, are of great aid in such work.

Relevance trees can be used to study a goal or objective, as in morphological analysis, or to select a specific research project from a more general set of goals, as in network analysis. The methodology of relevance trees requires that the planner determine the most appropriate path of the tree by arranging, in a hierarchical order, the objectives, subobjectives, and tasks, in order to ensure that all possible ways of achieving the objectives have been found. The relevance of individual tasks and subobjectives to the overall objective is then evaluated.

An example of a relevance tree is shown in Figure 14. The objective is to develop a means of air pollution control. The subobjectives "Develop Petroleum . . ." and "Develop Alternatives . . ." further define the main objective. Tasks and subtasks are then defined. Once all the "good" alternative ways of achieving the subobjectives have been found, the relevance of individual solutions to the main objective can be evaluated.
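One common way to evaluate relevance is to weight each branch and multiply weights down each path. The sketch below assumes that scoring scheme; the node names and weights are illustrative, only loosely echoing the Figure 14 air-pollution example:

```python
# Hypothetical relevance tree: parent -> {child: relevance weight}.
# Weights under each parent sum to 1.
tree = {
    "Air pollution control": {
        "Develop petroleum alternatives": 0.5,
        "Develop cleaner petroleum": 0.5,
    },
    "Develop petroleum alternatives": {
        "Electric propulsion": 0.75,
        "Hydrogen fuel": 0.25,
    },
    "Develop cleaner petroleum": {
        "Catalytic converters": 1.0,
    },
}

def path_relevance(tree, node, score=1.0):
    """Multiply relevance weights down each branch; each leaf task ends up
    with its overall relevance to the top objective."""
    children = tree.get(node)
    if not children:
        return {node: score}
    result = {}
    for child, weight in children.items():
        result.update(path_relevance(tree, child, score * weight))
    return result

print(path_relevance(tree, "Air pollution control"))
# → {'Electric propulsion': 0.375, 'Hydrogen fuel': 0.125, 'Catalytic converters': 0.5}
```

Because every level's weights sum to 1, the leaf scores also sum to 1, which makes them directly comparable when selecting which task to fund.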


Linear Trend
A first step in analyzing a time series is to determine whether a linear relationship provides a good approximation to the long-term movement of the series; the linear trend is computed by the method of semiaverages or by the method of least squares.

Multiple regression

WHAT IS MULTIPLE REGRESSION?
Multiple regression is a statistical technique that allows us to predict someone's score on one variable on the basis of their scores on several other variables. An example might help. Suppose we were interested in predicting how much an individual enjoys their job. Variables such as salary, extent of academic qualifications, age, sex, number of years in full-time employment, and socioeconomic status might all contribute towards job satisfaction. If we collected data on all of these variables, perhaps by surveying a few hundred members of the public, we would be able to see how many and which of these variables gave rise to the most accurate prediction of job satisfaction. We might find that job satisfaction is most accurately predicted by type of occupation, salary, and years in full-time employment, with the other variables not helping us to predict job satisfaction.

When using multiple regression in psychology, many researchers use the term independent variables to identify those variables that they think will influence some other dependent variable. We prefer to use the term predictor variables for those variables that may be useful in predicting the scores on another variable that we call the criterion variable. Thus, in our example above, type of occupation, salary, and years in full-time employment would emerge as significant predictor variables, which allow us to estimate the criterion variable: how satisfied someone is likely to be with their job. As we have pointed out before, human behaviour is inherently noisy, and therefore it is not possible to produce totally accurate predictions, but multiple regression allows us to identify a set of predictor variables which together provide a useful estimate of a participant's likely score on a criterion variable.
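A bare-bones sketch of multiple regression via the normal equations; the data below are fabricated and noise-free (satisfaction built exactly from salary and years), so the known coefficients are recovered, which real survey data would of course only approximate:

```python
def fit_multiple_regression(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y.
    X is a list of predictor rows; a constant column is prepended."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    # Build X'X and X'y.
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Solve by Gaussian elimination with partial pivoting.
    for col in range(k):
        pivot = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    coeffs = [0.0] * k
    for r in range(k - 1, -1, -1):
        s = sum(xtx[r][c] * coeffs[c] for c in range(r + 1, k))
        coeffs[r] = (xty[r] - s) / xtx[r][r]
    return coeffs  # [intercept, b1, b2, ...]

# Hypothetical data: satisfaction = 10 + 0.5*salary + 2*years, noise-free.
X = [(20, 1), (30, 3), (40, 2), (50, 5), (60, 4)]
y = [10 + 0.5 * s + 2 * t for s, t in X]
coeffs = fit_multiple_regression(X, y)
print([round(b, 2) for b in coeffs])   # → [10.0, 0.5, 2.0]
```

In practice one would reach for a statistics library rather than hand-rolled elimination, but the sketch shows what "predicting the criterion variable from several predictors" amounts to numerically.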


    Experience Curve:

Normative Methods of Forecasting:
Normative forecasting is at the opposite extreme on the sophistication scale, fully utilizing Bayesian statistics, linear and dynamic programming, and other operations research tools. Here, despite the uniqueness, uncertainty, and lack of uniformity of research and development activities, each of the designers of normative techniques has proposed a single-format, wholly quantitative method for resource allocation. Along the dimensions of unjustified standardization and needless complexity, for example, the proposed R&D allocation methods far exceed the general cost-effectiveness approach used by the Department of Defense in its program and system reviews.

For both exploratory and normative purposes, dynamic models of broad technological areas seem worthy of further pursuit. In attempting to develop "pure predictions," the explicit recognition of causal mechanisms offered by this modeling approach seems highly desirable. This feature also has normative utility, provided that the dynamic models are limited in their application to the level of aggregate technological resource allocation and are not carried down to the level of detailed R&D project funding.

Moving averages:


In statistics, a moving average, also called a rolling average, rolling mean, or running average, is a type of finite impulse response filter used to analyze a set of data points by creating a series of averages of different subsets of the full data set.

Given a series of numbers and a fixed subset size, the first element of the moving average is obtained by taking the average of the initial fixed subset of the number series. Then the subset is modified by "shifting forward," that is, excluding the first number of the series and including the next number following the original subset. This creates a new subset of numbers, which is averaged, and the process is repeated over the entire data series. The plot line connecting all the (fixed) averages is the moving average. Thus a moving average is a set of numbers, each of which is the average of the corresponding subset of a larger set of data points. A moving average may also use unequal weights for each data value in the subset, to emphasize particular values.
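The shift-forward procedure just described reduces to a few lines:

```python
def moving_average(series, window):
    """Simple moving average: average each length-`window` subset,
    shifting forward one point at a time."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

print(moving_average([2, 4, 6, 8, 10], 3))   # → [4.0, 6.0, 8.0]
```

Note the output is shorter than the input by `window - 1` points, since the first full subset only forms once `window` values are available. A weighted variant would replace the plain `sum` with a dot product against a list of weights.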
