
  • 7/28/2019 Appendix 5.doc


    Appendix 5

    The analytical hierarchy process

    5.1 THE BASIC AHP PROCEDURE

    At the core of the Analytic Hierarchy Process (AHP) lies a method for converting subjective assessments of relative importance into a set of overall scores or weights. The method was originally devised by Saaty.84 It has proved to be one of the more widely applied MCA methods; see, for example, Zahedi,85 Golden et al.86 and Shim87 for summaries of applications. However, at the same time, it has attracted substantial criticism from a number of MCA specialists. There have also been attempts to derive similar methods that retain the strengths of AHP while avoiding some of the criticisms.

    The fundamental input to the AHP is the decision maker's answers to a series of questions of the general form, 'How important is criterion A relative to criterion B?'. These are termed pairwise comparisons. Questions of this type may be used to establish, within AHP, both weights for criteria and performance scores for options on the different criteria.

    Consider firstly the derivation of weights. It is assumed that a set of criteria has already been established, as discussed in chapters 5 and 6. For each pair of criteria, the decision maker is then required to respond to a pairwise comparison question asking the relative importance of the two. Responses are gathered in verbal form and subsequently codified on a nine-point intensity scale, as follows:

    How important is A relative to B?     Preference index assigned

    Equally important                     1
    Moderately more important             3
    Strongly more important               5
    Very strongly more important          7
    Overwhelmingly more important         9

    2, 4, 6 and 8 are intermediate values that can be used to represent shades of judgement between the five basic assessments.

    If the judgement is that B is more important than A, then the reciprocal of the relevant index value is assigned. For example, if B is felt to be very strongly more important as a criterion for the decision than A, then the value 1/7 would be assigned to A relative to B.

    Because the decision maker is assumed to be consistent in making judgements about any one pair of criteria and since all criteria will always rank equally when compared to themselves, it is


    only ever necessary to make 1/2 n(n - 1) comparisons to establish the full set of pairwise judgements for n criteria. Thus a typical matrix for establishing the relative importance of three criteria might look like:

                    Criterion 1    Criterion 2    Criterion 3
    Criterion 1          1              5              9
    Criterion 2         1/5             1              3
    Criterion 3         1/9            1/3             1
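    As a sketch of how such a matrix can be assembled (in Python with NumPy; the function name is illustrative, and the three judgement values are those of the worked example: criterion 1 vs 2 = 5, criterion 1 vs 3 = 9, criterion 2 vs 3 = 3), only the 1/2 n(n - 1) upper-triangle judgements need to be supplied; the diagonal is fixed at 1 and the lower triangle follows by taking reciprocals:

```python
import numpy as np

def pairwise_matrix(judgements, n):
    """Build a full n x n reciprocal comparison matrix from the
    1/2 n(n - 1) upper-triangle judgements, supplied row by row."""
    A = np.ones((n, n))          # every criterion ranks equally against itself
    it = iter(judgements)
    for i in range(n):
        for j in range(i + 1, n):
            v = next(it)
            A[i, j] = v          # importance of criterion i relative to j
            A[j, i] = 1.0 / v    # the reciprocal judgement is implied
    return A

# The three judgements of the example: C1 vs C2 = 5, C1 vs C3 = 9, C2 vs C3 = 3
A = pairwise_matrix([5, 9, 3], 3)
```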

    The next step is to estimate the set of weights (three in the above example) that are most consistent with the relativities expressed in the matrix. Note that while there is complete consistency in the (reciprocal) judgements made about any one pair, consistency of judgements between pairs is not guaranteed. Thus the task is to search for the three weights wj that will provide the best fit to the 'observations' recorded in the pairwise comparison matrix. This may be done in a number of ways.

    Saaty's basic method to identify the value of the weights depends on relatively advanced ideas in matrix algebra and calculates the weights as the elements in the eigenvector associated with the maximum eigenvalue of the matrix. For the above set of pairwise comparisons, the resulting weights are:

    w1 = 0.751, w2 = 0.178, w3 = 0.070.

    The calculations required are quite complex. In practice they would be undertaken by a special AHP computer package.
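    For illustration only, the eigenvector calculation can be reproduced in a few lines of Python with NumPy; this is a sketch of the arithmetic, not a substitute for a full AHP package:

```python
import numpy as np

# Pairwise comparison matrix from the three-criteria example
A = np.array([[1,   5,   9  ],
              [1/5, 1,   3  ],
              [1/9, 1/3, 1  ]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)        # index of the maximum eigenvalue
w = np.abs(eigvecs[:, k].real)     # associated eigenvector
w = w / w.sum()                    # normalise so the weights sum to one
# w is approximately (0.751, 0.178, 0.070), as quoted in the text
```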

    A more straightforward alternative, which also has some theoretical attractions (see below), is to:

    • calculate the geometric mean of each row in the matrix;
    • total the geometric means; and
    • normalise each of the geometric means by dividing by the total just computed.

    In the example, this would give:

                                         Geometric mean    Weight88
    Criterion 1    (1 x 5 x 9)^1/3           3.5568         0.751
    Criterion 2    (1/5 x 1 x 3)^1/3         0.8434         0.178
    Criterion 3    (1/9 x 1/3 x 1)^1/3       0.3333         0.070
    Sum                                      4.7335        (=1.00)
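    The geometric-mean calculation above is easy to reproduce; a minimal sketch in Python with NumPy, using the same matrix:

```python
import numpy as np

# Pairwise comparison matrix from the three-criteria example
A = np.array([[1,   5,   9  ],
              [1/5, 1,   3  ],
              [1/9, 1/3, 1  ]])

gm = A.prod(axis=1) ** (1 / A.shape[1])   # geometric mean of each row
weights = gm / gm.sum()                   # normalise by the total of the means
# gm is approximately (3.557, 0.843, 0.333); weights (0.751, 0.178, 0.070)
```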


    Taken to further decimal points of accuracy, the weights estimated by the two different methods are not identical, but it is common for them to be very close.

    In computing weights, it is normal to cluster criteria in a value tree (see section 6.2.6). In AHP applications, this allows a series of small sets of pairwise comparisons to be undertaken within segments of the value tree and then between sections at a higher level in the hierarchy. In this way, the number of pairwise comparisons to be undertaken does not become too great.

    In addition to calculating weights for the criteria in this way, full implementation of the AHP also uses pairwise comparison to establish relative performance scores for each of the options on each criterion. In this case, the series of pairwise questions to be answered asks about the relative importance of the performances of pairs of alternatives in terms of their contribution towards fulfilling each criterion. Responses use the same set of nine index assessments as before. If there are m options and n criteria, then n separate m x m matrices must be created and processed.

    Although this may seem a daunting task, computer packages such as Expert Choice and HIPRE 3+ automate most of the computations. Generally, non-specialist users find the pairwise comparison data entry procedures of AHP and related procedures attractive and easy to undertake.

    With weights and scores all computed using the pairwise comparison approach just described, options are then evaluated overall using the simple linear additive model used for MCDA. All options will record a weighted score, Si, somewhere in the range zero to one. The largest is the preferred option, subject as always to sensitivity testing and other context-specific analysis of the ranking produced by the model.
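    Putting the pieces together, the linear additive aggregation can be sketched as follows. The criterion weights are those of the worked example; the option scores are purely hypothetical numbers invented for illustration, laid out with each column normalised to sum to one as AHP produces:

```python
import numpy as np

weights = np.array([0.751, 0.178, 0.070])   # criterion weights from the example

# Hypothetical scores: rows are options, columns are criteria; each
# column sums to one across the options.
scores = np.array([[0.6, 0.3, 0.5],
                   [0.4, 0.7, 0.5]])

S = scores @ weights           # overall weighted score S_i for each option
preferred = int(np.argmax(S))  # the option with the largest S_i is preferred
```

    With these invented numbers, option 1 scores 0.539 and option 2 scores 0.460, so option 1 would be preferred, subject to sensitivity testing of the ranking.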

    5.2 CONCERNS ABOUT THE AHP

    The AHP provides the same benefits as do MCDA models in terms of focusing decision maker attention on developing a formal structure to capture all the important factors likely to differentiate a good choice of an option from a poor one. Pairwise comparisons are generally found to be readily accepted in practice as a means of establishing information about the relative importance of criteria and the relative performance of options. The fact that the pairwise comparison matrix provides some redundant information about relative values allows some cross-checking to be done. Arguably, the resulting weights or scores may be more stable and consistent than if they were based on a narrower set of judgements. AHP also fits comfortably with circumstances where judgements, rather than measurements of performance (say), are the predominant form of input information.

    Nonetheless, despite these attractions, decision analysts have voiced a number of concerns about the AHP. French89 provides a succinct critique; see also Goodwin and Wright.90 The main doubts raised are:

    (a) The 1 - 9 scale has the potential to be internally inconsistent. A may be scored 3 in relation to B and B similarly scored 5 relative to C. But the 1 - 9 scale means that a consistent ranking of A relative to C (requiring a score of 15) is impossible.

    (b) The link between the points on the 1 - 9 scale and the corresponding verbal descriptions does not have a theoretical foundation.

    (c) Weights are elicited for criteria before measurement scales for criteria have been set. Thus the decision maker is induced to make statements about the relative importance of items without knowing what, in fact, is being compared (see section 6.2.10).


    (d) Introducing new options can change the relative ranking of some of the original options. This 'rank reversal' phenomenon, first reported by Belton and Gear,91 is alarming and arises from a failure consistently to relate scales of (performance) measurement to their associated weights.

    (e) Although it is a matter of debate among decision analysts, there is a strong view that the underlying axioms on which AHP is based are not sufficiently clear as to be empirically testable.

    5.3 ALTERNATIVES TO AHP

    A number of attempts have been made to develop MCA procedures that retain the strengths of AHP while avoiding some of the objections. The focus of these efforts has largely been on finding different ways of eliciting and then synthesising the pairwise comparisons. It is beyond the scope of the manual to go into great detail about these developments.

    The best known alternative is REMBRANDT (see Lootsma92 and Olson93). REMBRANDT uses a direct rating system on a logarithmic scale to replace the 1 - 9 scale of AHP, and exchanges the eigenvector-based synthesis approach for one based on use of the geometric mean to identify estimated weights and scores from pairwise comparison matrices.94 A more recent alternative is the MACBETH procedure, outlined in section 5.6.

    84 Saaty, T. (1980) The Analytical Hierarchy Process, John Wiley, New York.

    85 Zahedi, F. (1986) 'The analytic hierarchy process: a survey of the method and its applications', Interfaces, 16, pp. 96-108.

    86 Golden, B., Wasil, E. and Harker, P. (eds.) (1989) The Analytic Hierarchy Process: Applications and Studies, Springer Verlag, New York.

    87 Shim, J.P. (1989) 'Bibliographical research on the analytic hierarchy process (AHP)', Socio-Economic Planning Sciences, 23, pp. 161-7.

    88 Weights should sum to one. There is a small rounding error.

    89 French, S. (1988) Decision Theory: an Introduction to the Mathematics of Rationality, Ellis Horwood, Chichester, pp. 359-361.

    90 Goodwin, P. and Wright, G. (1998) Decision Analysis for Management Judgement, second edition, John Wiley, Chichester.

    91 Belton, V. and Gear, T. (1983) 'On a short-coming in Saaty's method of analytic hierarchies', Omega, 11, pp. 228-30.

    92 Lootsma, F.A. (1992) The REMBRANDT System for Multi-criteria Decision Analysis via Pairwise Comparisons or Direct Rating, Report 92-05, Faculty of Technical Mathematics and Informatics, Delft University of Technology, The Netherlands.

    93 Olson, D. (1996) Decision Aids for Selection Problems, Springer Verlag, New York.

    94 For succinct critiques see French, pp. 359-361, and Goodwin and Wright, pp. 394-397.