
Performance Measurement

Public Service Performance Evaluation: A Strategic Perspective

Peter M. Jackson

Performance evaluation of government activities is essential. Government, no matter the level (central/federal, state, or local), should be accountable and responsible to the electorate and a host of other stakeholders. Accountability involves, among other things, an assessment of policy outcomes, along with the means and processes used to deliver the policies. Were the policies suitable for the problems that the electorate wanted to be solved? Were the policies implemented efficiently and effectively? Did the electorate, taxpayers, and users of public services (often these are distinct groups) obtain value-for-money?

Images of Performance
A large number of public service organizations in the UK have introduced performance-monitoring systems*. In doing so, attention has tended to focus upon the appropriateness and suitability of measures and indicators of performance. A great battery of indicators has been published and much ink has been used in evaluating their heuristic value. What images of performance do these indicators reveal?

Performance-monitoring systems have important hidden dimensions below the water-line of indicators and measures. These include the logical foundations and values upon which the whole system rests. Weakness in these areas, along with a lack of appreciation of the different values that lurk in the depths of the performance-monitoring system, results in severe implementation problems. The design of a performance-monitoring system reflects specific images of public administration (Kass and Catron, 1990). These range from the new public sector management, with its emphasis upon technical developments in public policy analysis, market-oriented metaphors and an emphasis upon operational efficiency and effectiveness, to those whose preferred image focuses upon the importance of community and public service values.

Implementation problems arise when these images conflict within the public service organization or when the organization's stakeholders hold images that are significantly different from those of the ruling organizational elite. Introducing performance-monitoring systems can raise important issues in the management of change (Jackson and Palmer, 1993). Performance criteria are by definition value-laden. They are, therefore, the currency of political debate. When considering the evaluation of the performance of public service organizations it is necessary to go beyond a discussion of the data used for the construction of performance indicators and to confront such questions as:

• What are the origins of the criteria used to evaluate public services?

• Who sets the criteria?

• Whose interests are being served by the evaluation?

* Past issues of Public Money & Management have recorded the introduction of such systems. See also: Financial Accountability and Management, and Jackson and Palmer (1993).

Performance evaluation in public service organizations is fraught with theoretical, methodological and practical problems which run deep in any discussion of democracy. Some of these issues are reviewed in this article.

Performance Evaluation and Strategic Management
The introduction of performance-evaluation systems to public service organizations is one of the elements of the new public sector management. Some regard the monitoring and measurement of performance as an expression of classical and scientific management techniques. Others are hostile to what they see to be the importation of naive production management techniques from the private sector. Yet others view the production of performance measures as a means of exercising greater control over public service bureaucracies. While there is some strength in these views, they are too negative and too narrow and can result in the ridiculous conclusions that either the performance of public service organizations should not be subjected to measurement or that such performance is always impossible to measure.

Public service organizations are extremely complex: they serve multiple objectives; have a diversity of clients; deliver a wide range of policies and services; and exist within complex and uncertain socio-political environments. The values which drive such organizations, and against which their performance must be judged, are also more complex

Peter M. Jackson is Professor of Economics and Director of the Public Sector Economics Research Centre at the University of Leicester.

© CIPFA, 1993. Published by Blackwell Publishers, 108 Cowley Road, Oxford OX4 1JF and 238 Main Street, Cambridge, MA 02142, USA.

PUBLIC MONEY & MANAGEMENT OCTOBER-DECEMBER 1993


than the simple commercial market-led values of profitability. Given this environment within which public service managers must make decisions, it is not a trivial remark to suggest that the challenges which face public service managers are often more daunting than those facing their private sector counterparts and require a wider range and greater intensity of skills.

It is the sheer complexity of the tasks which public service organizations face that makes them candidates for the public domain rather than the market-place. This simple fact is so often forgotten by the advocates of privatization. The inclusion of public service clauses in the contracts awarded to newly privatized companies highlights some of the difficulties and the conflicts between competing objectives that exist for public service managers.

When evaluating the performance of public service organizations their complexity needs to be recognized. Too often, it is the trivial and more easily measured dimensions of performance which are recorded. The deeper, more highly valued, but difficult to measure, aspects are ignored.

The complexity of organizations and the difficulties of measuring their performance are not ignored if a strategic-management perspective, such as that advocated by Jackson and Palmer (1993), is adopted. Indeed, the better managed private sector organizations have tended to use a strategic-management approach when designing their performance measures. Simple input/output measures or profitability and liquidity measures are of very limited value. They provide managers in private sector organizations with little relevant information. This is now well recognized in the literature on private sector performance (see Kaplan, 1990).

Successful private sector organizations now use performance-monitoring systems which are closely related to their strategic decision-making cycle. This approach captures a much richer set of performances than the simple and naive approach. It focuses attention on consumer satisfaction, product or service quality, and the performance of processes, not just the simple financial measures of profitability and rate of return on investment. Writers such as Kaplan (op. cit.) have also recognized that financial statements provide information which is only as good as the conventions and definitions which underlie the accounts. The same applies to management accounts which are used for decision-making. Their relevance for modern capital-intensive production processes has been called into question (Johnson and Kaplan, 1987). Much greater care and attention is given to the fundamentals of performance measures in private sector organizations. This now includes going outside the simple metric of profitability. Attempts to incorporate non-commercial values, which reflect the corporation's environmental (green) and social (ethical) responsibilities, are to be found in many private companies.

Control versus Learning
Performance monitoring need not be used only as a means of organizational control, as it undoubtedly is in the classical and scientific management paradigm. In the strategic-management perspective the information generated by the performance-monitoring system is a means of organizational learning. It enables any plan, or set of targets, to be compared against outcomes and forces the question: 'Why is there a deviation?' Answers to that question facilitate learning: a discovery that while some critical success factors might be under the direct control of managers, many others are not. Learning can also mean a realization that the performance targets set by management may be too ambitious given the organization's current capabilities.

Organizational learning is illustrated in figure 1. This diagram sets out the standard strategic-management model which is found in most textbooks. Where, however, figure 1 differs from standard models is in the inclusion of a feedback loop between the information produced by comparing performance measures and indicators against targets and the locus of decision-making. In order to learn, it is necessary to have more than just a set of performance indicators. These indicators must be compared against some benchmark or target and there must be a capacity and capability to analyse any gaps. Furthermore, the output of gap analysis needs to be fed back into the decision-making process. Few public service organizations are in a position to carry out performance-gap analysis.
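The comparison step can be made concrete with a minimal sketch (not from the article; the indicator names and target values are hypothetical). The point is that learning requires three things the text identifies: recorded outcomes, explicit targets, and a computed gap that can be fed back to decision-makers.

```python
# Illustrative sketch of performance-gap analysis, assuming hypothetical
# indicators and targets. An indicator with no target, or no recorded
# outcome, cannot support organizational learning.

def gap_analysis(targets, outcomes):
    """Compare each indicator's outcome against its target and return
    the gaps that need to be fed back into decision-making."""
    gaps = {}
    for indicator, target in targets.items():
        outcome = outcomes.get(indicator)
        if outcome is None:
            # Data collected for its own sake: no outcome, no learning.
            gaps[indicator] = "no data"
        elif outcome < target:
            gaps[indicator] = target - outcome  # shortfall to be analysed
    return gaps

targets = {"cases_resolved": 1200, "user_satisfaction": 0.80}
outcomes = {"cases_resolved": 950, "user_satisfaction": 0.83}
print(gap_analysis(targets, outcomes))  # {'cases_resolved': 250}
```

Most organizations in the research described below had indicators but no targets, which in this sketch corresponds to an empty `targets` dictionary: the loop never runs, and no gap can ever be identified.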

The task of strategic management is the province of the senior management group. This group should be involved in tackling the following issues:

• Setting the strategic direction (long term) of the organization.

• Implementing and managing the change process in line with the chosen strategic direction.

• Ensuring continuous improvement of operational performance.

The distinction between strategic and operational management is shown in figure 1.

This list of activities and responsibilities can be applied to any organization, public or private sector. Once the strategy is chosen, appropriate performance measures and indicators are defined that will show whether the organization is on the right tracks. Performance indicators show progress: they are milestones along the road and are a visible manifestation of what the organization stands for and represents. That assumes, however, that all significant dimensions of the organization can be represented by some measure or indicator. This is not always the case: often the measurable drives out the immeasurable.

In what areas of an organization should indicators be revealing progress and achievement? Our research reveals that most public sector



organizations focus upon operational performance and within that area greatest attention is given to cost indicators (economy) and efficiency*. Few of the organizations studied employed an explicit strategic-management framework. They were, therefore, not in a position to produce measures and indicators of the organization's performance in the achievement of the long-term or strategic objectives. None of the organizations reviewed had indicators that would demonstrate performance in the area of change management. Indeed, few of the organizations had a change-management programme in place. Another important feature of the organizations in the sample was that while they professed to have performance measures and indicators, few of them had set targets for these indicators. It was, therefore, impossible to know if progress had been made or indeed if the organization had any successes. The indicators illustrated what was done, not what should be done. Performance gaps could not be identified. This obviously impedes organizational learning and creates the suspicion that indicators were simply being produced as an end in themselves (a successful organization is one that produces many performance indicators) or that indicators were an instrument of organizational control.

Implementation
The use of performance indicators by public sector organizations has been on the public sector management agenda in the UK for about ten years. Research at the University of Leicester shows that despite the encouragement and leadership of agencies such as the National Audit Office, the Audit Commission and the Accounts Commission for Scotland, implementation of performance-evaluation systems has been slow, piecemeal and often half-hearted. In-depth case studies of those organizations which have successfully introduced a performance-evaluation system demonstrate clearly that a critical success factor is the amount of thought and effort which is put into managing the change process. Most organizations underestimate the time it takes to design a performance-evaluation system, to negotiate it into place, to trial it, debug it and develop it. This whole cycle can take between three and four years.

It is the responsibility of the senior management group to ensure that a system exists that will produce the strategic, continuous improvement and change-management performance indicators. They also need to sanction the resources required to establish a management information system that will collect and disseminate such information. Of course the senior management team do not make direct decisions on the precise performance indicators. Successful implementation requires that those who are responsible for delivering the

* Research carried out at the Public Sector Economics Research Centre, University of Leicester, into the practice of performance measurement in public service organizations.

[Figure 1. The strategic-management cycle: evaluation of the external and internal environments feeds into setting direction (mission, values and objectives; structure and style; reward and information systems; performance measures), which leads to business/service plans, activities, implementation, and outputs/outcomes. Outcomes are compared against performance targets, with a feedback loop to the locus of decision-making. The upper part of the cycle constitutes strategic management; the lower part, operational management.]

various elements of performance at the operational level, throughout the organization, are involved in designing the system. A sense of ownership of performance indicators is important for motivation and the acceptance of the indicators.

It is easy to use the results of Leicester's research to criticise public service organizations for their limited success in implementing sophisticated performance-evaluation systems. That would be an unfortunate and unintended outcome. Public service organizations, when compared to their private sector counterparts, are well in advance of them in the measurement of performance and the employment of internal continuous improvement indicators. A recent study by the Harris Research Centre (1990) shows that the directors of UK companies have a strong tendency to focus internally and on financial indicators. They pay little attention (in some cases none) to such external factors as the perceptions of customers; competitors' actions; and their company's position relative to its competitors. Strategic planners expressed dissatisfaction with the quality of the information they had to formulate strategy. Indeed, few companies were engaged in strategic planning. If they did have a strategy they seldom monitored the outcomes against the plans. Management information systems were dominated by financial information but even then that information tended to be confined to those areas which were required for statutory financial accounting and reporting.



Little relevant management accounting information was recorded.

The Governance of Public Service Organizations
The language and concepts of strategic management can be transported between the public and the private sectors, but not without some modification. This is nowhere more so than at the senior management level of decision-making. In a private company it is the board of directors who are charged with the strategic areas of responsibility. Operating on behalf of the owners of the company (the ordinary shareholders) they make decisions about where the company should be in five, ten, or 15 years' time. What businesses should they be in? What services and products should they produce? What new products should be introduced and when? Where should production be located? These decisions position the company to serve its customers; to compete against existing or potential rivals; and to provide a satisfactory return for shareholders and other stakeholders including its employees.

What is the equivalent to the board of directors for public service organizations? What is the senior management team that is responsible for creating strategy? Who are its members? These questions reduce to the single question: 'What is the nature of the governance system of public service organizations?'

Attempts at answering this question tend to reveal a general weakness in the public sector performance-evaluation literature. There is a tendency to regard the performance and success of public services as a managerial responsibility. At the operational level that is a reasonable assumption to make, but at the strategic level what does it mean? Who are the members of the senior management team equivalent to the board of directors? This is an inherent problem of the new public sector management paradigm: it fails to specify adequately the organization's governance system, especially the nature of the senior management group.

Who sets the strategic direction for public service organizations? Is it the political leaders (central and local)? Is it the chief executive and chief officers? Or is it a partnership of both groups? These questions are familiar within the context of the policy-analysis literature: who sets public policy and who is accountable for the outcomes? It is the age-old separation of powers between the legislature and the executive branch of government.

For some this will be a statement of the glaringly obvious. But a moment's reflection soon produces a realization that while political leaders are keen to introduce systems which will ensure operational performance is measured and evaluated, they do little to enable the evaluation of the strategic performance of public services. To do so would be to evaluate the effectiveness of policies and, therefore, to put into the public domain politically sensitive information that would be used to judge, in the political arena, the efficiency and effectiveness of the policy-making process.

Where are the boundaries of the public sector management model? To what set of activities does it apply? The National Audit Office, the Audit Commission and the Scottish Accounts Commission draw a boundary round the policy-making process. They investigate, when conducting value-for-money (VFM) studies, the efficiency and effectiveness of implementing given policies. Their interest is in evaluating managerial effectiveness and not policy effectiveness per se. Of course in practice it is difficult to be mute about matters relating to policy, especially if public sector managers have been assigned an impossible mission of implementing the unimplementable, such as the Community Charge.

Value-for-Money (VFM): Unbalanced Emphasis
The separation of powers between the legislature and the executive has produced a separation of the dimensions of performance evaluation. VFM auditing is a misnomer since by its very nature it cannot make any judgement about the values placed on public policy outcomes. Instead, VFM auditing focuses attention upon technical efficiency (i.e. X-efficiency). It addresses the important issues of input/output relationships, along with purchasing and contractual relationships. These are all directed at one side of the value-for-money equation, namely, costs.

Nothing is usually said about allocative efficiency. Significant efforts have been made to generate performance indicators which reflect the outputs (final and intermediate) of public services, but taken in isolation such indicators are of little value. Allocative efficiency requires that managers know something about the output of their services relative to demand. Is the service being over- or under-produced? Is the service mix the correct one? Is the service being targeted on the appropriate client/user group? To demonstrate, by using indicators, that service output is rising is not by itself a useful VFM indicator. If the public service is producing more and more of a service which users do not want; which is of the wrong mix or quality; which is targeted on an inappropriate user group; or which has low value placed on it, then allocative inefficiency exists.

Many public services are near the efficiency frontiers of their production functions. There is probably little waste or slack in the system on average, so that X-efficiency looks reasonable. This does not deny scope for improvements in X-efficiency. The issue, however, is the amount of allocative inefficiency that exists. The suspicion is that allocative inefficiency is high and is the more significant source of improvements when it comes to delivering VFM.
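The distinction can be illustrated with deliberately hypothetical numbers (none of these figures come from the article): a service can score well on a conventional X-efficiency measure, cost per unit of output, while remaining allocatively inefficient because the mix of outputs it produces diverges from what users demand.

```python
# Hypothetical illustration of X-efficiency versus allocative efficiency.
# Both the cost figures and the share data are invented for this sketch.

def unit_cost(total_cost, output):
    """A crude X-efficiency proxy: cost per unit of output produced."""
    return total_cost / output

def mix_mismatch(supplied_shares, demanded_shares):
    """A crude allocative proxy: total absolute divergence between the
    share of each service supplied and the share users demand.
    0 means the output mix exactly matches demand."""
    return sum(abs(supplied_shares[s] - demanded_shares[s])
               for s in supplied_shares)

# The service produces cheaply: low cost per unit, so X-efficiency looks fine...
print(unit_cost(total_cost=100_000, output=10_000))  # 10.0

# ...but 70% of output goes to a service only 30% of users actually want.
supplied = {"service_x": 0.7, "service_y": 0.3}
demanded = {"service_x": 0.3, "service_y": 0.7}
print(round(mix_mismatch(supplied, demanded), 2))  # 0.8
```

A VFM audit focused on the first number alone would report a well-run service; the second number is what the article argues is usually never measured at all.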

The hole in the middle of the VFM doughnut is the absence of suitable institutions to judge the effectiveness of public services and policies and, therefore, to make a judgement about allocative efficiency. Some have suggested that the market



would be an appropriate institution. This image is advocated by the proponents of privatization. Markets are, however, notoriously inefficient in their capacity to evaluate the externalities that are usually associated with public services.

Select Committees are another institution. These committees do make some attempt at evaluating the policy strategies adopted by central government departments, but they are often hindered by insufficient information about the internal and external environment of the services. What is the Select Committee equivalent for local government, the various health boards and the new Next Steps agencies? Indeed, who makes strategy for these organizations?

Another imbalance arises between static and dynamic concepts of efficiency. The majority of performance indicators currently in use focus upon static efficiency, that is, the short run. Dynamic efficiency emphasizes the long run and, therefore, the effectiveness of investment in new productive capacity, including organizational infrastructure and managerial innovations. The capital expenditure constraints in the public sector suggest that dynamic efficiency has been sacrificed.

Improvements in static and dynamic efficiency require that appropriate incentives are in place that will influence decisions and guide actions towards efficiency improvements. Whether or not a suitable balance of incentives exists is taken up below.

Wish-driven Strategies
What makes one organization more successful than others? This is another way of asking about its relative superior performance. Success and high performance are not the 'product of wish-driven strategy' (Kay, 1993, p. 4). Vision, mission and values are necessary but certainly not sufficient. Too many strategic-management texts incorrectly assume that, provided an organization has a mission statement, it will out-perform those that do not.

If mission and vision remain in the minds of the senior management team and are not communicated throughout the organization then they are valueless. The critical success factors (CSFs) are an appraisal of the internal strengths and weaknesses of the organization; an appreciation and understanding of its environment; and the capacity to learn and adapt as the uncertain future unfolds. Strength in these areas must be worked at and acquired. This is the function of management. Success is not achieved by accident. Short-run success gained as a result of chance will be ephemeral unless strong management exists to capitalize on it.

Success requires a match between internal capabilities (strengths) and external relationships. This 'strategic fit' demands an understanding of the organization's relationship with its suppliers; its clients (customers); the political system within which it is embedded; the socio-demographic environment, etc. Many 'wish-driven strategies' simply cannot be delivered because the organization

does not have the necessary capabilities to do so. Wish-driven organizations are guided by what the senior management team would like them to be, not what they are or what, given their existing set of CSFs, they are capable of being. If public service organizations are to achieve their aspirations then the senior management team needs to create the necessary capabilities. Dreams and visions are not substitutes for careful assessments of existing strengths and weaknesses.

The earlier discussion of public services governance structures raised the issue of who is responsible for producing the public service organization's mission and vision. This was seen to be primarily the role of the political leaders, with an input from professional advisors who may or may not be civil servants. Public service managers are responsible for producing business or service plans which will deliver the contents of the mission statement efficiently and effectively.

Most politically generated public service mission statements are wish-driven. They are made with little reference to the organization's capabilities of delivering them. This presents public service managers with a real problem: a mission impossible, or a remit which is bound to result in under-performance. Politicians, when making wish-driven strategies, should ensure that the organization is adequately resourced to produce the capabilities that will enable it to deliver the elements of the strategy. The question is how can they be made accountable to do so?

Unless the responsibilities of politicians for the performance of public service organizations are adequately recognized, public service managers are likely to be held to account for decisions over which they have no (or little) input or control. Auditing bodies such as the National Audit Office or the Audit Commission can, and on occasion do, comment upon the inadequate capabilities of a public service to deliver. However, stronger statements need to be made about who is responsible for these inadequacies.

Frequent changes in wish-driven strategies provide a source of instability for management action. In the private sector the mission statement tends to be a fixed point which is changed infrequently. The strategy of public service organizations, however, is tied to the political cycle and, within that, to changes in Secretaries of State each time there is a Cabinet reshuffle. A change in mission and strategy results in changes in service plans. Such instability is destabilizing at the operational level and can result in poor performance because personnel are constantly adjusting to new expectations and routines. This suggests that if public service organizations are to improve their performance they need to be more flexible and more loosely coupled in order that they might ride out the storms of instability. The downside of this recommendation is that flexibility can bring with it greater amounts of procedural uncertainty which can paralyse an organization, resulting in delays in decision-making. Greater decentralization also



    weakens accountability.

So What?
The trend over the past ten years has been to produce mission statements, strategies and business/service plans for public service organizations. Data have been collected which are then reconstituted into performance measures and indicators. So what? Has anything materially changed? Have services improved; has greater value been created?

Many of the changes are symbols of good management practice. Little of substance or consequence changes as a result of their introduction. They are the first step on a long journey. They are necessary but not sufficient.

Performance indicators simply represent management information. Like all forms of information they are an input which comes in a variety of qualities. Some convey clearer signals than others. However, like all inputs they need to be worked on and crafted if they are to be useful. Our research shows that too often decision-makers delegate the task of collecting, collating and publishing statistics for performance measurement to a specialist group. This is in many cases a form of sidelining performance measurement. In these cases the information contained in the performance indicators does not really influence decisions about resource use and will not help decision-makers to create value. Sidelining of performance measurement has been found in local authorities, central government departments and in GP practices.

If resource-use decisions are to be influenced by the information contained in performance indicators then they need to be related to incentives that will change behaviour. For example, the information signals contained in the new capital charges in the NHS have no impact on resource use because the charges are not related to funding. They do not influence behaviour in terms of asset use, with the result that their introduction has had no impact on the efficient use of assets.

At this moment in time it is impossible to know the value of performance indicators. It is not possible to know, let alone evaluate, what has happened as a result of their use. Whether or not resources are used more efficiently; whether or not public services are more effective; and whether or not more value has been created remain unanswered questions. The article of faith is that overall performance is better with the availability of the information than without.

Important information gaps do, however, remain. Insufficient resources have been allocated to analysing the cause-and-effect relationships between different means of policy intervention and outcomes. Until crucial aspects of policy analysis have been carried out, assessment of the effectiveness of public services will remain in the policy 'black box'.

Conclusions
When located within a strategic-management perspective the performance measurement of public services is seen to be necessary but not sufficient for improved management practice; and a means of learning rather than a means of control. If the introduction of performance measures/indicators is to give the expected pay-off then it is necessary for public service organizations to have the capacity to learn from the information signals that indicators provide, as well as the organizational capabilities to act upon that learning. Only then will additional value be created which justifies the costs of measuring performance.

A strategic framework also raises important issues about the appropriate governance structure for public services and who is responsible for which areas of strategy. Concentration on X-efficiency, rather than allocative efficiency, has tended to shift the spotlight of accountability away from politicians to public service managers. This has now brought about an imbalance in the concept of value-for-money auditing. The creation of value in public services is only seen as a transfer of resources from those who produce services to those who consume them. Allocative efficiency requires a more careful assessment of what is currently being provided and the values which users place upon it.

Performance evaluation is part of the age-old issue of public service accountability. It is new wine in old bottles. The new public sector managerialism and its symbols of good management practice have, through the use of performance measurement, introduced new tints to lend new colour to the spectrum of accountability. The dominant ideology with its emphasis upon market forces and efficiency underpins these innovations. While many of the changes appear to be innocuous, the concepts upon which they rest are contestable.

References
Harris Research Centre (1990), Information for Strategic Management: A Study of Leading Companies. Manchester.
Jackson, P. M. and Palmer, A. (1993), Developing Performance Monitoring in Public Sector Organizations. Management Centre, University of Leicester.
Johnson, H. T. and Kaplan, R. S. (1987), Relevance Lost: The Rise and Fall of Management Accounting. Harvard Business School Press, Boston.
Kaplan, R. S. (Ed.) (1990), Measures of Manufacturing Success. Harvard Business School Press, Boston.
Kass, H. D. and Catron, B. L. (1990), Images and Identities in Public Administration. Sage Publications, London.
Kay, J. A. (1993), Foundations of Corporate Success. Oxford University Press, Oxford.
