

    IT CUSTOMER SATISFACTION SURVEYS

    Guidance on Best Practice

    Version 1.0


Contents

Section 1  Introduction
  a) Drivers for the survey

Section 2  Scope
  a) Internal versus external customers
  b) Customer segmentation
  c) Service versus system versus infrastructure
  d) Areas to be addressed

Section 3  Methods of collection

Section 4  Methodology
  a) Design of the survey
  b) Communication
  c) Sample sizes
  d) Frequency and timing
  e) Analysis and actions
  f) Review


    1. Introduction

This guidance is designed to give those working in government IT services advice on best practice for conducting IT customer satisfaction surveys. It is based on the current working practices of professionals across a range of government organisations, both departmental and non-departmental.

The guidance was developed by some of the organisations participating in the 2005 IT benchmarking exercise conducted by the e-Government Unit of the Cabinet Office. It focuses on three key areas which need to be addressed when developing a survey: the scope of the survey, the methods of collection and the methodology. Additional guidance on the drivers for a survey has been incorporated here in the Introduction.

Work on the guidance has been undertaken with the practical needs of professionals in mind. It should not be regarded as definitive but has been designed to fit the requirements of the OGC Toolkits agenda.

    a) Drivers for the survey

Before embarking on a detailed design, those responsible for conducting the survey need to understand why the survey is being conducted. Potential drivers for a survey include:

    Quality of service

This is the fundamental reason for conducting a survey. It may however be constrained by an existing framework, for example the balanced scorecard or the Excellence Model from the European Foundation for Quality Management, which the organisation uses for its quality processes. Or the survey might be underlain by guidelines from the Office of Public Service Reform which cover delivery, timeliness, information, professionalism and staff attitude.

    Ongoing comparisons

The survey may be part of an annual series which attempts to understand how the service is changing and to evaluate the effectiveness of a service improvement programme. In this case there have to be a number of fixed questions to allow a year-on-year comparison. Similar considerations apply if an attempt is made to benchmark the service against that of comparable organisations.

    Change programmes

It is quite likely that major organisational changes such as Machinery of Government changes or a new supplier will impact on the ICT services. A survey may seek to directly assess the impact of such changes.

    External requirements

Institutions such as the National Audit Office or the e-Government Unit may wish to assess the effectiveness of ICT services across government from time to time. In such cases the survey will be moulded to these external requirements.


    2. Scope

    a) Internal versus external customers

As a rule IT units within departments have two types of customer, internal and external. Internal customers are the employees of the organisation and generally use the systems provided by the IT unit at the desktop. External customers are members of the general public, businesses or other organisations which use services such as websites, or which receive output from large transactional systems.

Although the needs of both groups are important, the guidance in this document is only aimed at customer satisfaction surveys of internal customers.

    b) Customer segmentation

A customer segment describes a group of customers and the services it uses. It should be ensured that all customers and services are considered and, if any are to be omitted, the reason why should be recorded. This is because these might be included in the future and any omissions will need to be considered when comparing survey results.

    c) Service versus system versus infrastructure

A survey needs to clearly identify and articulate the aspects of IT that are being tested. In many cases this will be the complete IT service and the survey will ask questions about general performance. However, if there is interest in one specific aspect of the overall IT service provision, such as email, then this will have to be clearly explained in terms that users will understand.

    d) Areas to be addressed

Any customer satisfaction survey aimed at internal IT customers should look at five main areas:

    Individual customer use and experience of the IT service


    How well the current IT service performs

    How good the response is when something goes wrong

    How well the IT function supports developing business needs

    Specific issues of concern to the individual organisation

Where IT services are provided through a recognised framework, e.g. in compliance with the IT Infrastructure Library, then the questions should be structured to feed back into the elements of the framework such as the Service Level Agreements, the Availability Plan and the Continuity Plan.

Area: Individual customer use and experience of the IT service
Purpose: Attempts to gain an overall impression of the user's requirement for IT and general satisfaction with the service.
Typical questions:
  How important is the IT service to you?
  How much do you use the IT services?
  What particular packages do you use?
  How satisfied are you with the overall IT service?
  Do the systems provided by the IT service meet your needs effectively?
  Have you had sufficient IT training for the systems you need?
  Can you get extra training when you want it?

Area: How well the current IT service performs
Purpose: Tests the performance of the existing IT systems.
Typical questions:
  Do you use the departmental intranet / Internet / GSI from your PC?
  Do you access the departmental website?
  Can you get the information you require to do your job?
  How reliable do you find the existing IT systems?
  Do you find the speed of the existing systems acceptable?
  How do you find the speed of the Internet?
  Are the systems available whenever you want them?
  Do you have to restart/reboot your PC frequently due to problems?
  Do you know what level of support to expect from the service desk?
  Are you aware of the Service Level Agreements (SLAs) with the service provider?

Area: How good the response is when something goes wrong
Purpose: Attempts to assess the effectiveness of the service desk/helpdesk and any deskside support. One issue worth probing is the extent to which the user community rely on the official helpdesk and the extent to which they solve problems themselves or rely on peer support.
Typical questions:
  Who is your first point of contact when things go wrong?
  Are there self-help documents or websites and how effective are they?
  Are you satisfied with the speed of the initial response when dealing with your problems?
  Do you find the attitude of service desk staff helpful?
  How did the service desk solve your problem (telephone/email/remotely/personal visit...)?
  Do the IT support hours meet your needs?
  How well does the helpdesk solve your problems?
  How quickly does the helpdesk solve your problems?
  How well are you aware of progress being made in solving your problems?
  How much time do you spend on routine IT activities, e.g. file housekeeping/printer problems etc.?

Area: How well the IT function supports developing business needs
Purpose: Attempts to gauge how well the IT service supports the business needs of the organisation, looking at the development of future applications and services as well as extra functionality on the desktop. It should be noted that not everyone in the organisation is likely to be involved in exploring the ways in which future business needs could be supported by IT.
Typical questions:
  Are you consulted about future IT services?
  When thinking about future IT support, from whom do you seek advice?
  How well does the IT service provide extra functionality on the desktop when required?
  Is it easy to make changes to existing systems?
  How proactive is the IT service in looking at current and future needs?
  How effective is the process of introducing extra functionality?
  Does the IT unit provide sufficient support after a new system or enhancement is introduced?

Area: Specific issues of concern to the individual organisation
Purpose: There may be issues of specific concern to the organisation. These may focus on introducing a new system or service, or on ongoing problems.
Typical questions:
  How useful is System X in your work?
  How reliable is System X?
  How responsive is System X?
  How easy to use is System X?


    3. Methods of collection

There are many ways in which feedback can be sought from groups and individuals. The following table summarises the various techniques which might be used to collect feedback and it comments on their strengths and weaknesses.

In selecting feedback methods, it is often appropriate to use more than one technique. The table also indicates, therefore, how the various techniques might best be combined.

Method: Paper / email questionnaire
Advantages: High volume quantitative analysis for relatively low cost (What is good? / What is not so good?).
Disadvantages: Understanding reasons behind survey scores (Why is it good? / Why is it not so good?).
Comments: Questionnaires can contain qualitative questions but analysis can be time consuming and is not always reliable.

Method: Web based questionnaire
Advantages: High volume quantitative analysis for relatively low cost (see above). Can provide more powerful drilling-down capability to expand on particular answers without the questionnaire becoming unmanageable.
Disadvantages: Understanding reasons behind survey scores (Why is it good? / Why is it not so good?), albeit that greater drilling-down potential can improve upon the above.
Comments: If the host web site is not heavily trafficked some form of promotion will be required. Can be an effective and low cost method of regularly sampling opinion.

Method: Suggestion box
Advantages: Giving employees/customers a feeling of having their say.
Disadvantages: Unlikely to provide good, objective, statistically significant information. Can be unfocussed on service issues and open to abuse.
Comments: Unlikely to seriously contribute to customer satisfaction measurement.

Method: Telephone interview
Advantages: Used in conjunction with questionnaires or web surveys, this will provide valuable insights into reasons behind the quantitative results. Can also be used pre-questionnaire to help word the questionnaire to measure likely issues.
Disadvantages: Labour intensive nature makes it unsuitable for high volume analysis. Subjects for interviews need to be carefully sampled to eliminate bias.
Comments: Care needs to be taken to avoid the interviewer selecting or pursuing questions based on personal bias.

Method: Face-to-face interview
Advantages: Similar to telephone survey but can be used to give greater richness to results and to handle more sensitive issues.
Disadvantages: Highly labour intensive. Would need to be employed with other techniques to give statistical significance.
Comments: Care needs to be taken to avoid the interviewer selecting or pursuing questions based on personal bias.

Method: Focus group
Advantages: People spark off each other. Gives greater richness of insights into opinions. Can give insights on solutions. Can be used post-questionnaire to assess qualitative support or pre-questionnaire to help word the questionnaire to measure likely issues.
Disadvantages: Highly labour intensive. Would need to be employed with other techniques to give statistical significance. The group may be too easily steered by a dominant individual.
Comments: This can be done using teleconferencing, which is likely to ease logistics significantly and reduce costs in a distributed organisation.

Method: Account manager
Advantages: Given sufficient organisation cover these can provide excellent real time feedback from customers.
Disadvantages: Organisations without account manager functions will not justify introducing them for feedback purposes only.
Comments: The objectivity of account managers may not always be guaranteed.

Method: Blog (weblog)
Advantages: Can provide valuable early indicators of issues that may merit further investigation.
Disadvantages: Unlikely to provide good, objective, statistically significant information. Getting objective feedback from a blog relies on its being reliable, well used and professionally moderated.
Comments: A successful blog needs a high level of commitment from an informed and professional moderator.

Method: Interactive chat
Advantages: Similar to blog except that responses tend to be in real-time.
Disadvantages: Similar to blog.
Comments: Similar to blog.

    Other guidance tips

It is important to match the nature and content of the survey to the specific audience. Some initial telephone or face-to-face work may help find the best way to appeal to a particular group.

Questionnaires need to be designed, as much as possible, to give graduations of opinion, i.e. "To what extent do you think...?" rather than "Do you think...?". This can also be in the form of making a definite statement, e.g. "I have sufficient training to use the application effectively", and having graduated answers such as Strongly Agree to Strongly Disagree.

There needs to be an awareness of other issues which may influence the results, e.g. a major change programme may make employees/customers more uncertain and critical about their internal service provision than would otherwise be the case.


    4. Methodology

    a) Design of the survey

Survey design issues remain much the same whatever the mode of the survey undertaken. The table below outlines the main areas for consideration:

Objective: Clear and purposeful question design
Areas for consideration: Although certain methods of data collection, such as telephone interviews, appear to be more informal than a questionnaire it is important, whatever the mode of the survey, to consider carefully the number and structure of questions asked. The questions or statements need to be clear and unambiguous, with the ability to draw out information that can be acted upon. Questions and statements should be designed, as far as possible, to give graduations of opinion, e.g.

Example Question: "To what extent do you think...?" (rather than "Do you think...?")
Responses such as: Definitely / Mostly / Neutral / Not really / No

Example Statement: "I know what the timescales are for resolving incidents."
Responses such as: Strongly Agree / Agree / Neither Agree nor Disagree / Disagree / Strongly Disagree

Note that although it is satisfactory to use both questions and statements, the statements may be seen to be biased towards the positive.
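As an illustration, the two graduated item types above can be represented as simple data. This is a minimal Python sketch; the class and field names are illustrative assumptions, not part of any survey tool's API:

```python
# Ordered response scales taken from the guidance above.
AGREEMENT_SCALE = ["Strongly Agree", "Agree", "Neither Agree nor Disagree",
                   "Disagree", "Strongly Disagree"]
EXTENT_SCALE = ["Definitely", "Mostly", "Neutral", "Not really", "No"]

class SurveyItem:
    def __init__(self, text, scale, is_statement=False):
        self.text = text                   # question or statement shown to the user
        self.scale = scale                 # ordered list of graduated responses
        self.is_statement = is_statement   # statements may bias towards the positive

items = [
    SurveyItem("To what extent do you think the IT service meets your needs?",
               EXTENT_SCALE),
    SurveyItem("I know what the timescales are for resolving incidents.",
               AGREEMENT_SCALE, is_statement=True),
]
```

Keeping the scale with the item makes it straightforward to check that ratings sit in the same position throughout the survey.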

Objective: Relevant subject matter
Areas for consideration: Check how important each question/area is to the user. The information can then be used to prioritise action plans.

Objective: Sufficient numbers of questions
Areas for consideration: Asking too many questions may result in less fulsome responses.

Objective: Alignment with SLAs
Areas for consideration: If any Service Level Agreements (SLAs) exist, consider formulating questions so that these test the SLAs' usefulness.

Objective: Adaptability for future use
Areas for consideration: When compiling the survey, its adaptability for future use should be borne in mind. Ideally, the drivers will remain the same but the criteria may differ, e.g. new application releases may be in place.


Objective: Appropriate rating terminology
Areas for consideration: The rating terminology, i.e. the responses to the questions asked, should be identified. Although any number of choices may be used, consideration should be given to both the user's view and the ways in which the data will be dealt with once gathered. The rating terminology needs to be appropriate to the question and can be verbal or numeric:

Verbal choices: Strongly Disagree to Strongly Agree; Very Dissatisfied to Very Satisfied; Very Poor to Very Good; Definitely to No, not at all.

Numerical choices: Rate 1 to 5; Rate 1 to 10.

Most people are used to rating out of ten and this is the most common numerical scale. The ratings should be positioned in the same place throughout the survey so the employee/customer can avoid making the incorrect choice by mistake. The ratings to be included in the "Satisfied" score should be agreed, i.e. Very Good and Good would be used to calculate the satisfaction score. Neutral replies would not count. Alternatively, if a numerical score is being used, the score can be converted out of ten into a percentage satisfaction score.
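The two scoring approaches described above can be sketched in Python. Whether Neutral replies are removed from the denominator or simply not counted as satisfied is a local decision to be agreed; this sketch keeps them in the denominator. The function names are illustrative:

```python
def verbal_satisfaction(responses, satisfied=("Very Good", "Good")):
    """Percentage of responses falling in the agreed 'Satisfied' ratings."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if r in satisfied)
    return 100.0 * hits / len(responses)

def numeric_satisfaction(scores_out_of_ten):
    """Convert an average score out of ten into a percentage satisfaction score."""
    if not scores_out_of_ten:
        return 0.0
    return 10.0 * sum(scores_out_of_ten) / len(scores_out_of_ten)

print(verbal_satisfaction(["Very Good", "Good", "Neutral", "Poor", "Good"]))  # 60.0
print(numeric_satisfaction([8, 7, 9, 6]))                                     # 75.0
```

Agreeing the `satisfied` tuple up front keeps year-on-year comparisons consistent.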


Objective: Preferred scoring mechanisms
Areas for consideration: The scoring mechanism to be used should be identified. Although any number of choices may be used, consideration should be given to both the user's view and the way in which the data will be dealt with once gathered. The most common mechanisms give four, five or ten choices. The benefit of using four options rather than five is that the customer must make a definite decision and not sit on the fence. With five options it is more likely the employee/customer will rate the middle option. However, this may be preferable and may avoid abandoned surveys. With a scoring mechanism of ten the employee/customer has more choice. It may be relevant to include a Not Applicable option.

Objective: Customer / employee comments
Areas for consideration: Comments may be requested on the questionnaire. These are often very useful in determining why the employee/customer is dissatisfied. However, if not directed they may not be as helpful as they could be. A comment is particularly useful if the employee/customer rates a question Poor or Very Poor.

Objective: Deciding whether to run a named or anonymous survey
Areas for consideration: Some employees/customers will prefer to remain anonymous and may not respond to a named survey. If the survey is to be anonymous there is no way to contact the employee/customer for more details of poor ratings. One option is to provide an anonymous survey but give the employee/customer the option of including their name if they are happy to be contacted.

    b) Communication

    Before

Before conducting the survey, it should be ensured that all the stakeholders are in agreement as to the survey's purpose and methodology. Agreement is also required concerning what information will be needed, by whom and how often.


It may be relevant to involve representatives from different teams in the organisation to ensure that all aspects are covered by the questionnaire.

All the team members in the organisation need to be made aware of the purpose of the survey and when it will be conducted.

The survey should be publicised to employees/customers giving the reasons it is being conducted and why participation is required. Details should be given as to when and how the results will be published.

    During

During the survey it may be useful to remind customers of the closing date for responses, particularly if the response rate has not been high. This could be done by a follow-up email to all those invited to participate or by a bulletin on the intranet.

If any of the feedback received needs urgent attention it can be communicated to the person responsible.

    After

After the survey closes, a message can be posted on the intranet or an email sent thanking participants for their time and announcing when the results will be published.

Once the results have been analysed and action plans agreed, these should be published as described in the earlier communications.

    c) Sample sizes

    Recommended sample sizes

If not surveying all employees/customers, the recommended sample sizes from appropriate bodies should be considered, e.g. MORI or ITIL (ITIL recommends 10%).

Assuming the survey is running across populations of 400+, a minimum sample should be 100. Increasing the sample above 1,000 is unlikely to alter the conclusions of the survey. However, it might be necessary to survey all staff for reasons of inclusivity.

Using market research industry practice, the number of respondents which will form a representative sample can be calculated. To obtain a quick summary based on size, the Sample Size Calculator (see link below) may be used to calculate the sample:

    http://www.surveysystem.com/sscalc.htm

Choosing a 95% confidence level with a 5% confidence interval conforms to industry practice.

The sample size will need to be increased to account for non-responses. The percentage response will need to be an assumption the first time it is calculated. A rule of thumb might assume a 50% to 60% response.
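The standard market-research arithmetic behind such calculators can be sketched as follows: a 95% confidence level (z = 1.96), a 5% confidence interval, a worst-case proportion of 0.5, and a finite population correction, with the result then inflated for the assumed response rate. The function names are illustrative:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Respondents needed at the given confidence level and interval."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

def invitations_needed(population, expected_response=0.5):
    """Inflate the sample to allow for non-responses (rule of thumb: 50-60%)."""
    return math.ceil(sample_size(population) / expected_response)

print(sample_size(400))         # 197 respondents for a population of 400
print(invitations_needed(400))  # 394 invitations assuming a 50% response rate
```

Note how quickly the required sample flattens out: for a population of 100,000 the same calculation needs only 383 respondents, consistent with the observation that samples above 1,000 rarely change the conclusions.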

    Rotation

Consideration might be given as to how the sample could be rotated, to ensure that when the next survey is carried out the same sample of users is not selected.
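One simple way to implement such rotation is to exclude everyone sampled in the previous round before drawing the next sample. A sketch, assuming only the immediately preceding round is tracked (a real implementation might track several rounds):

```python
import random

def rotate_sample(all_users, previous_sample, k, seed=None):
    """Draw k users at random from those not in the previous round's sample."""
    eligible = [u for u in all_users if u not in set(previous_sample)]
    if k > len(eligible):
        raise ValueError("not enough unsurveyed users; widen the pool")
    return random.Random(seed).sample(eligible, k)

users = [f"user{i}" for i in range(400)]
round1 = rotate_sample(users, [], 100, seed=1)
round2 = rotate_sample(users, round1, 100, seed=2)
assert not set(round1) & set(round2)  # no user is surveyed in consecutive rounds
```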

    d) Frequency and timing

The frequency and timing of surveys should be considered to ensure that there are no clashes with other surveys or events, and that the users do not suffer survey fatigue. The following factors should be borne in mind:

    Other surveys being conducted, e.g. employee satisfaction surveys

    IT events, e.g. new releases

    Other events, e.g. redundancies announced, school holidays

    Requirement for a regular year-on-year comparison.


    e) Analysis and actions

    Output

    The results can be summarised in numbers and/or percentages. It is useful to use both.

They can be exported into bar charts and tables to highlight certain areas. A report can then be compiled highlighting strengths and improvement areas.

Where there are issues, drilling down will determine as much information as possible, e.g. Which unit? / Which application? / Which grade of staff?
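This kind of drill-down can be sketched with the standard library alone; the record layout (unit, score out of ten) is an illustrative assumption:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses: (unit, satisfaction score out of ten).
responses = [
    ("Finance", 8), ("Finance", 7), ("HR", 4),
    ("HR", 3), ("Operations", 9), ("Operations", 8),
]

# Group scores by unit, then summarise overall and per unit as percentages.
by_unit = defaultdict(list)
for unit, score in responses:
    by_unit[unit].append(score)

overall = mean(score for _, score in responses) * 10
per_unit = {unit: mean(scores) * 10 for unit, scores in by_unit.items()}
weakest = min(per_unit, key=per_unit.get)

print(f"Overall satisfaction: {overall:.0f}%")  # Overall satisfaction: 65%
print(f"Improvement area: {weakest}")           # Improvement area: HR
```

The same grouping can be repeated by application or grade of staff to locate where a problem sits.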

    The results should be analysed and reported to the stakeholders as agreed.

Drill-down information for stakeholders should be available as required, particularly where there are problem areas.

    Service improvement plan

An improvement plan should be compiled and communicated to the employees/customers and other organisation team members.

An update on the implementation of the improvement plan should be communicated regularly.

    Validation

The results should be checked against other information available, such as a supplier performance index, to ensure that results are not out of step.

    f) Review

As with any project, a post-survey review is useful to determine what has been learnt and what needs to be done.