
© TrainingCheck.com. These materials may be used or customised for non-commercial purposes on the condition that the source is fully acknowledged.

Creating Evaluations at the Different ‘Levels’


Contents

• Introduction
• Participant Reaction
• Learning
• Job Impact
  • Learner View
  • Manager View
• Business Impact


Creating Training Evaluations at the Different ‘Levels’

Note: This guidance has been designed to be read alongside the ‘Planning Your Training Evaluation’ guidance materials, which can also be found in the TrainingCheck Help Centre.

The following sections provide an overview of how to create training evaluations at each of the four levels of the Kirkpatrick training evaluation model. The four levels of the model are:

Level 1 – Participant Reaction

Level 2 – Learning

Level 3 – Job Impact

Level 4 – Business Impact

A full description of the Kirkpatrick model and the levels can be found in the guidance on ‘The TrainingCheck Approach’ within the Help Centre.

Note: While some of the following information on using TrainingCheck to evaluate at each of the levels is necessarily duplicated, there are some important differences between them.


Level 1 - Participant Reaction

Level 1 of the Kirkpatrick training evaluation model attempts to establish whether the conditions were right for learning to take place. This involves capturing participants’ reactions to the training programme, including reactions to its relevance, training methods, trainers, qualification and assessment methods, facilities and administration.

What are the key questions?

The kind of questions that evaluations at this level can seek to answer include:

• Did participants like and enjoy the training?
• Did they consider the training relevant to their own needs and/or the needs of the organisation?
• How did participants perceive the practicability and potential for application of the learning?
• Did they consider it an effective use of their time?
• Were the style, pace, content, delivery methods and materials appropriate?
• Has the training acted as a motivator towards further learning?
• Would participants recommend the training to colleagues?

Which particular evaluation questions you choose will depend on the overall objectives of the evaluation (as mentioned elsewhere, it is vital that the evaluation objectives are closely aligned to both the original training objectives and key business stakeholders’ expectations of the training).

Is evaluating at Level 1 worthwhile?

It should be pointed out that while evaluations at this level are carried out widely, some evaluation experts have questioned the worth of evaluating participant reaction. This is because they believe that getting feedback on, for example, whether the learners enjoyed the programme, will not result in any really useful data about whether the programme was effective.

While it is true that participant reaction evaluations cannot provide an objective measure of the effectiveness of the various elements of the programme, this does not mean that they are not worthwhile. Capturing participants’ views on the training can provide valuable information which can be used to, among other things:

• identify popular courses (ie those that are likely to be well attended) and trainers
• identify any unmet learning needs
• provide clues as to how a training programme may be improved further
• diagnose barriers to learning.

These last two points are especially true when feedback from participant reaction evaluations is viewed alongside evaluation data from the other Kirkpatrick levels. For example, if a level 2 (learning) evaluation shows that learning is not taking place and your level 1 evaluation reveals that participants score all elements of the programme highly except for the training materials, it would be a reasonable assumption that the training materials need to be improved.

What data collection methods can be used?

Usually evaluation of participant reaction is carried out through a questionnaire/survey which participants complete either at the end of the training programme or at specific points during a programme. However, you might also consider using the following to gather evaluation data:

• interviews or focus groups
• capturing other formal/informal verbal reactions to the training (eg through meetings, performance appraisals etc)
• written reports from the participants.

Evaluating Participant Reaction with TrainingCheck

Important: The following provides only a very brief overview of the process of evaluating participant reaction using TrainingCheck. Before carrying out an evaluation for the first time you should also read the ‘Planning Your Training Evaluation’ guidance materials within the Help Centre. You will also find much more detailed Tutorials and FAQs on how to create, deploy, analyse and report on evaluations in the Help Centre.

Creating your evaluation

Once you have decided on your key evaluation objectives/questions, you can begin to create your evaluation within TrainingCheck (just click on the ‘Create Evaluation’ button on the ‘Manage Evaluations’ page). You will be able to:

• copy an existing evaluation (eg you can choose to copy the example evaluations provided)
• create a new evaluation from scratch
• choose from the questions within the ‘Participant Reaction’ sections of the Question Library
• copy individual questions from existing evaluations
• create your own questions.

Despite the large number of questions to choose from, the evaluations you create should generally be short (ie between 5 and 15 questions) as longer evaluations tend to have much lower response/completion rates. Therefore question choice is very important.

Once you have created your evaluation it is advisable to pilot it before deploying it with the target group.


Deploying your evaluation

Evaluation respondents should, where possible, include all of the training programme participants. Where there are a very large number of learners you may consider using sampling techniques (you can find guidance on sampling techniques within the ‘Planning Your Training Evaluation’ section of the TrainingCheck Help Centre).
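If you do use sampling, a simple random sample is usually the most straightforward approach. For illustration only, the following minimal Python sketch draws such a sample (the participant list and sample size are hypothetical):

    import random

    # Hypothetical list of 500 training participants
    participants = ["learner_%03d" % i for i in range(1, 501)]

    # Draw a simple random sample of 50 learners to invite to the evaluation
    random.seed(42)  # fixed seed so the same sample can be reproduced
    sample = random.sample(participants, k=50)
    print(sample[:5])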

At this level evaluations should usually be deployed within two weeks of the completion of the training programme to ensure that the respondents’ recollection of the training is relatively fresh.

The deployment options (via the ‘Collect Responses’ page) are to:

• send your evaluation to contacts in your Address Book
• place a link to your evaluation in an email using your usual email program (eg Outlook)
• place a link to your evaluation on a web page
• launch the evaluation immediately so that you can manually add data directly into it (‘Add Data Manually’ button) - useful, for example, if you have collected evaluation data through paper based evaluations, interviews, or focus groups.

Note: Evaluations can also be printed so that they can be completed manually. Responses can then be uploaded to TrainingCheck via the ‘Add Data Manually’ function.

You can deploy the evaluation as many times and using as many of the different methods as you wish.

When deploying the evaluation it will be important to consider the timing to ensure a good response rate. For example, does your evaluation coincide with other surveys, or is it a particularly busy time for respondents? Offering respondents the possibility of winning a prize of some kind (eg a gift voucher) or some other incentive for completing the evaluation can often significantly increase response rates.

Confidentiality

Learners are unlikely to give full and honest feedback if they feel that there may be a risk that the information they provide could be used against them in any way.

For this reason it is recommended that participant reaction evaluations are not deployed using the Address Book method, as this links the learner’s personal information (email address, name etc) with their response. Instead, evaluations should be deployed to learners using the ‘Email Link’ or ‘Web Page Link’ options. Learners should also not be asked to enter information on evaluations which could identify them.


Where it is essential to link learners with their responses (eg if you ask learners to list future training requests) you may consider using a coded system, eg allocating a number to each learner, which only you are aware of. It is important to ensure that learners are aware of any measures you have taken to ensure confidentiality. These can be described in your evaluation introduction email.
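For illustration only, a coding system of this kind can be as simple as the following Python sketch. The learner details and file name are hypothetical, and the key file must be kept private to the evaluator:

    import csv
    import random

    # Hypothetical learner contact details
    learners = ["alice@example.com", "bob@example.com", "carol@example.com"]

    # Allocate each learner a unique random code known only to the evaluator
    codes = random.sample(range(1000, 10000), k=len(learners))
    key = dict(zip(learners, codes))

    # Keep this key file private; only the codes appear on completed evaluations
    with open("learner_codes_private.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["learner", "code"])
        writer.writerows(key.items())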

Data analysis and reporting

Once you have collected the data from respondents or manually entered data, you can view the responses by clicking on the ‘Analyse’ icon. You will be able to filter the responses according to criteria you choose, and download responses as CSV (Excel) or XML files. You will also be able to create custom reports (via the ‘My Reports’ page) and share these with key stakeholders.
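If you prefer to work on exported data outside TrainingCheck, a downloaded CSV file can be filtered with a few lines of code. The following Python sketch is illustrative only: the file and column names are hypothetical, as the actual export format is not described here.

    import csv

    # Hypothetical export: one row per response, with a 'course' column
    # and a numeric 'overall_rating' column
    with open("responses.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Filter responses for one course and average the overall rating
    course_rows = [r for r in rows if r["course"] == "Induction Training"]
    ratings = [int(r["overall_rating"]) for r in course_rows]
    if ratings:
        print("Average overall rating: %.1f" % (sum(ratings) / len(ratings)))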

You may want to discuss the results of the evaluation with the participants, the trainer(s) and participants’ managers. This can be an effective way of engaging others in, and identifying potential barriers to, ensuring the transfer of learning to the workplace.

As with all other levels of evaluation, it is vital that the outcomes of the evaluation at this level are acted on. Not doing so will, at a minimum, undermine the credibility of the evaluation process.


Level 2 – Learning

Level 2 of the Kirkpatrick training evaluation model involves evaluating how far training programme participants have improved their knowledge and skills as a result of the training.

What are the key questions?

The key questions that evaluations at this level can seek to answer include:

• Did the participants learn what was intended to be taught?
• What is the extent of advancement or change in the participants after the training?
• Were there any particular barriers to or promoters of learning?

When combined with evaluations at the other Kirkpatrick levels, measuring impact at level 2 can help you to make judgments and recommendations about the relevance and quality of the training programme and the suitability of the assessments, tests and/or qualifications used as part of that programme. It can also provide key diagnostic information where there has been a breakdown in the process of transferring learning to the workplace. For example, in the case where there has been no observable impact of a programme on workplace performance (level 3), data from level 2 evaluations can help you to track whether, and to what extent, this is due to the amount and type of learning that actually occurred.

What data collection methods can be used?

Evaluation at this level is typically carried out using assessments or tests before and after the training. Often a relevant person (eg the trainer, a union learning representative or a learning and development officer) will provide results data from assessments, tests and/or qualifications to the evaluator.

Other data collection methods which you might consider using include:

• interviews with and observation of training participants
• participant self-assessments
• group and peer assessments.

Note: You can use TrainingCheck to create simple training assessments and tests using the core evaluation creation tools, but please be aware that dedicated assessment tools provide more comprehensive options and in some cases will be more appropriate to use.

To help ensure the relevance and validity of the evaluation results, it is important to make certain that any assessments, tests or qualifications which are to be used as part of the evaluation process are aligned as closely as possible to the original training objectives. Among other things, this will avoid the issue of the success of the training being judged in terms that were not defined at the outset. In addition, reliable, clear scoring and measurements should be established in order to reduce the risk of assessment inconsistency.


Calculating the ‘Learning Gain’

As part of the data collection and analysis process at this level, you may wish to calculate the ‘learning gain’ from training. This shows the improvement between the pre- and post-learning assessment scores. It can be calculated using the following formula:

Learning Gain (%) = (Post-learning Score − Pre-learning Score) ÷ (Maximum Score − Pre-learning Score) × 100

For example, if the pre-learning score was 50, the post-learning score is 80, and the maximum score is 100, then you get the following:

(80 − 50) ÷ (100 − 50) × 100 = 30 ÷ 50 × 100 = 60%

This shows that there was a 60% learning gain.
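This calculation is easy to automate if you are processing scores for many learners. A minimal Python sketch of the formula above, using the figures from the worked example:

    def learning_gain(pre, post, maximum):
        """Learning gain as a percentage of the improvement that was possible."""
        if maximum == pre:
            raise ValueError("pre-learning score is already the maximum")
        return (post - pre) / (maximum - pre) * 100

    # Worked example from the text: pre 50, post 80, maximum 100
    print(learning_gain(50, 80, 100))  # 60.0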

Evaluating Learning with TrainingCheck

Important: The following provides only a very brief overview of the process of evaluating learning using TrainingCheck. Before carrying out an evaluation for the first time you should also read the ‘Planning Your Training Evaluation’ guidance materials within the Help Centre. You will also find much more detailed Tutorials and FAQs on how to create, deploy, analyse and report on evaluations in the Help Centre.

Creating your evaluation

When you are ready to create your evaluation within TrainingCheck you will be able to:

• copy an existing evaluation (eg you can choose to copy the example evaluations provided)
• create a new evaluation from scratch
• choose from the questions within the ‘Learning’ sections of the Question Library
• copy individual questions from existing evaluations
• create your own questions.

The evaluations you create should generally be short (ie between 5 and 15 questions) as longer evaluations tend to have much lower response/completion rates. Therefore question choice is very important.

Once you have created an evaluation it is advisable to pilot it before deploying it with the target group.


Deploying your evaluation

Your evaluation can be completed by participants themselves (eg assessments/tests or self-evaluation) and anyone else who has access to information about learning assessments, tests and/or qualifications outcomes, e.g. the trainer, a union learning representative, or a learning and development officer.

At this level evaluations can be deployed at any time during the evaluation timeframe. However, it is important to bear in mind that in order to reliably attribute the outcomes of assessments, tests and qualifications directly to the training programme they must be undertaken and recorded soon after the end of the training programme.

The deployment options (via the ‘Collect Responses’ page) are to:

• send your evaluation to contacts in your Address Book
• place a link to your evaluation in an email using your usual email program (eg Outlook)
• place a link to your evaluation on a web page
• launch the evaluation immediately so that you can manually add data directly into it (‘Add Data Manually’ button) - useful, for example, if you have collected evaluation data through paper based evaluations, interviews, or focus groups.

You can deploy the evaluation as many times and using as many of the different methods as you wish.

It will be important to consider the timing to ensure a good response rate. For example, does your evaluation coincide with other surveys, or is it a particularly busy time for respondents?

Data analysis and reporting

Once you have collected the data from respondents or manually entered data, you can view the responses by clicking on the ‘Analyse’ icon. You will be able to filter the responses according to criteria you choose, and download responses as CSV (Excel) or XML files. You will also be able to create custom reports (via the ‘My Reports’ page) and share these with key stakeholders.

You may want to discuss the results of the evaluation with the learners, the trainer(s) and learners’ managers. This can be an effective way of engaging others in, and identifying potential barriers to, ensuring the transfer of learning to the workplace.

As with all other levels of evaluation, it is vital that the outcomes of the evaluation at this level are acted on. Not doing so will, at a minimum, undermine the credibility of the evaluation process.


Level 3 – Job Impact

Level 3 of the Kirkpatrick training evaluation model involves evaluating the extent to which the training participants have applied their new knowledge and skills back to their work and what effect this has had on their work performance.

It goes without saying that evaluating at this level is important if it was an aim of the training programme to improve workplace performance in some way. More broadly speaking, it can help (along with level 4 evaluations) in establishing the ‘business value’ that training has added to an organisation.

What are the key questions?

The key questions that evaluators seek to find answers to at this level include:

• To what extent were knowledge and skills acquired through the training programme used in the workplace?
• Were there noticeable and measurable changes in the activity and performance of the training participants when back in their workplace?
• Was the change in performance and new level of knowledge or skills sustained?
• Were there any particular barriers to or promoters of the application of learning to the workplace?
• What influence have factors such as the workplace environment, the ‘learning culture’, the support of managers and supervisors, target setting and the availability of on-the-job support had on the application of learning?

When combined with evaluations at other levels (particularly levels 2 and 4), level 3 evaluations can help to evidence the link between learning taking place and changes to individual and business performance. Similarly, where these changes have not materialised, evidence from evaluations at this level can play an important role in diagnosing factors which may be limiting the effectiveness of training programmes.

Choosing your evaluation questions

When choosing your specific evaluation questions at this level you will need to consider the following:

• Which particular job behaviours and/or competencies are expected to change as a result of the training programme? (For more information on competencies see the ‘Note on Competencies’ box below).

• Is data on these job behaviours and/or competencies currently collected (eg through performance management appraisals)? If not, then where possible the initial data should be collected before the programme begins. If this is not possible, then key stakeholders should establish an estimate of pre-programme performance.

• When, and over what timescale, will changes to the job behaviours and/or competencies be measured? You will need to bear in mind that in most cases it takes at least 3 months before the impact of learning on workplace performance becomes evident.


• What factors other than the training programme might influence changes to each job behaviour and/or competency? For example, have managers and supervisors supported learners in applying their new skills, have there been changes to organisational structures, have performance incentive schemes been introduced, or have there been other external environmental influences?

Where possible each of the above questions should be discussed and, where relevant, agreed with key business stakeholders in advance of the training development. The conclusions can then be integrated within both training and evaluation objectives and plans. Stakeholders should also agree how success will be judged and how the evaluation results will be used.

These actions will help to ensure that the evaluation is aligned to the organisation’s strategic objectives, is feasible to carry out and can also help in gaining support and engagement from training participants and other stakeholders during the evaluation process. They will also help to motivate managers to support learners to transfer their learning to the workplace, which should in turn have a positive impact on the training outcomes.

Calculating ‘Competency Gains’

As part of your evaluation at this level you may wish to calculate ‘competency gains’. This shows how effective the training programme has been in improving a specified competency. It can be calculated using numerical ratings on changes to competencies using the following formula:

Competency Gain (%) = (Most Recent Post-learning Competency Rating − Pre-learning Competency Rating) ÷ (Maximum Rating − Pre-learning Competency Rating) × 100

For example, using a 10-point scale, if the pre-learning rating was 5 and the post-learning rating is 8, then you get:

(8 − 5) ÷ (10 − 5) × 100 = 3 ÷ 5 × 100 = 60%

This shows that the training programme was 60% effective in improving the competency.
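As with the learning gain, this is straightforward to calculate programmatically. A minimal Python sketch, using hypothetical ratings on a 10-point scale:

    def competency_gain(pre, post, maximum=10):
        """Competency gain as a percentage of the improvement that was possible."""
        return (post - pre) / (maximum - pre) * 100

    # Hypothetical pre- and post-learning ratings for three competencies
    ratings = {"planning": (5, 8), "reporting": (6, 7), "teamwork": (4, 9)}
    for competency, (pre, post) in ratings.items():
        print("%s: %.0f%% gain" % (competency, competency_gain(pre, post)))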

Note on Job Competencies

The following are commonly used groupings of job competencies:

The technical skills and competencies needed to carry out the job. These are, for the most part, unique to each business unit in the company. For example, there are technical competencies needed by production line employees (eg measuring, grading, assembling and reporting) and different competencies needed by ICT technicians (eg installing, maintaining and upgrading hardware). There may be some overlap of technical competencies across business units.


The personal management skills and competencies needed to manage both self and others, eg planning, organising, implementing, monitoring and reviewing.

The emotional and behavioural competencies needed to carry out a job, eg self-confidence, motivation, conscientiousness, teamwork and collaboration.

It is important to identify (eg through performance management processes) which competencies need development before delivering the training programme and any subsequent evaluation.

Note: It is not advisable to create new job competencies for the purposes of an evaluation without guidance from an expert.

What data collection methods can be used?

Evaluation at this level is typically carried out using questionnaires (evaluations) aimed at training participants and/or their managers, and it usually begins at least 3 months after the training programme has been completed. This allows time for the learning to be applied to and impact on the workplace. You may also consider using interviews and/or focus groups to collect evaluation data.

Evaluations focused on training participants and managers can be carried out independently of each other or they can be combined, via e.g. 360-degree / multi-rater evaluations, to generate a more comprehensive picture of the impact of a training programme on job performance.

If budget and resources allow it, you may wish to consider conducting two or more evaluations at level 3. These should ideally be at regular intervals (eg 3 and 6 months) after the training programme has been completed. Doing so will help to evidence that any impact was actually due to the training programme rather than any chance variations, and will also help you to develop a clearer picture of the impact of the programme on performance over time.

Learner View and Manager View

To help make evaluation at level 3 easier within TrainingCheck, two separate but closely related topic areas have been created within the Question Library:

• Job Impact (Learner View) - aimed at capturing data from learners on perceived changes to their own performance and influencing factors.

• Job Impact (Manager View) - aimed at capturing data from learners’ managers on observed changes to job performance and influencing factors.

Guidance on using TrainingCheck to evaluate these areas follows on the next pages.


Level 3 – Job Impact (Learner View)

Level 3 – Job Impact (Learner View) evaluation focuses on the impact of the training programme on job performance from the learner’s own point of view.

It is important to note that learners’ self-reports about their job performance may lack the validity and reliability of more objective measures, eg quantitative data provided by managers. They may therefore be viewed by some as not having any real value in the training evaluation process. However, in practice they can be an important element in building a complete picture of the impact of the training programme. The feedback provided can often lead to improvements in the training programme and the transfer of learning, and to the identification of further training needs.

Evaluating Job Impact (Learner View) with TrainingCheck

Important: The following provides only a very brief overview of the process of evaluating job impact using TrainingCheck. Before carrying out an evaluation for the first time you should also read the ‘Planning Your Training Evaluation’ guidance materials within the Help Centre. You will also find much more detailed Tutorials and FAQs on how to create, deploy, analyse and report on evaluations in the Help Centre.

Creating your evaluation

When you are ready to create your evaluation within TrainingCheck you will be able to:

• copy an existing evaluation (eg you can choose to copy the example evaluations provided)
• create a new evaluation from scratch
• choose from the questions within the ‘Job Impact (Learner View)’ sections of the Question Library
• copy individual questions from existing evaluations
• create your own questions.

Despite the large number of questions to choose from, the evaluations you create should generally be short (ie between 5 and 15 questions) as longer evaluations tend to have much lower response/completion rates. Therefore question choice is very important.

If you are also planning to carry out an evaluation of managers’ views of changes to learners’ job performance, it is advisable to ensure that the topics of the evaluations mirror each other as far as possible so that you can compare the responses.
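Where mirrored topics are rated on the same scale in both evaluations, the paired results can be compared directly once collected. A minimal Python sketch with hypothetical topics and average ratings:

    # Hypothetical average ratings (1-5 scale) for topics that appear in both
    # the learner-view and manager-view evaluations
    learner_view = {"applies new skills": 4.2, "works more independently": 3.8}
    manager_view = {"applies new skills": 3.6, "works more independently": 3.9}

    # Flag topics where learners' and managers' views diverge noticeably
    for topic, learner_rating in learner_view.items():
        gap = learner_rating - manager_view[topic]
        if abs(gap) >= 0.5:
            print("%s: learner/manager gap of %+.1f" % (topic, gap))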

Once you have created an evaluation it is advisable to pilot it before deploying it with the target group.

Deploying your evaluation

Your evaluation should, where possible, be completed by all the learners on the relevant training programme. Where there are a very large number of learners you might also consider using sampling techniques (you can find guidance on sampling techniques within the ‘Planning Your Training Evaluation’ section of the TrainingCheck Help Centre).

At this level evaluations should usually be deployed a suitable period of time (usually about 3 months) after the completion of the training programme to allow time for learning to be transferred to the workplace. However, training programmes with a very narrow skills development focus or which concentrate on particular work processes can be evaluated sooner than this.

The deployment options (via the ‘Collect Responses’ page) are to:

• send your evaluation to contacts in your Address Book
• place a link to your evaluation in an email using your usual email program (eg Outlook)
• place a link to your evaluation on a web page
• launch the evaluation immediately so that you can manually add data directly into it (‘Add Data Manually’ button) - useful, for example, if you have collected evaluation data through paper based evaluations, interviews, or focus groups.

Please note: Evaluations can be printed so that they can be completed manually. Responses can then be uploaded to TrainingCheck via the ‘Add Data Manually’ function.

You can deploy the evaluation as many times and using as many of the different methods as you wish.

When deploying your evaluation it will be important to consider the timing to ensure a good response rate. For example, does your evaluation coincide with other surveys, or is it a particularly busy time for respondents? Offering learners a prize of some kind (eg a gift voucher) or some other incentive for completing the evaluation can increase response rates significantly.

Confidentiality

Learners are unlikely to give full and honest feedback if they feel that there may be a risk that the information they provide could be used against them in any way.

For this reason it is recommended that evaluations aimed at learners are not deployed using the Address Book method, as this links the learner’s personal information (email address, name etc) with their response. Instead, evaluations should be deployed to learners using the ‘Email Link’ or ‘Web Page Link’ options. Learners should also not be asked to enter information on evaluations which could identify them. Where it is essential to link learners with their responses (eg if you ask learners to list future training requests) you may consider using a coded system, eg allocating a number to each learner, which only you are aware of.


It is important to ensure that learners are aware of any measures you have taken to ensure confidentiality. These can be described in your evaluation introduction email.

Data analysis and reporting

Once you have collected the data from respondents or manually entered data, you can view the responses by clicking on the ‘Analyse’ icon. You will be able to filter the responses according to criteria you choose, and download responses as CSV (Excel) or XML files. You will also be able to create custom reports (via the ‘My Reports’ page) and share these with key stakeholders.

You may want to discuss the results of the evaluation with learners, the trainer(s) and learners’ managers. This can be an effective way of engaging others in, and identifying potential barriers to, ensuring the transfer of learning to the workplace.

As with all other levels of evaluation, it is vital that the outcomes of the evaluation at this level are acted on. Not doing so will, at a minimum, undermine the credibility of the evaluation process.


Level 3 – Job Impact (Manager View)

Level 3 – Job Impact (Manager View) evaluation focuses on the impact of the training programme on learners’ job performance from their managers’ point of view.

‘Manager’ here means the person the learner immediately reports to. This could be, for example, a departmental manager, supervisor, team leader or similar. The main thing is that the manager is in a position to directly observe and assess the learner’s application of their new skills or knowledge.

As mentioned in the introduction to level 3 and elsewhere, it is crucial to get engagement from managers in the planning of training development, delivery and evaluation processes as early as possible, and for all stakeholders to agree how success will be judged and how the evaluation results will be used. As part of this you will need to agree which competency and performance measures will be used in the evaluation. These actions should both encourage managers to participate in the evaluation process and also, crucially, motivate them to support the application of learning in the workplace so that selected performance measures will show improvement.

Individual assessments of changes to job performance should be closely aligned to the original training objectives and, where relevant, integrate existing competency and/or performance criteria. In addition, they should where possible aim to reduce managers’ subjective judgement (where it is not specifically sought), as this will affect the reliability and consistency of results.

The data you collect on changes to job performance can include competency ratings, performance appraisal reports and/or general observations about behavioural changes of learners. When using pre-learning competency or performance data in the evaluation, this should ideally have been collected over a long enough time period to be able to determine whether any change to it is likely to be due to seasonal and/or cyclical variations.

Note: Do not use employee performance appraisal reports as a measure of change to job performance unless the appraisal system is regarded as an effective tool for performance management within the organisation.

Evaluating Job Impact (Manager View) with TrainingCheck

Important: The following provides only a very brief overview of the process of evaluating job impact using TrainingCheck. Before carrying out an evaluation for the first time you should also read the ‘Planning Your Training Evaluation’ guidance materials within the Help Centre. You will also find much more detailed Tutorials and FAQs on how to create, deploy, analyse and report on evaluations in the Help Centre.


Creating your evaluation

When creating your evaluation you will be able to:

• copy an existing evaluation (eg you can choose to copy the example evaluations provided)
• create a new evaluation from scratch
• choose from the questions within the ‘Job Impact (Manager View)’ sections of the Question Library
• copy individual questions from existing evaluations
• create your own questions.

The evaluations you create should generally be short (ie between 5 and 15 questions) as longer evaluations tend to have much lower response/completion rates. Therefore question choice is very important.

If you are also planning to carry out an evaluation of learners’ own views of changes to their job performance, it is advisable to ensure that the topics of the evaluations mirror each other as far as possible so that you can compare the responses.

Note: If the job behaviours and/or competencies you are measuring vary substantially across different departments/work areas, you may wish to create a separate evaluation for each relevant manager.

Once you have created an evaluation it is advisable to pilot it before deploying it with the target group.

Deploying your evaluation

The evaluation should, where possible, be completed by all managers who are responsible for learners who have attended the relevant training programme.

At this level evaluations should usually be deployed a suitable period of time (usually at least 3 months) after the completion of the training programme to allow time for learning to be transferred to the workplace. However, training programmes with a very narrow skills development focus or which concentrate on particular work processes can be evaluated sooner than this.

The deployment options (via the ‘Collect Responses’ page) are to:

• send your evaluation to contacts in your Address Book
• place a link to your evaluation in an email using your usual email program (eg Outlook)
• place a link to your evaluation on a web page
• launch the evaluation immediately so that you can manually add data directly into it (‘Add Data Manually’ button) - useful, for example, if you have collected evaluation data through paper based evaluations, interviews, or focus groups.


Please note: Evaluations can be printed so that they can be completed manually. Responses from printed evaluations can be uploaded to TrainingCheck via the ‘Add Data Manually’ function.

You can deploy the evaluation as many times and using as many of the different methods as you wish.

When deploying the evaluation it will be important to consider the timing to ensure a good response rate. For example, does your evaluation coincide with other surveys, or is it a particularly busy time for respondents?

Data analysis and reporting

Once you have collected the data from respondents or manually entered data, you can view the responses by clicking on the ‘Analyse’ icon. You will be able to filter the responses according to criteria you choose, and download responses as CSV (Excel) or XML files. You will also be able to create custom reports (via the ‘My Reports’ page) and share these with key stakeholders.

You may wish to discuss the results of the evaluation with learners, the trainer(s), and learners’ managers. This can be an effective way of engaging others in, and identifying potential barriers to, ensuring the transfer of learning to the workplace.

As with all other levels of evaluation, it is important that the outcomes of the evaluation at this level are acted on. Not doing so will, at a minimum, undermine the credibility of the evaluation process.


Level 4 – Business Impact

Level 4 is the final level of the Kirkpatrick model of training evaluation. It seeks to measure changes in business performance that have come about through learners applying their new learning to the workplace. For organisation leaders in particular, this may be regarded as where the bottom-line value of training lies.

However, gathering evaluation data at this level is a complex task which involves measuring the impact of training on business performance measures as reported by learners’ managers and other key stakeholders. The specific performance measures used will depend on the individual organisation and, crucially, on the agreed objectives and expectations of outcomes from the training programme. Examples of measures include changes to:

• productivity/output rates
• sales volumes
• employee turnover rates
• number of customer complaints
• wastage rates
• non-compliance
• rate of accidents per year
• number of sick-absence days per month
• number of cancelled training days/sessions
• recruitment costs.

In many organisations some or all of these will be in place alongside other performance measures within existing management systems and reporting.

Note: Financial data collected in response to level 4 questions can also be used as a basis for calculating the Return on Training Investment (see also the guidance on ‘Calculating Return on Investment’ in the TrainingCheck Help Centre).

Taking account of influencing factors

It is often commented that isolating the impact of training on business performance is difficult. The main reason for this is that there are likely to be a number of other factors which will have an influence on how the organisation performs in any one area at any given time. For example, the organisation may have introduced new working practices or performance incentives, or there may be new competitors, legislative or environmental factors that influence performance.

Within TrainingCheck we have taken a pragmatic approach to this issue by creating example questions and evaluations which ask key stakeholders to identify any major influencing factors on performance changes and to estimate the relative influence of the training in relation to these. The proportion of influence ascribed to the training programme can then be taken into account when considering the evaluation results and be applied to any quantitative/financial data.

For example, if gains in business performance have resulted in a financial benefit of £5,000, and it is estimated that the training programme is responsible for 50% of the change in performance (the other 50% being due to other influencing factors), then the total financial benefit attributable to the training can be calculated as £2,500 (ie £5000 x 50%). It is this figure that can then be used, for example, as part of a Return on Training Investment calculation (see the guidance on Calculating Return on Training Investment).

While this is clearly not an exact science, relying on the estimates of those who are in the best position to make them will result in data that is likely to be taken seriously by relevant decision makers.
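For reference, the adjustment in the worked example above can be expressed as a simple calculation. A minimal Python sketch:

    def attributable_benefit(total_benefit, training_influence):
        """Share of a financial benefit attributable to the training programme.

        training_influence is the stakeholders' estimate (0.0 to 1.0) of how
        much of the performance change was due to the training rather than
        other influencing factors.
        """
        return total_benefit * training_influence

    # Worked example from the text: £5,000 benefit, 50% attributed to training
    print(attributable_benefit(5000, 0.50))  # 2500.0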

Is it always advisable to evaluate at level 4?

It may not always be advisable to evaluate the impact of training at this level. The decision whether to do so will ultimately depend on factors such as the length and type of training and, most importantly, on what the training was designed to achieve. For example, for some compliance training it may be prudent to simply measure subsequent compliance rates, while in other cases (eg short, low-cost programmes) it may be possible to get a great deal of the information you need by focusing on different kinds of satisfaction and learning measures.

As mentioned elsewhere, there may also be practical issues with attributing results at this level to training. For example, if an organisation-wide quality improvement initiative has recently been introduced of which the training programme was just one small part, then it makes no sense to try to evaluate the impact of the training programme alone. Instead the impact of the entire quality improvement initiative should be evaluated. The important thing is that, for an evaluation of a training programme at this level to be meaningful, the training programme must be largely separable from other significant influencing factors.

Engaging managers

It is vital from the outset to engage managers in identifying training needs, in establishing expectations regarding training outcomes, in setting training and evaluation objectives and in agreeing what will happen with the evaluation results. Among other things, these actions will help to:

• motivate managers to support learners to transfer their learning to the workplace (and thereby potentially improve business performance)

• ensure managers and other stakeholders are engaged in the evaluation process (this is especially critical when collecting job and business impact data)

• ensure that the evaluation outcomes are meaningful to the organisation at a business/strategic level.


Seeing the big picture

As with the other Kirkpatrick levels, level 4 should not be seen in isolation. Feedback from other levels can provide important data that may either support any conclusions drawn about the linkage between training and business performance, or, if necessary, help to diagnose potential causes where there has been little or none of the expected impact.

Choosing your evaluation questions

When choosing your specific evaluation questions at this level you will need to consider the following:

• Which performance indicators (PIs) and/or other business impact measures are relevant to the training programme? If the training programme development has been linked to specific business needs, then you should be able to identify the relevant performance areas relatively easily. However, as the programme may have a wider impact than intended you may also want to try to capture unexpected impacts by including a variety of other PIs/business impact measures (see the example evaluations and questions provided within TrainingCheck for some ideas).

• Is data on these PIs/business impact measures currently collected for the group, team, department or organisation? If not, then pre-learning data will either need to be collected before the programme begins, or, if this is not possible, then key stakeholders should where possible establish an estimate of pre-learning performance. (Note: When using actual pre-learning data, this should ideally have been collected over a long enough time period to be able to determine whether any changes are likely to be for seasonal and/or cyclical reasons.)

• When, and over what timescale, will changes to the PIs/business impact measures be measured? Bear in mind that in most cases it takes some time (usually at least 3 months) before the impact of training on workplace performance becomes evident.

• What other factors might influence changes to each PI/measure? For example, have new working practices or performance incentives been introduced, are there any new competitors, or are there any legislation or environmental factors that might influence performance? You will need to bear in mind that respondents may be asked to estimate the percentage of influence of these factors on changes to performance.

• Will you want to be able to calculate the Return on Training Investment (ROTI) from the programme? If so, respondents will need to assign financial values to changes in performance where possible.

• What other quantifiable and/or ‘intangible’ (non-quantifiable) benefits might there be as a result of the training programme and how might these be measured or captured?

What data collection methods can be used?

Data collection at level 4 can be carried out through evaluations, interviews, and focus groups involving managers and other stakeholders, and/or through desk research.

If budget and resources allow it, you might consider conducting two or more evaluations at this level. These should ideally be at regular intervals (eg 3 and 6 months) after the training programme has been completed. Doing so will help to evidence that any impact was actually due to the training programme rather than any chance (eg seasonal) variations, and will also help you to develop a clearer picture of the impact of the programme on performance over time.

Evaluating Business Impact with TrainingCheck

Important: The following provides only a very brief overview of the process of evaluating business impact using TrainingCheck. Before carrying out an evaluation for the first time you should also read the ‘Planning Your Training Evaluation’ guidance materials within the Help Centre. You will also find much more detailed Tutorials and FAQs on how to create, deploy, analyse and report on evaluations in the Help Centre.

Creating your evaluation

When creating your evaluation you will be able to:

• copy an existing evaluation (eg you can choose to copy the example evaluations provided)
• create a new evaluation from scratch
• choose from the questions within the ‘Business Impact’ sections of the Question Library
• copy individual questions from existing evaluations
• create your own questions.

The evaluations you create should generally be short (between 5 and 15 questions). Longer evaluations tend to have much lower response/completion rates. Therefore question choice is very important.

Once you have created an evaluation it is advisable to pilot it before deploying it with the target group.

Deploying your evaluation

Evaluation respondents at this level should include anyone who has access to relevant performance data. This can include learners’ managers, line-managers, supervisors, team leaders, Human Resources, Finance, Production departments etc.

At this level evaluations should usually be deployed a suitable period of time (usually about 3 months) after the completion of the training programme to allow time for learning to be transferred to the workplace.

The deployment options (via the ‘Collect Responses’ page) are to:

• send your evaluation to contacts in your Address Book
• place a link to your evaluation in an email using your usual email program (eg Outlook)
• place a link to your evaluation on a web page
• launch the evaluation immediately so that you can manually add data directly into it (‘Add Data Manually’ button) - useful, for example, if you have collected evaluation data through paper based evaluations, interviews, or focus groups.

Please note: Evaluations can be printed so that they can be completed manually. Responses from printed evaluations can be uploaded to TrainingCheck via the ‘Add Data Manually’ function.

You can deploy the evaluation as many times and using as many of the different methods as you wish.

It will be important to consider the timing to ensure a good response rate. For example, does your evaluation coincide with other surveys, or is it a particularly busy time for respondents?

Data analysis and reporting

Once you have collected the data from respondents or manually entered data, you can view the responses by clicking on the ‘Analyse’ icon. You will be able to filter the responses according to criteria you choose, and download responses as CSV (Excel) or XML files. You will also be able to create custom reports (via the ‘My Reports’ and ‘ROTI Calculator’ pages) and share these with key stakeholders.

You may want to discuss the results of the evaluation with learners, the trainer(s), and learners’ managers. This can be an effective way of encouraging, and identifying potential barriers to, the transfer of learning to the workplace.

As with all other levels of evaluation, it is vital that the outcomes of the evaluation at this level are acted on. Not doing so will, at a minimum, undermine the credibility of the evaluation process.