The Kirkpatrick Training Assessment Model
The Kirkpatrick model of training evaluation, which remains a reference today, distinguishes four levels of assessment:
- Reaction: to what extent participants appreciated the training and found it stimulating, engaging, and relevant.
- Learning: to what extent participants acquired the expected knowledge and skills by the end of the training.
- Behavior: to what extent participants put into practice, in their day-to-day work, what they learned during the training.
- Results: to what extent the training has had a measurable positive effect on the operation of the organization or company.
Kirkpatrick's Level 1
Level 1, reaction, is traditionally handled by a minimal assessment system, sometimes still relying on paper surveys. It is quite simply a matter of asking trainees how they felt about the different aspects of the training.
Kirkpatrick's Level 2
Level 2, learning, is a measure of knowledge before and after the training. It is often either forgotten, for lack of appropriate tools, or, instead of measuring real learning, participants are simply asked whether they feel they have learned what they should have during the training. That feeling is important, but it does not replace an actual measurement.
Levels 1 and 2 are typically assessed live, that is, immediately after the training. To truly measure knowledge progression, Level 2 requires measurement both before and after the training. The pre-training assessment is also an opportunity to validate the prerequisites, and therefore the appropriateness of the training.
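As an illustration, this before/after comparison can be reduced to a small calculation. The sketch below uses Hake's normalized gain as one possible metric; the data, field names, and choice of metric are assumptions for illustration, not something the Kirkpatrick model prescribes.

```python
# Minimal sketch (hypothetical data): comparing pre- and post-training scores
# to quantify knowledge progression for Kirkpatrick Level 2.

def normalized_gain(pre_score: float, post_score: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain: the share of the possible improvement actually achieved."""
    if max_score - pre_score == 0:
        return 0.0  # trainee was already at the ceiling; no room left to progress
    return (post_score - pre_score) / (max_score - pre_score)

# Hypothetical pre/post scores for a handful of trainees (out of 100).
results = {
    "trainee_a": (45, 80),
    "trainee_b": (60, 75),
    "trainee_c": (30, 70),
}

for name, (pre, post) in results.items():
    print(f"{name}: normalized gain = {normalized_gain(pre, post):.0%}")
```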
Kirkpatrick's Level 3
Level 3, behavior, can only be assessed after some time has passed since the training. This involves measuring how far the acquired knowledge has been put to operational use and how professionally useful it has proved. This can be estimated by the trainees themselves or by their manager.
Kirkpatrick's Level 4
Level 4, results, concerns the measurement of a real, economically quantifiable return on investment. A true measurement is generally complex and requires a genuine study. In large organizations structured as comparable entities, it is possible to proceed with a form of “A/B testing”, that is, to compare the operational indicators of an entity which has implemented a training plan with those of a similar entity that has not deployed it.
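As a rough illustration of such a comparison, the sketch below contrasts a hypothetical operational indicator between a trained entity and a comparable control entity. The indicator, the figures, and the use of Welch's t-test are assumptions for illustration; a real study would require far more care.

```python
# Minimal sketch (hypothetical indicators): an "A/B"-style comparison of an
# operational indicator between an entity that received the training plan and
# a comparable entity that did not.
from statistics import mean
from scipy.stats import ttest_ind  # assumes SciPy is available

# Hypothetical monthly indicator values (e.g., error rate per 1,000 operations).
trained_entity = [12.1, 10.4, 9.8, 9.2, 8.7, 8.9]
control_entity = [12.3, 12.0, 11.8, 12.2, 11.5, 11.9]

# Welch's t-test: does the difference in means look larger than chance?
result = ttest_ind(trained_entity, control_entity, equal_var=False)
print(f"mean trained={mean(trained_entity):.1f}, mean control={mean(control_entity):.1f}")
print(f"Welch t-test: t={result.statistic:.2f}, p={result.pvalue:.3f}")
```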
In the absence of an overall measure based on an audit, one option is to evaluate how managers perceive the results.
Manager surveys
As mentioned above, the assessment addressed to the manager of each participant is particularly valuable. The manager is perhaps best placed to assess the extent to which the training has had an impact at the “behavior” level, that is, whether it has brought an effective improvement in daily working methods. This is Kirkpatrick Level 3.
The difficulty most often encountered is identifying the manager in a systematic way and obtaining an email address or mobile number to which a survey can be sent. This need must be taken into account during the registration process, especially since the manager is not always the person who places the training order.
But this is practically the only difficulty: once the registration process includes the entry of the manager's contact details, the entire follow-up can be automated, requiring no additional effort and bringing significant benefit.
If a survey among managers is put in place, it should take place some time after the end of the training, around two to three months later, and be extremely concise, with perhaps three to five questions. As we have said, the survey has a collateral benefit: it reminds the manager that other colleagues might also benefit from this training.
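To illustrate how this follow-up might be automated, here is a minimal sketch. The data model, field names, question wording, and ten-week delay are assumptions consistent with the description above (manager contact captured at registration, a short survey of three to five questions sent roughly two to three months after the session); they are not a prescribed implementation.

```python
# Minimal sketch (hypothetical data model): capturing the manager's contact
# details at registration so a short follow-up survey can be scheduled later.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Registration:
    trainee_email: str
    manager_email: str   # captured at registration, even when the manager
                         # is not the person who placed the training order
    training_end: date

# Hypothetical delay: send the manager survey about ten weeks after the session.
MANAGER_SURVEY_DELAY = timedelta(weeks=10)

# A deliberately short survey: three to five questions at most (hypothetical wording).
MANAGER_SURVEY_QUESTIONS = [
    "Have you observed changes in your collaborator's working methods since the training?",
    "Has the training had a visible impact on your team's results?",
    "Would other members of your team benefit from the same training?",
]

def schedule_manager_survey(reg: Registration) -> tuple[str, date]:
    """Return the recipient and the date on which the survey should be sent."""
    return reg.manager_email, reg.training_end + MANAGER_SURVEY_DELAY

reg = Registration("trainee@example.com", "manager@example.com", date(2024, 3, 15))
recipient, send_on = schedule_manager_survey(reg)
print(f"Send {len(MANAGER_SURVEY_QUESTIONS)} questions to {recipient} on {send_on}")
```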
Trainer surveys
Even more than managers, it is important to ask trainers for their opinion. On the one hand, their point of view on how the training went is essential; on the other hand, since they are most often professionals, they have a very relevant perspective and can naturally compare how a session went with what they are accustomed to. They can assess whether the trainees' level was appropriate and indicate whether they have identified additional training needs among some of them.
Even if we judge the trainer's satisfaction to be less critical than that of the trainees, every training organization is keen to cultivate a pool of training providers it can rely on, and for this it is important that the trainer also be satisfied with each session.
Furthermore, if there has been a clash with one of the trainees, and the incident is mentioned in a trainee's feedback, it is quite legitimate to also have the trainer's view of it.
In short, the trainer survey is of prime importance. Unlike managers, trainers are always clearly identified, and it is not difficult to require a 100% response rate from them.
Surveys and knowledge assessment
It is important to clearly distinguish two very different mechanisms, which can be involved in training evaluation:
- Surveys, which we will sometimes call satisfaction surveys, even though their scope can go well beyond satisfaction alone. They follow a logic of information gathering and include questions that have no correct or incorrect answers.
- Assessment of learning, which aims to obtain a quantitative measure of the benefits of the training in terms of knowledge and skills acquired. Here, each question asked is a small exercise, and each can contribute to an overall score, which serves as a measure of knowledge.
In both cases we speak of “assessment”, and we sometimes talk about “questions” and even “questionnaires”. But using the same words causes confusion. This is why we prefer to:
- Use the term “survey” only for the collection of information, and possibly “survey form” to refer to all the input fields submitted to the trainee.
- Use the terms “questions”, “questionnaires”, and “assessments” only in the context of measuring learning.
The tools and solutions involved are of a very different nature. It still happens that an evaluation system is deployed using only surveys, without any real measurement of acquired knowledge. Our view is that, once a solution allows surveys and assessments to be combined, the feedback becomes much more complete, precise, and objective thanks to the knowledge assessment.
And it is also the only way to truly cover Level 2 of the Kirkpatrick model.
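To make the distinction concrete, here is a minimal sketch of the two mechanisms side by side; the item structures and questions are purely hypothetical and only illustrate the difference between collecting information and producing a knowledge score.

```python
# Minimal sketch (hypothetical structures): a survey item collects information
# and has no correct answer; an assessment item is a small exercise whose
# answers contribute to an overall knowledge score.
from dataclasses import dataclass

@dataclass
class SurveyItem:
    prompt: str
    answer: str          # free text or a rating; never marked right or wrong

@dataclass
class AssessmentItem:
    prompt: str
    answer: str
    correct_answer: str

    @property
    def is_correct(self) -> bool:
        return self.answer.strip().lower() == self.correct_answer.strip().lower()

def knowledge_score(items: list[AssessmentItem]) -> float:
    """Overall score: the share of assessment items answered correctly."""
    return sum(item.is_correct for item in items) / len(items)

survey = [SurveyItem("How relevant was the training to your job?", "Very relevant")]
assessment = [
    AssessmentItem("Which Kirkpatrick level measures learning?", "Level 2", "Level 2"),
    AssessmentItem("Which level concerns behavior on the job?", "Level 4", "Level 3"),
]
print(f"Knowledge score: {knowledge_score(assessment):.0%}")  # -> 50%
```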
Want to know more about training assessment? Download our free whitepaper here!