Published on May 7, 2026


Kirkpatrick's Model: Evaluating the Effectiveness of Healthcare Training to Ensure Optimal Quality of Care

In healthcare, initial and continuing professional training is a cornerstone of quality care and patient safety. Whether it involves acquiring up-to-date medical knowledge, developing advanced technical skills, or improving soft skills such as communication and teamwork, the investment in training is substantial. To ensure that this investment pays off and truly contributes to a better healthcare system, it is imperative to evaluate the effectiveness of these training programs. How can we know whether a training program has not only imparted knowledge but also changed behavior and, ultimately, improved patient outcomes? This is where the Kirkpatrick Model comes in: a major reference in the evaluation of training programs.

The Kirkpatrick Model offers a structured framework for evaluating the impact of training at different levels, from simple participant satisfaction to concrete results for the organization and benefits for patients. Based on the sources provided, we will explore this model in depth, examine its application to healthcare training (particularly training delivered via e-learning or blended learning), and illustrate its usefulness through a concrete study conducted within a healthcare system. This guide aims to give healthcare professionals, trainers, and decision-makers the tools for a relevant and comprehensive evaluation of their training programs, with a view to continuously improving the quality and safety of care.

The Importance of Evaluating Healthcare Training

Evaluating training is not a mere formality but a fundamental strategic approach in the healthcare sector. The sources emphasize the need to assess the effectiveness of training programs, whether they involve e-learning or more complex interventions aimed at transforming organizational culture and psychological safety. In an environment where errors can have disastrous consequences, ensuring that staff are not only trained but that this training leads to safer and more effective practices is a top priority.

The healthcare sector is characterized by increasing complexity, rapid advances in medical knowledge and technology, and constant pressure to improve the quality of care while controlling costs. In this context, training programs must not only be relevant but also demonstrate their value. Evaluating training allows us to:

  • Verify whether the learning objectives have been met.
  • Understand how learners responded to the training.
  • Determine whether the training led to a change in clinical practice.
  • Quantify the impact of the training on specific outcomes, including benefits for patients and the organization.
  • Identify the strengths and weaknesses of training programs in order to continuously improve them.
  • Justify investments in training.

The sources indicate that e-learning has drawn inspiration from various fields, including educational sciences, information technology, open and distance learning, simulation, and quality control. This diversity of influences highlights the multifaceted nature of modern healthcare training, which requires robust assessment methods capable of capturing these different dimensions. The World Health Organization (WHO) itself published a systematic review in 2015 on the value of e-learning in the initial training of healthcare professionals, highlighting the growing importance of these methods and, consequently, the need for rigorous evaluation. Although numerous comparative studies have been conducted, particularly in the health sector, the sources note a lack of comparative studies published by French teams on e-learning; this reinforces the importance of adopting internationally recognized evaluation frameworks, such as Kirkpatrick's, to contribute to a robust body of evidence.

Evaluation according to Kirkpatrick's model, which focuses on satisfaction, knowledge acquisition, practice change, and clinical outcomes, is a hierarchical approach adopted by evaluation groups such as BEME (Best Evidence Medical Education) and used in systematic reviews of e-learning program evaluations. The model provides a clear roadmap for assessing the effectiveness of training, from immediate reactions to long-term impact.

Deciphering Kirkpatrick's Model: The Four Levels of Evaluation

The Kirkpatrick Model, developed by Donald Kirkpatrick and later refined by his son James Kirkpatrick, is one of the most widely used training evaluation frameworks in the world. It proposes a hierarchy of four increasing levels, each measuring a different type of training impact. These levels make it possible to evaluate not only what was learned, but also whether the learning was applied and what the final result was. Here are the four evaluation levels according to Kirkpatrick, as summarized in the sources:

Level 1 – Reactions: This level assesses how participants reacted to the training. Did they enjoy it? Were they satisfied? It measures learners' immediate perception of the training experience, including perceived usefulness, interest, and the quality of the learning environment or materials. In e-learning, this may include evaluating the usability of the interface, which matters because a poor interface can discourage participants. Participant satisfaction is often a necessary, though not sufficient, condition for the higher levels: high satisfaction can foster engagement and the motivation to keep learning.

Level 2 – Learning: This level measures what learners have learned by the end of the training session. What knowledge, skills (know-how), and attitudes (soft skills) have been acquired? Have the learning objectives been met? This is the pedagogical evaluation: it quantifies the gain in knowledge or the improvement in skills measurable immediately after the training. The sources mention that knowledge gain is a commonly used criterion for evaluating e-learning. It can be measured through knowledge tests (pre-tests and post-tests), practical exercises, or skills assessments.

Level 3 – Behaviors (or Transfer): This level assesses whether learners apply what they learned in their professional practice. What new professional behaviors have been implemented? The focus is on the application of learning in the workplace. In healthcare, this means observing whether professionals have modified their clinical practices, their communication with patients or colleagues, or their approach to teamwork. The sources mention the assessment of changes in professionals' practices or attitudes, which can be measured through self-assessments of skills and behaviors, direct observation, or reports from supervisors or colleagues. The improvement in practical skills and behaviors compared to a neutral control group has been studied in e-learning programs for various healthcare professions.

Level 4 – Outcomes: This level assesses the impact of the training on patient care or on the organization. What are the benefits for patients? It measures the final results of the training, beyond individual behavior. In healthcare, this can include a reduction in medical errors, improved clinical outcomes for patients, increased process efficiency, patient satisfaction, or organizational indicators such as reduced incident-related costs. Evaluating clinical outcomes for patients is a criterion used in the evaluation of e-learning programs, and the improvement in clinical practices compared to a neutral control group has also been studied.

Kirkpatrick's model is hierarchical because achieving higher levels generally depends on success at lower levels. For example, a dissatisfied participant (Level 1) is unlikely to engage sufficiently to learn (Level 2) or apply what they have learned (Level 3), which would limit the impact on outcomes (Level 4). The adoption of this hierarchy by organizations such as BEME to evaluate medical literature demonstrates its relevance and robustness in the context of evaluating healthcare training. Some studies explicitly use this classification for their comparative analyses.
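To make the hierarchy concrete, here is a minimal sketch in Python of how an evaluation team might encode the four levels and the instruments chosen for each one. The names and instruments are hypothetical illustrations, not taken from the sources:

```python
from dataclasses import dataclass, field

@dataclass
class KirkpatrickLevel:
    """One Kirkpatrick level and the instruments chosen to measure it."""
    number: int
    name: str
    question: str
    instruments: list = field(default_factory=list)

# Hypothetical evaluation plan mirroring the four levels described above.
evaluation_plan = [
    KirkpatrickLevel(1, "Reactions", "Were participants satisfied?",
                     ["end-of-session satisfaction survey"]),
    KirkpatrickLevel(2, "Learning", "Were the learning objectives met?",
                     ["pre-test", "post-test", "skills assessment"]),
    KirkpatrickLevel(3, "Behaviors", "Is the learning applied in practice?",
                     ["impact survey", "supervisor observation"]),
    KirkpatrickLevel(4, "Outcomes", "Did patient or organizational indicators improve?",
                     ["safety audit scores", "incident-report trends"]),
]

for level in evaluation_plan:
    print(f"Level {level.number} - {level.name}: {level.question}")
```

Writing the plan down this way makes the hierarchy explicit: each level is answered by different instruments, and a program is only fully evaluated once all four questions have one.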

Applying the Kirkpatrick Model to E-Learning Program Evaluation

E-learning has become an increasingly widespread training method in the healthcare sector, offering flexibility and accessibility. Evaluating the effectiveness of these programs with Kirkpatrick's model requires adapting the measurement methods to this specific format. The sources provide guidance on how e-learning can be evaluated at each level.

Level 1 – Reactions in e-learning: Satisfaction is generally assessed through online surveys or questionnaires at the end of the program or of each module. These surveys may cover the platform's ease of use, the clarity of the content, interactivity, the relevance of the examples, or the support received. The sources emphasize the importance of interface usability for a positive perception of e-learning: a poor interface can discourage participants, whereas interactive modules can generate greater satisfaction. Qualitative feedback can help identify aspects of the instructional or technical design that need improvement.

Level 2 – Learning in e-learning: Learning is commonly assessed through online tests. Pre-tests and post-tests are a standard method for measuring knowledge gain, and the sources mention their use to assess knowledge improvement within e-learning programs. These tests may consist of multiple-choice or single-answer questions. Skills (know-how) can be assessed through practical exercises integrated into the modules, online simulations, or interactive case studies; attitudes (soft skills) are harder to assess and may require scenarios or judgment questions. The sources show that e-learning can improve both clinical knowledge and skills.

Level 3 – Behaviors in e-learning: Assessing the transfer of learning to the workplace after e-learning can be difficult without structured follow-up. Methods may include self-assessments or participant reports on how they apply what they have learned; observation by supervisors or colleagues can also be used, although this is less common for e-learning alone. Impact surveys conducted a few weeks or months after the training may ask participants whether they have changed their practices or adopted new behaviors. The sources mention assessing changes in practices or attitudes, and comparative studies on e-learning have observed improvements in practical skills and behaviors.

Level 4 – Outcomes in e-learning: Measuring the impact of e-learning on clinical or organizational outcomes requires collecting objective data after the training: patient records, safety incident data, quality-of-care indicators, or financial data related to process efficiency. The sources indicate that evaluating clinical patient outcomes is a relevant criterion and that improvements in clinical practices can be measured against a control group; e-learning programs have improved clinical outcomes compared to a neutral control group. Level 4 evaluation is often the most complex, but also the most significant for demonstrating the strategic value of training. A systematic review from 2002 already evaluated e-learning on the basis of knowledge gains, changes in professional practices or attitudes, and participant satisfaction. Using Kirkpatrick's model to evaluate e-learning programs is therefore an established practice in healthcare training research.
For this guide, some of Kirkpatrick's criteria were retained to evaluate the outcomes of e-learning programs: knowledge assessment, assessment of clinical skills and behaviors, and clinical outcomes for patients. The sources also note that the time spent on online courses is similar to that of face-to-face courses, unless interactive elements are included, in which case the time increases, but so does the learning. "Adapted" programs, which allow students to skip certain modules depending on their level, can even shorten learning time while remaining more efficient. These time considerations matter when designing and evaluating e-learning programs.
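As an illustration of the Level 2 pre-test/post-test approach described above, here is a minimal sketch in Python. The scores are invented (the sources report no individual data); it computes each learner's gain and a paired t-test on the cohort:

```python
# Hypothetical pre/post-test scores (0-100) for eight learners; the paired
# design matches the pre-test/post-test approach described above.
from statistics import mean
from scipy import stats

pre = [45, 52, 38, 61, 47, 55, 42, 50]
post = [78, 85, 70, 90, 82, 88, 75, 80]

gains = [b - a for a, b in zip(pre, post)]
t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test on matched scores

print(f"Mean gain: {mean(gains):.1f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

The paired design is what makes the comparison meaningful: each learner serves as their own control, so the test measures the gain attributable to the module rather than differences between learners.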

Effectiveness of e-learning as assessed by Kirkpatrick, according to the sources

The sources provide a compilation of results from comparative studies evaluating the effectiveness of e-learning training based on various criteria, often aligned with the Kirkpatrick levels. In general, e-learning has a significant effect compared to no intervention. Compared to interventions that do not use the internet, the effect is more varied or smaller.

E-learning vs. lecture-based (in-person) training: The most common comparison is between e-learning and traditional lectures. A systematic review of nine studies involving nursing students or graduates showed an improvement in knowledge and clinical skills with e-learning compared to traditional lectures. However, most studies comparing e-learning and lectures find no significant difference in knowledge acquisition, while some authors even find e-learning programs more effective than similar face-to-face courses. In terms of improving knowledge and clinical skills, face-to-face teaching is considered equivalent to online teaching.

E-learning vs. other delivery methods: Compared to other formats such as print, some studies have found no difference between web and print delivery. However, an online course on pain management showed improved knowledge and management skills compared to the distribution of printed guidelines, and an online course on evidence-based medicine (EBM) was more effective in improving knowledge and clinical examination skills than independent study.

Effect of program duration: The duration or timing of the program (short vs. extended) did not influence the improvement of knowledge, clinical skills, or clinical outcomes. Studies comparing different delivery schedules for modules showed no difference between groups. However, a long-term multi-component program showed no difference in access to clinical guidelines.

[Figure omitted; source: BMJ]

Effect by Module Type:

  • Interactivity: Interactivity, practical exercises, repetition, and feedback improve learning outcomes, and interactive modules can be more effective than traditional teaching methods. However, some authors report better results with non-interactive modules, while others find no significant difference between interactive and text-based modules; results vary depending on the subject matter.
  • Social interactions: Modules with interaction between participants and/or teachers enhance learning outcomes. A program that added discussion and exchange on clinical cases to simple e-learning was more effective in terms of knowledge acquisition.
  • Webcast, videoconference: Face-to-face teaching is just as effective as webcast or videoconference teaching methods in terms of improving knowledge. One study found no significant difference between interacting during videoconferences and watching webcasts without interaction.
  • Design: Modules with improved design (realistic clinical cases, interactivity, feedback, integration) are more effective than a standard program. However, a linear format showed no difference compared to a branching format in terms of learning: a complex design does not guarantee better results. A design that is effective in one country appears to be transferable to another.
  • Email/SMS follow-up: Email or SMS follow-up systems help increase engagement in e-learning programs. Reminders by SMS or email can improve knowledge acquisition, participation, and the adoption of recommendations. Spaced learning, which combines online training with regular spaced tests via email, is appreciated by participants.
  • Learning Agent: The use of a learning agent (animated character) showed little benefit in a low-quality study.

Effect Based on the Pedagogical Focus:

  • Problem-solving: Real-life clinical cases taught online lead to better knowledge acquisition and improved clinical performance than simple lectures, and specific interactive elements, such as a wiki, can be effective. Online problem-solving training has been shown to change knowledge and clinical practices. Compared to traditional teaching, problem-based teaching is at least as effective in improving knowledge, clinical skills, and clinical outcomes, and certain formats appear more effective depending on the subject matter.
  • Virtual patients, clinical cases: Computerized clinical case simulations or virtual patients produce an effect compared to no intervention, but a smaller one than non-computerized training. A course with 11 additional clinical cases yielded superior results in the short term; however, other studies find no significant difference with or without clinical cases.
  • Situational e-learning: This type of interactive instruction, in which the learner is placed in a specific situation or context, is effective in improving performance compared to no intervention, but the effect is limited compared to traditional interventions.
  • "Serious games": "Serious games" have not yet proven their effectiveness.
  • MOOCs: No comparative studies were found on MOOCs.

Effect over time: The improvement in knowledge achieved through e-learning is not always sustained, and long-term studies are recommended. Some studies show a sustained gain, while others show a decline in acquired knowledge over time and no specific long-term effect on the maintenance of skills in certain areas. Online teaching, compared to a control group, is insufficient on its own to sustain the acquisition of knowledge or clinical skills over time, and few studies were found on this topic.

Blended learning: Blended learning, which combines face-to-face sessions with online training, has shown a small but real improvement in clinical skills in initial clinical training, and these combined sessions are well received by participants. Blended training may be superior to e-learning alone for acquiring knowledge and clinical skills, and delivering the e-learning component after the face-to-face placement may yield the best results. The combination of face-to-face teaching with online training is superior to traditional teaching alone or to online training alone for improving knowledge, clinical skills, and clinical outcomes.

In summary, the effectiveness of e-learning varies and depends on many factors, including instructional design, interactivity, social interaction, and whether it is used alone or combined with face-to-face training. Kirkpatrick's multilevel assessment is essential for understanding these nuances and determining whether a program is achieving its objectives.

A Concrete Case Study: Applying Kirkpatrick's Model in a Blended Anesthesiology Program (VMHS)

The second source presents a highly relevant case study illustrating the application of Kirkpatrick's model in healthcare: a blended training program (e-learning and face-to-face simulation) aimed at improving psychological safety and soft skills within the anesthesiology teams of the VinMec Healthcare System (VMHS) in Vietnam. This study is particularly interesting because it addresses major challenges such as cultural entrenchment, geographical dispersion, and a strong hierarchical structure in which deference prevailed. The goal was to transform the department into one of the safest in Southeast Asia.

The initiative was conducted over 18 months and involved 112 physicians and nurse anesthetists. The program was structured and combined online learning modules with on-site simulation sessions. The impact of the intervention was explicitly assessed using the Kirkpatrick model, which demonstrates that the model is not merely theoretical but can be applied to evaluate complex, large-scale training programs in a real-world clinical setting.

The implementation team was multifaceted, including health system management, training experts, and simulation specialists. This collaboration was essential to overcoming challenges and ensuring alignment with organizational goals. The project required rigorous planning, a needs analysis, the development of tailored courses and scenarios, and the delivery of practical training sessions with interactive debriefings. The challenges encountered during implementation were significant: securing funding (which took nearly a year), skepticism from some senior stakeholders, language barriers (English was the common language but not the native language of most participants), navigating cultural norms and hierarchical structures, and the logistical complexity of training large groups across multiple sites.
The program was made mandatory for all VMHS anesthesia staff as part of continuing education. The pedagogical approach leveraged e-learning to teach and demonstrate the soft skills essential for crisis management, and to raise awareness of performance gaps. The two-day, in-person simulation sessions, called DOMA (Development of Mastery in Anesthesiology), combined innovative theoretical lectures immediately followed by immersive simulation scenarios. The goal was to provide hands-on experience and highlight the gap between ideal and actual performance. Interactive debriefings following the simulations allowed for an in-depth exploration of participants’ understanding and mastery of the skills, offering opportunities for reflection and feedback.

Using Kirkpatrick's model, the VMHS study was able to assess the impact of this complex intervention across four distinct dimensions, thereby providing a comprehensive picture of its effectiveness.

[Figure omitted; source: BMJ]

Measuring Impact at Each Level: Results of the VMHS Study

The VMHS study rigorously measured the impact of its blended learning program using specific indicators for each Kirkpatrick level.

Level 1 – Satisfaction: Participant satisfaction was assessed through anonymous online surveys at the end of each in-person simulation session. The overall results were very positive: all participants (100%) recommended the training and felt it would change their practice. This indicates strong acceptance of the program by VMHS anesthesia staff, a promising starting point for the higher levels of assessment. Satisfaction is the primary expected outcome at Level 1 and aims to encourage adherence to the concept and the approach.

Level 2 – Learning: Improvement in knowledge was measured by the results of anonymous pre- and post-tests conducted on the e-learning platform. Over the 18 months of the study, the 112 participants completed 4,870 hours of e-learning, demonstrating strong engagement (an average of 43 hours and 29 minutes per participant). A large share of the modules started (91% of 3,213) were completed in full. The results showed a highly significant improvement between the pre- and post-tests (success rate of 41% vs. 89%, p < 0.001). This demonstrates the effectiveness of the e-learning tool for knowledge acquisition and its value in preparing participants for the simulation sessions, thereby maximizing the benefits of in-person learning. Similar pre- and post-tests were also used for the two-day simulation sessions, via an online survey system.
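The per-participant figure follows directly from the reported totals; a short Python check, using only the numbers quoted above, reproduces it:

```python
# Figures reported in the study: total e-learning hours, cohort size,
# modules started, and the share of started modules completed in full.
total_hours = 4_870
participants = 112
modules_started = 3_213
completion_rate = 0.91

avg = total_hours / participants                       # ~43.48 hours
hours, minutes = int(avg), round((avg - int(avg)) * 60)
print(f"Average e-learning time: {hours} h {minutes:02d} min per participant")
print(f"Modules completed in full: about {round(modules_started * completion_rate):,}")
```

Running this prints 43 h 29 min per participant, matching the engagement figure reported in the study.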

Level 3 – Behaviors: Changes in behavior (self-reported and observed) were assessed through anonymous online impact surveys administered to anesthesia teams prior to each 2-day simulation session. These surveys, conducted at 6, 12, and 18 months, aimed to assess the perceived impact of the training on behaviors applied and observed in daily clinical situations. Figure 1 (not visible here but described in the source text) illustrates the incidence of observed or reported changes, which are significant and stable over the 18-month period. More than 93% of participants perceived the changes as lasting. Among the changes most frequently cited in the surveys, communication (including the ability to “speak up”) emerged as the most significant change (cited by 46% to 63% of respondents), followed by teamwork (including task assignment and coordination, cited by 35% to 57%) and the use of cognitive aids (cited by 20% to 57%). The fact that the intervention was rolled out to all VMHS anesthesia teams within a short timeframe is considered to have facilitated the successful implementation of these changes.
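As a sketch of how such multi-select impact surveys can be tallied, one can count, for each survey wave, the share of respondents citing each change category. The response sets below are invented for illustration, not the study's data:

```python
from collections import Counter

# Hypothetical answers for one survey wave: each respondent may cite
# several change categories observed in daily practice (multi-select).
responses = [
    {"communication", "teamwork"},
    {"communication"},
    {"communication", "cognitive aids"},
    {"teamwork"},
]

counts = Counter(category for answer in responses for category in answer)
n = len(responses)
for category, k in counts.most_common():
    print(f"{category}: {100 * k / n:.0f}% of respondents")
```

Repeating this tally at 6, 12, and 18 months yields exactly the kind of per-category ranges (e.g., communication cited by 46% to 63% of respondents) reported above.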

Level 4 – Outcomes: The impact on the organization, quality, and safety (Level 4) was assessed based on the results of the VMHS’s annual quality and safety audits and on trends in reported incidents for the VMHS Department of Anesthesiology. The VMHS annual safety audits, based on 124 indicators scored from 1 to 10, showed an improvement in the overall safety score and a reduction in the variation in scores between 2021 (the baseline year prior to the intervention) and 2022 and 2023. The reduction in variation is interpreted as a standardization of practices toward greater safety. The impact on safety culture and the ability to report adverse events was assessed by collecting the number of adverse events reported before and after the intervention began. The number of reported events increased (Figure 3A, not shown here), and the number of people reporting no events was halved compared to the previous year. The increase in the reporting of adverse events, although intuitively negative, is in fact a positive indicator of a strengthened safety culture and greater psychological safety, as staff feel more comfortable reporting errors and near-misses without fear of repercussions. The number of events reported to VMHS after 18 months was even nine times lower than the data from the comparable North American database (Figure 3B, not shown here). The study concludes that the educational intervention, being the only one implemented in the VMHS Department of Anesthesiology during the observed period, is likely the cause of these objective improvements.
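The "reduction in variation" reading can be illustrated with a small sketch: given one score per indicator and per year, a rising mean together with a falling standard deviation is the pattern the study reports. The data below are simulated to mimic that trend, not the audit's actual scores:

```python
import numpy as np

# Simulated audit scores: 124 indicators, each scored 1-10, per year.
# Means and spreads are invented to mimic the reported trend.
rng = np.random.default_rng(seed=0)
scores = {
    2021: rng.normal(7.0, 1.4, 124).clip(1, 10),  # baseline year
    2022: rng.normal(7.8, 1.0, 124).clip(1, 10),
    2023: rng.normal(8.2, 0.8, 124).clip(1, 10),
}

for year, s in scores.items():
    print(f"{year}: mean = {s.mean():.2f}, std = {s.std(ddof=1):.2f}")
```

A falling standard deviation across the 124 indicators is what the study interprets as standardization of practices toward greater safety.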

The results of the VMHS study show clear and lasting positive effects of the intervention on all four Kirkpatrick levels. They demonstrate that psychological safety and related skills are not just a passive byproduct of good leadership, but a skill that can be systematically developed through structured training interventions. The integration of e-learning and large-scale simulation has enabled behavioral change, even in hierarchical and geographically dispersed systems.

Advantages and challenges of using Kirkpatrick in the healthcare context

The use of the Kirkpatrick model to evaluate health training, whether online, face-to-face or blended, presents several advantages and challenges, as illustrated by the sources.

Advantages:

  • Structured framework: The model provides a clear, hierarchical framework for planning and conducting the evaluation. It ensures that the evaluation goes beyond measuring immediate satisfaction or learning, and also seeks to assess implementation and long-term outcomes.
  • Comprehensiveness: By covering four levels, the model allows for a more comprehensive assessment of a training program’s impact, ranging from learners’ perceptions to concrete outcomes.
  • Healthcare relevance: Kirkpatrick’s levels (satisfaction, knowledge, change in practice, clinical outcomes) are directly relevant for assessing the impact of training on the quality of care and patient safety. Organizations such as BEME have adopted this hierarchy.
  • Applicability to various delivery methods: The model can be adapted to evaluate different training delivery methods, including e-learning and blended learning. The VMHS study is a concrete example of this.
  • Demonstrating Value: Evaluation at higher levels (Levels 3 and 4) is essential for demonstrating the strategic value of training and justifying investments. The results of the VMHS study at Level 4, which show improved safety scores and an increase in incident reports, provide tangible evidence of this.
  • Support for continuous improvement: The evaluation provides valuable data for identifying what is working well and what needs to be improved in training programs.

Challenges:

  • Difficulty in measuring higher levels: Levels 3 (behavior) and 4 (outcomes) are often the most difficult and costly to assess reliably, as they require workplace monitoring and the collection of objective data on clinical practices and outcomes. The VMHS study used impact surveys reporting perceived changes (which can be subjective) and objective audits and incident reports (more reliable but influenced by other factors).
  • Causal Attribution: It may be difficult to establish a direct causal link between the training and the changes observed at levels 3 and 4, as many other factors (work environment, managerial support, other initiatives, etc.) can influence behavior and outcomes. The VMHS study notes that its intervention was the only significant one implemented during the observation period, which strengthens the hypothesis of a causal link, but acknowledges the possible influence of unidentified external factors.
  • Cost and resources: Conducting a comprehensive evaluation at all levels—particularly monitoring and collecting objective data—can be costly and require significant time and staff resources.
  • Complexity of the healthcare environment: The healthcare sector is a complex environment characterized by hierarchical structures, strong cultural norms, and geographic dispersion. These factors can make it more difficult to transfer learning and measure impact. The VMHS study had to overcome these challenges to evaluate the effectiveness of its program.
  • Lack of standardized tools: Although evaluation frameworks and tools for e-learning programs exist, they must be selected and adapted based on the specific objectives of the evaluation. There are not always standardized tools available to accurately measure changes in practice or clinical outcomes related to a given training program across all contexts. The VMHS study developed its own measurement methods (impact surveys, use of internal audits, monitoring of incident reports).

Despite these challenges, using the Kirkpatrick model remains a powerful way to structure the evaluation of healthcare training and provide evidence of its effectiveness beyond simple knowledge acquisition. The "Key Takeaways" from the sources highlight the importance of measuring the impact on knowledge, clinical skills, and clinical outcomes, elements directly measurable at Kirkpatrick Levels 2, 3, and 4.

Towards a stronger safety culture through training evaluation

The rigorous application of the Kirkpatrick model, particularly in contexts such as the VMHS study, highlights how training evaluation can directly contribute to strengthening a safety culture. The VMHS study explicitly aimed to transform the organizational culture to improve psychological safety.

Psychological safety, defined as the ability of team members to speak up, take risks to innovate, and admit mistakes without fear of negative consequences, is essential for high-performing healthcare teams. Leaders play a key role in promoting a psychologically safe environment that fosters effective communication, improves teamwork and decision-making, and encourages incident reporting. Developing these non-technical skills ("soft skills") is essential for safer care.

The VMHS study demonstrated that psychological safety is not just an abstract ideal, but a concrete and trainable skill that has a measurable impact on patient safety. The blended learning program (e-learning and simulation) led to a significant increase in "speaking up" behaviors, improved teamwork, and the use of cognitive aids (Level 3). Above all, it led to an increase in safety incident reports and improved annual safety audit scores (Level 4). The increase in incident reports is a key indication of a more open and less punitive safety culture.

The evaluation using the Kirkpatrick model made it possible to capture these changes objectively. Level 1 (satisfaction) showed strong adherence. Level 2 (learning) confirmed knowledge acquisition via e-learning. Level 3 (behaviors) highlighted the perceived and reported changes in communication and teamwork. And Level 4 (outcomes) demonstrated the impact on objective organizational safety indicators.

These results reinforce the idea that investing in training in non-technical skills and psychological safety, assessed in a structured manner, is a powerful lever for improving the quality and safety of care. Sources emphasize that leadership is a determining factor in overcoming obstacles and ensuring the success of such initiatives. The commitment of the VMHS health system's management has been essential in integrating psychological safety into a large hierarchical and geographically dispersed structure.

In conclusion, the Kirkpatrick Model provides an indispensable framework for evaluating the effectiveness of healthcare training programs, whether delivered through e-learning, simulation, or blended learning. It makes it possible to go beyond measuring satisfaction or knowledge acquisition and to assess whether the training actually leads to changes in behavior and, more importantly, to concrete improvements for patients and the organization. In a sector where safety is paramount, rigorous evaluation of training is not only good practice but a strategic necessity for building and maintaining a strong safety culture and continuously improving the quality of care.

Initiatives such as the VMHS program, evaluated with the Kirkpatrick model, offer a replicable model for other healthcare systems wishing to place psychological safety and training evaluation at the heart of their strategy. The success of these programs depends on a well-adapted pedagogical design, careful implementation, learner follow-up, and, above all, strong leadership commitment to support cultural change. Kirkpatrick-based evaluation guides this process and makes it possible to measure progress toward a safer and more efficient healthcare system.

Sources:

https://www.has-sante.fr/upload/docs/application/pdf/2015-09/4e_partie_guide_e-learning.pdf

https://bmjopenquality.bmj.com/content/14/2/e003186

Frédéric MARTIN
SafeTeam Academy