The Kirkpatrick Model: evaluating the effectiveness of healthcare training for optimal quality of care
In the healthcare field, continuing and initial training of professionals is an essential pillar to guarantee the quality of care and patient safety. Whether it is for the acquisition of up-to-date medical knowledge, the development of advanced technical skills, or the improvement of non-technical skills such as communication and teamwork, the investment in training is considerable. However, to ensure that this investment pays off and truly contributes to a better healthcare system, it is imperative to evaluate the effectiveness of these training programs. How do we know if a training program has not only transmitted knowledge, but has also led to a change in behavior, and ultimately, to an improvement in patient outcomes? This is where the Kirkpatrick Model comes in, a major reference in the evaluation of training programs.
The Kirkpatrick Model offers a structured framework for evaluating the impact of training at different levels, ranging from simple participant satisfaction to concrete organizational results or benefits for patients. Drawing on the sources provided, we will explore this model in depth, examine its application in the specific context of healthcare training, particularly programs delivered via e-learning or in blended learning format, and illustrate its utility through a concrete example of a study conducted in a healthcare system. This detailed guide aims to provide healthcare professionals, trainers, and decision-makers with the keys to a relevant and comprehensive evaluation of their training programs, with the aim of continuously improving the quality and safety of care.
The importance of evaluating healthcare training programs
Training evaluation is not a mere formality, but a fundamental strategic approach in the healthcare sector. Sources emphasize the need to evaluate the effectiveness of training programs, whether e-learning courses or more complex interventions aimed at transforming organizational culture and psychological safety. In an environment where errors can have disastrous consequences, ensuring that staff are not only trained, but that this training leads to safer and more effective practices, is a top priority.
The healthcare sector is characterized by increasing complexity, rapid advances in medical knowledge and technologies, and constant pressure to improve the quality of care while controlling costs. In this context, training programs must not only be relevant but also demonstrate their value. Evaluating training makes it possible to:
- Validate whether pedagogical objectives have been met.
- Understand how learners reacted to the training.
- Determine whether the training led to a change in clinical practice.
- Quantify the impact of training on concrete results, including benefits for patients and the organization.
- Identify the strengths and weaknesses of training programs in order to continually improve them.
- Justify investments in training.
Sources indicate that e-learning has drawn inspiration from various fields, including education sciences, information technologies, open and distance learning, simulation, and quality control. This diversity of inspirations highlights the multifaceted nature of modern healthcare training, which requires robust evaluation methods capable of capturing these different dimensions. The World Health Organization (WHO) itself published a systematic review in 2015 on the value of e-learning in the initial training of healthcare professionals, which underscores the growing importance of these modalities and, consequently, the need to evaluate them rigorously.

Although many comparative studies have been conducted, particularly in the field of health, sources note a lack of comparative studies published by French teams on the subject of e-learning. This reinforces the interest in adopting internationally recognized evaluation frameworks, such as Kirkpatrick's, to contribute to a solid body of evidence. Evaluation according to the Kirkpatrick model, which focuses on satisfaction, knowledge acquisition, practice change, and clinical outcome, is a hierarchical approach adopted by evaluation groups such as BEME (Best Evidence Medical Education) and used in systematic reviews on the evaluation of e-learning programs. This model provides a clear roadmap for evaluating the effectiveness of training, from immediate reactions to long-term impact.
Decoding the Kirkpatrick model: the four levels of evaluation
The Kirkpatrick Model, developed by Donald Kirkpatrick and later refined by his son James Kirkpatrick, is one of the most widely used training evaluation frameworks in the world. It proposes a hierarchy of four ascending levels, each measuring a different type of training impact. These levels make it possible to evaluate not only what has been learned, but also whether the learning has been applied and what the final result has been.
Here are Kirkpatrick's four levels of evaluation, as summarized in the sources:
- Level 1 – Reactions: This level assesses how participants reacted to the training. Did they appreciate it? Were they satisfied with it? It measures the immediate perception of learners regarding the training experience, including perceived usefulness, interest, and quality of the environment or educational materials. In the context of e-learning, this can include evaluating the ergonomics of the interface, which is important to avoid discouraging participants. Participant satisfaction is often a necessary, although not sufficient, condition for higher levels of evaluation. High satisfaction can promote adherence and motivation to engage with the content.
- Level 2 – Learning: This level measures what learners have learned at the end of the training session. What knowledge, skills (know-how) and/or attitudes (soft skills) have been acquired? Have the educational objectives been achieved? This is the pedagogical evaluation. This level aims to quantify the gain in knowledge or the improvement of measurable skills directly after the training. Sources mention that the evaluation of e-learning according to knowledge gains is a commonly used criterion. This can be measured by knowledge tests (pre-tests and post-tests), practical exercises, or skills assessments.
- Level 3 – Behaviors (or Transfers): This level assesses whether learners use what they have learned during the training session in their professional practice. What new professional behaviors have been implemented? This level focuses on the application of learning in the workplace. In the healthcare field, this means observing whether professionals have changed their clinical practices, their communication with patients or colleagues, or their approach to teamwork. Sources mention the evaluation of changes in practices or attitudes of professionals. This can be measured by self-assessments of skills and behaviors, direct observations, or reports from supervisors or colleagues. The improvement of practical skills/behaviors compared to a neutral control group has been studied in e-learning programs for various health professions.
- Level 4 – Results: This level assesses the impact of the training session on patient care or on the organization. What are the benefits for patients? This level measures the final results of the training, which go beyond the simple behavior of the individual. In the healthcare sector, this may include the reduction of medical errors, the improvement of clinical outcomes for patients, the increase in process efficiency, patient satisfaction, or organizational indicators such as the reduction of costs related to incidents. The evaluation of clinical outcomes on patients is a criterion retained in the evaluation of e-learning training programs. The improvement of clinical practices compared to a neutral control group has also been studied.
The Kirkpatrick Model is hierarchical because achieving higher levels generally depends on success at lower levels. For example, an unsatisfied participant (low Level 1) is unlikely to be engaged enough to learn (Level 2) or apply what they have learned (Level 3), which would limit the impact on results (Level 4). The adoption of this hierarchy by organizations such as BEME to evaluate medical literature demonstrates its relevance and robustness in the context of healthcare training evaluation. Some studies explicitly use this classification for their comparative analyses.
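As a purely illustrative sketch, an evaluation plan can be organized as a simple structure mapping each level to the question it answers and the instruments used to answer it. The instrument names and example indicators below are hypothetical choices for the example, not prescribed by the model or the sources.

```python
from dataclasses import dataclass, field

@dataclass
class KirkpatrickLevel:
    """One level of an evaluation plan: the question asked and the instruments used."""
    level: int
    name: str
    question: str
    instruments: list[str] = field(default_factory=list)

# Hypothetical evaluation plan for a healthcare e-learning program.
evaluation_plan = [
    KirkpatrickLevel(1, "Reactions", "Were participants satisfied with the training?",
                     ["end-of-module satisfaction survey", "usability feedback"]),
    KirkpatrickLevel(2, "Learning", "Were knowledge, skills, or attitudes acquired?",
                     ["pre-test / post-test", "online case exercises"]),
    KirkpatrickLevel(3, "Behaviors", "Is the learning applied in clinical practice?",
                     ["impact survey after several months", "supervisor observation"]),
    KirkpatrickLevel(4, "Results", "Did patient or organizational outcomes improve?",
                     ["safety audit scores", "adverse event reporting rates"]),
]

for lvl in evaluation_plan:
    print(f"Level {lvl.level} – {lvl.name}: {lvl.question}")
```

Writing the plan down in this form, before the training starts, forces each level to have at least one concrete data source rather than being evaluated after the fact.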
Applying Kirkpatrick to the evaluation of e-learning programs
E-learning has become an increasingly popular training modality in the healthcare sector, offering flexibility and accessibility. Evaluating the effectiveness of these programs using the Kirkpatrick model requires adapting measurement methods to this specific format. The sources provide indications on how e-learning can be evaluated at each of these levels.
Level 1 – Reactions in E-learning: Satisfaction assessment in e-learning is generally done through online surveys or questionnaires at the end of the program or modules. These surveys may cover the ease of use of the platform, the clarity of the content, the interactivity, the relevance of the examples, or the support received. Sources emphasize the importance of interface ergonomics for a positive perception of e-learning training. A deficient interface can discourage participants. Conversely, interactive modules can generate greater satisfaction. The qualitative feedback collected can help identify aspects of pedagogical or technical design that require improvement.
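As a minimal sketch of how such Level 1 survey data might be summarized, the snippet below averages Likert-style ratings per question; the questions and ratings are invented for illustration.

```python
from statistics import mean

# Hypothetical end-of-module satisfaction survey (Likert scale 1-5, one rating per respondent).
responses = {
    "ease of use of the platform": [4, 5, 3, 4, 5],
    "clarity of the content":      [5, 5, 4, 4, 5],
    "relevance of the examples":   [3, 4, 4, 5, 4],
}

for question, ratings in responses.items():
    # Share of respondents giving a rating of 4 or 5 ("satisfied" or better).
    satisfied = sum(r >= 4 for r in ratings) / len(ratings)
    print(f"{question}: mean {mean(ratings):.1f}/5, {satisfied:.0%} rated 4 or 5")
```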
Level 2 – Learning in E-learning: The evaluation of learning in e-learning is commonly carried out through online tests. Pre-tests and post-tests are a standard method for measuring knowledge gain. Sources mention the use of pre-tests and post-tests to assess the improvement of knowledge in e-learning programs. These tests can consist of multiple-choice or single-choice questions. E-learning can be effective in improving knowledge. The improvement of skills (know-how) can be assessed by practical exercises integrated into the modules, online simulations, or interactive case studies. The evaluation of attitudes (soft skills) can be more complex and require scenarios or judgment questions. Sources show that e-learning can lead to an improvement in knowledge and clinical skills.
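As a hedged illustration of how Level 2 pre/post data could be summarized, the snippet below computes the mean raw gain and the mean normalized gain (the share of the possible gain actually achieved); the scores and the normalized-gain formula are assumptions for the example, not taken from the sources.

```python
from statistics import mean

def knowledge_gain(pre_scores, post_scores, max_score=100):
    """Summarize Level 2 (learning) from paired pre-test/post-test scores.

    Returns the mean raw gain and the mean normalized gain, i.e. the gain
    achieved as a fraction of the gain that was still possible before training.
    """
    raw_gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    norm_gains = [
        (post - pre) / (max_score - pre) if pre < max_score else 0.0
        for pre, post in zip(pre_scores, post_scores)
    ]
    return mean(raw_gains), mean(norm_gains)

# Hypothetical scores (out of 100) for five learners.
pre = [40, 55, 35, 60, 45]
post = [75, 85, 70, 90, 80]
raw, norm = knowledge_gain(pre, post)
print(f"mean raw gain: {raw:.1f} points, mean normalized gain: {norm:.2f}")
```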
Level 3 – Behaviors in E-learning: Evaluating the transfer of learning to the workplace from e-learning training can be difficult without structured follow-up. Methods may include self-assessments or reports from participants on how they apply what they have learned. Observations by supervisors or colleagues can also be used, although this is less common for e-learning alone. Impact surveys conducted a few weeks or months after the training can ask participants if they have changed their practices or adopted new behaviors. Sources mention the evaluation of changes in practices or attitudes. The improvement of practical skills/behaviors has been observed in comparative studies on e-learning.
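A minimal sketch of how follow-up impact-survey answers could be turned into Level 3 indicators follows; the change categories and responses are invented for illustration.

```python
from collections import Counter

# Hypothetical coded answers to "Which practice changes have you applied since the training?"
# collected a few months afterwards (one list of change categories per respondent).
survey_answers = [
    ["communication", "teamwork"],
    ["communication"],
    ["cognitive aids", "communication"],
    [],                        # respondent reporting no change
    ["teamwork", "cognitive aids"],
]

counts = Counter(change for answer in survey_answers for change in answer)
n_respondents = len(survey_answers)
for change, count in counts.most_common():
    print(f"{change}: cited by {count}/{n_respondents} respondents ({count / n_respondents:.0%})")
```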
Level 4 – Results in E-learning: Measuring the impact of e-learning on clinical or organizational outcomes requires collecting objective data after the training. This may involve the analysis of patient records, safety incident data, quality of care indicators, or financial data related to process efficiency. Sources indicate that the evaluation of clinical outcomes on patients is a relevant criterion and that the improvement of clinical practices can be measured comparatively to a control group. E-learning programs can improve clinical outcomes compared to a neutral control group. Evaluation at level 4 is often the most complex but also the most significant to demonstrate the strategic value of training.
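Where outcome data are available for a trained group and a control group, a first-pass Level 4 comparison can be as simple as the sketch below; the event counts are hypothetical, and a real analysis would also have to address confounders, clustering, and statistical significance.

```python
# Hypothetical Level 4 data: adverse events per 1,000 care episodes,
# for a group that completed the training and a neutral control group.
events_trained, episodes_trained = 12, 1000
events_control, episodes_control = 25, 1000

rate_trained = events_trained / episodes_trained
rate_control = events_control / episodes_control
relative_reduction = 1 - rate_trained / rate_control

print(f"trained group: {rate_trained * 1000:.1f} events per 1,000 episodes")
print(f"control group: {rate_control * 1000:.1f} events per 1,000 episodes")
print(f"relative risk reduction: {relative_reduction:.0%}")
```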
A systematic review dating back to 2002 already used an evaluation of e-learning based on gains in knowledge, changes in professional practices or attitudes, and professional satisfaction. The use of Kirkpatrick's model to evaluate e-learning programs is therefore an established practice in healthcare training research. For the purposes of this guide, some of Kirkpatrick's criteria have been adopted to evaluate the outcomes of e-learning programs (assessment of knowledge, assessment of clinical skills and behaviors, clinical outcome on patients).
Sources also note that the time spent on online courses is similar to that of face-to-face courses, unless interactions are offered, in which case the time increases, but so does learning. "Adaptive" programs, which allow students to skip certain modules depending on their level, can even shorten the learning time while being more effective. These temporal considerations are important when designing and evaluating e-learning programs.
Effectiveness of e-learning evaluated through Kirkpatrick's levels, according to the sources
Sources present a compilation of results from comparative studies evaluating the effectiveness of e-learning training according to different criteria, often aligned with Kirkpatrick's levels. In general, e-learning has a significant effect compared with no intervention at all. Compared with non-Internet-based interventions, the effect is smaller or more heterogeneous.
E-learning vs. Lecture-Based Learning (In-Person): The most frequent comparison is between e-learning and traditional lecture-based teaching. A systematic review evaluating 9 studies of nursing students or graduates showed an improvement in knowledge and clinical skills with e-learning compared with a traditional course. However, most studies comparing e-learning and lecture-based learning find no significant difference in knowledge acquisition, and some authors even find e-learning programs more effective than a similar in-person course. Overall, for improving knowledge and clinical skills, in-person teaching is considered equivalent to online teaching.
E-learning vs. Other Delivery Methods: Compared with other formats such as paper-based materials, some studies have found no difference between web-based and paper-based delivery. However, an online pain management course demonstrated improved knowledge and management skills compared with the distribution of paper-based recommendations. An online evidence-based medicine (EBM) course was more effective at improving knowledge and clinical examination skills than a group working independently.
Effect Based on Program Duration: The duration or spacing of the program over time (short vs. spaced mode) does not influence the improvement of knowledge, clinical skills, and clinical outcomes. Studies comparing different temporal modes of module delivery have shown no difference between groups. However, a long-term, multi-component program showed no difference compared to access to clinical guidelines.

Effect according to Module Type:
- Technical Interactivity: Interactivity, practical exercises, repetition, and feedback improve learning outcomes, and interactive modules may be more effective than traditional teaching. However, some authors find better results with non-interactive modules, and others find no difference between interactive and textbook-style modules. The mode of delivery, whether short or spread out over time, does not influence the improvement of learning parameters. Modules that interact with the participant improve learning parameters, but results diverge depending on the subject matter taught.
- Social interactions: Modules with interaction between participants and/or instructors improve the achievement of learning objectives. A program adding discussion and exchange on clinical cases to simple e-learning was more effective at imparting knowledge.
- Webcast, videoconference: Face-to-face teaching is equivalent to webcast or web-conference teaching in terms of knowledge improvement. One study found no significant difference between interactive videoconferences and webcasts without interaction.
- Design: Modules with improved design (authenticity of clinical cases, interactivity, feedback, integration) are more effective than a standard program. However, a linear format has not shown any difference compared to a branched format in terms of learning. A complex design does not guarantee better results. An effective design in one country seems to be transferable to another.
- Email/SMS reminders: The use of email/SMS reminder systems contributes to better involvement in e-learning programs. SMS or email reminders can improve knowledge acquisition, participation, and adoption of recommendations. Spaced learning combining online training and regular spaced tests by email is appreciated by participants.
- Pedagogical agent: The use of a pedagogical agent (an animated character) showed only a modest benefit, in a single low-quality study.
Effect according to the Dominant Pedagogical Approach:
- Problem solving: Real clinical cases taught online lead to better knowledge acquisition and improved clinical behavior than standard courses. The use of specific interactive elements such as wikis can be effective. Online training through problem solving has shown changes in knowledge and clinical practices. Compared with the traditional teaching method, the problem-solving method is at least equivalent for improving knowledge, clinical skills, and clinical outcomes, and some formats seem more effective depending on the subject.
- Virtual patient, clinical cases: Computerized clinical case simulations or virtual patients are effective compared with no intervention, but show only a small effect compared with non-computerized training. A course with 11 additional clinical cases gave superior short-term results. However, other studies find no significant difference with or without clinical cases.
- Situation-based e-learning: This type of interactive teaching, where the learner is placed in a specific situation/context, is effective in improving performance compared to no intervention, but the effect is limited compared to traditional interventions.
- "Serious games": "Serious games" have not yet demonstrated their effectiveness.
- MOOC: No comparative study has been found regarding MOOCs.
Effect Over Time: The improvement in knowledge with e-learning programs does not always persist over time, and long-term studies are recommended. Some studies show a gain that is maintained, while others show a decrease in acquired knowledge over time and no specific effect over time for maintaining skills in certain areas. Online teaching compared to a control group is not sufficient to maintain the acquisition of knowledge or clinical skills over time, and few studies have been found on this topic.
Blended Learning: Blended learning, which combines face-to-face sessions with online training, has shown a slight improvement in clinical skills in initial clinical training. These combined face-to-face and online sessions are well accepted by participants. Blended learning may be superior to e-learning alone for delivering knowledge and clinical skills, and applying e-learning after the face-to-face training may yield the best results. Face-to-face teaching combined with online training is superior to traditional teaching alone or online training alone for improving knowledge, clinical skills, and clinical outcomes.
In summary, the effectiveness of e-learning is variable and depends on many factors, including pedagogical design, interactivity, social interactions, and whether it is used alone or as a complement to face-to-face training. Evaluation at different Kirkpatrick levels is essential to understand these nuances and determine whether a program achieves its objectives.
Concrete case study: applying the Kirkpatrick model to a blended anesthesiology program (VMHS)
The second source presents a highly relevant case study to illustrate the application of Kirkpatrick's model in the healthcare sector: a blended training program (e-learning and face-to-face simulation) aimed at improving psychological safety and soft skills within the anesthesiology teams of the VinMec Healthcare System (VMHS) in Vietnam. This study is particularly interesting as it tackles major challenges such as deeply rooted cultural norms, geographical dispersion, and a strong hierarchical structure in which deference predominated. The aim was to transform the department into one of the safest in Southeast Asia.
The initiative was conducted over 18 months and involved 112 anesthesiologists and nurses. The program was structured and combined online learning modules and on-site simulation sessions. The evaluation of the impact of this intervention was explicitly carried out according to the Kirkpatrick model. This demonstrates that the model is not only theoretical but can be applied to evaluate complex and large-scale training programs in a real clinical context.
The implementation team was multidisciplinary, including health system leadership, training experts, and simulation specialists. This collaboration was essential to overcome challenges and ensure alignment with organizational goals. The project required rigorous planning, needs analysis, the development of tailored courses and scenarios, and the conduct of practical training sessions with interactive debriefings.
The challenges encountered during implementation were significant, including securing funding (which took almost a year), skepticism from some senior stakeholders, language barriers (English being the common but non-native language for most), navigating cultural norms and hierarchical structures, and logistical complexities related to training large groups across multiple sites. The program was made mandatory for all anesthesia staff at VMHS as part of continuing education.
The pedagogical approach leveraged e-learning to teach and demonstrate ideal non-technical skills for crisis management, and to raise awareness of performance gaps. The 2-day in-person simulation sessions, called DOMA (Development of Mastery in Anesthesiology), combined innovative theoretical courses immediately illustrated by immersive simulation scenarios. The goal was to provide a practical experience and highlight the gap between ideal and actual performance. Interactive debriefings after the simulations allowed for an in-depth exploration of participants' understanding and mastery of skills, providing opportunities for reflection and feedback.
Using the Kirkpatrick model, the VMHS study was able to evaluate the impact of this complex intervention across four distinct dimensions, providing a comprehensive picture of its effectiveness.

Measuring the impact at each level: the results of the VMHS study
The VMHS study rigorously measured the impact of its blended learning program using specific indicators for each Kirkpatrick level.
Level 1 – Satisfaction: Participant satisfaction was assessed through anonymous digital surveys at the end of each in-person simulation session. The overall results were very positive: all participants (100%) recommended the training and thought it would change their practice. This indicates strong acceptance of the program by the VMHS anesthesia staff, which is a favorable starting point for achieving higher levels of evaluation. Satisfaction is the main effect expected at Level 1 and aims to encourage adherence to the concept and approach.
Level 2 – Learning: The improvement in knowledge was measured by the results of anonymous pre-tests and post-tests taken on the e-learning platform. Over the 18 months of the study, the 112 participants completed 4,870 hours of e-learning, which reflects strong engagement (an average of 43 hours and 29 minutes per participant). A large proportion of modules (91% of the 3,213 modules started) were completed in full. The results showed a highly significant improvement between pre-tests and post-tests (success rate of 41% vs. 89%, p<0.001). This demonstrates the effectiveness of the e-learning tool for knowledge acquisition and its value in preparing participants for the simulation sessions, thereby maximizing the benefits of in-person learning. Similar pre-tests and post-tests were also used for the 2-day simulation sessions via an online survey system.
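The source reports the pre/post success rates and the significance level, but not the statistical test used. Purely as an illustration of how such a comparison of two proportions could be reproduced, the sketch below runs a chi-square test, assuming 112 assessments per phase and treating them as independent, which is a simplification of the real paired design.

```python
from scipy.stats import chi2_contingency

# Illustrative only: the source gives success rates of 41% (pre) and 89% (post)
# with p < 0.001, but not the underlying counts or test. We assume 112
# assessments per phase and treat them as independent proportions.
n = 112
pre_pass, post_pass = round(0.41 * n), round(0.89 * n)
table = [[pre_pass, n - pre_pass],
         [post_pass, n - post_pass]]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"pre-test: {pre_pass}/{n} passed, post-test: {post_pass}/{n} passed, p = {p_value:.1e}")
```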
Level 3 – Behaviors: Behavior changes (reported and observed) were assessed through anonymous digital impact surveys conducted with anesthesia teams before each 2-day simulation session. These surveys, conducted at 6, 12, and 18 months, aimed to assess the perceived impact of the training on behaviors applied and observed in daily clinical situations. Figure 1 (not visible here but described by the source text) illustrates the incidence of observed or reported changes, which are significant and stable over the 18-month period. More than 93% of participants perceived the changes as lasting. Among the most frequently cited changes in the surveys, communication (including the ability to "speak up") emerged as the most important change (cited by 46% to 63% of respondents), followed by teamwork (including task assignment and coordination, cited by 35% to 57%) and the use of cognitive aids (cited by 20% to 57%). The fact that the intervention was applied to all VMHS anesthesia teams within a short period is considered to have facilitated the successful implementation of these changes.
Level 4 – Results: The impact on the organization, quality, and safety (Level 4) was estimated by the results of the VMHS's annual quality and safety audits and by the evolution of reported events for the VMHS anesthesia department. The VMHS's annual safety audits, based on 124 indicators rated from 1 to 10, showed an improvement in the overall safety score and a reduction in the dispersion of scores between 2021 (baseline year before the intervention) and 2022 and 2023. The reduction in dispersion is interpreted as a homogenization of practices towards greater safety. The impact on the safety culture and the ability to report adverse events was assessed by collecting the number of adverse events reported before and after the start of the intervention. The number of reported events increased (Figure 3A, not visible here) and the number of people reporting no events was halved compared to the previous year. The increase in adverse event reporting, although intuitively negative, is actually a positive indicator of a strengthened safety culture and greater psychological safety, as staff feel more comfortable reporting errors and near misses without fear of repercussions. The number of events reported at VMHS after 18 months was even nine times lower than the data in the comparable North American database (Figure 3B, not visible here). The study concludes that the educational intervention, being the only one implemented in the VMHS anesthesia department during the observed period, is likely to be the cause of these objective improvements.
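To make the Level 4 reading of audit data more tangible, here is a minimal sketch of how a rising mean score together with a shrinking dispersion across indicators could be computed per audit year. The scores below are entirely hypothetical placeholders, not the study's data.

```python
from statistics import mean, stdev

# Hypothetical audit scores (each indicator rated 1 to 10) for three audit years.
audit_scores = {
    2021: [6, 5, 7, 4, 8, 6, 5, 7],   # baseline year before the intervention
    2022: [7, 7, 8, 6, 8, 7, 7, 8],
    2023: [8, 8, 8, 7, 9, 8, 8, 8],
}

for year, scores in audit_scores.items():
    # A rising mean with a shrinking standard deviation is read as safer,
    # more homogeneous practice across indicators.
    print(f"{year}: mean = {mean(scores):.2f}, dispersion (sd) = {stdev(scores):.2f}")
```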
The results of the VMHS study show clear and lasting positive effects of the intervention on all four Kirkpatrick levels. They demonstrate that psychological safety and related skills are not just a passive byproduct of good leadership, but a skill that can be systematically developed through structured training interventions. The integration of e-learning and large-scale simulation has enabled behavioral change, even in hierarchical and geographically dispersed systems.
Advantages and challenges of using Kirkpatrick in the healthcare context
The use of the Kirkpatrick model to evaluate health training, whether online, face-to-face or blended, presents several advantages and challenges, as illustrated by the sources.
Advantages:
- Structured Framework: The model provides a clear and hierarchical structure for planning and conducting the evaluation. It ensures that we do not limit ourselves to evaluating satisfaction or immediate learning but also seek to measure application and long-term results.
- Completeness: By covering four levels, the model allows for a more complete evaluation of the impact of a training program, from learner perceptions to concrete results.
- Relevance to health: Kirkpatrick levels (satisfaction, knowledge, practice change, clinical outcome) are directly relevant to assessing the impact of training on the quality of care and patient safety. Organizations such as BEME have adopted this hierarchy.
- Applicability to different modalities: The model can be adapted to evaluate different training modalities, including e-learning and blended learning. The VMHS study is a case in point.
- Demonstrating Value: Evaluation at higher levels (3 and 4) is essential to demonstrate the strategic value of training and justify investments. The results of the VMHS study at Level 4, showing improved safety scores and increased incident reporting, provide tangible proof.
- Support for continuous improvement: The evaluation provides valuable data to identify what works well and what needs to be improved in training programs.
Challenges:
- Difficulty in measuring higher levels: Levels 3 (behavior) and 4 (results) are often the most difficult and costly to evaluate reliably, as they require monitoring in the workplace and the collection of objective data on practices and clinical outcomes. The VMHS study used impact surveys reporting perceived changes (which may be subjective) and objective audits and incident reports (more reliable but influenced by other factors).
- Causality Attribution: Establishing a direct causal link between training and observed changes at levels 3 and 4 can be challenging, as numerous other factors (work environment, managerial support, other initiatives, etc.) can influence behavior and outcomes. The VMHS study notes that its intervention was the only significant one implemented during the observed period, which strengthens the hypothesis of a causal link but acknowledges the possible influence of unidentified external factors.
- Cost and resources: Conducting a comprehensive assessment at all levels, especially monitoring and collecting objective data, can be costly and require significant resources in time and personnel.
- Complexity of the healthcare environment: The healthcare sector is a complex environment, characterized by hierarchical structures, strong cultural norms, and geographic dispersion. These factors can make the transfer of learning and the measurement of impact more difficult. The VMHS study had to overcome these challenges to evaluate the effectiveness of its program.
- Lack of standardized tools: Although there are evaluation grids and tools for e-learning programs, their selection and adaptation must be based on the specific objectives of the evaluation. There are not always standardized tools to accurately measure changes in practice or clinical outcomes related to a given training in all contexts. The VMHS study developed its own measurement methods (impact surveys, use of internal audits, monitoring of incident reports).
Despite these challenges, using the Kirkpatrick model remains a powerful way to structure the evaluation of healthcare training and provide evidence of its effectiveness beyond simple knowledge acquisition. The "Key Takeaways" from the sources highlight the importance of measuring the impact on knowledge, clinical skills, and clinical outcomes, elements directly measurable at Kirkpatrick Levels 2, 3, and 4.
Towards a stronger safety culture through training evaluation
The rigorous application of the Kirkpatrick model, particularly in contexts such as the VMHS study, highlights how training evaluation can directly contribute to strengthening a safety culture. The VMHS study explicitly aimed to transform the organizational culture to improve psychological safety.
Psychological safety, defined as the ability of team members to speak up, take risks in order to innovate, and admit their mistakes without fear of negative consequences, is an essential element of high-performing healthcare teams. Leaders play a key role in promoting a psychologically safe environment that fosters effective communication, improves teamwork and decision-making, and encourages incident reporting. Developing these non-technical skills ("soft skills") is essential for safer care.
The VMHS study demonstrated that psychological safety is not just an abstract ideal, but a concrete and trainable skill that has a measurable impact on patient safety. The blended learning program (e-learning and simulation) led to a significant increase in "speaking up" behaviors, improved teamwork, and the use of cognitive aids (Level 3). Above all, it led to an increase in safety incident reports and improved annual safety audit scores (Level 4). The increase in incident reports is a key indication of a more open and less punitive safety culture.
The evaluation using the Kirkpatrick model made it possible to objectify these changes. Level 1 (satisfaction) showed strong adherence. Level 2 (learning) confirmed the acquisition of knowledge via e-learning. Level 3 (behaviors) highlighted the perceived and reported changes in communication and teamwork. And Level 4 (results) demonstrated the impact on objective organizational safety indicators.
These results reinforce the idea that investing in training in non-technical skills and psychological safety, assessed in a structured manner, is a powerful lever for improving the quality and safety of care. Sources emphasize that leadership is a determining factor in overcoming obstacles and ensuring the success of such initiatives. The commitment of the VMHS health system's management has been essential in integrating psychological safety into a large hierarchical and geographically dispersed structure.
In conclusion, the Kirkpatrick Model provides an indispensable framework for evaluating the effectiveness of healthcare training programs, whether e-learning, simulation, or blended learning. It makes it possible to go beyond simply measuring satisfaction or knowledge acquisition to assess whether the training actually leads to changes in behavior and, more importantly, to concrete improvements for patients and the organization. In a sector where safety is paramount, a rigorous evaluation of training is not only a good practice, but a strategic necessity to build and maintain a strong safety culture and continuously improve the quality of care delivered. Initiatives such as the VMHS, evaluated by Kirkpatrick, offer a replicable model for other healthcare systems wishing to place psychological safety and training evaluation at the heart of their strategy. The success of these programs depends on an adapted pedagogical design, careful implementation, learner follow-up, and above all, a strong commitment from leadership to support cultural change. The Kirkpatrick evaluation guides this process and makes it possible to measure the progress made towards a safer and more efficient healthcare system.
Sources :
https://www.has-sante.fr/upload/docs/application/pdf/2015-09/4e_partie_guide_e-learning.pdf
https://bmjopenquality.bmj.com/content/14/2/e003186