Published on 7/5/2026

Kirkpatrick model


Kirkpatrick's Model: Evaluating the Effectiveness of Healthcare Training for Optimal Quality of Care

In healthcare, continuing and initial professional training is a cornerstone for ensuring quality care and patient safety. Whether it involves acquiring up-to-date medical knowledge, developing advanced technical skills, or improving soft skills like communication and teamwork, the investment in training is substantial. However, to ensure this investment pays off and truly contributes to a better healthcare system, it is imperative to evaluate the effectiveness of these training programs. How can we know whether a training program has not only imparted knowledge but also led to a change in behavior and, ultimately, improved patient outcomes? This is where the Kirkpatrick Model comes in, a major reference in the evaluation of training programs.

The Kirkpatrick Model offers a structured framework for evaluating the impact of training at different levels, ranging from simple participant satisfaction to concrete results for the organization or benefits for patients. Drawing on the sources cited below, we will explore this model in depth, examine its application to healthcare training, particularly training delivered via e-learning or blended learning, and illustrate its usefulness through a concrete study conducted in a healthcare system. This detailed guide aims to give healthcare professionals, trainers, and decision-makers the tools for a relevant and comprehensive evaluation of their training programs, with a view to continuously improving the quality and safety of care.

The Importance of Evaluating Healthcare Training

Evaluating training is not a mere formality but a fundamental strategic approach in the healthcare sector. The sources emphasize the need to assess the effectiveness of training programs, whether they involve e-learning or more complex interventions aimed at transforming organizational culture and psychological safety. In an environment where errors can have disastrous consequences, ensuring that staff are not only trained but that this training leads to safer and more effective practices is a top priority. The healthcare sector is characterized by increasing complexity, rapid advances in medical knowledge and technology, and constant pressure to improve the quality of care while controlling costs. In this context, training programs must not only be relevant but also demonstrate their value. Evaluating training allows us to:

  • Validate whether the learning objectives have been met.
  • Understand how learners responded to the training.
  • Determine whether the training led to a change in clinical practice.
  • Quantify the impact of the training on concrete outcomes, including benefits for patients and the organization.
  • Identify the strengths and weaknesses of training programs in order to continuously improve them.
  • Justify investments in training.

The sources indicate that e-learning has drawn inspiration from various fields, including educational sciences, information technology, open and distance learning, simulation, and quality control. This diversity of inspirations highlights the multifaceted nature of modern health training, which requires robust assessment methods capable of capturing these different dimensions. The World Health Organization (WHO) itself published a systematic review in 2015 on the value of e-learning in initial training for healthcare professionals, highlighting the growing importance of these methods and, consequently, the need for rigorous evaluation. Although numerous comparative studies have been conducted, particularly in the health sector, the sources note a lack of comparative studies published by French teams on e-learning; this gap reinforces the importance of adopting internationally recognized evaluation frameworks, such as Kirkpatrick's, to contribute to a robust body of evidence. Evaluation according to Kirkpatrick's model, which focuses on satisfaction, knowledge acquisition, practice change, and clinical outcome, is a hierarchical approach adopted by evaluation groups such as BEME (Best Evidence Medical Education) and used in systematic reviews of e-learning program evaluations. This model provides a clear roadmap for assessing the effectiveness of training, from immediate reactions to long-term impact.

Deciphering Kirkpatrick's Model: The Four Levels of Evaluation

The Kirkpatrick Model, developed by Donald Kirkpatrick and later refined by his son James Kirkpatrick, is one of the most widely used training evaluation frameworks in the world. It proposes a hierarchy of four increasing levels, each measuring a different type of training impact. These levels allow for the evaluation not only of what was learned, but also of whether the learning was applied and what the final result was. Here are the four evaluation levels according to Kirkpatrick, as summarized in the sources:

Level 1 – Reactions: This level assesses how participants reacted to the training. Did they enjoy it? Were they satisfied? This involves measuring learners' immediate perception of the training experience, including perceived usefulness, interest, and the quality of the learning environment or materials. In the context of e-learning, this may include evaluating the usability of the interface, which is important to prevent participant discouragement. Participant satisfaction is often a necessary, though not sufficient, condition for the higher levels of evaluation: high satisfaction can foster engagement and motivation to continue learning.

Level 2 – Learning: This level measures what learners have learned by the end of the training session. What knowledge, skills (know-how), and/or attitudes (soft skills) have been acquired? Have the learning objectives been met? This is the pedagogical evaluation. It aims to quantify the gain in knowledge or the improvement in skills that can be measured immediately after the training. The sources mention that evaluating e-learning based on knowledge gain is a commonly used criterion; this can be measured by knowledge tests (pre-tests and post-tests), practical exercises, or skills assessments.

Level 3 – Behaviors (or Transfer): This level assesses whether learners use what they learned during the training in their professional practice. What new professional behaviors have been implemented? This level focuses on the application of learning in the workplace. In the healthcare field, this means observing whether professionals have modified their clinical practices, their communication with patients or colleagues, or their approach to teamwork. The sources mention the assessment of changes in professionals' practices or attitudes, which can be measured by self-assessments of skills and behaviors, direct observations, or reports from supervisors or colleagues. The improvement of practical skills and behaviors compared to a neutral control group has been studied in e-learning programs for various healthcare professions.

Level 4 – Outcomes: This level assesses the impact of the training on patient care or the organization. What are the benefits for patients? It measures the final results of the training, which go beyond individual behavior. In the healthcare sector, this can include a reduction in medical errors, improved clinical outcomes for patients, increased process efficiency, patient satisfaction, or organizational indicators such as reduced incident-related costs. Evaluating clinical outcomes for patients is a criterion used in the evaluation of e-learning training programs, and the improvement in clinical practices compared to a neutral control group has also been studied.

Kirkpatrick's model is hierarchical because achieving higher levels generally depends on success at lower levels. For example, a dissatisfied participant (a low Level 1 score) is unlikely to engage sufficiently to learn (Level 2) or apply what they have learned (Level 3), which in turn limits the impact on outcomes (Level 4). The adoption of this hierarchy by organizations like BEME to evaluate the medical literature demonstrates its relevance and robustness in the context of healthcare training evaluation. Some studies explicitly use this classification for their comparative analyses.
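
To make the framework more tangible for evaluation planning, here is a minimal sketch in Python (an illustration of ours, not something from the sources) that represents each level together with its guiding question and the example instruments mentioned above:

```python
# Minimal sketch: representing the four Kirkpatrick levels for evaluation
# planning. Level names and example instruments follow the article; the
# structure itself is purely illustrative.
from dataclasses import dataclass, field

@dataclass
class KirkpatrickLevel:
    number: int
    name: str
    question: str
    instruments: list[str] = field(default_factory=list)

LEVELS = [
    KirkpatrickLevel(1, "Reactions", "Were participants satisfied?",
                     ["satisfaction survey", "usability feedback"]),
    KirkpatrickLevel(2, "Learning", "What knowledge or skills were acquired?",
                     ["pre-test/post-test", "skills assessment"]),
    KirkpatrickLevel(3, "Behaviors", "Is the learning applied in practice?",
                     ["impact survey", "direct observation", "supervisor report"]),
    KirkpatrickLevel(4, "Outcomes", "What are the benefits for patients?",
                     ["safety audits", "incident reports", "clinical indicators"]),
]

for level in LEVELS:
    print(f"Level {level.number} - {level.name}: {level.question}")
    print(f"  instruments: {', '.join(level.instruments)}")
```

Laying the levels out this way makes explicit, before any data are collected, which instruments are expected to feed which level of the evaluation.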

Applying Kirkpatrick to E-Learning Program Evaluation

E-learning has become an increasingly widespread training modality in the healthcare sector, offering flexibility and accessibility. Evaluating the effectiveness of these programs using Kirkpatrick's model requires adapting measurement methods to this specific format. The sources provide guidance on how e-learning can be evaluated at each of the four levels.

Level 1 – Reactions in e-learning: Satisfaction with e-learning is generally assessed through online surveys or questionnaires at the end of the program or of individual modules. These surveys may cover the platform's ease of use, the clarity of the content, interactivity, the relevance of the examples, or the support received. The sources emphasize the importance of interface usability for a positive perception of e-learning: a poor interface can discourage participants, while interactive modules can generate greater satisfaction. The qualitative feedback gathered can help identify aspects of the instructional or technical design that need improvement.

Level 2 – Learning in e-learning: Learning is commonly assessed through online tests. Pre-tests and post-tests are a standard method for measuring knowledge gain, and the sources mention their use to assess knowledge improvement within e-learning programs. These tests may consist of multiple-choice or single-answer questions. Skills improvement (know-how) can be assessed through practical exercises integrated into modules, online simulations, or interactive case studies. Assessing attitudes (soft skills) can be more complex and may require scenarios or judgment questions. The sources show that e-learning can lead to improved clinical knowledge and skills.

Level 3 – Behaviors in e-learning: Assessing the transfer of learning to the workplace after e-learning training can be difficult without structured follow-up. Methods may include self-assessments or participant reports on how they apply what they have learned. Observations by supervisors or colleagues can also be used, although this is less common for e-learning alone. Impact surveys conducted a few weeks or months after training may ask participants whether they have changed their practices or adopted new behaviors. The sources mention assessing changes in practices or attitudes, and improvements in practical skills and behaviors have been observed in comparative studies on e-learning.

Level 4 – Outcomes in e-learning: Measuring the impact of e-learning on clinical or organizational outcomes requires collecting objective data after training. This may involve analyzing patient records, safety incident data, quality-of-care indicators, or financial data related to process efficiency. The sources indicate that evaluating clinical patient outcomes is a relevant criterion and that improvements in clinical practices can be measured against a control group; e-learning programs can improve clinical outcomes compared to a neutral control group. Level 4 evaluation is often the most complex but also the most significant for demonstrating the strategic value of training. A systematic review from 2002 already evaluated e-learning based on knowledge gains, changes in professional practices or attitudes, and participant satisfaction. Using Kirkpatrick's model to evaluate e-learning programs is therefore an established practice in healthcare training research.
For this guide, some of Kirkpatrick's criteria were retained to evaluate the outcomes of e-learning training programs: knowledge assessment, assessment of clinical skills and behaviors, and clinical outcomes for patients. The sources also note that the time spent on online courses is similar to that of face-to-face courses, unless interactive elements are offered, in which case the time increases, but so does the learning. "Adapted" programs, which allow students to skip certain modules depending on their level, can even shorten learning time while remaining more effective. These time considerations are important when designing and evaluating e-learning programs.
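
To illustrate the pre-test/post-test approach described above, here is a minimal sketch of a Level 2 analysis using a paired t-test. The scores are invented for the example, and the choice of test is an assumption on our part; a real evaluation would pick its statistic based on the data:

```python
# Minimal sketch of a Level 2 (learning) analysis: paired comparison of
# pre-test and post-test scores. All scores below are invented.
from scipy import stats

pre_scores  = [42, 55, 38, 61, 47, 50, 44, 58]   # % correct before the module
post_scores = [78, 85, 70, 90, 82, 79, 76, 88]   # % correct after the module

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
print(f"Mean gain: {sum(gains) / len(gains):.1f} percentage points")

# Paired t-test: is the post-test improvement statistically significant?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Pairing each participant's two scores, rather than comparing group averages, is what lets the analysis attribute the gain to the training rather than to differences between participants.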

Effectiveness of E-Learning Assessed by Kirkpatrick According to the Sources

The sources present a compilation of results from comparative studies evaluating the effectiveness of e-learning training according to different criteria, often aligned with Kirkpatrick levels. In general, e-learning has a significant effect compared to no intervention. Compared with non-internet-based interventions, the effect is smaller and more heterogeneous.

E-learning vs. Lecture-Based (In-Person) Training: The most frequent comparison is between e-learning and the traditional lecture. A systematic review of 9 studies with nursing students or graduates showed an improvement in knowledge and clinical skills with e-learning compared to a traditional lecture. However, most studies comparing e-learning and lectures find no significant difference in knowledge acquisition. Some authors even find e-learning programs more effective than a similar face-to-face course. In terms of improving knowledge and clinical skills, face-to-face teaching is considered equivalent to online teaching.

E-learning vs. Other Delivery Methods: Compared to other formats such as print, some studies have found no difference between web and print formats for delivering training. However, an online course on pain management showed improved knowledge and management skills compared to the distribution of printed guidelines. An online course on evidence-based medicine (EBM) was more effective in improving knowledge and clinical examination skills than a group working independently.

Effect of Program Duration: The duration or timing of the program (short vs. extended) did not influence the improvement of knowledge, clinical skills, or clinical outcomes. Studies comparing different delivery schedules for modules showed no difference between groups. However, a long-term multi-component program showed no difference in access to clinical guidelines.

Source: BMJ

Effect According to Module Type:

  • Interactivity technique: Interactivity, practical exercises, repetition, and feedback improve learning outcomes. Interactive modules can be more effective than traditional teaching. However, some authors find better results with non-interactive modules, and others find no specific improvement between interactive and text-based modules. The delivery schedule, short or spread out over time, does not influence the improvement of learning parameters. Modules that interact with the participant improve learning parameters, but results diverge depending on the subject matter.
  • Social interactions: Interactive modules between participants and/or teachers improve learning outcomes. A program with discussion and exchange on clinical cases, in addition to simple e-learning, was more effective in terms of knowledge acquisition.
  • Webcast, videoconference: Face-to-face teaching compared to webcast or webconference teaching methods is equivalent in terms of knowledge improvement. One study found no significant difference between interacting during videoconferences and during webcasts without interaction.
  • Design: Modules with improved design (authenticity of clinical cases, interactivity, feedback, integration) are more effective than a standard program. However, a linear format showed no difference compared to a branching format in terms of learning. A complex design does not guarantee better results. An effective design in one country appears to be transferable to another.
  • Email/SMS follow-up: Using email or SMS follow-up systems contributes to better engagement in e-learning programs. SMS or email reminders can improve knowledge acquisition, participation, and the adoption of recommendations. Spaced learning, combining online training with regular spaced tests via email, is appreciated by participants.
  • Learning Agent: The use of a learning agent (animated character) showed little benefit in a low-quality study.

Effect According to the Pedagogical Focus:

  • Problem-solving: Real-life clinical cases taught online allow for better knowledge acquisition and improved clinical behavior than simple lectures. The use of specific interactive elements such as a wiki can be effective. Online problem-solving training has demonstrated a change in knowledge and clinical practices. Problem-based teaching, compared to traditional teaching, is at least equivalent in terms of improving knowledge, clinical skills, and clinical outcomes, and some formats appear more effective depending on the subject matter.
  • Virtual patients, clinical cases: Computerized clinical case simulations or virtual patients show an effect compared with no intervention, but only a small effect compared with non-computerized training. A course with 11 additional clinical cases yielded superior results in the short term. However, other studies find no significant difference with or without clinical cases.
  • Situational e-learning: This type of interactive teaching, where the learner is placed in a specific situation/context, is effective in improving performance compared to no intervention, but the effect is limited compared to traditional interventions.
  • "Serious games": "Serious games" have not yet demonstrated their effectiveness.
  • MOOCs: No comparative studies were found regarding MOOCs.

Effect over Time: The improvement in knowledge with e-learning programs is not always maintained over time, and long-term studies are recommended. Some studies show a sustained gain, while others show a decline in acquired knowledge over time and no specific long-term effect on maintaining skills in certain areas. Online teaching, compared to a control group, is insufficient to maintain the acquisition of knowledge or clinical skills over time, and few studies were found on this topic.

Blended Learning: Blended learning, which combines face-to-face sessions with online training, has shown an improvement in clinical skills, albeit small, in initial clinical training. These face-to-face and online sessions are well received by participants. Blended training may be superior to e-learning alone for acquiring knowledge and clinical skills. Delivering the e-learning after the face-to-face placement may yield the best results. The combination of face-to-face teaching with online training is superior to traditional teaching alone or online training alone for improving knowledge, clinical skills, and clinical outcomes.

In summary, the effectiveness of e-learning is variable and depends on many factors, including instructional design, interactivity, social interaction, and whether it is used alone or in combination with face-to-face training. Kirkpatrick's multilevel assessment is essential for understanding these nuances and determining whether a program is achieving its objectives.

A Concrete Case Study: Applying Kirkpatrick's Model in a Blended Anesthesiology Program (VMHS)

The second source presents a highly relevant case study illustrating the application of Kirkpatrick's model in the healthcare sector: a blended training program (e-learning and face-to-face simulation) aimed at improving psychological safety and soft skills within the anesthesiology teams of the VinMec Healthcare System (VMHS) in Vietnam. This study is particularly interesting because it addresses major challenges such as cultural entrenchment, geographical dispersion, and a strongly hierarchical structure where deference prevailed. The goal was to transform the department into one of the safest in Southeast Asia.

The initiative was conducted over 18 months and involved 112 physicians and nurse anesthetists. The structured program combined online learning modules with on-site simulation sessions. The impact assessment of this intervention was explicitly conducted using the Kirkpatrick model, demonstrating that the model is not merely theoretical but can be applied to evaluate complex, large-scale training programs in a real-world clinical setting.

The implementation team was multifaceted, including health system management, training experts, and simulation specialists. This collaboration was essential to overcoming challenges and ensuring alignment with organizational goals. The project required rigorous planning, needs analysis, the development of tailored courses and scenarios, and the conduct of practical training sessions with interactive debriefings. The challenges encountered during implementation were significant: securing funding (which took nearly a year), skepticism from some senior stakeholders, language barriers (English was the common language but not the native language of most participants), navigating cultural norms and hierarchical structures, and the logistical complexities of training large groups across multiple sites.
The program was made mandatory for all VMHS anesthesia staff as part of continuing education. The pedagogical approach leveraged e-learning to teach and demonstrate the soft skills needed for crisis management and to raise awareness of performance gaps. The two-day, in-person simulation sessions, called DOMA (Development of Mastery in Anesthesiology), combined innovative theoretical lectures immediately illustrated by immersive simulation scenarios. The goal was to provide hands-on experience and highlight the gap between ideal and actual performance. Interactive debriefings following the simulations allowed for an in-depth exploration of participants' understanding and mastery of the skills, offering opportunities for reflection and feedback.

Using Kirkpatrick's model, the VMHS study was able to assess the impact of this complex intervention across four distinct dimensions, thus providing a comprehensive picture of its effectiveness.

Source: BMJ

Measuring Impact at Each Level: VMHS Study Results

The VMHS study rigorously measured the impact of its blended learning program using specific indicators for each Kirkpatrick level.

Level 1 – Satisfaction: Participant satisfaction was assessed through anonymous online surveys at the end of each in-person simulation session. The overall results were very positive: all participants (100%) recommended the training and felt it would change their practice. This indicates strong acceptance of the program by VMHS anesthesia staff, a promising starting point for the higher levels of assessment. Satisfaction is the primary expected outcome at Level 1 and aims to encourage adherence to the concept and the approach.

Level 2 – Learning: Improvement in knowledge was measured through anonymous pre- and post-tests conducted on the e-learning platform. Over the 18 months of the study, the 112 participants completed 4,870 hours of e-learning, demonstrating strong engagement (an average of 43 hours and 29 minutes per participant). Of the 3,213 modules started, 91% were completed in full. The results showed a highly significant improvement between the pre- and post-tests (success rate of 41% vs. 89%, p<0.001). This demonstrates the effectiveness of the e-learning tool for knowledge acquisition and its value in preparing participants for the simulation sessions, thereby maximizing the benefits of in-person learning. Similar pre- and post-tests were also used for the two-day simulation sessions via an online survey system.
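
As a quick check, the engagement figures reported above can be reproduced with a few lines of arithmetic (the input numbers are the study's; the code itself is merely illustrative):

```python
# Reproducing the reported engagement arithmetic (illustration only).
total_hours = 4870        # total e-learning hours over 18 months
participants = 112
modules_started = 3213
completion_rate = 0.91    # share of started modules completed 100%

avg_hours = total_hours / participants
hours, minutes = int(avg_hours), round((avg_hours % 1) * 60)
print(f"Average engagement: {hours} h {minutes:02d} min per participant")

print(f"Modules fully completed: about {round(modules_started * completion_rate)}")
```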

Level 3 – Behaviors: Behavior changes (reported and observed) were assessed through anonymous digital impact surveys administered to the anesthesia teams before each two-day simulation session. These surveys, conducted at 6, 12, and 18 months, aimed to assess the perceived impact of the training on behaviors applied and observed in everyday clinical situations. Figure 1 (not shown here but described in the source text) illustrates the incidence of observed or reported changes, which were significant and stable over the 18-month period. More than 93% of participants perceived the changes as lasting. Among the changes most frequently cited in the surveys, communication (including the ability to "speak up") emerged as the most important change (cited by 46% to 63% of respondents), followed by teamwork (including task allocation and coordination, cited by 35% to 57%) and the use of cognitive aids (cited by 20% to 57%). The fact that the intervention was applied to all VMHS anesthesia teams within a short time frame is considered to have facilitated the successful implementation of these changes.
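
The percentages above come from counting how often each type of change is cited across survey responses. A minimal sketch of such a tally (with invented responses) might look like this:

```python
# Minimal sketch: tallying behavior changes cited in an impact survey.
# The responses below are invented for illustration.
from collections import Counter

responses = [
    ["communication", "teamwork"],
    ["communication", "cognitive aids"],
    ["teamwork"],
    ["communication"],
    ["cognitive aids", "communication"],
]

counts = Counter(change for cited in responses for change in cited)
n_respondents = len(responses)
for change, count in counts.most_common():
    print(f"{change}: cited by {count / n_respondents:.0%} of respondents")
```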

Level 4 – Outcomes: The impact on the organization, quality, and safety (Level 4) was estimated from the results of the VMHS annual quality and safety audits and from the evolution of reported events in the VMHS anesthesiology department. The annual VMHS safety audits, based on 124 indicators scored from 1 to 10, showed an improvement in the overall safety score and a reduction in the dispersion of scores between 2021 (the baseline year before the intervention) and 2022 and 2023. The reduced dispersion is interpreted as a homogenization of practices toward greater safety. The impact on safety culture and the willingness to report adverse events was assessed by collecting the number of adverse events reported before and after the start of the intervention. The number of reported events increased (figure 3A, not shown here), and the number of people reporting no events was halved compared to the previous year. The increase in adverse event reporting, though counterintuitive, is in fact a positive indicator of a strengthened safety culture and greater psychological safety, because staff feel more comfortable reporting errors and near misses without fear of repercussions. The number of events reported at VMHS after 18 months was in fact nine times lower than the figures in the comparable North American database (figure 3B, not shown here). The study concludes that the educational intervention, being the only one implemented in the VMHS anesthesiology department during the observed period, is likely the cause of these objective improvements.
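
The audit analysis described above amounts to tracking the mean and dispersion of indicator scores from year to year: a rising mean with a shrinking spread suggests practices converging toward greater safety. Here is a minimal sketch with made-up numbers (the real audits score 124 indicators from 1 to 10):

```python
# Minimal sketch of a Level 4 audit analysis: mean and dispersion of
# indicator scores per year. Scores below are made up; real VMHS audits
# rate 124 indicators from 1 to 10.
import statistics

audit_scores = {
    2021: [6, 7, 5, 8, 6, 4, 7, 5],   # baseline year
    2022: [7, 8, 7, 8, 7, 6, 8, 7],
    2023: [8, 8, 8, 9, 8, 7, 8, 8],
}

for year, scores in audit_scores.items():
    print(f"{year}: mean = {statistics.mean(scores):.2f}, "
          f"std dev = {statistics.stdev(scores):.2f}")
```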

The results of the VMHS study show clear and lasting positive effects of the intervention across all four Kirkpatrick levels. They demonstrate that psychological safety and its associated skills are not merely a passive by-product of good leadership, but a competency that can be systematically developed through structured training interventions. Integrating e-learning and large-scale simulation made it possible to achieve behavior change even in hierarchical and geographically dispersed systems.

Advantages and Challenges of Using Kirkpatrick in the Healthcare Context

Using the Kirkpatrick model to evaluate healthcare training, whether online, in-person, or blended, presents several advantages and challenges, as the sources illustrate.

Advantages:

  • Structured framework: The model provides a clear, hierarchical structure for planning and conducting the evaluation. It ensures that the assessment is not limited to satisfaction or immediate learning, but also seeks to measure application and long-term results.
  • Comprehensiveness: By covering four levels, the model enables a more complete evaluation of a training program's impact, from learners' perceptions to concrete results.
  • Relevance to healthcare: The Kirkpatrick levels (satisfaction, knowledge, practice change, clinical outcome) are directly relevant to assessing the impact of training on quality of care and patient safety. Bodies such as BEME have adopted this hierarchy.
  • Applicability to various modalities: The model can be adapted to evaluate different training modalities, including e-learning and blended learning. The VMHS study is a concrete example.
  • Demonstration of value: Evaluation at the higher levels (3 and 4) is essential for demonstrating the strategic value of training and justifying investments. The VMHS Level 4 results, showing improved safety scores and increased incident reporting, are tangible proof.
  • Support for continuous improvement: Evaluation provides valuable data for identifying what works well and what needs improvement in training programs.

Challenges:

  • Difficulty measuring the higher levels: Levels 3 (behavior) and 4 (outcomes) are often the most difficult and costly to evaluate reliably, because they require workplace follow-up and the collection of objective data on practices and clinical outcomes. The VMHS study used impact surveys reporting perceived changes (which can be subjective) alongside objective audits and incident reports (more reliable, but influenced by other factors).
  • Attributing causality: It can be difficult to establish a direct causal link between the training and the changes observed at Levels 3 and 4, because many other factors (work environment, managerial support, other initiatives, etc.) can influence behavior and outcomes. The VMHS study notes that its intervention was the only significant one implemented during the observed period, which strengthens the causal hypothesis, while acknowledging the possible influence of unidentified external factors.
  • Cost and resources: Conducting a complete evaluation at all levels, particularly follow-up and the collection of objective data, can be expensive and require substantial time and staff.
  • Complexity of the healthcare environment: Healthcare is a complex environment characterized by hierarchical structures, strong cultural norms, and geographical dispersion. These factors can make the transfer of learning and the measurement of impact more difficult. The VMHS study had to overcome these challenges to evaluate the effectiveness of its program.
  • Lack of standardized tools: Although evaluation grids and tools exist for e-learning programs, they must be selected and adapted to the specific objectives of the evaluation. Standardized tools for precisely measuring practice changes or clinical outcomes linked to a given training program are not always available in every context. The VMHS study developed its own measurement methods (impact surveys, internal audits, tracking of incident reports).

Despite these challenges, the Kirkpatrick model remains a powerful way to structure the evaluation of healthcare training and to provide evidence of its effectiveness beyond simple knowledge acquisition. The key takeaways from the sources stress the importance of measuring impact on knowledge, clinical skills, and clinical outcomes, elements directly measurable at Kirkpatrick Levels 2, 3, and 4.

Toward a Stronger Safety Culture Through Training Evaluation

The rigorous application of the Kirkpatrick model, particularly in contexts like that of the VMHS study, highlights how training evaluation can directly contribute to strengthening a safety culture. The VMHS study explicitly aimed to transform organizational culture to improve psychological safety.

Psychological safety, defined as team members' ability to speak up, take innovative risks, and admit their mistakes without fear of negative consequences, is a crucial element for high-performing healthcare teams. Leaders play a key role in promoting a psychologically safe environment that stimulates effective communication, improves teamwork and decision-making, and encourages incident reporting. Developing these non-technical skills (soft skills) is essential for safer care.

The VMHS study demonstrated that psychological safety is not just an abstract ideal, but a concrete, trainable skill with a measurable impact on patient safety. The blended training program (e-learning and simulation) led to a significant increase in speaking-up behaviors, improved teamwork, and greater use of cognitive aids (Level 3). Above all, it resulted in an increase in safety incident reports and improved annual safety audit scores (Level 4). The increase in incident reporting is a key indication of a more open, less punitive safety culture.

Evaluation with the Kirkpatrick model made it possible to objectively document these changes. Level 1 (satisfaction) showed strong buy-in. Level 2 (learning) confirmed knowledge acquisition through e-learning. Level 3 (behaviors) highlighted the perceived and reported changes in communication and teamwork. And Level 4 (outcomes) demonstrated the impact on objective organizational safety indicators.

These results reinforce the idea that investment in training for non-technical skills and psychological safety, evaluated in a structured way, is a powerful lever for improving the quality and safety of care. The sources stress that leadership is a decisive factor in overcoming obstacles and ensuring the success of such initiatives. The commitment of VMHS leadership was essential to embedding psychological safety across a large, hierarchical, and geographically dispersed organization.

In conclusion, the Kirkpatrick Model provides an indispensable framework for evaluating the effectiveness of healthcare training programs, whether e-learning, simulation, or blended learning. It makes it possible to go beyond measuring satisfaction or knowledge acquisition and to assess whether training actually leads to behavior change and, more importantly, to concrete improvements for patients and the organization. In a sector where safety is paramount, rigorous training evaluation is not only good practice but a strategic necessity for building and maintaining a strong safety culture and continuously improving the quality of the care delivered. Initiatives like the VMHS program, evaluated with the Kirkpatrick model, offer a replicable template for other healthcare systems wishing to place psychological safety and training evaluation at the heart of their strategy. The success of such programs rests on sound instructional design, careful implementation, learner follow-up, and above all, strong leadership commitment to support cultural change. Kirkpatrick-based evaluation guides this process and measures the distance traveled toward a safer, more effective healthcare system.

Sources:

https://www.has-sante.fr/upload/docs/application/pdf/2015-09/4e_partie_guide_e-learning.pdf

https://bmjopenquality.bmj.com/content/14/2/e003186

Frédéric MARTIN
SafeTeam Academy
