Research Regarding Debriefing as Part of the Learning Process
SIMULATION IN HEALTHCARE
2011; 6: S52-S57
Debriefing is a process involving the active participation of learners, guided by a facilitator or instructor, whose primary goal is to identify and close gaps in knowledge and skills. A review of existing research and a process for identifying future opportunities were undertaken. A selective critical review of the literature on debriefing in simulation-based education was performed, followed by an iterative process of analysis, input gathered from audience participants, and consensus-based synthesis. Research is sparse and limited in presentation for all important topic areas in which debriefing is a primary variable. The importance of a standard format for reporting data on debriefing in a research context was recognized, and a "who, when, where, what, why" approach was proposed. A graphical representation of the characteristics of debriefing studies (Sim-PICO) was also developed to help guide simulation researchers in appropriate experimental design and reporting. A few areas of debriefing practice with obvious gaps deserving study were identified, such as comparing debriefing techniques, comparing trained versus untrained debriefers, and comparing the effect of different debriefing venues and times. A model for publication of research data was developed and presented, which should help researchers clarify methodology in future work.
View details for DOI 10.1097/SIH.0b013e31822724d0
View details for Web of Science ID 000294209700009
View details for PubMedID 21817862
Use of Medical Simulation to Explore Equipment Failures and Human-Machine Interactions in Anesthesia Machine Pipeline Supply Crossover
ANESTHESIA AND ANALGESIA
2010; 110 (5): 1292-1296
High-fidelity medical simulation can be used to explore failure modes of technology and equipment and human-machine interactions. We present the use of an equipment malfunction simulation scenario, oxygen (O(2))/nitrous oxide (N(2)O) pipeline crossover, to probe residents' knowledge and their use of anesthetic equipment in a rapidly escalating crisis. In this descriptive study, 20 third-year anesthesia residents were paired into 10 two-member teams. The scenario involved an Ohmeda Modulus SE 7500 anesthetic machine with a Datex AS/3 monitor that provided vital signs and gas monitoring. Before the scenario started, we switched the pipeline connections so that N(2)O entered through the O(2) pipeline and vice versa. Because of the switched pipelines, the auxiliary O(2) flowmeter delivered N(2)O instead of O(2). Two independent expert raters reviewed the videotaped scenarios and recorded the alarms explicitly noted by participants and the methods of ventilation used. Nine pairs became aware of the low fraction of inspired O(2) (FiO(2)) alarm. Only 3 pairs recognized the high fraction of inspired N(2)O (FiN(2)O) alarm. One group failed to recognize both the low FiO(2) and the high FiN(2)O alarms. Nine groups took 3 or more steps before instituting a definitive route of oxygenation. Seven groups used the auxiliary O(2) flowmeter at some point during their management. The fact that so many participants used the auxiliary O(2) flowmeter may expose machine factors and related human-machine interactions during an equipment crisis. Use of the auxiliary O(2) flowmeter as a presumed external source of O(2) contributed to delays in definitive treatment. Many participants also failed to notice the presence of high N(2)O. This may have been attributable, in part, to 2 facts uncovered during our video review: (a) the transitory nature of the "high N(2)O" alert, and (b) the dominance of the low FiO(2) alarm, which many chose to mute. We suggest that high-fidelity simulation may be a promising avenue for further examining hypotheses related to equipment failure modes and clinicians' possible management response strategies.
View details for DOI 10.1213/ANE.0b013e3181d7e097
View details for Web of Science ID 000277130700010
View details for PubMedID 20418294
Leadership lessons from military education for postgraduate medical curricular improvement.
The clinical teacher
2010; 7 (1): 26-31
Quality medical education includes both teaching and learning of data-driven knowledge and appropriate technical skills, as well as tacit behaviours such as effective communication and professional leadership. These implicit behaviours, however, are not readily adaptable to traditional medical curriculum models. This manuscript explores a medical leadership curriculum informed by military education. Our paediatric anaesthesia residents expressed a strong desire for more leadership opportunity within the training programme. Upon exploration, current health care models for leadership training were limited to short didactic presentations or lengthy certificate programmes, and we could not find an appropriate model for our 1-year fellowship. In collaboration with the US Naval Academy, we modified the 'Leadership Education and Development Program' curriculum to introduce daily and graduated leadership opportunities: starting with low-risk decision-making tasks and progressing to independent professional decision making and leadership. Each resident who opted into the programme had a 3-month role as team leader and spent 9 months as a team member. At the end of the first year of this curriculum, both quantitative assessment and qualitative reflection from residents and faculty members noted significantly improved clinical and administrative decision making; the second-year residents' performance showed further improvement. Medical education has long emphasised subject-matter knowledge as its prime focus, but competency-based medical education requires new curriculum models, and many helpful models can be found in other professional fields. Collaborations between professional educators benefit the students, who learn these new skills; the medical educators, who work jointly with other professionals; and the original curriculum designer, who has an opportunity to reflect on the strengths and weaknesses of his or her model.
View details for DOI 10.1111/j.1743-498X.2009.00336.x
View details for PubMedID 21134139
Patient simulation: a literary synthesis of assessment tools in anesthesiology.
Journal of educational evaluation for health professions
2009; 6: 3-?
High-fidelity patient simulation (HFPS) has been hypothesized as a modality for assessing competency of knowledge and skill, but uniform methods for HFPS performance assessment (PA) have not yet been completely achieved. Anesthesiology as a field founded the HFPS discipline and also leads in its PA. This project reviews the types, quality, and designated purpose of HFPS PA tools in anesthesiology. We systematically reviewed the anesthesiology literature referenced in PubMed to assess the quality and reliability of available PA tools in HFPS. Of 412 articles identified, 50 met our inclusion criteria. Seventy-seven percent of the studies have been published since 2000, and more recent studies demonstrated higher quality. Investigators reported a variety of test construction and validation methods. The most commonly reported test construction methods included "modified Delphi techniques" for item selection, reliability measurement using inter-rater agreement, and intra-class correlations between test items or subtests. Modern test theory, in particular generalizability theory, was used in nine (18%) of the studies. Test score validity has been addressed in multiple investigations, which have shown a significant improvement in reporting accuracy; however, the assessment of predictive validity has been low across the majority of studies. Usability and practicality of testing occasions and tools were only anecdotally reported. To more completely comply with the gold standards for PA design, both shared experience of experts and recognition of test construction standards are required, including reliability and validity measurements, instrument piloting, rater training, and explicit identification of the purpose and proposed use of the assessment tool.
View details for DOI 10.3352/jeehp.2009.6.3
View details for PubMedID 20046456
View details for PubMedCentralID PMC2796725
"A rose by any other name"? Toward a common terminology in simulation education and assessment
CRITICAL CARE MEDICINE
2007; 35 (9): 2237-2238
The role of debriefing in simulation-based learning.
Simulation in healthcare
2007; 2 (2): 115-125