Blame culture in small to mid-size healthcare entities

Blame Culture in Healthcare: Why Leaders Blame Staff and Why It Harms the Organization

Executive Summary

Blame-focused responses in healthcare—often experienced as “name, blame, shame” after an incident, near-miss, or missed target—are psychologically tempting and institutionally reinforced. Yet they are structurally misaligned with what patient safety research has emphasized for decades: adverse events are usually produced by systems of care (work design, staffing, environment, technology, communication, and governance), not a single individual’s moral failure. The landmark report To Err Is Human: Building a Safer Health System[1] crystallized this systems view and estimated large preventable harm and cost associated with medical error, helping establish safety culture and learning systems as core organizational responsibilities. [2]

Healthcare leaders feel pressure to blame for identifiable psychological reasons—especially attribution bias (overweighting individual fault relative to context), self-protective “credit/culprit” dynamics, and threat responses that narrow sensemaking under crisis. These pressures intensify in healthcare because outcomes can be fatal, public trust is central, and accountability ecosystems (licensure, credentialing, malpractice exposure, accreditation, CMS reporting) create fear that ambiguity or nuance will be interpreted as weakness. Empirical work on incident reporting shows that fear of retaliation and fear of litigation are concrete barriers to reporting and learning—exactly the conditions blame cultures create and sustain. [3]

The organizational harm is not subtle. Blame cultures predictably reduce incident and near-miss reporting, thereby weakening early hazard detection and slowing corrective action. A study in Veterans Health Administration[4] hospitals found that a measurable subset of employees would not report errors—most often due to fear of retaliation—and that reporting willingness differed markedly between psychologically safe and unsafe hospitals. [5] AHRQ primers similarly describe underreporting and identify blame and fear of repercussions as major barriers. [6] Suppressed reporting compounds risk: hazards persist, events recur, harm rises, and the organization ultimately bears greater clinical, legal, and reputational exposure than any single employee does. [7]

Blame also accelerates workforce harm in ways that feed back into patient outcomes: morale declines, burnout rises, turnover increases, and innovation and improvement work stalls. Meta-analyses link burnout among clinicians (physicians and nurses) to poorer patient safety and quality outcomes, including increased odds of unsafe care and lower safety performance—effects that become more likely when staff fear punishment for bringing problems forward. [8] The Joint Commission[9] has explicitly linked intimidating/disruptive behaviors to medical errors, preventable adverse outcomes, increased cost of care, and the departure of qualified staff—mechanisms that resemble and reinforce blame climates. [10]

A practical path exists that preserves accountability while reducing organizational risk: just culture, high-quality RCA² investigations, blameless postmortems adapted for clinical operations, psychological safety interventions, and reliable follow-through on systems fixes. These approaches are consistent with major safety authorities and healthcare guidance: Agency for Healthcare Research and Quality [11] resources emphasize blame as a reporting barrier; Joint Commission guidance emphasizes a nonpunitive, learning-oriented response; and the World Health Organization [12] global action plan promotes safety culture and just culture concepts. [13]

Drivers of blame in healthcare leadership

Healthcare leaders rarely make “pro-blame” a stated value; blame typically emerges as a rapid, simplifying response when leaders feel they must quickly explain a bad outcome, reassure stakeholders, and appear in control.

A central psychological driver is the fundamental attribution error: observers systematically over-attribute others’ behavior to personal dispositions and under-attribute it to situational constraints (e.g., workload, unclear policies, alarm fatigue, inadequate staffing, flawed EHR workflows). In real incident response, leaders often have limited time and incomplete visibility into frontline constraints, making individualizing narratives feel more “certain” than systems narratives. [14]

A second driver is self-serving attributional bias: humans tend to take more personal credit for successes and shift causality outward for failures. In healthcare hierarchies—where leaders are accountable to boards, regulators, and public opinion—this bias can subtly push executives and managers toward “frontline fault” explanations, especially when leadership decisions (resource allocation, staffing models, productivity targets) are implicated. Meta-analytic evidence indicates strong, pervasive self-serving attribution patterns across many contexts, consistent with these dynamics. [15]

A third driver is the threat-rigidity effect: under threat (sentinel event, media scrutiny, financial stress, litigation risk), organizations constrict information processing, centralize control, and rely on overlearned responses. In healthcare, this often looks like command-and-control incident handling, rapid discipline, and “closure” via identifying a culprit—actions that can reduce leaders’ anxiety in the short term but impair learning. [16]

Legal/financial fear is not merely speculative. Empirical studies of incident reporting barriers identify fear of litigation as a prominent obstacle, and reviews of reporting barriers highlight fear of individual and legal charges among healthcare personnel—pressures that can lead leaders to prefer defensible stories centered on individual deviation rather than organizational design. [17] This interacts with leaders’ reputational concerns: claiming “we removed the bad actor” can appear to restore trust faster than admitting systemic fragility—despite the latter being more safety-relevant. [18]

Healthcare-specific organizational and cultural factors

Healthcare’s culture and governance environment can unintentionally reward blame, even in organizations that publicly endorse learning.

First, healthcare is structurally hierarchical: role gradients (attending–resident–nurse–tech–support staff) shape who feels safe speaking up and who is believed. Research in VHA hospitals shows psychological safety varies with supervisory level, and fear of retaliation is a common deterrent for error reporting—evidence of power-linked voice suppression. [5]

Second, disruptive or intimidating behaviors—whether overt (yelling) or subtle (humiliation, retaliation, scapegoating)—undermine teamwork and communication, which are foundational for safe care. Joint Commission guidance explicitly ties such behaviors to medical errors, preventable adverse outcomes, increased cost, and staff departures. [10]

Third, performance management and external accountability systems can inadvertently reduce safety to “performance theater.” In the U.S., Centers for Medicare & Medicaid Services (CMS) quality reporting increasingly includes structural measures aimed at driving patient safety action and governance; while beneficial in intent, these regimes can intensify leaders’ fear of poor scores, citations, or penalties—conditions under which blame-based control can feel attractive. [19] Accreditation requirements reinforce the need for credible investigation processes: Joint Commission policy and procedures require a systematic analysis (often RCA), corrective action planning, implementation, and monitoring of effectiveness after reviewable sentinel events. [20]

Fourth, healthcare reporting systems are “passive surveillance” and are widely understood to undercapture adverse events and near misses. AHRQ’s investigators’ primer explicitly notes underreporting and identifies blame culture and fear of repercussions as barriers; if leaders respond to reports punitively, the system becomes less informative over time. [6]

Finally, national health systems have increasingly formalized “learning rather than blame” expectations. In England, NHS patient safety culture guidance explicitly frames the shift as a move toward learning systems (supported by PSIRF) rather than performance management, and it positions the NHS Just Culture Guide as a tool to ensure staff are not treated unfairly after incidents. [21]

Harms to healthcare organizations

Reduced reporting and weaker hazard detection

Patient safety depends on surfacing weak signals. When staff expect punishment, they rationally avoid reporting, especially on near misses and ambiguous events. AHRQ learning materials explicitly identify blame culture and fear of repercussions as barriers to reporting and note that adverse events and near misses are underreported. [6] AHRQ issue analyses also show blame is present in a substantial portion of incident reports, indicating that blaming narratives can become institutionalized even within reporting systems intended for learning. [22]

Patient safety degradation and repeated harm

Suppressed reporting and shallow “person fixes” allow system vulnerabilities to persist. A scoping review summarizes evidence linking patient safety culture measures with adverse event rates, reinforcing that culture is not a “soft” concept but is associated with hard outcomes. [23] In parallel, global and national safety strategies emphasize that safety culture and just culture are core enabling conditions for reducing avoidable harm. [24]

Workforce outcomes that rebound onto operations

Blame environments predictably worsen morale and retention. Joint Commission’s disruptive behavior alert explicitly links intimidation/disruption to clinician departures and higher cost of care, implying organizational instability and recruitment/retention burdens. [10] In addition, multiple meta-analyses link clinician burnout to reduced safety and quality: physician burnout is associated with substantially higher odds of unsafe care, and nurse burnout is associated with lower safety/quality and lower patient satisfaction. These patterns are operationally relevant because blame climates can amplify drivers of burnout (fear, lack of support, moral distress, lack of improvement capacity). [8]

Innovation loss and stalled improvement

Innovation in healthcare often means surfacing process defects, questioning defaults, and trying safer redesigns—behaviors that require psychological safety. A large meta-analysis finds that psychological safety is associated with learning behaviors, information sharing, and performance-relevant outcomes; blame climates undermine the very social conditions that support these behaviors. [25] When incident reporting becomes performative or dangerous, improvement work narrows to compliance activities rather than adaptive learning. [18]

Legal, financial, and reputational consequences

The near-term “benefit” of blaming (signaling accountability) can increase long-term exposure by preventing early corrective action. The IOM estimated large national costs of preventable adverse events (including medical errors resulting in injury), with substantial healthcare cost components—costs that organizations ultimately absorb through additional care, inefficiency, claims, and lost productivity. [26]

Reputation risk can become existential when a negative culture prevents staff from raising concerns or leadership from hearing them. In the UK, the government-published Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry[27] concluded that severe patient suffering was linked to board-level failure, failure to listen to patients and staff, and an “insidious negative culture,” illustrating how culture and governance failures can drive large-scale harm and lasting reputational damage. [28]

Differential impacts on clinical and non-clinical staff

Blame does not fall evenly; it tends to follow power gradients and professional status, which matters because it shapes who reports, who stays, and whose expertise is lost.

Clinical staff at the “sharp end” (nurses, residents, pharmacists, technicians) often carry the immediate blame load because they are the visible actors at the point of care. In a study of voluntary reporting, nurses were the highest reporting group; this is double-edged: it reflects proximity to events but can also make nursing staff more exposed to punitive responses if leaders do not protect reporting. [29]

Physicians and trainees can be uniquely vulnerable to authority gradients and career threats. Research on physician incident reporting barriers highlights underreporting as multifactorial, including fear of retaliation and lack of feedback; separate findings indicate residents may be reluctant to raise safety concerns when incidents involve those in authority. [30] These patterns can create “silent hazards” precisely in high-acuity environments where the cost of silence is high. [31]

Non-clinical staff (unit clerks, transporters, environmental services, food services, security) may be especially exposed to blame because they often have less professional status and fewer formal channels to contest narratives—even though their work is tightly coupled to infection prevention, timely care, and safe operations. Evidence that psychological safety increases with supervisory level implies that lower-power roles—often including non-clinical positions—are less likely to report risks when blame is likely. [5]

Vulnerable groups within the workforce (students, new grads, international staff, and staff in precarious employment) face heightened risk because blame interacts with evaluation and job security. The “second victim” literature documents that adverse events can produce significant psychological and professional distress in clinicians; punitive safety cultures are associated with higher second-victim-related harm, and systematic reviews describe barriers to implementing staff support programs, including persistent blame culture and reluctance to show vulnerability. [32] Evidence syntheses also report prevalence ranges for second victim experiences and indicate gendered fears (e.g., women reporting greater fear of losing position in some reviews), suggesting that blame can compound inequities. [33]

Mechanisms linking blame to patient and organizational outcomes

The core pathway is a fear → silence → weak learning → repeat harm mechanism, amplified by burnout and turnover dynamics.

```mermaid
flowchart TD
  A[Incident / near miss / harm occurs] --> B[Leader response perceived as blame or punishment]
  B --> C[Fear: retaliation, litigation, career damage]
  C --> D[Reduced reporting + reduced speaking up]
  D --> E[Hazards stay hidden; weak signals missed]
  E --> F[Superficial fixes: retrain or discipline individuals]
  F --> G[System vulnerabilities persist]
  G --> H[Repeat events, higher patient harm risk]
  H --> I[Organizational costs: claims, regulatory scrutiny, inefficiency, reputational loss]
  B --> J[Lower psychological safety]
  J --> D
  B --> K[Burnout and moral distress]
  K --> L[Turnover, staffing instability, reduced teamwork]
  L --> H
```

This mechanism is consistent with evidence that blame reduces reporting and that fear of repercussions and retaliation deters disclosure. [34] It is also consistent with the growing evidence base linking burnout to patient safety and quality outcomes—showing that a punitive climate (a burnout driver) increases operational risk. [8]

A second mechanism is threat-induced cognitive narrowing: after a sentinel event or publicized harm, leadership may centralize control and seek rapid closure, which can prematurely stop investigations at the level of individual acts rather than system conditions. Threat-rigidity theory predicts restricted information processing and a constriction of control under threat, which aligns with why organizations revert to blame under high scrutiny. [16]

Alternatives for hospitals and health systems

Comparison table: blame-focused vs learning-focused approaches

| Metric | Blame-focused response | Learning-focused response (just culture + RCA² + psychological safety) |
| --- | --- | --- |
| Morale and trust | Declines as staff anticipate punishment; fear rises and silence spreads. [35] | Improves as reporting is protected and leaders model nonpunitive learning. [36] |
| Turnover | Increases as intimidation/disruption and burnout rise. [37] | Decreases over time as workforce support and safety climate improve; second-victim support reduces trauma-driven exit. [38] |
| Error/near-miss reporting | Decreases due to fear of repercussions and retaliation; underreporting persists. [39] | Increases as reporting is normalized and recognized (“good catches”), improving hazard detection. [40] |
| Patient harm risk | Higher long-run risk due to hidden hazards and repeat events; culture correlates with adverse event rates. [41] | Lower long-run risk through stronger system fixes and continuous learning; culture is an explicit safety lever in national/global plans. [42] |
| Innovation and improvement | Lower: people avoid raising problems or experimenting with safer changes. [43] | Higher: psychological safety supports learning behaviors, information sharing, and improvement. [44] |
| Legal/regulatory risk | Can increase over time because hazards remain, events recur, and investigations may lack credibility. [45] | Can decrease through credible investigations, documented corrective actions, and reduced repeat harm; aligns with accreditation expectations for analysis and monitoring. [46] |

Step-by-step implementation roadmap

Below is a practical sequence designed for hospitals/health systems that need both fair accountability and measurable risk reduction.

1) Adopt a just culture policy with explicit behavioral boundaries. Use a clear “fairness test” that distinguishes human error, at-risk behavior, and reckless behavior, and define aligned responses (console and improve systems; coach and remove incentives for at-risk shortcuts; discipline for reckless/intentional violations). This aligns with NHS just culture guidance and with the learning-based safety culture expectations emphasized by Joint Commission leadership guidance. [47]

2) Separate incident learning from performance management where possible. Staff must believe that reporting is primarily for safety learning, not punishment. AHRQ explicitly identifies blame culture and fear of repercussions as reporting barriers; Joint Commission guidance emphasizes nonpunitive reporting and recognition of reporting behaviors (e.g., “good catches”). [48]

3) Upgrade investigations from “RCA paperwork” to RCA² with high-impact actions. Implement the RCA²: Improving Root Cause Analyses and Actions to Prevent Harm[49] standards: multidisciplinary teams, strong causal analysis, and—critically—an action hierarchy that prioritizes system redesign over weak interventions (e.g., “retraining only”). This is widely referenced in patient safety improvement resources and is consistent with accreditation expectations for systematic analysis and corrective action monitoring. [50]

4) Create blameless postmortems for clinical operations and support services. Adapt the core practice: assume people’s intent was to do the right thing, focus on conditions that made error more likely, and publish learnings internally (with appropriate privacy). This reduces fear and increases the signal value of reports, aligning with the nonpunitive safety culture guidance emphasized by Joint Commission and AHRQ. [51]

5) Build psychological safety as an operational capability, not a slogan. Use leader behaviors (inviting concerns, thanking reporters, responding with curiosity), structured voice routines (pre-briefs, debriefs, “stop-the-line” protocols), and anti-retaliation enforcement. The VHA evidence shows retaliation fear deters reporting; the psychological safety meta-analysis links psychological safety to learning and performance outcomes—conditions necessary for safer care. [52]

6) Implement second-victim support and post-event workforce care. Programs that support staff after adverse events reduce trauma, help retain staff, and can strengthen the broader support culture. Reviews note that blame culture is a barrier to support program uptake and effectiveness—so support programs and just culture must be implemented together. [53]

7) Align incentives and compliance needs with learning goals. Ensure that reporting volume is not treated as “more errors” and that managers aren’t penalized for discovering hazards. Where external reporting and structural measures apply (e.g., CMS structural measures), translate requirements into internal learning infrastructure rather than PR-driven blame avoidance. [54]
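The fairness test in step 1 can be made concrete as a small decision aid. Below is a minimal sketch in Python, assuming the three-tier model described above; the category names and response wording are illustrative assumptions, not an official just-culture algorithm.

```python
# Illustrative just-culture "fairness test": map the behavior category that an
# incident review assigns to a proportionate organizational response.
# Category names and response text are assumptions for illustration only.

RESPONSES = {
    "human_error": "Console the individual; redesign the system conditions that made the error likely.",
    "at_risk_behavior": "Coach the individual; remove incentives and pressures that reward risky shortcuts.",
    "reckless_behavior": "Apply fair, documented disciplinary action; monitor for recurrence.",
}

def fairness_test(category: str) -> str:
    """Return the response aligned with a reviewed behavior category."""
    if category not in RESPONSES:
        raise ValueError(f"Unknown behavior category: {category!r}")
    return RESPONSES[category]

print(fairness_test("human_error"))
```

The point of the sketch is the separation of concerns: the review determines the category, and the response then follows predictably and consistently—which is what makes staff trust the process.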

Metrics to track (with interpretation guidance)

Hospitals should track metrics that reflect both learning and outcomes, while guarding against gaming.

· Safety culture and nonpunitive response: Use validated AHRQ Surveys on Patient Safety Culture (Hospital SOPS 2.0 and relevant supplements) to track “response to error,” “communication openness,” “reporting,” teamwork, and management support. [55]

· Reporting system health: event reports per 1,000 patient days, near-miss ratio, time-to-triage, time-to-close, percent with high-quality narratives, and percent with feedback delivered to the reporting unit. Underreporting is a recognized limitation; trends should be interpreted as culture signals, not just error frequency. [56]

· Patient harm indicators: unit-level falls with harm, pressure injuries, medication-related harm, HAIs, and other locally material harms; interpret alongside culture scores given evidence that culture relates to adverse event rates. [23]

· Workforce stability and wellbeing: turnover, vacancy, sick leave, burnout measures; given meta-analytic links between burnout and safety/quality outcomes, workforce metrics are leading indicators of patient risk. [57]

· Investigation quality: percent of RCA² actions rated “strong” on an action hierarchy, action completion and effectiveness verification, recurrence rates for the same hazard class. [58]
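Two of the reporting-system metrics above reduce to simple arithmetic. A minimal sketch with hypothetical monthly counts (the function and variable names are assumptions for illustration):

```python
# Illustrative computation of two reporting-system health metrics:
# event reports per 1,000 patient days, and the near-miss-to-harm ratio.
# All input counts below are hypothetical.

def reports_per_1000_patient_days(report_count: int, patient_days: int) -> float:
    """Reporting rate normalized to 1,000 patient days of exposure."""
    return 1000 * report_count / patient_days

def near_miss_ratio(near_misses: int, harm_events: int) -> float:
    """Near misses per harm event; a rising ratio suggests hazards are caught earlier."""
    return near_misses / harm_events if harm_events else float("inf")

# Hypothetical month: 120 reports over 8,000 patient days; 90 near misses vs. 30 harm events.
rate = reports_per_1000_patient_days(120, 8000)  # 15.0
ratio = near_miss_ratio(90, 30)                  # 3.0
print(f"{rate:.1f} reports per 1,000 patient days; near-miss ratio {ratio:.1f}")
```

Interpreted as culture signals, a rising rate and ratio after a just-culture rollout usually indicate improved disclosure rather than worsening care.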

Expected benefits and common barriers

Expected benefits include higher reporting of “good catches,” faster identification of latent hazards, fewer repeat events, improved retention, and a stronger compliance posture, as investigations and corrective actions are more credible and measurable. These expectations align with major safety guidance emphasizing nonpunitive learning, credible investigations, and culture as foundational. [59]

Common barriers include leadership's fear of appearing weak, the legal department's concern about discoverability, middle-management inconsistency (the most common “trust breaker”), and staff skepticism stemming from prior punitive experiences. These barriers are consistent with threat-rigidity dynamics and with empirical evidence that fear of retaliation and fear of litigation are salient in reporting behavior. [60]

```mermaid
timeline
  title Implementation timeline for hospitals shifting from blame to learning
  0-30 days: Declare just culture principles; clarify boundaries for reckless behavior; anti-retaliation stance; baseline culture survey plan
  30-90 days: Train leaders/managers on just culture decisions; redesign reporting feedback loop; launch second-victim support pathway
  3-6 months: Implement RCA² with action hierarchy and board oversight; publish de-identified learnings; start blameless postmortems for key services
  6-12 months: Tie leader evaluation to safety culture behaviors; verify effectiveness of corrective actions; track recurrence and workforce outcomes
```

Assumptions and unspecified constraints

The country and regulatory context are unspecified. This report, therefore, draws primarily from U.S. sources (AHRQ, Joint Commission, CMS, IOM/National Academies) and supplements them with major UK and global sources (NHS Just Culture guidance, WHO Global Action Plan) where they provide high-authority framing or comparable mechanisms. [61]

Hospital type and scale are unspecified (academic vs. community, single-hospital vs. multi-hospital system). Implementation steps assume a minimum governance capacity (patient safety office, risk management, HR partnership) and can be scaled down for smaller facilities by simplifying investigation workflows and sharing system-level resources across sites. [58]

This report treats blame as a dominant managerial pattern (punitive or stigmatizing default responses to error) rather than legitimate accountability for intentional harm, reckless conduct, or repeated refusal to follow critical safety procedures. Just culture approaches explicitly preserve accountability in those cases; the goal is to reduce unfair blame while increasing system reliability and patient safety. [62]

Citations:

[1] [8] [57] Association Between Physician Burnout and Patient Safety ...

https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2698144

[2] [9] To Err is Human: Building a Safer Health System. Summary

https://nap.nationalacademies.org/resource/9728/To-Err-is-Human-1999--report-brief.pdf

[3] [14] The Intuitive Psychologist and His Shortcomings

https://web.mit.edu/curhan/www/docs/Articles/15341_Readings/Social_Cognition/Ross_Intuitive_Psychologist_in_Adv_Experiment_Soc_Psych_vol10_p173.pdf

[4] [16] [60] Threat Rigidity Effects in Organizational Behavior

https://strategy.sjsu.edu/www.stable/pdf/Staw%2C%20B%2C%20L%20E%20Sundelands%20and%20J%20E%20Dutton%2C%201981%2C%20Administrative%20Science%20Quarterly.%2026%20pp%20501-524.pdf

[5] [31] [52] Psychological safety and error reporting within Veterans ...

https://pubmed.ncbi.nlm.nih.gov/24583957/

[6] [7] [12] [13] [18] [34] [35] [39] [48] [56] [61] Strategies and Approaches for Investigating Patient Safety Events

https://psnet.ahrq.gov/primer/strategies-and-approaches-investigating-patient-safety-events

[10] [37] Behaviors that undermine a culture of safety

https://digitalassets.jointcommission.org/api/public/content/594d3d98c2c64455bebe11d19114afce?v=7d018e57

[11] [27] [50] [58] RCA²: Improving Root Cause Analyses and Actions to Prevent Harm

https://www.ashp.org/-/media/assets/policy-guidelines/docs/endorsed-documents/endorsed-documents-improving-root-cause-analyses-actions-prevent-harm.ashx

[15] Is There a Universal Positivity Bias in Attributions? A Meta-Analytic Review of Individual, Developmental, and Cultural Differences in the Self-Serving Attributional Bias

https://www.researchgate.net/profile/Amy-Mezulis/publication/8347816_Is_There_a_Universal_Positivity_Bias_in_Attributions_A_Meta-Analytic_Review_of_Individual_Developmental_and_Cultural_Differences_in_the_Self-Serving_Attributional_Bias/links/0deec517175a025084000000/Is-There-a-Universal-Positivity-Bias-in-Attributions-A-Meta-Analytic-Review-of-Individual-Developmental-and-Cultural-Differences-in-the-Self-Serving-Attributional-Bias.pdf

[17] [29] Safety incident reporting and barriers (SIRaB) study

https://pubmed.ncbi.nlm.nih.gov/38567698/

[19] [54] Patient Safety Structural Measure: Attestation Guide

https://www.qualityreportingcenter.com/globalassets/2025/09/iqr/pssm-attestation-guide_updated-9-08-2025_vfinal_508.pdf

[20] Sentinel Event Policy and Procedures

https://www.jointcommission.org/en-us/knowledge-library/support-center/standards-interpretation/sentinel-event-policy-and-procedures

[21] Improving Patient Safety Culture: A practical guide

https://www.england.nhs.uk/wp-content/uploads/2023/07/improving-patient-safety-culture-a-practical-guide-v2.pdf

[22] Nature of blame in patient safety incident reports

https://psnet.ahrq.gov/issue/nature-blame-patient-safety-incident-reports-mixed-methods-analysis-national-database

[23] [41] The association between patient safety culture and adverse ...

https://pmc.ncbi.nlm.nih.gov/articles/PMC10053753/

[24] [42] Global Patient Safety Action Plan 2021–2030: towards eliminating avoidable harm in health care

https://irp.cdn-website.com/812f414d/files/uploaded/GPSAP-2021-2030.pdf

[25] [43] [44] Psychological Safety: A Meta‐Analytic Review and Extension

https://digitalcommons.odu.edu/context/management_fac_pubs/article/1018/viewcontent/Klinger_2017_PsychologicalSafetyAMetaAnalytic.pdf

[26] Executive Summary - To Err is Human - NCBI Bookshelf

https://www.ncbi.nlm.nih.gov/books/NBK225179/

[28] Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry

https://assets.publishing.service.gov.uk/media/5a7ba0faed915d13110607c8/0947.pdf

[30] Barriers to Incident Reporting by Physicians: A Survey ...

https://pmc.ncbi.nlm.nih.gov/articles/PMC11260437/

[32] Patient Safety Culture and the Second Victim Phenomenon

https://pmc.ncbi.nlm.nih.gov/articles/PMC5333492/

[33] Coping strategies in health care providers as second ...

https://onlinelibrary.wiley.com/doi/10.1111/inr.12694

[36] [40] [51] [59] The essential role of leadership in developing a safety culture

https://lhatrustfunds.com/assets/uploads/documents/SEA-57-Safety-Culture-and-Leadership-FINAL3.pdf

[38] [53] A Systematic Review of Second Victim Support Resources

https://www.mdpi.com/1660-4601/18/10/5080

[45] [46] Sentinel Event Policy (SE)

https://digitalassets.jointcommission.org/api/public/content/419ec16a9c9f4198ae703ca97d6893ce?v=ec99831a

[47] [49] [62] A just culture guide

https://cpcw.org.uk/wp-content/uploads/sites/19/2018/03/180316-NHSi_just_culture_guide_A3.pdf

[55] Surveys on Patient Safety Culture | PSNet

https://psnet.ahrq.gov/issue/surveys-patient-safety-culture
