“It was total paralysis, and every birth doctor’s worst nightmare, to find that your own mistake caused the death of a baby. It is a horrible tragedy for the parents, but also a nightmare for the staff at the hospital. I remember when the father looked me in the eyes and asked me why we did not perform a caesarean birth. My reply was that I had made a mistake, and not read the journal properly. If we had followed the original plan, the baby would most probably have lived” (Westad, 2016).

Stian Westad is the head doctor of the women’s clinic at Lillehammer Hospital in Norway. Some years ago, he experienced how a mistake on the part of him and his team caused the death of a baby. The pregnant woman was expecting her first child. The expected date of delivery was approaching, and the woman was anxious, because her stomach was so big. She went to the hospital, and the ultrasound showed that she was expecting a big child. In the journal, the doctor wrote that if the birth had not started by itself within one week, they would proceed to perform a caesarean birth. When the woman returned to the hospital some time later, a new ultrasound was taken, and this time the conclusion was that the child was not so big after all. That turned out to be a fatally mistaken change of view. When the birth started, the doctor decided on a vaginal rather than a caesarean birth. The baby became stuck during the birth, suffered severe brain damage, and died four weeks later.

“The baby would have survived if we had stayed with the original plan, and kept the promise to the parents of performing a caesarean if the birth did not start normally. I feel that we owe it to the parents and the child who died to not hide this away, but speak openly about it, so that we can use it constructively and improve. I told the parents instantly that we, the hospital staff, had made a mistake and that they had done things exactly right” (Westad, 2016). The doctor remained in contact with the parents, and when the woman became pregnant again, they chose to have the same doctor and the same midwife to follow them up. The hospital crew’s openness about their mistake created trust, and this time the process ended as expected, with the birth of a healthy baby.

In the aftermath of the mistake, Westad received support from colleagues, and together they critically scrutinized procedures to strengthen them, to minimize the risk of making the same mistake again. He hopes that his openness will make others come forward and talk about their mistakes. “When we have made a mistake, we have a unique opportunity to improve. If we ignore the mistakes we have made, we are at risk of repeating them, and that would be unforgivable” (Westad, 2016).

This chapter discusses examples from healthcare, both to illustrate further the relevance of the concepts from previous chapters, and to introduce the concept of trust into the discussion about fallibility. Hospital staff face situations where it is important that they voice a concern, and intervene to stop chains of events that may lead to unnecessary injury or death. Hospitals and other organizations in the health sector need to create a barrier system where people do not hesitate to voice their concerns, a communication climate where it is normal and appreciated to intervene when you sense that something is wrong.

The guiding ideas of this chapter are that openness about mistakes (i) can serve as a foundation for trust within a professional unit, (ii) is necessary for further learning and improvement of professional services, and (iii) can strengthen public trust in the service providers. Trust will be understood as a function of ability, benevolence, and integrity (Mayer, Davis, & Schoorman, 1995; Schoorman, Mayer, & Davis, 2007). Trust can be one of the pathways for building high-quality connections at work (Dutton, 2003; Dutton & Heaphy, 2003). The examples in the current chapter illustrate how open talk about failures and mistakes can serve to build, maintain, and even repair trust, and also how a climate for such exchanges needs to be characterized by psychological safety (Carmeli, Brueller, & Dutton, 2009; Edmondson, 1999). What is at stake is also organizational trust (Schoorman, Mayer, & Davis, 1996). Hospitals and other health organizations need to display a willingness and ability to learn from failure in order to be deemed trustworthy by the public. The main data for this chapter comes from interviews with two experienced doctors. Both exemplify a growth mindset (Dweck, 2017) in that they see situations where things go wrong as an opportunity to learn and improve their professional work, individually and in teams.

1 Immediate Acknowledgement

Doctor Westad explains that his motivation for being open about his mistake in not initiating a caesarean birth was the thought that there is good health in doing the right thing immediately. He quickly erased any tendency on the parents’ side to think that they should have done things differently, that the death of their baby had even the slightest connection to any miscalculations on their part. In the light of other dramatic incidents where mistakes lead to bad outcomes, this immediacy seems important. If you hesitate and do not admit a mistake at the beginning, it might become more difficult later, since you then have to explain two things: both the mistake itself and the fact that you did not speak up about it at the first opportunity.

Failure to admit mistakes early can create long and painful processes of denial and blame attribution. One dramatic example is the Hillsborough disaster in Sheffield on April 15, 1989, where 96 supporters were crushed to death at a football match (Scraton, 1999, 2016). It was not until April 2016 that an inquest returned a verdict that the supporters died due to grossly negligent failures by police and ambulance services to fulfill their duty of care to the supporters (Scraton, 2016). For twenty-seven years, the professionals had denied any mistakes, and had instead explained the tragedy as a result of reckless behavior from the supporters themselves. Families and friends endured the extra pain of these speculations and accusations. Doing the right thing immediately, as exemplified by doctor Westad’s response to his own mistake, would have made an immensely positive difference to a great number of people, over a long period of time. It may not have been necessary to take full and unconditional responsibility, but admitting a considerable part of the blame for the tragic events would no doubt have eased the burden on many of those affected.

It is worthwhile to dwell on the idea that there is good health in doing the right thing immediately after a mistake. It is a move that punctures any tendency towards a blame game, the kind of process we have seen after the Hillsborough tragedy. In hospital settings, a blame game often develops in the aftermath of shocking and publicly exposed mistakes. From time to time, in different cultures and settings, an operating team forgets a pair of scissors inside a patient’s abdomen. When the patient returns to the hospital in great pain, and professionals detect the mistake, the game of attributing personal blame can begin. The head surgeon may blame the nurse in charge of counting the instruments. She should have noted the missing instrument, and stopped the suturing of the surgical wound. The doctor himself can deny responsibility for the unfortunate turn of events. A surgeon is normally so preoccupied with the complex and difficult tasks of the operation that it is impossible for him or her to keep track of all the instruments that are in use. It is common that not just one, but two nurses have it as their main responsibility to count the instruments and speak up when one or more go missing. Nevertheless, when things do go wrong, the surgeon can decide to take responsibility on behalf of the team, and not point a blaming finger at one or two colleagues. A heated and public blame game can create an impression of an unprofessional and divided workplace, characterized by unhealthy individual strife rather than a team-oriented organization where colleagues share the responsibility in the face of misadventure. Dutton (2003, p. 97) has noted how public chastising of people for poor performances is sometimes seen as a necessary way of being tough, but is also likely to be a “trust killer”, corroding professional relationships at work.

We can attempt to generalize from Westad’s behavior to a principle for professionals to follow in situations similar to the one he faced, highlighting the quality of immediacy in the response.

Principle of immediate acknowledgement: When you realize that your decision or behavior has caused harm, admit it and take responsibility immediately.

In the critical aftermath of a bad outcome, victims can be particularly susceptible to blaming themselves, no matter how irrational that may seem. When the professional meets them very early with an acknowledgement of responsibility, that causal attribution is much less likely to happen. One important dimension of the current example is that Westad could immediately grasp the facts of the situation, including his team’s role in bringing about the terrible outcome. In other situations, doctors, nurses, and other healthcare workers may be under pressure to acknowledge that they have made a fatal mistake, but may need more time to evaluate the circumstances and their own contribution to the outcome. The pressure to admit a mistake may also be present in situations where a reasonable interpretation of the facts does not warrant such an act. Angry and frustrated patients or relatives may understandably push for it, even in cases where the healthcare workers have actually done excellent work, to no avail. Those kinds of cases, and how they differ from the ones where the connection between a failure and a bad outcome can be established quickly, will be discussed further in the next section.

Doctor Westad stayed in close contact with the parents after the death of their baby, and when the couple expected another baby, they decided to keep the same team of professionals that had helped them the first time. That trust appears to have been built on the foundation of the immediacy of the acknowledgement. Trust can be explained in terms of three factors that need to be in place in the relation between the trustor and the trustee (Mayer et al., 1995; Schoorman et al., 1996). The trustor must perceive that the trustee has the necessary ability, benevolence, and integrity to be trustworthy. This understanding of the phenomenon overlaps with Dutton’s, who also highlights benevolence and integrity, but uses the more general term of dependability (honesty and reliability) instead of ability (Dutton, 2003, p. 81). The former definition will be used here, since the concept of ability brings forth the non-moral dimension of trust. The trustor must perceive the trustee to have a set of skills and competencies that enables him or her to perform specific tasks. Benevolence is present when the trustor perceives the trustee to be a person who cares about his or her well-being, and gives priority to the trustor’s interests over his or her own egocentric interests. The trustor sees integrity in the trustee when it seems that the trustee believes in and adheres to a set of principles that the trustor finds acceptable. All three factors must be in place for trust to happen. The absence of any one of them creates an imbalance. It does not help to have ability if the trustor doubts your benevolence or integrity, and neither of those two factors, together or alone, will suffice as a foundation for trust without being tied to a proper ability.

A fatal mistake at a hospital can create a deep crisis in the trustworthiness the patient (trustor) sees in the professionals (trustees). All three factors can come under serious doubt, and the trust collapses if one or more of them gives way. It may appear that the doctor or the midwife did not have the necessary abilities to do the job well. Their benevolence can also be questioned. Are they more concerned about themselves than they are about the patient? A mistake with a terrible outcome can also make the patient doubt the integrity of the professionals, and question whether they are really adhering to the right set of principles.

Immediate acknowledgement of a mistake can keep trust alive, or at least create conditions from which to rebuild it. Being open about a mistake is a particularly strong expression of benevolence, in that the professional places the interest of the parents ahead of his own self-interest. Westad could have remained vague about the causes of the baby’s death, and even suggested some kind of fatal involvement or lack of proper care from the parents. That would have protected his own narrowly construed professional reputation, and placed more of the burden on the mother and father, who would have lacked the medical expertise to challenge that account. Westad instead chose to prioritize the parents’ wellbeing. In doing so, he practiced the benevolence at the core of professional ethics of any kind, as outlined by Nanda (2002), who sees the relation between professional and client or patient as one governed by a more or less explicit pledge from the professional: “Trust me; although my self-interest may dictate other actions, I undertake to serve in your best interest.” There can be conflicts of interest, and those situations are distinct from ethical dilemmas, where there are strong ethical reasons to do both A and B, and no matter what you do, something will be wrong. In conflicts of interest, there are strong ethical reasons to prioritize the client or patient, but the professional might be tempted to put his or her self-interest first. That temptation can be particularly strong due to the fact that there is usually a considerable knowledge gap between the provider and receiver of a professional service. The patient seldom has any way of knowing whether the doctor is doing the right thing or not.

The parents whose baby died decided to use doctor Westad and his team when the next pregnancy occurred. That is a particularly powerful expression of trust. Mayer et al. (1995, pp. 712–714) understand trust as a willingness to be vulnerable. You are willing to trust someone, and assume that they have the required ability, benevolence, and integrity to do the work. That assumption may turn out to be false, and you stand to lose, particularly if you engage in actual trusting actions. Mayer et al. (1995, p. 724) distinguish between trust and trusting activities, and only the latter are truly risky. You can have a willingness to be vulnerable without ever becoming engaged in any risky activity, because the occasion never arises. The parents in question did both, in that the willingness was in place, and led to the concrete trusting action of staying with the professional team whose mistake had caused their first baby’s death. They must have had a strong belief in doctor Westad and his team along all three dimensions of trust, despite the tragic outcome of the first pregnancy. It seems that a crucial building block for that trust was Westad’s immediate acknowledgement of responsibility. The parents could go through a grief process without being tormented by thoughts about personal responsibility, and about how they could have done things differently to save the baby.

Admitting the mistake generated trust between doctor Westad and the parents. It is likely that this act of honesty also contributed to a trusting environment at the doctor’s unit, and so to a strengthening of professional connections. When the leader steps forward and talks openly about what he did wrong, it lowers the threshold for others to do the same, and also signals the presence of ability, benevolence, and integrity at the top of the professional unit.

2 Barriers in Healthcare

Mistakes in medicine and healthcare are a considerable source of harm to patients even in countries where the professionals are well educated and trained. One study indicates that medical error is the third most common cause of death in the US (Makary & Daniel, 2016). In the previous chapter, we saw how a safety culture in aviation rests on the assumption that human beings are fallible. Even the most experienced professionals are prone to fail, and that creates a need for a system to detect their mistakes before they lead to harmful outcomes. Human intervention is a crucial element in any barrier system, as it can stop the causal chain of events set in motion by a professional’s mistake.

Doctor Westad was asked what he thought could make a doctor hesitate to tell others about his or her mistake at work. One aspect he drew attention to was that of social cost. “A doctor may think that colleagues will see him as a loser if he admits to a mistake” (Westad, 2016). There is a parallel here to the perceived social cost of asking a colleague for help. That, too, is an initiative people tend to perceive as socially costly. You risk losing face at work if you ask for help and are open about the limitations of your own competence. Brooks, Gino, and Schweitzer (2015) have studied the assumption that people who ask for help are considered to be less competent than those who try to manage on their own, and their data suggest the opposite. When work tasks are complex, the person who asks for help is seen as more competent than the one who does not. The relation between fallibility and asking for help at work will be explored further in Chap. 6.

Another aspect Westad drew attention to is that many young doctors are on temporary work contracts. They may want to be perceived as reliable and infallible professionals, in order to get a permanent job at the hospital. That ambition may also cause them to hold back when they witness mistakes and mishaps in the making in the hospital. They may decide to keep silent, out of a wish not to become an unpopular figure among the senior, permanently employed doctors, who will have a say in whether they get a contract renewal or even permanent employment. They do not perceive speaking up as a good career move.

Doctor Bjørn Atle Bjørnbeth is an experienced gastro surgeon at Oslo University Hospital. We have discussed communication climate and fallibility at work on a number of occasions in the past fifteen years. His experiences in sharing narratives about mistakes are the focal point of the next section. He, too, identifies these career considerations as a major obstacle to openness about mistakes in a hospital, both in reporting one’s own mistakes and in reporting those of colleagues (Bjørnbeth, 2017). From a leadership perspective, it is possible to neutralize both of these reasons for holding back when witnessing something out of the ordinary, by inviting people to voice their concerns, and by rewarding those who actually do.

Westad and Bjørnbeth are both concerned about how social cost and threats to career development can weaken the barrier system at a hospital. As noted in Chap. 4, Reason’s model starts from the assumption that people are fallible, and that each mistake they make starts a causal chain of events that leads to a bad outcome, unless there is a barrier in place to bring that chain of events to an earlier stop. Human intervention is one possible barrier element, and the more people experience that the voicing of a concern will be valued and appreciated in the organization, the more likely it is that they will actually do so.

In order to investigate the two experienced doctors’ thoughts about how young doctors may be reluctant to speak up about mistakes, I interviewed medical student Arne (not his real name) about his experiences of being exposed to hospital work for the first time. One of the episodes he shared exemplifies an attempt to voice a concern and function as the human element in a barrier system:

I was present when a doctor was doing a Nasopharynx test, where the aim is to get a microbiological sample from a specific location 6 to 8 centimeters into the nose cavity. The sample can tell us what kind of airway infection the patient has, and will determine what kind of treatment to pursue. The instrument is a brush, similar to a q-tip, but thinner and longer. I noticed that the doctor only inserted the instrument a few millimeters into the nostrils, and took samples there, where the microbiological flora is different. At school, we had learned that this is a mistake. Not only will the test be useless, but also the result can potentially mislead the doctor into making faulty decisions about treatment. I let the doctor know after the patient had left the room. (Arne, 2016)

The doctor had made a mistake, and the student took an initiative to stop the causal chain of events it put in motion. As such, the student behaved in an exemplary manner. The doctor, however, did not see things that way:

In response, he got mad at me, and said that I was supposed to learn from him, and not the other way around. Anyway, I hope he took in what I said and corrected his understanding of the test for later, because what he did was completely wrong, a bit like making a blood analysis of a urine sample. (Arne, 2016)

Without claiming that the situation above is typical, it is at least a stark example of the concrete circumstances where the communication climate between seniors and juniors in a work environment is put to the test. Arne’s own interpretation was that the more experienced doctor found it socially difficult to be confronted with a mistake by a junior. Here was an opportunity to strengthen the climate for making such interventions, by thanking the junior for the effort, in line with what the pilot did in his meeting with the driver of the pushback tractor, in the example from the introduction of this book. That opportunity was not taken, but at least we can share the hope expressed by the student that the doctor actually absorbed the information and silently revised his understanding of how to perform the test.

Conveying feedback in a constructive way is often easier said than done, particularly for a junior in relation to a senior. It can be a challenge in a range of professional settings, also beyond healthcare. In an interview, finance student Mina Randjelovic explained a strategy she used towards a senior colleague during a work assignment in a company. The two were supposed to have an expert-apprentice relationship, much as in the case of the senior doctor and the medical student mentioned above. From time to time, the senior would make a mistake in preparing a particular business document, and the junior would catch it. “In those situations, I said to him that what he just did was really interesting, and I asked him to explain why he had done it. That way he had to rethink his behavior, and was able to detect and correct the mistake himself” (Randjelovic, 2017). She cleverly spared her colleague the potential humiliation of being corrected by someone less experienced and knowledgeable than he was. Her strategy has a trace of hint and hope to it, and can fail if the recipient is inattentive and slow to respond to the pedagogical effort. A plan B may be needed if the senior is unable to detect his own mistake even after revisiting the faulty reasons for his decision.

The three psychological phenomena highlighted in Chap. 2 can also pose a threat to the robustness of the barrier system in a hospital setting. First, a doctor or nurse can be susceptible to the sunk-cost fallacy, in that he or she has invested professional pride or other currency in one particular way of doing things. The idea of failure may cause cognitive dissonance, a pain that can be held at bay by continuing in the same direction, even with a vague idea that something is not quite right. A turnaround can also require the professional to admit that resources have been wasted, something he or she may be reluctant to do.

Second, there may be a bystander effect, in that many employees are present when the mistake happens, and they have a pacifying effect on each other, a diffusion of responsibility. Each member of a group of twenty doctors and nurses will tend to think that he or she has only one twentieth of the responsibility to intervene. Furthermore, pluralistic ignorance can occur. Each of the twenty may watch out for a response from the other nineteen, and if those remain passive, each will be prone to think that everything is fine, since none of the others take steps to intervene. Each individual can doubt his or her own initial thought that something is about to go wrong, given that there is no sign of a response from any of the others.

Third, the confirmation fallacy may cause professionals in a hospital to overlook and fail to spot obvious missteps from a colleague. The doctors and nurses who have the status of being the best and most experienced are particularly vulnerable to being allowed to continue on erroneous paths, since their colleagues may interpret whatever they do in the best possible light. They are, after all, the experts in their field. One patient story can serve to convey this situation. A woman was admitted to hospital with a broken ankle from a skiing accident. The doctor in charge told her that it was an uncomplicated break, and that it would even be fine to walk on the plastered foot the next day. When the woman tried to do so on the following afternoon, it did not go well. It hurt to put the broken foot down on the floor, and the plaster did not seem to give sufficient support for standing or walking. The woman decided to go back to the hospital to explain the problem. One doctor and two nurses listened to her, and inspected the plastered foot. They agreed that it did not look right, and that the plaster should be removed and replaced by a new one. Before they proceeded to do so, the doctor asked who had put on the original plaster, and the patient answered doctor A. That changed the whole interpretation. “Doctor A is the best orthopedist in the Nordic countries. If he has put on the plaster, then it is supposed to be that way.” The decision to change the plaster was revoked immediately, in the light of who had produced the initial one.

Some weeks after this event, the woman returned to the hospital to have the plaster removed. The doctor in charge assumed that she had already been to take an x-ray to confirm that the break had healed properly. The patient explained that she had been told that this was such an uncomplicated break that no x-ray was needed. “That is very odd. We are always supposed to check that the break has healed before we remove the plaster. Who told you otherwise?” he asked. Again, the patient said that it was doctor A. For the second time, this answer made any misgivings from the professional disappear. “If doctor A says so, it is correct. He is the best orthopedist in the Nordic countries.”

This patient twice experienced that professionals put preliminary evaluations of her situation aside because the doctor in charge was renowned for being the best in his field. Special rules applied to him, or he could allow himself to break the rules that ordinary medical workers must follow. The patient’s ankle healed properly, so the medical treatment she got appears to have been right. However, we can have some doubts about the barrier system at the unit where doctor A works. It appears that everything he does is interpreted in the best possible way, since he is the best orthopedist in the Nordic countries. He is probably an excellent doctor, but the barrier system around him may be weak, in that colleagues commit the confirmation fallacy, assuming that everything he does is correct.

The pilot Jarle Gimmestad has talked of a similar vulnerability among the highest ranked pilots. Juniors and less experienced co-pilots hesitate to intervene when they sense that the seniors are about to make a mistake, often out of reverence for experience, but also for reasons similar to those of the junior doctors on temporary contracts. It may not be a wise career move to challenge a person who has the power to influence your professional prospects. A senior who is aware of this possible weakness in the barrier system can counter it by encouraging the junior to intervene when he or she notices something out of the ordinary in what the more experienced person is doing (Gimmestad, 2016).

3 Sharing Mistakes

One thing that surprised doctor Westad when he started to talk openly about his mistake was how unique and uncommon this kind of sharing appeared to be. The Norwegian Board of Health Supervision got his and the parents’ permission to use the case in its annual report, and that led to media interest and invitations for the doctor to give presentations to healthcare workers about the processes before and after the mistake. This attention indicates that it is quite unusual to speak openly about one’s mistakes in the health sector in Norway, and that there is room for more learning from them among doctors, nurses, and other healthcare workers.

Interviews with doctor Bjørn Atle Bjørnbeth have focused on his experiences with fallibility at work, and the links between being open about one’s mistakes and learning to become a better professional. His initial response to doctor Westad’s act of immediate acknowledgement is that it was a very commendable thing to do, but also that many situations where things go wrong in connection with an operation are very complex. It may not be clear-cut that the negative outcome is a result of faulty professional work. An immediate acknowledgement of responsibility may be what the patient or the relatives want to hear, but the rationale for giving one may not be present. “Prior to an operation, I try to be open about risk to the patients. Sometimes it is difficult to make a precise diagnosis, and we have to proceed without reliable knowledge about what is actually the matter with the patient. The uncertainty means that things might go wrong. Sometimes we operate on people for an illness where it is common that about 30% experience more or less serious complications after the operation, and may have to be re-operated. Patients who end up in that category may respond with anger, and expect me to acknowledge a mistake. It would be wrong of me to do that, since we cannot establish whether the current problem is a result of a professional mistake in diagnosing or operating on the patient, or not. Absorbing and acknowledging information about risk is difficult for patients and relatives, particularly in the light of a bad outcome” (Bjørnbeth, 2017).

Doctor Bjørnbeth has focused on the learning potential of sharing experiences about unexpected complications and mistakes. One episode from his early career set him on the path to understanding the importance of transparency about fallibility. He was on duty at a hospital when a young girl entered as a patient, with symptoms indicating a broken arm. When studying the x-ray of the arm, doctor Bjørnbeth could not see any break. In order to be on the safe side, he knocked on the door of his leader, the chief doctor of his unit, and showed him the x-ray. The senior doctor studied the picture carefully, and came to the same verdict as young doctor Bjørnbeth. The girl had not broken her arm, and could return home without treatment.

Later on the same day, another doctor came to the unit, and looked at the x-ray of the girl’s arm. After careful scrutiny, he spotted a break that was difficult to detect, and that had escaped both doctor Bjørnbeth’s and the chief doctor’s gaze. The girl was sent for again, and this time got the proper treatment in the form of a plaster cast.

The next day, doctor Bjørnbeth took the x-ray back to the chief doctor’s office, and said that the two of them had missed the break in the girl’s arm the day before. The chief doctor asked to see the picture again, and once more studied it carefully. Then he exclaimed: “Yes, of course there is a break in this arm, but this is not the same picture that you showed me yesterday. I would not have failed to spot something so obvious.”

Doctor Bjørnbeth’s trust in his leader diminished after this exchange. It was not primarily the belief in the chief doctor’s professional abilities that disappeared, but more the perception of his benevolence and integrity. This man appeared to prioritize self-interest over the interest of his younger subordinate, and seemed to adhere to dubious principles regarding leadership support. From a moral luck perspective, we may say that he was unfortunate to encounter circumstances that revealed a weakness of character, a lack of substantial leadership capabilities.

Early in his career, doctor Bjørnbeth became convinced that talk about professional mistakes and failures could be a source of deep and profound learning, making it safer to be a patient at a hospital. In tandem with another young doctor, he initiated a new point on the agenda of the weekly unit meeting: Where have we had unexpected complications this week? In which cases have we failed to diagnose and treat patients faultlessly? The idea was to bring up examples where there would be room for strengthening common and individual work procedures and methods. Both doctors exemplified a growth mindset, an assumption that it is possible to strengthen professional capabilities by dwelling on unforeseen complications and failures (Dweck, 2017). In the beginning, the two initiators took turns in explaining to their colleagues how they had failed to give perfect treatment to patients, and how they thought things could be done better. “The more experienced doctors listened in, shook their heads in more or less real disbelief at what we, the young colleagues, told them. They indicated that such events would never have occurred on their watch. The veterans remained silent about their own mistakes. When their patients had complications after an operation, these doctors would explain that in terms of bad luck. They tended to be surprised whenever things went wrong, having expected that their superior professional efforts would lead to a good outcome” (Bjørnbeth, 2017). These veterans appear to have had a fixed mindset (Dweck, 2017), considering their own capabilities to be fully developed and set in stone. After a while, doctor Bjørnbeth and his equally open colleague decided to terminate this point on the meeting agenda, since nobody else stepped forward to share examples of situations where they had failed to be perfect doctors.

Concepts from attribution theory (Harvey, Madison, Martinko, Crook, & Crook, 2014; Heider, 1958) are useful for making sense of the responses from the doctors who denied fallibility and appealed to bad luck in order to explain bad outcomes. People tend to have an innate interest in understanding the causes of their own and others’ successes and failures. An agent’s self-attributions can be internal, pointing to individual efforts and skills, or external, pointing to factors beyond the agent’s control. As mentioned in Chap. 1, a football coach can explain his team’s success in a cup final as a result of “world class coaching”, and thus engage in internal attribution, and a loss in a crucial game as down to bad refereeing or injuries to his own players, applying an external attribution strategy. Learning from failure depends on a realistic balance between internal and external attribution. That balance appeared to be absent in the case of the veteran doctors who refused to participate in talk about failures. When you say that bad luck was the sole explanation for a negative outcome, you also indicate that you have nothing to learn from carefully studying the case at hand, from inviting colleagues to evaluate your work, or from considering whether you could have done things differently. In the case of a successful outcome, the tendency to engage in internal attribution, explaining it primarily as a result of individual expertise and effort, can also hamper learning, in that fortunate dimensions of the situation are ignored. The shift in self-understanding in the light of success and failure can also be interpreted as a move from understanding oneself as an agent, to understanding oneself as a mere pawn (Nygård, 2007).

Attribution error happens when a person over- or underestimates his or her own contribution to a particular outcome (Ross, Amabile, & Steinmetz, 1977). Doctor Westad’s immediate acknowledgement of responsibility appears to have been motivated by a wish to prevent the parents from committing the attribution error of taking part of the blame for the baby’s death.

A relevant development in research on attribution has been to move beyond the distinction between internal and external, to include relational attribution in explanations of outcomes (Eberly, Holley, Johnson, & Mitchell, 2011). The success or failure of interpersonal interactions in organizations often depends on the quality of the teamwork, a dimension not properly captured in the traditional dualistic model of attribution theory. Dialogue between colleagues about recent failures can serve to utilize and strengthen the relational dimension at work.

In recent years, doctor Bjørnbeth has been the leader of a large unit at the Oslo University Hospital. When he took over responsibility as leader, work environment surveys indicated weaknesses in the communication climate. Regular exchanges of harsh words created a climate where people dreaded to go to work. Fallibility was at the core of the troubles, in that some doctors found it difficult to accept that even they could make mistakes, and might depend upon colleagues to intervene. Based on a series of one-on-one employee conversations, doctor Bjørnbeth gradually identified the challenges and set new standards for communication at the unit. A stronger team mentality emerged, where it was considered normal to voice a concern and engage in constructive criticism. The individuals who had contributed negatively to the work environment through harsh language were able to engage more respectfully in conversations with colleagues (Bjørnbeth, 2017).

One fixed agenda feature at the unit currently led by doctor Bjørnbeth is a meeting where they go through complications and unexpected developments from the past week. It is a version of the same kind of meeting he and his colleague tried and failed to establish years earlier. “At a recent meeting, we discussed a case where the operation itself had gone well, but there had been more blood than expected. I had cut a hole in a blood vessel, and had failed to anticipate the amount of blood that would come out of it, creating a more stressful situation than foreseen. We could have avoided that with a more careful look at the x-ray. We learned that for operations of that kind, we need to look more closely at the size of the closest blood vessels” (Bjørnbeth, 2017).

Sharing mistakes and talking openly about them require the presence of high-quality relationships between colleagues, and thus a sense of psychological safety (Carmeli et al., 2009; Edmondson, 1999). When colleagues sit down to talk about their experiences of not getting things right, they expose themselves to criticism and even humiliation. It is only in a work environment perceived to be psychologically safe that participants are likely to talk openly about tasks they have struggled with.

Dutton (2003) has noted how we can build trust in a work environment by being open about weaknesses: “Disclosing something of ourselves—especially information that makes us vulnerable in some way—is an especially powerful way to convey and generate trust.” Doctors and nurses who speak openly to one another about mistakes assume that nobody will use the information they share against them at a later stage, and so take a risk. The assumption may be false, and a colleague may betray the trust by exposing the information in some other setting. One reason why it can be difficult to create trust in a work environment is that people are not ready to make themselves vulnerable. Instead, they adopt a wait-and-see attitude or a “show me” stance (Dutton, 2003, p. 82). You go first, and then I might join you afterward. In the previous, unsuccessful attempt to create a practice of sharing mistakes, nobody was willing to follow in the footsteps of Bjørnbeth and his colleague. In Bjørnbeth’s current workplace, on the other hand, a system of trust appears to be in place, creating what Dutton sees as the potential for gradual strengthening of that attitude: “When we take the first step in building trust, we become crafters of connecting possibilities. Rather than passively waiting to see whether someone can be trusted, we actively start the virtuous cycle in which trust builds on itself” (Dutton, 2003, p. 82). Here we have a procedure for countering the tendency to hold back: taking an initiative to break with the attitude of not being open about one’s own mistakes because one does not expect others to be equally open.

The main examples and input in this chapter have been from healthcare, where there is a potential to create trust and learn to become better professionals by being open about mistakes and failures. Doctor Westad’s open acknowledgement of his mistake created an opportunity for him and his colleagues to learn and to improve their professional work with pregnant women. Other doctors, nurses, and midwives in the same line of work can also take note of what went wrong in that particular case, and adjust their own efforts accordingly. It is also striking how the idea that there is much health in immediate acknowledgement of a mistake is relevant in other contexts where things go wrong and victims may start to question their own decision-making and conduct. Dutton has identified trust as one of the pathways to high-quality connections at work, and doctor Westad’s conduct appears to have generated trust both in relation to patients and among colleagues. Input from doctor Bjørnbeth indicates that there are also more complex cases, where a doctor needs to withstand pressure to take full responsibility for a bad outcome. His experience also points in the direction of sharing mistakes and analyzing them together as a powerful way of improving one’s professional work. The concepts of internal, external, and relational attribution can also be useful in sorting out the causes of good and bad outcomes of interpersonal interactions at work. The next chapter will investigate the more specific topic of normalizing acts of seeking and offering help as a fruitful way to cope with fallibility at work.