The da Vinci is a robot manufactured by Intuitive Surgical Inc (ISI) to perform less invasive surgery, and it is one of the most complicated surgical tools ever developed. When the robot was introduced to the market, most cardiac surgeons had not accepted that less invasive heart surgery was even feasible. This presented two goals for robotic cardiac surgery training. First, become facile with operating and troubleshooting the robot so that it enhances, rather than detracts from, the surgeon’s technical performance. Second, adopt a completely foreign approach to surgery viewed by many with great skepticism. Training in robotic surgery was set up for failure because few appreciated that this herculean task was essentially impossible with the resources available at the time. A scandal resulted because no one else (hospitals, surgical societies, the FDA, lawyers) seemed willing or able to stop the predictable harm it imposed on vulnerable patients.

The story begins in 1999 with the initial review of the da Vinci application for FDA approval. The FDA immediately expressed concerns about how the company could possibly teach surgeons to use such a complex machine. The company founder, Fred Moll, responded by proposing an extensive six-week training course with validated and rigorous testing of the required competencies prior to any clinical cases. After da Vinci's approval, the FDA sponsored a multicenter trial of the robot in cardiac surgery. The study started in 2002, when the hope (and hype) about the future of less invasive surgery was at its zenith. The most prestigious centers in the US were clamoring to participate. The study protocol required a mandatory stepwise training program modeled after Dr. Moll’s vision before any clinical enrollments were allowed (Argenziano et al., Ann Thorac Surg 2006;81:1666-75). Of the 18 study sites selected to participate, 12 centers completed the training, and 8 of those teams performed only a few robotic cases before dropping the project altogether. In fact, the bulk of enrollments during the three-year study came from a single evangelical surgeon at one site.

This trial provided the first fateful signal about the challenges of getting surgeons to safely use the robot for heart surgery. It also yielded brilliant insights into how to improve adoption rates of the procedure. During the multicenter trial, Harvard business professor Amy Edmondson was given access to the study records and performed on-site interviews with staff at the 12 enrolling sites. Her data showed that some sites learned the technology much more rapidly than others. In a widely cited paper (Harvard Business Review, Oct. 2001), she described factors related to change management (e.g., teamwork and communication) as far more influential than technical skill on the speed of team learning. These findings remain largely overlooked; the main training focus continues to be the development of technical skills.

At this point the company was at a strategic crossroads. Using robotics to improve cardiac surgery was a key vision of founding CEO Lonnie Smith. The robotic cardiac trial suggested that the training approach needed to be reappraised if uptake in the broader cardiac surgical community was going to have a chance. One way to improve training would have been to teach nontechnical skills, such as change management, alongside the technical aspects of robotics. But who should provide this training? Surgical societies, not industry, enjoy the mandate and credibility to provide unbiased postgraduate training and education. Just a few years earlier, the general surgical societies had largely ignored the birth of laparoscopic cholecystectomy until it became clear that public demand would not allow the technology to go away. After a rough first decade, laparoscopy became a standard of care used at all hospitals. Only at that point did the societies help develop and validate training and credentialing protocols. At the time, it was reasonable to assume that cardiac surgeons would not repeat the mistakes of laparoscopy’s troubled birth. The company poured generous financial support into the major cardiac surgery societies to make sure that this next wave of innovation would benefit from those obvious lessons.

There were no regulatory mandates to revamp the training strategy. It is true that the FDA performed a postmarketing survey of experienced robotic surgeons from different specialties to further investigate a number of adverse event reports it had received, and all respondents felt that training for robotics was deficient. Nevertheless, da Vinci was cleared through a 510(k) application, and the FDA does not monitor training for medical devices approved through this route. This freed ISI from any commitments it might have made in the past.

There was also no legal mandate to provide training any more comprehensive than what was offered. Even though lawsuits have claimed patient injury because ISI provided insufficient surgeon training (e.g., Fred Taylor, et al. v. Intuitive Surgical Inc), the “learned intermediary doctrine” of product liability law provides an effective shield against these claims. Under this doctrine, ISI need only provide adequate warnings to surgeons (not train or instruct them on how to avoid harm) because surgeons are considered learned intermediaries.

Corporate strategy was probably the most important consideration, and the deliberations could not have been easy. Another device used to facilitate less invasive cardiac surgery, called HeartPort, had also gotten off to a poor start at highly respected programs. Back-to-back failures of these pioneering efforts placed less invasive cardiac surgery up against a serious headwind. Making matters worse, a Wall Street Journal article published in 1999 uncovered high-pressure tactics used by HeartPort to push its device on cardiac surgeons despite poor trial results (http://www.wsj.com/articles/SB925796548880067028). The end result of these missteps was growing collective doubt in the field about the safety of trying to avoid the sternotomy. Failure to acknowledge and proactively address these doubts was likely to make them grow further and cause irreversible damage to ISI’s reputation with its surgeon customers.

On the other hand, the core competency of ISI was selling robots, not providing education or improving surgical safety. Promising companies fail when they lose focus on their core mission. In addition, ISI is publicly traded on the NYSE, which tends to focus corporate strategy on sales, marketing and the improvement of quarterly financial results. “Ease of use” is at the top of the list of priorities for any doctor considering a new device or product. A complex, lengthy training program mandated by the company would have sent a signal to busy, impatient heart surgeons that the robot was not easy to use. As any marketer knows, that signal kills sales. It was no secret that cardiac surgeons in particular were key to getting hospitals to invest in a multimillion-dollar robot. Their service line is far more profitable to the hospital than urology, gynecology or other specialties interested in using robotics. Large capital purchases don’t happen without a reasonable chance of return on investment, so cardiac surgeons were needed at the table. It must have been obvious to ISI corporate leaders, with their fiduciary responsibility for quarterly financials, that it would have been crazy to increase the complexity of robotics training in cardiac surgery beyond what the surgeons were requesting. Very few surgeons requested more training from ISI at the time; most wanted less.

With green lights from the legal, regulatory and financial perspectives, and silence from surgeons, ISI rolled back its requirements from the six-week comprehensive training concept proposed by Dr. Moll to a one-day off-site course. The basic competency test given at the completion of this course was modified so that no one ever failed. Email evidence from legal cases also shows that the company tried to narrow credentialing requirements at some hospitals. Many of these decisions were driven by marketing directors at ISI, insulated from anyone who would appreciate their clinical impact.

A red light built into our system failed to activate in this case. The medical profession is well aware of the conflict of interest whenever industry provides training for its own products. It is the role of surgical societies and credentialing committees to provide oversight and protect against this threat by establishing rational requirements for training and prerequisites. Unfortunately, the cardiac surgery societies repeated history and mimicked the response to laparoscopic surgery seen 10 years earlier. Once again, society leadership seemed to want robotic technology to go away rather than give it the credibility of formal training recommendations at the outset. To date, they have never acted. Hospital credentialing committees were never able to unravel these complex issues and lacked the political will to slow down poorly trained teams. As a result, virtually all US hospitals defaulted in lockstep to the minimalist training plan put forward by ISI.

It has been highly effective public policy for the FAA to require pilots to log 1,000 hours of flight experience before operating a commercial plane. Yet surgeons are free to operate a robot on patients (an activity roughly 1,000-fold riskier than a plane trip) after one day of off-site training, with no legitimate testing of robotic competency at any point. This catastrophic gap between aviation and surgery resulted from what safety expert James Reason would describe as “all the holes in the cheese slices lining up.”

It must be obvious by now that this plan was unlikely to yield encouraging clinical outcomes. Indeed it did not. Between 2004 and 2013, nearly 400 cardiac surgical teams initiated ISI’s robotics training protocol. Only 22 achieved sustainable success. When failure occurs at this magnitude, it inevitably creates a wake of political, financial, legal, regulatory and clinical fallout at US hospitals that is impossible to overstate. Each program started with champions trumpeting the virtues of innovation and ended with stakeholders feeling betrayed by surgeons with inappropriate training and credentials. This wake has also turned ISI’s early successes into Pyrrhic victories. Future plaintiffs take notice of dissenting opinions from judges and jurors that ISI’s training protocol may have been negligent. FDA concerns about training loom, and the customer base of robotic surgeons shrinks. The current threat of competition from other companies is far greater than might be expected for a first mover with a monopoly.

Accepting that our current state is a complete mess, we can now establish a vision of where we need to be. Success or failure of robotic cardiac surgery has always been about teamwork. Teams that master complex surgical procedures become good at rapidly learning from and correcting mistakes. During these cases, successful teams develop implicit communication and coordination patterns, anticipate the needs of the situation, and minimize interruptions. The typical model for continuous quality improvement in the operating room is to recognize steps that could be modified and improved, often drawing on the collective experience of the group to identify alternatives that worked in the past. During a crisis, these teams demonstrate their skill at “sensemaking,” the ability to quickly uncover the seriousness of a situation through action. High performance teams have a sufficient number, consistency and quality of team members available to do the acting and interpreting needed to safely resolve the current crisis and prevent the next one.

Unfortunately, conditions built into the standard cardiac surgical OR set robotic cardiac surgery up for failure. Even during routine cases, staff experience high stress from issues like the need to multitask under intense time pressure, inevitably making them prone to avoidable mistakes. The blame-and-shame culture that has become endemic to the cardiac surgical OR makes it difficult to openly discuss and learn from these mistakes. Another problem occurs when the same staff are not available for robotic cardiac surgery cases on a consistent basis. Turnover hinders the team's ability to interpret what is going on and to define the most appropriate responses. Taken together, these issues create competing demands for limited cognitive resources and make it exceedingly difficult for anyone to learn an unfamiliar procedure in real time. A team accustomed to responding efficiently to crisis situations during open chest cases has a hard time accepting the complexity and unpredictability of robotic heart surgery. Left unaddressed, this inevitably compromises morale and job satisfaction, which degrades the learning environment and creates a vicious cycle of worsening results.

Cardiac surgeons also find themselves in unfamiliar territory when implementing new ideas in the OR. A grueling decade of surgical training during CT residency inculcates a value system of self-determination and perfectionism. In such a system, you are left to learn on your own after finishing residency. Surgeon culture provides no expectation that an expert trainer or your team will support you during the adoption of robotics or any other novel procedure. In this culture, it is no surprise that many cardiac surgeons have neither the time nor the desire to implement ideas they didn’t learn in training. Those brave enough to try robotics usually have no understanding of the change management skills needed to guide their team through high-impact change. Neglecting these skills makes adoption of a new robotic program essentially impossible.

Robotic cases also trigger stress and cognitive overload in the surgeon, which makes decision making dramatically different from decision making under less extreme conditions. In the heat of the moment, stress makes one underestimate the risks and overestimate the rewards of an action. It also impairs communication, teamwork and the ability to learn from negative feedback. All these adverse effects tend to alienate teams and isolate surgeons, reinforcing the prior bias that surgeons are on their own to tackle this daunting project.

Coaching is the antidote to all these problems. The role of a coach is to help achieve maximum potential by evaluating performance and then defining areas of weakness that serve as the focus for further training. In sports, a coach evaluates an athlete during practice sessions to drive improvement prior to game time. Whether it’s batting, pitching, serving, kicking or swimming, it is not difficult to establish a practice session that closely resembles the game-time situation. Mastery is less about innate talent and more about putting in endless hours of deliberate effort to continuously improve. After a playoff loss in 2002, basketball star Allen Iverson derided the impact of practice with his infamous rant: “we’re talking about practice” (https://www.youtube.com/watch?v=eGDBR2L5kzI). The rant was so memorable because that attitude is so rare in someone of his talent. A unifying theme of world-class athletes and most other high performing professionals is an unbelievable work ethic. Regardless of their experience, all pilots undergo annual performance testing during a “check ride,” and lawyers endlessly prepare and rehearse for their cases. There are strong incentives for this tireless preparation. Without practice, athletes get benched, pilots don’t keep their licenses and lawyers lose their cases. Each of their salaries is linked to their efforts at practice. In stark contrast, there have been few incentives for the cardiac surgeon to practice robotics.

The first step is to get familiar with the robotic console and learn the robotic instruments and camera. Computer simulation and cadaver dissection are useful for practicing this before “game time” (direct surgical care), but they only teach the basics. My 13-year-old daughter recently tried the da Vinci skills simulator at a marketing event (http://www.intuitivesurgical.com/products/skills_simulator/). Even though I have been doing robotic surgery for 10 years, she warmed up for 5 minutes and got a better score on the simulator than I did. My (biased) conclusion is not that she has the complex psychomotor skills that would help her avoid trouble during less invasive heart surgery. More likely, the fidelity of the model is not adequate to develop or test these types of skills. Actual clinical cases seem to be the only place for this learning. Decisions, communication and technical performance are all more difficult when training must happen this way, which inevitably increases risks to patients (safety) and the team (morale). Shortening this phase as much as possible requires “deliberate practice”: focusing day-to-day activities on identifying and improving the specific things needed for robotic surgery to be safe. This effort is best paired with immediate feedback from an expert coach, because even senior cardiac surgeons go through a period of unconscious incompetence with any new procedure. It takes some degree of competency before a surgeon can start to recognize and then address his or her own areas of weakness (i.e., conscious incompetence). A coach identifies these problems immediately.

If there is no option other than to learn on patients themselves, then it is a moral obligation of the highest priority to gain as much familiarity and judgment from each case as possible during this vulnerable period. Currently, each new robotic surgeon gets a coach/proctor for the first 2-3 robotic cases but none thereafter, even though the entire novice phase lasts between 40 and 100 cases. Given the clear rationale for coaching, not engaging coaches beyond 3 cases is proof that surgical programs have failed their moral obligation and needlessly allowed a preventable hazard to reach patients. However, even if this coaching were made available, surgeons might not accept it. Many hold tight to values of competency and autonomy that limit the ways coaches could actually be used for improvement. Many cardiac surgeons hate the idea of being observed in the OR, whether out of fear of looking incompetent to their team or out of skepticism that prolonged coaching would be useful. Given the blame-and-shame environment at many hospitals, there are also concerns that the coach might double as a policeman. As Atul Gawande said in his article about a “Coach in the Operating Room”: “…the capabilities of doctors matter every bit as much as the technology. This is true of all professions. What ultimately makes the difference is how well people use technology. [Surgeons] have devoted disastrously little attention to fostering those abilities.”

Dr. Gawande was able to coax a master surgeon to sit for hours in his OR, patiently take notes and provide feedback about his performance. There are only 20-25 cardiac surgeons with enough robotic surgery experience to be considered experts, so the Gawande approach is not feasible at scale. Instead, I propose telementoring. Expert coaches can provide feedback in real time using the da Vinci Si console, even from a different location. After each case, the coach can review videos taken from the robotic camera to critique technical aspects, and from the OR to critique communication and teamwork. To let the coach home in on the most important parts for review, I would ask team members (e.g., surgeon, tech, anesthesiologist, circulator) to wear heart rate monitors during the case. Periods of low heart rate variability in members of the team are a highly sensitive indication of emotional stress at that point in the case. Coupled with the video review, these data would let the coach focus the critique on high-stress phases, because this is frequently when novice teams have the most difficulty with performance. The coach could also judge the team’s progress by noticing less stress in team members as experience accumulates.
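The heart-rate-variability screening described above can be sketched in a few lines of code. This is a minimal illustration, not a validated clinical tool: the RMSSD statistic, the 60-beat window, and the 20 ms threshold are assumptions chosen for the example, and data from real monitors would first need artifact filtering.

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences between RR intervals (ms).
    Lower values mean less beat-to-beat variability, a common stress marker."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def flag_stress_windows(rr_ms, window=60, threshold=20.0):
    """Slide a non-overlapping window of `window` beats across the case and
    return the starting beat index of each window whose RMSSD falls below
    `threshold` ms (low variability, suggesting high stress)."""
    flagged = []
    for start in range(0, len(rr_ms) - window + 1, window):
        if rmssd(rr_ms[start:start + window]) < threshold:
            flagged.append(start)
    return flagged

# Synthetic example: 60 "calm" beats with high variability, then 60 beats
# with almost no beat-to-beat variation (a plausible stress signature).
rr = [800, 850] * 30 + [600, 602] * 30
print(flag_stress_windows(rr))  # only the second window (beats 60-119) is flagged
```

Summing the RR intervals up to a flagged index converts it back to elapsed case time, telling the coach roughly where in the video to focus the review.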

Surgeons must also become better coaches themselves. The lead surgeon needs to develop the skills to perform an effective debrief after each of these early cases. Edmondson’s research describes this leadership skill as the ability to create “psychological safety,” meaning that the team feels free to speak up about problems without fear of retaliation or ridicule. After gaining some proficiency, the team must then learn the often unforeseen problems that occur during this new procedure, how particular conditions influence the risk of those problems, and then devise strategies to either prevent or respond to those events. A coach who guides the team through the novice phase and steers it away from trouble while it gains more and more proficiency buys the time needed to acquire expert status. The ability to navigate this path is what separates successful programs from failed ones.

The evidence for the value of all this coaching is compelling. Academic surgeons have been coaching residents for decades by conferring graduated responsibility, encouraging or expecting deliberate practice, and deconstructing complex tasks. Multiple studies have shown that clinical outcomes at academic hospitals where residents operate are no different from those at hospitals with no residents. If coaching surgeons who had no prior skills does not compromise patient safety, then a similar approach should improve the success rate of teaching robotics to otherwise highly skilled surgeons.

Organizations must also develop their own novel paradigms for learning when it comes to surgical innovation. Few understand how to innovate. Time spent in training isn’t reimbursed, so hospitals have been reluctant to make the investments needed to improve on past failed approaches to training. Hospital executives should also prioritize staffing to minimize the complexity of the learning phase.

It’s a lot to ask of surgeons, their teams and the hospitals where they work. But patients desperately want us to try. This makes it all worthwhile.