
When AHRQ assumed the responsibility assigned to it by the Quality Interagency Coordination Task Force (QuIC) report, Doing What Counts for Patient Safety, to develop practice changes to reduce harm from medical errors, it faced two problems: there were few proven safe practices, and there was a dearth of standards by which to evaluate them. A standard setter was needed.

Fortuitously, a year earlier, the Advisory Commission on Consumer Protection and Quality in the Health Care Industry had recommended that an independent organization be created to standardize performance measures in healthcare by means of a public-private partnership. Under Vice President Gore’s direction, QuIC advanced the idea of a national standard setter and took steps to establish the National Forum for Health Care Quality Measurement and Reporting, later renamed the “National Quality Forum” (NQF), “a broad-based, widely representative private body that establishes standard quality measurement tools to help all purchasers, providers, and consumers of healthcare better evaluate and ensure the delivery of quality services.” [1]

The fledgling National Quality Forum flourished under the leadership of Kenneth W. Kizer, once described by Don Berwick as “…probably the most effective leader in all of American healthcare.” Kizer was superbly well-equipped for the task. He was board certified in six medical specialties and had demonstrated his executive skills in several important prior positions. An emergency physician who engineered the statewide EMS system in California and a former US Navy diver and diving medical officer, Kizer well understood systems and systems thinking. As an active outdoor sports enthusiast and founding member of the international Wilderness Medical Society, he also had a deep appreciation for safety and planning for the unexpected.

Earlier in his career, as director of the California Department of Health Services and the state’s top health official, Kizer orchestrated California’s response to the new HIV/AIDS epidemic, led a cigarette tax increase and smoking cessation program that reduced the rate of smoking in California three times faster than in the rest of the nation, and pioneered Medicaid managed care. In 1994, President Clinton appointed him Under Secretary for Health in the Department of Veterans Affairs (VA) and chief executive officer of the VA healthcare system.

During his 5-year tenure at the helm of the VA healthcare system, Kizer radically transformed it, changing it from a hospital system into a truly integrated healthcare system rooted in primary care. He closed hospitals, reduced the total number of acute care hospital beds by some 55% (more than 29,000 beds), opened 300 new community-based outpatient clinics, and hired the first healthcare system chief telehealth officer in the country, all well before the “medical home” concept had taken hold in the rest of healthcare.

He reorganized the whole VA healthcare system into 22 new regional “Veterans Integrated Service Networks” (VISNs), each typically consisting of 8–9 hospitals, 25–30 community-based outpatient clinics, 5–7 long-term care facilities, 10–15 counseling centers, and 1 or 2 residential care facilities. Leaders of hospitals and clinics were directly accountable to the network chiefs for providing quality care [2, 3, 4].

Kizer implemented multiple quality improvement changes, including initiatives that decreased death rates, a medication bar code system to verify dose timing and reduce prescription errors, and a national formulary that yielded savings of some $600 million annually. Customer service standards were implemented, and patient satisfaction surveys showed a growing percentage of veterans rating their quality of care as very good to excellent.

As a result of these quality assurance measures, illness and death rates from high-volume surgical procedures declined. An observational study published in The New England Journal of Medicine found that the VA outscored Medicare’s fee-for-service program on the quality of preventive, acute, and chronic care [5], all while the number of veterans served increased by 28% in 4 years.

Figure 1. Ken Kizer. (All rights reserved)

Despite these truly astonishing improvements in the quality of and access to care for veterans that Kizer accomplished in an amazingly short time, political opposition developed, largely as the result of his hospital closings and downsizing. Congressional hearings for his reappointment were repeatedly delayed, although Congress passed specific legislation extending his tenure at the end of his first term. Finally, after continuing political drama, Kizer had had enough, and motivated in large part by his wife’s serious and deteriorating health problems, he decided to leave the VA.

After he resigned, BusinessWeek reported that the Veterans Affairs system provided “the best medical care in the US.” [6] It was a remarkable transformation of a healthcare system that previously had often been regarded with disdain by doctors and laymen alike. The Harvard Business Review characterized Kizer’s work at the VA as the largest and most successful healthcare turnaround in US history.

But the Clinton administration was not about to let Kizer go. Vice President Gore’s office reached out to Kizer about leading the creation of a new organization that would become the NQF. It would be an independent, consensus-based, financially sustainable organization with equal representation from healthcare’s many and diverse stakeholders, one that would establish a national healthcare quality improvement strategy, including performance measures to track progress toward achieving it.

This was a momentous step. Thanks to the work of Berwick and others, people in healthcare were beginning to talk about quality improvement and patient safety, but standards of care and valid methods for measuring quality and safety were few. Moreover, as later noted by Kizer, “The concept of the National Quality Forum arose in response to the strong American sentiment against government regulation and control of health care quality…. The (Advisory) commission envisioned that…the NQF would devise a national strategy for measuring and reporting health care quality that would advance the identified national aims.” [1]

Not everyone was enthusiastic, however. NCQA perceived the new organization as a potential direct threat to its long-established work and its business model. The Joint Commission also had reservations, although it later came around. Kizer recalls that Gail Warden, the inaugural chairman of the NQF board, wondered if the organization would last even 3 years.

The National Forum for Health Care Quality Measurement and Reporting was officially launched on September 1, 1999, with start-up funding from the Robert Wood Johnson Foundation, the California Health Care Foundation, the Horace W. Goldsmith Foundation, and the Commonwealth Fund.

Kizer saw the mission of the Forum as “to improve health care quality; that is, to promote delivery of care known to be effective; to achieve better health outcomes, greater patient functionality, and a higher level of patient safety; and to make health care easier to access and a more satisfying experience. The primary strategy…to accomplish this mission is to standardize the means by which health care quality is measured and reported and to make health care quality data widely available.” [1]

Kizer set the context for this enterprise by noting “This strategy is premised on the philosophy that health care quality data are a public good and, therefore, health care quality measurements should be publicly disclosed. It is further based on the belief that making reliable, comparative data on health care quality publicly available will motivate providers to improve the quality of care by providing benchmarks; will facilitate competition on the basis of quality; will promote consumer choice on the basis of quality; and will inform public policy.” [1]

Five key strategic goals were initially identified: (1) developing and implementing a national agenda for measuring and reporting healthcare quality, (2) standardizing the measures used to report healthcare quality so that data collection is less arduous for healthcare providers and so that the reported data are of greater value, (3) building consumer competence for making choices based on quality of care data, (4) enhancing the capability of healthcare providers to use quality-related data, and (5) increasing the overall demand for healthcare quality data [1].

From the outset, it has been NQF policy that the organization itself does not develop or test performance measures but instead uses a multistep consensus process to vet measures created by public and private entities, including, among others, NCQA, CMS, the Physician Consortium for Performance Improvement, and medical specialty associations.

NQF endorses only those measures that meet the following criteria: (1) importance to measure and report; (2) scientific acceptability of measure properties, i.e., the measure produces reliable and valid results; (3) feasibility, i.e., the measure requires data that are readily available and creates as little burden as possible; (4) usability and use, the extent to which the measure can be used for both accountability and performance improvement; and (5) favorable comparison to related and competing measures, to harmonize them or select the best measure [7].

NQF was to be different from any other organization, public or private, in several ways. Indeed, it was broadly viewed as a truly novel experiment in democracy. Membership was open to anyone, organization or individual. Board membership represented a broad and diverse group of stakeholders, including federal agencies (e.g., CMS and AHRQ), state agencies, professional associations (e.g., AHA and AMA), private healthcare purchasers (e.g., GM), labor unions (e.g., AFL-CIO), and consumer groups (e.g., AARP).

Kizer saw the mission of the NQF as blending and balancing consumer, purchaser, payer, and provider perspectives. All Board members had an equal vote. Multiple professional associations initially strongly objected to their vote having the same weight as that of consumer or purchaser organizations, but Kizer would not budge on this position. The Board’s decisions resulted from a consensus process derived from the National Technology Transfer and Advancement Act (NTTAA) and principles formally espoused by the Office of Management and Budget in OMB Circular A-119. A specially convened Strategic Framework Board of experts supported the NQF’s nascent efforts by providing an intellectual architecture and principles to help guide measurement and reporting [8].

Kizer believed that ensuring patient safety should be the foundation of healthcare quality. He decided to take advantage of the contemporary surge of interest in patient safety by having the inaugural NQF effort focus on a safety issue: the reporting of serious harmful events. Bolstered by a formal charge from CMS and AHRQ, he asked me and John Colmer, the program officer for the Milbank Fund (which was funding the project), to co-chair a Serious Reportable Events Steering Committee to develop a core set of serious preventable adverse events that would enable standardized data collection and reporting nationwide. The primary reason for identifying these measures was to facilitate public accountability through national mandatory reporting of these adverse events—an idea that President Clinton’s administration was open to, but which was summarily rejected by the subsequent Bush administration.

Serious Reportable Events

The first charge to the Steering Committee was to develop a definition of “serious, avoidable adverse events.” We were then to apply a consensus process to develop a set of these events. The final set would be voted on by the then 110 NQF member organizations and the Board of Directors. If approved, it would then be issued as a nationwide recommendation. In addition, we were to identify potential candidates for additional measures that needed more research, discuss issues relating to implementation, and develop a plan for dissemination of the measures.

The Steering Committee was composed of representatives from a cross section of healthcare providers, experts in quality and safety, public interest groups, regulators, and others. Broad representation of stakeholders was to be a cardinal principle of operation for the NQF, so a serious attempt was made to make sure all stakeholder sectors were well represented for this first effort. (Appendix 11.1)

I could see several pitfalls ahead. Following the release of the IOM report, To Err Is Human, the most common reaction from the public and the press was to call for required reporting of adverse events. Many people seemed to think that if adverse events were simply made known, they would be taken care of: doctors and others would be shamed into doing something. Those of us working in safety knew there was much more to it than that. Reporting does not automatically or necessarily lead to change.

Safety experts and policy-makers identify two kinds of reporting: reporting for improvement and reporting for accountability.

Reporting systems for improvement are voluntary and based on frontline caregivers’ desire to prevent harm. As Charles Billings, architect of the Aviation Safety Reporting System, has noted, voluntary reporting only works when it is safe (does not result in punishment), simple (the act of reporting takes only a few minutes), and productive (reporting results in positive changes) [9].

Creating a safe environment for reporting within hospitals has long been challenging. Despite national campaigns, 20 years after the IOM report, nearly half of nurses surveyed say they do not feel safe talking about errors. On the other hand, outside the hospital, national voluntary systems, such as those run by specialty societies, ISMP, and the National Nosocomial Infection Survey, rely on reports from caregivers and have been quite successful.

Reporting systems for accountability rely on reports from institutions—in healthcare, primarily hospitals—and are mandatory. They are based on the concept that hospitals have a duty to prevent serious harm that we know how to prevent, such as amputation of the wrong leg or giving a blood transfusion to the wrong person. These events result from major system breakdowns, and it is the institution, not individual caregivers, that is in charge of the systems.

The public expects healthcare providers to ensure that care is safe, and it looks to the government to make sure that providers take the actions necessary to make care safe. The occurrence of a serious preventable adverse event suggests that a flaw exists in the healthcare organization’s efforts to safeguard patients. It is reasonable for the public to expect an oversight body to investigate such occurrences.

These serious reportable events are healthcare’s equivalent of airplane or other public-transportation crashes. And most people think the public has a right to know about them when they occur [10]. If so, then not only reporting, but also making the reports public, should be mandatory.

Reporting is of little value if it doesn’t lead to improvement. The healthcare organization must also be required to investigate the event to determine the underlying system problems and/or failures (i.e., root cause analysis) and then correct the failures to prevent recurrence of the event. This information should be disseminated to other healthcare organizations so all can benefit from the lessons learned.

In the USA, the only mandatory systems for reporting of serious events are those run by the states. However, in 1999 only 15 states had such programs, and these varied considerably in what hospitals were required to report and what happened when they did. In most cases, nothing happened: no analysis and no feedback to the hospital—and no reporting of results to the public. The programs were typically understaffed, underfunded, and ineffective [10]. Perhaps providing a nationally accepted, industry-endorsed list of serious preventable adverse events would be an incentive for improvement.

My personal feeling was that it was important to focus on clearly defined adverse events, not on errors or vague things like “loss of function,” which appeared in some systems. I argued for events that were simple to define and “unfudgeable,” i.e., not susceptible to interpretation or debate about whether an event is or is not reportable. At the time, hospitals routinely gamed the system, going to great lengths to “prove” that an event was not preventable and therefore didn’t need to be reported. However, if certain events were by definition preventable, perhaps this charade could be curtailed.

The Steering Committee met for the first time on December 20, 2000. We defined “serious, avoidable adverse events” as patient harms that hospitals can reasonably be expected to prevent 100% of the time. We had interesting, thoughtful, and sometimes spirited discussions about the purpose of the list, preventive strategies, priorities, verifiability of reporting, and specificity of events. I suggested we eliminate “unanticipated” as being too difficult to define and too easy to weasel. More easily than I had expected, we agreed on the definitions of an adverse event and a serious event, as well as the criteria for inclusion on the list. We did a first pass, discussing a list of 25 candidates for inclusion that the NQF staff had prepared.

Mandatory reporting is a contentious issue. Many have strong feelings about the public’s right to know when these events occur, while hospitals are afraid of liability and loss of reputation from going public with a mistake. Although some hospitals had gone public and found that their honesty led people to trust them more, most still did not believe in this degree of transparency.

At the Steering Committee’s second meeting in February 2001, Kizer announced a special advisory panel of state health professionals to help us ensure our list would be relevant. We agreed on four criteria for selection of events for the list: events must be (1) serious, (2) clearly definable, (3) usually preventable, and (4) quantifiable (i.e., capable of being easily audited). In other words, they should be events that are serious and obvious to all observers when they occur (the “unfudgeable” part).

In April we approved the final report. However, the group did not agree with Ken’s interest in calling them “never events,” a reluctance undoubtedly rooted in the firmly held doctrine in medicine that you cannot say that anything “never” happens. So it was decided that the list should be officially titled “Serious Reportable Events.” Nevertheless, they quickly began to be referred to as “never events.”

In the final version, 27 items were grouped into 6 categories: (1) surgical events (e.g., wrong site, retained foreign body), (2) device or product events (contamination, malfunction), (3) patient protection events (suicide, infant discharged to the wrong person), (4) care management events (death or disability from medication error, blood mismatch, kernicterus), (5) environmental events (death or severe disability from electric shock, burn, falls, restraints), and (6) criminal events (sexual assault, impersonation) [11]. The full list is shown in Appendix 11.2. In 3 subsequent updates, the list has been expanded to 29 events.

The Committee’s report was readily approved by the NQF Board, with only the American Hospital Association and one state hospital association voting against its adoption, and it was published a few months later, in early 2002. It was generally well received by the press and the public. It did, in fact, become the model for some state reporting systems. Later, CMS used the list to deny payments for Medicare patients. This was not our intended use, and the matter was vigorously debated and discouraged during the Committee’s deliberations. However, I personally felt it had merit: the best way to get hospitals’ attention is to hit them in the pocketbook.

All in all, this was an interesting and important initiative. We established important definitions and expectations. Within 2 years, the number of states with mandatory reporting of serious events increased to 20, and by 2010, 27 states and the District of Columbia had enacted mandatory reporting systems incorporating all or part of NQF’s list.

Safe Practices for Better Healthcare

The reporting initiative got national attention and started the NQF on the way to Kizer’s goal of becoming “the” trusted and respected national standard-setting organization. He then took on the second QuIC challenge: to “identify a set of patient safety practices critical to prevention of medical errors.” This initiative was more ambitious than the reporting project and destined to have far greater impact. It was to be a list and description of evidence-based and standardized care processes that promote safety and reduce patient harm. The objective was to stimulate healthcare organizations to adopt a systems approach by providing effective processes that could be used “off the shelf,” saving care teams the effort of developing new practices and systems de novo.

To begin the initiative, NQF asked AHRQ to commission an independent review of the evidence behind safe practices. As described in the previous chapter on AHRQ, this effort was led by Kaveh Shojania and Bob Wachter of UCSF [12]. But they found only 11 practices that met their criteria! As noted in the previous chapter, David Bates, Don Berwick, and I were concerned that the many accepted safe practices in current use would be suspect just because they had never been subjected to a controlled trial. Our paper and a rebuttal by the authors were published in JAMA as part of a point-counterpoint analysis [13, 14]. The Safe Practices Committee expanded its criteria to include experiential evidence of effectiveness.

Other sources of candidate practices were the Leapfrog Group, NQF member organizations, Steering Committee members themselves, and an open call for candidate practices sent to more than 100 medical, nursing, and pharmacy specialty societies.

The final criteria for inclusion as a safe practice were:

  1. Specificity. The practice must be a clearly and precisely defined process or manner of providing a healthcare service.

  2. Benefit. Use of the practice will save lives endangered by healthcare delivery, reduce disability or other morbidity, or reduce the likelihood of a serious reportable event.

  3. Evidence of Effectiveness. There must be clear evidence that the practice would be effective in reducing patient safety events. This includes not just research studies, but broad expert agreement or professional consensus that the practice is “obviously beneficial,” as well as experience from nonhealthcare industries transferable to healthcare (e.g., repeat-back of verbal orders or standardizing abbreviations).

  4. Generalizability. The safe practice must be able to be utilized in a variety of inpatient and/or outpatient settings and/or for multiple types of patients.

  5. Readiness. The necessary technology and appropriately skilled staff must be available to most healthcare organizations [15].

The practices were organized into five broad categories for improving patient safety: (1) creating a culture of safety, (2) matching healthcare needs with service-delivery capabilities, (3) facilitating information transfer and clear communication, (4) adopting safe practices in specific clinical settings or for specific processes of care, and (5) increasing safe medication use.

The final list of 30 practices included both those with research evidence of effectiveness and those that were already in wide use and well accepted. Examples of the latter included staffing of ICUs with critical care specialists (intensivists); “read-back” of orders; prohibited abbreviations; medication reconciliation; hand hygiene; unit dosing; adoption of computerized patient order entry; the universal protocol for preventing wrong-site, wrong-procedure, and wrong-patient errors in all invasive procedures; and protocols for prevention of central line-associated bloodstream infections (CLABSI), surgical site infections, MRSA, and catheter-associated urinary tract infections [15]. (For the full list, see Appendix 11.3.) The list was formally approved by the NQF Board in late 2002.

In April 2004, the Leapfrog Group adopted the full list of safe practices as its fourth safety “leap.” (Three of the safe practices were based on its first three safety leaps.) Also in 2004, several purchaser coalitions (e.g., the Pacific Business Group on Health, The Employer Alliance Health Care Cooperative, and the Midwest Business Group on Health, among others) endorsed the safe practices.

Individual hospital adoption of the safe practices has varied greatly. Although now widely accepted as the standard toward which to strive, they are not easy to implement. (See Chapters 6 and 8.) Success requires strong support at the executive level, education and training of personnel, a “champion,” and teamwork. Physician buy-in is critical. Outside pressure, as from The Joint Commission, has helped.

NQF has periodically updated the safe practices in response to the development of new practices as patient safety has matured. In the first update, in 2006, safe practices were added that addressed leadership and staffing, and the practices were harmonized with safety initiatives from other national groups such as CMS, AHRQ, and The Joint Commission. In 2009, practices were added to address pediatric imaging, organ donation, caring for caregivers, glycemic control, and prevention of falls.

Performance Measures

While the serious reportable events and safe practices were highly visible projects, multiple other projects were concomitantly undertaken by NQF during its formative years. For example, performance measures were endorsed for, among other things, adult diabetes care, home healthcare, cardiac surgery, child healthcare, medication safety, hospital care, substance use disorders, and nursing care.

Likewise, a consensus framework for hospital care performance evaluation was developed, approved by the Board, and published, as were position papers or guidance documents on the role of hospital governing boards in promoting quality care, health information technology and electronic health records, health literacy, pay for performance, and improving healthcare quality for minority populations.

The NQF also worked closely with the eHealth Initiative, CMS, AHRQ, and other groups to facilitate the adoption of health information technology and new payment models that supported quality improvement. Kizer strongly lobbied HHS Secretary Tommy Thompson to promote the adoption of electronic health records, and he worked closely with CMS Administrator Scully to promote public reporting of performance measurement data.

Throughout this time, a problem that plagued the NQF’s efforts was the lack of stable financing, especially the lack of funds to undertake projects that were “for the public good”—i.e., projects not linked to a specific healthcare constituency and its interests. Kizer spent a large amount of time finding and cobbling together funds to finance the many projects that NQF undertook in these early years.

New Leadership

At the end of 2005, Kizer stepped down, and Janet Corrigan took over as president of the National Quality Forum. Corrigan was ideally suited to the role. An expert in health policy and management, she was highly respected for leading the staff at the Institute of Medicine that produced the legendary To Err Is Human. Most notably, Corrigan had been the executive director of President Clinton’s Advisory Commission on Consumer Protection and Quality, which recommended the creation of NQF.

Corrigan faced several challenges. Despite generous support from several foundations, the financial situation was precarious; other reliable sources of revenue were needed. In addition, the endorsement process had become unruly and needed to be put on a more rigorous scientific foundation. NQF needed to continue expanding its membership base, but more importantly it needed broader public support. Corrigan brought in Helen Burstin from AHRQ to straighten out the endorsement process.

Corrigan envisioned new opportunities for NQF. She believed that the NQF would be more effective if it focused more on measures needed to achieve national safety and quality goals. But what were the national priorities; what were the goals? And who set them? Well, it wasn’t clear. AHRQ had its priorities, as did CMS, IHI, and others, but there was no uniformity, no consistency, and no single voice.

With support from the Robert Wood Johnson Foundation, in 2007 Corrigan persuaded HHS to ask NQF to establish the National Priorities Partnership (NPP) to provide input to the secretary for consideration as the department developed national priorities. Under the leadership of Helen Burstin, NPP was developed as a public-private partnership of 51 partner organizations representing the diverse perspectives of consumers, purchasers, healthcare providers and professionals, community alliances, health plans, accreditation and certification bodies, and government agencies. NPP identified six national priorities that were embraced by many national organizations and health systems.

Figure 2. Helen Burstin. (All rights reserved)

As the debate around health reform heated up in 2009, NQF helped organize a coalition of quality leaders known as Stand for Quality to encourage legislators to provide stable and adequate support for core measurement activities, recognizing that these are fundamental building blocks for virtually all approaches to payment and delivery system reform, and to recognize the important role of a public-private partnership in carrying out this work.

The passage of the Affordable Care Act (ACA) in 2010 permitted the realization of these goals. Federal support of NQF’s work increased. The ACA directed HHS to obtain multi-stakeholder input on setting priorities and selecting measures for use in various federal programs. Significant support was provided for HHS to contract with a “voluntary consensus standard-setting body” (aka NQF) to conduct much of this work.

To complement the priority-setting NPP, NQF developed the Measure Applications Partnership (MAP) to advise the federal government and private sector payers on the optimal measures for use in payment and accountability programs. This closed the loop, linking the endorsement process to the measures needed to advance the goals established by the NPP. Under the leadership of Helen Burstin, the MAP built on the earlier efforts of the various “quality alliances” but provided a more patient-centered, coordinated approach to measure selection across various providers, settings, and programs.

MAP has two overarching objectives: to focus accountability programs on achieving the National Quality Strategy (NQS) priorities and goals and to align measurement across the public and private sectors and across settings and populations served. The MAP Coordinating Committee and workgroups are composed of representatives from more than 60 private sector stakeholder organizations, 9 federal agencies, and 40 individual technical experts.

CMS found the recommendations of the MAP essential in 2012 when it adopted value-based purchasing for Medicare and Medicaid services that linked hospitals’ payments to their success in achieving reductions in specific measured bad outcomes, such as catheter-associated urinary tract infections and central line infections. Suddenly, the incentive for hospitals to implement safe practices increased dramatically. CMS contracted with the MAP for further measures to use in this program.

Through the MAP, the NQF has advised the government on the selection of measures for use in more than 20 federal public reporting and pay-for-performance programs. About 300 NQF-endorsed measures are currently in use in federal, state, and private sector programs. Over 90 percent of all Medicare payments are now performance-based.

The ACA placed responsibility for setting national priorities within CMS. NQF’s National Priorities Partnership provides input to CMS for this function and also plays a role in convening stakeholders to develop action plans to achieve the national priorities and goals.

The ACA also charged HHS with developing a National Quality Strategy (NQS) to improve the delivery of healthcare services, patient outcomes, and population health, and it required that the NQS be shaped by input from a broad range of stakeholders. HHS asked NQF to convene the NPP to provide input for the secretary’s consideration as the strategy was developed. In 2011 the NQS established six priorities: healthy living, prevention of leading causes of mortality, patient safety, person and family engagement, communication and coordination, and affordable care.

NQF thus manages the “supply chain” for quality and safety priorities, setting standards and applying them: the NPP, which sets priorities and goals; measure stewards, who develop and test measures; the evaluation and endorsement consensus process; the MAP, which advises on the selection of measures for use in accountability applications; and public and private accountability efforts. It is the neutral convener of multi-stakeholder groups that provide the “bridge” between public and private sectors.

In addition to the NPP and MAP, the NQF has a broad array of quality and safety programs. Its health IT initiatives support the complex move toward electronic measurement to facilitate data sharing between healthcare providers and their patients. NQF provides information and tools to help healthcare decision-makers and has programs in person- and family-centered care, effective communication, palliative and end-of-life care, and disparities [16].

Many of these programs have been institutionalized by NQF as multi-stakeholder Standing Committees in topical areas. Standing Committees are charged with reviewing submitted measures and recommending them for endorsement to NQF’s Consensus Standards Approval Committee, which considers all measures recommended for NQF endorsement.

Conflict of Interest Scandal

In 2014, the patient safety movement was rocked by its first major scandal when Charles Denham reached an out-of-court settlement with the US Department of Justice for receiving over $11 million from a medical products company, CareFusion, to promote its products [17, 18]. Denham had served as co-chair of various NQF committees that produced safe practices reports dating back to 2003; he was chair of the Leapfrog Group’s Safe Practices Committee and editor of the Journal of Patient Safety. Denham’s exposure was a blow to the entire patient safety community, but NQF suffered by far the greatest fallout.

The leadership of NQF was taken completely by surprise. Despite NQF’s strict conflict of interest policies, Denham had not disclosed his commercial ties. NQF immediately severed its ties to Denham and his foundation, TMIT.

Denham had first come under suspicion at NQF in 2009, when staff and committee members raised concerns after he lobbied the committee to insert into Safe Practice 22 (surgical site infection prevention) a specific recommendation to use a chlorhexidine gluconate 2% and isopropyl alcohol solution as the skin antiseptic preparation, i.e., CareFusion’s ChloraPrep. After investigation, the recommendation was replaced with a more generic one, and Denham was removed from his co-chairmanship of the Safe Practices Committee, but no one knew he was being paid by CareFusion.

There was another problem. For years, Denham had been providing substantial financial support to NQF. Much of the staff work for the Safe Practices Committee was supplied gratis by Denham’s “nonprofit” company, Health Care Concepts. Between 2006 and 2009, this organization made grants totaling $725,000 to NQF.

When the scandal broke, NQF officials said that Denham had never reported his conflicts, despite a specific requirement that all members do so. After his firing, NQF took immediate steps to strengthen its processes for ensuring the integrity of quality measures and safe practices, and it reviewed all of the standards set by the committee Denham had co-chaired. It also established a policy of not accepting money from funding organizations whose leaders sit on its committees. Denham was also relieved of his editorship of the Journal of Patient Safety and his leadership of a committee of the Leapfrog Group.

Conclusion

NQF is one of the few healthcare organizations defined as consensus-based by the National Institute of Standards and Technology, part of the Department of Commerce. This status allows the federal government to rely on NQF-defined measures or healthcare practices as the best, evidence-based approaches to improving care. Because they must meet rigorous criteria, NQF’s endorsed measures are trusted and used by the federal government, states, and private sector organizations to evaluate performance and share information with patients and their families.

NQF’s prompt and transparent response to the Denham affair confirmed its legitimacy as a standard setter. It has continued to expand its role as envisioned by Kizer: to promote effective care, achieve better outcomes, improve patient safety, and improve access to care through rigorous measures and the collection and analysis of data.

Working together, NQF and AHRQ became the institutional foundation that permitted patient safety to advance both as a science and in practice. They represent the ideal of a public-private partnership where collaboration, commitment, leadership, and good will produce powerful and important change.