Mistakes and Complexity in Health Care

John R. Grout, Campbell School of Business, Berry College, Mt. Berry, Georgia 30149-5024, phone: (706) 238-7877

Draft: This paper is a work in progress. Suggestions that would improve the paper are gratefully received.


Hinckley and Barkan’s conformance model predicts defect rates using three sources of defects: variance, mistakes, and complexity. Variance (and statistical process control) has been studied extensively, both in general and for health care. Mistakes and complexity have not been as widely addressed. This paper 1) presents evidence that demonstrates the importance of mistakes in health care processes, 2) discusses an approach to remediating mistakes, and 3) proposes an approach to assessing changes to a health care provider’s processes based on their relative priority and impact on complexity.


In the last decade, many firms, including health care organizations, have mounted substantial efforts to improve quality and customer satisfaction. These efforts have centered on a corporate culture of employee empowerment and involvement, decision making based on data, and statistical process control (SPC). More recently, some of these firms have focused their efforts on documenting their processes in compliance with ISO 9000-based quality standards.

While these efforts have been important and effective in many cases, the framework for thinking about quality has been incomplete. This is particularly true when the definition of quality is the conformance- or manufacturing-based definition [Garvin 1988].

Recent work by Hinckley and Barkan [1995] identifies three causes of non-conformities, or defects: variance, mistakes, and complexity. Add to this the possibility of reducing defects through cultural means (incentives, awareness, driving out fear, etc.), and there are four distinct areas that must be addressed to achieve the single-digit defects per million opportunities sought in today’s highly competitive business environment. Of these, only two are widely addressed in current quality practice: culture and variance. Variance as used here is statistical variance, usually managed with statistical tools like SPC, design of experiments (DOE), and acceptance sampling.


Hinckley and Barkan [1995] and Chase and Stewart [1994] both argue that variance-based statistical tools for controlling a process are not well suited to detecting mistakes caused by human error. Human error will often be classified as a common cause, not a special cause, because it tends to be rare and intermittent. The impact of human errors on estimators of the process average and variance is likely to be small and to go undetected by sampling. Their impact is substantial, however, when quality goals are in the range of single-digit defects per million. Rook [1962] found that human errors in experimental settings are likely to reach nearly 300 defects per million for relatively simple operations. Leape [1994] found that errors are a much larger problem than that in the health care industry: approximately two percent of patient days (20,000 errors per million) involve an adverse drug reaction of some kind. McClelland, McMenamin, Moores, and Barbara [1996] report that individuals are 30 times more likely to die from human errors in the transfusion process than from the more highly publicized risk of receiving HIV-tainted blood.

The approach to error prevention in health care (and elsewhere) has relied on individuals not making errors [1]. The presumption has been that if errors occur, it indicates a lack of vigilance and determination by the individual. Similar approaches were, and often still are, typical in industrial settings. The reflex among managers to exhort workers to "be more careful" is still common.

In quality management, it is often asserted that 85% of problems are attributable to "systems" outside the workers' control and that only 15% are attributable to workers. This has led managers to focus on improving systems instead of blaming workers for results that are out of their control. This approach is also appropriate for human errors. Donald Norman urges us to "change the attitude toward error. Think of an object's user as attempting to do a task, getting there by imperfect approximations. Don't think of the user as making errors; think of the actions as approximations of what is desired" [1989].

A set of strategies for reducing mistakes and human error was developed at Toyota Motor Company by an industrial engineer named Shigeo Shingo [1986]. These strategies rely heavily on the use of poka-yoke (pronounced POH-kah YOH-kay) devices. Poka-yoke is Japanese for mistake-proofing. Poka-yoke devices are simple mechanisms that either prevent errors from occurring or make errors obvious before serious consequences result.

Poka-yoke Framework

Poka-yoke devices have three attributes: an inspection method, a setting function, and a regulatory function. Each attribute is discussed in detail below.

Inspection methods. Shingo identified three types of inspection: judgment inspection, informative inspection, and source inspection. Judgment inspection sorts out defects after they occur; there is broad consensus that this type of inspection should be discouraged.

Informative inspection is an inspection of the products produced by the process. Information from these product inspections is used as feedback to control the process and prevent defects. Control charts are one form of informative inspection. Shingo’s successive checks and self-checks are alternative forms: in successive checks, each operation inspects the work of the prior operation; in self-checks, workers assess the quality of their own work. Informative inspections occur "after the fact."

Source inspection creates and uses feed-forward information to determine "before the fact" that conditions for error-free production exist. Norman [1989] refers to this type of device as a "forcing function" because these devices are often designed to prevent erroneous actions from occurring. Source inspection is preferred to informative inspection.

Source inspection, self-checks, and successive checks each involve inspecting 100 percent of the process output. In this sense, zero quality control is a misnomer. These inspection techniques are intended to increase the speed with which quality feedback is received. Although every item is inspected, Shingo was emphatic that the purpose of the inspection is to improve the process and prevent defects, not to sort out defects (although in some cases that may also be an outcome) [Shingo, 1986, p. 57]. Shingo believed that source inspection is the ideal method of quality control, since the conditions for quality production are assured before the process step is performed. Self-checks and successive checks should be used when source inspection cannot be done or when the process is not yet well enough understood to develop source inspection techniques.

Setting Functions. A setting function is the method used to perform an inspection. Chase and Stewart [1995] identify four setting functions: 1) physical, 2) grouping and counting, 3) sequencing, and 4) information enhancement. Physical methods determine whether defects or problems exist based on the presence or absence of physical contact with a sensing device. The small bevel on one corner of a 3.5-inch diskette, combined with a stop in the computer’s disk drive, eliminates the possibility of the disk being inserted incorrectly. The grouping and counting method uses counting or measuring to ensure no errors have occurred; L.L. Bean uses product weight information and an electronic scale to ensure that orders are complete and correct. Sequencing methods check that a standard sequence of actions occurs: in a car, the key must be switched on before the car is shifted out of park, and the car must be shifted back to park before the keys can be removed. Information enhancement methods provide or preserve information that would not otherwise be available; restaurants use pagers to allow patrons to stroll and shop without fear of not hearing that their table is ready.

Regulatory Functions. There are two regulatory functions: 1) warning functions and 2) control functions. The bells, buzzers, and warning lights in automobiles are warning functions. Their purpose is to warn that an error has occurred or is about to occur. Control functions are more restrictive than warning functions. They actually keep errors from occurring by stopping the process or in some cases correcting the process automatically. A car's gearshift mechanism is an example of a control function. The car cannot be shifted out of park unless the ignition key is inserted and turned to the on position.


The fact that a patient is 30 times more likely to die as a result of a human error than from HIV-tainted blood, along with the startling number of medication errors that occur in hospitals, indicates that mistake prevention is critical.

To further demonstrate the importance of mistakes in the transfusion process, consider Figure 1. It shows a flowchart of the blood transfusion process labeled with the types of process errors that are possible. This chart and the process failure modes were provided by doctors studying transfusion medicine at the University of Texas Southwestern Medical School. The majority of these errors can be grouped into four categories.

                                             Table 1.

Error category                                                            Percentage of items
------------------------------------------------------------------------ -------------------
Identifying & matching patients with their procedures and materials
Information corruption through labeling, recording & data entry errors
Ensuring availability of relevant information or information transfer
Ensuring relevant information is used
Other errors


The errors in these four categories are human errors. They are unlikely to be common enough to be effectively managed using statistical descriptions of variance. These categories can be characterized as identity-specific operations: matching patients with their procedures and materials, moving information accurately through space and time, and ensuring that relevant information is available to and used by service providers. The exercise of matching process steps with specific individuals, and matching inputs with their uniquely acceptable recipients, is pervasive throughout much of health care practice. Mistake-proofing these types of errors differs from the commonly implemented mistake-proofing devices identified in the literature [Shingo 1986, Nikkan Kogyo Shimbun 1988, Bayer 1994]. These mistakes are not, however, limited to health care. They also exist in business environments where traceability is important or where parts are not fully interchangeable, like some remanufacturing operations.

Medical Applications Examples

Templates have been used on a limited basis as part of the blood donation process. The templates are laid over patient forms so that improperly checked boxes and omitted data become more obvious.

Surgeons use instrument trays with indentations for all of the instruments required in a procedure. The tray ensures that all of the instruments are present. Replacing every instrument in the tray provides a quick check that all instruments have been removed before the patient’s incision is closed.

The computer system at Brigham and Women’s Hospital that is used to process doctors’ prescriptions [ABC News, 1995] is a mistake-proofing device. Errors are reduced by allowing "point and click" selection of common dosages. The computer checks each prescription for possible overdoses (if manually entered), allergic reactions, and interactions with other medicines the patient is taking.

Blood-Loc is a combination-lock-secured disposable bag used to deliver a unit of blood to a specific patient. The combination for the lock is unique and available only on the patient’s wrist ID. Blood-Loc ensures that positive identification occurs before the blood can be unlocked and transfused. Wenz and Burns [1991] provide a detailed description and indicate that specific erroneous transfusions were prevented by the Blood-Loc system.


Hinckley and Barkan [1995] point out that complexity is also a source of non-conformities. They use the design for assembly (DFA) techniques developed by Boothroyd and Dewhurst [1987] to measure complexity. Hinckley [1993] shows that this measure of complexity is correlated with actual operation times in assembly processes and inversely correlated with non-conformities in industrial settings.

Reducing complexity essentially eliminates certain opportunities for errors to occur. Consider the experience of Weber Aircraft Operations, which manufactures seats for passenger airliners. Weber has implemented numerous mistake-proofing devices to ensure that the tubular aluminum frame of the seat is defect free. Recently they have started making some of these parts by milling a single piece of aluminum on a CNC machine, a process favored by DFA analysis. The process costs less and avoids all of the mis-cutting and welding problems associated with tubular aluminum.

In Motorola’s quality program, the goal is 3.4 defects per million opportunities (DPMO). On any single product, there can be a large number of opportunities for defects to occur. The probability of producing a defect-free product is the joint probability of all the opportunities being conforming. Reducing the opportunities for defects eliminates factors from the joint probability calculation; it is equivalent to ensuring ongoing perfect quality for that opportunity.
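The joint-probability argument can be sketched in a few lines of Python. This is only an illustration: the 3.4 DPMO figure comes from the text, but the opportunity counts below are hypothetical.

```python
# Probability that a product is defect-free when each opportunity
# conforms independently at the 3.4 DPMO level.
DPMO = 3.4
P_CONFORM = 1 - DPMO / 1_000_000   # per-opportunity conformance probability

def p_defect_free(opportunities: int) -> float:
    """Joint probability that every opportunity on the product conforms."""
    return P_CONFORM ** opportunities

# Eliminating opportunities (e.g., through reduced complexity) removes
# factors from the joint probability and raises first-pass yield:
print(p_defect_free(1000))   # about 0.9966
print(p_defect_free(500))    # about 0.9983
```

Even at 3.4 DPMO, a product with a thousand opportunities fails noticeably more often than one with five hundred, which is why removing opportunities is such a powerful lever.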


Many of the process changes that occur in health care are the result of adverse outcomes and the corrective actions that follow. Many of these corrective actions become changes to standard operating procedures (SOPs). As corrective actions accumulate, the SOPs can become very complicated. On occasion, the SOPs are so complicated that health care workers consciously circumvent them using unapproved "work arounds." In many cases the complexity of the SOP is the cause of the workers’ inability to comply with it and of the resulting adverse outcomes. Increased complexity of a procedure may increase the chance of error by making the basics of the process less obvious, thus increasing the opportunities to commit errors.

This suggests that the impact of a change to an SOP should be evaluated based on its effect on the system and on the criticality of the adverse effects of the non-conformity. It is conceivable that SOPs are changed as part of a corrective action where the outcome is not severe and the occurrence of the non-conformity is rare. In such cases, the cost to the system from the added complexity may far outweigh the benefit of avoiding the outcome. And as the number of modifications to an SOP increases, the complexity may increase exponentially rather than linearly.
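One way to see how complexity could grow exponentially rather than linearly is the following sketch (an illustration, not a model from the paper): if each of n modifications to an SOP may or may not apply to the case at hand, the worker faces up to 2**n distinct procedure variants, while the modifications themselves accumulate only linearly.

```python
# Illustration: n SOP modifications that can each apply or not apply
# yield up to 2**n distinct procedure variants a worker might face.
def procedure_variants(n_modifications: int) -> int:
    """Upper bound on distinct procedure variants."""
    return 2 ** n_modifications

for n in (1, 5, 10):
    print(n, procedure_variants(n))   # 1 -> 2, 5 -> 32, 10 -> 1024
```

Ten modifications produce over a thousand possible variants, which suggests why workers resort to "work arounds" long before the SOP's author expects them to.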


In health care, indices of complexity like those used in DFA have yet to be developed. A specific complexity measure is not proposed here; additional research in this area is needed. In the short term, subjective predictions of the time required to perform the process according to the SOP can serve as a measure of complexity (the DFA measures are, at bottom, refined means of predicting operation times). Changes that increase the time required are considered more complex.

The preliminary estimate of increased complexity must be compared with the risk priority number (RPN) of the adverse outcome. The concept of the RPN comes from failure mode and effects analysis (FMEA) as presented by the Automotive Industry Action Group (AIAG) [1995]. In FMEA, the RPN is used to determine which failure modes deserve the most attention and where preventive measures should be focused first. FMEA allows many failure modes to be laid out on a worksheet and considered and prioritized simultaneously. Corrective action programs do not have this luxury: corrective actions follow an arrival process and must be considered serially. As a result, proper prioritization and measured responses are currently difficult to administer. Using the RPN along with a corresponding response policy would allow prioritization of the ongoing arrivals of corrective action requests. Responding to every corrective action request may only be possible if each response is a matter of expediency, and such expedient responses may make the system unduly complex or may not fully address the cause. The corrective action that states "the worker has been reprimanded and retrained" is common but not effective. The admonition to "be more careful" fails because humans cannot sustain heightened vigilance over the long term.

The RPN is the product of three ratings: the criticality of the outcome, the likelihood of occurrence, and a detection rating that is higher when the non-conformance is less likely to be detected and remedied before adverse effects result. These three values are assessed subjectively along firm-specific guidelines; the values in the AIAG examples range from one to ten.
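The RPN computation can be sketched as follows. The ratings shown are hypothetical; only the one-to-ten scales and the three-factor product come from the AIAG material cited above.

```python
# Risk priority number: the product of three 1-10 ratings. Following the
# AIAG convention, the detection rating is HIGHER when the
# non-conformance is LESS likely to be caught before causing harm.
def rpn(criticality: int, occurrence: int, detection: int) -> int:
    for rating in (criticality, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("each rating must be between 1 and 10")
    return criticality * occurrence * detection

# A severe, hard-to-detect failure scores high even when it is rare:
print(rpn(criticality=9, occurrence=2, detection=8))   # 144
```

The multiplicative form means that no single low rating can mask a failure mode that is extreme on another dimension, which is what makes the RPN useful for triaging corrective action requests serially.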

A response policy that sets thresholds should be created by each firm over time to determine how to respond to various values of the RPN. Cutoff points like those shown below can be established:

RPN rating                       Response level required
-------------------------------  ---------------------------------------------------
RPN < 250 and criticality < 3    No response
                                 Minimal response
                                 Respond without complexity increase
                                 Immediate response followed by complexity reduction
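A sketch of the policy lookup, encoding the one cutoff given above (each firm would supply its own remaining thresholds):

```python
# Response-policy check using the single cutoff from the table above:
# an RPN below 250 with criticality below 3 requires no response.
def response_required(rpn_value: int, criticality: int) -> bool:
    """True unless the 'no response' row of the policy applies."""
    return not (rpn_value < 250 and criticality < 3)

print(response_required(rpn_value=144, criticality=2))   # False
print(response_required(rpn_value=400, criticality=7))   # True
```

Note that a low RPN alone is not enough: a highly critical failure mode still demands a response even when rarity and detectability keep its RPN small.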


The use of the RPN should have the effect of an ongoing Pareto-style analysis that determines which improvements should take priority. It allows this analysis to be done serially as corrective actions are requested and allows measured responses to the events that occur.

Many of the health care processes that have developed over time are tremendously complex. A lesson learned from SPC is that tampering with systems should be avoided. Even when a system is studied extensively, the bounded rationality of managers may result in situations where they make changes to the system for which the outcome is not completely understood. For very complex systems, managers may never be perfectly sure their changes are not tampering with the system. The use of RPN can provide an additional hurdle that changes must clear to avoid tampering.


Non-conformances in health care and elsewhere are the result of variance, mistakes, and complexity. The importance of mistakes in the context of health care has been demonstrated using data on a blood transfusion process and its potential failure modes. Applications of mistake-proofing, or poka-yoke, in health care have been presented along with examples. The management of complexity has been considered, including a proposed tool for prioritizing corrective actions so as to avoid increasing complexity and making unwarranted changes to the system.


References

         ABC News, 1995. How to survive the hospital. 20/20 transcript #1527. Denver: Journal Graphics (July 7).

         Automotive Industry Action Group, 1995. Process Failure Mode Effect Analysis.

         Bayer, P.C. 1994. Using Poka Yoke (Mistake Proofing Devices) to Ensure Quality. IEEE 9th Applied Power Electronics Conference Proceedings 1:201-204.

         Boothroyd, G. and Dewhurst, P. 1987. Product Design for Assembly Handbook. Wakefield, RI: Boothroyd Dewhurst, Inc.

         Chase, R.B., and D. M. Stewart. 1995. Mistake-proofing: Designing errors out. Portland, Oregon: Productivity Press.

         Chase, R. B., and D. M. Stewart. 1994. Make your service fail-safe. Sloan Management Review (Spring): 35-44.

         Garvin, David A. 1988. Managing Quality : The Strategic and Competitive Edge. New York: Free Press.

         Hinckley, C.M. 1993. A Global Conformance Quality Model: A New Strategic Tool for Minimizing Defects Caused by Variation, Error, and Complexity. (Dissertation) Ann Arbor Michigan: UMI Dissertation Services.

         Hinckley, C.M. and Barkan, P. 1995. The role of variation, mistakes, and complexity in producing nonconformities. Journal of Quality Technology 27(3):242-249.

         Leape, Lucian L. 1994. Error in medicine. Journal of the American Medical Association 272(23): 1851-1857.

         McClelland, D.B.L., McMenamin, J.J., Moores, H.M., and Barbara, J.A.J. 1996. Reducing risk in blood transfusion: process and outcome. Transfusion Medicine 6: 1-10.

         Nikkan Kogyo Shimbun/Factory Magazine, (Ed.). 1988. Poka-yoke: Improving product quality by preventing defects. Portland, Oregon: Productivity Press.

         Norman, D.A. 1989. The design of everyday things. New York: Doubleday.

         Rook, L.W. 1962. Sandia Corporation report SCTM 93-62(14).

         Shingo, Shigeo. 1986. Zero quality control: source inspection and the poka-yoke system. trans. A.P. Dillion. Portland, Oregon: Productivity Press.

         Wenz, B. and Burns, E.R. 1991. Improvement in transfusion safety using a new blood unit patient identification system as part of safe transfusion practice. Transfusion 31(5): 401-403.






White Paper
2007 John Grout