
Summary

The American Society for Quality defines quality engineering as “the body of knowledge contributing to the creation of quality in products and services that lead to customer satisfaction” (Krishnamoorthi, 2005, p.4). Over time this body of knowledge has been expanded and tailored to accommodate various products and processes. Simple inspection of the assembly line has evolved into statistical sampling and modeling to prevent failures instead of only finding them after they occur. This paper examines two major components of quality engineering: quality control and quality assurance. These components are broken down into major topics that better define each area. Material from a variety of well-known and respected sources was chosen to provide background and examples for each area. In this paper, the purpose, history, tools and methods, historical literature, and current research topics are examined and presented. The goal is to provide the reader with a clear and succinct idea of what quality engineering is and how it has evolved over time.

Purpose/Objective

“High levels of quality are essential to achieve Company business objectives. Quality, a source of competitive advantage, should remain a hallmark of Company products and services” (Manghani, 2011, p.34). “Quality does not relate solely to the end products and services but also relates to the way the Company employees do their job and the work processes they follow to produce products or services” (Manghani, 2011, p.34). “Quality Engineering is a set of operational, managerial, and engineering activities that a company uses to ensure that the quality characteristics of a product are at the nominal or required levels and that the variability around these desired levels is minimum” (Montgomery, 2009, p.8). The journal Quality Engineering focuses mainly on the topics of quality control and quality assurance management; therefore, for the purposes of this paper the focus will be on quality control and quality assurance, as both play essential roles within quality engineering.

“Quality control is focused on fulfilling quality requirements; it encompasses the operational techniques and activities undertaken within the quality assurance system to verify that the requirements for quality have been fulfilled” (Manghani, 2011, p.35). Some processes and techniques that have shaped quality control are Deming’s 14 points, benchmarking, and continuous improvement. “Deming delineated his revolutionary management philosophy for establishing quality, productivity, and competitive position” (Bauer, 2002, p.21). “The 14 points include creating a constant purpose for the organization, eliminating reliance on inspection, constantly improving systems, increasing training, and instituting leadership” (Bauer, 2002, p.21). “Benchmarking is an evaluation technique in which an organization compares its own performance for a specific process with the ‘best practice’ performance of a recognized leader in a comparable process” (Bauer, 2002, p.88). “Continuous Quality Improvement (CQI) is a management approach to improving and maintaining quality that emphasizes internally driven and relatively constant assessments of potential causes of quality defects, followed by action” (Bauer, 2002, p.94). “Quality assurance, on the other hand, is focused on providing confidence that quality requirements are fulfilled” (Manghani, 2011, p.35).
Some tools used to ensure quality assurance are failure testing, Japanese influences such as Taguchi and Ishikawa on statistical control, and total quality management. Failure, as defined in the glossary of The Quality Improvement Handbook (2002), is the inability of an item, product, or service to perform required functions on demand due to one or more defects. Failure testing subjects a product to stresses intended to make it fail in order to determine failure thresholds and tolerances. “Statistical process control involves gathering data on the product or services as it is being created, developing the range of acceptable product (upper and lower control limits) for the process, graphically charting the information on one of several types of charts, and following the progress of the process to detect unwanted variation” (Bauer, 2002, p.13). “Total quality management (TQM) is an approach to quality management that emphasizes a thorough understanding by all members of an organization of the needs and desires of the ultimate product/service recipient” (Bauer, 2002, p.120).

History/Background

The origin of quality engineering is deeply rooted in the progressive nature of commercial industry during the late 19th and early 20th centuries. “The quality profession as we know it today had its start when Walter Shewhart of Bell Laboratories developed a system for measuring variance in production systems, known as statistical process control” (Bauer, 2002, p.4). “During WWII the U.S. War Department hired Dr. W. Edwards Deming to teach statistical process control to the defense industry” (Bauer, 2002, p.4). “Quality control and statistical methods were critical factors in the successful war effort” (Bauer, 2002, p.4). “Unfortunately, most American companies stopped using these tools after the war” (Bauer, 2002, p.4).

Deming’s 14 Points/Japanese Influence

“Quality management in the modern sense was not discussed until after World War II” (Bisgaard, 2007, p.666). Both Deming and Juran were invited to consult for Japanese industry on the implementation of quality control. Both expressed concern about the futility of quality control when management showed little interest in managing the quality function, and also because systems were based mainly on inspection. It was in the 1950s that Deming and Juran decided it was critical to have a “comprehensive system for managing quality.” “[Deming] proposed a new structure in his renowned 14 points of management” (Bauer, 2002, p.21). “The 14 points include creating a constant purpose for the organization, eliminating reliance on inspection, constantly improving systems, increasing training, and instituting leadership” (Bauer, 2002, p.21). “Deming sets the stage for initiating quality efforts in an organization and is as such essential” (Bisgaard, 2007, p.666). However, “Juran was more hands-on, prescriptive and developed comprehensive and detailed practical guidelines for the development and implementation of a quality management system” (Bisgaard, 2007, p.666). “Many Japanese industrialists, engineers, and scholars such as Ishikawa, Mizuno, and Taguchi, have made valuable contributions to the ‘body of knowledge’” (Bisgaard, 2007, p.667).

Benchmarking

Strategic planning began to gain popularity in the 1960s. Benchmarking was a tool used to support strategic planning.
“Benchmarking can be used to study any company that may make a similar product or perform a similar process or activity, whether it is in the benchmarking team’s industry or not” (Boxwell, 1994, p.15). “Benchmarking has been growing more popular in the United States since Xerox began doing it in the late 1970s” (Boxwell, 1994, p.15). Boxwell (1994), in Benchmarking for Competitive Advantage, wrote:

According to Xerox, the benefits to be derived from benchmarking are quite obvious. The original framers of the Malcolm Baldrige National Quality Award included benchmarking among criteria for the award, which has really moved it from the arcane to the best seller list of managerial tools available today. If benchmarking is a fad, it is fooling a lot of people, because some of the best and brightest in the United States are pushing it as a means of helping us regain our competitive edge.

Continuous Improvement

The word “kaizen” means “continuous improvement” in Japanese. “In 1962, Tetsuichi Asaka and Kaoru Ishikawa developed ‘quality circles’” (Mika, 2006, p.6). These quality circles resulted in the first kaizen events. “Quality circles focused on solving problems that interrupted production throughout the plant” (Mika, 2006, p.6). “The quality circle was a cross-functional team charged with analyzing and finding the root cause of a problem, formulating a solution, and then implementing it” (Mika, 2006, p.6). It was Masaaki Imai who founded the Kaizen Institute in 1985, committed to promoting kaizen methods across the globe.

Failure Testing

ISO/IEC 17025 is the main standard used for failure testing by testing and calibration laboratories and was originally issued by the International Organization for Standardization in 1999. This standard applies directly to the organizations that produce testing and calibration results for use in quality assurance.

Statistical Quality Control

“After the War, Genichi Taguchi used and promoted in Japan statistical techniques for quality from an engineer’s perspective, rather than that of a statistician” (Goh, 1993, p.185). “The generation of Taguchi faithfuls first emerged in the early 1980s among non-statisticians” (Goh, 1993, p.195). “The main statistical contents of Taguchi methods may be appreciated from the following: problem formulation, experimental design, data analysis, special applications, and finally some illustrative examples” (Goh, 1993, p.188). In addition to the Taguchi methods, “Shewhart’s techniques taught that work processes could be brought under control by determination of when a process should be left alone and when intervention was necessary” (Bauer, 2002, p.29). Shewhart developed the control chart in 1924, which was meant to “track performance over a period of time allowing workers to monitor their work and predict when they were about to exceed limits and possibly produce scrap” (Bauer, 2002, p.29).

Total Quality Management (TQM)

“Studies show that the majority of TQM programs are based partly on Deming’s approach to quality management” (Martin, 1993, p.13). Deming’s philosophy of quality management is embodied in his “14 points.” Deming teaches us that “the key to product and service quality in the human services lies in controlling the processes by which customers are served” (Martin, 1993, p.14). We also learn that in order to fully grasp the tools of TQM, we must adopt the philosophy. While major contributors to this topic, including William Deming, Joseph Juran, and Philip Crosby, each developed their own philosophy toward management and quality, six themes are shared among them and are considered the foundation of TQM (Martin, 1993, p.23):

1. Ensure quality as a primary organizational goal.
2. Quality should be determined by an organization’s customers.
3. Customer satisfaction drives organizations.
4. Study and reduce variation in processes.
5. Change continuously using teams and teamwork.
6. Ensure top management commitment and employee empowerment.

Design/Tools/Methods

Japanese Influence

After World War II, Dr. Genichi Taguchi was employed to develop a method for improving Japan’s telephone system (Roy, 2010, p.9). Taguchi found that in order to improve this system, quality should be built in off line, at the product and process design stage, through methods such as design of experiments (DOE), rather than by relying on on-line inspection during production. The focus on an off-line process allowed for a reduction in the number of variables that deviate from a target value. The Taguchi method holds three core ideas (Roy, 2010, p.10):

1. Quality should be designed into the product and not inspected into it.
2. Quality is best achieved by minimizing the deviation from a target. The product should be designed so that it is immune to uncontrollable environmental factors.
3. The cost of quality should be measured as a function of deviation from the standard, and the losses should be measured system-wide.
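Idea 3 is usually formalized through Taguchi's quadratic loss function, which charges a cost for any deviation from the target rather than only for out-of-specification parts. The short Python sketch below illustrates the calculation; the target, tolerance, and repair cost are hypothetical values chosen for illustration and are not figures from Roy (2010).

```python
# Illustrative sketch of Taguchi's quadratic loss function, L(y) = k * (y - m)^2.
# The target m, tolerance delta, and repair cost A below are hypothetical values
# chosen only to show the calculation; they do not come from the paper.

def taguchi_loss(y, target, k):
    """Loss attributed to a unit whose measured value is y."""
    return k * (y - target) ** 2

# Suppose a part has a target of 10.0 mm, a customer tolerance of +/-0.5 mm,
# and a unit that just reaches the tolerance limit costs $6 to repair.
target = 10.0          # nominal (target) value m
delta = 0.5            # half-width of the customer tolerance
cost_at_limit = 6.0    # loss A incurred at y = m +/- delta
k = cost_at_limit / delta ** 2   # loss coefficient k = A / delta^2

for y in (10.0, 10.2, 10.5, 10.8):
    print(f"y = {y:4.1f} mm -> loss = ${taguchi_loss(y, target, k):.2f}")
```

Because the loss grows with the square of the deviation, two units that are both within specification can still carry very different quality costs, which is the system-wide view of loss described above.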
The successes of his method led to Taguchi receiving the Deming Prize (Mears, 1995, p.196). The method has been widely adopted among many industries and companies.

Benchmarking

Developing a method for benchmarking usually includes some form of the following steps (Mears, 1995, p.156):

1. Identify customer needs.
2. Identify critical indicators of successfully meeting customer needs.
3. Identify and collect data on best-in-business products or services to use as a baseline.
4. Compare critical indicators of the current product to the baseline.

The most widely used guide for benchmarking is Robert Camp’s text, Benchmarking: The Search for Industry Best Practices That Lead to Superior Performance. In this work, Camp uses a 12-step approach for successful benchmarking. These methods were first adopted in 1989 and are still used today. Another major text offering a structure for successful benchmarking is Kaiser Associates’ Beating the Competition: A Practical Guide to Benchmarking. Kaiser Associates is an international strategy consulting firm that offers a method “tried and true in industry” and lessons learned when benchmarking (Kaiser, p.29).

Continuous Improvement

The Japanese approach to continuous improvement is gradual: small but significant improvements are made continuously in a process. The American approach is more dramatic; improvements in quality are often made in “plateaus.” One example of this is a manager implementing a major staffing reorganization (Reid, 2011, p.148). Another method for continuous improvement was developed by Walter Shewhart: the Plan-Do-Study-Act (PDSA) method. This method creates a cycle in which the quality engineer continuously identifies problems and creates solutions. The cycle was later modified by William Deming, who renamed the “study” portion of the cycle “check” (Mears, 1995, p.122). Other names for this method include the Deming Wheel and the Shewhart Cycle. An illustration of the original cycle is shown in Figure 1.

Figure 1 – Plan, Do, Study, Act Cycle (Reid 148)

The steps in the cycle are described below (Reid, 2011, p.148):

• Plan – Identify problems. Create a plan for the new process.
• Do – Implement the new process and create documentation.
• Study – Evaluate data; compare to old data.
• Act – Decide if the product or process is improved. Accept or reject the new process.

These four steps can be applied to manage any process (Mears, 1995, p.122). Two major functional tools utilized by quality engineers during continuous improvement are the flowchart and the cause-and-effect diagram, or fishbone diagram. A flow diagram shows the sequence of events involved in a process (Reid, 2011, p.150). It is used to aid in defining customer needs, identifying a methodology, and identifying key quality parameters (Mears, 1995, p.19). A fishbone diagram is aptly named for its shape: the “head” of the fish represents a problem, and potential causes are listed on multiple levels using a branching technique (Reid, 2011, p.150).
This diagram was developed by Kaoru Ishikawa in 1982 as an aid in deciding a focus for finding the causes of problems and selecting the data that needs to be collected to find a solution (Ledolter, 1999, pp. 59-61). Illustrations of the flowchart and fishbone diagram are shown in Figures 2 and 3, respectively.

Figure 2 – Flowchart (Reid 151)

Figure 3 – Fishbone Diagram (Reid 152)

Other statistical tools used for continuous improvement include modeling, control charts, and acceptance analysis, and are studied in subsequent sections.

Failure Testing

The most useful tools for failure testing, besides the stressing equipment (which varies by product and test), are the processes and documentation for these tests. These processes are found in various standards that ensure the test was carried out in a successful manner. One such standard is ISO/IEC 17025. This documentation includes methods and validation procedures for calibration and testing and is based on benefits found in real laboratory experiments (Honsa, 2003, p.1038). Other organizations that produce standards for failure testing include the National Institute of Standards and Technology (NIST) and the American Society for Testing and Materials (ASTM).

When measuring fail points in a process it is important to utilize calibration. The purpose of calibration is to reduce or eliminate bias when taking measurements. It helps the tester approach absolute measurements and ensures common units are applied across various tests (NIST/SEMATECH e-Handbook, 2.3.1). For example, according to NIST, mass should be measured in kilograms (kg). The reference is a platinum-iridium kilogram stored by the Bureau International des Poids et Mesures in France. This is an example of artifact calibration, where a property is measured relative to a standard artifact (NIST/SEMATECH e-Handbook, 2.3.2).

Statistical Control

“The purpose of statistical control is to guarantee the goodness of results within predictable limits” (NIST/SEMATECH e-Handbook, 2.2.1). A variety of tools are used to aid other quality engineering efforts such as continuous improvement and failure testing. One important tool is the control chart. The control chart was developed by Walter Shewhart and plots the averages of measurements of samples versus time (Krishnamoorthi, 2006, p.185). In this chart, upper and lower control limits are placed to bound the data. If all averaged points lie within the control limits, the process is considered to be statistically in control (Montgomery, 2009, p.182). Control charts are used to determine whether a process is stable, to distinguish usual variability from unusual variability, and to serve as an alarm system pointing to a cause for a problem (Ledolter, 1999, pp. 304-305).
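As a concrete illustration of these control limits, the short Python sketch below computes three-sigma limits for an X-bar chart from subgroup averages and flags any point that falls outside them. The measurement data are invented for illustration; the three-sigma rule on subgroup means follows the standard Shewhart approach described by Montgomery (2009).

```python
# Minimal X-bar control chart sketch (three-sigma limits on subgroup means).
# The subgroup data below are invented solely to illustrate the calculation.
from statistics import mean, stdev

subgroups = [
    [10.1, 9.9, 10.0, 10.2],
    [10.0, 10.1, 9.8, 10.0],
    [9.9, 10.0, 10.1, 9.9],
    [10.4, 10.6, 10.5, 10.7],   # a shifted subgroup the chart should flag
]

xbars = [mean(g) for g in subgroups]            # subgroup averages
grand_mean = mean(xbars)                        # center line
n = len(subgroups[0])                           # subgroup size
# Estimate the spread of the subgroup means from the within-subgroup spread.
# (A production chart would also apply the usual c4 bias-correction factor.)
sigma_within = mean(stdev(g) for g in subgroups)
sigma_xbar = sigma_within / n ** 0.5

ucl = grand_mean + 3 * sigma_xbar               # upper control limit
lcl = grand_mean - 3 * sigma_xbar               # lower control limit

for i, xbar in enumerate(xbars, start=1):
    status = "in control" if lcl <= xbar <= ucl else "OUT OF CONTROL"
    print(f"subgroup {i}: x-bar = {xbar:.3f}  ({status})")
print(f"center = {grand_mean:.3f}, LCL = {lcl:.3f}, UCL = {ucl:.3f}")
```

Running the sketch flags the shifted fourth subgroup, which is exactly the alarm behavior described above.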
Another tool widely used for statistical analysis is the designed experiment. Designed experiments systematically vary the inputs to a process to discover their effects on the output (Montgomery, 2009, p.14). This tool includes factorial design, in which all combinations of the factor levels of the input variables under study are tested. Designed experiments are used primarily to reduce, in a controlled manner, the inputs that cause variability (Ledolter, 1999, p.9). Significant breakthroughs in quality improvement and control often occur using this method (Montgomery, 2009, p.14).

When experiments are not available, it is necessary to make predictions of what a dataset looks like. To do this, modeling using distributions is needed. Models use probability to estimate a distribution of data (Krishnamoorthi, 2006, p.59). Some common distributions include the normal, Poisson, geometric, and chi-squared distributions. Illustrations of distributions of discrete data and continuous data are shown in Figures 4(a) and 4(b), respectively.

Figure 4 – Distributions for (a) discrete data and (b) continuous data (Montgomery 75)

Using statistics to aid in determining overall quality is best exemplified by acceptance sampling. Acceptance sampling is the process of selecting samples from a lot of products for inspection and determining whether the lot is acceptable or defective (Krishnamoorthi, 2006, p.365). Sampling can occur at the supplier (outgoing inspection) or at the customer (receiving inspection) (Montgomery, 2009, p.15). An illustration of outgoing and receiving inspection is shown in Figure 5.

Figure 5 – Examples of Acceptance Sampling (Montgomery 15)

The steps for acceptance testing include (Ledolter, 1999, pp. 228-231):

1. Identify the aim of the survey.
2. Identify the sample and lot to be studied.
3. Develop a method for obtaining the information.
4. Determine the sample size and method of selection.
5. Execute the sampling.

The standard for acceptance testing is MIL-STD-1916. Its purpose is to move toward a preventative approach to quality and to ensure acceptance sampling is performed in a successful manner (Engineers Edge, 2010).
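To make the sampling step concrete, the sketch below implements a simple single-sampling plan: draw n items at random from the lot, count defectives, and accept the lot only if that count does not exceed an acceptance number c. The lot, sample size, and acceptance number are hypothetical; MIL-STD-1916 defines its own tables and switching rules, which are not reproduced here.

```python
# Sketch of a single-sampling acceptance plan: inspect n items from a lot and
# accept the lot if at most c defectives are found. The plan parameters and the
# simulated lot are hypothetical; real plans are taken from standards tables.
import random

def inspect_lot(lot, n, c, rng=random):
    """Return True (accept) or False (reject) for one lot."""
    sample = rng.sample(lot, n)              # random sample without replacement
    defectives = sum(1 for item in sample if item == "defective")
    return defectives <= c

# Simulate a lot of 500 items with a 2% defect rate.
rng = random.Random(42)
lot = ["defective" if rng.random() < 0.02 else "good" for _ in range(500)]

accepted = inspect_lot(lot, n=50, c=2, rng=rng)
print("lot accepted" if accepted else "lot rejected")
```

Plotting the acceptance probability against the true defect rate would give the plan's operating characteristic (OC) curve, which is how such plans are usually compared.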
Total Quality Management

One tool in TQM that exemplifies most of these themes is the supplier/user matrix. In order to define and implement a quality product, one must determine what the customer considers to be a quality product (Martin, 1993, p.27). For this tool the supplier creates a list of quality measures given by the user and applies a weight to each item. Several representatives from the supplier then meet and discuss which departments within the organization are responsible for each quality measure. An item of quality the customer views as a priority will receive more resources from the company to ensure it is met. The matrix also helps management decide whether the company is improving the items of quality the customer considers to be important (Krishnamoorthi, 2006, p.449). Figure 6 shows an example of a supplier/user matrix without weighting; the contributions of each of the company’s departments are shown for each quality measure the customer desires.

Figure 6 – Supplier/user matrix

Historical Literature Review

The development of quality engineering in the US, Japan, and Europe has been marked by several notable publications since the early 20th century. Significant contributors to the QE field have standardized quality engineering with various handbooks on quality control and quality engineering in general. Furthermore, researchers from the late 20th century have helped to add a more statistical approach to the work, noting such keystones as experimental design and the 2^(k-p) fractional factorial. Currently, standards such as ISO 9000 continue to keep the field current.

Shewhart

W. A. Shewhart’s take on quality control is detailed in his book, Economic Control of Quality of Manufactured Product. This book highlights the reasoning behind using a statistical approach to quality based on sampling and control limits. Shewhart defines quality as

…the characteristic or group or combination of characteristics which distinguishes one article from another, or goods of one manufacturer from those of his competitors, or one grade of product from a certain factory from another grade turned out by the same factory (Shewhart, 1931, p.39).

Quality is viewed less as a subjective perception and more as a set of definable characteristics in a product that can be accepted or rejected. Shewhart introduces this scientific basis for control:

Based upon evidence such as already presented, it appeared feasible to set up criteria by which to determine when assignable causes of variation in quality have been eliminated so that the product may then be considered to be controlled within limits. This state of control appears to be, in general, a kind of limit to which we may expect to go economically in finding and removing causes of variability without changing a major portion of the manufacturing process as, for example, would be involved in the substitution of new materials or designs (Shewhart, 1931, p.25).

From a technical perspective, Shewhart’s book outlines a control model built around a probability P associated with a given process and the sample taken from that process. In addition, quality is treated not as a singular ideal value X but as a target value X̄ with a standard deviation σ. Acceptance is not based on an unobtainable specification but on a range of values that are considered satisfactory. For the quality model, the statistic P is defined as the probability that an observed value of the quality characteristic falls within two limits, L1 and L2; these limits act as the control limits, derived from factors such as the mean or standard deviation, that determine the acceptability of a given product. See Figure 7 for the generic distribution.

Figure 7: Basic Frequency Distribution (Shewhart, 1931, p.285)

Shewhart’s work served as the starting point for the control chart, which is now one of the fundamental tools for detecting variation.

Feigenbaum

Armand Vallin Feigenbaum takes a similar approach to the quality field with his 1951 book, Total Quality Control. Unlike Shewhart, however, Feigenbaum looks at quality from an organizational perspective, defining total quality control as

An effective system for integrating the quality-development, quality-maintenance, and quality-improvement efforts of the various groups in an organization so as to enable marketing, engineering, production, and service at the most economical levels which allow for full customer satisfaction. (Feigenbaum, 1983, p.6)

Feigenbaum approaches quality not as a group of isolated factors on a product floor, but as a common set of standards that are accepted at each level of production. One of Feigenbaum’s key principles is that total quality control is implemented through a quality system, which is essentially the most efficient placement of human and technical resources in order to consistently deliver the highest customer satisfaction in the most economical manner. This system incorporates both the technical and managerial aspects of controlling quality and responding to deficient conditions. “Feigenbaum observed that manufacturing was only one element in the long chain of activities involved in producing a new product” (Pyzdek, 2003, p.125). Technically, Feigenbaum is in agreement with Shewhart on the use of sampling, distributions, and control charts. Feigenbaum’s book, however, introduces the concepts of new-design control and design of experiments. New-design control is finding and working to meet the optimum quality standards for cost, safety, and reliability while eliminating potential problems in the process (Feigenbaum, 1983, p.65).

Deming and Japanese Contributors

One of the biggest catalysts for the quality movement of the 20th century was the introduction of statistical methods from the US and Europe to Japan. W. Edwards Deming’s 14 points have been a foundation for quality engineering, but he also worked as a quality consultant to the Japanese and would be influential in starting a quality movement in post-World War II Japan (Montgomery, 2009, p.18). Japanese authors such as G. Taguchi would continue to develop and add to these new approaches to statistical methods.

Genichi Taguchi

Taguchi’s presence in the quality engineering field has marked notable advancements in topics related to the signal-to-noise (SN) ratio and design of experiments. Overall, Taguchi summarizes his philosophy on quality: “functional improvement using tuning should be made under standard conditions only after conducting stability design because tuning is improvement based on response analysis” (Taguchi, 2005, p.57). The SN ratio is based on the concept that for an output y there is a corresponding input signal S (Taguchi, 2005, p.135). Likewise, the fundamental principle of experimental design is to create an equation that models the relationship between a variable and its response. In turn, the main goal is to eliminate noise in order to identify the most significant factor.
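To make the SN ratio idea more concrete, the sketch below computes the nominal-the-best signal-to-noise ratio, SN = 10 log10(mean² / variance), for two hypothetical design configurations; a higher value indicates a response that is strong and stable relative to its variability. The data and configuration names are invented for illustration and are not drawn from Taguchi (2005).

```python
# Sketch of Taguchi's nominal-the-best signal-to-noise ratio:
#   SN = 10 * log10(mean^2 / variance)
# A higher SN means the response is large and stable relative to its noise.
# The measurements for the two hypothetical configurations are invented.
import math
from statistics import mean, variance

def sn_nominal_the_best(values):
    return 10 * math.log10(mean(values) ** 2 / variance(values))

runs = {
    "design A": [9.8, 10.1, 10.0, 10.2, 9.9],   # close to target, low spread
    "design B": [9.0, 11.2, 10.5, 8.7, 10.6],   # same rough average, noisy
}

for name, values in runs.items():
    print(f"{name}: mean = {mean(values):.2f}, SN = {sn_nominal_the_best(values):.1f} dB")
```

In a Taguchi-style experiment, the configuration with the higher SN ratio would be preferred, and tuning toward the exact target would be done afterward, consistent with the philosophy quoted above.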
The Later 20th Century: George E. P. Box

In the late 20th century, George Box contributed to the development of time-series analysis, experimental design, and the 2^(k-p) fractional factorial. Although such works are numerous and were done at different stages of development of QE, most of his significant publications are consolidated in George Tiao’s 1985 compilation, The Collected Works of George E. P. Box.

Juran’s Quality Control Handbook

Juran’s Quality Handbook (originally Juran’s Quality Control Handbook) is another publication that focuses on total quality control. Similar to previous attempts at consolidating the entire quality control field into a single work, this book addresses both the statistical and organizational components of QC. Keeping with the mathematical approach, Juran outlines quality using the diagram shown in Figure 8. Some of the highlights include Juran’s Trilogy, an overview of statistical process control, and discussions of new computational software and standards introduced in the latest edition of the book.

Figure 8: Juran’s Quality-Cost Model (Pyzdek, 2003, p.110)

ISO 9000

One of the major shifts in the QE field during the 1990s was the creation and growing popularity of the ISO 9000 standard issued by the International Organization for Standardization (ISO). As defined by D. C. Montgomery, ISO 9000 is “a generic standard, broadly applicable to any type of organization, and is often used to demonstrate a supplier’s ability to control its processes” (Montgomery, 2009, pp. 23-24). As described by Bergh and Rabbitt, ISO 9000 represents “the common denominator of business quality that is accepted internationally” (Bergh & Rabbitt, 1994, p.9). Although many companies have worked to be certified under this standard, there have been arguments that it is too broad to maintain a standard of quality for highly specialized, technical production companies (Montgomery, 2009, p.24).

Current Research

Monitoring and Adjustment

The original statistical process control (SPC) tools and methods were developed by Walter Shewhart in the early 1900s. These control charts were used only to help monitor a process. Other methods and tools were used to provide adjustments to a process for improvement, such as the exponentially weighted moving average (EWMA) chart (Box, 2009, p.1). Quality professionals have now found that merging monitoring and adjustment techniques into systems that allow for both analyses can improve an overall quality system. These systems usually carry time-series data as opposed to random samples. In this way nonstationary noise, which occurs in processes such as manufacturing when noise accumulates in a system over time, is uncovered, and adjustment and continuous checking can occur (Box, 2009, p.65).

Multivariate Order Statistics

There have recently been improvements in multivariate order statistics modeling. Multivariate order statistics includes regression and control for responses with more than one fluctuating input variable. The main purpose of this analysis is to transform a problem with many unknowns into a single-dimension problem. Historically, methods of multivariate order statistics were created and tailored to the problem at hand, making the design phase mandatory for all projects (Arnold et al., 2009, p.2). Currently there is a movement to develop an all-encompassing method of performing this analysis, especially to support ranked set sampling. By developing a method to explore abstract quality elements, one can analyze many problems instead of only one. In order to do this, research must produce methods based on rankings rather than purely numerical values (Lillestol, 1991, p.293).

Model Validation

As quality systems become more complex and the ability to model these complex systems becomes possible, there is also a need to ensure that the methods used to analyze these systems are accurate and reliable. Most complex systems are analyzed using computer models or other statistical modeling because of the numerous calculations and data storage they require. Usually this data is used to predict a response, and the model is validated with the use of physically sampled data. As systems become more complex, so does the work required to validate these methods. Ongoing work in model validation includes determining the “noise level” of the validation procedure; reduction of this noise level allows for greater validity in the model.
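As a simple illustration of this validation step, the sketch below compares a model's predictions against a handful of physically sampled measurements and reports the root-mean-square error alongside the repeatability of the measurements themselves, a rough stand-in for the validation "noise level." The model, data, and decision threshold are all hypothetical and are meant only to show the shape of such a check.

```python
# Hedged sketch of a model-validation check: compare model predictions with
# physically sampled measurements and compare the prediction error (RMSE) to
# the measurement repeatability. All numbers below are invented for illustration.
from statistics import mean, stdev

def predict(load):
    """Hypothetical model: predicted deflection (mm) as a linear function of load (kN)."""
    return 0.42 * load + 1.5

# Physically sampled validation data: (load in kN, repeated deflection measurements in mm).
validation_data = [
    (10.0, [5.8, 5.7, 5.9]),
    (20.0, [9.7, 9.9, 9.8]),
    (30.0, [14.3, 14.1, 14.2]),
]

errors = []
repeatability = []
for load, measurements in validation_data:
    errors.append(predict(load) - mean(measurements))
    repeatability.append(stdev(measurements))    # spread of the repeated physical samples

rmse = mean(e ** 2 for e in errors) ** 0.5
noise_level = mean(repeatability)

print(f"prediction RMSE = {rmse:.3f} mm, measurement noise = {noise_level:.3f} mm")
print("model error is within the validation noise" if rmse <= 2 * noise_level
      else "model error exceeds the validation noise; revisit the model")
```

Reducing the measurement noise (for example, with better-calibrated instruments) tightens this comparison and, as noted above, lends the validated model greater credibility.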
Conclusion

Quality engineering, as it is known today, is the use of statistical and organizational methods to obtain a desired product by detecting and reducing variation. Considering the development of quality engineering over the 20th century, its significance has grown from the need to improve process performance and product standards. Developers such as Shewhart would lead the way by defining quality as a set of definable characteristics that can be altered by monitoring and controlling processes. These objectives would later define the areas of quality control and quality assurance that compose quality engineering. Similarly, contributors such as Feigenbaum, Deming, and Juran would advance the field with the total quality control concept, in which true quality means not only monitoring processes but also recognizing that the final product is the sum of the entire organization’s performance. This initial idea would lead to the conceptual development of tools and methods such as benchmarking, Total Quality Management, and the quality system. Furthermore, Deming’s work in Japan would lead statisticians such as Taguchi and Ishikawa to contribute to areas such as design of experiments and the kaizen concept of continuous improvement. Currently, international standards such as ISO 9000 embody the contemporary application of quality engineering in how organizations operate. And still, topics such as multivariate order statistics and model validation represent a new frontier for the field as a whole. Although QE has seen steady expansion over the 20th century, it promises to evolve and expand further as the need for product quality increases.

References

Arnold, Barry C., Castillo, Enrique, and Jose Maria Sarabia (2009). On multivariate order statistics: Application to ranked set sampling. Computational Statistics and Data Analysis, 53. Retrieved from EBSCOhost.

Bauer, John E., Duffy, Grace L., and Westcott, Russell T. (2002). The Quality Improvement Handbook. Milwaukee, WI: American Society for Quality.

Bergh, P. & Rabbitt, J. (1994). The ISO 9000 Book (2nd ed.). New York: Quality Resources.

Bisgaard, Soren (2007). Quality management and Juran’s legacy. Quality and Reliability Engineering International, Vol. 23, 665-677. Retrieved from http://ehis.ebscohost.com.proxy.lib.odu.edu/ehost/pdfviewer/pdfviewer?sid=e699f22b-9dfb-4817-9fae-6d41907d5fdf%40sessionmgr4&vid=4&hid=6

Box, George E. P., Luceno, Alberto, and Maria del Carmen Paniagua-Quinones (2009). Statistical Control by Monitoring and Adjustment (2nd ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Boxwell, Robert J. (1994). Benchmarking for Competitive Advantage. New York, NY: McGraw-Hill.

Engineers Edge (2010). MIL-STD-1916. Engineers Edge, LLC. Retrieved from http://www.engineersedge.com/engineering/Engineering_Standards_Specifications

Feigenbaum, A. V. (1983). Total Quality Control. New York: McGraw-Hill.

Goh, T. N. (1993). Taguchi methods: Some technical, cultural and pedagogical perspectives. Quality and Reliability Engineering International, Vol. 9, 185-202. Wiley and Sons. Retrieved from http://ehis.ebscohost.com.proxy.lib.odu.edu/ehost/pdfviewer/pdfviewer?sid=115de9ea-e963-4e98-9f91-645e516892fe%40sessionmgr12&vid=3&hid=6

Honsa, Julie D. and Deborah A. McIntyre (2003). ISO 17025: Practical benefits of implementing a quality system. Journal of AOAC International, 86(5), 1038-1044.

Johnson, Perry L. (2000). ISO 9000: The Year 2000 and Beyond (3rd ed.). New York, NY: McGraw-Hill.

Juran, J. M. (1999). Juran’s Quality Handbook. New York: McGraw-Hill.

Krishnamoorthi, K. S. (2006). A First Course in Quality Engineering. Upper Saddle River, NJ: Pearson Prentice Hall.

Ledolter, Johannes and Claude W. Burrill (1999). Statistical Quality Control: Strategies and Tools for Continual Improvement. New York, NY: John Wiley & Sons, Inc.

Lillestol, Jostein (1991). Multivariate statistical methods for quality creation: A review. Total Quality Management, 2(3). Retrieved from EBSCOhost.

Manghani, Kishu (2011, January-March). Quality assurance: Importance of systems and standard operating procedures. Perspectives in Clinical Research, Vol. 2, Issue 1. Retrieved from http://www.picronline.org

Martin, Lawrence L. (1993). Total Quality Management in Human Service Organizations. Newbury Park, CA: Sage Publications.

Mears, Peter (1995). Quality Improvement Tools and Techniques. New York, NY: McGraw-Hill Inc.

Mika, Geoffrey (2006). Kaizen Event Implementation Manual. Dearborn, MI: Society of Manufacturing Engineers.

Montgomery, Douglas C. (2009). Introduction to Statistical Quality Control (6th ed.). New York, NY: John Wiley & Sons, Inc.

NIST/SEMATECH e-Handbook of Statistical Methods (2010). Retrieved from http://www.itl.nist.gov/div898/handbook

Park, Sung H. and Jiju Antony (2008). Robust Design for Quality Engineering and Six Sigma. World Scientific.

Pyzdek, T. (2003). Quality Engineering Handbook. New York: Marcel Dekker Inc.

Reid, R. Dan and Nada R. Sanders (2011). Operations Management (4th ed.). New York, NY: John Wiley & Sons, Inc.

Roy, Ranjit K. (2010). A Primer on the Taguchi Method (2nd ed.). Dearborn, MI: Society of Manufacturing Engineers.

Shewhart, W. A. (1931). Economic Control of Quality of Manufactured Product. New York: D. Van Nostrand Company Inc.

Taguchi, G. (2005). Taguchi’s Quality Engineering Handbook. Hoboken: John Wiley & Sons.

Tiao, G. (Ed.) (1985). The Collected Works of George E. P. Box (Vols. 1 & 2). Belmont: Wadsworth Advanced Books and Software.
