


Because samples do not contain all the data from a population, there is a risk that we will draw incorrect conclusions about the larger population. Bad decisions in practice lead to difficulties and problems for producers as well as consumers, and we refer to this as producer risk and consumer risk. The same bad decisions in statistical terms are referred to as Type I and Type II error, or as alpha (α) and beta (β) risk, respectively. It is important for Six Sigma Green Belts to realize that these risks exist.

Lastly, we must realize that choices made during the design of SQC tools, particularly choices related to the selection of consumer and producer risk levels, quite dramatically impact the performance of the tools and the subsequent information they produce for decision making.

It is not enough to simply identify the risks associated with making bad decisions; the Six Sigma Green Belt must also know the following key points: Sooner or later, a bad decision will be made. The risks associated with making bad decisions are quantified in probabilistic terms, and the α and β risks added together do not equal one. Even though α and β move in opposite directions (that is, if α increases, β decreases), there is no direct relationship between α and β. The values of α and β can be kept as low as we want by increasing the sample size. Definition 1.

Probability is quantified as a number between zero and one, where zero represents perfect certainty that an event or outcome will not occur and one represents perfect certainty that it will occur. The chance that an event or outcome will not occur added to the chance that it will occur adds up to one. The probability of making a producer risk error is quantified in terms of α.

The Certified Six SIGMA Green Belt Handbook

The probability of making a consumer risk error is quantified in terms of β. A critically important point, and a point that many people struggle to understand, is the difference between the probability that an event will or will not occur and the probabilities associated with consumer and producer risk: they simply are not the same thing.

As noted earlier, probability is the percent chance that an event will or will not occur, wherein the percent chances of an event occurring and not occurring add up to one. The probability associated with making an error for the consumer, quantified as β, is a value ranging between zero and one. The probability associated with making an error for the producer, quantified as α, is also a value between zero and one.

The key here is that α and β do not add up to one. In practice, one sets an acceptable level of α and then applies some form of test procedure (some application of an SQC tool in this case) so that the probability of committing a β error is acceptably small.

So defining a level of α does not automatically set the level of β. In closing, the chapters that follow discuss the collection of data and the design, application, and interpretation of each of the various SQC tools. You should have the following two goals while learning about SQC.

When trying new cuisine, for example, we take only a small bite to decide whether the taste is to our liking, an idea that goes back to when civilization began.
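The relationship among α, β, and the sample size can be made concrete with a short sketch. The helper below is my own illustration (the function name, the one-sided z-test setting, and all numbers are assumptions, not from the handbook): it computes the consumer risk β of a test for a mean shift with known σ, showing that for a fixed α, increasing n drives β down.

```python
from statistics import NormalDist

def beta_risk(mu0, mu1, sigma, n, alpha=0.05):
    """Consumer (Type II) risk of a one-sided z-test of H0: mu = mu0
    against the specific alternative mu = mu1 > mu0, with known sigma.
    Fixing alpha does not fix beta; beta also depends on n and the shift."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha)            # critical value for producer risk alpha
    shift = (mu1 - mu0) / (sigma / n ** 0.5)  # true shift in standard-error units
    return z.cdf(z_alpha - shift)             # P(fail to reject H0 | mu = mu1)

# Same alpha = 0.05 throughout; beta shrinks as the sample size grows.
for n in (5, 20, 80):
    print(n, round(beta_risk(10.0, 10.5, 1.0, n), 4))
```

Note that α stays at 0.05 in every row while β changes, which is exactly the point: setting α does not set β.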

However, modern advances in sampling techniques have taken place only in the twentieth century. Now, sampling is a matter of routine, and the effects of the outcomes can be felt in our daily lives. Most of the decisions regarding government policies, marketing including trade , and manufacturing are based on the outcomes of various samplings conducted in different fields.

The particular type of sampling used for a given situation depends on factors such as the composition of the population and the objectives of the sampling as well as time and budget availability.

Because sampling is an integral part of SQC, this chapter focuses on the various types of sampling, estimation problems, and sources of error. To make such inferences, which are usually in the form of estimates of otherwise unknown parameters, we collect data from the population under investigation.

The aggregate of these data constitutes a sample.

Each data point in the sample provides us information about the population parameter. Collecting each data point costs time and money, so it is important that some balance is kept while taking a sample. Too small a sample may not provide enough information to obtain proper estimates, and too large a sample may result in a waste of resources. This is why it is very important that in any sampling procedure an appropriate sampling scheme, normally known as the sample design, is put in place.

The sample size is usually determined by the degree of precision desired in the estimates and the budgetary restrictions. If θ is the population parameter of interest, θ̂ is an estimator of θ, and E is the desired margin of error of estimation (the absolute value of the difference between θ̂ and θ), then the sample size can be determined accordingly. In this chapter we will briefly study four different sample designs: simple random sampling, stratified random sampling, systematic random sampling, and cluster random sampling. But before we study these sample designs, some common terms used in sampling theory must be introduced. Definition 2. For example, if we are interested in the ability of employees with a specific job title or classification to perform specific job functions, the population may be defined as all employees with a specific job title working at the company of interest across all sites of the company.

If, however, we are interested in the ability of employees with a specific job title or classification to perform specific job functions at a particular location, the population may be defined as all employees with the specific job title working only at the selected site or location.

Populations, therefore, are shaped by the point or level of interest. Populations can be finite or infinite. A population where all the elements are easily identifiable is considered finite, and a population where all the elements are not easily identifiable is considered infinite.

For example, a batch or lot of production is normally considered a finite population, whereas all the production that may be produced from a certain manufacturing line would normally be considered infinite. It is important to note that in statistical applications, the term infinite is used in the relative sense. For instance, if we are interested in studying the products produced or the service delivery iterations occurring over a given period of time, the population may be considered as finite or infinite, depending on one's frame of reference.

It is important to note that the frame of reference finite or infinite directly impacts the selection of formulae used to calculate some statistics of interest. In most statistical applications, studying each and every element of a population is not only time consuming and expensive but also potentially impossible.

For example, if we want to study the average lifespan of a particular kind of electric bulb manufactured by a company, we cannot study the whole population without testing each and every bulb. Simply put, in almost all studies we end up studying only a small portion, called a sample, of the population. Normally, the sampled population and the target population coincide with each other because every effort is made to ensure that the sampled population is the same as the target population.

However, situations arise when the sampled population does not cover the whole target population. In such cases, conclusions made about the sampled population are usually not applicable for the target population. Before taking a sample, it is important that the target population be divided into nonoverlapping units, usually known as sampling units.

Note that the sampling units in a given population may not always be the same. In fact, sampling units are determined by the sample design chosen. For example, in sampling voters in a metropolitan area, the sampling units might be an individual voter, the voters in a family, or all voters in a city block.

Similarly, in sampling parts from a manufacturing plant, sampling units might be each individual part or a box containing several parts. Before selecting a sample, one must decide the method of measurement. Commonly used methods in survey sampling are personal interviews, telephone interviews, physical measurements, direct observations, and mailed questionnaires. No matter what method of measurement is used, it is very important that the person taking the sample know what measurements are to be taken.

Individuals who collect samples are usually called fieldworkers. This is true regardless of their location, whether they are collecting the samples in a house, in a manufacturing plant, or in a city or town.

All fieldworkers should be well acquainted with what measurements to take and how to take them. Good training of all fieldworkers is an important aspect of any sampling procedure. The accuracy of the measurements, which affects the final results, depends on how well the fieldworkers are trained.

It is quite common to select a small sample and examine it very carefully. This practice in sample surveying is usually called a pretest. The pretest allows us to make any necessary improvements in the questionnaire or method of measurement and to eliminate any difficulties in taking the measurements.

It also allows a review of the quality of the measurements, or in the case of questionnaires, the quality of the returns. Once the data are collected, the next step is to determine how to organize, summarize, and analyze them. This goal can easily be achieved by using methods of descriptive statistics discussed in Chapters 3 and 4 of Applied Statistics for the Six Sigma Green Belt, the first book of this series.

Brief descriptions of the various sample designs (simple random sampling, stratified random sampling, systematic random sampling, and cluster random sampling) are given next. Later in this chapter we will see some results pertaining to these sample designs.

Sampling Designs

The most commonly used sample design is simple random sampling.

This design consists of selecting n (the sample size) sampling units in such a way that each unit has the same chance of being selected. If the population is finite of size N, however, then the simple random sampling design may be defined as selecting n sampling units in such a way that each possible sample of size n has the same chance of being selected.
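As a sketch of the finite-population definition above, Python's standard library can draw a simple random sample directly; `random.sample` makes every subset of size n equally likely. The population of part IDs here is made up for illustration.

```python
import random

random.seed(7)  # reproducible illustration only

# Hypothetical finite population: N = 500 part IDs from one shift.
population = list(range(500))

# Simple random sample of size n = 25 without replacement:
# every possible sample of size 25 has the same chance of selection.
srs = random.sample(population, k=25)

# Sampling WITH replacement instead: draws are independent, repeats allowed.
srs_wr = random.choices(population, k=25)

print(sorted(srs)[:5])
```

The without-replacement draw never repeats a unit; the with-replacement draw may, which matches the two techniques discussed later for finite populations.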

Because the parts from which the Six Sigma Green Belt wants to take the sample are manufactured during the same shift at the same plant, it is quite safe to assume that all parts are similar. Therefore, a simple random sampling design is the most appropriate. The second sampling design is stratified random sampling, which at the same cost as simple random sampling may give improved results. However, the stratified random sampling design is most appropriate when a population can be divided into various nonoverlapping groups, called strata, such that the sampling units in each stratum are similar but may vary stratum to stratum.

Each stratum is treated as a population by itself, and a simple random sample is taken from each of these subpopulations or strata. In the manufacturing world, this type of situation can arise often. For instance, in Example 2. In addition, there is the advantage of administrative convenience.

For example, if the machine parts in the example are manufactured in plants located in different parts of the country, then stratified random sampling would be beneficial. Each plant may have a quality department, which can conduct the sampling within each plant.

To obtain better results in this case, the quality departments in all the plants should communicate with each other before sampling to ensure that the same sampling norms will be followed. Another example of stratified random sampling in manufacturing is when samples are taken of products manufactured in different batches. In this case, products manufactured in different batches constitute different strata.

A third kind of sampling design is systematic random sampling. This procedure is the easiest one, and it is particularly useful in manufacturing processes where the sampling is done on line. Under this scheme, the first item is randomly selected, and thereafter every mth item manufactured is selected until we have a sample of desired size.
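The every-mth scheme just described can be sketched as follows. The helper name and the interval rule m = N // n are my own conventions, not the handbook's; the idea is only that a random start is picked among the first m units and every mth unit is taken thereafter.

```python
import random

def systematic_sample(population, n):
    """Every-mth systematic sample: pick a random start among the first
    m units, then take every mth unit until n units are collected."""
    N = len(population)
    m = N // n                    # sampling interval
    start = random.randrange(m)   # randomly selected first item
    return [population[i] for i in range(start, N, m)][:n]

random.seed(1)
sample = systematic_sample(list(range(1000)), 20)  # interval m = 50
```

On an assembly line, "population" would simply be the stream of items in production order, which is why this design is so easy to run on line.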

Systematic random sampling is not only easy to select, but under certain conditions it is also more precise than simple random sampling. The fourth and last sampling design we will consider here is cluster random sampling. In cluster random sampling, each sampling unit is a group or cluster of smaller units. In the manufacturing environment, this sampling scheme is particularly useful because it is not very easy to prepare a list of each part that constitutes a frame.

On the other hand, it may be much easier to prepare a list of boxes, where each box contains many parts. Hence, a cluster random sample is merely a simple random sample of these boxes.

Another advantage of cluster random sampling is that by selecting a simple random sample of only a few clusters, we can, in fact, have quite a large sample of smaller units, which clearly has been achieved at a minimum cost. As mentioned earlier, the primary objective of sampling is to make inferences about population parameters using information contained in a sample taken from such a population.

In the rest of this chapter, certain results pertaining to each sample design discussed earlier will be presented. However, some basic definitions, which will be useful in studying the estimation problems, will first be given. The sample statistic used to calculate such an estimate is called an estimator.

The probability with which it contains the true value of the unknown parameter is called the confidence coefficient. The maximum value of the error of estimation with a given probability is called the margin of error. A simple random sample can be drawn from an infinite or finite population. There are two techniques for taking a simple random sample from a finite population: sampling with replacement and sampling without replacement. Simple random sampling with replacement from a finite population and simple random sampling from an infinite population have the same statistical properties.

A simple random sample can be taken by using a table of random numbers (Table A), which consists of the digits 0, 1, 2, . . . , 9. The first step in selecting a simple random sample using a table of random numbers is to label all the sampling units 0 to N − 1. If N is a four-digit number, for example, the labels will look like 0000, 0001, . . . , N − 1. Once all the sampling units are labeled, the second step is to select the same number of digit columns from the table of random numbers as the number of digits in N.

Third, one must read down n (n ≤ N) numbers from the selected columns, starting from any arbitrary point and ignoring any repeated numbers. Note that if the sampling is done with replacement, we do not ignore the repeated numbers. The fourth and last step for taking a simple random sample of size n is to select the n sampling units with labels matching these numbers.

This method of selecting simple random samples is not as easy as it appears. In manufacturing, it may not even be possible to label all the parts. For example, suppose we want to select a simple random sample of 50 ball bearings from a big lot or a container full of such ball bearings.

Now the question is whether it is economical or even possible to label all these ball bearings every time we want to select a sample. The answer is no. The method described earlier serves more to develop statistical theory than to guide practice. In practice, to select simple random samples, we use convenient methods that give no particular sampling unit any preferential treatment.

For instance, to select a simple random sample of 50 units from a container, we first mix all the ball bearings and then select 50 of them from several spots in the container. This sampling technique is sometimes called convenience sampling. Define the population parameters as follows: the population mean μ, the population total T, and the population variance σ². For a simple random sample y1, y2, . . . , yn, the sample mean ȳ and the sample variance s² are defined as

ȳ = (1/n) Σ yi,  s² = (1/(n − 1)) Σ (yi − ȳ)²

For estimating the population mean μ we use the sample mean ȳ defined in equation 2.

In order to assess the accuracy of ȳ as an estimator of μ, it is important to find the variance of ȳ, which will allow us to find the margin of error. In practice we usually do not know the population variance σ²; therefore, it becomes necessary to find an estimator of V(ȳ). For a finite population of size N this estimator is

V̂(ȳ) = (1 − n/N) s²/n

and for an infinite population it reduces to V̂(ȳ) = s²/n.
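The point estimate, estimated variance, and margin of error for a simple random sample can be sketched in code. The helper below is my own (the name, the 95 percent default, and the finite-population-correction form follow standard SRS theory rather than the handbook's exact equation numbers); passing N applies the finite population correction, while N=None treats the population as infinite.

```python
from statistics import NormalDist, mean, variance

def srs_estimates(sample, N=None, conf=0.95):
    """Sample mean, estimated variance of ybar, and margin of error
    for a simple random sample; pass N for a finite population."""
    n = len(sample)
    ybar = mean(sample)
    s2 = variance(sample)                    # sample variance, divisor n - 1
    fpc = 1.0 if N is None else 1 - n / N    # finite population correction
    v_ybar = fpc * s2 / n                    # estimated V(ybar)
    z = NormalDist().inv_cdf((1 + conf) / 2)
    return ybar, v_ybar, z * v_ybar ** 0.5   # last value is the margin of error E
```

With the correction in place, a sample that exhausts a finite population (n = N) yields zero estimated variance, as it should.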

If the population variance σ² is unknown, then in equation 2. we replace V(ȳ) by its estimator V̂(ȳ). Example 2. The following data give the yield point, in units of pounds per square inch (psi), of a simple random sample of steel castings. Because the population under investigation consists of all steel castings produced by the large manufacturer, it can be considered infinite.

Thus, the margin of error for estimating the population mean with, say, 95 percent probability is found using equations 2. If the population variance σ² is unknown, then in these equations it is replaced by s².

The company got a defense contract to supply 25, devices to the army. In order to meet the contractual obligations, the department of human resources wants to estimate the number of workers the company will need to hire.

This can be accomplished by estimating the number of workforce hours needed to manufacture 25, devices. The following data show the number of hours spent by randomly selected workers to manufacture 15 devices. In this example, we first estimate the mean workforce hours needed to manufacture one device and then determine the margin of error with 95 percent probability.

The sample mean and the sample variance for the data given in this example are computed using equations 2.

Now suppose that T is the total workforce hours needed to manufacture the 25, devices. Using equations 2. and the results of Example 2., we can estimate T. In practice, the population variance σ² is usually unknown. Thus, to find the sample size n we need an estimate s² of the population variance σ². This can be achieved by using the data from a pilot study or previous surveys. If the population size N is large, then the factor N − 1 in equation 2. can be replaced by N.

The sample size needed to estimate the population total with margin of error E with probability 1 − α can be found by using equation 2. Determine the appropriate sample size to achieve this goal.
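The sample-size computation just described can be sketched as follows. This helper is my own (name and defaults assumed); s2 is a planning value of the variance taken from a pilot study or past data, as the text recommends, and the finite-population branch uses the standard form with the N − 1 factor.

```python
import math
from statistics import NormalDist

def sample_size_for_mean(s2, E, N=None, conf=0.95):
    """Smallest n estimating the population mean within margin E at the
    given confidence; s2 is a planning value of the population variance."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    if N is None:                                    # infinite population
        n = z ** 2 * s2 / E ** 2
    else:                                            # finite population
        n = N * z ** 2 * s2 / ((N - 1) * E ** 2 + z ** 2 * s2)
    return math.ceil(n)
```

For the population total over N units, estimating the total within a margin E_T is equivalent to estimating the mean within E_T / N, so the same helper applies.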

Plugging these values into equation 2. gives the required sample size. To attain a sample of size 86, we take another simple random sample of size 71, which is usually known as a supplemental sample, and then combine this sample with the one that we had already taken, which was of size 15.

For example, a Six Sigma Green Belt might be interested in finding the longevity of parts manufactured in different regions of the United States.

Clearly, the longevity of such parts may vary from region to region depending on where the parts are manufactured. This could be due to differences in raw materials, training of the workers, or differences in machines. In such cases where the population is heterogeneous, we get more precise estimates by using a sample design known as stratified random sampling. In stratified random sampling, the population is divided into different nonoverlapping groups, called strata, such that they constitute the whole population; the population is as homogeneous as possible within each stratum but varies between strata.

Then a stratified random sample is taken by selecting a simple random sample from each stratum. Some of the advantages of stratified random sampling over simple random sampling are as follows: It provides more precise estimates for population parameters than simple random sampling would provide with the same sample size.

A stratified sample is more convenient to administer, which may result in a lower cost for sampling. It provides a simple random sample for each subgroup or stratum, each of which is homogeneous.

Therefore, these samples can prove to be very useful for studying each individual subgroup separately without incurring any extra cost. Before we look into how to use the information obtained from a stratified random sample to estimate population parameters, we will briefly discuss the process of generating a stratified random sample: Divide the sampling population of N units into nonoverlapping subpopulations or strata of N1, N2,.

These strata together constitute the whole population of N units, so N1 + N2 + · · · + NK = N. Select independently from each stratum a simple random sample of size n1, n2, . . . , nK, respectively. To make full use of stratification, the strata sizes N1, N2, . . . , NK must be known.
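A common way to choose the stratum sample sizes ni is proportional allocation, with ni proportional to Ni. The helper below is my own sketch of that rule (not the handbook's notation); it uses largest-remainder rounding so the integer allocations still sum to n.

```python
def proportional_allocation(n, strata_sizes):
    """Split total sample size n across strata so n_i ~ n * N_i / N."""
    N = sum(strata_sizes)
    raw = [n * Ni / N for Ni in strata_sizes]
    alloc = [int(r) for r in raw]          # round every share down first
    # hand leftover units to the largest fractional remainders
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in order[: n - sum(alloc)]:
        alloc[i] += 1
    return alloc

print(proportional_allocation(40, [100, 200, 100]))  # [10, 20, 10]
```

Proportional allocation is only one choice; optimal allocation (touched on later) also weighs stratum variability and sampling cost.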

Let μ and T be the population mean and population total, respectively, and let μi, Ti, and σi² be the population mean, total, and variance of the ith stratum, respectively. Then we have

T = T1 + T2 + · · · + TK,  μ = (1/N)(N1 μ1 + N2 μ2 + · · · + NK μK)

Let ȳi be the sample mean of the simple random sample from the ith stratum. From the previous section we know that ȳi is an unbiased estimator of the mean μi of the ith stratum.

From equations 2. it follows that an unbiased estimator of the population mean μ is the stratified sample mean

ȳst = Σ (Ni/N) ȳi

where the sum runs over the K strata. An estimator of the variance of ȳst is

V̂(ȳst) = Σ (Ni/N)² (1 − ni/Ni) si²/ni

When the strata variances are unknown, which is usually the case, the stratum variance σi² is replaced by the corresponding sample variance si². The margin of error for the estimation of the population mean with probability 1 − α is then zα/2 √V̂(ȳst), and the margin of error for the estimation of the population total with probability 1 − α is zα/2 N √V̂(ȳst).
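Putting the stratified estimator together in code gives a compact sketch. The helper is my own (name and 95 percent default assumed); it implements the standard stratified mean ȳst = Σ(Ni/N)ȳi with the finite-population-corrected variance estimate.

```python
from statistics import NormalDist, mean, variance

def stratified_estimate(samples, strata_sizes, conf=0.95):
    """Stratified mean, its estimated variance, and the margin of error.
    samples[i] is the simple random sample drawn from stratum i, whose
    population size is strata_sizes[i]."""
    N = sum(strata_sizes)
    ybar_st = sum(Ni / N * mean(s) for s, Ni in zip(samples, strata_sizes))
    v = sum((Ni / N) ** 2 * (1 - len(s) / Ni) * variance(s) / len(s)
            for s, Ni in zip(samples, strata_sizes))
    z = NormalDist().inv_cdf((1 + conf) / 2)
    return ybar_st, v, z * v ** 0.5   # multiply mean and margin by N for the total
```

Each stratum contributes in proportion to its share of the population, which is why homogeneous strata make the combined estimate more precise.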

Notes 1.

The labor costs, raw material costs, and other overhead expenses vary tremendously from country to country. In order to meet the target value, the company is interested in estimating the average cost of a part it will produce during a certain period. To achieve its goal, the company calculated the cost of 12 randomly selected parts from facility one, 12 parts from facility two, and 16 parts from facility three, which is in the United States.

These efforts produced the following data (in dollars) for the samples from facilities one, two, and three. Using these data, find:

1. An estimate of the population mean and population total
2. The margin of error for the estimation of the population mean and population total with probability 95 percent
3. A confidence interval for the population mean and population total with confidence coefficient 95 percent

Solution: In the example, each facility constitutes a stratum.

So first we determine the sample mean and the sample standard deviation for each stratum. Also, note that if the stratum population variance σi² is not known, then it is replaced with the corresponding sample variance si², which can be found from either a pilot study or historical data from a similar study.

From Example 2. In Example 2. This means we will need supplemental samples of sizes 43, 53, and 79 from strata one, two, and three, respectively. An optimal allocation of sample size to each stratum depends on three factors: the stratum sizes, the stratum variances, and the per-unit cost of sampling in each stratum. The details on this topic are beyond the scope of this book. For more information, see Cochran, Govindarajulu, Lohr, and Scheaffer et al.

Because the first element is randomly selected and then every kth element is selected, this determines a complete systematic random sample and is usually known as an every-kth systematic sample.

A major advantage of the systematic sample over the simple random sample is that a systematic sample design is easier to implement, particularly when the sample is taken directly from the production or assembly line. Another advantage of systematic sampling is that the workers collecting the samples do not need any special training, and sampled units are evenly spread over the population of interest.

If the population is in random order, a systematic random sample provides results comparable with those of a simple random sample. Moreover, a systematic sample usually covers the whole sampled population uniformly, which may provide more information about the population than that provided by a simple random sample.

However, if the sampling units of a population are in some kind of periodic or cyclical order, the systematic sample may not be a representative sample. Similarly, suppose we want to estimate a worker's productivity and we decide to take samples of his or her productivity every Friday afternoon. We might be underestimating the true productivity. On the other hand, if we pick the same day in the middle of the week, we might be overestimating the true productivity.

So, to get better results using systematic sampling, we must make sure that the sample includes the days when productivity tends to be higher as well as the days when it tends to be lower.

If there is a linear trend, the systematic sample mean provides a more precise estimate of the population mean than that of a simple random sample, but less precision than the stratified random sample. We denote the mean of the ith sample described earlier by ȳi and the mean of a randomly selected systematic sample by ȳsy, where the subscript sy means that a systematic sample was used. Note that the mean of a systematic sample is more precise than the mean of a simple random sample if, and only if, the variance within the systematic samples is greater than the population variance as a whole.

Sometimes the manufacturing scenario may warrant first stratifying the population and then taking a systematic sample within each stratum. For example, if there are several plants and we want to collect samples on line from each of these plants, then clearly we are first stratifying the population and then taking a systematic sample within each stratum. In such cases, the formulas for estimating the population mean and total are the same as given in the previous section except that ȳi is replaced by ȳsy.

The sample size needed to estimate the population total with margin of error E with probability 1 − α can be found from equation 2. However, before closing the deal, the company is interested in determining the mean timber volume per lot of one-fourth of an acre. To achieve its goal, the company conducted an every-kth systematic random sample and obtained the data given in the following table. Estimate the mean timber volume μ per lot and the total timber volume T in one thousand acres.

Determine the 95 percent margin of error of estimation for estimating the mean μ and the total T, and then find 95 percent confidence intervals for the mean μ and the total T. To find the margin of error with 95 percent probability for the estimation of the mean μ and the total T, we need to determine the estimated variances of their estimators. We first need to find the sample variance s²sy. The estimated variances of μ̂ and T̂ are then computed from s²sy using equations 2. Note that in Example 2.

In order to attain the margin of error of 25 cubic feet, we will have to take a sample of at least the required size. In systematic sampling it is normally not possible to take only a supplemental sample to achieve a sample of full size. Furthermore, we must keep the first randomly selected unit the same. Our new sample will then consist of lot numbers 7, 32, 57, 82, 107, 132, 157, and so on.

Obviously, the original sample is part of the new sample, which means in this case we can take only a supplemental sample rather than a full new sample.

Cluster sampling is used when preparing a good frame listing is very costly, when there are not enough funds to meet that kind of cost, or when all the sampling units are not easily accessible.

Another possible scenario is that all the sampling units are scattered, so obtaining observations on each sampling unit is not only very costly, but also very time consuming. In such cases we prepare the sampling frame consisting of larger sampling units, called clusters, such that each cluster consists of several original sampling units subunits of the sampled population.

Then we take a simple random sample of the clusters and make observations on all the subunits in the selected clusters. This technique of sampling is known as the cluster random sampling design, or simply the cluster sampling design.

Cluster sampling is not only cost effective, but also a time saver because collecting data from adjoining units is cheaper, easier, and quicker than if the sampling units are far apart from each other. In manufacturing, for example, it may be much easier and cheaper to randomly select boxes that contain several parts rather than randomly selecting individual parts.

However, while conducting a cluster sampling may be cost effective, it also may be less efficient in terms of precision than simple random sampling when the sample sizes are the same. Also, the efficiency of cluster sampling may further decrease if we let the cluster sizes increase. Cluster samplings are of two kinds: one-stage and two-stage. In one-stage cluster sampling, we examine all the sampling subunits within the selected clusters. In two-stage cluster sampling, we examine only a portion of the sampling subunits, which are chosen from each selected cluster using simple random sampling.

Furthermore, in cluster sampling, the cluster sizes may or may not be of the same size. Normally in field sampling it is not feasible to have clusters of equal sizes. For example, in a sample survey of a large metropolitan area, city blocks may be considered as clusters. If the sampling subunits are households or persons, then obviously it will be almost impossible to have the same number of households or persons in every block. However, in industrial sampling, one can always have clusters of equal sizes; for example, boxes containing the same number of parts may be considered as clusters.

We take a simple random sample of n clusters. Let yi denote the total of the characteristic of interest for the ith sampled cluster and mi the number of subunits in the ith sampled cluster. Then an estimator of the population mean (the average value of the characteristic of interest per subunit) is given by the ratio

μ̂ = (y1 + · · · + yn) / (m1 + · · · + mn)

and an estimator of the population total T (the total value of the characteristic of interest for all subunits in the population) is given by

T̂ = (N/n)(y1 + · · · + yn)

where N is the number of clusters in the population.
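These two cluster estimators can be sketched directly. The helper and the example numbers below are my own (the standard ratio estimate of the mean per subunit and the expansion estimate of the total, not the handbook's exact equations); yi is the ith sampled cluster's total and mi its number of subunits.

```python
def cluster_estimates(cluster_totals, cluster_sizes, N_clusters):
    """One-stage cluster sample: ratio estimate of the mean per subunit
    and the expansion estimate of the population total T."""
    n = len(cluster_totals)
    mean_per_subunit = sum(cluster_totals) / sum(cluster_sizes)  # ratio estimate
    total = N_clusters * sum(cluster_totals) / n                 # expand to N clusters
    return mean_per_subunit, total

# Two sampled boxes (clusters) out of 40, each holding 24 parts;
# the totals are made-up values of the characteristic of interest per box.
m, T = cluster_estimates([120.0, 96.0], [24, 24], N_clusters=40)
```

Boxes of parts are a natural cluster here: listing boxes is cheap even when listing individual parts is not.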

The margins of error with probability 1 − α for estimating the population mean and population total follow from the estimated variances of these estimators. The pump model being investigated is typically installed in six applications, which include food service operations (for example, pressurized drink dispensers), dairy operations, soft-drink bottling operations, brewery operations, wastewater treatment, and light commercial sump water removal. The quality manager cannot determine the exact warranty repair cost for each pump; however, through company warranty claims data, she can determine the total repair cost and the number of pumps used by each industry in the six applications.

In this case, the quality manager decides to use cluster sampling, using each industry as a cluster. The data on total repair cost per industry and the number of pumps owned by each industry are provided in the following table. Estimate the average repair cost per pump and the total cost incurred by the industries over a period of one year. Determine a 95 percent confidence interval for the population mean and population total.

To estimate the population mean, we proceed as described above. An estimate of the total number of pumps owned by the industries is obtained from the cluster sizes in the sample. Note that in order to determine the margin of error, we need $s^2$ and $\bar{M}$, and hence a sample and a sample size, even when the sample size is in fact what we want to determine.

The solution to this problem, as discussed in detail in Applied Statistics for the Six Sigma Green Belt, is to evaluate these quantities either by using similar data, if already available, or by taking a smaller preliminary sample, as we did in this example.

The concept of SQC, however, is less than a century old.

SPC consists of several tools that are useful for process monitoring. The quality control chart, which is the center of our discussion in this chapter, is one of these tools.

Walter A. Shewhart, of Bell Telephone Laboratories, was the first person to apply statistical methods to quality improvement when, in 1924, he presented the first quality control chart. In 1931 he published Economic Control of Quality of Manufactured Product (D. Van Nostrand Co.). The publication of this book set the notion of SQC in full swing. In the initial stages, acceptance of the notion of SQC in the United States was almost negligible. It was only during World War II, when the armed services adopted statistically designed sampling inspection schemes, that the concept of SQC started to gain acceptance in American industry.

The complete adoption of modern SQC nevertheless remained limited for decades, because American manufacturers believed that quality and productivity could not go hand in hand. American manufacturers argued that producing goods of high quality would cost more because high quality would mean buying raw materials of higher grade, hiring more qualified personnel, and providing more training to workers, all of which would translate into higher costs. American manufacturers also believed that producing higher-quality goods would slow down productivity because better-qualified and better-trained workers would not compromise quality to meet their daily quotas, which would further add to the cost of their products.

As a consequence, they believed they would not be able to compete in the world market and would therefore lose their market share. The industrial revolution in Japan, however, proved that American manufacturers were wrong. This revolution followed the 1950 visit to Japan of the famous American statistician W. Edwards Deming. Deming wrote, "Improvement of quality transfers waste of man-hours and of machine-time into the manufacture of good product and better service."

The result is a chain reaction: lower costs and a better competitive position. Deming describes this chain reaction as follows: improve quality → costs decrease because of less rework, fewer mistakes, fewer delays or snags, and better use of machine time and materials → productivity improves → capture the market with better quality and lower price → stay in business → provide jobs and more jobs.

Once management in Japan adopted the philosophy of this chain reaction, everyone had one common goal: achieving quality. We define quality as Deming did, in terms of meeting the needs of the customer. The customer may be internal or external, an individual or a corporation. If a product meets the needs of the customer, the customer is bound to buy that product again and again.

On the contrary, if a product does not meet the needs of a customer, then he or she will not buy that product, even if he or she is an internal customer.

Consequently, that product will be deemed of bad quality, and eventually it is bound to go out of the market. Other components of a product's quality are its reliability, how much maintenance it demands, and, when the need arises, how easily and how quickly one can get it serviced. In evaluating the quality of a product, its attractiveness and rate of depreciation also play an important role. As described by Deming in his telecast conferences, the benefits of better quality are numerous.

First and foremost, better quality enhances the overall image and reputation of the company by meeting the needs of its customers and thus making them happy. A happy customer is bound to buy the product again and again. Also, a happy customer is likely to share a good experience with the product with friends, relatives, and neighbors. The company thus gets publicity without spending a dime, and this results in more sales and higher profits.

Higher profits lead to higher stock prices, which means a higher net worth for the company. Better quality gives workers satisfaction and pride in their workmanship. A satisfied worker goes home happy, which makes his or her family happy. A happy family boosts the morale of the worker, which means greater dedication and loyalty to the company. Another benefit of better quality is decreased cost.

This is due to the need for less rework, thus less scrap, fewer raw materials used, and fewer workforce hours and machine hours wasted. Ultimately, this means increased productivity, a better competitive position, increased sales, and a higher market share.

On the other hand, losses due to poor quality are enormous. Poor quality not only affects sales and the competitive position, but it also carries high hidden costs that are usually not calculated and therefore not known with precision. These costs include unusable product, product sold at a discounted price, and the like. In most companies, the accounting departments provide only minimal information to quantify the actual losses incurred due to poor quality. Lack of awareness of the cost of poor quality can lead company managers to fail to take appropriate actions to improve quality.

A process may be defined as a series of actions or operations performed in producing manufactured or nonmanufactured products. A process may also be defined as a combination of workforce, equipment, raw materials, methods, and environment that work together to produce output. Such a process is illustrated by the flowchart in Figure 3. The quality of the final product depends on how the process is designed and executed.

However, no matter how perfectly a process is designed and how well it is executed, no two items produced by the process are identical. The difference between two items is called variation. Such variation occurs because of two kinds of causes:

1. Common causes, or random causes
2. Special causes, or assignable causes

As mentioned earlier, the first attempt to understand and remove variation in any scientific way was made by Walter A. Shewhart.

Shewhart recognized that special or assignable causes are due to identifiable sources, which can be systematically identified and eliminated, whereas common or random causes are not due to identifiable sources and therefore cannot be eliminated without very expensive measures. These measures include redesigning the process, replacing old machines with new ones, and renovating part of or the whole system. A process is considered in statistical control when only common causes are present. Deming states, "But a state of statistical control is not a natural state for a manufacturing process. It is instead an achievement, arrived at by elimination, one by one, by determined efforts, of special causes of excessive variation." Because process variation cannot be fully eliminated, controlling it is key. If process variation is controlled, the process becomes predictable; otherwise the process is unpredictable. To achieve this goal, Shewhart introduced the quality control chart.

Shewhart control charts are normally used in phase I implementation of SPC, when a process is particularly susceptible to the influence of special causes and, consequently, is experiencing excessive variation or large shifts. Quality control charts can be divided into two major categories: control charts for variables and control charts for attributes. In this chapter we will study control charts for variables.

Definition 3. Examples of a variable include the length of a tie-rod, the tread depth of a tire, the compressive strength of a concrete block, the tensile strength of a wire, the shearing strength of paper, the concentration of a chemical, the diameter of a ball bearing, the amount of liquid in a can, and so on.

In general, in any process there are some quality characteristics that define the quality of the process. If all such characteristics are behaving in the desired manner, then the process is considered stable and it will produce products of good quality. If, however, any of these characteristics is not behaving in the desired manner, then the process is considered unstable and is not capable of producing products of good quality. A characteristic is usually determined by two parameters: its location and its dispersion. In order to verify whether a characteristic is behaving in the desired manner, one needs to verify that these two parameters are in statistical control, which can be done by using quality control charts.

In addition, there are several other such tools. These tools, including the control charts, constitute an integral part of SPC. SPC is very useful in any process related to the manufacturing, service, or retail industries. This set of tools consists of the following:

1. Histogram
2. Stem-and-leaf diagram
3. Scatter diagram
4. Run chart (also known as a line graph or a time series graph)
5. Check sheet
6. Pareto chart
7. Cause-and-effect diagram (also known as a fishbone or Ishikawa diagram)
8. Defect concentration diagram
9. Control charts

These tools of SPC form a simple but very powerful structure for quality improvement. Once workers become fully familiar with these tools, management must get involved to implement SPC for an ongoing quality-improvement process.

Management must create an environment where these tools become part of the day-to-day production or service process. The implementation of SPC without management's involvement and cooperation is bound to fail. In addition to discussing these tools, we will also explore some of the questions that arise while implementing SPC.

Every job, whether in a manufacturing company or in a service company, involves a process. As described earlier, each process consists of a certain number of steps. No matter how well the process is planned, designed, and executed, there is always some potential for variability.

In some cases this variability may be very small, while in other cases it may be very high. If the variability is very small, it is usually due to common causes, which are unavoidable and cannot be controlled. If the variability is too high, we expect that, in addition to the common causes, some other causes, usually known as assignable causes, are present in the process.

Any process working under only common causes or chance causes is considered to be in statistical control. If a process is working under both common and assignable causes, it is considered unstable, or not in statistical control.

In this chapter and the two that follow, we will discuss the rest of these tools.

Check Sheet

In order to improve the quality of a product, management must try to reduce the variation of all the quality characteristics; that is, the process must be brought to a stable condition. In any SPC procedure used to stabilize a process, it is important to identify the types of defects that occur and how frequently they occur.

In any SPC procedure used to stabilize a process, it is. The check sheet is an important tool to achieve this goal. We discuss this tool using a real-life example. Example 3. In order to identify these defects and their frequency, a study is launched. This study is done over a period of four weeks. The data are collected daily and summarized in the following form see Table 3. The summary data not only give the total number of different types of defects but also provide a very meaningful source of trends and patterns of defects.

These trends and patterns can help find possible causes of any particular defect or defects. Note the column totals in Table 3. It is important to remark here that these types of data become more meaningful if a logbook of all changes, such as changes in raw materials, calibration of machines, and the training or hiring of workers, is well kept.
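As an illustration of how check-sheet data are tallied, the sketch below uses hypothetical daily records (not the book's Table 3. data) and computes the column totals by defect type and the row totals by day:

```python
from collections import defaultdict

# Hypothetical check-sheet records: one (day, defect_type) entry per observed defect.
records = [
    ("Mon", "corrugation"), ("Mon", "blistering"), ("Mon", "corrugation"),
    ("Tue", "streaks"), ("Tue", "corrugation"), ("Wed", "blistering"),
    ("Wed", "corrugation"), ("Thu", "streaks"), ("Thu", "corrugation"),
]

by_type = defaultdict(int)   # column totals: frequency of each defect type
by_day = defaultdict(int)    # row totals: defects observed each day
for day, defect in records:
    by_type[defect] += 1
    by_day[day] += 1

for defect, count in sorted(by_type.items(), key=lambda kv: -kv[1]):
    print(f"{defect:12s} {count}")
print("total defects:", sum(by_type.values()))
```

The per-type totals produced this way feed directly into the Pareto chart discussed next.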

Pareto Chart

The Pareto chart is a useful tool for learning more about attribute data quickly and visually.

The Pareto chart, named after Vilfredo Pareto, an Italian economist who died in 1923, is simply a bar graph of attribute data in descending order of frequency by, say, defect type.

For example, consider the data on defective rolls in Example 3. The chart allows the user to quickly identify those defects that occur most frequently and those that occur less frequently, and thus to prioritize the use of resources. For instance, the Pareto chart in Figure 3. shows that corrugation, blistering, and streaks together are responsible for 70 percent of the rejections.

To reduce the overall rejection rate, one should first attempt to eliminate, or at least reduce, the defects due to corrugation, then blistering, then streaks, and so on. By eliminating these three types of defects, one would dramatically reduce the percentage of rejected paper and the associated losses.
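The "vital few" defects can be identified numerically by sorting frequencies in descending order and accumulating percentages, which is exactly what a Pareto chart displays. The defect names and counts below are hypothetical stand-ins for the book's data:

```python
# Hypothetical defect counts for rejected paper rolls (the book's actual
# table is not reproduced here).
counts = {
    "corrugation": 112, "blistering": 68, "streaks": 54,
    "holes": 31, "wrinkles": 20, "other": 15,
}

total = sum(counts.values())
ordered = sorted(counts.items(), key=lambda kv: -kv[1])  # descending frequency

cum_pct = []  # (defect, frequency, percent, cumulative percent)
cum = 0
for defect, freq in ordered:
    cum += freq
    cum_pct.append((defect, freq, round(100 * freq / total, 1),
                    round(100 * cum / total, 1)))

for defect, freq, pct, cpct in cum_pct:
    print(f"{defect:12s} {freq:4d} {pct:5.1f}%  cum {cpct:5.1f}%")
```

With these made-up counts, the top three categories account for 78 percent of all rejections, so they would be attacked first.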

It is important to note that if one can eliminate more than one defect simultaneously, then one should consider eliminating them even though some of them are occurring less frequently. Furthermore, after one or more defects are either eliminated or reduced, one should again collect the data and reconstruct the Pareto chart to determine whether the priority has changed.

If another defect is now occurring more frequently, one may divert the resources to eliminate such a defect first.

Note that in this example, the category Other may include several defects, such as porosity, grainy edges, wrinkles, or poor brightness, that do not occur very frequently. So, if one has limited resources, one should not expend them on this category until the other defects have been eliminated. Sometimes the defects are not equally important. This is particularly true when some defects are life threatening while others are merely a nuisance or an inconvenience.

It is quite common to allocate weights to each defect and then plot the weighted frequencies versus the defects to construct the Pareto chart. For example, suppose a product has five types of defects, which are denoted by A, B, C, D, and E, where A is life threatening, B is not life threatening but very serious, C is serious, D is somewhat serious, and E is not serious but merely a nuisance.

The data collected over the period of study are shown in Table 3. As Figure 3. shows, by using weighted frequencies, the order of priority for removing the defects becomes C, A, B, D, and E, whereas without the weighted frequencies this order would have been E, C, D, B, and A.
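Because the book's table is not reproduced here, the counts and weights below are hypothetical, chosen so that they reproduce the orderings described in the text; the point is simply that multiplying frequency by severity weight can change the priority order:

```python
# Hypothetical defect counts and severity weights (A most severe ... E least severe)
counts = {"A": 25, "B": 30, "C": 110, "D": 60, "E": 120}
weights = {"A": 100, "B": 50, "C": 25, "D": 10, "E": 1}

# Unweighted Pareto order: driven by frequency alone
raw_order = sorted(counts, key=lambda d: -counts[d])

# Weighted Pareto order: frequency multiplied by severity weight
weighted = {d: counts[d] * weights[d] for d in counts}
weighted_order = sorted(weighted, key=lambda d: -weighted[d])

print("unweighted priority:", raw_order)
print("weighted priority  :", weighted_order)
```

With these numbers, defect E tops the unweighted chart purely by count, while the weighted chart moves the severe but less frequent defects C and A to the front.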

Cause-and-Effect (Fishbone or Ishikawa) Diagram

In an SPC implementation scheme, identifying and isolating the causes of a particular problem or problems is very important.

An effective tool for identifying such causes is the cause-and-effect diagram. This diagram is also known as a fishbone diagram because of its shape, and sometimes it is called an Ishikawa diagram after its inventor. Japanese manufacturers have widely used this diagram to improve the quality of their products.

In preparing a cause-and-effect diagram, it is quite common to use a brainstorming technique. Brainstorming is a form of creative and collaborative thinking, used in a team setting. The team usually includes personnel from the production, inspection, purchasing, design, and management departments, along with any other members associated with the product under discussion.

A brainstorming session is set up in the following manner:

1. Each team member makes a list of ideas.
2. The team members sit around a table and take turns reading one idea at a time.
3. As the ideas are read, a facilitator displays them on a board so that all team members can see them.
4. Steps 2 and 3 continue until all ideas have been exhausted and displayed.
5. Cross-questioning concerning a team member's idea is allowed only for clarification.
6. When all ideas have been read and displayed, the facilitator asks each team member whether he or she has any new ideas.

This procedure continues until no team member can think of any new ideas. Once all the ideas have been presented, the next step is to analyze them. The cause-and-effect diagram is a graphical technique used to analyze these ideas. The five spines in Figure 3. represent the five major categories of causes. In most workplaces, whether manufacturing or nonmanufacturing, the causes of all problems usually fall into one or more of these categories. Using a brainstorming session, the team brings up all possible causes under each category.

For example, under the environment category, a cause could be management's attitude.

Management might not be willing to release any funds for research or to change suppliers; there might not be much cooperation between middle and top management; or something similar.

Under the personnel category, causes could include lack of proper training for workers, supervisors who are not helpful in solving problems, lack of communication between workers and supervisors, and workers who are afraid to ask their supervisors questions for fear of repercussions in their jobs, promotions, or raises.

Once all possible causes under each major category are listed in the cause-and-effect diagram, the next step is to isolate one or more common causes and eliminate them. A complete cause-and-effect diagram may appear as shown in Figure 3.

Defect Concentration Diagram

A defect concentration diagram is a visual representation of the product under study that depicts all the defects.


This diagram helps the workers determine whether there are any patterns or particular locations where the defects occur and what kinds of defects, minor or major, are occurring. (Figure 3. shows a cause-and-effect diagram whose environment branch lists causes such as workers underpaid, no funds for research, not enough workers, and lack of management cooperation.)

It is important that the diagram show the product from different angles. For example, if the product is in the shape of a rectangular prism and defects are found on the surface, then the diagram should show all six faces, very clearly indicating the locations of the defects. The defect concentration diagram proved useful when the daughter of one of the authors filed a claim with a transportation company. The author had shipped a car from Boston, Massachusetts, to his daughter in San Jose, California.

After receiving the car, she found that the front bumper's paint was damaged. She filed a claim with the transportation company for the damage, but the company turned it down, simply stating that the damage had not been caused by the company.

Fortunately, a couple of days later, she found similar damage under the back bumper, symmetrically opposite the damage on the front bumper. She called the company again and explained that this damage had clearly been done by the belts used to hold the car in place during transportation. This time the company could not turn down her claim, because by using a defect concentration diagram she could show that the damage had been caused by the transportation company.

Run Chart

In any SPC procedure it is very important to detect any trends that may be present in the data. Run charts help identify such trends by plotting data over a certain period of time. For example, if the proportion of nonconforming parts produced from shift to shift is perceived to be a problem, we may plot the number of nonconforming parts against the shifts for a certain period of time to determine whether there are any trends.

Trends usually help us identify the causes of nonconformities. This chart is particularly useful when the data are collected from a production process over a certain period of time. From the run chart for the data in Table 3. we can easily see that the percentage of nonconforming units is lowest in the morning shift (shifts 1, 4, 7, and so on). There are also some problems in the evening shift, but they are not as severe as those in the night shift. Because such trends or patterns are usually created by special or assignable causes, it is natural to ask questions such as the following:

Does the quality of the raw materials differ from shift to shift? Is there inadequate training of workers in the later shifts? Are evening- and late-shift workers more susceptible to fatigue? Are there environmental problems that increase in severity as the day wears on? Deming points out that sometimes the frequency distribution of a set of data does not give a true picture of the data, whereas a run chart can bring out the real problems in the data.
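A run chart's shift-to-shift pattern can also be summarized numerically by grouping the readings by shift position. The percent-nonconforming values below are hypothetical, arranged in a morning/evening/night cycle like the one described above:

```python
# Hypothetical percent-nonconforming readings for 12 consecutive shifts,
# cycling morning -> evening -> night (so shifts 1, 4, 7, ... are mornings).
pct_nonconforming = [1.2, 2.8, 4.9, 1.0, 3.1, 5.2,
                     1.4, 2.6, 4.7, 1.1, 2.9, 5.0]

labels = ["morning", "evening", "night"]
by_shift = {lab: [] for lab in labels}
for i, p in enumerate(pct_nonconforming):
    by_shift[labels[i % 3]].append(p)   # shift position within each day

shift_means = {lab: sum(v) / len(v) for lab, v in by_shift.items()}
for lab in labels:
    print(f"{lab:8s} mean nonconforming = {shift_means[lab]:.2f}%")
```

A clear ordering of the shift means, as in this made-up data, is the kind of pattern that points toward an assignable cause rather than chance variation.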

The frequency distribution gives us the overall picture of the data, but it does not show any trends or patterns that may be present in the process on a short-term basis.

Control charts make up perhaps the most important part of SPC.

We will study some of the basic concepts of control charts in the following section, and then in the rest of this chapter and the next two chapters we will study the various kinds of control charts. As noted by Duncan, a control chart is (1) a device for describing in concrete terms what a state of statistical control is, (2) a device for attaining control, and (3) a device for judging whether control has been attained. Before control charts are implemented in any process, it is important to understand the process and have a concrete plan for the future actions that will be needed to improve it.

Process Evaluation

Process evaluation includes not only the study of the final product but also the study of all the intermediate steps or outputs that describe the actual operating state of the process. For example, in the paper production process, wood chips and pulp may be considered intermediate outputs.

If the data on process evaluation are collected, analyzed, and interpreted correctly, they can show where and when a corrective action is necessary to make the whole process work more efficiently.

Action on Process

Action on the process is important because it prevents the production of out-of-specification product.

An action on process may include changes in raw materials, operator training, equipment, design, or other measures. The effects of such actions on a process should be monitored closely, and further action should be taken if necessary.

Action on Output

If the process evaluation indicates that no further action on the process is necessary, the last step is to ship the final product to its destination. If the output does not conform to specifications, however, action on the output consists of sorting, scrapping, or reworking product that has already been produced.

Obviously, such an action on the output is futile and expensive. We are interested in correcting the output before it is produced. This goal can be achieved through the use of control charts.

Any process with a lot of variation is bad and is bound to produce products of inferior quality.

Control charts can help detect variation in any process. As described earlier, in any process there are two kinds of causes of variation: common and special.

Variation

No process can produce two products that are exactly alike or that possess exactly the same characteristics. Any process is bound to contain some sources of variation. The difference between two products may be very large, moderate, very small, or even undetectable, depending on the sources of variation, but certainly there is always some difference.

For example, the moisture content in any two rolls of paper, the opacity in any two spools of paper, and the brightness of two lots of pulp will always vary. Our aim is to trace back as far as possible the sources of such variation and eliminate them. The first step is to separate the common and special causes of such sources of variation.

Common Causes or Random Causes

Common causes or random causes are the sources of variation within a process that is in statistical control. They behave like a constant system of chance. While individual measured values may all be different, as a group they tend to form a pattern that can be explained by a statistical distribution generally characterized by:

1. Location parameter
2. Dispersion parameter
3. Shape (the pattern of variation: symmetrical, right skewed, left skewed, and so on)

Special Causes or Assignable Causes

Special causes or assignable causes refer to any source of variation that cannot be adequately explained by any single distribution of the process output, as would otherwise be the case if the process were in statistical control.

Unless all the special causes of variation are identified and corrected, they will continue to affect the process output in an unpredictable way. Any process with assignable causes is considered unstable and hence not in statistical control. However, any process free of assignable causes is considered stable and therefore in statistical control. Assignable causes can be corrected by local actions, while common causes or random causes can be corrected only by actions on the system.

Local Actions

1. Are usually required to eliminate special causes of variation
2. Can usually be taken by people close to the process
3. Can correct about 15 percent of process problems

Deming believed that about 6 percent of all process variation is due to special or assignable causes, while about 94 percent is due to common causes.

Actions on the System

1. Are usually required to reduce the variation due to common causes
2. Almost always require management action for correction
3. Are needed to correct about 85 percent of process problems

Furthermore, Deming points out that there is an important relationship between the two types of variation and the two types of actions needed to reduce them; we will discuss this point in more detail. Special causes of variation can be detected by simple statistical techniques.
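A minimal sketch of such a technique is the 3-sigma rule used on Shewhart charts: estimate a center line and control limits from a baseline period believed to be in control, then flag later points that fall outside the limits. The data here are hypothetical:

```python
from statistics import mean, stdev

# Hypothetical subgroup means from a baseline period in which the process
# is believed to be in statistical control.
baseline = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 9.7, 10.1, 9.9]

center = mean(baseline)      # center line (CL)
sigma = stdev(baseline)      # estimated standard deviation
ucl = center + 3 * sigma     # upper control limit
lcl = center - 3 * sigma     # lower control limit

# New (hypothetical) observations monitored against the limits; points
# outside the limits signal possible special causes.
new_points = [10.2, 9.8, 10.6, 10.1, 9.3]
flagged = [(i + 1, x) for i, x in enumerate(new_points) if x > ucl or x < lcl]

print(f"CL={center:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}")
print("points signaling possible special causes:", flagged)
```

Estimating the limits from a separate baseline, rather than from the monitored points themselves, mirrors the phase I / phase II distinction mentioned earlier in the chapter.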

These causes of variation are not the same in all operations. Detecting and removing special causes of variation is usually the responsibility of someone directly connected to the operation, although management is sometimes in a better position to correct them.

Resolving a special cause of variation usually requires local action. The extent of common causes of variation can be indicated by simple statistical techniques, but the causes themselves need more exploration in order to be isolated. It is usually management's responsibility to correct the common causes of variation, although personnel directly involved with the operation are sometimes in a better position to identify such causes and pass them on to management for appropriate action.
