
STATISTICAL QUALITY CONTROL

STAT496

A.R. MURALIDHARAN, ASSISTANT PROFESSOR IN STATISTICS, COMPUTATIONAL SCIENCE, WOLLEGA UNIVERSITY, NEKEMTE


WOLLEGA UNIVERSITY

FACULTY OF COMPUTATIONAL SCIENCE, STATISTICS PROGRAM

LECTURE NOTES FOR FINAL YEAR STATISTICS STUDENTS, ACADEMIC YEAR 2012-13 / SEMESTER I

COMPILED BY

A.R. MURALIDHARAN, ASSISTANT PROFESSOR IN STATISTICS

STATISTICS PROGRAM, DEPARTMENT OF MATHEMATICS

WOLLEGA UNIVERSITY, NEKEMTE, ETHIOPIA


WOLLEGA UNIVERSITY

FACULTY OF COMPUTATIONAL SCIENCE, STATISTICS PROGRAM

Stat 492: Statistical Quality Control

COURSE OUTLINE

1. Introduction (6 lecture hours)
   1.1. Quality improvement in the modern business environment
   1.2. Modeling process quality
2. Methods of statistical process control and capability analysis (21 lecture hours)
   2.1. Method and philosophy of statistical process control
   2.2. Control charts for variables
   2.3. Control charts for attributes
   2.4. Process and measurement system capability analysis
3. Other statistical process-monitoring and control techniques (10 lecture hours)
   3.1. Cumulative sum and exponentially weighted moving average control charts
   3.2. Other univariate statistical process monitoring and control techniques
   3.3. Multivariate process monitoring and control
4. Acceptance sampling (4 lecture hours)
   4.1. Concepts of acceptance sampling
   4.2. Lot-by-lot acceptance sampling for attributes
   4.3. Other acceptance sampling techniques
5. Reliability and life testing (4 lecture hours)
   5.1. Definition of reliability
   5.2. Life history curve
   5.3. Types of reliability tests
6. Ethiopian experience in quality control (2 lecture hours)
   6.1. Discussion on the Ethiopian industry experience in quality control

Text book: Montgomery, D.C. (2005). Introduction to Statistical Quality Control (5th edition).


WOLLEGA UNIVERSITY

FACULTY OF COMPUTATIONAL SCIENCE, STATISTICS PROGRAM

Stat 492: Statistical Quality Control

Lecture note

1. Introduction (6 lecture hours)

1.1 Quality improvement in the modern business environment
1.2 Modeling process quality
1.2.1 Describing variation
   a. Stem-and-leaf plot
   b. Histogram
   c. Numerical summary of data
   d. The box plot
   e. Probability distributions

Introduction

Quality is key to a nation's economy. It is a very important requirement for any type of product or service. The development of science and technology has succeeded in providing consumers with better and more consistent quality products. Quality and productivity together are more likely to bring prosperity to a country.

In order to develop and to standardize, an enterprise must be competitive. Quality does not refer only to the goodness or otherwise of a finished product; it is the ultimate objective of a company. Management looks to achieve customer satisfaction by running its business at the desired economic level. Both of these can be attained by properly integrating quality development, quality maintenance and quality improvement of the product. The integration of these three aspects of a product can be achieved through a sound quality system.

Quality may be defined in several ways. "Quality" relates to one or more desirable characteristics that a product or service should possess. Quality has become one of the most important consumer decision factors in selection among competing products and services, and there is a substantial return on investment from improved quality and from successfully employing quality as an integral part of overall business strategy.


1.1 Quality improvement in the modern Business Environment

The modern trend is to use statistical methods and other problem-solving techniques to improve the quality of products. These products include manufactured goods such as automobiles and clothing, as well as services.

Quality improvement methods can be applied to any area within a company or organization. Here we discuss the basic definitions of quality, quality improvement and other quality terminology, review the historical development of quality improvement methodology, and give an overview of the statistical tools essential for modern business.

The meaning of quality and quality improvement

Quality may be defined in many ways. Commonly, quality means that a product or service should possess one or more desirable characteristics. This conceptual understanding is certainly a useful starting point for a definition, but it does not cover all aspects.

Quality has become one of the most important consumer decision factors in the selection among competing products and services. Understanding and improving quality is a key factor leading to business success, growth and an enhanced competitive position. There is a substantial return on investment from improved quality.

Dimensions of Quality

The quality of a product can be evaluated in several ways; collectively these are known as the dimensions of quality. There are eight such components, or dimensions.

Dimensions

1. Performance
2. Reliability
3. Durability
4. Serviceability
5. Aesthetics
6. Features
7. Perceived quality
8. Conformance to standards

The traditional definition of quality is based on the viewpoint that products and services must meet the requirements of those who use them. Thus, quality means fitness for use.

There are two general aspects of fitness for use: Quality of design and Quality of conformance.

All goods and services are produced in various grades or levels of quality. These variations in grades or levels of quality are intentional and purposive; the appropriate technical term is quality of design. Design differences include the type of materials used in construction, tolerances in manufacturing, and the reliability obtained through engineering development of engines and drive trains.

In short, quality of design is concerned with differences in specification for products that have the same use.


Quality of conformance is the ability to maintain the specified quality of design. It is influenced by a number of factors, including the choice of manufacturing processes, the training and supervision of the workforce, and the extent to which quality-assurance procedures are followed.

Quality characteristics observed in industry can be classified generally into any one of the following categories.

1. Directly measurable quality characteristics
2. Non-measurable quality characteristics
3. Inspection results obtained by counting defects

Unfortunately, the above definition stresses the conformance aspect of quality more than design. This leads to much less focus on the customer and more of a "conformance-to-specifications" approach to quality. The modern definition of quality is "Quality is inversely proportional to variability": as the variability decreases, the quality of the product increases.

Now we define the term "quality improvement": quality improvement is the reduction of variability in processes and products. Excessive variability in process performance often results in waste. An alternative and highly useful definition is that quality improvement is the reduction of waste. This concept is particularly effective in service industries, where there may not be as many things that can be measured directly. In service industries, a quality problem may be an error or a mistake whose correction requires effort and expense. By improving the service process, this wasted effort and expense can be avoided.

Quality Engineering Terminology

Every product possesses a number of elements that jointly describe what the user thinks of as quality. These parameters are called quality characteristics. They may be of several types:

a. Physical
b. Sensory
c. Time-dependent

All these quality characteristics relate directly or indirectly to the dimensions of quality.

Quality engineering is the set of operational, managerial and engineering activities that a company uses to ensure that the quality characteristics of a product are at the nominal or required levels. Most organizations find it difficult to produce identical products from unit to unit; because of variability, no two products are ever identical. If the variation is small, it may have no impact; otherwise it leads to undesirable and unacceptable outcomes. Sources of this variability include differences in materials, differences in the performance and operation of the manufacturing equipment, and differences in the way operators perform their tasks.

Since variability can be described in statistical terms, statistical methods play a central role in quality improvement efforts. In quality engineering, data can be classified into two categories: attributes, usually discrete data often taking the form of counts, and variables, usually continuous measurements such as length, voltage or viscosity.

Quality control measurements


• Attributes – a performance characteristic that is either present or absent in the product or service under consideration.
• Examples: an order is either complete or incomplete; an invoice can have one, two, or more errors.
• Attributes data are discrete and tell whether the characteristic conforms to specifications.
• Attributes measurements are typically represented as proportions or rates, e.g. the rate of errors per opportunity.
• Attributes are typically measured by "go/no-go" gauges.

• Variables – continuous data concerned with the degree of conformance to specifications.
• Variables data are generally expressed with statistical measures such as averages and standard deviations.
• Sophisticated instruments (e.g. calipers) are used.
• In the statistical sense, attributes inspection is less efficient than variables inspection: attributes data require a larger sample than variables inspection to obtain the same amount of statistical information.
• Most quality characteristics in the service industry are attributes.

As statisticians, we can apply statistically based quality engineering tools to both types of data.

Quality characteristics are often evaluated relative to specifications. A value of a measurement that corresponds to the desired value for a quality characteristic is called the nominal or target value for that characteristic. Target values are usually bounded by a range of values sufficiently close to the target that the function or performance of the product is not affected if the quality characteristic is within that range. The largest allowable value for a quality characteristic is called the upper specification limit (USL), and the smallest allowable value is called the lower specification limit (LSL). Some quality characteristics have a specification limit on only one side of the target.
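To make this terminology concrete, here is a minimal Python sketch; the function name, specification limits and measured values are hypothetical illustrations, not from the text:

```python
def conforms(x, lsl=None, usl=None):
    """Check a measured quality characteristic against specification limits.
    Either limit may be None, modeling one-sided specifications."""
    if lsl is not None and x < lsl:
        return False
    if usl is not None and x > usl:
        return False
    return True

# Hypothetical part with target 10.0 mm, LSL = 9.9 mm, USL = 10.1 mm
print(conforms(10.05, lsl=9.9, usl=10.1))   # True: inside both limits
print(conforms(10.15, lsl=9.9, usl=10.1))   # False: above the USL
```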

Specifications are usually the result of the engineering design process for the product. Traditionally, design engineers have arrived at a product design configuration through the use of engineering science principles, which often results in the designer specifying the target values for the critical design parameters. Prototype construction and testing then follow, often performed in an unstructured manner, without the use of statistically based experimental design procedures and without much interaction with, or knowledge of, the manufacturing processes that must produce the component parts and final product. Finally the design is released to manufacturing; this is referred to as the over-the-wall approach.


Problems are greater in this approach because specifications are often set without regard to the inherent variability that exists in materials and other parts of the system, which leads to products or components that are nonconforming (they fail to meet one or more of their specifications). A specific type of failure is called a nonconformity. A nonconforming product is not necessarily unfit for use.

A nonconforming product is considered defective if it has one or more defects, which are nonconformities serious enough to significantly affect the safe or effective use of the product.

Statistical methods for quality control and Improvement

Statistical and engineering technology is useful in quality improvement; here we focus on three major areas: statistical process control, design of experiments and acceptance sampling. In addition, some other statistical tools are useful in analyzing quality problems and improving the performance of production processes.

The diagram given below presents a production process as a system with a set of inputs and outputs. The inputs x1, x2, ..., xn are controllable, such as temperatures, pressures, and so on. The inputs z1, z2, ..., zm are uncontrollable, such as environmental factors, raw materials, and so on. The manufacturing process transforms these inputs into a finished product that has several quality characteristics. The output variable y is a measure of process quality.


Figure: production process inputs (I/P) and outputs (O/P). Controllable inputs x1, x2, ..., xn (input raw materials, components and subassemblies) and uncontrollable inputs z1, z2, ..., zm enter the process; the output is the product, with quality characteristic y. A measurement, evaluation, monitoring and control loop acts on the process.


A control chart is one of the primary techniques of statistical process control (SPC). A typical control chart plots the averages of measurements of a quality characteristic in samples taken from the process versus time. The chart has a central line (CL) and upper and lower control limits (UCL and LCL). The central line represents where the process characteristic should fall if there are no unusual sources of variability present. The control limits are determined from simple statistical methods. Usually, control charts are applied to the output variable in a system. The control chart is a very useful process monitoring technique: when unusual sources of variability are present, sample averages will plot outside the control limits. This is a signal that some investigation of the process should be made and corrective action taken to remove these unusual sources of variability. Systematic use of a control chart is an excellent way to reduce variability.

A designed experiment is extremely helpful in discovering the key variables influencing the quality characteristics of interest in the process. A designed experiment is an approach to systematically varying the controllable input factors in the process and determining the effect these factors have on the output product parameters. Statistically designed experiments are invaluable in reducing the variability in the quality characteristics and in determining the levels of the controllable variables that optimize process performance. One major type of designed experiment is the factorial design, in which factors are varied together in such a way that all possible combinations of factor levels are tested.

Further, designed experiments are a major off-line quality control tool, because they are often used during development activities and the early stages of manufacturing, rather than as a routine on-line or in-process procedure. They play a crucial role in reducing variability. Once we have identified a list of important variables that affect the process output, it is usually necessary to model the relationship between the influential input variables and the output quality characteristics. Statistical techniques useful in constructing such models include regression analysis and time series analysis.

The third area of quality control and improvement we consider is acceptance sampling. This is connected with the inspection and testing of product, which is one of the earliest aspects of quality control, dating back long before statistical methodology was developed for quality improvement.


The primary objective of quality engineering efforts is the systematic reduction of variability in the key quality characteristics of the product. Statistically designed experiments can be employed in conjunction with statistical process control to minimize process variability in nearly all industrial settings.

Statistical Concepts in Quality Control

The greatly increased precision of manufactured parts has been accompanied by the need for better methods to measure, specify and record this precision.

Statistics, the science of numbers, has consequently become one of the most valuable techniques in the quality control job.

Statistical Quality Control is a branch of quality control. It is the collection, analysis and interpretation of data to solve a particular problem.

In modern manufacturing, no two pieces are ever made exactly alike. The variation may be small or large.

Types of Variation

There are three classifications useful for analytical purposes:

1. Variation within the part itself
2. Variation among parts produced during the same period
3. Variation among parts produced at different periods

Reasons for Variation

1. Tool wear
2. Bearings that loosen
3. Vibrations
4. Poor raw materials
5. Measuring error


6. Weather changes, and so on

A Brief History of Quality control and Improvement

Quality has always been an integral part of virtually all products and services. However, our awareness of its importance and the introduction of formal methods for quality control and improvement have been an evolutionary development. The table below gives the evolutionary process.

Other Aspects of Quality Control and Improvement

Even though statistical techniques are the critical tools for quality control and improvement, to be used most effectively they must be implemented within, and be part of, a management system that is quality driven. To achieve and improve quality, some managerial frameworks are also used: total quality management (company-wide quality control, total quality assurance) and Six Sigma.

Total Quality Management (TQM)

TQM is a strategy for implementing and managing quality improvement activities on an organization-wide basis. It began in the early 1980s with the philosophies of Deming and Juran as the focal point, and it evolved into a broader spectrum of concepts and ideas involving participative organizations and work toward improvement goals.

Some general reasons for the lack of conspicuous success of TQM include:

1. Lack of top-down, high-level management commitment and involvement
2. Inadequate use of statistical methods and insufficient recognition of variability reduction as a prime objective
3. Diffuse, as opposed to focused and specific, objectives
4. Too much emphasis on widespread training as opposed to focused technical education

Another reason for the erratic success of TQM is that many managers and executives have regarded it as just another "program" to improve quality. During the 1950s and 1960s, programs such as zero defects and value engineering abounded, but they had little real impact on quality and productivity improvement.

ISO 9000 series

1. Management responsibility for quality
2. Design control
3. Document and data control
4. Purchase and contract management
5. Product identification and traceability
6. Inspection and testing
7. Process control
8. Handling of nonconforming product
9. Handling, storage, packing and delivery


10. Control of quality records
11. Internal audits
12. Training
13. Statistical methods

Difference between Statistical Quality Control (SQC) and Statistical Process Control (SPC)

While there might have been some philosophical separation between SQC and SPC, the idea of "quality" is larger and more encompassing than that of "process".

The term "process" is problematic by nature; SQC can be seen as the management version of SPC.

SPC is the process of overseeing and controlling how a product is produced, using statistical methods in order to guarantee its quality and to ensure that the process produces uniform products with minimum waste.

The use of SPC started in the early 1920s for the purpose of improving the quality of manufactured products. It was later adapted and applied to processes other than manufacturing, such as software engineering. Traditional quality control checks the product after production, either passing or rejecting it based on certain characteristics (specifications).

SPC, in contrast, checks the production process for flaws that could lead to unacceptable products. SQC refers to the use of statistical tools to analyze variations in the manufacturing process in order to make it better.

SPC is a category of SQC that also uses statistical tools to oversee and control the production process to ensure the production of uniform products with less waste. SPC checks the production process for flaws that may lead to low-quality products, while SQC uses a specific number of samples to determine the acceptability of a product. In SQC, the tools are:

1. Descriptive statistics
2. SPC
3. Acceptance sampling

Thus SPC is contained within SQC; SQC is the term used to describe the set of statistical tools used by quality professionals. SQC is used to analyze quality problems and solve them, and to apply statistical methods in monitoring and maintaining the quality of products and services.

1.2 Modeling Process Quality


Here we show how simple tools of descriptive statistics can be used to express variation quantitatively in a quality characteristic when a sample of data on this characteristic is available. We then discuss probability distributions and show how they provide a tool for modeling or describing the quality characteristics of a process.

Describing Variation

a. The stem-and-leaf plot

No two units of product produced by a manufacturing process are identical; some variation is inevitable. For instance, in manufacturing soft drinks, the net content of a can varies slightly from can to can. Statistics is the science of analyzing data and drawing conclusions, taking variation in the data into account.

There are several methods, numerical and graphical, that are very useful for summarizing and presenting data. We now discuss one of the most important and useful graphical methods, the stem-and-leaf plot.

Suppose that the data are represented as x1, x2, ..., xn and that each number xi consists of at least two digits. To construct the plot, we divide each number xi into two parts: a stem, consisting of one or more of the leading digits, and a leaf, consisting of the remaining digits. In general, we should choose relatively few stems in comparison with the number of observations; it is usually best to choose between 5 and 20 stems. Once a set of stems has been chosen, they are listed along the left-hand margin of the display, and beside each stem all leaves corresponding to the observed data values are listed in the order in which they are encountered in the data set. For example, a display might look like this:

STEM | LEAF    | Frequency
  4  | 8 9 7 9 | 4
  5  | 3 2 1 3 | 4
  6  | 3 0 4 2 | 4
  7  | 2 1 2   | 3
  8  | 2 1 3 4 | 4
  9  | 8 7 3 4 | 4
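A stem-and-leaf display is easy to generate programmatically. The following is a minimal Python sketch; the data values are hypothetical, chosen only to reproduce the display above (stems are the tens digits, leaves the units digits):

```python
from collections import defaultdict

def stem_and_leaf(data, leaf_digits=1):
    """Print a stem-and-leaf display: the stem is the leading digits,
    the leaf is the trailing digit(s), listed in order of appearance."""
    divisor = 10 ** leaf_digits
    groups = defaultdict(list)
    for x in data:                          # keep leaves in encounter order
        groups[x // divisor].append(x % divisor)
    for stem in sorted(groups):
        leaves = " ".join(str(leaf) for leaf in groups[stem])
        print(f"{stem:>3} | {leaves}   ({len(groups[stem])})")

values = [48, 49, 47, 49, 53, 52, 51, 53, 63, 60, 64, 62,
          72, 71, 72, 82, 81, 83, 84, 98, 97, 93, 94]
stem_and_leaf(values)
```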

b. Histogram

A frequency distribution is an arrangement of the data by magnitude. It is a more compact summary of data than a stem-and-leaf display. A histogram is a graphical representation of a frequency distribution showing three properties:


1. Shape
2. Location
3. Scatter

Several guidelines are helpful in constructing histograms. When the data are numerous, grouping them into cells is very useful.
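As a sketch of these guidelines, the following Python code groups hypothetical data into roughly square-root-of-n cells and prints a text histogram; the data and the square-root rule of thumb for the cell count are illustrative assumptions:

```python
import numpy as np

np.random.seed(0)
data = np.random.normal(loc=50, scale=2, size=200)   # hypothetical measurements

n_bins = int(np.sqrt(len(data)))                     # ~sqrt(n) cells, a common guideline
counts, edges = np.histogram(data, bins=n_bins)

# One row per cell: the bar of '*' shows the frequency in that cell
for count, lo, hi in zip(counts, edges[:-1], edges[1:]):
    print(f"{lo:6.2f} - {hi:6.2f} | {'*' * count}")
```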

c. Numerical summary of data

The stem-and-leaf display and the histogram provide a visual display of three properties of sample data: the shape of the distribution of the data, the central tendency in the data, and the variability in the data. For central tendency we can use the sample mean, and for variability the sample variance, to obtain a numerical summary of the data.
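A minimal sketch of these numerical summaries in Python; the sample values are hypothetical:

```python
import math

def summary(data):
    """Sample mean (central tendency) and sample variance and sd (variability),
    using the n-1 divisor for the variance."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    return mean, var, math.sqrt(var)

mean, var, sd = summary([16.1, 15.9, 16.0, 16.2, 15.8, 16.1])
print(f"mean = {mean:.3f}, variance = {var:.4f}, sd = {sd:.4f}")
```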

d. The Box Plot

The stem-and-leaf display and the histogram provide a visual impression of a data set, whereas the sample average and standard deviation provide quantitative information about specific features of the data. The box plot is a graphical display that simultaneously shows several important features of the data, such as location, spread, departure from symmetry, and identification of observations that lie unusually far from the bulk of the data. A box plot displays the three quartiles, the minimum and the maximum of the data on a rectangular box, aligned either horizontally or vertically. The box encloses the interquartile range, with one edge at the first quartile Q1 and the other at the third quartile Q3. A line is drawn through the box at the second quartile Q2, which is the median. A line at either end extends to the extreme values; these lines are usually called whiskers, and some authors refer to the box plot as the box-and-whisker plot.
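The five-number summary behind a box plot can be computed directly. A sketch with hypothetical data follows; matplotlib's plt.boxplot would draw the same quantities:

```python
import numpy as np

data = np.array([2.5, 2.7, 2.9, 3.0, 3.1, 3.2, 3.4, 3.6, 4.1, 5.8])  # hypothetical

q1, q2, q3 = np.percentile(data, [25, 50, 75])
print(f"Q1 = {q1}, median = {q2}, Q3 = {q3}, IQR = {q3 - q1}")
print(f"min = {data.min()}, max = {data.max()}")
```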

e. Probability Distributions

A histogram, stem-and-leaf plot or box plot is used to describe sample data. A sample is a collection of measurements selected from some larger source or population.

A probability distribution is a mathematical model that relates the value of a variable to the probability of occurrence of that value in the population. In other words, the variable involved is random; to describe such random variables we have:

1. Discrete distributions

When the parameter being measured can take on only certain values, such as the integers, the probability distribution is called a discrete distribution.
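For example, the number of nonconforming items in a sample follows a binomial distribution. A sketch using scipy; the sample size and fraction nonconforming are hypothetical numbers:

```python
from scipy import stats

# Number of nonconforming items in a sample of n = 50 items drawn from a
# process with fraction nonconforming p = 0.05 (hypothetical values)
print(stats.binom.pmf(2, n=50, p=0.05))   # P(exactly 2 nonconforming), ~0.26
print(stats.binom.cdf(2, n=50, p=0.05))   # P(at most 2 nonconforming), ~0.54
```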

2. Continuous Distributions

When the variable being measured is expressed on a continuous scale, its probability distribution is called a continuous distribution.

2. Methods of statistical process control and capability analysis

2.1. Methods and philosophy of statistical process control
2.2. Control charts for variables
2.3. Control charts for attributes
2.4. Process and measurement system capability analysis

Basic Methods of Statistical process control and capability analysis

It is not easy to inspect or test quality into a product. The manufacturing process must be stable, and everyone involved with the process, including operators, engineers and management, must be engaged. Quality improvement requires continuously monitoring the process and reducing variability in key parameters.


On-line SPC is used to achieve the above objective. Control charts are the simplest type of on-line SPC. These charts were invented by Dr. Walter A. Shewhart, hence they are known as Shewhart control charts. There are two different kinds of control charts based on the data involved: 1. variables control charts and 2. attributes control charts. The three fundamental uses of a control chart are:

1. Reduction of process variability
2. Monitoring and surveillance of a process
3. Estimation of product or process parameters

2.1. Method and philosophy of statistical process control

To make control charts effective, we also explore process-capability analysis, which shows how control charts and other statistical techniques can be used to estimate the natural capability of a process and to determine how it will perform relative to specifications on the product, along with some aspects of setting specifications and tolerances, including the tolerance "stack-up" problem.

All of the above concepts require some basic general methodology of SPC. Before moving on to control charts, we describe several fundamental SPC problem-solving tools.

The basic SPC problem-solving tools are called the "magnificent seven". If a product is to meet customer requirements, generally it should be produced by a process that is stable or repeatable. More precisely, the process must be capable of operating with little variability around the target or nominal dimensions of the product's quality characteristics.

SPC is a powerful collection of problem-solving tools useful in achieving process stability and improving capability through the reduction of variability. SPC can be applied to any process.

The seven tools for SPC are:
a. Histogram or stem-and-leaf diagram
b. Check sheet
c. Pareto chart
d. Cause-and-effect diagram
e. Defect concentration diagram
f. Scatter diagram
g. Control charts

SPC builds an environment in which all individuals in an organization desire continuous improvement in quality and productivity.

Chance and Assignable causes

In any production process a certain amount of inherent or natural variability will always exist. This natural variability, or "background noise", is the cumulative effect of many small, essentially unavoidable causes. In the framework of SQC, this natural variability is often called a "stable system of chance causes".

A process that is operating with only chance causes of variation present is said to be in statistical control; the chance causes are an inherent part of the process.

Other variability arises from three sources:
1. Improperly adjusted or controlled machines


2. Operator errors
3. Defective raw materials

This variability is generally large when compared to the background noise, and it usually represents an unacceptable level of process performance.

Sources of variability that are not part of the chance cause pattern are known as "assignable causes". A process that is operating in the presence of assignable causes is said to be "out of control".


The main goal of SPC is to quickly detect the occurrence of assignable causes of process shifts, so that the process can be investigated and corrective action taken before many nonconforming units are manufactured. Control charts are a widely used on-line process-monitoring technique. They may also be used to estimate the parameters of a production process and, through this information, to determine process capability, and they may provide information useful in improving the process. The ultimate goal of SPC is the elimination of variability in the process. It may not be possible to eliminate variability completely, but the control chart is an effective tool for reducing it as much as possible.

Statistical Basis of the Control Charts

A. Basic principles

A typical control chart is a graphical presentation of a quality characteristic: the sample number (or time) is plotted on the x-axis and the value of the sample statistic on the y-axis. The chart contains three lines: a central line representing the average value of the quality characteristic, and two dotted horizontal lines called the upper control limit and the lower control limit. These control limits are chosen so that if the process is in control, nearly all of the sample points will fall between them. As long as the points plot within the control limits, the process is assumed to be in control and no action is necessary.


However, a point that plots outside of the control limits is interpreted as evidence that the process is out of control, and corrective action is carried out by investigating and eliminating the assignable cause or causes responsible. It is standard practice to connect the sample points on the control chart with straight-line segments, so that it is easier to see how the sequence of points has evolved over time.

The general model for the control chart limits is as follows. Let w be a sample statistic that measures some quality characteristic of interest, and suppose that the mean of w is µw and the standard deviation of w is σw. Then the limits are

CL = µw
UCL = µw + Lσw
LCL = µw - Lσw

where L is the "distance" of the control limits from the central line. This general theory of control charts was first proposed by Dr. Walter A. Shewhart, and control charts developed according to these principles are often called Shewhart control charts.
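A minimal Python sketch of these limits for a chart of sample means (so w is the sample mean and σw = σ/√n). The data are simulated, and σ is estimated here from the pooled standard deviation for simplicity, rather than from the average range as is common in practice:

```python
import numpy as np

def shewhart_limits(samples, L=3.0):
    """Limits CL = µw, UCL/LCL = µw ± L·σw for a chart of sample means,
    where σw = σ/√n and σ is estimated from the pooled data (a simplification)."""
    samples = np.asarray(samples)               # shape (m, n): m samples of size n
    cl = samples.mean()                         # estimate of µw
    sigma_w = samples.std(ddof=1) / np.sqrt(samples.shape[1])
    return cl - L * sigma_w, cl, cl + L * sigma_w

rng = np.random.default_rng(1)
data = rng.normal(74.0, 0.01, size=(25, 5))     # hypothetical: 25 samples of 5 parts
lcl, cl, ucl = shewhart_limits(data)
print(f"LCL = {lcl:.4f}, CL = {cl:.4f}, UCL = {ucl:.4f}")
```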

The control chart is a device for describing in a precise manner exactly what is meant by statistical control.

Sample data are collected and used to construct the control chart, and if the sample values fall within the control limits and do not exhibit any systematic pattern, we say the process is in control at the level indicated by the chart. In many applications the chart is used for on-line process surveillance.

We may be interested in determining both whether the past data came from a process that was in control and whether future samples from this process indicate statistical control.


The most important use of a control chart is to improve the process:
1. Most processes do not operate in a state of statistical control
2. The continuous use of control charts will identify assignable causes
3. The control chart will only detect assignable causes

It is important to find the underlying root cause of the problem and to attack it. Developing an effective system for corrective action is an essential component of an effective SPC implementation. A very important part of the corrective action process associated with control chart usage is the out-of-control action plan (OCAP); an OCAP should accompany every control chart, and control charts without an OCAP are not likely to be very useful as a process improvement tool. We may also use the control chart as an estimating device: from a control chart, we may estimate certain process parameters, such as the mean, standard deviation, and fraction nonconforming or fallout. These estimates may then be used to determine the capability of the process.

Control charts may be classified into two general types. If the quality characteristic can be measured and expressed as a number on some continuous scale of measurement, it is usually called a variable. In such cases it is convenient to describe the quality characteristic with measures of central tendency and dispersion. Control charts of this type are collectively called variables control charts.

In real situations, many quality characteristics are not measured on a continuous scale. In such cases, we may judge each unit of product as either conforming or nonconforming (defective or non-defective, good or bad, and so on), on the basis of whether or not it possesses certain attributes, or we may count the number of nonconformities (defects) appearing on a unit of product. Control charts for such quality characteristics are called attributes control charts.

An important factor in control chart usage is the design of the control chart. By this we mean the selection of the sample size, the control limits and the frequency of sampling. Control chart design can also be examined from an economic point of view, considering explicitly the cost of sampling, the losses from allowing defective product to be produced, and the costs of investigating out-of-control signals that are really "false alarms".

Another important consideration in control chart usage is the type of variability exhibited by the process. There are three different kinds of process behavior:

a. Stationary and uncorrelated (white noise)
b. Stationary and autocorrelated
c. Non-stationary

By a stationary process we mean that the process data vary around a fixed mean in a stable or predictable manner. This is the type of behavior that Shewhart implied was produced by an in-control process. If the process is uncorrelated, the observations give the appearance of having been drawn at random from a stable population, perhaps a normal distribution. This type of data is referred to as white noise. In such a process, past values of the data are of no help in predicting any of the future values.

If the process is autocorrelated, successive observations are dependent: a value above the mean tends to be followed by another value above the mean, whereas a value below the mean is usually followed by another such value. This produces a data series that has a tendency to move in moderately long "runs" on either side of the mean.

If the process exhibits non-stationary variation, it is very unstable in that it drifts or "wanders" about without any sense of a stable or fixed mean. This type of behavior often occurs in the chemical and process industries. In most


of these industrial settings, we stabilize this type of behavior by using engineering process control, because some factors, such as environmental variables or the properties of raw materials, cannot themselves be stabilized.

Shewhart control charts are most effective when the in-process data are stationary and uncorrelated. This implies that the charts can be designed so that their performance is predictable and reasonable to the user, and that they are effective in reliably detecting out-of-control conditions.

Control charts have had a long history of use in industry. The reasons for their popularity are:
I. Control charts are a proven technique for improving productivity
II. Control charts are effective in defect prevention
III. Control charts prevent unnecessary process adjustment
IV. Control charts provide diagnostic information
V. Control charts provide information about process capability

Choice of Variable

The variable chosen for x-bar and R control charts should be one that can be measured and expressed in numbers, such as dimension, hardness, tensile strength, weight, volume, etc. In many organizations, the choice of the right variable is often troublesome: there may be a large number of variables, that is, dimensions on many parts of products. Obviously, only the few that will result in real savings in cost should be selected for control chart purposes.

In other words, the variable selected should be one that is likely to reduce cost, for example a quality characteristic that is responsible for high rejection or rework, where the spoilage and rework costs are high.

Choice of control Limits

Specifying the control limits is one of the important and careful decisions that must be made in designing a control chart. By moving the control limits farther from the central line, we decrease the risk of a type I error: the risk of a point falling beyond the control limits, indicating an out-of-control condition when no assignable cause is present. However, widening the control limits also increases the risk of a type II error: the risk of a point falling between the control limits when the process is really out of control. If we move the control limits closer to the central line, the opposite effect is obtained: the risk of a type I error is increased, while the risk of a type II error is decreased.

Control Limits

For plotting control charts, ±3σ limits are generally selected, and they are termed the control limits. They represent a band within which the dimensions of the components are expected to fall. It is known that 3σ limits imply that 99.7 percent of the samples from a given population will fall within these limits; the remaining 0.3 percent will fall outside, that is, about 3 out of 1000 points will fall outside the limits. Since this is a very small risk, ±3σ limits have been found to give good practical results.
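The 99.7 percent figure comes from the normal distribution; a quick check in Python:

```python
from math import erf, sqrt

# P(|Z| <= 3) for a standard normal variable: the fraction of points expected
# inside the ±3σ limits when the process is unchanged
p_inside = erf(3 / sqrt(2))
print(f"inside 3-sigma limits: {p_inside:.5f}")      # ~0.99730
print(f"outside (false alarm): {1 - p_inside:.5f}")  # ~0.00270, i.e. ~3 in 1000
```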


Thus, if the sample average is within the 3σ limits, it is assumed that any variation between the sample average and the desired population average is due to chance causes, that is, no assignable causes of variation are present. If a sample average falls exactly at one of the 3σ limits, it is usually assumed that no change has taken place, but it is essential to take another sample soon after to verify this assumption.

When it is found that a shift has taken place, the next step is to find the assignable cause, which may be due to production equipment, materials or operator methods. The cause must then be eliminated so that future production is not affected adversely.

Sometimes there is a possibility of making an error. As discussed earlier, with 3σ limits, in the long run 3 samples out of every 1000 will fall outside the limits even if no change takes place in the population average.

As a result, there will be occasions when we look for an assignable cause of variation when none exists, because no shift has taken place; in statistical terms this is known as a "type I error". On the other side, if we conclude that the universe has not changed when it really has, this is a "type II error". In simple words: the wider the limits, the greater the probability of making a type II error and the smaller the probability of making a type I error.

Warning limits

Some analysts suggest using two sets of limits on control charts. The outer limits, at 3σ, are usually known as action limits: when a point plots outside these limits, a search for an assignable cause is made and corrective action is taken if necessary. The inner limits, at 2σ, are called warning limits. If one or more points fall between the warning limits and the control limits, or very close to the warning limits, we should be suspicious that the process may not be operating properly. One possible action to take when this occurs is to increase the sampling frequency and/or the sample size so that more information about the process can be obtained quickly.


Process control schemes that change the sample size and/or the sampling frequency depending on the position of the current sample value are called adaptive, or variable sampling interval, or variable sample size schemes. These types of schemes have been used in practice for many years and have recently been studied extensively by researchers in the field. The use of warning limits can increase the sensitivity of the control chart, allowing it to signal a shift in the process more quickly. One disadvantage is that warning limits may be confusing to operating personnel; this is not usually a serious objection, however, and many practitioners use warning limits routinely on control charts. A more serious objection is that although the use of warning limits can improve the sensitivity of the chart, they also result in an increased risk of false alarms.

Basis of subgrouping

The information given by the control chart depends on the basis of the subgroups selected. Hence it is an important task to select the subgroups when setting up a control chart. The following points may be helpful in selecting subgroups when constructing a control chart:

1. Each subgroup should be as homogeneous as possible
2. There should be maximum opportunity for variation from one subgroup to another
3. Samples should not be taken at exactly equal intervals of time

The primary purpose of keeping the charts is to detect shifts in the process average. It is observed that if one sample is drawn from one population and a second sample from a second population, there is a large difference between the two samples. Drawing all the items of a subgroup at random from a single population keeps the chance of variation within a selected subgroup to a minimum.

In the same way, when all items of one subgroup are taken from one population and all items of another subgroup are taken from a second population, the chance of variation from one subgroup to another is maximized. It should also be noted that the samples should not be taken at exactly equal time intervals, and it is better that the samples for inspection are selected without informing the operators.

Sometimes the scheme of subgrouping needs to be modified for the following reasons:

1. There are difficulties in obtaining homogeneous samples, or
2. A basis for acceptance needs to be provided

Sample size and sampling frequency

For any such work we must specify the sample size. In designing a control chart, both the sample size and the frequency of sampling must be specified. It is known that larger samples make it easier to detect small shifts in the process.

When choosing the sample size, we must keep in mind the size of the shift that we are trying to detect. If the process shift is relatively large, we can use smaller sample sizes than those that would be employed if the shift of interest were relatively small.

Regarding the frequency of sampling, the most desirable situation from the point of view of detecting shifts would be to take large samples very frequently; however, this is usually not economically feasible. The general problem is one of allocating sampling effort: either we take small samples at short intervals or larger samples at longer intervals. Current industry practice tends to favor smaller, more frequent samples, particularly in high-


volume manufacturing processes. An alternative way to evaluate decisions regarding sample size and sampling frequency is through the average run length (ARL) of the control chart. Essentially, the ARL is the average number of points that must be plotted before a point indicates an out-of-control condition. If the process observations are uncorrelated, then for any Shewhart control chart the ARL can be calculated as ARL = 1/p, where p is the probability that any point exceeds the control limits. This equation can be used to evaluate the performance of the control chart.

In recent years, describing the performance of control charts by the ARL has been criticized, because the distribution of the run length for a Shewhart control chart is a geometric distribution. This raises two problems:

1. The standard deviation of the run length is very large
2. The geometric distribution is quite skewed, so the mean of the distribution is not necessarily a very "typical" value of the run length

It is also occasionally convenient to express the performance of the control chart in terms of its average time to signal (ATS). If samples are taken at fixed intervals of time that are h hours apart, then ATS = ARL × h.
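A worked example in Python for 3σ limits; the one-hour sampling interval is a hypothetical choice:

```python
from math import erf, sqrt

p = 1 - erf(3 / sqrt(2))   # probability a point exceeds 3-sigma limits in control, ~0.0027
arl = 1 / p                # average run length: ~370 points between false alarms
h = 1.0                    # hypothetical: samples taken h = 1 hour apart
ats = arl * h              # average time to signal, in hours
print(f"p = {p:.5f}, ARL = {arl:.0f} samples, ATS = {ats:.0f} hours")
```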

Sample size

To provide maximum homogeneity within a subgroup, the size of the subgroup should be as small as possible; in practice it should be four or five. The distribution of the sample mean is nearly normal for subgroups of four or five, and this fact is helpful in the interpretation of control chart limits. Larger subgroups are more sensitive to shifts in the process average: a larger sample causes the limits of a control chart to be closer to the central line, making it easier to detect small variations. This happens because the standard deviation of p, the sample mean, or R varies inversely with √n; hence the larger the sample size, the smaller the standard deviation and the closer the 3σ limits will be to the central line on the chart.
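A short numerical illustration of this 1/√n effect on the width of x-bar chart limits; the process standard deviation here is a hypothetical value:

```python
import math

sigma = 2.0   # hypothetical process standard deviation
for n in (2, 4, 5, 10):
    half_width = 3 * sigma / math.sqrt(n)   # 3-sigma limits for x-bar: CL ± 3σ/√n
    print(f"n = {n:2d}: control limits at CL ± {half_width:.3f}")
```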

If the cost of measurement is quite high, it may be necessary to use a smaller sample size of two or three.

Frequency of sampling

In general, the frequency of sampling is based on just how well the operation is going. There are two possible approaches:

a. Take larger samples at less frequent intervals, or
b. Take smaller samples at more frequent intervals

The selection will be governed by the cost of taking and analyzing measurements, and also by the benefits to be derived from action based on the control charts.

The frequency of subgroups should be higher at the initial stages and can be reduced when the function of the control chart is only to maintain control over current production. The frequency of taking a subgroup may be expressed either in terms of time, such as once an hour, or as a proportion of the items produced, such as 5 out of 100.

Rational Subgroups


In control charting, sample data are collected according to what Shewhart called "rational subgroups". For instance, suppose that we are using an x-bar control chart to detect changes in the process mean. Subgroups or samples should then be selected as rational subgroups, so that if assignable causes are present, the chance of differences between subgroups is maximized, while the chance of differences due to these assignable causes within a subgroup is minimized.

Two general approaches to constructing rational subgroups are used:

In the first approach, each sample consists of units that were produced at the same time; for this we can take consecutive units of production. This approach is used when the primary purpose of the control chart is to detect process shifts. It minimizes the chance of variability due to assignable causes within a sample, and maximizes the chance of variability between samples if assignable causes are present. It also gives a better estimate of the standard deviation of the process in the case of variables control charts. This approach to rational subgrouping essentially gives a "snapshot" of the process at each point in time where a sample is collected.

In the second approach, each sample consists of units of product that are representative of all units that have been produced since the last sample was taken; essentially, each subgroup is a random sample of all process output over the sampling interval. This method of rational subgrouping is often used when the control chart is employed to make decisions about the acceptance of all units of product that have been produced since the last sample. In fact, if the process shifts to an out-of-control state and then back into control again between samples, it is sometimes argued that the first method of rational subgrouping will be ineffective against these types of shifts, and so the second method must be used.

When the rational subgroup is a random sample of all units produced over the sampling interval, considerable care must be taken in interpreting the control charts. If the process mean drifts between several levels during the interval between samples, this may cause the range of the observations within the sample to be relatively large, resulting in wider limits on the x-bar chart.

The concept of the rational subgroup is very important. The proper selection of samples requires careful consideration of the process, with the objective of obtaining as much useful information as possible from the control chart analysis.

Preliminary inferences from control chart

Lack of control is indicated by points falling outside the control limits on either the x-bar or the R chart. It means that some assignable causes of variation are present; the system is not a constant-cause system.

We can say a process is in control if all points fall inside the control limits; this implies that no assignable causes of variation are present. On the other hand, a process is said to be out of control if some of the sample points fall outside the control limits. How many points fall outside the control limits, and whether they can be tolerated, should also be considered.

In order to detect shifts in the process average even when all points lie within the control limits, it is customary to use various practical working rules. These rules depend only on extreme runs:

1. Whenever a run of seven consecutive points is on one side of the central line
2. Whenever, of eleven successive points on the control chart, at least ten are on the same side of the central line


3. Whenever, of fourteen successive points on the control chart, at least twelve are on the same side of the central line
4. Whenever, of seventeen successive points on the control chart, at least fourteen are on the same side
5. Whenever, of twenty successive points, at least sixteen are on the same side

Whenever one of the above one-sided runs occurs, it is suggested that the position of the central line on the chart be reviewed. The first of these rules is easy to automate, as the sketch below shows.
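A minimal Python sketch of the run-of-seven rule; the function name and data are illustrative, and points exactly on the central line are treated as below it for simplicity:

```python
def extreme_run(points, center, run=7):
    """Return start indices where `run` consecutive points fall on the
    same side of the central line (rule 1 above)."""
    sides = [1 if x > center else -1 for x in points]
    hits = []
    for i in range(len(sides) - run + 1):
        window = sides[i:i + run]
        if all(s == window[0] for s in window):
            hits.append(i)
    return hits

means = [10.2, 10.3, 10.1, 10.4, 10.2, 10.3, 10.5, 10.1, 10.2, 10.4]  # hypothetical
print(extreme_run(means, center=10.0))   # all points above CL: [0, 1, 2, 3]
```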

Interpretation of process in control

With evidence from the control chart that a process is in control, we are in a position to judge whether the manufactured product meets the specifications or not. The control chart gives us estimates of the centering of the process (the grand average of the sample means) and of the dispersion of the process (estimated as R-bar/d2, the average range divided by the control chart constant d2).

The x-bar chart for a chance pattern of variation has two important characteristics:

1. Most of the points are near the central line, and
2. Very few points are near the control limits.

The points on the x-bar and R charts follow a pattern of variation on the control chart, and for a proper interpretation of the process it is good to have knowledge of these patterns; in any process it is possible to find the causes which might give rise to a particular pattern of variation. While interpreting the x-bar chart we must also construct the R chart, because the x-bar chart cannot be interpreted properly unless the corresponding R chart is in statistical control.

These control patterns can be classified into two major categories:

1. Chance pattern of variation
2. Assignable-cause pattern of variation

The process will be in a state of statistical control if a chance pattern of variation is exhibited by the x-bar and R charts. An unnatural pattern of variation indicates that the process is out of control and corrective action is necessary.

The ability of the process to meet specified tolerances rests on the ability to distinguish between a chance pattern of variation and an assignable-cause pattern of variation.

a. Chance causes pattern of variation

Since the central tendency gives a representative value that tends toward the center point of the distribution, it is natural to expect an equal number of points on either side of the central line when only chance causes are operating.

A control chart exhibiting a chance pattern of variation will thus have the following three characteristics:

1. Most of the points will lie near the central line


2. Very few points will be near the control limits
3. Almost none of the points (about 3 in 1000) will fall outside the control limits

b. Assignable (unnatural) causes pattern of variation

It is equally necessary to interpret the chart when the process is not in control. The unnatural patterns of variation are:

A. Extreme variation
B. Indication of trend
C. Shifts
D. Erratic fluctuations

A. Extreme variation

Extreme variation is recognized by points falling outside both control limits. The width of the control limits on the control chart represents the variation due to the inherent characteristics of the process, that is, the normal permissible variation in machines, materials and men. Thus, when sample points fall outside these limits on the x-bar chart, the p chart or both, it means some assignable causes of error are present and corrective action is necessary to produce the products within the specified limits.

Causes of extreme variation

1. Errors in measurement and calculation
2. Samples chosen at a peak of temperature, pressure or other such factors
3. Wrong setting of machines, tools, etc.
4. Samples chosen at the commencement or end of an operation

B. Indication of trend

If consecutive points on the x-bar or R chart tend to move steadily toward either the lower or the upper control limit, it can be assumed that the process is indicating a "trend", that is, a change is taking place slowly. Even though all the points are lying within the control limits, after some time the process is likely to go out of control if proper care or corrective action is not taken.

Causes of Trend

1. Tool wear
2. Wear of threads on clamping devices
3. Effects of temperature and humidity
4. Accumulation of dirt and clogging of fixtures and holes


Figure: cycles on a control chart, a mixture pattern, a shift in process level, and a trend in process level.

An increasing trend (upward) on the R chart indicates gradual wearing of operating machine parts. A decreasing trend (downward) on the R chart indicates improvement in operations, better maintenance, and improved control of the process.

C. Shift

When a series of consecutive points falls above or below the central line on either the x-bar or the R chart, it can be assumed that a shift in the process has taken place, indicating the presence of some assignable cause. It is generally assumed that when 7 consecutive points lie above or below the central line, a shift has occurred.

Causes of shift:

1. Change in material
2. Change in operator, inspector or inspection equipment
3. Change in machine setting
4. New operator, carelessness of the operator
5. Loose fixture, and so on

D. Erratic fluctuations

Erratic fluctuation is characterized by ups and downs. It may be due to a single cause or a group of causes affecting the process level and spread. The causes of erratic fluctuations are rather difficult to identify, since different causes may be acting at different times on the process.


Causes of Erratic fluctuations

1. Different types of material being processed
2. Frequent adjustment of the machine
3. Change in operator, machine, test equipment, etc.

Analysis of Patterns on Control Charts

A control chart may indicate out-of-control conditions either when one or more points fall beyond the control limits or when the plotted points exhibit some non-random pattern of behavior.

An important term in this analysis is the "run". A run is defined as a sequence of observations of the same type. A sequence of increasing points is known as a run up; a sequence of decreasing points is referred to as a run down. Control charts sometimes exhibit a long run up or a long run down. Runs are also defined by the type of observation, such as those above or below the central line. The points below relate to the analysis of patterns on control charts.


The process is out of control if any one or more of the following criteria are met (a short code sketch of the first two checks follows the list):

1. One or more points outside of the control limits. This pattern may indicate:
   - a special cause of variation from a material, equipment, method, or measurement system change;
   - mismeasurement of a part or parts;
   - miscalculated or misplotted data points;
   - miscalculated or misplotted control limits.
2. A run of eight points on one side of the center line. This pattern indicates a shift in the process output from changes in the equipment, methods, or materials, or a shift in the measurement system.
3. Two of three consecutive points outside the 2-sigma warning limits but still inside the control limits. This may be the result of a large shift in the process (in the equipment, methods, materials, or operator) or a shift in the measurement system.
4. Four of five consecutive points beyond the 1-sigma limits.
5. An unusual or nonrandom pattern in the data:
   - a trend of seven points in a row upward or downward, which may show gradual deterioration or wear in equipment, or improvement or deterioration in technique;
   - cycling of data, which can indicate temperature or other recurring changes in the environment, differences between operators or operator techniques, regular rotation of machines, or differences in the measuring or testing devices being used.
6. Several points near a warning limit or control limit.
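As a minimal sketch of how the first two of these criteria can be automated (the function names and example limits are illustrative, not part of the notes), consider:

```python
from typing import List

def beyond_limits(points: List[float], lcl: float, ucl: float) -> List[int]:
    """Criterion 1: indices of points falling outside the control limits."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

def run_of_eight(points: List[float], center: float) -> List[int]:
    """Criterion 2: indices where a run of eight consecutive points
    on one side of the center line is completed."""
    flags, run, side = [], 0, 0
    for i, x in enumerate(points):
        s = 1 if x > center else (-1 if x < center else 0)
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run >= 8:
            flags.append(i)
    return flags

# Example: screen a series of subgroup means against assumed limits
print(beyond_limits([10.0, 10.3, 9.9], lcl=9.8, ucl=10.2))  # [1]
```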


Magnificent Seven


Control charts are very useful and powerful problem-solving and process-improvement tools, but they are most effective when their use is fully integrated into a comprehensive SPC program. The seven major SPC problem-solving tools should be used to identify improvement opportunities and to assist in reducing variability and eliminating waste.

The "seven tools" are listed below:
1. Histogram
2. Check sheet
3. Pareto chart
4. Cause-and-effect diagram
5. Defect concentration diagram
6. Scatter diagram
7. Control charts


Figure: S.Q.C. techniques. Process control (by control charts) divides into variables charts (X̄ chart, R chart) and attributes charts (p chart, np chart, c chart); product control (by acceptance sampling) likewise divides into plans for variables and for attributes.

Control Charts

Control Chart Background

A process may either be classified as in control or out of control. The boundaries for these classifications are set by calculating the mean, standard deviation, and range of a set of process data collected when the process is under stable operation. Then, subsequent data can be compared to this already calculated mean, standard deviation and range to determine whether the new data fall within acceptable bounds. For good and safe control, subsequent data collected should fall within three standard deviations of the mean. Control charts build on this basic idea of statistical analysis by plotting the mean or range of subsequent data against time.
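A minimal sketch of this idea follows; the data values are illustrative, and the three-standard-deviation rule is the one described above.

```python
import statistics

def baseline_limits(stable_data, k=3.0):
    """Center line and k-sigma limits from data collected under stable operation."""
    mu = statistics.mean(stable_data)
    sigma = statistics.stdev(stable_data)  # sample standard deviation
    return mu - k * sigma, mu, mu + k * sigma

def within_limits(x, lcl, ucl):
    """Subsequent data are compared against the pre-computed bounds."""
    return lcl <= x <= ucl

lcl, cl, ucl = baseline_limits([10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9])
print(within_limits(10.05, lcl, ucl))  # True: inside three sigma of the mean
```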

Functions of Control charts

The main purpose of using a control chart is to monitor, control, and improve process performance over time by studying variation and its source. There are several functions of a control chart:

1. It centers attention on detecting and monitoring process variation over time.

2. It provides a tool for ongoing control of a process.

3. It differentiates special from common causes of variation in order to be a guide for local or management action.


4. It helps improve a process to perform consistently and predictably to achieve higher quality, lower cost, and higher effective capacity.

5. It serves as a common language for discussing process performance.

2.2. Control Charts for Variables

A quality characteristic that is measured on a numerical scale is called a variable. In real data, many quality characteristics can be expressed in terms of measurements. When dealing with variables it is necessary to monitor both the mean and the variability of the characteristic. Control of the process mean is usually done with the X̄ chart, and the process variability can be monitored with either a range chart (R chart) or a standard deviation chart (s chart). Usually a chart for the mean is constructed together with either an R chart or an s chart. These control charts are among the most useful and important on-line statistical process monitoring and control techniques. It is important to maintain control over both the process mean and the process variability.

a. Control charts for X̄ and R

Statistical basis of the charts

Let us assume that a quality characteristic is normally distributed with mean μ and standard deviation σ, where both μ and σ are known. If x1, x2, ..., xn is a sample of size n, the average of this sample is

X̄ = (x1 + x2 + ... + xn) / n

and this sample mean is normally distributed with mean μ and standard deviation σ/√n. We have assumed that the distribution of the quality characteristic is normal; even when the distribution is non-normal, the central limit theorem lets us treat the sample mean as approximately normal. In practice we usually will not know μ and σ, so they must be estimated from preliminary samples or subgroups taken when the process is thought to be in control. These estimates should usually be based on at least 20 to 25 samples.

Suppose that m samples are available, each containing n observations on the quality characteristic. Typically n will be small, often 4, 5, or 6; these small sample sizes usually result from the construction of rational subgroups and from the fact that the sampling and inspection costs associated with variables measurements are relatively large. Let X̄1, X̄2, ..., X̄m be the averages of the samples. Then the process average is estimated by the grand mean

x̿ = (X̄1 + X̄2 + ... + X̄m) / m

and this grand mean is used as the center line of the X̄ chart.

To construct the control limits, we need an estimate of the standard deviation; here we use the range. The range R of a sample is defined as

R = xmax − xmin

Let R1, R2, ..., Rm be the ranges of the m samples; then the average range is

R̄ = (R1 + R2 + ... + Rm) / m

Now we can state the control limits for the X̄ and R charts:

Limits   X̄ chart            R chart
CL       x̿ (grand mean)     R̄
UCL      x̿ + A2·R̄           D4·R̄
LCL      x̿ − A2·R̄           D3·R̄

The constants A2, D3, and D4 are tabulated for various sample sizes.
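A minimal sketch of these calculations is shown below; the constants are the commonly tabulated values for subgroups of size n = 5, and for other sizes they must be looked up in the tables.

```python
# Commonly tabulated control chart constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """X-bar and R chart limits from m subgroups of size 5."""
    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    grand_mean = sum(xbars) / len(xbars)   # CL of the X-bar chart
    rbar = sum(ranges) / len(ranges)       # CL of the R chart
    return {
        "xbar": (grand_mean - A2 * rbar, grand_mean, grand_mean + A2 * rbar),
        "R": (D3 * rbar, rbar, D4 * rbar),
    }
```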

The process variability may be monitored by plotting values of the sample range R on a control chart. Deriving the control limits for these charts is straightforward. The random variable W = R/σ is called the relative range. The parameters of the distribution of W are a function of the sample size n; in particular, the mean of W is d2. An estimator of the standard deviation is therefore σ̂ = R̄/d2, where R̄ is the average range of the m samples.

Importance of the R chart

The R chart is used as a measure of subgroup dispersion. Its value depends on the type of process: in many industrial production processes it is difficult to maintain uniform process dispersion, and in such situations the R chart is extremely useful for process control. It is particularly useful for processes where the skill of the operator is important, and it is very helpful for bringing the process dispersion into statistical control at the beginning stage.

The importance and application of the R chart are the same as those of the s chart, but most of the time the R chart is preferred over the s chart because it is easy to calculate and easy to understand. An R chart, as a measure of subgroup dispersion, is always necessary to provide a basis for calculating the limits on the X̄ chart; from it we can also estimate the standard deviation.

These points argue strongly for the R chart. However, the R chart may sometimes be omitted because of its demerits: experience shows that in many industrial problems the R chart rarely goes out of control even though the X̄ chart does so frequently.

Interpreting patterns in control charts

• General rules to determine whether a process is in control:

1. No points outside the control limits.
2. The number of points above and below the center line is about the same.
3. Points seem to fall randomly above and below the center line.
4. Most points are near the center line, and only a few are close to the control limits.

• One point outside control limits: measurement or calculation error, power surge, a broken tool, incomplete operation.

• Sudden shift in process average: new operator or inspector, new machine setting.


• Cycles: operator rotation or fatigue at the end of shift, different gauges, seasonal effects such as temperature and humidity.

• Trends: x-bar-chart – learning effect, dirt or chip buildup, tool wear, aging of equipment; R-chart (increasing trend) – gradual decline in material quality; R-chart (decreasing trend) – improved skills, better materials.

• Hugging the center line: sample taken over various machines canceling out the variation within the sample.

• Hugging the control limits: sample taken over various machines not canceling out the variation within the sample.

• Instability: difficult to identify causes. Typically, over-adjustment of machine.

• Always perform the R-chart analysis before the x-bar-chart analysis.

Interpreting an X-bar / R Chart

Always look at the Range chart first. The control limits on the X-bar chart are derived from the average range, so if the Range chart is out of control, then the control limits on the X-bar chart are meaningless. After reviewing the Range chart, interpret the points on the X-bar chart relative to the control limits and Run Tests. Never consider the points on the X-bar chart relative to specifications, since the observations from the process vary much more than the subgroup averages.

Interpreting the Range Chart

On the Range chart, look for out of control points. If there are any, then the special causes must be eliminated. Brainstorm and conduct Designed Experiments to find those process elements that contribute to sporadic changes in variation. To use the data you have, turn Auto Drop ON, which will remove the statistical bias of the out of control points by dropping them from the calculations of the average Range, Range control limits, average X-bar and X-bar control limits.

Also on the range chart, there should be more than five distinct values plotted, and no one value should appear more than 25% of the time. If there are values repeated too often, then you have inadequate resolution of your measurements, which will adversely affect your control limit calculations. In this case, you will have to look at how you measure the variable, and try to measure it more precisely.

Once you have removed the effect of the out of control points from the Range chart, look at the X-bar Chart.

Interpreting the X-bar Chart

After reviewing the Range chart, look for out of control points on the X-bar Chart. If there are any, then the special causes must be eliminated. Brainstorm and conduct Designed Experiments to find those process elements that contribute to sporadic changes in process location. To use the data you have, turn Auto Drop ON, which will remove the statistical bias of the out of control points by dropping them from the calculations of the average X-bar and X-bar control limits.


Look for obviously non-random behavior. Turn on the Run Tests, which apply statistical tests for trends to the plotted points. If the process shows control relative to the statistical limits and Run Tests for a sufficient period of time (long enough to see all potential special causes), then we can analyze its capability relative to requirements. Capability is only meaningful when the process is stable, since we cannot predict the outcome of an unstable process.

 When to Use an X-bar / R Chart

X-bar / Range charts are used when you can rationally collect measurements in groups (subgroups) of between two and ten observations. Each subgroup represents a "snapshot" of the process at a given point in time. The x-axes are time based, so that the charts show a history of the process. For this reason, you must have data that is time-ordered; that is, entered in the sequence from which it was generated. If this is not the case, then trends or shifts in the process may not be detected, but instead attributed to random (common cause) variation.

For subgroup sizes greater than ten, use X-bar / Sigma charts, since the range statistic is a poor estimator of process sigma for large subgroups. In fact, the subgroup sigma is ALWAYS a better estimate of subgroup variation than the subgroup range; the popularity of the Range chart is due only to its ease of calculation, dating to its use before the advent of computers. For subgroup sizes equal to one, an Individual-X / Moving Range chart can be used, as well as EWMA or CUSUM charts.

X-bar Charts are efficient at detecting relatively large shifts in the process average, typically shifts of ±1.5 sigma or larger. The larger the subgroup, the more sensitive the chart will be to shifts, provided a Rational Subgroup can be formed. For more sensitivity to smaller process shifts, use an EWMA or CUSUM chart.

Trial control limits

When subgroups are used to construct the mean and range charts, it is customary to treat the control limits in the table above as trial control limits. They allow us to determine whether the process was in control when the m initial samples were selected. To test the hypothesis of past control, plot the values of X̄ and R from each sample on the charts and analyze the resulting display. If all points plot inside the control limits and no systematic behavior is evident, then we conclude that the process was in control in the past, and the trial control limits are suitable for controlling current or future production. Suppose, however, that one or more of the values of either the mean or the range plot out of control when compared to the trial control limits. Then the limits may be based on data from a process that was not in control, and the hypothesis of past control is rejected; it is then necessary to revise the control limits. Examine each of the out-of-control points, looking for an assignable cause. If an assignable cause is found, discard the point and recalculate the limits using the remaining points; then re-examine the remaining points for control.

Estimating process capability

The X̄ and R charts provide information about the performance, or capability, of the process. From the X̄ chart the process mean is estimated as x̿, and the process standard deviation may be estimated as σ̂ = R̄/d2. The natural tolerance limits of the process are x̿ ± 3σ̂. The control chart data may then be used to describe the capability of the process to produce items relative to the specifications. Assuming the quality characteristic is a normally distributed random variable with mean x̿ and standard deviation σ̂, we may estimate the fraction of nonconforming product as

p = P(x < LSL) + P(x > USL) = Φ((LSL − x̿)/σ̂) + 1 − Φ((USL − x̿)/σ̂) = Φ(Z1) + 1 − Φ(Z2)


This is expressed as the percentage of produced items that will fall outside the specifications.

Another way to express capability is the process capability ratio (PCR) Cp. For a quality characteristic with both USL and LSL,

Cp = (USL − LSL) / (6σ)

Note that the 6σ spread of the process is the basic definition of process capability. Since σ is usually unknown, it is replaced by the estimate σ̂ = R̄/d2, yielding the estimate Ĉp of Cp.

The PCR Cp may be interpreted another way: the quantity

P = (1/Cp) × 100

is simply the percentage of the specification band that the process uses up.
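As a minimal sketch of both calculations (the numerical inputs are illustrative only), the fraction nonconforming and Cp can be computed with the standard normal CDF:

```python
from statistics import NormalDist

def fraction_nonconforming(mean, sigma, lsl, usl):
    """P(x < LSL) + P(x > USL) under a normal model of the process."""
    nd = NormalDist(mean, sigma)
    return nd.cdf(lsl) + (1.0 - nd.cdf(usl))

def cp(lsl, usl, sigma):
    """Process capability ratio Cp = (USL - LSL) / (6 sigma)."""
    return (usl - lsl) / (6.0 * sigma)

# Illustrative process: centered at 74 with sigma = 0.01, specs 74 +/- 0.05
print(fraction_nonconforming(74.0, 0.01, 73.95, 74.05))  # about 5.7e-07
print(cp(73.95, 74.05, 0.01))                            # about 1.67
```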

Revision of control limits and central line

We should always treat the initial set of control limits as trial limits, subject to subsequent revision. Generally, the effective use of any control chart requires periodic revision of the control limits and central line. Some practitioners establish regular periods for review and revision, such as every week, every month, or every 25, 50, or 100 samples. When revising control limits, remember that it is highly desirable to use at least 25 samples or subgroups.

Sometimes the user will replace the central line of the X̄ chart with a target value, say x̄0. If the R chart exhibits control, this can be helpful in shifting the process average to the desired value, particularly in processes where the mean can be changed by a fairly simple adjustment of a manipulatable variable in the process. If the mean is not easily influenced by a simple process adjustment, then it is likely to be a complex and unknown function of several process variables, and a target value may not be helpful, since using it could result in many points outside the control limits. In such cases we would not necessarily know whether a point was really associated with an assignable cause or whether it plotted outside the limits because of a poor choice of central line.

When the R chart is out of control, we often eliminate the out-of-control points and recompute a revised value of R̄. This value is then used to determine new limits and center line on the R chart and new limits on the X̄ chart. This will tighten the limits on both charts, making them consistent with the process standard deviation estimated from the revised R̄. This estimate of the standard deviation could be used as the basis of a preliminary analysis of process capability.

Continuation of the X̄ and R charts

Once a set of reliable control limits is established, we use the control chart for monitoring future production. Additional samples are collected after the control limits are established, and the data from these new samples are plotted as a continuation of the X̄ and R charts. There are two possibilities: if the process is in control, no problem arises; if the process is out of control, an assignable cause has occurred around that time. The general pattern of points on the X̄ chart over a number of subgroups can be indicative of a shift in the process mean. Once established, the control chart is used in on-line process monitoring to speed up shift detection.

In examining control chart data it is sometimes helpful to construct a run chart of the individual observations in each sample. This chart is sometimes called a tolerance or tier diagram. It may reveal a pattern in the data, or it may show that a particular value on the chart was produced by one or two unusual observations in the sample. When the sample size is larger than 7 or 8, the box plot is usually a good alternative to the tier diagram.


Control limits, specification limits and natural tolerance limits

A point that should be emphasized is that there is no connection or relationship between the control limits on the X̄ and R charts and the specification limits on the process. The control limits are driven by the natural variability of the process, that is, by the natural tolerance limits of the process. It is customary to define the upper and lower natural tolerance limits as 3-sigma above and below the process mean. The specification limits, in contrast, are determined externally: they may be set by management, engineers, etc. One should have knowledge of the inherent process variability when setting specifications, but remember that there is no mathematical or statistical relationship between the control limits and the specification limits. The figure below summarizes the relationship between the control limits, the specification limits, and the natural tolerance limits.

Rational subgroups

The rational subgroup concept plays an important role in the use of X̄ and R charts. Defining a rational subgroup in practice may be easier if we have a clear understanding of the functions of the two types of control chart. The X̄ chart monitors the average quality level in the process; therefore, samples should be selected in such a way as to maximize the chance for shifts in the process average to occur between samples, and thus to show up as out-of-control points on the X̄ chart. The R chart, on the other hand, measures the variability within a sample; therefore, samples should be selected so that the variability within samples measures only chance or random causes. Another way of saying this is that the X̄ chart monitors between-sample variability while the R chart measures within-sample variability.

An important aspect of this is evident from carefully examining how the control limits for the X̄ and R charts are determined from past data. The estimate of the process standard deviation used in constructing the control limits is calculated from the variability within each sample; consequently, the estimate of σ reflects only within-sample variability. It is not correct to estimate σ by the pooled estimator

s = √[ Σi Σj (xij − x̿)² / (mn − 1) ]

where xij is the jth observation in the ith sample, because if the sample means differ, this will cause s to be too large, and consequently σ will be overestimated. Pooling all of the preliminary data


in this manner to estimate σ is not a good practice, because it potentially combines both between-sample and within-sample variability. The control limits must be based on within-sample variability only.

Guidelines for the design of the control charts

To design the X̄ and R charts, we must specify the sample size, the control limit width, and the frequency of sampling. It is not possible to give an exact solution to the control chart design problem unless the analyst has detailed information about both the statistical characteristics of the control chart tests and the economic factors that affect the problem. A complete solution would require knowledge of the cost of sampling, the cost of investigating and possibly correcting the process in response to out-of-control signals, and the cost of producing a product that does not meet specifications; an economic decision model could then be constructed to allow an economically optimal control chart design.

The X̄ chart is used primarily to detect moderate to large process shifts. When smaller samples are used, there is less risk of a process shift occurring while a sample is taken; if a shift does occur while a sample is taken, the sample average can obscure the effect. This is frequently an argument for using a sample size as small as is consistent with the magnitude of the process shift one is trying to detect. An alternative to increasing the sample size is to use warning limits and other procedures to enhance the ability of the control charts to detect small process shifts; if small shifts matter in a study, use the CUSUM or EWMA charts. The R chart is relatively insensitive to shifts in the process standard deviation for small samples. Larger samples would seem more effective, but the range method of estimating the standard deviation drops dramatically in efficiency as n increases. From a statistical point of view, the operating-characteristic (OC) curves of the X̄ and R charts can be helpful in choosing the sample size: they give the magnitude of process shift that will be detected with a stated probability for any sample size n.

The problem of choosing the sample size and the frequency of sampling is one of allocating sampling effort. Generally, the decision maker has only limited resources to allocate to the inspection process. The available strategies are usually either to take small samples frequently or to take larger samples less frequently; in some cases frequent sampling is not possible. The argument for frequency is that if the interval between samples is too great, too much defective product will be produced before another opportunity to detect the process shift occurs. On economic grounds, if the cost associated with producing defective items is high, smaller and more frequent samples are better than larger, less frequent ones. Variable sample size and variable sampling interval schemes can also be used.

The use of 3-sigma control limits on the X̄ and R charts is widespread practice, although situations arise in which a different choice of control limits is appropriate. For example, if the process is such that out-of-control signals are quickly and easily investigated with a minimum of lost time and cost, then narrower control limits are appropriate.

The Effect of Non-normality on X̄ and R Charts

An assumption in the development of the performance properties of X̄ and R control charts is that the underlying distribution of the quality characteristic is normal. In many situations we may have reason to doubt the validity of this assumption; for example, we may know that the underlying distribution is not normal because we have collected extensive data indicating that the normality assumption is inappropriate. If we know the form of the underlying distribution, it is possible to derive the sampling distributions of X̄ and R (or some other measure of process variability) and to obtain exact probability limits for the control charts. This approach can be difficult in some cases, and most analysts would


probably prefer to use the standard approach based on the normality assumption if they believed that the effect of departure from this assumption was not serious. However, we may know nothing about the form of the underlying distribution, and then our only choice may be to use the normal-theory results. Obviously, in either case we would be interested in knowing the effect of departures from normality on the usual control charts for X̄ and R.

Several authors have investigated the effect of departures from normality on control charts. Burr (1967) notes that the usual normal-theory control limit constants are very robust to the normality assumption and can be employed unless the population is extremely non-normal. Schilling and Nelson (1976), Chan, Hapuarachchi, and Macpherson (1988), and Yourstone and Zimmer (1992) have also studied the effect of non-normality on the control limits of the X̄ chart. Schilling and Nelson investigated the uniform, right triangular, gamma (with λ = 1 and r = 1/2, 1, 2, 3, and 4), and two bimodal distributions formed as mixtures of two normal distributions. Their study indicates that, in most cases, samples of size 4 or 5 are sufficient to ensure reasonable robustness to the normality assumption. The worst cases observed were for small values of r in the gamma distribution [r = 1/2 and r = 1 (the exponential distribution)]; for example, they report the actual α-risk to be 0.014 or less if n ≥ 4 for the gamma distribution with r = 1/2, as opposed to the theoretical value of 0.0027 for the normal distribution.

While the use of three-sigma control limits on the X̄ chart will produce an α-risk of 0.0027 if the underlying distribution is normal, the same is not true for the R chart. The sampling distribution of R is not symmetric, even when sampling from the normal distribution, and the long tail of the distribution is on the high or positive side; thus symmetric three-sigma limits are only an approximation, and the α-risk on such an R chart is not 0.0027 (in fact, for n = 4 it is α = 0.00461). Furthermore, the R chart is more sensitive to departures from normality than the X̄ chart.

Once again, it is important to remember the role of theory and assumptions such as normality and independence. These are fundamental to the study of the performance of the control chart, which is very useful for assessing its suitability for phase II, but they play a much less important role in phase I; in fact, these considerations are not a primary concern in phase I.


b. THE X̄ AND s CHARTS

i. The X̄ and s chart for constant sample size

Yet another powerful tool for variables data is the X̄ and s chart. This chart is one of the most underused in practice because of its computational difficulties, but its power and sensitivity to variation are well recognized; one can argue that if modern organizations are really interested in understanding as well as controlling variation, they should replace the X̄ and R chart with this one. The advent of calculators and computers has nullified the computational difficulties. Specifically, this chart is advantageous in the following instances:

• More sensitive control of the process spread is needed.
• A large sample size (n > 8) is collected.
• A statistical calculator or computer is used to compute and/or plot the control chart.
• The sample size is variable.

The steps for constructing and interpreting an X̄ and s chart are similar to those for the X̄ and R chart. The X̄ and s chart differs in that the sample standard deviation is used to describe the spread of the manufacturing process. The X̄-s chart is a double chart that plots the average of the values for each period on the top chart (X̄ chart) and the standard deviation of the values for the period on the bottom chart (s chart). Each point plotted on the top chart is the average of the values occurring in that period, and each point plotted on the bottom chart is the standard deviation of the values occurring during that period.

In statistical quality control, the X̄ and s chart is a type of control chart used to monitor variables data when samples are collected at regular intervals from a business or industrial process. The "chart" actually consists of a pair of charts: one to monitor the process standard deviation and another to monitor the process mean, as is done with the X̄ and R and individuals control charts. The X̄ and s chart plots the mean value of the quality characteristic across all units in the sample, X̄, together with the standard deviation s of the quality characteristic across all units in the sample.

The normal distribution is the basis for the charts and requires the following assumptions:

The quality characteristic to be monitored is adequately modeled by a normally-distributed random variable


The parameters μ and σ for the random variable are the same for each unit and each unit is independent of its predecessors or successors

The inspection procedure is same for each sample and is carried out consistently from sample to sample

The control limits for this chart type are:

Limits   X̄ chart            s chart
CL       x̿                  s̄
UCL      x̿ + A3·s̄           B4·s̄
LCL      x̿ − A3·s̄           B3·s̄

where x̿ and s̄ are the estimates of the long-term process mean and average standard deviation established during control-chart setup, and A3, B3, and B4 are sample-size-specific anti-biasing constants.
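A minimal sketch analogous to the X̄-R computation follows; the constants are the commonly tabulated anti-biasing values for subgroups of size n = 5.

```python
import statistics

# Commonly tabulated constants for subgroup size n = 5
A3, B3, B4 = 1.427, 0.0, 2.089

def xbar_s_limits(subgroups):
    """X-bar and s chart limits from subgroups of size 5."""
    xbars = [statistics.mean(g) for g in subgroups]
    sds = [statistics.stdev(g) for g in subgroups]
    grand_mean = statistics.mean(xbars)  # CL of the X-bar chart
    sbar = statistics.mean(sds)          # CL of the s chart
    return {
        "xbar": (grand_mean - A3 * sbar, grand_mean, grand_mean + A3 * sbar),
        "s": (B3 * sbar, sbar, B4 * sbar),
    }
```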

As with the X̄ and R and individuals control charts, the X̄ chart is valid only if the within-sample variability is constant. Thus the s chart is examined before the X̄ chart: if the s chart indicates that the sample variability is in statistical control, then the X̄ chart is examined to determine whether the sample mean is also in statistical control. If, on the other hand, the sample variability is not in statistical control, then the entire process is judged to be not in statistical control, regardless of what the X̄ chart indicates.

Estimation of σ

The process standard deviation may be estimated using the fact that s̄/c4 is an unbiased estimator of σ, where c4 is a constant that depends on the sample size.

ii. The X̄ and s chart with variable sample size

The X̄ and s charts are relatively easy to apply in cases where the sample sizes are variable. In this case we use a weighted-average approach in calculating x̿ and s̄. If ni is the number of observations in the ith sample, then

x̿ = Σ ni·X̄i / Σ ni   and   s̄ = √[ Σ (ni − 1)·si² / (Σ ni − m) ]

are used as the center lines of the X̄ and s control charts, respectively. The control limits have the same form as in the table above, but the constants A3, B3, and B4 will depend on the sample size used in each individual subgroup.
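A small sketch of this weighted calculation, assuming each subgroup is given as a list of observations:

```python
import statistics

def weighted_centerlines(subgroups):
    """Weighted grand mean and pooled s-bar for variable subgroup sizes."""
    ns = [len(g) for g in subgroups]
    xbars = [statistics.mean(g) for g in subgroups]
    variances = [statistics.variance(g) for g in subgroups]
    grand_mean = sum(n * xb for n, xb in zip(ns, xbars)) / sum(ns)
    sbar = (sum((n - 1) * v for n, v in zip(ns, variances))
            / (sum(ns) - len(ns))) ** 0.5
    return grand_mean, sbar
```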

An alternative to using variable-width control limits on the X̄ and s charts is to base the control limit calculations on an average sample size n̄. If the ni are not very different, this approach may be satisfactory in some situations; it is particularly helpful if the charts are to be used in a presentation to management. Since the average sample size may not be an integer, a useful alternative is to base these approximate control limits on a modal sample size.


The S2 Control Chart

Most of the time we use either the R chart or the s chart to monitor process variability, with s preferable to R for moderate to large sample sizes. In some critical situations, rather than choosing between R and s, we may prefer to use the sample variance S² directly. The control limits for this chart are:

Limits   Formula                           Explanation
CL       S̄²                                average sample variance
UCL      [S̄²/(n−1)] · χ²(α/2, n−1)         χ²(q, n−1) denotes the upper-q percentage
LCL      [S̄²/(n−1)] · χ²(1−α/2, n−1)       point of the chi-square distribution with n−1 d.f.

Here α is the chosen level of significance.
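A minimal sketch of these limits using scipy's chi-square percent-point function; α = 0.0027 is chosen here to mimic the false-alarm rate of conventional three-sigma limits.

```python
from scipy.stats import chi2

def s2_chart_limits(s2bar, n, alpha=0.0027):
    """Control limits for the S^2 chart from chi-square percentiles."""
    df = n - 1
    ucl = s2bar / df * chi2.ppf(1 - alpha / 2, df)  # upper alpha/2 point
    lcl = s2bar / df * chi2.ppf(alpha / 2, df)      # lower alpha/2 point
    return lcl, s2bar, ucl

print(s2_chart_limits(s2bar=4.0, n=5))  # illustrative average variance
```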

SHEWHART CONTROL CHARTS FOR INDIVIDUAL MEASUREMENTS

There are many situations in which the sample size used for process monitoring is n = 1; that is, the sample consists of an individual unit. Some examples of these situations are:

1. Automated inspection and measurement technology is used, and every unit manufactured is analyzed so there is no basis for rational sub grouping.

2. The production rate is very slow, and it is inconvenient to allow sample sizes of n>1 to accumulate before analysis. The long interval between observations will cause problems with rational sub grouping.

3. Repeat measurements on the process differ only because of laboratory or analysis error, as in many chemical processes.

4. Multiple measurements are taken on the same unit of product, such as measuring at several different locations on a manufactured unit.

5. In process plants, measurements on some parameter will differ very little, producing a standard deviation that is also too small.

In such situations, the control chart for individual units is useful. In many applications of the individuals control chart we use the moving range of two successive observations as the basis for estimating the process variability. The moving range is defined as

MRi = |xi − xi−1|

It is also possible to establish a control chart on the moving range. For this moving-range chart and the control chart for individual measurements, the control limits are:


Limits   Individuals (X) chart    Moving-range chart
CL       x̄                        MR̄
UCL      x̄ + 3·MR̄/d2              D4·MR̄
LCL      x̄ − 3·MR̄/d2              D3·MR̄

Here MR̄ is the average moving range. For moving ranges of two observations, d2 = 1.128, D3 = 0, and D4 = 3.267.
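A minimal sketch of the individuals and moving-range computation, using the standard constants for moving ranges of span two:

```python
# Standard constants for moving ranges of two observations (n = 2)
d2, D3, D4 = 1.128, 0.0, 3.267

def individuals_limits(x):
    """Individuals (X) and moving-range chart limits from a series x."""
    mr = [abs(a - b) for a, b in zip(x[1:], x[:-1])]  # MR_i = |x_i - x_(i-1)|
    xbar = sum(x) / len(x)
    mrbar = sum(mr) / len(mr)
    return {
        "X": (xbar - 3 * mrbar / d2, xbar, xbar + 3 * mrbar / d2),
        "MR": (D3 * mrbar, mrbar, D4 * mrbar),
    }
```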


Interpretation of the chart

This chart is interpreted much like the ordinary X̄ chart: a shift in the process average will result in either a point outside the control limits or a pattern consisting of a run on one side of the central line.

The moving-range chart may also flag such a shift and can assist in identifying exactly where a process shift in the mean has occurred. Some care should be exercised in interpreting patterns on the moving-range chart, however: the moving ranges are correlated, and this correlation may often induce a pattern of runs or cycles on the chart. The individual measurements on the x chart are assumed to be uncorrelated, and any apparent pattern on that chart should be carefully investigated.

Average Run length

Crowder studied the average run length (ARL) of this scheme and produced ARLs for various settings of the control limits and various shifts in the process mean and standard deviation. In general, his work shows that the ARL0 of the combined procedure will be much less than the ARL0 of a Shewhart chart when the process is in control, if the conventional three-sigma limits are used on the charts. Results closer to the Shewhart in-control ARL are obtained if we use three-sigma limits on the chart for individuals and compute the upper control limit on the moving-range chart from UCL = D·MR̄, where D is a constant chosen so that 4 ≤ D ≤ 5.

One can get a very good idea of the ability of the individuals control chart to detect process shifts by looking at the OC (operating-characteristic) curves or the ARL curves. For an individuals control chart with three-sigma limits, we have the following:

S.No   Size of shift   β        ARL
1      1σ              0.9772   43.96
2      2σ              0.8413   6.30
3      3σ              0.5000   2.00
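The entries in this table can be reproduced from the normal distribution: for a shift of k sigma with L-sigma limits, β = Φ(L − k) − Φ(−L − k) and ARL = 1/(1 − β). A small sketch (small rounding differences from the table are expected):

```python
from statistics import NormalDist

def beta_and_arl(k, L=3.0):
    """Beta risk and ARL for a k-sigma mean shift with L-sigma limits."""
    nd = NormalDist()
    beta = nd.cdf(L - k) - nd.cdf(-L - k)  # P(next point stays inside limits)
    return beta, 1.0 / (1.0 - beta)

for k in (1, 2, 3):
    b, arl = beta_and_arl(k)
    print(f"{k} sigma shift: beta = {b:.4f}, ARL = {arl:.2f}")
# A 1-sigma shift gives beta = 0.9772 and an ARL of about 44, as in the table
```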


The ability of the individuals control chart to detect small shifts is very poor. One might consider using control limits narrower than three-sigma on the chart for individuals to enhance its ability to detect small process shifts, but this is dangerous: narrower limits will dramatically reduce the in-control ARL0 and increase the occurrence of false alarms to the point where the chart is ignored and hence becomes useless. If we are interested in detecting small shifts, the correct approach is to use either the cumulative-sum control chart (CUSUM) or the exponentially weighted moving-average control chart (EWMA).

Our assumption has been that the observations follow a normal distribution. Studies of the behavior of the Shewhart control chart for individuals when the process data are not normal show that the in-control ARL is dramatically affected by non-normal data. From this we conclude that if the process shows evidence of even moderate departure from normality, the control limits given here may be entirely inappropriate. One approach to dealing with non-normality would be to determine the control limits for the individuals chart from the percentiles of the correct underlying distribution; these percentiles could be obtained from a histogram if a large sample were available, or from a probability distribution fitted to the data. It is important to check the normality assumption when using the control chart for individuals, and a simple way to do this is with the normal probability plot. Remember, however, that the normal probability plot is only a crude check of the normality assumption, while the individuals control chart is very sensitive to non-normality. Finally, we suggest that the Shewhart individuals chart be used with extreme caution.

Summary of Control limits for Variables


2.3. Control charts for Attributes

Introduction

In the previous section we introduced control charts for quality characteristics that are expressed as variables. Although these control charts enjoy widespread application, they are not universally applicable, because not all quality characteristics can be expressed with variables data.

Many quality characteristics cannot be conveniently represented numerically. In such cases, we classify each inspected item as either conforming or nonconforming to the specifications on that quality characteristic. Quality characteristics of this type, where each observation is classified as defective or non-defective, are called "attributes". For example, consider a glass container for a liquid product. If we examine a container and classify it into one of two categories, conforming or nonconforming, depending on whether the container meets the requirements on one or more quality characteristics, this is attributes data, and a control chart for the fraction of nonconforming containers could be established.

Although X̄ and R charts are powerful devices for diagnosing quality problems and for routine detection of sources of trouble, control charts for variables have some limitations:

a. X̄ and R charts can be used for quality characteristics that can be measured and expressed in numbers. They are not suitable for quality characteristics based on attributes, that is, where each item inspected is classified into one of two classes, either conforming or nonconforming to the specifications.

b. X̄ and R charts can be used for only one measurable characteristic at a time.
c. Some processes classify a variable quality characteristic simply as good or bad for reasons of economy. In such cases X̄ and R charts cannot be applied directly.

Alternatively, in some processes we may examine a unit of product and count defects or nonconformities on the unit. These types of data are widely encountered in the semiconductor industry, for example. Further, we show how to establish control charts for counts, or for the average number of counts per unit. Attributes charts are generally not as informative as variables charts, because there is typically more information in a numerical measurement than in merely classifying a unit as conforming or nonconforming. However, attributes charts do have important applications; they are particularly useful in service industries and in nonmanufacturing quality-improvement efforts, because so many of the quality characteristics found in these environments are not easily measured on a numerical scale.


In some situations we judge quality with attribute data directly: we examine a product and classify it into one of two categories, "conforming" or "nonconforming", depending on the requirements on one or more quality characteristics.

For attributes we construct different charts for different situations. The following are the commonly used attributes charts:


a. The control chart for fraction nonconforming (p chart)
b. The control chart for number nonconforming (np chart)
c. The control chart for nonconformities, or chart for defects (c chart)
d. The control chart for nonconformities per unit (u chart)

a. The control chart for Fraction Nonconforming (p-chart)

The fraction nonconforming is defined as the ratio of the number of nonconforming items in a population to the total number of items in that population. An item may have several quality characteristics; items that do not conform to standard on one or more of these characteristics are classified as nonconforming. We can express this as a fraction or as a percentage of nonconforming items; the fraction conforming gives the process yield.

The statistical principles underlying the control chart for fraction nonconforming are based on the binomial distribution. Suppose the production process is operating in a stable manner, each unit produced is either conforming or nonconforming, the probability that any unit does not conform to specifications is p, and successive units produced are independent. Then each unit can be modeled as a Bernoulli random variable with parameter p.

If a random sample of n units of product is selected, and X is the number of units that are nonconforming, then X follows the binomial distribution with parameters n and p:

P(X = x) = C(n, x) · p^x · q^(n−x),   x = 0, 1, 2, ..., n,

where q = 1 − p. The mean and variance of X are np and npq, respectively.

The ratio of the number of nonconforming units in the sample, X, to the sample size n,

p̂ = X / n,

is known as the sample fraction nonconforming. The distribution of the random variable p̂ can be obtained from the binomial distribution; its mean and variance are p and p(1 − p)/n, respectively. The chart that monitors the process fraction nonconforming p is called the p chart.

We can now construct the control limits from this theory. The center line and control limits of the chart for fraction nonconforming are:

Limits   Standard given           Standard not given
CL       p                        p̄
UCL      p + 3·√(p(1−p)/n)        p̄ + 3·√(p̄(1−p̄)/n)
LCL      p − 3·√(p(1−p)/n)        p̄ − 3·√(p̄(1−p̄)/n)

"Standard given" applies when the value of p is known or is specified by management; otherwise the standard is "not given" and p must be estimated. The actual operation of this chart consists of taking subsequent samples of n units, computing the sample fraction nonconforming p̂, and plotting the points on the chart. As long as p̂ remains within the control limits and the sequence of points does not exhibit any systematic nonrandom pattern, we conclude that the process is in control at level p; if a point plots outside the control limits, or a nonrandom pattern appears, we conclude that the process fraction nonconforming has most likely shifted to a new level and the process is out of control.
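A minimal sketch of the "standard not given" calculation (the counts in the example are illustrative):

```python
import math

def p_chart_limits(defectives, n):
    """Trial limits for a p chart from m preliminary samples of size n.
    `defectives` holds D_i, the number nonconforming in each sample."""
    m = len(defectives)
    pbar = sum(defectives) / (m * n)          # estimated fraction nonconforming
    width = 3 * math.sqrt(pbar * (1 - pbar) / n)
    return max(0.0, pbar - width), pbar, pbar + width  # LCL is never negative

# Example: 20 preliminary samples of 50 items each
counts = [2, 3, 1, 2, 4, 0, 2, 3, 1, 2, 2, 1, 3, 2, 0, 1, 2, 3, 2, 1]
print(p_chart_limits(counts, n=50))
```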

When the process fraction nonconforming p is not known, it must be estimated from observed data. The usual procedure is to select m preliminary samples, each of size n; as a general rule, m should be 20 or 25. If there are Di nonconforming units in sample i, then the sample fractions nonconforming are

p̂i = Di / n,   i = 1, 2, ..., m,

and the average of these individual sample fractions nonconforming is

p̄ = Σ Di / (mn) = Σ p̂i / m


The statistic p̄ estimates the unknown fraction nonconforming p. The center line and control limits of the chart for fraction nonconforming are then computed as in the "standard not given" column of the table above, and those limits should be regarded as trial control limits.

The sample values of pi from the preliminary subgroups should be plotted against the trial limits to test whether the process was in control when the preliminary data were collected. Any points that exceed the trial control limits should be investigated. If assignable causes for these points are discovered, they should be discarded and new trial control limits determined.

If the control chart is based on a known or standard value for the fraction nonconforming p, then the calculation of trial control limits is generally unnecessary. However, one should be cautious when working with a standard value for p. Since in practice the true value of p is rarely known with certainty, we would usually be given a standard value of p that represents a desired or target value for the process fraction nonconforming. If this is the case and future samples indicate an out-of-control condition, we must determine whether the process is out of control at the target p but in control at some other value of p.

Sometimes we may “improve” the level of quality by using target values or to bring a process into control at a particular level of quality performance. In process where the fraction nonconforming can be controlled by relatively simple process adjustments, target values of p may be useful.

Design of the fraction nonconforming control chart

The fraction nonconforming control chart has three parameters that must be specified:
i. the sample size,
ii. the frequency of sampling, and
iii. the width of the control limits.

There are some general guidelines for selecting these parameters.

It is relatively common to base a control chart for fraction nonconforming on 100% inspection of all process output over some convenient period of time, such as a shift or a day. In this case, sample size and sampling frequency are interrelated: one generally selects a sampling frequency appropriate for the production rate, and this fixes the sample size. Rational subgrouping may also play a role in determining the sampling frequency.

If we are to select a sample of process output, then we must choose the sample size n. Various rules have been suggested for the choice of n. If p is very small, we should choose n sufficiently large that we have a high probability of finding at least one nonconforming unit in the sample; otherwise we might find that the control limits are such that the presence of only a single nonconforming unit in the sample would indicate an out-of-control condition. Duncan suggested that the sample size should be large enough that we have approximately a 50% chance of detecting a process shift of some specified amount.

It should be noted that the fraction nonconforming control chart is not a universal model for all data on fraction nonconforming. It is based on the binomial probability model: the probability of occurrence of a nonconforming unit is assumed constant, and successive units of production are assumed independent. In processes where nonconforming units are clustered together, or where the probability of a unit being nonconforming is not constant, the fraction nonconforming control chart is often of little use; in such cases it is necessary to develop a control chart based on the correct probability model.

Interpretation of Points on the p chart


Points that plot beyond the control limits are treated the same way, both in establishing the control chart and during its routine operation. However, care must be exercised in interpreting points that plot below the lower control limit. These points often do not represent a real improvement in process quality; frequently they are caused by errors in the inspection process resulting from inadequately trained or inexperienced inspectors, or from improperly calibrated test and inspection equipment. We have also seen cases in which inspectors deliberately passed nonconforming units or reported fictitious data. The analyst must keep these warnings in mind when looking for assignable causes if points plot below the lower control limits: not all "downward shifts" in p are attributable to improved quality.

Comparison of the X̄-R charts with the p chart

1. X̄-R charts are used for quality characteristics that can be measured and expressed in numbers; the p chart is used for characteristics that can only be classified as conforming or nonconforming (e.g., with go/no-go gauges).
2. The cost of collecting data is higher for X̄-R charts; it is comparatively less for the p chart.
3. X̄-R data cannot be used for other purposes; p chart data can.
4. Measuring the quality characteristic may be impractical and uneconomical; the cost of computing and charting a p chart is low, and it can cover any number of quality characteristics observed on one article.
5. X̄-R charts are best for critical dimensions; the p chart is best for classifying an article as accepted or rejected.
6. X̄-R charts are very sensitive in detecting assignable causes; the p chart detects assignable causes less sensitively.
7. X̄-R charts use a small sample size; the p chart uses a large sample size.
8. X̄-R charts give the trend of the process; the p chart gives a useful record of quality history.

b. The control chart for Number of Nonconforming (np-chart)

A.R.Muralidharan. SQC Lecture Notes 50

Page 51: SQC2014m

The np chart monitors the number of defectives (nonconforming units) rather than the fraction. Many consider the np chart preferable to the p chart, because the number of defectives is easier than the fraction nonconforming for quality technicians, inspectors, and operators to understand. The central line and control limits are:

Limits   Formula
CL       n·p̄
UCL      n·p̄ + 3·√(n·p̄(1−p̄))
LCL      n·p̄ − 3·√(n·p̄(1−p̄))

Control chart for variable sample size

Sometimes different numbers of units are produced in each period, so the control chart for fraction nonconforming has a variable sample size; for example, the sample may be a 100% inspection of process output over some period of time. For this situation there are three approaches to constructing and operating a control chart with a variable sample size.

The first and simplest approach is to determine control limits for each individual sample, based on that sample's size. This approach is referred to as "variable-width control limits". If the ith sample is of size ni, then the upper and lower control limits are

p̄ ± 3·√(p̄(1 − p̄)/ni)

Note that the width of the control limits is inversely proportional to the square root of the sample size. Many popular quality control computer programs handle the variable sample size case.

The second approach is "control limits based on an average sample size", resulting in an approximate set of control limits. This assumes that future sample sizes will not differ greatly from those previously observed. With this approach the control limits are constant, and the resulting control chart does not look as formidable to operating personnel as the chart with variable limits. However, if there is an unusually large variation in the size of a particular sample, or if a point plots near the approximate control limits, then the exact control limits for that point should be determined and the point examined relative to that value. The average sample size is n̄ = Σ ni / m, and the control limits are

p̄ ± 3·√(p̄(1 − p̄)/n̄)

Care must be taken in interpreting points near the approximate control limits, and in analyzing runs or other apparently abnormal patterns on control charts with variable sample sizes. The problem is that a change in the sample fraction nonconforming p̂ must be interpreted relative to the sample size: an observation with p̂i > p̂i+1 may seem to indicate poorer quality, yet it can lie further inside its own (wider) limits if its sample was smaller.

The third approach to dealing with variable sample size is the "standardized control chart", where the points are plotted in standard deviation units. This control chart has a center line at zero, and upper and lower control limits of +3 and −3, respectively. The variable plotted on the chart is

zi = (p̂i − p̄) / √(p̄(1 − p̄)/ni)

where p̄ (or a standard value p) is the process fraction nonconforming in the in-control state. Tests for runs and pattern-recognition methods can safely be applied to this chart, because the relative changes from one point to another are all expressed in the same units of measurement.
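A small sketch of the standardized calculation; note how the same p̂ gives different z values for different sample sizes:

```python
import math

def standardized_p(phat, pbar, n):
    """z_i for the standardized p chart with variable sample size n_i."""
    return (phat - pbar) / math.sqrt(pbar * (1 - pbar) / n)

print(standardized_p(0.06, 0.04, n=100))  # about 1.02: well within limits
print(standardized_p(0.06, 0.04, n=400))  # about 2.04: much closer to +3
```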


The standardized control chart is no more difficult to construct or maintain than either of the other two procedures; in fact, many quality control software packages either execute it automatically as a standard feature or can be programmed to plot one. It may be more difficult for operating personnel to understand and interpret, because reference to the actual process fraction defective has been "lost". However, if there is large variation in sample size, then runs and pattern-recognition methods can only be safely applied to the standardized control chart. In such a case it might be advisable to maintain a control chart with individual control limits for each sample for the operating personnel, while simultaneously maintaining a standardized control chart for the quality engineer's use. The standardized control chart is also recommended when the length of the production run is short, as in many job-shop settings.

Nonmanufacturing Applications

The control chart for fraction nonconforming is widely used in nonmanufacturing applications of statistical process control. In the nonmanufacturing environment, many quality characteristics can be observed on a conforming or non-conforming basis.

Examples are the number of employee paychecks that are in error or distributed late during a pay period, the number of check requests that are not paid within an accounting cycle, and the number of deliveries made by a supplier that are not on time.

Many nonmanufacturing applications of the fraction nonconforming control chart involve the variable sample size case. For example, consider all purchase orders issued weekly to a company's suppliers. The purchasing group's quality improvement team inspects how many purchase orders must be changed owing to errors in the original work; this count is the number nonconforming. The use of this control chart was a key initial step in identifying the underlying root cause of the errors on purchase orders and in developing the corrective actions necessary to improve the process.

c. The control charts for Nonconformities (DEFECTS) (C-chart)


A nonconforming item is a unit of product that does not satisfy one or more of the specifications for that product. Each specific point at which a specification is not satisfied results in a defect, or nonconformity. Consequently, a nonconforming item contains at least one nonconformity; however, depending on their nature and severity, it is possible for a unit to contain several nonconformities and still not be classified as nonconforming. There are several practical situations in which we prefer to work directly with the number of nonconformities rather than the fraction nonconforming.

It is possible to develop control charts for either the total number of nonconformities in a unit or the average number of nonconformities per unit. These charts are based on the Poisson distribution and assume that nonconformities occur in samples (inspection units) of constant size. This essentially requires that the number of opportunities for nonconformities be large, that the probability of occurrence of a nonconformity at any location be small and constant, and that the inspection unit be the same for each sample; that is, each inspection unit must represent an identical area of opportunity for nonconformities. In addition, we can count defects of several different types on one unit, as long as the conditions above are satisfied for each class. In most practical situations, these conditions will not be satisfied exactly.

To discuss these situations, consider the occurrence of nonconformities in an inspection unit of product. In most cases the inspection unit will be a single unit of product, although this is not necessarily so: the inspection unit is simply an entity for which it is convenient to keep records, and it could be a group of 5 units, 10 units, and so on.

The purpose of the c control chart is to monitor counts. A c chart is a data analysis technique for determining whether a measurement process has gone out of statistical control; it is sensitive to changes in the number of defects produced by the process. The "c" stands for "counts," as in defects per lot. The c control chart consists of:

Vertical axis = the number of defects for each subgroup;

Horizontal axis = the subgroup designation.

Sometimes we want to actually count the number of defects, because this gives us more information about the process. The basic assumption is that defects "arrive" according to a Poisson model:

This assumes that defects are independent and that they arrive uniformly over time and space.

Suppose that defects or nonconformities occur in an inspection unit according to the Poisson distribution; that is, the PMF is

p(x) = e^(−c) c^x / x!,   x = 0, 1, 2, …,

where x is the number of nonconformities and c > 0 is the parameter of the Poisson distribution.

LIMITS          Standard given      Standard not given
CL              c                   c̄
UCL             c + 3√c             c̄ + 3√c̄
LCL             c − 3√c             c̄ − 3√c̄

Here c is the standard value of the average number of nonconformities per inspection unit, and c̄ is its estimate when no standard is given. If the calculation yields a negative LCL, set LCL = 0.

If standards are given, then we can construct the usual 3-sigma control limits. If no standard is given, then c may be estimated as the observed average number of nonconformities in a preliminary sample of inspection units; we


denote this estimate by c̄. Control limits obtained with standards not given should be regarded as trial control limits, and the preliminary samples should be examined for lack of control.
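As a brief illustration, here is a minimal Python sketch of the no-standard-given case, with made-up counts of nonconformities per inspection unit:

    import numpy as np

    # Hypothetical counts of nonconformities in 10 inspection units
    c = np.array([4, 7, 3, 5, 6, 2, 8, 4, 5, 3])

    c_bar = c.mean()                            # estimate of c
    UCL = c_bar + 3 * np.sqrt(c_bar)
    LCL = max(0.0, c_bar - 3 * np.sqrt(c_bar))  # negative LCL is set to 0

    print(f"CL={c_bar:.2f}  UCL={UCL:.2f}  LCL={LCL:.2f}")
    print("out-of-control samples:", np.where((c > UCL) | (c < LCL))[0])

These would be trial limits: samples signaling lack of control are investigated and, if assignable causes are found, removed before the limits are recomputed.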

Analysis of Nonconformities

Nonconformity data are almost always more informative than fraction nonconforming data, because there will usually be several types of nonconformities or defects. By analyzing the nonconformities by type, we can often learn something about their causes. This can be useful in developing the out-of-control-action plan (OCAP) that must accompany the control chart.

Another useful technique for further analysis of nonconformities is the cause-and-effect diagram, which is used to show the various sources of nonconformities in products and their interrelationships. Developing a good diagram usually advances the level of technological understanding of the process and draws on input from operators, manufacturing engineers, and quality managers. There are several ways to organize the diagram; one useful approach is to organize it according to the flow of material through the process.


d. The control charts for Average number of Nonconformities (Average Number of DEFECTS) U chart

An alternative approach is to set up a control chart based on the average number of nonconformities per inspection unit. If we find x total nonconformities in a sample of n inspection units, then the average number of nonconformities per inspection unit is

u = x / n.

Since x is a Poisson random variable, the parameters of the control chart for the average number of nonconformities per unit follow directly. This chart shows the nonconformities per unit produced by a manufacturing process and is often called the u chart.

The u chart is used for determining the stability of "counted" data (e.g., errors per widget, inquiries per month, etc.) when the sample size varies. It helps evaluate process stability when there can be more than one defect per unit, and it is especially useful when you want to know how many defects there are, not just how many defective items there are: it is one thing to know how many defective circuit boards, meals, statements, invoices, or bills there are; it is another thing to know how many defects were found in those defective items. It is used when the sample size varies, as when the number of circuit boards, meals, or bills delivered each day varies.

The trial control limits are:

U CHART LIMITS
Central line:         ū
Upper control limit:  ū + 3 √(ū / n)
Lower control limit:  ū − 3 √(ū / n)
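A short Python sketch of these u chart calculations, using invented data (x = total nonconformities found, n = number of inspection units per sample); with variable sample sizes the same formulas are applied with each individual nᵢ:

    import numpy as np

    x = np.array([10, 12,  8, 14,  9])   # total nonconformities per sample
    n = np.array([ 5,  5,  4,  6,  5])   # inspection units per sample

    u = x / n                            # nonconformities per unit
    u_bar = x.sum() / n.sum()            # center line

    UCL = u_bar + 3 * np.sqrt(u_bar / n) # limits vary with the sample size
    LCL = np.maximum(0.0, u_bar - 3 * np.sqrt(u_bar / n))
    print(u.round(2), UCL.round(2), LCL.round(2))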

Alternative Probability Models for count Data

Most applications of the c chart assume that the Poisson distribution is the correct probability model underlying the process. However, it is not the only distribution that could be utilized as a model of “Count” or nonconformities per unit-type data. Various types of phenomena can produce distributions of defects that are not well modeled by the Poisson distribution.

Procedures with Variable Sample Size

Control charts for nonconformities are occasionally formed using 100% inspection of the product. When this method of sampling is used, the number of inspection units in a sample will usually not be constant. For example, the inspection of rolls of cloth or paper often leads to a situation in which the size of the sample varies, because not all rolls are exactly the same length or width. If a control chart for nonconformities (c chart) is used in this situation, both the center line and the control limits will vary with the sample size. Such a control chart


would be very difficult to interpret. The correct procedure is to use a control chart for nonconformities per unit (u chart). This chart will have a constant center line; however, the control limits will vary inversely with the square root of the sample size n.

U CHART LIMITS (variable sample size)
Central line:         ū
Upper control limit:  ū + 3 √(ū / nᵢ)
Lower control limit:  ū − 3 √(ū / nᵢ)

There are, however, two other possible approaches:

1. Use control limits based on an average sample size

2. Use a standardized control chart (this is the preferred option). This second alternative involves plotting the standardized statistic

zᵢ = (uᵢ − ū) / √(ū / nᵢ)

on a control chart with LCL = −3 and UCL = +3 and the center line at zero. This chart is appropriate if tests for runs and other pattern-recognition methods are to be used in conjunction with the chart.

Demerit Systems

With complex products such as automobiles, computers, or major appliances, we usually find that many different types of nonconformities or defects can occur. Not all of these types of defects are equally important. A unit of product having one very serious defect would probably be classified as nonconforming to requirements, but a unit having several minor defects might not necessarily be nonconforming. In such situations, we need a method to classify nonconformities or defects according to severity and to weight the various types of defects in a reasonable manner. Demerit systems for attribute data can be of value in these situations. One possible demerit scheme is defined as follows.


Let ciA, ciB, ciC, and ciD represent the number of Class A, Class B, Class C, and Class D defects, respectively, in the ith inspection unit. We assume that each class of defect is independent, and that the occurrence of defects in each class is well modeled by a Poisson distribution. Then we define the number of demerits in the inspection unit as

di = 100 ciA + 50 ciB + 10 ciC + ciD.

The demerit weights of 100 for Class A, 50 for Class B, 10 for Class C, and 1 for Class D are used fairly widely in practice. However, any reasonable set of weights appropriate for a specific problem may also be used. Suppose that a sample of n inspection units is used. Then the number of demerits per unit is

ui = D / n,

where D = d1 + d2 + … + dn is the total number of demerits in all n inspection units. Since ui is a linear combination of independent Poisson random variables, the statistic ui can be plotted on a control chart with the following parameters:

LIMITS
Central line:         ū = 100 ūA + 50 ūB + 10 ūC + ūD
Upper control limit:  ū + 3 σ̂u
Lower control limit:  ū − 3 σ̂u

where

σ̂u = √[ (100² ūA + 50² ūB + 10² ūC + ūD) / n ].

In the preceding equations, ūA, ūB, ūC, and ūD represent the average number of Class A, Class B, Class C, and Class D defects per unit; their values are obtained from the analysis of preliminary data, taken when the process is supposedly operating in control. Standard values for uA, uB, uC, and uD may also be used, if they are available. Jones, Woodall, and Conerly (1999) provide a very thorough discussion of demerit-based control charts. They show how probability-based limits can be computed as alternatives


to the traditional three-sigma limits used above. They also show that, in general, the probability limits give superior performance; they are, however, more complicated to compute. Many variations of this idea are possible. For example, we can classify nonconformities as either functional defects or appearance defects if a two-class system is preferred. It is also fairly common practice to maintain separate control charts on each defect class rather than combining them into one chart.
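The demerit calculations above are easy to script; the following Python sketch uses invented Class A through D defect counts for six inspection units:

    import numpy as np

    # Hypothetical Class A..D defect counts, one row per inspection unit
    counts = np.array([[0, 1, 3, 5],
                       [0, 0, 2, 7],
                       [1, 0, 4, 2],
                       [0, 2, 1, 4],
                       [0, 0, 3, 6],
                       [0, 1, 2, 3]])
    w = np.array([100, 50, 10, 1])        # demerit weights

    d = counts @ w                        # demerits per inspection unit, d_i
    n = len(counts)
    u_class = counts.mean(axis=0)         # uA_bar, uB_bar, uC_bar, uD_bar
    u_bar = u_class @ w                   # center line
    sigma_u = np.sqrt((u_class @ w**2) / n)

    print("d_i:", d)
    print("CL:", u_bar, " UCL:", u_bar + 3 * sigma_u,
          " LCL:", max(0.0, u_bar - 3 * sigma_u))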

Dealing with Low Defect Levels

When defect levels or, in general, count rates in a process become very low (say, under 1000 occurrences per million), there will be very long periods of time between the occurrences of a nonconforming unit. In these situations, many samples will have zero defects, and a control chart with the statistic consistently plotting at zero will be relatively uninformative. Thus, conventional c and u charts become ineffective as count rates are driven into the low parts per million (ppm) range. One way to deal with this problem is to adopt a time-between-occurrence control chart, which charts a new variable: the time between the successive occurrences of the count. The time-between-events control chart has been very effective as a process-control procedure for processes with low defect levels. Suppose that defects or counts or "events" of interest occur according to a Poisson distribution. Then the probability distribution of the time between events is the exponential distribution. Therefore, constructing a time-between-events control chart is essentially equivalent to control charting an exponentially distributed variable. However, the exponential distribution is highly skewed, and as a result, the corresponding control chart would be very asymmetric. Such a control chart would certainly look unusual and might present some difficulties in interpretation for operating personnel. Nelson (1994) has suggested solving this problem by transforming the exponential random variable to a Weibull random variable such that the resulting Weibull distribution is well approximated by the normal distribution. If y represents the original exponential random variable, the appropriate transformation is

x = y^(1/3.6) = y^0.2777.
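A brief Python sketch of this transformation approach; the event times are simulated exponential data, and the individuals-chart limits on the transformed variable use the usual moving-range estimate (d2 = 1.128 for a moving range of two):

    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.exponential(scale=40.0, size=200)  # simulated times between events

    x = y ** (1 / 3.6)          # Nelson's transformation to near-normality
    # (Kittlitz's y ** 0.25, discussed below, is a similar, simpler choice)

    mr_bar = np.abs(np.diff(x)).mean()         # average moving range
    center = x.mean()
    UCL = center + 3 * mr_bar / 1.128
    LCL = center - 3 * mr_bar / 1.128
    print(f"CL={center:.3f}  UCL={UCL:.3f}  LCL={LCL:.3f}")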

One would now construct a control chart on x, assuming that x follows a normal distribution. In many cases, the CUSUM and EWMA control charts discussed later would be better alternatives, because they are more effective in detecting small shifts in the mean. Kittlitz (1999) has also investigated transforming the exponential distribution for control-charting purposes. He notes that a log transformation will stabilize the variance of the exponential distribution, but produces a rather negatively skewed distribution. Kittlitz suggests using the transformation x = y^0.25, noting that it is very similar to Nelson's recommendation and is also very easy to compute.

Nonmanufacturing Applications

The c chart and u chart are widely used in transactional and service business applications of statistical process control. In effect, we can treat errors in those environments the same as we treat defects or nonconformities in the manufacturing world. To give just a few examples, we can plot errors on engineering drawings, errors on plans and documents, and errors in computer software as c or u charts. An example using u charts to monitor errors in computer software during product development is given in Gardiner and Montgomery (1987).


Control Charts for Variables vs. Charts for Attributes

Sometimes, the quality control engineer has a choice between variable control charts and attributes control charts.

Advantages of attribute control charts

Allowing for quick summaries, that is, the engineer may simply classify products as acceptable or unacceptable, based on various quality criteria.

Thus, attribute charts sometimes bypass the need for expensive, precise devices and time-consuming measurement procedures.

More easily understood by managers unfamiliar with quality control procedures.

Advantages of variable control charts

More sensitive than attribute control charts. Therefore, variable control charts may alert us to quality problems before any actual "unacceptables" (as detected by the attribute chart) occur. Montgomery (1985) calls the variable control charts leading indicators of trouble that will sound an alarm before the number of rejects (scrap) increases in the production process.

Choosing the Proper Type of Control Chart

A. x̄ and R (or x̄ and s) charts. Consider using variables control charts in these situations:
1. A new process is coming on stream, or a new product is being manufactured by an existing process.
2. The process has been in operation for some time, but it is chronically in trouble or unable to hold the specified tolerances.
3. The process is in trouble, and the control chart can be useful for diagnostic purposes (troubleshooting).
4. Destructive testing (or other expensive testing procedures) is required.
5. It is desirable to reduce acceptance-sampling or other downstream testing to a minimum when the process can be operated in control.
6. Attributes control charts have been used, but the process is either out of control, or in control but with unacceptable yield.
7. There are very tight specifications, overlapping assembly tolerances, or other difficult manufacturing problems.
8. The operator must decide whether or not to adjust the process, or when a setup must be evaluated.
9. A change in product specifications is desired.
10. Process stability and capability must be continually demonstrated, such as in regulated industries.

B. Attributes charts (p charts, c charts, and u charts). Consider using attributes control charts in these situations:
1. Operators control the assignable causes, and it is necessary to reduce process fallout.
2. The process is a complex assembly operation and product quality is measured in terms of the occurrence of nonconformities, successful or unsuccessful product function, and so forth. (Examples include computers, office automation equipment, automobiles, and the major subsystems of these products.)
3. Process control is necessary, but measurement data cannot be obtained.
4. A historical summary of process performance is necessary. Attributes control charts, such as p charts, c charts, and u charts, are very effective for summarizing information about the process for management review.


5. Remember that attributes charts are generally inferior to charts for variables. Always use x̄ and R or x̄ and s charts whenever possible.

C. Control charts for individuals. Consider using the control chart for individuals, in conjunction with a moving-range chart, in these situations:
1. It is inconvenient or impossible to obtain more than one measurement per sample, or repeat measurements will only differ by laboratory or analysis error. Examples often occur in chemical processes.
2. Automated testing and inspection technology allow measurement of every unit produced. In these cases, also consider the cumulative sum control chart and the exponentially weighted moving average control chart.
3. The data become available very slowly, and waiting for a larger sample will be impractical or make the control procedure too slow to react to problems. This often happens in non-product situations; for example, accounting data may become available only monthly.
4. Generally, once we are in phase II, individuals charts have poor performance in shift detection and can be very sensitive to departures from normality. Use the EWMA and CUSUM charts instead of individuals charts in phase II whenever possible.


Process Capability

Process capability refers to the uniformity of the process. Obviously, the variability of critical-to-quality characteristics in the process is a measure of the uniformity of output. There are two ways to think of this variability:
1. The natural or inherent variability in a critical-to-quality characteristic at a specified time; that is, "instantaneous" variability
2. The variability in a critical-to-quality characteristic over time

[Figure: upper and lower natural tolerance limits in the normal distribution]

Capability analysis is a set of calculations used to assess whether a system is statistically able to meet a set of specifications or requirements. To complete the calculations, a set of data is required, usually generated by a control chart; however, data can be collected specifically for this purpose. Specifications or requirements are the numerical values within which the system is expected to operate,


that is, the minimum and maximum acceptable values. Occasionally there is only one limit, a maximum or minimum. Customers, engineers, or managers usually set specifications. Specifications are numerical requirements, goals, aims, or standards. It is important to remember that specifications are not the same as control limits. Control limits come from control charts and are based on the data. Specifications are the numerical requirements of the system.

All methods of capability analysis require that the data is statistically stable, with no special causes of variation present. To assess whether the data is statistically stable, a control chart should be completed. If special causes exist, data from the system will be changing. If capability analysis is performed, it will show approximately what happened in the past, but cannot be used to predict capability in the future. It will provide only a snapshot of the process at best. If, however, a system is stable, capability analysis shows not only the ability of the system in the past, but also, if the system remains stable, predicts the future performance of the system.

Capability analysis is summarized in indices; these indices show a system’s ability to meet its numerical requirements. They can be monitored and reported over time to show how a system is changing. Various capability indices are presented in this section; however, the main indices used are Cp and Cpk. The indices are easy to interpret; for example, a Cpk of more than one indicates that the system is producing within the specifications or requirements. If the Cpk is less than one, the system is producing data outside the specifications or requirements. This section contains detailed explanations of various capability indices and their interpretation.

Capability analysis is an excellent tool to demonstrate the extent of an improvement made to a process. It can summarize a great deal of information simply, showing the capability of a process, the extent of improvement needed, and later the extent of the improvement achieved.

Capability indices help to change the focus from only meeting requirements to continuous improvement of the process. Traditionally, the focus has been to reduce the proportion of product or service that does not meet specifications, using measures such as percentage of nonconforming product. Capability indices help to reduce the variation relative to the specifications or requirements, achieving increasingly higher Cp and Cpk values.

Before capability analysis is completed, a histogram and control chart need to be completed.


Process Capability is a measure of the ability of the process to meet specifications. It tells us how good the individual parts are. There are several methods to measure process capability including an estimation of the ppm (defective parts per million). Capability indices such as Cp, Cpk, Pp, Ppk are very popular; however, trying to summarize the capability via a single index is often misleading because key information about the process is lost.

Process capability compares the process output with the customer’s specification.

The purpose of a process capability study is to compare the process specification to the process output and determine statistically if the process can meet the customer’s specification.

A capable process:

is stable and not changing, and

can fit within the customer's specification with a little extra room (usually 25%) to spare.

The less variation there is in a process, the more capable it will be of meeting the customer’s specification.

A process is a unique combination of tools, materials, methods, and people engaged in producing a measurable output; for example, a manufacturing line for machine parts. All processes have inherent statistical variability, which can be evaluated by statistical methods.

Process capability is a measurable property of a process relative to its specification, expressed as a process capability index (e.g., Cpk or Cpm) or as a process performance index (e.g., Ppk or Ppm). The output of this measurement is usually illustrated by a histogram and by calculations that predict how many parts will be produced out of specification (OOS).

Process capability is also defined as the capability of a process to meet its purpose as managed by an organization's management and process.

Two parts of process capability are: 1) measuring the variability of the output of a process, and 2) comparing that variability with a proposed specification or product tolerance.

To measure the process, the input of a process usually has at least one or more measurable characteristics that are used to specify outputs. These can be analyzed statistically; where the output data show a normal distribution, the process can be described by the process mean (average) and the standard deviation.

A process needs to be established with appropriate process controls in place. A control chart analysis is used to determine whether the process is "in statistical control". If the process is not in statistical control,


then capability has no meaning. Therefore, process capability involves only common cause variation and not special cause variation.

A batch of data needs to be obtained from the measured output of the process. The more data that are included, the more precise the result; however, an estimate can be achieved with as few as 17 data points. This should include the normal variety of production conditions, materials, and people in the process. With a manufactured product, it is common to include at least three different production runs, including start-ups.

The process mean (average) and standard deviation are calculated. With a normal distribution, the "tails" can extend well beyond plus and minus three standard deviations, but this interval should contain about 99.73% of production output. Therefore, for a normal distribution of data, the process capability is often described as the relationship between six standard deviations and the required specification.

The output of a process is expected to meet customer requirements, specifications, or engineering tolerances. Engineers can conduct a process capability study to determine the extent to which the process can meet these expectations.

The ability of a process to meet specifications can be expressed as a single number using a process capability index, or it can be assessed using control charts. Either case requires running the process to obtain enough measurable output so that engineering is confident that the process is stable and that the process mean and variability can be reliably estimated. Statistical process control defines techniques to properly differentiate between stable processes, processes that are drifting (experiencing a long-term change in the mean of the output), and processes that are growing more variable. Process capability indices are only meaningful for processes that are stable (in a state of statistical control).

Process capability compares the output of an in-control process to the specification limits by using capability indices. The comparison is made by forming the ratio of the spread between the process specifications (the specification "width") to the spread of the process values, as measured by 6 process standard deviation units (the process "width").

A process capability index uses both the process variability and the process specifications to determine whether the process is "capable".

Process capability index

We are often required to compare the output of a stable process with the process specifications and make a statement about how well the process meets specification.  To do this we compare the natural variability of a stable process with the process specification limits. 

A process where almost all the measurements fall inside the specification limits is a capable process.


There are several statistics that can be used to measure the capability of a process:  Cp, Cpk, Cpm.

Most capability indices estimates are valid only if the sample size used is 'large enough'. Large enough is generally thought to be about 50 independent data values. 

The Cp, Cpk, and Cpm statistics assume that the population of data values is normally distributed. Assuming a two-sided specification, if μ and σ are the mean and standard deviation, respectively, of the normal data, and USL, LSL, and T are the upper and lower specification limits and the target value, respectively, then the population capability indices are defined as follows:

Cp  = (USL − LSL) / (6σ)
Cpu = (USL − μ) / (3σ)
Cpl = (μ − LSL) / (3σ)
Cpk = min(Cpu, Cpl)
Cpm = (USL − LSL) / ( 6 √( σ² + (μ − T)² ) )

Sample estimates of capability indices

Sample estimators for these indices are obtained by replacing μ with the sample mean x̄ and σ with the sample standard deviation s (estimators are indicated with a "hat" over them); for example,

Ĉp = (USL − LSL) / (6s)   and   Ĉpk = min( (USL − x̄) / (3s), (x̄ − LSL) / (3s) ).


The estimator for Cpk can also be expressed as Cpk = Cp(1 − k), where k is a scaled distance between the midpoint of the specification range, m, and the process mean, μ.

Denote the midpoint of the specification range by m = (USL + LSL) / 2. The distance between the process mean μ and the optimum, which is m, is μ − m. The scaled distance is

k = |m − μ| / ( (USL − LSL) / 2 ),

where the absolute value takes care of the case when μ < m. To determine the estimated value k̂, we estimate μ by x̄.

The estimator for the Cp index, adjusted by the k factor, is

Ĉpk = Ĉp (1 − k̂).

Since 0 ≤ k̂ ≤ 1, it follows that Ĉpk ≤ Ĉp.
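The following Python sketch computes these sample estimates; the measurements and specification limits are invented for illustration:

    import numpy as np

    data = np.array([9.8, 10.1, 10.0, 9.9, 10.3, 10.0, 9.7, 10.2,
                     10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9])
    USL, LSL, T = 10.6, 9.4, 10.0     # hypothetical specs and target

    xbar, s = data.mean(), data.std(ddof=1)

    Cp  = (USL - LSL) / (6 * s)
    Cpu = (USL - xbar) / (3 * s)
    Cpl = (xbar - LSL) / (3 * s)
    Cpk = min(Cpu, Cpl)
    Cpm = (USL - LSL) / (6 * np.sqrt(s**2 + (xbar - T)**2))

    # Equivalent Cpk via the k factor: Cpk = Cp * (1 - k)
    m = (USL + LSL) / 2
    k = abs(m - xbar) / ((USL - LSL) / 2)
    assert np.isclose(Cpk, Cp * (1 - k))
    print(f"Cp={Cp:.2f}  Cpk={Cpk:.2f}  Cpm={Cpm:.2f}")

Note that a sample of 16 points is used here only to keep the example small; as discussed below, much larger samples are needed for trustworthy estimates.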

To get an idea of the value of the Cp statistic for varying process widths, consider the following numerical summary (originally accompanied by a plot of Cp for varying process widths):


USL − LSL        6σ       8σ       10σ      12σ
Cp               1.00     1.33     1.66     2.00
Rejects          0.27%    64 ppm   0.6 ppm  2 ppb
% of spec used   100      75       60       50

where ppm = parts per million and ppb = parts per billion. Note that the reject figures are based on the assumption that the distribution is centered at the midpoint of the specification interval.
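Under the normality and centering assumptions, the reject figures in the table can be reproduced directly; a quick check in Python (scipy assumed available):

    from scipy.stats import norm

    for Cp in (1.00, 1.33, 1.66, 2.00):
        # Centered process: each spec limit lies 3*Cp sigmas from the mean
        fallout = 2 * norm.cdf(-3 * Cp)
        print(f"Cp={Cp:.2f}  rejects={fallout * 1e6:.3g} ppm")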

We have discussed the situation with two specification limits, the USL and LSL. This is known as the bilateral, or two-sided, case. There are many cases where only the lower or the upper specification is used; using one spec limit is called the unilateral, or one-sided, case. The corresponding capability indices are

Cpu = (USL − μ) / (3σ)   and   Cpl = (μ − LSL) / (3σ),

where μ and σ are the process mean and standard deviation, respectively.


Estimators of Cpu and Cpl are obtained by replacing μ and σ by x̄ and s, respectively. The following relationship holds:

Cp = (Cpu + Cpl) / 2.

Note that we can also write Cpk = min{Cpl, Cpu}.

Confidence Limits For Capability Indices

Assuming normally distributed process data, the distribution of the sample Ĉp follows from a chi-square distribution, and Ĉpu and Ĉpl have distributions related to the non-central t distribution. Fortunately, approximate confidence limits related to the normal distribution have been derived. Various approximations to the distribution of Ĉpk have been proposed, including those given by Bissell (1990); we will use a normal approximation.

The resulting formulas for confidence limits are given below.

100(1 − α)% confidence limits for Cp:

Ĉp √( χ²(α/2, ν) / ν )  ≤  Cp  ≤  Ĉp √( χ²(1 − α/2, ν) / ν ),

where χ²(p, ν) denotes the pth quantile of the chi-square distribution with


ν = degrees of freedom (usually n − 1).

Approximate 100(1 − α)% confidence limits for Cpu with sample size n are

Ĉpu ± z √( 1/(9n) + Ĉpu² / (2(n − 1)) ),

with z denoting the appropriate percent point of the standard normal distribution (z(1 − α/2) for a two-sided interval). Limits for Cpl are obtained by replacing Ĉpu by Ĉpl.

The variance of Ĉpk can be obtained from a series approximation; the details of the derivation are given in Bissell (1990) and related work. The resulting approximation, commonly used in practice, yields the 100(1 − α)% confidence limits for Cpk:

Ĉpk ( 1 ± z(1 − α/2) √( 1/(9n Ĉpk²) + 1/(2(n − 1)) ) ).

It is important to note that the sample size should be at least 25 before these approximations are valid and that, in general, n ≥ 100 is needed for capability studies. Another point to observe is that the sampling variation of estimated capability indices is not negligible, owing to their randomness.
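A Python sketch of these interval calculations; the chi-square form for Cp is standard, and the Cpk interval follows the Bissell-type normal approximation quoted above (sample values are illustrative):

    import numpy as np
    from scipy.stats import chi2, norm

    def cp_ci(cp_hat, n, alpha=0.05):
        # Chi-square based confidence interval for Cp
        nu = n - 1
        lo = cp_hat * np.sqrt(chi2.ppf(alpha / 2, nu) / nu)
        hi = cp_hat * np.sqrt(chi2.ppf(1 - alpha / 2, nu) / nu)
        return lo, hi

    def cpk_ci(cpk_hat, n, alpha=0.05):
        # Normal-approximation (Bissell-type) interval for Cpk
        z = norm.ppf(1 - alpha / 2)
        half = z * np.sqrt(1 / (9 * n * cpk_hat**2) + 1 / (2 * (n - 1)))
        return cpk_hat * (1 - half), cpk_hat * (1 + half)

    print(cp_ci(1.5, 100))    # e.g. Cp_hat = 1.5 from n = 100 observations
    print(cpk_ci(1.3, 100))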

The process is not approximately normally distributed

The indices that we considered thus far are based on normality of the process distribution. This poses a problem when the process distribution is not normal. Without going into the specifics, we can list some remedies.

1. Transform the data so that they become approximately normal. A popular transformation is the Box-Cox transformation (see the sketch after this list).

2. Use or develop another set of indices that apply to nonnormal distributions. One such statistic is Cnpk (a nonparametric Cpk). Its estimator is calculated by

Ĉnpk = min( (USL − median) / (p(0.995) − median),  (median − LSL) / (median − p(0.005)) ),

where p(0.995) is the 99.5th percentile of the data and p(0.005) is the 0.5th percentile of the data.
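A short Python sketch of both remedies on simulated skewed data; scipy.stats.boxcox fits the transformation parameter by maximum likelihood, and the percentiles for Ĉnpk come straight from the sample (the specification limits are hypothetical):

    import numpy as np
    from scipy.stats import boxcox

    rng = np.random.default_rng(7)
    y = rng.gamma(shape=2.0, scale=1.5, size=500)  # skewed (nonnormal) data
    USL, LSL = 12.0, 0.2                           # hypothetical specs

    # Remedy 1: Box-Cox transform toward normality (lambda fit by ML)
    y_transformed, lam = boxcox(y)

    # Remedy 2: nonparametric Cnpk from sample percentiles
    med = np.median(y)
    p995, p005 = np.percentile(y, [99.5, 0.5])
    Cnpk = min((USL - med) / (p995 - med), (med - LSL) / (med - p005))
    print(f"lambda={lam:.2f}  Cnpk={Cnpk:.2f}")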

Process Stability vs. Process Capability


Process stability and process capability are different ideas and there is no inherent relationship between them. That is, knowing that the process is capable (or not capable) tells us nothing about the process stability. Furthermore, knowing if the process is stable (or not) tells us nothing about the process capability. The following graphic illustrates all four possible scenarios. The graphic shows the distribution of individual measurements over time (left to right) compared to the upper and lower specification limits. 

In the upper left quadrant, the process is stable (in control) but is not capable of meeting specifications. If we viewed this process with a control chart alone, it would illustrate a stable process and we would have no idea that it is not capable.

In the lower left quadrant, the process is stable and capable.

In the lower right quadrant, the process is not stable, although we might say that it appears capable of meeting specification. (Note: this is not really the correct interpretation, as will be discussed shortly.)

In the upper right quadrant, the process is neither stable nor capable.

Conducting a Process Capability Study

The steps for conducting a process capability study are:

1. Preparing for the study.

2. Determining the process output.

3. Comparing the output to the spec.

4. Taking action to improve the process.

A process capability study measures the capability of a specific piece of equipment or a process under specific operating conditions.

It is important to identify and record this information prior to the beginning of the process capability study.


Step 1: Preparing for the Study

To prepare for the study:

Define the processing conditions.

Select a representative operator.

Assure sufficient raw materials are available.

Make sure the measurement system is reliable.

Step 2: Determining the Process Output

To determine the process output, run the process and collect data as you would if you were setting up a control chart.

Make sure the process is stable using the same methods as for setting up a control chart.

Since common process capability calculations are based on a stable, normally distributed process, if the process is not stable, you should not conduct a process capability study.

Calculate the process mean and process variation for the measured output.

Step 3: Comparing Process Output to the Specification

A specification normally consists of the nominal, or ideal, measure for the product and the tolerance, which is the amount of variation acceptable to the customer. It is often referred to as “the spec.”

The distance between the upper spec limit (USL) and the lower spec limit (LSL) is called the total tolerance, or T.T.

The Cpk for a process is determined by calculating the Cpu and the Cpl. The Cpk is the lower of those two numbers.

Step 4: Taking Action to Improve the Process

There are a variety of activities that can be undertaken to improve the process, such as 8D problem solving or mistake-proofing.

Process Capability Study Complications

Some of the complications we may be faced with while conducting capability studies include:

Using Individual Data, not Subgroups

Handling One-Sided Tolerances


Handling Short-Run Processes

Dealing with Tool Wear Issues

Dealing with Skewed Distributions

Not Knowing What the Spec Should Be

Assessing True Position Capability

Six Sigma Capability

Six Sigma is a broad business approach to drive defects produced by all processes down into parts per million levels of performance.

This means it’s really about improving the process capability for all critical-to-quality (CTQ) characteristics from all processes in the organization.

The goal in a Six Sigma organization is to achieve defect levels of less than 3.4 parts per million for every process in the organization and for every CTQ characteristic produced by those processes.

In practice, Six Sigma has come to mean a 4.5-sigma process, not a "true six sigma" process.

A process that operates with “true six sigma” performance takes up 50% of the specification if centered. This gives it a Cpk and a Cp of 2.0. A process such as this will produce defects at a rate of only ~2 parts per billion.

Six Sigma professionals allow for the process mean to drift by up to 1.5 standard deviations. So if we have a process with Cp = 2.0 but allow for a 1.5σ drift, then we have the equivalent of a 4.5-sigma process; that is, the mean will be 4.5σ from the specification limit at the edges of the drift. A 4.5-sigma process yields a 3.4 ppm defect level.

Instead of Cp and Cpk, some Six Sigma organizations report capability in terms of Z-values.

The Z-values represent the number of standard deviation units the mean is away from the specification limits.

Zl is the distance from the mean to the lower spec and Zu is the distance from the mean to the upper spec.

Zl equals 3 times Cpl and Zu equals 3 times Cpu.
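A tiny Python illustration of this reporting convention and of the 1.5-sigma-shift arithmetic behind the 3.4 ppm figure (values are illustrative):

    from scipy.stats import norm

    Cpl, Cpu = 2.0, 2.0            # a centered "true six sigma" process
    Zl, Zu = 3 * Cpl, 3 * Cpu      # Z equals 3 times the one-sided index

    Z_shifted = min(Zl, Zu) - 1.5  # allow a 1.5-sigma drift toward one limit
    print("Z after shift:", Z_shifted)                        # 4.5
    print(f"defects ~ {norm.cdf(-Z_shifted) * 1e6:.1f} ppm")  # ~3.4 ppm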

Interpreting Cp, Cpk

“Cpk is an index (a simple number) which measures how close a process is running to its specification limits, relative to the natural variability of the process. The larger the index, the less likely it is that any item will be outside the specs.” Neil Polhemus

“If you hunt or shoot targets with bow, darts, or gun, try this analogy. If your shots are falling in the same spot, forming a good group, this is a high Cp, and when the sighting is adjusted so this tight group of shots is landing on the bullseye, you now have a high Cpk.” Tommy


“Cpk measures how close you are to your target and how consistent you are around your average performance. A person may be performing with minimum variation, but he can be away from his target towards one of the specification limits, which indicates lower Cpk, whereas Cp will be high. On the other hand, a person may be on average exactly at the target, but the variation in performance is high (but still lower than the tolerance band, i.e., the specification interval). In such a case also Cpk will be lower, but Cp will be high. Cpk will be higher only when you are meeting the target consistently with minimum variation.” Ajit

“You must have a Cpk of 1.33 [4 sigma] or higher to satisfy most customers.” Joe Perito

“Consider a car and a garage. The garage defines the specification limits; the car defines the output of the process. If the car is only a little bit smaller than the garage, you had better park it right in the middle of the garage (center of the specification) if you want to get all of the car in the garage. If the car is wider than the garage, it does not matter if you have it centered; it will not fit. If the car is a lot smaller than the garage (a Six Sigma process), it doesn't matter if you park it exactly in the middle; it will fit and you have plenty of room on either side. If you have a process that is in control and with little variation, you should be able to park the car easily within the garage and thus meet customer requirements. Cpk tells you the relationship between the size of the car, the size of the garage, and how far away from the middle of the garage you parked the car.” Ben

“The value itself can be thought of as the amount the process (car) can widen before hitting the nearest spec limit (garage door edge):
Cpk = 1/2 means you've crunched nearest the door edge (ouch!)
Cpk = 1 means you're just touching the nearest edge
Cpk = 2 means your width can grow 2 times before touching
Cpk = 3 means your width can grow 3 times before touching” Larry Seibel

Differences Between Cpk and Ppk

“Cpk is for short term, Ppk is for long term.” Sundeep Singh

“Ppk produces an index number (like 1.33) for the process variation. Cpk references the variation to your specification limits. If you just want to know how much variation the process exhibits, a Ppk measurement is fine. If you want to know how that variation will affect the ability of your process to meet customer requirements (CTQs), you should use Cpk.” Michael Whaley

“It could be argued that the use of Ppk and Cpk (with sufficient sample size) gives far more valid estimates of long- and short-term capability of processes, since the 1.5 sigma shift has a shaky statistical foundation.” Eoin

“Cpk tells you what the process is CAPABLE of doing in the future, assuming it remains in a state of statistical control. Ppk tells you how the process has performed in the past. You cannot use it to predict the future, as you can with Cpk, because the process is not in a state of control. The values for Cpk and Ppk will converge to almost the same value when the process is in statistical control; that is because sigma and the sample standard deviation will be identical (at least as can be distinguished by an F-test). When out of control, the values will be distinctly different, perhaps by a very wide margin.” Jim Parnella

“Cp and Cpk are for computing the index with respect to the subgrouping of your data (different shifts, machines, operators, etc.), while Pp and Ppk are for the whole process (no subgrouping). For both Ppk and Cpk the 'k' stands for 'centralizing factor': the index takes into consideration the fact that your data may not be centered (and hence your index will be smaller). It is more realistic to use Pp and Ppk than Cp and Cpk, as the process variation cannot be tampered with by inappropriate subgrouping. However, Cp and Cpk can be very useful in order to know whether, under the best conditions, the process is capable of fitting into the specs or not. It basically gives you the best-case scenario for the existing process.” Chantal

“Cp should always be greater than 2.0 for a good process which is under statistical control. For a good process under statistical control, Cpk should be greater than 1.5.” Ranganadha Kumar


“As for Ppk/Cpk, they mean one or the other and you will find people confusing the definitions and you WILL find books defining them versa and vice versa. You will have to ask the definition the person is using that you are talking to.” Joe Perito

“I just finished up a meeting with a vendor and we had a nice discussion of Cpk vs. Ppk. We had the definitions exactly reversed between us. The outcome was to standardize on definitions and move forward from there. My suggestion to others is that each company have a procedure or document (we do not), which has the definitions of Cpk and Ppk in it. This provides everyone a standard to refer to for WHEN we forget or get confused.” John Adamo

As seen from the earlier discussions, there are three components of process capability:
1. The design specification or customer expectation (upper specification limit, lower specification limit)
2. The centering of the natural process variation (x̄)
3. The spread of the process variation (s)

A minimum of four possible outcomes can arise when the natural process variability is compared with the design specifications or customer expectations.

Case 1: Cpk > 1.33 (a highly capable process). This process should produce less than 64 nonconforming ppm.

A highly capable process: Voice of the Process < Specification (or Customer Expectations). This process will produce conforming products as long as it remains in statistical control. The process owner can claim that the customer should experience least difficulty and greater reliability with this product; this should translate into higher profits. Note: Cpk values of 1.33 or greater are considered to be industry benchmarks. This means that the process is contained within four standard deviations of the process specifications.

Case 2: Cpk = 1 to 1.33 (a barely capable process). This process will produce greater than 64 ppm but less than 2700 nonconforming ppm. This process has a spread just about equal to the specification width. It should be noted that if the process mean moves to the left or the right, a significant portion of product will start falling outside one of the specification limits. This process must be closely monitored.


A barely capable process: Voice of the Process = Customer Expectations. Note: this process is contained within three to four standard deviations of the process specifications.

Case 3: Cpk < 1 (the process is not capable). This process will produce more than 2700 nonconforming ppm.

A non-capable process: Voice of the Process > Customer Expectations. It is impossible for the current process to meet specifications even when it is in statistical control. If the specifications are realistic, an effort must immediately be made to improve the process (i.e., reduce variation) to the point where it is capable of producing consistently within specifications.

Case 4: Cpk < 1 (the process is not capable). This process will also produce more than 2700 nonconforming ppm.


The variability (s) and specification width are assumed to be the same as in Case 2, but the process average is off-center. In such cases, adjustment is required to move the process mean back to the target. If no action is taken, a substantial portion of the output will fall outside the specification limit even though the process might be in statistical control.

Assumptions, Conditions, and Precautions

Capability indices described here strive to represent the capability of a process with a single number. Much has been written in the literature about the pitfalls of these estimates. Following are some of the precautions the reader should exercise while calculating and interpreting process capability:
1. The indices for process capability discussed here are based on the assumption that the underlying process distribution is approximately bell shaped, or normal. Yet in some situations the underlying process distribution may not be normal. For example, flatness, pull strength, waiting time, etc., might naturally follow a skewed distribution. For these cases, calculating Cpk the usual way might be misleading. Many researchers have contributed to this problem; readers are referred to John Clements's article titled "Process Capability Calculations for Non-Normal Distributions" for details.
2. The process or parameter in question must be in statistical control. There is a tendency to want to know the capability of the process before statistical control is established, but the presence of special causes of variation makes the prediction of process capability difficult and the meaning of Cpk unclear.
3. The data chosen for a process capability study should attempt to encompass all natural variations. For example, one supplier might report a very good process capability value using only ten samples produced on one day, while another supplier of the same commodity might report a somewhat lesser process capability number using data from a longer period of time that more closely represents the process. If one were to compare these process index numbers when choosing a supplier, the best supplier might not be chosen.
4. The number of samples used has a significant influence on the accuracy of the Cpk estimate.


3. Other statistical process-monitoring and control techniques
3.1. Cumulative sum and exponentially weighted moving average control charts
3.2. Other univariate statistical process monitoring and control techniques
3.3. Multivariate process monitoring and control

The conventional Shewhart control chart has been in use for well over fifty years and is the basic method of statistical process control. In quality control, however, there are many more complicated situations in which the conventional charts are not appropriate and special charts are needed. Moreover, the increasing emphasis on variability reduction, yield enhancement, and process improvement has led to the development of many innovative techniques for statistical process monitoring and control.

Now we can list some of the control charts that are used in special situations:

1. Cumulative sum control chart (CUSUM chart)
2. Exponentially weighted moving average control chart (EWMA chart)
3. Statistical process control for short production runs
4. Modified and acceptance control charts
5. Control charts for multiple-stream processes
6. Statistical process control with autocorrelated process data
7. Adaptive sampling procedures
8. Economic design of control charts
9. Overview of other procedures
10. The multivariate quality control problem
11. Hotelling T² control chart
12. Multivariate EWMA control chart


13. Regression adjustment
14. Control charts for monitoring variability
15. Latent structure methods

CUSUM control chart and EWMA control charts

A major disadvantage of any Shewhart control chart is that it uses only the information about the process contained in the last plotted point and ignores any information given by the sequence of points. This feature makes the Shewhart control chart relatively insensitive to small process shifts. Sensitizing rules such as runs criteria and warning limits can be added to improve sensitivity, but they reduce the simplicity and ease of interpretation of the chart.

Two effective alternatives for detecting small shifts are the cumulative sum (CUSUM) control chart and the exponentially weighted moving average (EWMA) control chart.

THE CUSUM control chart

A Shewhart control chart will often fail to detect a small shift, precisely because of the shift's small magnitude. The Shewhart chart for averages is not very effective against small shifts; it performs well mainly when the magnitude of the shift is 1.5σ to 2σ or larger.

There are major differences between cusum charts and other control (Shewhart) charts:

•A Shewhart control chart plots points based on information from a single subgroup sample. In cusum charts, each point is based on information from all samples taken up to and including the current subgroup.

•On a Shewhart control chart, horizontal control limits define whether a point signals an out-of-control condition. On a cusum chart, the limits can be either in the form of a V-mask or a horizontal decision interval.

•The control limits on a Shewhart control chart are commonly specified as 3σ limits. On a cusum chart, the limits are determined from average run length, from error probabilities, or from an economic design.

Cumulative sum (CUSUM) charts display cumulative sums of deviations of subgroup or individual measurements from a target value. CUSUM charts are graphical and analytical tools for deciding whether a process is in a state of statistical control and for detecting a shift in the process mean. The CUSUM control chart performs well against small shifts because it directly incorporates all the information in the sequence of sample values, plotting the cumulative sums of the deviations of the sample values from the target value.

The CUSUM control chart was first proposed by Page in 1954 and has been developed further by several others. It is possible to devise cumulative sum procedures for other sample statistics, and CUSUMs can also be used for monitoring process variability; here we concentrate on the cumulative sum chart for the process mean.

CUSUM works as follows. Let us collect m samples, each of size n, and compute the mean of each sample. Then the cumulative sum (CUSUM) control chart is formed by plotting one of the following quantities:

Sm = Σᵢ₌₁ᵐ (x̄ᵢ − μ̂₀)   or   S′m = (1/σx̄) Σᵢ₌₁ᵐ (x̄ᵢ − μ̂₀)


against the sample number m, where μ̂₀ is the estimate of the in-control mean and σx̄ is the known (or estimated) standard deviation of the sample means. The choice of which of these two quantities is plotted is usually determined by the statistical software package.

In either case, as long as the process remains in control centered at μ̂₀, the CUSUM plot will show variation in a random pattern centered about zero. If the process mean shifts upward, the charted CUSUM points will eventually drift upwards, and vice versa if the process mean decreases.

Suppose samples of size n ≥ 1 are collected, and let x̄ⱼ be the average of the jth sample. If μ₀ is the target for the process mean, the CUSUM control chart is formed by plotting the quantity

Cᵢ = Σⱼ₌₁ⁱ (x̄ⱼ − μ₀)

against the sample number i. Cᵢ is called the cumulative sum up to and including the ith sample.

Because they combine information from several samples, cumulative sum charts are more effective than Shewhart charts for detecting small process shifts. They are particularly effective with samples of size n = 1, which makes the CUSUM chart a good tool where rational subgroups are frequently of size one, as in discrete parts manufacturing and on-line control. The CUSUM defined above is a random walk with mean zero if the process remains in control at the target value.

If the mean shifts upward to some value μ₁ > μ₀, then an upward or positive drift will develop in the CUSUM Cᵢ. Conversely, if the mean shifts downward to some μ₁ < μ₀, then a downward or negative drift in Cᵢ will develop. Thus, if a trend develops in the plotted points, either upward or downward, we should consider this as evidence that the process mean has shifted, and a search for an assignable cause should be performed. The starting value of the CUSUM is zero.

There are two ways to represent CUSUMs:

1. The tabular (or algorithmic) CUSUM
2. The V-mask form of the CUSUM

Of the two forms, the tabular CUSUM is preferable.

A. Tabular or Algorithmic CUSUM for monitoring the process mean

Most users of CUSUM procedures prefer tabular charts over the V-mask. The V-mask is actually a carry-over of the pre-computer era; the tabular method can be quickly implemented with standard spreadsheet software.

CUSUMs may be constructed both for individual observations and for the averages of rational subgroups. The case of individual observations occurs very often in practice.

To generate the tabular form we use the h and k parameters expressed in the original data units. It is also possible to use sigma units.


The following quantities are calculated:

Shi(i) = max(0, Shi(i−1) + xᵢ − μ̂₀ − k)
Slo(i) = max(0, Slo(i−1) + μ̂₀ − k − xᵢ)

where Shi(0) and Slo(0) are 0. When either Shi(i) or Slo(i) exceeds h, the process is out of control.

The individual-observations CUSUM is described as follows. Let xᵢ be the ith observation on the process. When the process is in control, xᵢ has a normal distribution with mean μ₀ and standard deviation σ. We assume that σ is known or that an estimate is available. The target value μ₀ is the desired value for the quality characteristic x. If the process drifts or shifts off this target value, the CUSUM will signal, and an adjustment is made to some manipulatable variable to bring the process back on target. Furthermore, in some cases a signal from a CUSUM indicates the presence of an assignable cause that must be investigated, just as with Shewhart charts.

The tabular CUSUM works by accumulating deviations from the target value that are above target with one statistic, C⁺, and accumulating deviations from the target value that are below target with another statistic, C⁻. These statistics are called the one-sided upper and lower CUSUMs, respectively. They are computed as follows:

C⁺ᵢ = max[0, xᵢ − (μ₀ + K) + C⁺ᵢ₋₁]
C⁻ᵢ = max[0, (μ₀ − K) − xᵢ + C⁻ᵢ₋₁]

with starting values C⁺₀ = C⁻₀ = 0.

Here K is usually called the reference value (or the allowance or slack value), and it is often chosen about halfway between the target value μ₀ and the out-of-control value of the mean μ₁ that we are interested in detecting quickly. Note that C⁺ and C⁻ accumulate deviations from the target value that are greater than K, with both quantities reset to zero upon becoming negative. If either C⁺ or C⁻ exceeds the decision interval H, the process is considered to be out of control. It is very important that these two parameters be selected properly, as they have substantial impact on the performance of the CUSUM. A reasonable value for H is five times the process standard deviation.
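A compact Python sketch of this tabular CUSUM recursion; the observations are simulated with a 1σ upward shift halfway through, and the design values K = 0.5σ and H = 5σ follow the recommendations discussed below:

    import numpy as np

    rng = np.random.default_rng(3)
    mu0, sigma = 10.0, 1.0
    x = np.concatenate([rng.normal(mu0, sigma, 15),
                        rng.normal(mu0 + 1.0, sigma, 15)])  # 1-sigma shift

    K, H = 0.5 * sigma, 5.0 * sigma
    c_plus = c_minus = 0.0
    for i, xi in enumerate(x, start=1):
        c_plus = max(0.0, xi - (mu0 + K) + c_plus)
        c_minus = max(0.0, (mu0 - K) - xi + c_minus)
        if c_plus > H or c_minus > H:
            print(f"out-of-control signal at observation {i}")
            break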

The tabular CUSUM also indicates when the shift probably occurred. The counter N⁺ records the number of consecutive periods since the upper-side CUSUM C⁺ rose above zero (and N⁻ does the same for C⁻). It is useful to present a graphical display for the tabular CUSUM; these charts are sometimes called CUSUM status charts. They are constructed by plotting C⁺ and C⁻ versus the sample number, with each vertical bar representing the value of C⁺ or C⁻ in period i, and with the decision interval plotted on the chart, so that the CUSUM status chart resembles a Shewhart control chart. The observations xᵢ for each period can also be plotted on the status chart as solid dots; this frequently helps the user visualize the actual process performance that has led to a particular value of the CUSUM. As with any control chart, when an out-of-control signal occurs on a CUSUM scheme, one should search for an assignable cause, take any corrective action required, and then reinitialize the CUSUM at zero. The status chart also helps in determining when the assignable cause occurred: just count backward from the out-of-control signal to the time period when the CUSUM lifted above zero to find the first period following the process shift. The counters N⁺ and N⁻ are used in this capacity.

An estimate of the new process mean may be helpful when an adjustment to some manipulatable variable is required in order to bring the process back to the target value. This can be computed from

μ̂ = μ0 + K + C+(N)/N+ , if C+(N) > H
μ̂ = μ0 − K − C−(N)/N− , if C−(N) > H
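A worked check, using the C+ values from the tabular example that follows (the bookkeeping of N+ is ours):

\hat{\mu} = \mu_0 + K + \frac{C^{+}_{N}}{N^{+}}
          = 325 + 0.3175 + \frac{19.04}{8} \approx 327.70

since C+ first rose above zero at period 13, giving N+ = 8 at period 20; this agrees with the average of observations 13-20 (about 327.7).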


Finally, consider run tests and other sensitizing rules in the CUSUM context. The zone rules cannot be safely applied, because successive values of C+ and C− are dependent; in fact, the CUSUM can be thought of as a weighted average in which the weights are random.

Recommendation for CUSUM Designs

The important parameters in designing the tabular CUSUM chart are the reference value K and the decision interval H. It is usually recommended that these parameters be selected to provide good average run length (ARL) performance. There have been many studies of CUSUM ARL performance; some guidance on selecting H and K follows.

Defining H = hσ and K = kσ, where σ is the standard deviation of the variable plotted on the chart, the usual recommendation (h = 4 or 5 with k = 1/2) provides a CUSUM with good ARL properties against a shift of about 1σ in the process mean.

We will construct a tabular CUSUM chart for the example described above. For this example the parameters are h = 4.1959 and k = 0.3175, expressed in the units of the data. Using these design values, the tabular form of the example is:

Target = 325, h = 4.1959, k = 0.3175

                   Increase in mean         Decrease in mean
Group      x      x-325  x-325-k    Shi     325-k-x    Slo     CUSUM
  1     324.93    -0.07   -0.39     0.00     -0.24     0.00    -0.07
  2     324.68    -0.32   -0.64     0.00      0.01     0.01    -0.40
  3     324.73    -0.27   -0.59     0.00     -0.04     0.00    -0.67
  4     324.35    -0.65   -0.97     0.00      0.33     0.33    -1.32
  5     325.35     0.35    0.03     0.03     -0.67     0.00    -0.97
  6     325.23     0.23   -0.09     0.00     -0.54     0.00    -0.75
  7     324.13    -0.88   -1.19     0.00      0.56     0.56    -1.62
  8     324.53    -0.48   -0.79     0.00      0.16     0.72    -2.10
  9     325.23     0.23   -0.09     0.00     -0.54     0.17    -1.87
 10     324.60    -0.40   -0.72     0.00      0.08     0.25    -2.27
 11     324.63    -0.38   -0.69     0.00      0.06     0.31    -2.65
 12     325.15     0.15   -0.17     0.00     -0.47     0.00    -2.50
 13     328.33     3.32    3.01     3.01     -3.64     0.00     0.83
 14     327.25     2.25    1.93     4.94*    -2.57     0.00     3.08
 15     327.83     2.82    2.51     7.45*    -3.14     0.00     5.90
 16     328.50     3.50    3.18    10.63*    -3.82     0.00     9.40
 17     326.68     1.68    1.36    11.99*    -1.99     0.00    11.08
 18     327.78     2.77    2.46    14.44*    -3.09     0.00    13.85
 19     326.88     1.88    1.56    16.00*    -2.19     0.00    15.73
 20     328.35     3.35    3.03    19.04*    -3.67     0.00    19.08

* = out-of-control signal

The Average Run Length of Cumulative Sum Control Charts

The operation of obtaining samples to use with a cumulative sum (CUSUM) control chart consists of taking samples of size n and plotting the cumulative sums

S_r = Σ_{i=1}^{r} (x̄_i − k)

versus the sample number r, where x̄_i is the sample mean and k is a reference value.

In practice, k might be set equal to (μ0 + μ1)/2, where μ0 is the estimated in-control mean, sometimes known as the acceptable quality level, and μ1 is referred to as the rejectable quality level. If the distance between a plotted point and the lowest previous point is equal to or greater than h, one concludes that the process mean has shifted (increased). Hence, h is referred to as the decision limit. Thus the sample size n, reference value k, and decision limit h are the parameters required for operating a one-sided CUSUM chart. If one has to control both positive and negative deviations, as is usually the case, two one-sided charts are used, with respective reference values k1, k2 (k1 > k2) and respective decision limits h and −h.

The shift in the mean can be expressed as μ1 − k. If we are dealing with normally distributed measurements, we can standardize this shift by

k_s = (μ1 − k) / (σ/√n)

Similarly, the decision limit can be standardized by

h_s = h / (σ/√n)

The average run length (ARL) at a given quality level is the average number of samples (subgroups) taken before an action signal is given. The standardized parameters k_s and h_s, together with the sample size n, are usually selected to yield approximate ARLs L0 and L1 at the acceptable and rejectable quality levels μ0 and μ1 respectively. We would like a high ARL, L0, when the process is on target (i.e., in control), and a low ARL, L1, when the process mean shifts to an unsatisfactory level. In order to determine the parameters of a CUSUM chart, the acceptable and rejectable quality levels, along with the desired respective ARLs, are usually specified. The design parameters can then be obtained in a number of ways. Unfortunately, the calculations of the ARL for CUSUM charts are quite involved. There are several nomographs available from different sources that can be used to find the ARLs when the standardized h and k are given. Some of the nomographs solve the unpleasant integral equations that form the basis of the exact solutions, using an


approximation based on systems of linear algebraic equations (SLAE). The handbook from which this material is drawn used a computer program that furnished the required ARLs given the standardized h and k. An example is given below:

mean shift (k = .5)    h = 4    h = 5    Shewhart
        0               336      930      371.00
        .25             74.2     140      281.14
        .5              26.6     30.0     155.22
        .75             13.3     17.0      81.22
       1.00             8.38     10.4      44.0
       1.50             4.75     5.75      14.97
       2.00             3.34     4.01       6.30
       2.50             2.62     3.11       3.24
       3.00             2.19     2.57       2.00
       4.00             1.71     2.01       1.19

If k = .5, then the shift of the mean (in multiples of the standard deviation of the mean) is obtained by adding .5 to the first-column entry. For example, to detect a mean shift of 1σ at h = 4, the ARL = 8.38 (at the first-column entry of .5).

The last column of the table contains the ARLs for a Shewhart control chart at selected mean shifts. The ARL for a Shewhart chart is 1/p, where p is the probability of a point falling outside the established control limits. Thus, for 3σ control limits and assuming normality, the probability of exceeding the upper control limit is .00135, the probability of falling below the lower control limit is also .00135, and their sum is .0027. (These numbers come from standard normal distribution tables or computer programs, setting z = 3.) Then ARL = 1/.0027 = 370.37. This says that when a process is in control, one expects an out-of-control signal (false alarm) about every 371 runs.

When the mean shifts up by 1σ, the distance between the upper control limit and the shifted mean is 2σ (instead of 3σ). Entering normal distribution tables with z = 2 yields a probability of p = .02275 of exceeding this value. The distance between the shifted mean and the lower limit is now 4σ, and the probability of falling below it is only .000032, which can be ignored. The ARL is then 1/.02275 = 43.96.

The conclusion is that the Shewhart chart is superior for detecting large shifts, and the CUSUM scheme is faster for small shifts. The break-even point is a function of h, as the table shows.
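This ARL arithmetic is easy to verify numerically; a small sketch (scipy is assumed to be available):

from scipy.stats import norm

L = 3.0
p0 = norm.sf(L) + norm.cdf(-L)     # in control: P(point outside 3-sigma limits) = .0027
print(1 / p0)                       # ARL0 = 370.4

# Mean shifted up by 1 sigma: the limits now sit 2 sigma above and 4 sigma below the mean.
p1 = norm.sf(L - 1.0) + norm.cdf(-(L + 1.0))
print(1 / p1)                       # ~43.8 (the text ignores the tiny lower tail and gets 43.96)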

V-Mask CUSUM Procedure

A visual procedure proposed by Barnard in 1959, known as the V-mask, is sometimes used to determine whether a process is out of control. The V-mask control scheme is an alternative to the tabular CUSUM. The V-mask is applied to successive values of the CUSUM statistic

C_i = Σ_{j=1}^{i} y_j = y_i + C_{i−1}

where y_i is the standardized observation y_i = (x_i − μ0)/σ. A typical V-mask is shown below. The decision procedure consists of placing the V-mask on the CUSUM chart with its origin point on the most recent value of C_i, as shown in the diagram; more often, though, the tabular form of the CUSUM is preferred. A V-mask is an overlay shape in the form of a V on its side that is superimposed on the graph of the cumulative sums. The origin point of the V-mask is placed on top of the latest cumulative sum point, and past points are examined to see if any fall above or below the arms of the V. As long as all the previous points lie between the arms of the V, the process is in control. Otherwise (even if one point lies outside) the process is suspected of being out of control.


In the diagram above, the V-mask shows an out-of-control situation because one point lies above the upper arm. By sliding the V-mask backwards so that the origin point covers other cumulative sum data points, we can determine the first point that signaled the out-of-control situation. This is useful for diagnosing what might have caused the process to go out of control. From the diagram it is clear that the behavior of the V-mask is determined by the distance k (the slope of the lower arm) and the rise distance h; these are the design parameters of the V-mask. Note that we could also specify d and the vertex angle (or, as is more common in the literature, θ = half the vertex angle) as the design parameters, and we would end up with the same V-mask.

In practice, designing and manually constructing a V-mask is a complicated procedure; a CUSUM spreadsheet-style procedure such as the one shown below is more practical, unless you have statistical software that automates the V-mask methodology. Before describing the spreadsheet approach, we look briefly at an example of a V-mask in graph form. An example will be used to illustrate the construction and application of a V-mask. The 20 data points

324.925, 324.675, 324.725, 324.350, 325.350, 325.225, 324.125, 324.525, 325.225, 324.600, 324.625, 325.150, 328.325, 327.250, 327.825, 328.500, 326.675, 327.775, 326.875, 328.350

are each the average of samples of size 4 taken from a process that has an estimated mean of 325. Based on process data, the process standard deviation is 1.27, and therefore the sample means have a standard deviation of 1.27/√4 = 0.635.
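Running the tabular CUSUM sketch from earlier in this chapter on these 20 means reproduces the tabular results (h and k as in the worked example):

xbar = [324.925, 324.675, 324.725, 324.350, 325.350, 325.225, 324.125,
        324.525, 325.225, 324.600, 324.625, 325.150, 328.325, 327.250,
        327.825, 328.500, 326.675, 327.775, 326.875, 328.350]

for i, (shi, slo, signal) in enumerate(
        tabular_cusum(xbar, mu0=325.0, k=0.3175, h=4.1959), start=1):
    if signal:
        print(f"sample {i}: Shi = {shi:.2f} (out of control)")

# First signal at sample 14, where Shi = 4.94 > h, as in the table.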

We can design a V-Mask using h and k or we can use an alpha and beta design approach. For the latter approach we must specify

α: the probability of a false alarm, i.e., concluding that a shift in the process has occurred, while in fact it did not,

β: the probability of not detecting that a shift in the process mean has, in fact, occurred, and

δ (delta): the amount of shift in the process mean that we wish to detect, expressed as a multiple of the standard deviation of the data points (which are the sample means).

Note: Technically, α and β are calculated in terms of one sequential trial where we monitor Sm until we have either an out-of-control signal or Sm returns to the starting point (and the monitoring begins, in effect, all over again).

The values of h and k are related to α, β, and δ. The design equations (adapted from Montgomery, 2000) are commonly given as

k = (δ/2)·σ_x̄  and  h = d·k,  where  d = (2/δ²)·ln[(1−β)/α]

and σ_x̄ is the standard deviation of the plotted points. For the data above, δ = 1 and σ_x̄ = 0.635 give k = 0.3175; an α/β design of this kind also underlies the value h = 4.1959 used earlier.


Cusum Charts Compared with Shewhart Charts

Although CUSUM charts and Shewhart charts are both used to detect shifts in the process mean, there are important differences between the two methods.

Each point on a Shewhart chart is based on information for a single subgroup sample or measurement. Each point on a cusum chart is based on information from all samples (measurements) up to and including the current sample (measurement).

On a Shewhart chart, upper and lower control limits are used to decide whether a point signals an out-of-control condition. On a cusum chart, the limits take the form of a decision interval or a V-mask.

On a Shewhart chart, the control limits are commonly computed as 3σ limits. On a CUSUM chart, the limits are determined from average run length specifications, specified error probabilities, or an economic design.

A CUSUM chart offers several advantages over a Shewhart chart. A CUSUM chart is more efficient for detecting small shifts in the process mean, in particular shifts of 0.5 to 2 standard deviations from the target mean. Lucas (1976) noted that "a V-mask designed to detect a 1σ shift will detect it about four times as fast as a competing Shewhart chart."

Shifts in the process mean are visually easy to detect on a CUSUM chart, since they produce a change in the slope of the plotted points. The point at which the slope changes is the point at which the shift occurred. These advantages are not as pronounced if the Shewhart chart is augmented by the tests for special causes described by Nelson (1984, 1985). Moreover,

CUSUM schemes are more complicated to design; a CUSUM chart can be slower to detect large shifts in the process mean; and it can be difficult to interpret point patterns on a CUSUM chart, since the CUSUMs are correlated.


The Exponentially Weighted Moving Average Control Charts (EWMA)

The exponentially weighted moving average (EWMA) is a statistic for monitoring the process that averages the data in a way that gives less and less weight to data as they are further removed in time from the current measurement. The data Y1, Y2, ..., Yt are the check standard measurements ordered in time. The EWMA statistic at time t is computed recursively from individual data points as

EWMA_t = λ Y_t + (1 − λ) EWMA_{t−1}

with the first EWMA statistic, EWMA_1, being the arithmetic average of the historical data.

The EWMA control chart can be made sensitive to small changes or a gradual drift in the process by the choice of the weighting factor, λ. A weighting factor of 0.2 - 0.3 is usually suggested for this purpose (Hunter), and 0.15 is also a popular choice.

Because it takes time for the patterns in the data to emerge, a permanent shift in the process may not immediately cause individual violations of the control limits on a Shewhart control chart. The Shewhart control chart is not powerful for detecting small changes, say of the order of 1½ standard deviations or less. The EWMA (exponentially weighted moving average) control chart is better suited to this purpose.

This chart is a good alternative to the Shewhart control chart when we are interested in detecting small shifts. Its operation is much the same as that of the CUSUM chart, and in some ways it is easier to construct and operate. Like the CUSUM, the EWMA is typically used with individual observations. It is a type of control chart used to monitor either variables- or attributes-type data using the monitored business or industrial process's entire history of output.[1] While other control charts treat rational subgroups of samples individually, the EWMA chart tracks the exponentially weighted moving average of all prior sample means. The EWMA weights samples in geometrically decreasing order, so that the most recent samples are weighted most heavily while the most distant samples contribute very little.

Although the normal distribution is the basis of the EWMA chart, the chart is also relatively robust in the face of non-normally distributed quality characteristics. There is, however, an adaptation of the chart that accounts for quality characteristics that are better modeled by the Poisson distribution. The chart monitors only the process mean; monitoring the process variability requires the use of some other technique.

The EWMA control chart requires a knowledgeable person to select two parameters before setup:


1. The first parameter is λ, the weight given to the most recent rational subgroup mean. λ must satisfy 0 < λ ≤ 1, but selecting the "right" value is a matter of personal preference and experience. One source recommends 0.05 ≤ λ ≤ 0.25,[2]:411 while another recommends 0.2 ≤ λ ≤ 0.3.

2. The second parameter is L, the multiple of the rational subgroup standard deviation that establishes the control limits. L is typically set at 3 to match other control charts, but it may be necessary to reduce L slightly for small values of λ.

Instead of plotting rational subgroup averages directly, the EWMA chart computes successive observations z_i by computing the rational subgroup average x̄_i and then combining that new subgroup average with the running average of all preceding observations, z_{i−1}, using the specially chosen weight λ, as follows:

z_i = λ x̄_i + (1 − λ) z_{i−1}

The control limits for this chart type are

T ± L (S/√n) √[ (λ/(2−λ)) (1 − (1−λ)^{2i}) ]

where T and S are the estimates of the long-term process mean and standard deviation established during control-chart setup and n is the number of samples in the rational subgroup. Note that the limits widen for each successive rational subgroup, approaching T ± L (S/√n) √(λ/(2−λ)).[2]:407

The EWMA chart is sensitive to small shifts in the process mean, but does not match the ability of Shewhart-style charts (namely the x̄ and R and x̄ and s charts) to detect larger shifts.[2]:412 One author recommends superimposing the EWMA chart on top of a suitable Shewhart-style chart with widened control limits in order to detect both small and large shifts in the process mean.

The target or center line for the control chart is the average of historical data. The upper (UCL) and lower (LCL) limits are

UCL = EWMA₀ + k s √(λ/(2−λ))
LCL = EWMA₀ − k s √(λ/(2−λ))

where EWMA₀ is the average of the historical data, s times the radical expression is a good approximation to the standard deviation of the EWMA statistic, and the factor k is chosen in the same way as for the Shewhart control chart, generally 2 or 3. The implementation of the EWMA control chart is the same as for any other type of control procedure. The procedure is built on the assumption that the "good" historical data are representative of the in-control process, with future data from the same process tested for agreement with the historical data. To start the procedure, a target (average) and process standard deviation are computed
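A compact sketch of the whole EWMA procedure, assuming the target T and standard deviation S have already been estimated from historical data (all numbers are illustrative):

import math

def ewma_chart(data, T, S, n=1, lam=0.2, L=3.0):
    z = T                                # the EWMA starts at the historical average
    for i, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z      # z_i = lam*x_i + (1 - lam)*z_{i-1}
        # Time-varying limits, widening toward the asymptotic value.
        w = L * (S / math.sqrt(n)) * math.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        yield i, z, T - w, T + w, abs(z - T) > w

for i, z, lcl, ucl, signal in ewma_chart([10.1, 9.9, 10.3, 10.6, 10.8], T=10.0, S=0.3):
    print(i, round(z, 3), round(lcl, 3), round(ucl, 3), "signal" if signal else "")

With these illustrative values the gradual upward drift trips the limits at the fifth point, which is exactly the small-shift sensitivity described above.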


from historical check standard data. Then the procedure enters the monitoring stage, with the EWMA statistics computed and tested against the control limits. The EWMA statistics are weighted averages, and thus their standard deviations are smaller than the standard deviations of the raw data, so the corresponding control limits are narrower than the control limits for the Shewhart individual observations chart.

Data collection

Consider check standard measurements with J = 4 repetitions per day on the surface of a silicon wafer over K days, where the repetitions are randomized over position on the wafer (a 2-level design for measurements on a check standard: K days, 4 repetitions each).

For J measurements on each of K days, the measurements are denoted by Y_jk (j = 1, ..., J; k = 1, ..., K). The check standard value for the kth day is the daily average

Ȳ.k = (1/J) Σ_j Y_jk

The accepted value, or baseline for the control chart, is the grand average

Ȳ.. = (1/K) Σ_k Ȳ.k

The process standard deviation is the standard deviation of the daily averages,

s = √[ Σ_k (Ȳ.k − Ȳ..)² / (K − 1) ]

Check standard measurements should be structured in the same way as values reported on the test items. For example, if the reported values are averages of two measurements made within 5 minutes


of each other, the check standard values should be averages of the two measurements made in the same manner. Averages and short-term standard deviations computed from J repetitions should be recorded in a file along with identifications for all significant factors. The best way to record this information is to use one file with one line (row in a spreadsheet) of information in fixed fields for each group. A list of typical entries follows:

1. Month
2. Day
3. Year
4. Check standard identification
5. Identification for the measurement design (if applicable)
6. Instrument identification
7. Check standard value
8. Repeatability (short-term) standard deviation from J repetitions
9. Degrees of freedom
10. Operator identification
11. Environmental readings (if pertinent)

Monitoring bias and long-term variability

Once the baseline and control limits for the control chart have been determined from historical data, and any bad observations removed and the control limits recomputed, the measurement process enters the monitoring stage. A Shewhart control chart and EWMA control chart for monitoring a mass calibration process are shown below. For the purpose of comparing the two techniques, the two control charts are based on the same data where the baseline and control limits are computed from the data taken prior to 1985. The monitoring stage begins at the start of 1985. Similarly, the control limits for both charts are 3-standard deviation limits. The check standard data and analysis are explained more fully in another section.

In the EWMA control chart below, the control data after 1985 are shown in green, and the EWMA statistics are shown as black dots superimposed on the raw data. The EWMA statistics, and not the raw data, are of interest in looking for out-of-control signals. Because the EWMA statistic is a weighted average, it has a smaller standard deviation than a single control measurement, and, therefore, the EWMA control limits are narrower than the limits for the Shewhart control chart shown above.

The control strategy is based on the predictability of future measurements from historical data. Each new check standard measurement is plotted on the control chart in real time. These values are expected to fall within the control limits if the process has not changed. Measurements that exceed the control limits are probably out of control and require remedial action; possible causes of out-of-control signals need to be understood when developing strategies for dealing with outliers. The control chart should be viewed in its entirety on a regular basis to identify drift or shift in the process. In the Shewhart control chart shown above, only a few points exceed the control limits. The small but significant shift in the process that occurred after 1985 can only be identified by examining the plot of control measurements over time. A re-analysis of the kilogram check standard data shows that the control limits for the Shewhart control chart should be updated based on the data after 1985. In the EWMA control chart, multiple violations of the control limits occur after 1986. In the calibration environment, the incidence of several violations should alert the control engineer that a shift in the process has occurred, possibly because of damage or a change in the value of a reference standard, and that the process requires review.

Remedial actions

There are many possible causes of out-of-control signals.

A. Causes that do not warrant corrective action for the process (but which do require that the current measurement be discarded) are:

1. Chance failure, where the process is actually in control
2. Glitch in setting up or operating the measurement process
3. Error in recording of data

B. Changes in bias can be due to:

1. Damage to artifacts
2. Degradation in artifacts (wear or build-up of dirt and mineral deposits)


C. Changes in long-term variability can be due to:

1. Degradation in the instrumentation
2. Changes in environmental conditions
3. Effect of a new or inexperienced operator

An immediate strategy for dealing with out-of-control signals associated with high-precision measurement processes is as follows:

1. Repeat the measurement sequence to establish whether the out-of-control signal was simply a chance occurrence or glitch, or whether it flagged a permanent change or trend in the process.

2. With high-precision processes, for which a check standard is measured along with the test items, new values should be assigned to the test items based on new measurement data.

3. Examine the patterns of recent data. If the process is gradually drifting out of control because of degradation in instrumentation or artifacts, then:
   - Instruments may need to be repaired
   - Reference artifacts may need to be recalibrated

4. Reestablish the process value and control limits from more recent data if the measurement process cannot be brought back into control.

When to Use an EWMA Chart

EWMA (or Exponentially Weighted Moving Average) Charts are generally used for detecting small shifts in the process mean. They will detect shifts of .5 sigma to 2 sigma much faster than Shewhart charts with the same sample size. They are, however, slower in detecting large shifts in the process mean. In addition, typical run tests cannot be used because of the inherent dependence of data points.

EWMA Charts may also be preferred when the subgroups are of size n=1. In this case, an alternative chart might be the Individual X Chart, in which case you would need to estimate the distribution of the process in order to define its expected boundaries with control limits. The advantage of Cusum, EWMA and Moving Average charts is that each plotted point includes several observations, so you can use the Central Limit Theorem to say that the average of the points (or the moving average in this case) is normally distributed and the control limits are clearly defined.

When choosing the value of lambda used for weighting, it is recommended to use small values (such as 0.2) to detect small shifts, and larger values (between 0.2 and 0.4) for larger shifts. An EWMA Chart with lambda = 1.0 is an X-bar Chart.

EWMA charts are also used to smooth the effect of known, uncontrollable noise in the data. Many accounting processes and chemical processes fit this categorization. For example, while day-to-day fluctuations in accounting processes may be large, they are not purely indicative of process instability. The choice of lambda can be made to render the chart more or less sensitive to these daily fluctuations.

A modified EWMA control chart may be used for autocorrelated processes with a slowly drifting mean. The wandering-mean case has been presented by Montgomery and Mastrangelo (Journal of Quality Technology, July 1991, vol. 23, No. 3, pp. 179-193) for processes that are positively autocorrelated and whose mean does not drift too fast. Subgroup size for the wandering-mean case is limited to n = 1, since the subgroup range would not provide a meaningful indicator of process variation when observations are autocorrelated.

As with other control charts, EWMA charts are used to monitor processes over time. The x-axes are time based, so that the charts show a history of the process. For this reason, you must have data that is time-ordered; that is, entered in the sequence from which it was generated. If this is not the case, then trends or shifts in the process may not be detected, but instead attributed to random (common cause) variation.

Other Univariate Statistical Process Monitoring and Control Techniques

Having seen the basics of SPC methods and some special methods, we now give an overview of some of the more useful recent developments. We start with a discussion of SPC methods for short production runs and modified situations. Although there are other techniques that can be applied to the short-run scenario, this approach seems to be the most widely used in practice. These techniques also find application in situations where process capability is high, such as the "six-sigma" manufacturing environment. Multiple-stream processes are encountered in many industries.

SPC methods have found wide application in almost every type of industry. Some of the most interesting applications occur in job-shop manufacturing systems, or generally in any type of system characterized by short production runs. Some SPC methods for these situations are straightforward adaptations of the standard concepts and require no new methodology. In these situations, one basic technique is control charting in the short-run environment using the deviation from the nominal dimension as the variable on the control chart. We now present a summary of several univariate SPC monitoring and control techniques that have proven successful in short production run situations.

A. x̄ and R charts for short production runs

Statistical process-control methods have found wide application in almost every type of business. Some of the most interesting applications occur in job-shop manufacturing systems, or generally in any type of system characterized by short production runs. Some of the SPC methods for these situations are straightforward adaptations of the standard concepts and require no new methodology. The simplest technique for using x̄ and R charts in the short production run situation is to plot the deviation from nominal instead of the measured variable itself. This is sometimes called the deviation from nominal (DNOM) control chart.


If Mi represents the ith actual sample measurement in millimeters and T is the nominal value, then xi = Mi − T would be the deviation from nominal. The control limits have been calculated using the data from all 10 samples. In practice, we would recommend waiting until approximately 20 samples are available before calculating control limits. However, for purposes of illustration we have calculated the limits based on 10 samples to show that, when using deviation from nominal as the variable on the chart, it is not necessary to have a long production run for each part number. It is also customary to use a dashed vertical line to separate different products or part numbers and to identify clearly which section of the chart pertains to each part number.

Three important points should be made relative to the DNOM approach (a small illustration follows this list):

1. An assumption is that the process standard deviation is approximately the same for all parts. If this assumption is invalid, use a standardized x̄ and R chart.
2. This procedure works best when the sample size is constant for all part numbers.
3. Deviation from nominal control charts have intuitive appeal when the nominal specification is the desired target value for the process.
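A tiny sketch of the DNOM statistic (the part numbers and nominal values are made up):

def dnom(measurements, nominal):
    # x_i = M_i - T: deviation of each measurement from its part's nominal value
    return [m - nominal for m in measurements]

print(dnom([50.1, 49.8, 50.0], nominal=50.0))   # part A run
print(dnom([25.2, 24.9, 25.1], nominal=25.0))   # part B run, plotted on the same chart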

Standardized x̄ and R Charts. If the process standard deviations are different for different part numbers, the deviation-from-nominal (or deviation-from-process-target) control charts described above will not work effectively. However, standardized x̄ and R charts handle this situation easily. Consider the jth part number, and let R̄j and Tj be the average range and nominal value of x for this part number. Then for all the samples from this part number, plot

Ri^s = Ri / R̄j

on a standardized R chart with control limits at LCL = D3 and UCL = D4, and plot

x̄i^s = (x̄i − Tj) / R̄j

on a standardized x̄ chart with control limits at LCL = −A2 and UCL = +A2. Note that the center line of the standardized x̄ chart is zero, because x̄i is compared against the target Tj for subgroups of the jth part number. We point out that for this to be meaningful, there must be some logical justification for "pooling" parts on the same chart.


Attributes Control Charts for Short Production Runs

Dealing with attributes data in the short production run environment is extremely simple; the proper method is to use a standardized control chart for the attribute of interest. This method will allow different part numbers to be plotted on the same chart and will automatically compensate for variable sample size. All standardized attributes control charts have the center line at zero, and the upper and lower control limits are at +3 and −3, respectively.

Other Methods

A variety of other approaches can be applied to the short-run production environment. For example, the cusum and EWMA control charts discussed already have potential application to short production runs, because they have shorter average run-length performance than Shewhart-type charts, particularly in detecting small shifts. Since most production runs in the short-run environment will not, by definition, consist of many units, the rapid shift detection capability of those charts would be useful. Furthermore, cusum and EWMA control charts are very effective with subgroups of size one, another potential advantage in the short-run situation.

The "self-starting" version of the cusum is also a useful procedure for the short-run environment. The self-starting approach uses regular process measurements both for establishing (calibrating) the cusum and for process monitoring, so it avoids the phase I parameter-estimation stage. It also produces the Shewhart control statistics as a by-product. The number of subgroups used in calculating the trial control limits for Shewhart charts affects the false alarm rate of the chart; in particular, when a small number of subgroups is used, the false alarm rate is inflated. Hillier (1969) studied this problem and presented a table of factors to use in setting limits for x̄ and R charts based on a small number of subgroups for the case of n = 5 [see also Wang and Hillier (1970)]. Quesenberry (1993) has investigated a similar problem for x̄ and individuals control charts. Since control limits in the short-run environment will typically be


calculated from a relatively small number of subgroups, these papers present techniques of some interest.

Quesenberry (1991a, b, c) has presented procedures for short-run SPC using a transformation that is different from the standardization approach discussed above. He refers to these as Q-charts, and notes that they can be used for both short and long production runs. The Q-chart idea was first suggested by Hawkins (1987). Del Castillo and Montgomery (1994) have investigated the average run-length performance of the Q-chart for variables and show that in some cases the average run length (ARL) performance is inadequate. They suggest some modifications to the Q-chart procedure and some alternate methods based on the EWMA and a related technique called the Kalman filter that have better ARL performance than the Q-chart. Crowder (1992) has also reported a short-run procedure based on the Kalman filter. In a subsequent series of papers, Quesenberry (1995a, b, c, d) reports some refinements to the use of Q-charts that also enhance their performance in detecting process shifts. He also suggests that the probability that a shift is detected within a specified number of samples following its occurrence is a more appropriate measure of the performance of a short-run SPC procedure than its average run length. The interested reader should refer to the July and October 1995 issues of the Journal of Quality Technology, which contain these papers and a discussion of Q-charts by several authorities; they include a number of useful additional references.

Some guidelines for univariate control chart selection


Figure: The situations where various types of control charts are useful.

Multivariate process monitoring and control

We have addressed process monitoring and control primarily from the univariate perspective; that is, we have assumed that there is only one process output variable or quality characteristic of interest. In practice, however, many if not most process monitoring and control scenarios involve several related variables. Although applying univariate control charts to each individual variable is a possible solution, we will see that this is inefficient and can lead to erroneous conclusions; multivariate methods that consider the variables jointly are required. Here we consider control charts that can be regarded as the multivariate extensions of some of the univariate charts. The Hotelling T² chart is the analog of the Shewhart x̄ chart. We will also discuss a multivariate version of the EWMA control chart and some methods for monitoring variability in the multivariate case. These multivariate control charts work well when the number of process variables is not too large, say 10 or fewer. As the number of variables grows, however, traditional multivariate control charts lose efficiency with regard to shift detection. A popular approach in these situations is to reduce the dimensionality of the problem; we show how this can be done with principal components.

The Multivariate Quality-Control Problem


4. Acceptance Sampling (4LH)

4.1. Concepts of acceptance sampling

4.2. Lot-by-lot acceptance sampling

4.3. Other acceptance sampling techniques

1. Concepts of acceptance sampling

Acceptance Sampling

Acceptance sampling is defined as the inspection and classification of a sample of units selected at random from a larger batch or lot of product, and a decision about the lot based on the sample observed from the lot. It involves several terms.

In acceptance sampling there are two categories involved. When the sampling inspection operation is performed immediately following production, before the product is shipped to the customer, it is termed "outgoing inspection". "Incoming inspection" is the situation in which lots or batches of product are sampled as they are received from the supplier. Further, the various lots may be accepted or rejected, or they may be reworked or replaced with good items; this is known as "rectifying sampling inspection".

The diagram variants (outgoing, incoming, and rectifying inspection) are summarized in Figure 1.

Figure 1. Variations of acceptance sampling: outgoing, incoming, and rectifying acceptance sampling (process, inspection point, and disposition of accepted and rejected lots in each case).

Recent quality systems usually place less emphasis on acceptance sampling and attempt to make statistical process control and designed experiments the focus of their efforts. Sampling plans tend toward "conformance to specification" without providing any feedback into quality improvement.

A typical evolution in the use of these concepts is shown below.

Figure 2: Phase diagram of the use of quality engineering methods (over time, the share of effort shifts from acceptance sampling toward process control and design of experiments).


Terms and Terminology

The names of problem-solving methods have become "buzzwords" in quality improvement. The methods themselves are diverse; some involve numerical calculation with complicated statistics and others are simple charting methods. Some of the methods can be performed by a single person working alone, and others require multidisciplinary teams. The following is an abbreviated list of the methods:

Acceptance Sampling involves collecting and analyzing a relatively small number of key variable measurements to make "accept or reject" decisions about a relatively large number of units. Statistical evidence is generated about the fraction of the units in the lot that are acceptable.

Control Planning is an activity performed by the "owners" of a process to assure that all process key variables are being measured in a way that assures a high degree of quality. This effort can involve the application of multiple methods.

Design of Experiments (DOE) methods are structured approaches for collecting response data from varying multiple key variables of a system. After the experimental tests yield the response outputs, specific methods for analyzing the data are applied to establish approximate models for predicting outputs as a function of inputs.

Failure Mode & Effects Analysis (FMEA) is a method for prioritizing the response measurements and subsystems that should be addressed with highest priority.

Formal Optimization is itself a diverse set of methods for writing technical problems in a precise way and for developing recommended settings to improve a specific system or product, using input-output models as a starting point.


Gauge Repeatability and Reproducibility (R&R) involves collecting repeated measurements on an engineering system and performing calculations to assess the acceptability of a specific measurement system. ("Gage" is an alternative spelling.)

Process Mapping involves creating a diagram of the steps involved in an engineering system. The exercise can be an important part of waste reduction and lean engineering efforts and can aid in identifying key input variables.

Regression is a curve-fitting method for developing approximate predictions of system KOVs (usually averages) as they depend on key input variable settings. If used as part of a DOE method, it can also be associated with proving statistically that changes in KIVs affect changes in KOVs.

Statistical Process Control (SPC) charting includes several methods to assess visually and statistically the quality and consistency of process KOVs and to identify unusual occurrences. SPC charting is therefore useful for initially establishing the value and accuracy of current settings and for confirming whether recommended changes will consistently improve quality.

Quality Function Deployment (QFD) involves creating several matrices that help decision-makers better understand how their system differs from competitor systems, both in the eyes of their customers and in objective features.

Acceptance Quality Level (AQL)

This is the quality level of the supplier's process that the consumer would consider acceptable. Let p1 be the fraction defective in a lot of fairly good quality. Then

P[rejecting a lot of quality p1] = 0.05
P[accepting a lot of quality p1] = 0.95

p1 is known as the acceptable quality level.

Lot Tolerance Percent Defective (LTPD)

It is the maximum fraction defective (pt) in the lot that the consumer will tolerate; 100pt is called the lot tolerance percent defective. The probability of accepting lots with fraction defective pt or greater is very small.

Average Outgoing Quality Limit (AOQL)

Let p be the fraction defective in the lot before inspection, also called the incoming quality. Then the expected fraction defective remaining in the lot after the application of the sampling inspection plan is known as the average outgoing quality (AOQ). The maximum value of the AOQ, taken with respect to p, is called the average outgoing quality limit.

Average Amount of Total Inspection (ATI)

The expected value of the sample size required for coming to a decision in an acceptance-rectification sampling inspection plan calling for 100% inspection of the rejected lots is called the average amount of total inspection. It is a function of the incoming quality. The curve obtained on plotting ATI against p is called the ATI curve.

Average Sample Number (ASN)

The expected value of the sample size required for coming to a decision whether to accept or reject a lot in an acceptance-rejection sampling inspection plan is called the ASN. The curve obtained on plotting ASN against p is called the ASN curve.

Producer’s risk (Pp)


A producer can be an individual, a firm, or a department that produces goods and supplies them to another individual, firm, or department. As decisions are based on sampling inspection plans, the producer always runs the risk that certain lots of satisfactory quality will be rejected. Let the producer's average fraction defective be p, i.e., a quality standard which he has been able to maintain over a long period of time. Then the probability of rejecting a lot of quality p is called the producer's risk and is denoted by Pp:

Pp = P[rejecting a lot of quality p] = α ----------- (1)

Consumer’s risk (Pc)

By consumer we mean the person, firm, or department that receives the articles from the producer. Just like the producer, the consumer is faced with the risk of accepting a lot of unsatisfactory quality on the basis of sampling inspection. The probability of accepting a lot of tolerance quality pt is called the consumer's risk and is denoted by Pc:

Pc = P[accepting a lot of quality pt] = β --------- (2)

Operating characteristic curve (OC)

The curve obtained on plotting the acceptance probability L(p) against p, the lot fraction defective, is called the OC curve. The discriminatory power of a sampling plan is revealed by its OC curve: the greater the slope of the OC curve, the greater the discriminatory power. By increasing the sample size we can increase the discriminatory power of the sampling plan; of course, the acceptance number c should be kept proportional to n. The ideal OC curve has L(p) = 1 up to the tolerable fraction defective (for example p = 0.01) and drops to 0 beyond it.


However, this ideal OC curve can never be attained in reality.

There are a number of different ways to classify lot acceptance sampling plans. Two classifications are important: attributes and variables. In variables sampling, quality characteristics are measured on a numerical scale; in attributes sampling, they are expressed on a "go, no-go" basis. Here we discuss lot-by-lot acceptance sampling plans for attributes.

There are basically two types of sampling plans: 1) acceptance-rejection sampling plans, and 2) acceptance-rectification sampling plans, also called rectifying sampling plans.

Acceptance – Rejection Sampling Plans

In these plans a decision to accept or reject a lot is taken on the basis of the samples drawn from it.

a. Single Sampling Plan

A single-sampling plan is a product screening method in which the decision to accept or reject the lot is based on a single sample. For instance, a single sampling plan for attributes would consist of a random sample of size n from the lot consisting of N units and an acceptance number c. If the number of defectives (d) in the sample is less than or equal to c, accept the lot; otherwise reject it.

Procedure to carry out a single sampling plan (a sketch in code follows these steps):

Step 1: Select n items from the lot consisting of N units.
Step 2: Note c, the acceptance number.
Step 3: If the number of defectives (d) in the sample is less than or equal to c, accept the lot; otherwise reject it.
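A sketch of the lot-sentencing rule and its acceptance probability under the hypergeometric model used later in this chapter; N, n, c, and p are illustrative values:

from scipy.stats import hypergeom

def prob_accept(N, n, c, p):
    D = round(N * p)                   # defectives in the lot
    # scipy's argument order: (k, population size, successes, draws)
    return hypergeom.cdf(c, N, D, n)   # P(d <= c)

print(prob_accept(N=1000, n=50, c=2, p=0.02))   # ~0.92

Sweeping p through a range of values and plotting prob_accept against p traces out the plan's OC curve.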

Decision Flow Chart for a Single Sampling Plan

(Select a sample of n items for inspection; compare the number of defectives d with the acceptance number c; accept the lot if d ≤ c, otherwise reject it.)

Figure 3: Flow chart - single sampling

b. Double Sampling Plan

The double sampling plan is more complicated. From an initial sample, a decision is made in one of the following ways: (a) accept the lot, (b) reject the lot, or (c) take a second sample and then accept or reject the lot. Thus a second sample is sometimes required before one can reach a decision about acceptance or rejection of the lot.

The procedure is:

Step 1: Draw a sample of size n1 from the lot.
Step 2: Note c1 and c2, the acceptance numbers.
Step 3: If the number of defectives (d1) obtained in the sample is less than or equal to c1 (the acceptance number for the first sample), accept the lot.
Step 4: If d1 > c2 (the acceptance number for both samples combined), reject the lot.


Step 5: However, if c1 + 1 ≤ d1 ≤ c2, take another sample of size n2. Let d2 be the number of defectives observed in the second sample. If d1 + d2 ≤ c2, accept the lot; if d1 + d2 > c2, reject the lot.

Decision Flow Chart for Double Sampling Plan

(Select a sample of n1 items for inspection and compare d1 with c1 and c2: if d1 ≤ c1 accept; if d1 > c2 reject; if c1 + 1 ≤ d1 ≤ c2, draw a second sample of size n2, let d2 be its number of defectives, and accept if d1 + d2 ≤ c2, otherwise reject.)

Figure 3: Flow chart - double sampling

The average sample number for the single sampling plan is

ASN = n ------------ (3)

since n units have to be inspected even if the decision to accept or reject the lot is taken much earlier. For the double sampling plan,

ASN = n1 P1 + (n1 + n2)(1 − P1) = n1 + n2 (1 − P1) ---------- (4)

where P1 is the probability of reaching a decision on the first sample: only n1 units are inspected if the lot is accepted or rejected on the basis of the first sample, while (n1 + n2) units are inspected if a second sample has to be drawn.
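Equation (4) in code, with P1 computed from a binomial model for the first-sample count d1 (all plan constants are illustrative):

from scipy.stats import binom

def asn_double(n1, n2, c1, c2, p):
    # Decision on sample 1: accept (d1 <= c1) or reject (d1 > c2).
    P1 = binom.cdf(c1, n1, p) + binom.sf(c2, n1, p)
    return n1 + n2 * (1 - P1)

print(asn_double(n1=50, n2=100, c1=1, c2=3, p=0.02))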

c. Multiple-sampling plan

It is an extension of the double-sampling plan: where double sampling reaches a decision on the basis of at most two samples, a multiple-sampling plan may require more than two samples to reach a decision regarding the disposition of the lot. The samples required at each stage of multiple sampling are usually smaller than those in single or double sampling, so the procedure has some economic efficiency. However, multiple sampling is much more complex to apply.

d. Sequential Sampling Plan

It is an extension of multiple sampling. In this plan, we take a sequence of samples from the lot and allow the number of samples to be determined entirely by the results of the sampling process.


In application, a sequential plan can theoretically continue indefinitely, until the lot is inspected 100%. In practice, it is usually truncated after the number inspected equals three times the number that would have been inspected under a corresponding single-sampling plan. Units are selected from the lot one at a time; after each unit is inspected, a decision is made either to accept the lot, reject the lot, or select another unit.

2. Rectifying Sampling Inspection Plans

These plans were developed by Harold F. Dodge and H. G. Romig at the Bell Telephone Laboratories before World War II. Suppose that the incoming lots being submitted for inspection have fraction defective p0. Some of these lots will be accepted on the basis of samples drawn from them, while others will be rejected. Under rectifying sampling inspection plans, whenever we accept a lot we replace all the defective items encountered in the sample by good pieces, whereas rejected lots are sent for 100% inspection and all the defectives encountered are replaced by good pieces. Thus these programs serve to improve lot quality.

For all the sampling inspection plans discussed earlier (single, double, sequential), we simply need to replace the phrases 'accept' and 'reject' by 'accept and replace all defective items in the sample' and 'inspect all the items in the lot and replace all defectives by good pieces', respectively.

e. Rectifying Single Sampling Plan

Let p be the incoming quality, N the lot size, n the sample size, d the number of defectives in the sample, and c the acceptance number. Then the flow diagram for this plan may be depicted as follows:


Flow Diagram for Rectifying Single Sampling Inspection Plan

(Take a sample of size n from the lot of size N; if d ≤ c, accept the lot and replace all defectives found in the sample by non-defectives; if d > c, inspect the entire lot and replace all defectives by non-defectives.)

Figure 4: Flow chart - rectifying single sampling

Thus its AOQ will be given as:

AOQ = (N − n) p Pa(p) / N --------- (5)

where Pa(p) is the probability of accepting a lot of quality p. Using the hypergeometric distribution, and writing C(a, b) for the number of ways of choosing b items from a,

Pa(p) = Σ_{x=0}^{c} C(Np, x) C(N − Np, n − x) / C(N, n) --------- (6)


where x is the number of defectives observed in the sample. The maximum ordinate on the AOQ curve, representing the worst possible average quality that would result from the rectification inspection program, is called the AOQL.

The producer's risk is given by:

Pp = 1 − Σ_{x=0}^{c} C(Np, x) C(N − Np, n − x) / C(N, n) --------- (7)

Next, the consumer's risk is given by:

Pc = Σ_{x=0}^{c} C(Npt, x) C(N − Npt, n − x) / C(N, n) --------- (8)

The average amount of total inspection is given by:

ATI = n + (N − n) Pp --------- (9)

since n items have to be inspected in any case, and the remaining (N − n) are inspected only if the number of defectives in the sample exceeds c.
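Equations (5)-(9) in a short sketch (hypergeometric Pa as in equation (6); all plan constants are illustrative):

from scipy.stats import hypergeom

def rectifying_single(N, n, c, p):
    D = round(N * p)
    Pa = hypergeom.cdf(c, N, D, n)     # eq. (6): P(d <= c)
    AOQ = (N - n) * p * Pa / N         # eq. (5)
    ATI = n + (N - n) * (1 - Pa)       # eq. (9), with Pp = 1 - Pa
    return Pa, AOQ, ATI

print(rectifying_single(N=1000, n=50, c=2, p=0.02))

Sweeping p and taking the maximum of the resulting AOQ values gives the AOQL described above.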

OC curve of a single sampling plan:

The curve obtained on plotting Pa(p) against p is called the OC curve, where

Pa(p) = Σ_{x=0}^{c} C(Np, x) C(N − Np, n − x) / C(N, n) ------------------------- (10)

f. Rectifying Double sampling plan


Let N = lot size; n1 = size of the first sample; n2 = size of the second sample; d1 = number of defectives observed in the first sample; d2 = number of defectives observed in the second sample; c1 = acceptance number for the first sample; c2 = acceptance number for both samples combined.

Flow Chart for Rectifying Double Sampling Inspection Plan

(Take a sample of size n1 from the lot of size N and compare d1 with c1 and c2: if d1 ≤ c1, accept the lot and replace all defectives found in the sample by non-defectives; if d1 > c2, reject the lot, inspect it 100%, and replace all defectives by non-defectives; if c1 + 1 ≤ d1 ≤ c2, draw a second sample of size n2 and let d2 be its number of defectives; if d1 + d2 ≤ c2, accept the lot and replace the sample defectives, otherwise inspect the lot 100% and replace all defectives.)

Figure 5: Flow chart - rectifying double sampling

OC curve of the double sampling plan:

Pa(p) = Σ_{x=0}^{c1} C(Np, x) C(N − Np, n1 − x) / C(N, n1)
      + Σ_{x=c1+1}^{c2} Σ_{y=0}^{c2−x} [ C(Np, x) C(N − Np, n1 − x) / C(N, n1) ] [ C(Np − x, y) C(N − n1 − (Np − x), n2 − y) / C(N − n1, n2) ]
      = Pa1(p) + Pa2(p) ------------------------------ (11)

where x and y denote the number of defectives observed in the first and second samples respectively, and Pa1(p) and Pa2(p) are the probabilities of acceptance on the basis of the first and second samples. The producer's and consumer's risks follow from the acceptance probabilities in equations (12) and (13) respectively:

Pa(p̄) = Σ_{x=0}^{c1} C(Np̄, x) C(N − Np̄, n1 − x) / C(N, n1)
      + Σ_{x=c1+1}^{c2} Σ_{y=0}^{c2−x} [ C(Np̄, x) C(N − Np̄, n1 − x) / C(N, n1) ] [ C(Np̄ − x, y) C(N − n1 − (Np̄ − x), n2 − y) / C(N − n1, n2) ] ---------- (12)

so that Pp = 1 − Pa(p̄) at the producer's process average p̄, and

Pc = Pa(pt) = Σ_{x=0}^{c1} C(Npt, x) C(N − Npt, n1 − x) / C(N, n1)
      + Σ_{x=c1+1}^{c2} Σ_{y=0}^{c2−x} [ C(Npt, x) C(N − Npt, n1 − x) / C(N, n1) ] [ C(Npt − x, y) C(N − n1 − (Npt − x), n2 − y) / C(N − n1, n2) ] ----- (13)

Also, ATI = n1 Pa1(p) + (n1 + n2) Pa2(p) + N [1 − Pa1(p) − Pa2(p)] ------------------- (14)


since only n1 units are inspected if the lot is accepted on the basis of the first sample, (n1 + n2) units are inspected if the lot is accepted on the basis of the second sample, and all N units are inspected if the lot is not accepted. The AOQ is

AOQ = p [(N − n1)/N] Σ_{x=0}^{c1} C(Np, x) C(N − Np, n1 − x) / C(N, n1)
    + p [(N − n1 − n2)/N] Σ_{x=c1+1}^{c2} Σ_{y=0}^{c2−x} [ C(Np, x) C(N − Np, n1 − x) / C(N, n1) ] [ C(Np − x, y) C(N − n1 − (Np − x), n2 − y) / C(N − n1, n2) ] --------------- (15)

In all the plans discussed so far, one major issue is the determination of n and c, since the value of the lot size N will always be given. Another way of determining n and c is to satisfy the consumer's interests through a specified value of the AOQL; N in any case will be given to us.


5. Reliability and Life Testing (4 LH)

5.1. Definition of Reliability

5.2. Life history Curve

5.3. Types of reliability tests

5.1. Reliability- Introduction

Reliability is an important dimension in quality control programmes. It has assumed greater significance and importance during the past decade, and any manufacturing process now places greater focus on it. Reliability is only one of the tools of management; it must be supplemented by other tools such as quality control and design of experiments for the solution of problems of quality and cost.

Statistical meaning of “Reliability”

The term "reliability" is frequently used in research contexts, so what does it actually mean? In everyday

use, the word "reliable" means dependable, consistent or unfailing , but for research purposes we need a

more unambiguous definition. We have to be specific about what it means to have a dependable measure

or observation. One reason that a word like "dependable" is not a precise enough description, is that it

A.R.Muralidharan. SQC Lecture Notes 116

Page 117: SQC2014m

can be confused too easily with validity. Validity is the consideration of whether a particular research

method or technique actually measures what you want it to measure, whereas reliability refers to how

accurately a technique actually measures the phenomenon you are investigating. So, reliability means

repeatability or consistency. A measure is regarded as reliable if it would give us the same result on

repeated use, assuming what you are measuring doesn't change as you measure it, or between

measurements

Necessity

The reliability of a system, product, or tool is a very important aspect of quality for its consistent performance over its life span. It is an essential requirement of large and complex systems that must give uninterrupted service and hazard-free operation; in such cases a sudden failure can have a dangerous impact on continuity of service.

Reliability problems may cause shutdown or reduced output, which leads to losses in economic and productive activities. The unpredicted failure of a single component of a system may cause important and complex problems.

The problem of assuring and maintaining reliability has many responsible factors, including original equipment design, control of quality, acceptance inspection, life testing and design modifications. Therefore, deficiencies in the design and manufacture of the components which go to build such complex systems need to be detected by elaborate testing at the development stage, and later corrected by a planned programme of maintenance.


Quality and Reliability

Quality control maintains the consistency of the products and thus affects reliability, but it is an entirely separate function. Reliability is associated with quality over the long term, whereas quality control is associated with the relatively short period of time required for the manufacture of the products.

The task of reliability is to see that, in a product design, full account has been taken of every contingency which may cause a breakdown in use, and to forecast the components or assemblies that are likely to become defective in service.

Even after the equipment is designed, it may still be unreliable if some component has not been fully evaluated under all service conditions, even if the production standards have been maintained by quality control during manufacturing.

Definitions

In essence, the term "reliability" means "performance of the product": reliability is the probability of a product functioning in the intended manner over its intended life under the environmental conditions encountered.

From the above definition we may observe that it involves four factors:

1. Numerical value (a probability)

2. Intended function

3. Life

4. Environmental conditions


Many formal definitions of reliability have been proposed; all are similar in general terms. The most important are:

1. Reliability is the probability of a device performing its purpose adequately for the period of time intended under the operating conditions encountered.

2. Reliability is the capability of equipment not to break down in operation.

3. Reliability is the probability of no failure throughout a prescribed operating period.

An unreliable product may also lead to increased complexity of the product.

Elements of reliability

The basic elements required for an adequate specification or definition of reliability are

1. Numerical value of probability

2. Statement defining successful product performance

3. Statement defining the environment in which the equipment must operate.

4. Statement of the required operating time

5. The type of distribution is Poisson.
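Element 5 ties reliability to a failure model. As a worked illustration (the numbers are assumed, not from the notes): if failures occur as a Poisson process with a constant rate $\lambda$, the probability of no failure over an operating time $t$ is

\[
R(t)=P(\text{no failure in } [0,t])=e^{-\lambda t},
\]

so with an assumed $\lambda = 0.0005$ failures per hour and a required operating time of $t = 200$ hours, $R(200)=e^{-0.1}\approx 0.905$, i.e. about a 90.5% chance of completing the operating period without failure.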

Types of reliability tests

Reliability testing means a test conducted to verify that a system, product or process will work satisfactorily for a given time period. Three different tests are used in reliability work; they are described below, after a brief note on how the reliability of a measure is estimated.


Reliability refers to the consistency of a measure. A test is considered reliable if we get the same result repeatedly. For example, if a test is designed to measure a trait, then each time the test is administered to a subject, the results should be approximately the same. Unfortunately, it is impossible to calculate reliability exactly, but it can be estimated in a number of different ways.

To gauge test-retest reliability, the test is administered twice at two different points in time. This kind of reliability is used to assess the consistency of a test across time. This type of reliability assumes that there will be no change in the quality or construct being measured. Test-retest reliability is best used for things that are stable over time, such as intelligence. Generally, reliability will be higher when little time has passed between tests.
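A common numerical estimate of test-retest reliability is simply the correlation between the two administrations. Below is a minimal sketch, assuming NumPy is available; the scores are invented purely for illustration.

import numpy as np

# Scores of the same eight subjects on two administrations of one test
# (made-up illustration data, not from the notes).
first = np.array([12, 15, 11, 18, 14, 16, 13, 17])
second = np.array([13, 14, 12, 19, 13, 16, 14, 18])

# The test-retest reliability coefficient is taken here as the Pearson
# correlation: values near 1 mean the test ranks and spaces subjects
# consistently across the two occasions.
r = np.corrcoef(first, second)[0, 1]
print(f"estimated test-retest reliability: r = {r:.3f}")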

1. Functional test: this test examines the product to determine whether it will function at time zero.

2. Environmental test: the environmental parameters involved in any system are temperature, humidity, vibration and so on, and these environmental conditions are critical to many products. This test consists of determining the expected


levels of the environmental parameters under which the product has to operate, and setting the test levels accordingly.

3. Life test: life tests are carried out to assess the life of a product and its capabilities, and hence to form an idea of its quality level. For any product, quality must be present over a period of time, that is, over the life of the component. These tests measure the time or period during which the product will retain its desired quality characteristics, whether those characteristics relate to the end of the product's life, to performance during its lifetime, or to both.

A machinery product lasts for a considerable period, that is, it will work for some period of time, so here the life during use is the important factor. On the other hand, for some food products, shelf life is what matters. Thus the life test is chosen according to what the situation requires.

Life tests are carried out in different manners under different conditions, as follows:

a. Tests under actual working conditions

b. Tests under intensive conditions

c. Tests under accelerated conditions

Testing a component under actual working conditions for its full life is quite laborious, cumbersome, time consuming and impracticable; moreover, such full-duration tests do not lend any help in controlling a manufacturing process.


Let us take the example of a household mixer which works for, say, an hour every day. If it were tested under actual working conditions, it would be operated for only one hour per day to find out after how many days it fails. This is not practicable.

Therefore, it is worked continuously at rated specifications, and thus the life can be estimated in a much shorter duration of time; however, it may be switched off for some period during such intensive testing. Tests under accelerated conditions are conducted under severe operating conditions to quicken the product's failure or breakdown. Some examples: an electric circuit may be exposed to high voltages or high currents, a lathe may be subjected to severe vibrations and chatter, and a refrigerator's performance may be checked under high ambient-temperature conditions.
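To see how much calendar time intensive testing saves (the figures are assumed for illustration only): if the mixer of the earlier example runs continuously and fails after 730 hours, then, at one hour of actual use per day,

\[
\text{estimated service life}=\frac{730\ \text{h (continuous test)}}{1\ \text{h/day of actual use}}=730\ \text{days}\approx 2\ \text{years},
\]

so roughly two years of field life are compressed into about a month ($730\ \text{h}\approx 30$ days) of continuous testing.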

Reliability vs. Validity

It is important to note that just because a test is reliable, that does not mean it is valid. Validity refers to whether or not a test really measures what it claims to measure. Think of reliability as a measure of precision and validity as a measure of accuracy. In some cases a test might be reliable but not valid. For example, imagine that job applicants are taking a test to determine whether they possess a particular personality trait. While the test might produce consistent results, it might not actually be measuring the trait that it purports to measure.
