Utility of Taguchi Based Grey Relational Analysis to Optimize Any Process or System



Abstract— This paper discusses how Taguchi-based Grey Relational Analysis is implemented to optimize a set of operational parameters (the input variables of a process) so as to achieve the best result for a performance parameter (the response variable) of that process. Taguchi-based Grey Relational Analysis is a Design of Experiment (DOE) approach for finding the combination of experimental or input variables that yields the desired response. DOE is a systematic approach for investigating a system or process: a series of structured tests is designed in which planned changes are made to the input variables of the process or system, and the effect of these changes on a pre-defined output is then assessed. DOE is important as a formal way of maximizing the information gained while minimizing the resources required, and it offers more than 'one change at a time' experimental methods because it allows a judgment on the significance to the output of input variables acting in combination with one another. This article first discusses different aspects of Taguchi design, namely 'Orthogonal Arrays', 'Signal-to-Noise Ratio', 'MSD Analysis' and 'Analysis of Variance (ANOVA)', and then different aspects of the Grey Relational Numerical Method, namely 'Processing of Primitive Data', 'Grey Relational Coefficient', 'Grey Relational Grade', 'Grey Relational Ordering' and 'Grey Relational Matrix'.

Keywords— Design of Experiment (DOE), Taguchi Method,

Grey Analysis.

I. INTRODUCTION

The successful and efficient running of any system or process largely depends on how it has been designed. Before a system or process is developed it needs to go through many experiments, and a fruitful experiment helps the system or process to be designed successfully. Design of Experiment (DOE) therefore has a very important role in the development of any system or process.

DOE is a systematic approach for investigation of a system

or process. A series of structured tests are designed in which

planned changes are made to the input variables of a process

or system. The effects of these changes on a pre-defined

output are then assessed. DOE is important as a formal way of

maximizing information gained while minimizing resources

required. It has more to offer than 'one change at a time' experimental methods, because it allows a judgment on the significance to the output of input variables acting alone, as well as input variables acting in combination with one another.

'One change at a time' testing always carries the risk that the experimenter may find one input variable to have a significant effect on the response (output) while failing to discover that changing another variable may alter the effect of the first (i.e. some kind of dependency or interaction), because the temptation is to stop the test once this first significant effect has been found. To reveal an interaction or dependency, 'one change at a time' testing relies on the experimenter happening to carry out the right tests, whereas DOE plans for all possible dependencies in the first place and then prescribes exactly what data are needed to assess them, i.e. whether input variables change the response on their own, when combined, or not at all. In terms of resources, the exact length and size of the experiment are set by the design (i.e. before testing begins). DOE can be used to find answers in situations such as "what is the main contributing factor to a problem?", "how well does the system/process perform in the presence of noise?", "what is the best configuration of factor values to minimize variation in a response?" etc. In general, these questions are given labels as particular types of studies; in the examples given above, these are problem solving, parameter design and robustness studies. In each case DOE is used to find the answer; the only thing that makes them different is the factors used in the experiment.

The order of tasks in using this tool starts with identifying the input variables and the response (output) that is to be measured. For each input variable, a number of levels are defined that represent the range over which the effect of that variable is to be studied. An experiment plan is produced which tells the experimenter where to set each test parameter for each run of the test. The response is then measured for each run. The method of analysis is to look for differences between response (output) readings for different groups of the input changes. These differences are then attributed to the input variables acting alone (called a single effect) or in combination with another input variable (called an interaction). DOE is team oriented, and a variety of backgrounds (e.g. design, manufacturing, statistics etc.) should be involved when identifying factors and levels and developing the matrix, as this is the most skilled part. Moreover, as this tool is used to answer specific questions, the team should have a clear understanding of the difference between control and noise factors.

It is very important to get the most information from each experiment performed. Well-designed experiments can produce significantly more information and often require fewer runs than haphazard or unplanned experiments. In addition, a well-designed experiment will ensure the evaluation of the effects that have been identified as important. For example, if there is an interaction between two input variables, both variables should be included in the design rather than doing a 'one factor at a time' experiment. An interaction occurs when the effect of one input variable is influenced by the level of another input variable. Designed experiments are carried out in four phases: planning, screening (also called process characterization), optimization, and verification.

Planning:


Careful planning helps to avoid problems that can occur

during the execution of the experimental plan. For example,

personnel, equipment availability, funding, and the mechanical

aspects of the system may affect the ability to complete the

experiment. The preparation required before beginning

experimentation depends on the nature of the problem. The

following are some of the steps that may be necessary.

Problem Definition: Developing a good problem statement

helps make sure that the correct variables are studied. At this

step, the questions that need to be answered are identified.

Objective Definition: A well-defined objective will ensure that the experiment answers the right questions and yields practical, usable information. At this step the goals of the experiment are defined.

Development of an experimental plan that will provide

meaningful information: At this step it is necessary to make

sure that the relevant background information has been

reviewed, such as theoretical principles, and knowledge

gained through observation or previous experimentation. For

example, you may need to identify which factors or process

conditions affect process performance and contribute to

process variability. Or, if the process is already established

and the influential factors have been identified, it may be

necessary to determine the optimal process conditions.

Making sure the process and measurement systems are in control: Ideally, both the process and the measurements should be in statistical control, as measured by a functioning statistical process control (SPC) system. Even if the process is not completely in control, it must be possible to reproduce process settings. It is also necessary to determine the variability in the measurement system.

Screening:

In many process development and manufacturing applications, potentially influential variables are numerous. Screening reduces the number of variables by identifying the key variables that affect product quality. This reduction allows process improvement efforts to be focused on the really important variables, or the "vital few." Screening may also suggest the "best" or optimal settings for these factors, and indicate whether or not curvature exists in the responses. Optimization methods can then be used to determine the best settings and define the nature of the curvature. Two-level full and fractional factorial designs are used extensively in industry. Plackett-Burman designs have low resolution, but their usefulness in some screening experimentation and robustness testing is widely recognized. General full factorial designs (designs with more than two levels) may also be useful for small screening experiments.

Optimization:

The next step, after the "vital few" have been identified by screening, is to determine the "best" or optimal values for these experimental factors. Optimal factor values depend on the process objective: for example, maximizing the welding speed while minimizing the laser power.

Verification:

Verification involves performing a follow-up experiment at the predicted "best" processing conditions to confirm the optimization results.

II. TAGUCHI DESIGN

Dr. Genichi Taguchi is regarded as the foremost proponent of robust parameter design, an engineering method for product or process design that focuses on minimizing variation and/or sensitivity to noise. When used properly, Taguchi designs provide a powerful and efficient method for designing products that operate consistently and optimally over a variety of conditions. In robust parameter design, the primary goal is to find factor settings that minimize response variation while adjusting (or keeping) the process on target. Once the factors affecting variation have been determined, settings can be found for the controllable factors that either reduce the variation, make the product insensitive to changes in uncontrollable (noise) factors, or both. A process designed with this goal will deliver more consistent performance regardless of the environment in which it is used. Engineering knowledge should guide the selection of factors and responses.

Fundamental Terms Used in Taguchi Design

Orthogonal arrays: The Taguchi method utilizes orthogonal

arrays from design of experiments theory to study a large

number of variables with a small number of experiments.

Using orthogonal arrays significantly reduces the number of

experimental configurations to be studied. Furthermore, the

conclusions drawn from small scale experiments are valid

over the entire experimental region spanned by the control

factors and their settings.

Orthogonal arrays are not unique to Taguchi. They were

discovered considerably earlier. However, Taguchi has

simplified their use by providing tabulated sets of standard

orthogonal arrays and corresponding linear graphs to fit

specific projects.

Examples of standard orthogonal arrays:
- L-4, L-8, L-12, L-16, L-32 and L-64, all at 2 levels
- L-9, L-18 and L-27, at 3 and 2 levels
- L-16 and L-32 modified, at 4 levels; L-25, at 5 levels

Standard notation for orthogonal arrays: L-16(3^5), where 16 = number of experiments, 3 = number of levels and 5 = number of factors.

To select an appropriate orthogonal array for the experiments, the total degrees of freedom need to be computed. The degrees of freedom are defined as the number of comparisons between process parameters that need to be made to determine which level is better and specifically how much better it is. For example, a two-level process parameter counts for one degree of freedom. The degrees of freedom associated with the interaction between two process parameters are given by the product of the degrees of freedom of the two process parameters. In the present study, the interaction between the laser welding parameters is considered. Once the degrees of freedom are known, the next step is selecting an appropriate orthogonal array to fit the specific task. The degrees of freedom for the orthogonal array should be greater than or at least equal to those for the process parameters. The tabulations of the typical L16 and L18 orthogonal arrays used in this research, with coded values, are shown in Tables 1 and 2.

Table 1: Typical L16 orthogonal array with coded values

Std  Run  Factor 1  Factor 2  Factor 3  Factor 4  Factor 5  Response
 1    1      1         1         1         1         1
 6    2      2         2         1         4         3
 8    3      2         4         3         2         1
 2    4      1         2         2         2         2
 5    5      2         1         2         3         4
 4    6      1         4         4         4         4
10    7      3         2         4         3         1
15    8      4         3         2         4         1
16    9      4         4         1         3         2
14   10      4         2         3         1         4
13   11      4         1         4         2         3
 7   12      2         3         4         1         2
12   13      3         4         2         1         3
11   14      3         3         1         2         4
 3   15      1         3         3         3         3
 9   16      3         1         3         4         2

Table 2: Typical L18 orthogonal array with coded values

Expt.         Control factors
No.      A  B  C  D  E  F  G  H
 1       1  1  1  1  1  1  1  1
 2       1  1  2  2  2  2  2  2
 3       1  1  3  3  3  3  3  3
 4       1  2  1  1  2  2  3  3
 5       1  2  2  2  3  3  1  1
 6       1  2  3  3  1  1  2  2
 7       1  3  1  2  1  3  2  3
 8       1  3  2  3  2  1  3  1
 9       1  3  3  1  3  2  1  2
10       2  1  1  3  3  2  2  1
11       2  1  2  1  1  3  3  2
12       2  1  3  2  2  1  1  3
13       2  2  1  2  3  1  3  2
14       2  2  2  3  1  2  1  3
15       2  2  3  1  2  3  2  1
16       2  3  1  3  2  3  1  2
17       2  3  2  1  3  1  2  3
18       2  3  3  2  1  2  3  1
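To make the balance property behind Tables 1 and 2 concrete, the following minimal Python sketch checks that every pair of columns in a coded array contains each level combination equally often. The L9(3^4) array used here is a standard tabulated design; the data layout and the helper name is_orthogonal are our own.

import itertools
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 three-level factors, coded 1..3.
L9 = np.array([
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
])

def is_orthogonal(array: np.ndarray) -> bool:
    """True if, for every pair of columns, every combination of levels
    appears, and appears the same number of times."""
    n_runs, n_cols = array.shape
    for a, b in itertools.combinations(range(n_cols), 2):
        pairs = list(zip(array[:, a], array[:, b]))
        counts = {pair: pairs.count(pair) for pair in set(pairs)}
        la = len(set(array[:, a]))
        lb = len(set(array[:, b]))
        if len(counts) != la * lb or len(set(counts.values())) != 1:
            return False
    return True

print(is_orthogonal(L9))  # True

Each three-level factor contributes 2 degrees of freedom, so the four factors need 8, which exactly fits the 9 - 1 = 8 degrees of freedom the L9 array offers.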

S/N ratios and MSD analysis: Taguchi recommends the use of the signal-to-noise (S/N) ratio, rather than the simple process average, for optimizing process parameters. The rationale is that while there is a need to maximize the mean (signal) in the sense of its proximity to the nominal value, it is also desirable to minimize the process variations (noise). The use of the S/N ratio accomplishes both objectives simultaneously.

In order to evaluate the influence of each selected factor on the responses, the S/N ratio for each control factor should be calculated. The signal indicates the effect on the average response, while the noise indicates the sensitiveness of the experiment output to the noise factors. The appropriate S/N ratio must be chosen using previous knowledge and expertise; when the signal factor is absent (a static design), the S/N ratio can be chosen according to the goal of the design. S/N ratio selection is based on the Mean Squared Deviation (MSD) for the analysis of repeated results. The MSD expression combines variation around the given target and is consistent with Taguchi's quality objective. The relationships among the observed results, MSD and S/N ratios are as follows (Eqs. 1 to 4):

MSD = [ (y1 - m)^2 + (y2 - m)^2 + ... + (yn - m)^2 ] / n    --- for nominal is better ... (1)
MSD = [ y1^2 + y2^2 + ... + yn^2 ] / n                      --- for smaller is better ... (2)
MSD = [ 1/y1^2 + 1/y2^2 + ... + 1/yn^2 ] / n                --- for bigger is better ... (3)
S/N = -10 log10(MSD)                                        --- for all characteristics ... (4)

where y1, y2, ..., yn are the observed results of a trial repeated n times and m is the target (nominal) value.
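As a worked illustration of Eqs. 1 to 4, the sketch below computes the MSD and the corresponding S/N ratio for the repeated readings of a single trial. The function names and the three roughness readings are invented for illustration; the target m applies only to the nominal-is-better case.

import math

def msd(values, characteristic, target=None):
    """Mean squared deviation per Eqs. 1-3."""
    n = len(values)
    if characteristic == "nominal":   # nominal is better (Eq. 1)
        return sum((y - target) ** 2 for y in values) / n
    if characteristic == "smaller":   # smaller is better (Eq. 2)
        return sum(y ** 2 for y in values) / n
    if characteristic == "bigger":    # bigger is better (Eq. 3)
        return sum(1.0 / y ** 2 for y in values) / n
    raise ValueError(characteristic)

def sn_ratio(values, characteristic, target=None):
    """S/N = -10 log10(MSD), Eq. 4; larger S/N is always better."""
    return -10.0 * math.log10(msd(values, characteristic, target))

# Three repeated surface-roughness readings for one trial (smaller is better).
print(round(sn_ratio([2.1, 2.3, 1.9], "smaller"), 2))

Because the transformation always maps "better" to a larger S/N value, the same maximization rule can be applied regardless of which quality characteristic is chosen.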

Analysis of variance (ANOVA): ANOVA is a general method for studying sampled-data relationships. The method enables the difference between two or more sample means to be analyzed, achieved by subdividing the total sum of squares. One-way ANOVA is the simplest case; its purpose is to test for significant differences between class means, which is done by analyzing the variances. ANOVA is similar to regression in that it is used to investigate and model the relationship between a response variable and one or more independent variables. In effect, analysis of variance extends the two-sample t-test for testing the equality of two population means to a more general null hypothesis of comparing the equality of more than two means, versus the alternative that they are not all equal. Table 3 is a sample of the ANOVA table used for analysis of the models developed in this work. Sums of squares and mean squares are calculated using Eqs. 5 to 8.

Table 3: Sample ANOVA table for a model

Source       SS     df         MS            F-value        Prob. > F
Model        SSM    p          each SS       each MS        from table or
P            SS1               divided       divided        automatically
S            SS2               by its df     by MSE         from the software
F            SS3
PS           SS12
PF           SS13
SF           SS23
P^2          SS11
S^2          SS22
F^2          SS33
Residual     SSE    n - p - 1
Cor. Total   SST    n - 1      -             -              -

where p = number of coefficients in the model, df = degrees of freedom, SS = sum of squares, MS = mean squares, n = total number of runs, and Cor. Total = sum of squares total, corrected for the mean.

SSM = sum_i ( y_hat_i - y_bar )^2 ... (5)
SSE = sum_i ( y_i - y_hat_i )^2 ... (6)
SST = sum_i ( y_i - y_bar )^2 ... (7)
MS = SS / df ... (8)

where y_i is the observed response of run i, y_hat_i is its model prediction and y_bar is the overall mean.
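To connect Table 3 with Eqs. 5 to 8, here is a minimal sketch that computes the model, residual and total sums of squares and an F-value for a one-predictor linear fit. The data are invented, and the degrees-of-freedom convention follows Table 3 (model df = p, residual df = n - p - 1).

import numpy as np

# Invented response data and a one-predictor linear model fitted to them.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

n = len(y)
p = 1                                        # model terms excluding the intercept
slope, intercept = np.polyfit(x, y, 1)       # least-squares fit
y_hat = intercept + slope * x

ss_model = np.sum((y_hat - y.mean()) ** 2)   # Eq. 5
ss_error = np.sum((y - y_hat) ** 2)          # Eq. 6
ss_total = np.sum((y - y.mean()) ** 2)       # Eq. 7

ms_model = ss_model / p                      # Eq. 8: MS = SS / df
ms_error = ss_error / (n - p - 1)
f_value = ms_model / ms_error                # "each MS divided by MSE"
print(f"SSM={ss_model:.2f} SSE={ss_error:.2f} SST={ss_total:.2f} F={f_value:.1f}")

A large F-value, with a small Prob. > F, indicates that the model explains far more variation than the residual noise, which is exactly the significance test Table 3 summarizes.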

III. OPTIMIZATION

The optimization will allow the industrial user to achieve the optimum welding composition and process parameters needed to obtain the desired weld pool shape and mechanical properties. All independent variables are measurable and can be repeated with negligible error. The objective function can be represented by:

Objective = f(x1, x2, ..., xn) ... (9)

where n is the number of independent variables.

Determination of optimal condition(s):
With time, the complexity of any process dynamics may increase, and as a consequence, problems related to the determination of optimal or near-optimal working condition(s) are faced with discrete and continuous parameter spaces and with multimodal, differentiable as well as non-differentiable objective functions or response(s). The search for optimal or acceptable near-optimal solution(s) by a suitable optimization technique, based on an input-output and in-process parameter relationship or an objective function formulated from model(s) with or without constraint(s), is a critical and difficult task for researchers and practitioners. A large number of techniques have been developed to solve these types of parameter optimization problems, and they may be classified as conventional and non-conventional optimization techniques. Fig. 1 provides a general classification of parameter-relationship modeling and optimization techniques in any process or system design. Whereas conventional techniques attempt to provide a local optimal solution, non-conventional techniques, being based on extrinsic model or objective function development, are only an approximation and attempt to provide near-optimal working condition(s) of a process or system. Conventional techniques may be broadly classified into two categories: the first comprises experimental techniques, including statistical design of experiment approaches such as the Taguchi method and response surface design methodology (RSM); the second comprises iterative mathematical search techniques, such as linear programming (LP), non-linear programming (NLP) and dynamic programming (DP) algorithms. Non-conventional meta-heuristic search-based techniques, which are sufficiently general and extensively used by researchers in recent times, are based on the genetic algorithm (GA), tabu search (TS) and simulated annealing (SA).
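Of the non-conventional techniques just mentioned, simulated annealing is the simplest to sketch. The two-variable objective below is an invented stand-in for a fitted response model, and the cooling schedule and step size are arbitrary illustrative choices, not prescriptions.

import math
import random

random.seed(1)

def objective(x1, x2):
    """Hypothetical response model to minimize (stand-in for a fitted model)."""
    return (x1 - 3.0) ** 2 + (x2 - 1.5) ** 2 + 0.5 * math.sin(5 * x1)

def simulated_annealing(bounds, n_iter=5000, temp0=10.0):
    x = [random.uniform(lo, hi) for lo, hi in bounds]
    f = objective(*x)
    best, best_f = x[:], f
    for i in range(n_iter):
        temp = temp0 * (1 - i / n_iter) + 1e-9          # linear cooling schedule
        cand = [min(max(v + random.gauss(0, 0.3), lo), hi)
                for v, (lo, hi) in zip(x, bounds)]      # perturb within bounds
        cf = objective(*cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if cf < f or random.random() < math.exp((f - cf) / temp):
            x, f = cand, cf
            if f < best_f:
                best, best_f = x[:], f
    return best, best_f

print(simulated_annealing([(0.0, 6.0), (0.0, 3.0)]))

The occasional acceptance of worse moves at high temperature is what lets the search escape local optima, which is why such methods yield near-optimal rather than provably optimal conditions.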

Fig 1: Classification of modeling and optimization techniques

IV. EXPERIMENTAL PROCEDURE

The Taguchi method is used to improve the performance of a process or system so that the best quality of the end product can be achieved. Improved quality must also be attained continuously, which is possible only when a higher level of performance is obtained. The highest possible performance is obtained by determining the optimum combination of design factors. Consistency of performance is obtained by making the process insensitive to the influence of the uncontrollable factors. In Taguchi's approach, the optimum design is determined by using design of experiment principles, and consistency of performance is achieved by carrying out the trial conditions under the influence of the noise factors.

The following steps are performed in order to develop and optimize a mathematical model in any process design.

Planning Experiments (Brainstorming)
This is the first step in any application. The session should include individuals with firsthand knowledge of the project; the literature review covers this step.
- Determine the vital process factors, as identified from the literature review.
- Identify all influencing factors and those to be included in the study.
- Determine the factor levels. Before determining the factor levels, the operating range is established through a pilot experiment carried out by changing one factor at a time. Once the operating range is determined, DOE software such as Design-Expert 7 may be used to divide the operating range into levels according to the selected design. Three or five levels may be chosen, depending on the selected orthogonal array.

Designing Experiments
Using the factors and levels determined in the previous step, the experiments can now be designed and the method of carrying them out established. To design the experiment, implement the following:
- Select the appropriate orthogonal array. The degrees of freedom owing to the different levels of the process parameters are evaluated; the degrees of freedom for the orthogonal array should be greater than or at least equal to those for the process parameters. A sketch of this check follows.
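As a quick illustration, the sketch below totals the factor degrees of freedom and compares them with what a candidate array offers; the factor names and level counts are invented.

# Degrees-of-freedom check for orthogonal array selection (invented factors).
factors = {"power": 3, "speed": 3, "focus": 3, "gas_flow": 3}   # name -> levels

dof_needed = sum(levels - 1 for levels in factors.values())     # main effects only
runs = 9                                                        # candidate: L9 array
dof_available = runs - 1
print(dof_needed, dof_available, dof_needed <= dof_available)   # 8 8 True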

Running Experiments
All the experiments should be carried out in the random run order of the matrices developed by the software, to avoid any systematic error during the experiments. After each experiment, the response parameters are tested and measured in sequential order, following standard procedures where available for each response. An average of at least three (in most cases)



recorded measurements is calculated and considered for further analysis.

Analyzing Results

Before analysis, the raw experimental data might have to be

combined into an overall evaluation criterion. This is

particularly true when there are multiple criteria of evaluation. Analysis is performed to determine the following:
- The optimum design.
- The influence of individual factors.
- The performance at the optimum condition.
- The relative influence of individual factors.

The steps in this analysis stage follow in this sequence:

Developing the mathematical model

Design-Expert software develops and exhibits the possible models that can fit the input data and suggests the model that best fits the experimental data.

Estimating the coefficients of the model's independent factors
Regression analysis is carried out by the software to estimate the coefficients of all factors in each experiment.

The Signal-to-noise (S/N) ratio analysis

A signal-to-noise ratio is presented in the ANOVA table as the Adequate Precision. Equations 10 and 11 are applied to the model to compare the range of the predicted values at the design points to the average prediction error. Ratios greater than 4 indicate adequate model discrimination.

Adequate Precision = [ max(Y_hat) - min(Y_hat) ] / sqrt( V_bar(Y_hat) ) ... (10)
V_bar(Y_hat) = (1/n) * sum V(Y_hat) = p * sigma^2 / n ... (11)

where p = number of model parameters, sigma^2 = residual MS from the ANOVA table, and n = number of experiments.
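Under the reconstruction of Eqs. 10 and 11 above, Adequate Precision can be computed directly from the predictions and the residual mean square, as in this sketch; the predicted values, p and sigma^2 are placeholders.

import numpy as np

def adequate_precision(y_pred, p, sigma2, n):
    """(max - min of predictions) / sqrt(p * sigma^2 / n), per Eqs. 10-11;
    sigma2 is the residual mean square from the ANOVA table."""
    v_bar = p * sigma2 / n
    return (np.max(y_pred) - np.min(y_pred)) / np.sqrt(v_bar)

# Placeholder predictions at the design points and a placeholder residual MS.
y_pred = np.array([10.2, 12.8, 9.7, 14.1, 11.5, 13.3])
ratio = adequate_precision(y_pred, p=3, sigma2=0.4, n=len(y_pred))
print(round(ratio, 2), ratio > 4)   # > 4 suggests adequate model discrimination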

ANOVA Outputs

Analysis of variance (ANOVA) was applied to test the adequacy of the developed models. Each term in the developed models was examined with the following statistical significance tools, using Eqs. 12 to 15.

F-value: a test for comparing the model variance with the residual (error) variance. When the variances are close to each other, the ratio will be close to one and it is less likely that any of the factors have a significant effect on the response. The model F-value and its associated probability value (Prob. > F) confirm model significance. The F-value is calculated as the term mean square divided by the residual mean square.

Prob. > F: the probability of seeing the observed F-value if the null hypothesis is true (there is no factor effect). If the Prob. > F of the model and/or of each term in the model does not exceed the chosen level of significance, the model may be considered adequate within the confidence interval (1 - alpha). The precision of a parameter estimate is based on the number of independent samples of information, which is determined by the degrees of freedom.
Degrees of freedom (df): the degrees of freedom equal the number of experiments minus one, minus the number of additional parameters estimated for that calculation.

The same tables also show the other adequacy measures: R^2, adjusted R^2 and predicted R^2 for each response. In this study, all adequacy measures were close to 1, which indicates adequate models. The Adequate Precision ratio compares the range of the predicted values at the design points to the average prediction error; a ratio above 4 indicates adequate model discrimination, and in this study the values of Adequate Precision are significantly greater than 4.

R^2 = 1 - SSr / ( SSr + SSM ) ... (12)
Adj. R^2 = 1 - [ (n - 1) / (n - p) ] * (1 - R^2) ... (13)
Predicted R^2 = 1 - PRESS / ( SSr + SSM ) ... (14)
PRESS = sum_i ( y_i - y_hat_(i,-i) )^2 ... (15)

where SSr is the residual sum of squares, SSM is the model sum of squares, and y_hat_(i,-i) is the prediction of run i from a model fitted without run i.
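The following sketch evaluates Eqs. 12 to 15 for a simple polynomial fit, computing PRESS by the usual leave-one-out refitting; the data are invented, and np.polyfit stands in for whatever regression the software performs.

import numpy as np

def adequacy_measures(x, y, degree=1):
    """R^2, adjusted R^2 and predicted R^2 (via PRESS), per Eqs. 12-15."""
    n, p = len(y), degree + 1                  # p = coefficients incl. intercept
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)       # = SSr + SSM about the mean
    r2 = 1 - ss_res / ss_tot                               # Eq. 12
    adj_r2 = 1 - (n - 1) / (n - p) * (1 - r2)              # Eq. 13
    # PRESS: drop one run, refit, and predict the held-out run (Eq. 15).
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        c = np.polyfit(x[mask], y[mask], degree)
        press += (y[i] - np.polyval(c, x[i])) ** 2
    pred_r2 = 1 - press / ss_tot                           # Eq. 14
    return r2, adj_r2, pred_r2

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([2.0, 4.1, 5.8, 8.2, 9.9, 12.1, 13.8])
print(adequacy_measures(x, y))

Predicted R^2 is the strictest of the three measures because each run must be predicted by a model that never saw it, which is why a large gap between R^2 and predicted R^2 signals over-fitting.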

Model reduction

Model reduction consists of eliminating those terms that are not desired or that are statistically insignificant. In this case it was done automatically by the software used. For each response regression, the starting model can be edited by specifying fewer candidate terms than the full model would contain. In the three automatic regression variations, the terms that are forced into the model regardless of their entry/exit alpha value can be controlled. There are three basic types of automatic model regression. Step-wise: a term is added, eliminated or exchanged at each step; step-wise regression is a combination of forward and backward regression. Backward elimination: a term is eliminated at each step; the backward method may be the most robust choice, since all model terms are given a chance of inclusion in the model. Forward selection: a term is added at each step; conversely to backward elimination, the forward procedure starts with a minimal core model, so some terms never get included.

Development of final model form

The program automatically defaults to the "Suggested" polynomial model, which best fits the criteria discussed in the Fit Summary section. The responses can be predicted at any midpoints using the adequate model. Essential plots, such as contour, 3D surface and perturbation plots of the desirability function at each optimum, can be used to explore the function in the factor space, as can plots of any individual response.

Running Confirmation Experiments

The final step is to predict and verify the improvement of the response using the optimal levels of the welding process parameters. In addition, to verify the adequacy of the developed models, at least three confirmation experiments were carried out using new test conditions at the optimal parameter settings obtained using the Design-Expert software.


V. GREY SYSTEM THEORY

Multi-criteria decision-making problems must often be solved not with exact criteria values, but with fuzzy values or with values taken from intervals. Deng (1982) developed the Grey system theory. According to him, Grey relational analysis has some advantages: it involves simple calculations and requires a smaller number of samples; a typical distribution of samples is not needed; the quantified outcomes from the Grey relational grade do not contradict the conclusions of the qualitative analysis; and the Grey relational grade model is a transfer functional model that is effective in dealing with discrete data (Deng 1988).

The Meaning of 'Grey' in Grey System: The cognition of our natural and/or artificial universe has been a tedious and progressive process. The formulations of natural and artificial laws are certainly not overnight happenings. Nature to us is not white (full of precise information), but on the other hand it is not black (completely lacking in information) either; it is mostly grey (a mixture of black and white). Our thinking, no matter how analytical, is grey, and our action and reaction, no matter how practical, are also grey. In fact, since the beginning of our existence, we have been confined in a high-dimensional grey information relational space.

Natural phenomena have given us numerous difficult problems. We are confronted with numerous grey systems: the social system, the environmental system, the economic system, the human anatomical system, and our own human race relational system, to name a few. To ensure the continuation of our very existence, it is imperative that we investigate and understand these systems. However, given our present knowledge and scientific information, we have to simplify much of the complex embodiment of these systems, and during this process we have to delete information left and right. After such an endeavor, we have a system that possesses only bone but no flesh and blood. Such a model can at best be homomorphic to, or vaguely resemble, the original system. As a result, we can only command the partial information that can be extracted from the system, and the color we can obtain from a system is grey. Therefore, the grey of a system is absolute, while the black and white of a system are relative. Confronting such truths, in 1982 Professor Deng Julong of Huazhong University of Science and Technology, P.R.C., wrote the first landmark article, 'Control Problems of Grey Systems', which started the theory of grey systems. This inaugural article enunciated the concepts and numerical methods for treating systems wherein only partial information is known. It was recognized as a breakthrough contribution to the in-depth study of system theory.

What is the characteristic of a grey system? The incompleteness of information is the basic characteristic, and it serves as the fundamental starting point of the investigation of grey systems. The emphasis is on discovering the true properties of these systems under poorly informed situations. The main melody of grey system theory is to supply information so that we can work within the greyness. Incomplete information follows from the limited availability of data; therefore, incomplete data analysis is really the theory of scarce (or few) data analysis. The central problem of grey system theory is to seek out the intrinsic structure of the system given such limitations of data. In other words, we need to devise a methodology to achieve an early understanding of the system under this predicament. Out of whatever complexity a given system presents, information is still its basic element. What is information? Most people identify information as

numerical data. In grey system theory, we consider such a

concept to be narrow. In reality, data is only part of the total

information. Information should consist of two types. The first

is the qualitative elements; that is, the type that cannot be

measured, and it exemplifies the information's qualitative

appearance. The second type is the quantitative data elements,

exemplifying its measurable property. In real life, we may be

faced with a system, knowing only part of its informational

qualitative elements and no more. At the same time, we may

know only certain variation intervals of its informational

quantitative data elements, with their precise numerical values unknown. No doubt such a system has provided us only with information that is grey. Furthermore, such grey information in the system may be mutually constraining and highly interdependent. Such intrinsic

relational behavior may differentiate one grey system from

another. Therefore, relations between grey information

constitute another central study of grey system theory.

Facing the challenge of understanding nature and ourselves, we have built many classical systems and have devoted a long period of time to investigating them. Unfortunately, in keeping up with our high scientific and technological development, the system's complexity and the technological quest for rigor and precision have become paradoxically uncompromising. At this serious juncture, in 1965, L. A. Zadeh enunciated the famous Fuzzy Logic (Fuzzy Set) Theory, and thus created the fuzzy system. As we know, the theory of classical systems bases itself on classical Cantor set theory, in which an element x has only two exemplifications: x in A or x not in A (Boolean logic). Fuzzy systems are based on the fuzzy set-theoretic membership function mu_A(x), which takes values over the whole interval from 0 to 1 instead of only 0 or 1. Therefore, the difference between classical and fuzzy systems lies merely in the unclear boundaries and the imprecise intrinsic attributes of the systems. However, the incompleteness of information (how much we know of the system) still eludes the attention of classical and fuzzy set theorists; this objective characteristic of the system seems to be forgotten in the development of classical and fuzzy system theory. Grey systems, in addition, focus keenly on whatever partial or limited information the system can provide, and try to paint its total picture from this. In fact, the theory of grey systems bases itself on the grey hazy set. The grey hazy set exemplifies itself in several stages: the embryonic state, the hazy state, the whitening state and the verifiable state. The grey hazy set possesses several properties: co-existence (co-habitationality), verifiability, time effectiveness, informationality and constructability. It can be seen, therefore, that the grey hazy set is completely different from the classical Cantor set and Zadeh's fuzzy set. Nevertheless, the Cantor (crisp) set is the transparent nuclear state of the grey hazy set, and classical and/or fuzzy systems are special cases of grey systems when the degree of greyness is zero.


The mathematical foundations of grey systems and fuzzy systems are rather different in their formulation. We have to point out, after studying the two systems, that the grey hazy set possesses excellent characteristics for treating natural dynamical and static systems.

Grey Information Relations vs Fuzzy Relations: Let us recall classical ordinary (white) relations. For x, y in A, let xRy denote that x and y are related, and x(not R)y denote that x and y are not related; here R is a relation between the elements of set A. Using numerical values to describe the above relation, we have xRy => R = 1 and x(not R)y => R = 0. Note the use of {0, 1} to describe this Boolean relation. Fuzzy systems generalize this extreme relational value to the full interval [0, 1], and thus give us great convenience in studying systems: a fuzzy relation gives x, y in A => mu_R(x, y) in [0, 1], replacing the classical relation x, y in A => xRy or x(not R)y. This is no doubt a big step forward in system theory. However, neither the classical relation nor the fuzzy relation focuses on the profound concept of the self-characteristics of the elements x and y of A: for example, how much information x or y contains, or what the degree of greyness of x (or y) itself is. Questions like these were never addressed by either classical or fuzzy theory. Grey system theory treats this fundamental academic perspective and emphasizes the incompleteness of system information and the greyness of the elements themselves. These are the foundations of the research on grey information relation theory. In such an investigation, one has to start by paying attention to the greyness and/or whiteness of the elements under focus, and these elements pervade our information decision space. In fact, the relations between the grey-white elements further relate to the degree of greyness (the grey relational grade) of the elements themselves. In addition, not all the elements in a dynamical system are grey, for in time some grey elements, through the whitening process, become white elements. Further, the grey information relations and the white information relations in the system often mingle to form certain grey-white information relations, which define the transparency grade of information decision making. Therefore the creation of grey system theory, as it breaks through to the investigation of such relations, is indeed a great leap forward for system theory research. So we see that grey information relations are built on the foundation of grey hazy set theory, which is dynamical in nature, whereas fuzzy relations are built on fuzzy set theory, which is a mere convenient extension of classical set theory and is rather static in nature. From this point of view, and because of their conceptual foundations, problems described by grey information relations and by fuzzy relations are in fact different.

Grey Relational Model

Existence of Grey Relations: Objective observation of many existing systems shows that they consist of a number of subsystems, and the relations between these subsystems are extremely complex. In particular, the different states of appearance and the randomness of changes (chaotic systems) cause great confusion in the cognition of the true nature of the systems. The very essence of grey system theory is to provide an analytic concept of the grey relational degree of these subsystems. Here the central methodology is to seek out the relations (including the numerical relations) between subsystems and sub-causalities. We find, in the course of grey systems research, that if the basic states of causal changes of two subsystems are similar, their synchronized degree of change is high, and hence their grey relational grade is high; otherwise their grey relational grade is low. Therefore, grey relational analysis can provide a quantitative measure of a system during the course of its dynamics. There are differences between grey relational analysis and the regression analysis of statistics, in that:

1. They are different in their theoretical foundations: grey relational analysis is based on the grey process of grey system theory, whereas regression analysis is based on the random process of probability theory;
2. Grey relational analysis compares and computes the dynamic causalities of the subsystems of a given system, whereas regression analysis focuses on the grouped values of random variables;
3. Grey relational analysis requires very minimal raw data (as few as 4 in cardinality), whereas regression analysis requires a sufficiently large set of sample data; and
4. Grey relational analysis mainly investigates the dynamic process of a system, whereas regression analysis mainly studies the static behavior of a system.

Grey Relational Numerical Method

I. The Processing of Primitive Data
The physical meanings of the causal elements in a system can differ, and as a result there are differences in the system's data indices (catalogs); during analytic comparison this makes it difficult to reach a proper and correct conclusion. Therefore we use:
1. Mean value processing. We first compute the mean values of all the primitive sequences X1, X2, ..., Xp (the data space of the dynamic). Then we divide the values of the corresponding sequences by these mean values to obtain a collection of new sequences, called the mean-valued sequences X1', X2', ..., Xp'.
2. Initial value processing. We divide each succeeding value of a sequence by the first value of that sequence to form a collection of quotient sequences, called the initialized sequences X1', X2', ..., Xp'. In general, when analyzing the dynamic process of certain stable socio-economic systems, we often employ this initial value processing. A sketch of both transformations follows.
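A minimal sketch of the two preprocessing schemes, assuming the primitive sequences are rows of a numeric array; the function names and sample values are our own.

import numpy as np

def mean_value_processing(seqs):
    """Divide each sequence by its own mean (mean-valued sequences)."""
    seqs = np.asarray(seqs, dtype=float)
    return seqs / seqs.mean(axis=1, keepdims=True)

def initial_value_processing(seqs):
    """Divide each sequence by its first value (initialized sequences)."""
    seqs = np.asarray(seqs, dtype=float)
    return seqs / seqs[:, :1]

raw = [[2.0, 4.0, 6.0],
       [10.0, 20.0, 40.0]]
print(mean_value_processing(raw))
print(initial_value_processing(raw))

Both transformations remove the units and scale of each sequence so that sequences measured in different physical quantities become directly comparable.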

II. Grey Relational Coefficient
Let X = {Xi | i in I} be a space of sequences, where I = {0, 1, 2, ..., n}. If we denote the numerically processed parent sequence X0 by X0(1), X0(2), ..., X0(p), and a generated sequence Xi by Xi(1), Xi(2), ..., Xi(p), then the grey relational coefficient xi_0,i(k) of Xi(k) is defined as:

xi_0,i(k) = ( Delta_min + zeta * Delta_max ) / ( Delta_0,i(k) + zeta * Delta_max )

where Delta_0,i(k) = | X0(k) - Xi(k) | is the absolute difference of the two comparing sequences X0 and Xi, and

Delta_max = max over j in I, over k, of Delta_0,j(k)
Delta_min = min over j in I, over k, of Delta_0,j(k)

are respectively the maximum and minimum values of the absolute differences of all comparing sequences. Here zeta in [0, 1] is a distinguishing coefficient, the purpose of which is to weaken the effect of Delta_max when it gets too big, thus enlarging the difference significance of the relational coefficients. xi_0,i(k) reflects the degree of closeness between the two comparing sequences at point k: at Delta_min the relational coefficient attains its largest value, 1, while at Delta_max it attains its smallest value. Hence 0 < xi_0,i(k) <= 1.

III. Grey Relational Grade
In reality, grey relational analysis compares relations of sequences in their appropriate metric spaces. If two sequences agree at all points, then their grey relational coefficient is 1 everywhere, and therefore their grey relational grade should be 1. In view of this, the relational grade of two comparing sequences can be quantified by the mean value of their grey relational coefficients:

gamma_0,i = (1/p) * sum over k = 1..p of xi_0,i(k)

Here gamma_0,i is designated the grey relational grade between Xi and X0, and p is the length of the two comparing sequences. A sketch of the coefficient and grade computation follows.
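The coefficient and grade formulas above translate directly into a few lines of NumPy. In this sketch the ideal (parent) sequence and the two comparison sequences are made up, and zeta = 0.5 is the customary choice of distinguishing coefficient.

import numpy as np

def grey_relational_grades(parent, children, zeta=0.5):
    """Grey relational grades of each child sequence Xi against the parent
    X0: coefficients per point, then their mean over k."""
    x0 = np.asarray(parent, dtype=float)
    xi = np.asarray(children, dtype=float)
    delta = np.abs(xi - x0)                  # |X0(k) - Xi(k)| for every i, k
    d_min, d_max = delta.min(), delta.max()  # over all comparing sequences
    coeff = (d_min + zeta * d_max) / (delta + zeta * d_max)
    return coeff.mean(axis=1)                # grade = mean coefficient over k

# Normalized ideal sequence X0 and two comparison sequences (made-up data).
x0 = [1.0, 1.0, 1.0, 1.0]
children = [[0.9, 0.8, 1.0, 0.7],
            [0.4, 0.5, 0.6, 0.5]]
grades = grey_relational_grades(x0, children)
print(grades, grades.argsort()[::-1])   # higher grade = closer to the ideal

Sorting the grades in descending order yields the grey relational ordering described next.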

IV. Grey Relational Ordering
In relational analysis, the practical meaning of the numerical values of the grey relational grades between elements is not absolutely important; the grey relational ordering between them yields more subtle information, and being primary or secondary here forms the basis of decision making.
1. If gamma_0,i > gamma_0,j, we say Xi is better than Xj with respect to X0, and we denote this Xi | X0 > Xj | X0.
2. If gamma_0,i < gamma_0,j, we say Xi is worse than Xj with respect to X0, and we denote this Xi | X0 < Xj | X0.
3. If gamma_0,i = gamma_0,j, we say Xi and Xj are equally good with respect to X0, and we denote this Xi | X0 = Xj | X0.

V. Relational Matrix
If we have n parental sequences Y1, Y2, ..., Yn (n > 1) and m offspring (generated) sequences X1, X2, ..., Xm (m >= 1), then the relational grades gamma_i,j of each parental sequence Yi to each offspring sequence Xj can be arranged as the matrix of grey relational grades

R = [ gamma_i,j ],  i = 1, 2, ..., n;  j = 1, 2, ..., m  (or its transpose),

which forms the basis of decision making. Given a relational matrix, if for some i the entries satisfy

gamma_k,i >= gamma_k,j  for k = 1, 2, ..., m,

where j = 1, 2, ..., n, j != i, we say Yi is optimally better than Yj; in other words, the relational grade of Yi with respect to the offspring sequences is optimally the best in the system, and we write Yi >> Yj, j = 1, 2, ..., n, j != i.

If

(1/m) * sum over k = 1..m of gamma_k,i >= (1/m) * sum over k = 1..m of gamma_k,j,  i, j = 1, 2, ..., n, i != j,

we say Yi, relative to Yj in respect of the relational grades, is pseudo-optimally the best in the system, and we denote this Yi > Yj, j = 1, 2, ..., n, j != i. A sketch of this construction follows.
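Continuing the previous sketch, a relational matrix can be assembled row by row and the pseudo-optimum test applied to its row means; all sequences here are invented.

import numpy as np

def grade_row(parent, offspring, zeta=0.5):
    """Grey relational grades of all offspring Xj against one parent Yi,
    with Delta_min / Delta_max taken over all comparing sequences."""
    delta = np.abs(np.asarray(offspring, float) - np.asarray(parent, float))
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + zeta * d_max) / (delta + zeta * d_max)
    return coeff.mean(axis=1)

# Made-up parent sequences Yi and offspring sequences Xj (all of length 3).
parents = [[1.0, 0.9, 1.0], [0.8, 0.7, 0.9], [0.6, 0.5, 0.7]]
offspring = [[0.9, 0.8, 1.0], [0.5, 0.6, 0.4]]

# Relational matrix: R[i][j] = grade of parent Yi to offspring Xj.
R = np.array([grade_row(y, offspring) for y in parents])
print(R)
# Pseudo-optimum test: the parent with the largest mean grade over offspring.
print(int(R.mean(axis=1).argmax()))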
