
Source: susanto.info/wp-content/uploads/2016/11/MSc_JSusanto.pdf

Improved Parameter Estimation Algorithms for

Induction Motors

Julius Susanto

A thesis submitted for the degree of Masters by Coursework

(Electrical Utility Engineering)

Department of Electrical and Computer Engineering

November 2013


Abstract

The performance of Newton-Raphson, Levenberg-Marquardt, damped Newton-Raphson and genetic algorithms is investigated for the estimation of induction motor equivalent circuit parameters from commonly available manufacturer data. A new hybrid algorithm is then proposed that combines the advantages of both descent and natural optimisation algorithms. Through computer simulation, the hybrid algorithm is shown to significantly outperform the conventional algorithms in terms of convergence and squared error rates. All of the algorithms are tested on a large data set of 6,380 IEC (50 Hz) and NEMA (60 Hz) motors.

Keywords: parameter estimation, induction motor, double cage



Acknowledgements

I would like to thank my project supervisor Prof. Syed Islam for his patient support, ideas and encouragement throughout the project.

I'd also like to acknowledge my colleagues at DIgSILENT GmbH for their support and for helping me to find this problem in the first place. Special thanks go to my managers Manuel Castillo and Flavio Fernandez for their encouragement, Oscar Munoz and Goce Jauleski for helping me trawl through the code, and Nicholai Wilson for the banter.



Contents

Abstract

Acknowledgements

1 Introduction
  1.1 Rationale
  1.2 Thesis Outline

2 Induction Motor Equivalent Circuits
  2.1 Single Cage Model
  2.2 Double Cage Model
  2.3 Higher Order Models
  2.4 Core and Mechanical Losses
  2.5 Calculating Torque and Current from the Equivalent Circuit
  2.6 Torque-Speed and Current-Speed Curves

3 Parameter Estimation Problem
  3.1 Problem Formulation Ignoring Core Losses
    3.1.1 Single Cage Model (Ignoring Core Losses)
    3.1.2 Double Cage Model (Ignoring Core Losses)
  3.2 Problem Formulation Considering Core Losses
    3.2.1 Single Cage Model (with Core Losses)
    3.2.2 Double Cage Model (with Core Losses)
  3.3 Classes of Parameter Estimation Algorithms

4 Descent Algorithms
  4.1 Requirement for Linear Restrictions
  4.2 Newton-Raphson Algorithm
    4.2.1 Parameter Constraints
    4.2.2 Adaptive Step Size
    4.2.3 Initial Conditions
    4.2.4 Convergence Criteria
  4.3 Levenberg-Marquardt Algorithm
    4.3.1 Choice of Damping Parameter
  4.4 Damped Newton-Raphson Algorithm
  4.5 Comparison of Descent Algorithms
    4.5.1 Single Cage Model
    4.5.2 Double Cage Model
  4.6 Selection of Linear Restrictions
    4.6.1 Approaches for Selecting Linear Restrictions
    4.6.2 Computer Simulation
  4.7 Selection of Initial Conditions
    4.7.1 Methods for Calculating Initial Conditions
    4.7.2 Sets of Initial Conditions
    4.7.3 Performance of Initial Conditions
    4.7.4 Incorporation of Initial Conditions into Descent Algorithms
  4.8 Conclusions about Descent Algorithms

5 Natural Optimisation Algorithms
  5.1 Genetic Algorithm
    5.1.1 Application of GA to Motor Parameter Estimation
    5.1.2 Computer Simulation
  5.2 Other Natural Optimisation Algorithms
  5.3 Conclusions about Natural Optimisation Algorithms

6 Hybrid Algorithms
  6.1 Motivation for Hybrid Algorithms
  6.2 Proposed Hybrid Algorithm
  6.3 Computer Simulation

7 Comparative Analysis of Algorithms
  7.1 Comparison of Algorithm Performance
  7.2 Convergence and Error Tolerance
  7.3 Algorithm Performance and Motor Rated Power
  7.4 Comparison of Algorithm Computation Time

8 Conclusions and Future Work
  8.1 Conclusions
  8.2 Contributions
  8.3 Future Work

References

A Motor Data Set

B MATLAB Source Code
  B.1 Common Auxiliary Functions
    B.1.1 calc_pqt
    B.1.2 get_torque
  B.2 Descent Algorithms
    B.2.1 nr_solver
    B.2.2 lm_solver (Error Term Adjustment)
    B.2.3 lm_solver2 (Gain Ratio Adjustment)
    B.2.4 dnr_solver
  B.3 Genetic Algorithm
    B.3.1 ga_solver
  B.4 Hybrid Algorithms
    B.4.1 hybrid_nr
    B.4.2 hybrid_lm
    B.4.3 hybrid_dnr


List of Tables

4.1 Comparison of descent algorithms for the single cage model with fixed restrictions kx = 1 and kr = 0.5

4.2 Comparison of descent algorithms for the double cage model with fixed restrictions kx = 1 and kr = 0.5

4.3 Conventional Newton-Raphson algorithm results for double cage model using different methods for selecting linear restrictions

4.4 Squared error performance of different initial conditions for IEC motors

4.5 Squared error performance of different initial conditions for NEMA motors

4.6 Conventional Newton-Raphson algorithm results for double cage model with revised initial conditions

4.7 Conventional Newton-Raphson algorithm results for double cage model with revised initial estimates for Xs and Rc (with kx = 1, kr = 0.5)

5.1 Range of initial parameter estimates

5.2 Standard deviations for mutation noise

5.3 Default settings for genetic algorithm

5.4 Results of the genetic algorithm for the double cage model

6.1 Range of initial parameter estimates

6.2 Default settings for hybrid algorithm

6.3 Simulation results for baseline NR and hybrid algorithms

7.1 Summary of simulation results for the double cage model

7.2 Breakdown of motor data sets by motor rated power

7.3 Algorithm performance broken down by rated power (IEC motors)

7.4 Algorithm performance broken down by rated power (NEMA motors)

7.5 Average algorithm solution time


List of Figures

2.1 General induction motor equivalent circuit

2.2 Basic single cage model equivalent circuit (5 parameters)

2.3 Basic double cage model equivalent circuit (7 parameters)

2.4 Two minimum parameter double cage model equivalent circuits (6 parameters)

2.5 Example of a higher order model (triple cage)

2.6 Motor torque-speed curve

2.7 Motor current-speed curve

4.1 Flowchart for conventional NR algorithm

4.2 Simplified motor equivalent circuit at locked rotor

4.3 Approximate breakdown of stator current

5.1 Flowchart for genetic algorithm

5.2 Error rates vs maximum number of generations

6.1 Flowchart for hybrid algorithm (with natural selection of Rs and Xr2)

7.1 Convergence rate versus error tolerance plot for IEC motors

7.2 Convergence rate versus error tolerance plot for NEMA motors

7.3 Visual depiction of error tolerance - Torque-speed curve of 75 kW motor

7.4 Visual depiction of error tolerance - Current-speed curve of 75 kW motor


CHAPTER 1

Introduction

1.1 Rationale

The three-phase induction motor is arguably the workhorse of modern industry, found in almost all industrial settings from manufacturing to mining. Equivalent circuit parameters of induction machines are essential for time-domain simulations where the dynamic interactions between the machine(s) and the power system need to be analysed, for example:

• Motor starting and re-acceleration

• Bus transfer studies

• Changes in motor loading

• Motor behaviour during faults

• Dynamic voltage stability

However, motor manufacturers do not tend to provide the equivalent circuit parameters for their machines. This is a problem because the parameters are generally motor specific and typical values found in the literature are often not of sufficient accuracy. Moreover, power system studies involving motors are normally performed during the design stages of projects, where the motors themselves have not yet been ordered and on-site testing is not possible.

It is therefore desirable to estimate motor equivalent circuit parameters from the data that manufacturers make available in their catalogues, data sheets and technical brochures (i.e. performance parameters such as breakdown torque, locked rotor torque, full-load power factor, full-load efficiency, etc.).

A number of parameter estimation techniques have been proposed in the literature (for example, see [1], [2], [3], [4] and [5]). The de facto approach that has emerged, and which has been adopted by the majority of commercial software packages, has been to use an algorithm based on a Newton-Raphson approach.

However, it has been observed from experience that the Newton-Raphson based algorithms can have poor convergence and error performance. Therefore, parameter estimation algorithms with improved performance would be preferred.

In this project, the performance of a number of motor parameter estimation algorithms based on readily available manufacturer data is investigated, and a new hybrid algorithm that exhibits improved performance is proposed.

1.2 Thesis Outline

The structure of this thesis is as follows:

Chapters 2 and 3 provide background on induction motor equivalent circuits and the nature of the parameter estimation problem based on manufacturer data.

Chapter 4 begins the tour of parameter estimation approaches with descent algorithms, which are algorithms based on variations of Newton's method. Three descent algorithms are investigated: the commonly used Newton-Raphson and Levenberg-Marquardt algorithms, and the damped Newton-Raphson algorithm. Chapter 4 also contains a brief investigation on the selection of initial parameters.


Chapter 5 looks at natural optimisation approaches to solving the parameter estimation problem. In particular, the genetic algorithm is explored and simulated on the motor data set.

In Chapter 6, a hybrid descent and natural optimisation approach is proposed and investigated. Three variations of the hybrid algorithm are implemented (NR-GA, LM-GA and DNR-GA) and simulated on the motor data set. The results are then compared with the conventional NR algorithm.

Chapter 7 provides a comparative analysis of the algorithms explored in Chapters 4, 5 and 6 in terms of algorithm performance, convergence criteria and computation times.

Chapter 8 concludes the thesis by offering a generic workflow for solving motor parameter estimation problems. The contributions of this project and avenues for future work are also discussed.


CHAPTER 2

Induction Motor Equivalent Circuits

An induction machine can be viewed as a generalised transformer with an air gap and a rotating short-circuited secondary winding. The equivalent circuit of an induction motor is therefore similar to that of a transformer. The key difference is in the rotor equivalent circuit, where the voltages and currents are proportional to the slip frequency. This is commonly represented in the equivalent circuit by a variable (slip-dependent) rotor resistance (i.e. Rr/s).

For balanced steady-state analysis, it is acceptable to use a per-phase equivalent circuit. Analysis is also made simpler by working with per-unit values, where the scaling factors required to calculate polyphase quantities (such as polyphase power) are not required. The general induction motor per-phase equivalent circuit with all parameters referred to the stator is shown in Figure 2.1.

The parameters of this equivalent circuit are as follows:

• Rs is the stator resistance (pu)

• Xs is the stator leakage reactance (pu)

• Xm is the magnetising reactance (pu)

• Rc is the core loss component (pu)


Figure 2.1: General induction motor equivalent circuit

• Xr is the rotor leakage reactance (pu)

• Rr is the rotor resistance (pu)

The motor equivalent circuit in Figure 2.1 shows that the parameters vary with frequency/slip, current (i.e. saturation effects) and temperature, for the following reasons:

• AC resistances for the copper windings are temperature dependent [3]

• Inductances will vary due to eddy currents and saturation of teeth and core [2]

• Rotor resistance is frequency (slip) dependent due to eddy currents in deep rotor bars or double cage rotors [6]

• Impedances are affected by the skin effect at different frequencies [6]

For power system studies, it is desirable to use a motor equivalent circuit with constant parameters that are valid over the full range of motor speeds (i.e. from 0 to 1 pu). It is important to note that these parameters will not likely correspond to the real motor parameters, since as noted above, the real parameters are slip, current and temperature dependent. However, the set of constant equivalent circuit parameters will match the motor's performance characteristics over the full speed range (e.g. torque-slip and current-slip curves, power factor, efficiency, etc.).

While others have attempted to apply motor models with variable parameters (for example, Haque in [7]), there do not appear to be significant gains to be had by this approach and it only makes the model more complicated and less manageable for standard power systems analysis programs. Therefore, only constant parameter models are considered in this project. In the following subsections, a number of constant parameter equivalent circuit models are presented.

2.1 Single Cage Model

The single cage model is simply the general equivalent circuit in Figure 2.1 with constant parameters (and excluding core losses). The single cage model is normally suitable to represent the performance characteristics of wound-rotor motors.

Figure 2.2: Basic single cage model equivalent circuit (5 parameters)


2.2 Double Cage Model

To account for the effects of double-cage rotors or deep bar rotors (e.g. most squirrel-cage machines), a second rotor branch is added to the equivalent circuit of the single cage model.

Figure 2.3: Basic double cage model equivalent circuit (7 parameters)

In the equivalent circuit, the inner cage leakage reactance Xr1 is always higher than the outer cage leakage reactance Xr2, but the outer cage impedance is typically higher than the inner cage impedance on starting. These conditions can be resolved by including the following two inequality constraints in the model [5]:

• Xr1 > Xr2

• Rr2 > Rr1

Corcoles et al. [8] showed that since the double cage model (without core losses) has 6 model invariant functions (MIVs), any model with greater than 6 parameters (such as the 7 parameter double cage model in Figure 2.3) can be reduced to a 6 parameter model without any loss of information. There are 5 so-called minimum parameter models for the double cage model, two of which are shown below in Figure 2.4.


Figure 2.4: Two minimum parameter double cage model equivalent circuits (6 param-eters)

2.3 Higher Order Models

Additional rotor branch circuits and other impedances (e.g. reactances between rotor branches) can be added to imbue the model with additional degrees of freedom. As will be shown later, adding more parameters to the model can be problematic for estimation purposes, as the system of equations becomes underdetermined and some parameters need to be constrained in order to solve the system, e.g. by linear restrictions.

As discussed earlier, Corcoles et al. [8] found that some higher order models can be algebraically reduced to lower order models without any loss of information. In other words, some parameters are redundant and their addition can turn a system with a univocally identifiable solution into a system with an infinite number of solutions (i.e. with parameters that need to be constrained).


Figure 2.5: Example of a higher order model (triple cage)

2.4 Core and Mechanical Losses

In the equivalent circuit models described thus far, the shunt magnetising branch is represented only by a magnetising reactance (Xm) and the core losses are neglected. In a practical motor, there will also be eddy currents in the core laminations that manifest themselves as heat losses. These core (or iron) losses can be modelled as a shunt resistance as in the general motor model in Figure 2.1. Furthermore, there are mechanical losses due to friction on the rotor bearings, and while it isn't completely appropriate to model mechanical losses in an electrical circuit, these frictional losses can also be approximated by a shunt resistance. The reason for modelling these losses in the motor equivalent circuit is so that motor efficiencies can be properly estimated (this is described in more detail in section 3.2).

The core and mechanical losses are lumped together as a single shunt resistance Rc. To simplify further, the shunt resistance can be placed at the input of the equivalent circuit rather than parallel to the magnetising branch [9].


2.5 Calculating Torque and Current from the Equivalent Circuit

The electrical torque developed in an induction machine is proportional to the square of rotor current, i.e.

T = (pq / 4πf) (Rr/s) Ir²   (2.1)

where T is the electrical torque developed (N-m)

p is the number of motor poles

q is the number of stator phases

f is the nominal frequency (Hz)

Rr is the equivalent rotor resistance (Ω)

s is the motor slip (pu)

Ir is the rotor current (A)

By using per-unit values, the constant terms can be eliminated and the equation above reduces to:

T = (Rr/s) Ir²   (2.2)

where all the quantities in this equation are in per-unit values.

It can be seen that for any given motor equivalent circuit, standard circuit analysis can be used to calculate the rotor current and therefore the electrical torque. By way of example, the torque in the double cage model (without core losses) introduced earlier in section 2.2 will be calculated.

Recasting the impedances as admittances:


Ys = 1 / (Rs + jXs)   (2.3)

Ym = 1 / (jXm)   (2.4)

Yr1 = 1 / (Rr1/s + jXr1)   (2.5)

Yr2 = 1 / (Rr2/s + jXr2)   (2.6)

Applying Kirchhoff's law, the voltage U1 at the magnetising branch is:

(Un − U1) Ys = U1 (Ym + Yr1 + Yr2)   (2.7)

Un Ys = U1 (Ys + Ym + Yr1 + Yr2)   (2.8)

U1 = Un Ys / (Ys + Ym + Yr1 + Yr2)   (2.9)

The per-unit stator current Is is therefore:

Is = (Un − U1)Ys (2.10)

The per-unit rotor currents in each cage Ir1 and Ir2 are:

Ir1 = U1Yr1 (2.11)

Ir2 = U1Yr2 (2.12)

Finally, the per-unit electrical torque developed in the motor is:

T = (Rr1/s) Ir1² + (Rr2/s) Ir2²   (2.13)


A similar kind of analysis can be done for other motor equivalent circuit models to calculate the electrical torque and current of the machine.
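The chain of equations (2.3) to (2.13) can be sketched in code. The thesis's own implementations are in MATLAB (Appendix B); the short Python sketch below follows the same steps, and the parameter values are illustrative placeholders only, not drawn from the thesis data set:

```python
# Per-unit torque and stator current of the double cage model (no core
# losses), following Eqs. (2.3)-(2.13). All quantities are per-unit and
# the default parameter values are illustrative placeholders.

def double_cage(s, Rs=0.02, Xs=0.08, Xm=3.0,
                Rr1=0.02, Xr1=0.12, Rr2=0.15, Xr2=0.04, Un=1.0):
    """Return (torque T, stator current magnitude |Is|) at slip s."""
    Ys = 1 / complex(Rs, Xs)                 # stator branch, Eq. (2.3)
    Ym = 1 / complex(0.0, Xm)                # magnetising branch, Eq. (2.4)
    Yr1 = 1 / complex(Rr1 / s, Xr1)          # inner cage, Eq. (2.5)
    Yr2 = 1 / complex(Rr2 / s, Xr2)          # outer cage, Eq. (2.6)
    U1 = Un * Ys / (Ys + Ym + Yr1 + Yr2)     # magnetising-branch voltage, Eq. (2.9)
    Is = (Un - U1) * Ys                      # stator current, Eq. (2.10)
    Ir1 = U1 * Yr1                           # cage currents, Eqs. (2.11)-(2.12)
    Ir2 = U1 * Yr2
    T = Rr1 / s * abs(Ir1) ** 2 + Rr2 / s * abs(Ir2) ** 2   # Eq. (2.13)
    return T, abs(Is)

# Sample the curves from locked rotor towards synchronous speed
for s in (1.0, 0.5, 0.1, 0.02):
    T, Is = double_cage(s)
    print(f"s = {s:4.2f}:  T = {T:.3f} pu,  |Is| = {Is:.3f} pu")
```

Sweeping s from 1 (standstill) down towards 0 in this way traces out the torque-speed and current-speed curves discussed in the next section.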

2.6 Torque-Speed and Current-Speed Curves

Based on the torque and stator current equations developed in the previous section, torque-speed and current-speed curves can be constructed from the equivalent circuit for the full range of motor speeds (i.e. from standstill to synchronous speed).

Examples of motor torque-speed and current-speed curves are shown in Figure 2.6 and Figure 2.7 respectively.

Figure 2.6: Motor torque-speed curve


Figure 2.7: Motor current-speed curve


CHAPTER 3

Parameter Estimation Problem

The characteristics of an induction motor are normally provided by manufacturers in the form of a standard set of performance parameters, with the following parameters being the most common:

• Nominal voltage, Un (V)

• Nominal frequency, f (Hz)

• Rated asynchronous speed, nfl (rpm)

• Rated (stator) current, Is,fl (A)

• Rated mechanical power, Pm,fl (kW)

• Rated torque, Tn (Nm)

• Full load power factor, cosφfl (pu)

• Full load efficiency, ηfl (pu)

• Breakdown torque, Tb/Tn (normalised)

• Locked rotor torque, Tlr/Tn (normalised)

• Locked rotor current, Ilr/Is,fl (pu)


From previous sections, we know that a set of equivalent circuit parameters can yield specific torque-speed and current-speed curves. So given a set of performance parameters that contain features on the torque-speed and current-speed curves (e.g. breakdown torque, locked-rotor current, etc.), is it possible to determine the corresponding equivalent circuit parameters that yield these features? This is the crux of the parameter estimation problem and can be posed as follows: "How can the motor performance parameters be converted into equivalent circuit parameters?"

While all of the performance parameters in the above set can be used in an estimation procedure, there are actually only six independent magnitudes that can be formed from them: Pm,fl, Qfl, Tb, Tlr, Ilr and ηfl [5]. These independent magnitudes will thus form the basis of the problem formulation, where the independent magnitudes calculated from the equivalent circuit are matched with the performance parameters supplied by the manufacturer.

The basic double cage model in section 2.2 is used to illustrate how these six independent magnitudes can be calculated from the equivalent circuit model. Stator and rotor currents at slip s can be readily calculated from the equivalent circuit as shown in section 2.5.

Quantities for per-unit active power P, reactive power Q and power factor cosφ at slip s can be calculated as follows:

S(s) = Un Is(s)*   (3.1)

P(s) = T(s)(1 − s)   (3.2)

Q(s) = Im S(s)   (3.3)

cosφ(s) = Re S(s) / |S(s)|   (3.4)


The synchronous speed ns and full load slip sf are calculated as follows:

ns = 120f / p   (3.5)

sf = 1 − nfl/ns   (3.6)

where p is the number of motor poles

f is the nominal frequency (Hz)

nfl is the asynchronous speed at full load (rpm)
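For example, equations (3.5) and (3.6) evaluate as follows (the figures here, a 4-pole, 50 Hz motor with a full load speed of 1470 rpm, are illustrative and not taken from the thesis data set):

```python
# Synchronous speed and full load slip, Eqs. (3.5)-(3.6).
# Example figures (4-pole, 50 Hz, 1470 rpm) are illustrative only.
p, f, n_fl = 4, 50.0, 1470.0
n_s = 120 * f / p          # Eq. (3.5): 1500 rpm
s_f = 1 - n_fl / n_s       # Eq. (3.6): 0.02 pu slip
print(n_s, s_f)
```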

The slip at maximum torque smax is found by solving the equation:

dT/ds = 0   (3.7)

(under the condition that the second derivative d²T/ds² < 0). In the double cage model, the solution to this equation is not trivial and it is more convenient to use an estimate, e.g. based on an interval search between s = 0 and s = 0.5.
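One way to carry out the interval search mentioned above is a simple grid scan of the torque curve over s ∈ (0, 0.5]. A minimal Python sketch (the thesis implementations are in MATLAB), with illustrative placeholder circuit parameters:

```python
# Locate the slip at maximum torque, s_max, by scanning the double cage
# torque curve over s in (0, 0.5]. Circuit parameters are illustrative
# placeholders, not values from the thesis data set.

def torque(s, Rs=0.02, Xs=0.08, Xm=3.0,
           Rr1=0.02, Xr1=0.12, Rr2=0.15, Xr2=0.04, Un=1.0):
    """Per-unit torque of the double cage model at slip s, Eq. (2.13)."""
    Ys = 1 / complex(Rs, Xs)
    Ym = 1 / complex(0.0, Xm)
    Yr1 = 1 / complex(Rr1 / s, Xr1)
    Yr2 = 1 / complex(Rr2 / s, Xr2)
    U1 = Un * Ys / (Ys + Ym + Yr1 + Yr2)    # Eq. (2.9)
    return Rr1 / s * abs(U1 * Yr1) ** 2 + Rr2 / s * abs(U1 * Yr2) ** 2

# Grid scan over s = 0.001 ... 0.5; the grid point with the largest
# torque approximates the breakdown slip s_max
grid = [i / 1000 for i in range(1, 501)]
s_max = max(grid, key=torque)
print(f"s_max ≈ {s_max:.3f}, Tb ≈ {torque(s_max):.3f} pu")
```

A coarse scan like this can be refined with a finer grid around the best point if more accuracy is needed.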

3.1 Problem Formulation Ignoring Core Losses

3.1.1 Single Cage Model (Ignoring Core Losses)

In the single cage model, the locked rotor torque Tlr and locked rotor current Ilr are not used because the single cage model does not have enough degrees of freedom to capture both the starting and breakdown torque characteristics without introducing significant errors [10]. As a result, it is more commonplace to only consider the breakdown torque Tb in the single cage model and simply ignore the torque and current characteristics at locked rotor. For wound-rotor motors, this yields sufficiently accurate results (i.e. in terms of the resulting torque-speed curve). However, a single-cage model is unable to accurately model the torque-speed characteristics of squirrel cage motors, especially those with deep bars, and thus a double cage model should be used for these types of motors.

Without taking into account core losses, the full load motor efficiency ηfl also cannot be used (see section 3.2 for more details). Therefore, there are only three independent parameters that can be used in the problem formulation: Pm,fl, Qfl and Tb.

These independent parameters can be used to formulate the parameter esti-

mation in terms of a non-linear least squares problem, with a set of non-linear

equations of the form F(x) = 0:

f1(x) = Pm,fl - P(sf) = 0    (3.8)

f2(x) = sin φ - Q(sf) = 0    (3.9)

f3(x) = Tb - T(smax) = 0    (3.10)

where F = (f1, f2, f3) and

x = (Rs, Xs, Xm, Rr, Xr) are the equivalent circuit parameters of the single cage model.

3.1.2 Double Cage Model (Ignoring Core Losses)

In the double cage model, the locked rotor torque Tlr and locked rotor current

Ilr are included as independent parameters. As in the single cage model, the full

load motor efficiency ηfl cannot be used without taking into account core losses.

Therefore, there are five independent parameters and the following non-linear


least squares problem:

f1(x) = Pm,fl - P(sf) = 0    (3.12)

f2(x) = sin φ - Q(sf) = 0    (3.13)

f3(x) = Tb - T(smax) = 0    (3.14)

f4(x) = Tlr - T(s = 1) = 0    (3.15)

f5(x) = Ilr - I(s = 1) = 0    (3.16)

where F = (f1, f2, f3, f4, f5) and

x = (Rs, Xs, Xm, Rr1, Xr1, Rr2, Xr2) are the equivalent circuit parameters of the double cage model.
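As an illustration, the residual vector F(x) of this formulation can be written generically; here `model` is a hypothetical factory returning the P, Q, T and I curves for a parameter vector x, and the dictionary keys are naming assumptions of this sketch:

```python
def residuals(x, model, data):
    """Residual vector F(x) for eqs. (3.12)-(3.16)."""
    P, Q, T, I = model(x)   # model curves as functions of slip (assumed interface)
    return [
        data["Pm_fl"] - P(data["sf"]),     # f1: full load mechanical power
        data["sin_phi"] - Q(data["sf"]),   # f2: full load reactive power
        data["Tb"] - T(data["smax"]),      # f3: breakdown torque
        data["Tlr"] - T(1.0),              # f4: locked rotor torque
        data["Ilr"] - I(1.0),              # f5: locked rotor current
    ]
```

A non-linear least squares solver then drives this vector towards zero.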

3.2 Problem Formulation Considering Core Losses

It was previously noted that without taking into account the core (and mechan-

ical) losses, the motor full load efficiency ηfl cannot be used as an independent

parameter in the problem formulation. This is because efficiency is calculated

based on the ratio of output mechanical power to input electrical power. If the

heat losses through the core and rotor frictional losses are not taken into account,

then the equivalent circuit is not suitable to accurately estimate motor efficiency

[7]. It follows that attempting to use the motor full load efficiency in the estima-

tion of the equivalent circuit without a core loss component would cause errors

in the parameter estimates (e.g. the stator resistance would be overestimated).

When core losses are included in the model, then the motor full load efficiency

ηfl can also be used as an independent parameter. The problem formulations are


restated below for the single cage and double cage models with core losses taken

into account. Note that in this project, motor models including core losses are

always used.

3.2.1 Single Cage Model (with Core Losses)

The non-linear least squares problem for the single cage model with core losses

is as follows:

f1(x) = Pm,fl - P(sf) = 0    (3.18)

f2(x) = sin φ - Q(sf) = 0    (3.19)

f3(x) = Tb - T(smax) = 0    (3.20)

f4(x) = ηfl - η(sf) = 0    (3.21)

where F = (f1, f2, f3, f4) and

x = (Rs, Xs, Xm, Rr, Xr, Rc) are the equivalent circuit parameters of the single cage model (with core losses).

3.2.2 Double Cage Model (with Core Losses)

The non-linear least squares problem for the double cage model with core losses

is as follows:


f1(x) = Pm,fl - P(sf) = 0    (3.22)

f2(x) = sin φ - Q(sf) = 0    (3.23)

f3(x) = Tb - T(smax) = 0    (3.24)

f4(x) = Tlr - T(s = 1) = 0    (3.25)

f5(x) = Ilr - I(s = 1) = 0    (3.26)

f6(x) = ηfl - η(sf) = 0    (3.27)

where F = (f1, f2, f3, f4, f5, f6) and

x = (Rs, Xs, Xm, Rr1, Xr1, Rr2, Xr2, Rc) are the equivalent circuit parameters of the double cage model (with core losses).

3.3 Classes of Parameter Estimation Algorithms

The parameter estimation problems formulated in the preceding sections can be

solved by a variety of non-linear least squares solver algorithms. As with all non-

linear least squares problems, closed form solutions are generally not available

and iterative algorithms are used to converge on a solution by minimising error

residuals.

Motor parameter estimation algorithms generally fall under two broad classes:

1. Descent Methods: are the class of algorithms based on variations of

Newton’s method for convergence to a solution, e.g. Newton-Raphson,

Levenberg-Marquardt, etc

2. Natural Optimisation Methods: are the class of algorithms based on

processes found in nature where successive randomised trials are filtered for


"fitness" at each iteration, e.g. genetic algorithm, particle swarm optimisation, ant colony optimisation, simulated annealing, etc.


CHAPTER 4

Descent Algorithms

4.1 Requirement for Linear Restrictions

It can be seen from the problem formulations in Chapter 3 that in each case,

the number of parameters to be estimated (i.e. unknown variables) exceeds the

number of simultaneous equations. In other words, the systems of equations are

all underdetermined. Therefore, in order to make the systems exactly determined

and solvable with descent algorithms, we must either:

1. Fix two parameters a priori (i.e. parameters are "known")

2. Impose two constraints on the problem formulations, e.g. linear restrictions

In this project, the use of linear restrictions was found to be superior to fixed

parameters. Therefore, the baseline descent algorithms will include two linear

restrictions by default.

It was shown in [5] that the stator resistance Rs was the least sensitive pa-

rameter in the equivalent circuit, i.e. variations in the value of Rs had the least

significant effect on the resulting torque-speed and current-speed curves. There-

fore, Rs can be subject to a linear restriction by linking it to the rotor resistance,

leading to the first linear restriction:


• Rs = krRr (for the single cage model)

• Rs = krRr1 (for the double cage model)

Where kr is a constant linear restriction

Moreover, it is assumed that the rotor reactance is linearly related to the

stator reactance, leading to the second linear restriction:

• Xr = kxXs (for the single cage model)

• Xr2 = kxXs (for the double cage model)

Where kx is a constant linear restriction

An investigation into the selection of linear restrictions is discussed in Section

4.6.

4.2 Newton-Raphson Algorithm

Of the class of descent methods used to solve non-linear least squares problems,

the Newton-Raphson (NR) algorithm is probably the most straightforward. The

NR algorithm is an iterative method where each iteration is calculated as follows:

xk+1 = xk - hn J^-1 F(xk)    (4.1)

where xk+1 is the solution at the (k + 1)th iteration, xk is the solution at

the kth iteration, hn is the step-size coefficient (more on this later) and J is the

Jacobian matrix evaluated with the parameters at the kth iteration, xk.

The Jacobian matrix J has the general form:

J = [ ∂f1/∂x1  ...  ∂f1/∂x6 ]
    [   ...    ...    ...   ]
    [ ∂f6/∂x1  ...  ∂f6/∂x6 ]    (4.2)


For systems where it is impractical to compute the exact partial derivatives

analytically, a numerical approximation may be used with finite difference equa-

tions:

∂fi/∂xj ≈ [fi(x + δj h) - fi(x)] / h    (4.3)

where δj is a vector of zeros with a single non-zero value of 1 at the j-th element and h is a constant with a very small absolute value (e.g. 1 × 10−6).
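Equation 4.3 translates directly into code; the forward-difference step h and the list-based interface are assumptions of this sketch:

```python
def jacobian(F, x, h=1e-6):
    """Forward-difference approximation of the Jacobian of F at x (eq. 4.3)."""
    f0 = F(x)
    J = [[0.0] * len(x) for _ in f0]
    for j in range(len(x)):
        xp = list(x)
        xp[j] += h              # perturb only the j-th parameter: x + delta_j * h
        fp = F(xp)
        for i in range(len(f0)):
            J[i][j] = (fp[i] - f0[i]) / h
    return J
```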

A modified form of the NR algorithm proposed in [9] for the double cage model

is shown in Figure 4.1. This algorithm was selected because of its completeness,

numerical accuracy and robustness compared to previously proposed methods

(for example, in [1], [2] and [3]). Furthermore, the algorithm can be applied

using commonly available manufacturer data, whereas other algorithms require

more detailed data that may not be readily available (for example, the full torque-

speed curve in [4]). Two other features of the algorithm that aid its robustness

are worth highlighting:

4.2.1 Parameter Constraints

The inequality constraints of the double cage model (Xr1 > Xr2 and Rr2 > Rr1)

can be implicitly included into the formulation by a simple change of variables

[5]:


Figure 4.1: Flowchart for conventional NR algorithm


x1 = Rr1

x2 = Rr2 −Rr1

x3 = Xm

x4 = Xs

x5 = Xr1 − kxXs

x6 = Rc

Furthermore, only the absolute values of the parameter estimates are used to

ensure that no negative parameters are estimated.
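The change of variables and the absolute-value rule can be expressed as a small mapping function; the function name and tuple ordering are assumptions of this sketch:

```python
def to_circuit_params(x, kx):
    """Recover the double cage circuit parameters from the unconstrained
    variables x1..x6 of Section 4.2.1. Taking absolute values keeps all
    parameters positive, and the change of variables implicitly enforces
    Rr2 >= Rr1 and Xr1 >= Xr2 = kx * Xs."""
    x1, x2, x3, x4, x5, x6 = (abs(v) for v in x)
    Rr1 = x1
    Rr2 = x1 + x2            # Rr1 + (Rr2 - Rr1)
    Xm = x3
    Xs = x4
    Xr1 = kx * x4 + x5       # kx*Xs + (Xr1 - kx*Xs)
    Xr2 = kx * x4            # linear restriction Xr2 = kx*Xs
    Rc = x6
    return Rr1, Rr2, Xm, Xs, Xr1, Xr2, Rc
```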

4.2.2 Adaptive Step Size

The step size hn in equation 4.1 is a scaling term that determines how far the

algorithm should go along the descent direction J−1F(xk). Choosing a step size

that is too large risks the algorithm not converging. On the other hand, choosing

a step size that is too small can cause the algorithm to converge too slowly. An

adaptive step size can avoid both these problems by starting with a high step size

and only reducing it if the algorithm does not converge (refer to the flowchart in

Figure 4.1).
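The adaptive-step NR loop of Figure 4.1 can be sketched as below; the numerical Jacobian, the step-halving rule and the NumPy interface are assumptions of this sketch, not a transcription of the flowchart:

```python
import numpy as np

def num_jacobian(F, x, h=1e-6):
    """Forward-difference Jacobian (eq. 4.3) for a square system."""
    f0 = np.asarray(F(x), float)
    J = np.empty((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (np.asarray(F(xp), float) - f0) / h
    return J

def newton_raphson(F, x0, h0=1.0, tol=1e-5, max_iter=30):
    """NR iteration (eq. 4.1) with an adaptive step size: start with a
    large step h0 and halve it whenever a step fails to reduce the
    squared error."""
    x = np.asarray(x0, float)
    h = h0
    err = float(np.sum(np.asarray(F(x), float) ** 2))
    for _ in range(max_iter):
        if err < tol:
            break
        step = np.linalg.solve(num_jacobian(F, x), np.asarray(F(x), float))
        x_trial = x - h * step
        err_trial = float(np.sum(np.asarray(F(x_trial), float) ** 2))
        if err_trial < err:
            x, err = x_trial, err_trial   # accept the step
        else:
            h /= 2.0                      # reduce the step size and retry
    return x, err
```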

4.2.3 Initial Conditions

For the base case NR algorithm, the initial parameter estimates are selected as

follows [5]:


Rr1 = Un sf / Pm,fl

Xm = Un / Qfl

Xs = 0.05 Xm

Rs = kr Rr1

Rr2 = 5 Rr1

Xr1 = 1.2 Xs

Xr2 = kx Xs

Rc = 10

An investigation into the selection of different initial conditions is discussed

later in Section 4.7.

4.2.4 Convergence Criteria

The default convergence criterion in this project is a squared error of 1 × 10−5. The algorithms stop when the squared error falls below this value.

4.3 Levenberg-Marquardt Algorithm

The Levenberg-Marquardt (LM) algorithm, sometimes called the damped least-squares algorithm, is another popular technique for solving least-squares problems

[11] [12]. In the LM algorithm, each iteration is calculated as follows:

xk+1 = xk - [J^T J + λ diag(J^T J)]^-1 J^T F(xk)    (4.4)

where xk+1 is the solution at the (k + 1)th iteration, xk is the solution at


the kth iteration, λ is the damping parameter (more on this later) and J is

the Jacobian matrix evaluated with the parameters at the kth iteration, xk (as

described previously in Equation 4.2).
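A single LM update (eq. 4.4) might be sketched as follows; NumPy and the explicit residual/Jacobian arguments are assumptions of this sketch:

```python
import numpy as np

def lm_step(Fx, J, x, lam):
    """One Levenberg-Marquardt update (eq. 4.4) given the residual vector
    Fx = F(x_k), the Jacobian J and the damping parameter lam."""
    JTJ = J.T @ J
    A = JTJ + lam * np.diag(np.diag(JTJ))   # damped normal equations matrix
    return x - np.linalg.solve(A, J.T @ Fx)
```

With lam = 0 this reduces to a Gauss-Newton step; a large lam shortens the step and turns it towards the steepest descent direction.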

Parameter constraints as implemented in the Newton-Raphson algorithm are

also applied in the LM algorithm (refer to Section 4.2.1). The initial conditions

are also selected in the same way as the NR algorithm.

4.3.1 Choice of Damping Parameter

The selection of the damping parameter λ affects both the direction and magni-

tude of an iteration step. If the damping parameter is large, then the algorithm

will move at short steps in the steepest descent direction. This is good when the

present iteration is far away from the solution. On the other hand, if the damping

parameter is small, then the algorithm approaches a Gauss-Newton type method,

which exhibits good convergence in the neighbourhood of the solution.

Therefore, the damping parameter should be updated at each iteration de-

pending on whether the algorithm is far or close to the solution. Two methods

for adjusting the damping parameter are described below.

Gain Ratio Adjustment

Marquardt suggested updating the damping parameter based on a ”gain ratio”

[12]:

ρ = [F(xk) - F(xk+1)] / [(1/2) ∆x^T (λ∆x - J^T F(xk))]    (4.5)

Where ∆x = -[J^T J + λ diag(J^T J)]^-1 J^T F(xk) is the correction step at iteration k.

The damping parameter is adjusted depending on the value of the gain ratio


as follows:

λ = λ × β   if ρ < ρ1
λ = λ / γ   if ρ > ρ2    (4.6)

Where ρ1, ρ2, β and γ are algorithm control parameters. In this project, the

algorithm control parameters used were ρ1 = 0.25, ρ2 = 0.75, β = 3 and γ = 3.

Error Term Adjustment

An alternative to using the gain ratio is to adjust the damping parameter based

only on the error term (i.e. the numerator of the gain ratio). The damping

parameter is therefore updated as follows:

λ = λ × β   if F(xk) - F(xk+1) < 0
λ = λ / γ   if F(xk) - F(xk+1) > 0    (4.7)

Where β and γ are the algorithm control parameters as described above.
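The error-term rule (eq. 4.7) is straightforward to code; treating the decision quantity as the scalar squared error, and reading λγ as division by γ, are assumptions of this sketch:

```python
def update_damping(err_prev, err_new, lam, beta=3.0, gamma=3.0):
    """Error-term adjustment of the damping parameter (eq. 4.7)."""
    if err_prev - err_new < 0:      # error increased: dampen more
        return lam * beta
    if err_prev - err_new > 0:      # error decreased: move towards Gauss-Newton
        return lam / gamma
    return lam
```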

4.4 Damped Newton-Raphson Algorithm

The damped Newton-Raphson algorithm is a variation of the conventional NR al-

gorithm where a damping factor helps to get around problems with near-singular

and/or ill-conditioned Jacobian matrices. In the damped NR algorithm, each

iteration is calculated as follows:

xk+1 = xk - hn (J^-1 + λI) F(xk)    (4.8)

Where the damping parameter λ is adjusted at each iteration based on the


error term as follows:

λ = λ × β   if F(xk) - F(xk+1) < 0
λ = λ / γ   if F(xk) - F(xk+1) > 0    (4.9)

All other aspects of the damped NR algorithm are the same as per the con-

ventional NR algorithm described in Section 4.2 (e.g. parameter constraints,

adaptive step sizes, etc.).
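One damped NR step can be sketched as below; note that this sketch solves (J + λI) ∆x = F(x) rather than forming J^-1 explicitly, which is an interpretation of eq. 4.8 matching its stated purpose of keeping a near-singular Jacobian invertible:

```python
import numpy as np

def damped_nr_step(Fx, J, x, h, lam):
    """One damped NR update: regularise the Jacobian with lam*I before
    solving, so a near-singular J no longer blocks the iteration."""
    n = len(x)
    return x - h * np.linalg.solve(J + lam * np.eye(n), Fx)
```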

4.5 Comparison of Descent Algorithms

The descent algorithms were tested on the EURODEEM motor data set (see

Appendix A) for both the single cage and double cage models (with core losses).

In the simulations, convergence is defined as a squared error below 1 × 10−5.

4.5.1 Single Cage Model

The results of the simulations on the single cage model are shown in Table 4.1

with fixed linear restrictions (kx = 1 and kr = 0.5).

Case                    IEC Motors Convergence   NEMA Motors Convergence
Newton-Raphson          3967 (99.1%)             2347 (98.7%)
Levenberg-Marquardt     3030 (75.7%)             1783 (75.0%)
Damped Newton-Raphson   1387 (34.7%)             424 (17.8%)

Table 4.1: Comparison of descent algorithms for the single cage model with fixed restrictions kx = 1 and kr = 0.5

The simulation results show that the conventional Newton-Raphson algo-

rithm has robust convergence. The Levenberg-Marquardt and damped Newton-

Raphson algorithms exhibit worse performance than the conventional NR al-


Case                                  IEC Motors                     NEMA Motors
                                      Convergence  Average Error^2   Convergence  Average Error^2
Newton-Raphson                        685 (17.1%)  0.5411            751 (31.6%)  0.2514
Levenberg-Marquardt (gain ratio)      663 (16.6%)  0.8926            594 (25.0%)  0.4325
Levenberg-Marquardt (error term)      740 (18.5%)  0.9114            770 (32.4%)  0.2867
Damped NR (maximum iterations = 30)   628 (15.7%)  0.2058            568 (23.9%)  0.0899
Damped NR (maximum iterations = 50)   689 (17.2%)  0.2021            670 (28.2%)  0.0872

Table 4.2: Comparison of descent algorithms for the double cage model with fixed restrictions kx = 1 and kr = 0.5

gorithm, suggesting that the system of equations in the single cage model is

well-conditioned and suited for steepest descent type algorithms. Because of

the high level of convergence with the Newton-Raphson algorithm, no further

investigations were conducted in this project for the single cage model.

4.5.2 Double Cage Model

The results of the simulations on the double cage model are shown in Table 4.2 with fixed linear restrictions (kx = 1 and kr = 0.5).

The simulation results show that, unlike for the single cage model, the conventional NR algorithm performs poorly on the double cage model, both in terms of convergence and average squared error rates.

The simulations suggest that the LM algorithm can lead to a higher conver-

gence rate compared to the conventional NR algorithm, but at the cost of a higher

average squared error. In terms of convergence, the adjustment of the damping

parameter λ using the error term is superior to using the gain ratio.

Adjusting the damping parameter using the gain ratio leads to worse convergence


and average squared error when compared to the conventional NR algorithm.

The LM algorithm works well in the neighbourhood of the solution, but does

not perform very well at the early stages, particularly when the initial estimates

are far from the solution. The LM algorithm can also produce spectacularly bad

results when the Jacobian matrix is ill-conditioned or near-singular.

The damped NR algorithm is intended to help address the issue of ill-conditioned

and near-singular Jacobian matrices. Adding a damping factor λI to the Jaco-

bian matrix makes it more likely to be invertible. However, the damped NR

algorithm takes longer to converge. This is shown in the simulation results by

comparing the convergence rate when the maximum number of iterations is in-

creased from 30 (base case) to 50. When the maximum number of iterations is

30, the convergence rate is 15.7% (lower than the conventional NR algorithm).

But when it is raised to 50, the convergence rate improves to 17.2% (slightly

higher than the conventional NR algorithm).

The average squared error of the damped NR algorithm is also significantly

lower than the conventional NR and LM algorithms. Therefore, compared to the

conventional NR algorithm, the damped NR algorithm can produce results with

better convergence and error rates, but at higher computational cost.

4.6 Selection of Linear Restrictions

It was previously shown that two linear restrictions are necessary to make the

problem formulations exactly determined and thus solvable by steepest descent

methods. The selection of the linear restrictions kr and kx is important because

by constraining Rs and Xr2 with linear restrictions, we are also constraining

the solution space by two degrees of freedom. Therefore, a solution to a non-

converging problem could potentially be found at different values of kr and kx.


In this section, the effects of linear restriction selection on the convergence and

error rates of the conventional NR algorithm are investigated. Three approaches

for selecting linear restrictions will firstly be presented, followed by computer

simulations on the EURODEEM motor data set.

4.6.1 Approaches for Selecting Linear Restrictions

Fixed Restrictions

The simplest approach is to select a set of fixed linear restrictions that are ap-

plicable for all motors. The following values of kr and kx are recommended in

[9]:

kr = 0.5

kx = 1

Interval Search

For each motor, the NR algorithm is run multiple times with different values

of kr and/or kx selected from a closed interval with discrete steps. The value

that leads to algorithm convergence or error minimisation is the selected linear

restriction. In this project, the interval ranges and step sizes that are used for kr

and kx are as follows:

kr: 0.1 ≤ kr ≤ 1.5 in steps of 0.1

kx: 0.4 ≤ kx ≤ 1.5 in steps of 0.1


Heuristic for kx

Based on a rudimentary cluster analysis of the kx interval search simulation re-

sults, a simple heuristic for selecting kx is proposed:

IF (Ilr < 6 OR cosφfl < 0.8) AND (Tb < 4)

THEN select kx = 1

ELSE select kx = 0.5
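The heuristic above in code form; the quantities are per-unit nameplate values and the function name is an assumption of this sketch:

```python
def select_kx(Ilr, cos_phi_fl, Tb):
    """Heuristic selection of the linear restriction kx from the locked
    rotor current, full load power factor and breakdown torque (per unit)."""
    if (Ilr < 6 or cos_phi_fl < 0.8) and Tb < 4:
        return 1.0
    return 0.5
```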

Likewise, a cluster analysis was performed on the kr interval search simulation

results, but this did not lead to any suitable heuristics for selecting kr. This is

consistent with the fact that there is no physical relationship between the stator

and rotor resistances and thus a forced linear restriction is meaningless.

4.6.2 Computer Simulation

This section investigates the effects of varying the linear restrictions on the con-

vergence and error rates of the Newton-Raphson algorithm. The algorithm was

tested on the EURODEEM motor data set (see Appendix A) for the double cage

model (with core losses).

The conventional Newton-Raphson algorithm was tested using the following

approaches for selecting linear restrictions:

1. Fixed values kx = 1 and kr = 0.5

2. Fixed values kx = 0.5 and kr = 1

3. Fixed values kx = 1 and kr = 1

4. Fixed values kx = 0.5 and kr = 0.5


5. Interval search on kx, fixed value kr = 0.5

6. Interval search on kx, fixed value kr = 1

7. Heuristic kx, fixed value kr = 0.5

8. Heuristic kx, fixed value kr = 1

9. Fixed value kx = 1, interval search on kr

10. Fixed value kx = 0.5, interval search on kr

11. Heuristic kx, interval search on kr

The results of the simulations are shown in Table 4.3. It can be seen that the

choices of kr and kx significantly affect both the rate of convergence and the

average squared error. With the IEC motors, the average error is almost 10

times lower when selecting kx with a heuristic and kr with an interval search than when using fixed linear restrictions (kx = 1 and kr = 0.5).

When fixed linear restrictions are used, the default values of kx = 1 and

kr = 0.5 produce the worst results. Convergence is increased by simply adopting

different fixed linear restrictions, for example kx = 0.5 and kr = 1.

Varying kx yields modest improvements over using a fixed kx. The selection

of kx with the simple heuristic proposed in this project leads to marginally worse

convergence compared to an interval search on kx, and the average squared error is

almost doubled. Notwithstanding, it is still a reasonable improvement over using

fixed linear restrictions with little additional cost in terms of computation. While

the results of the interval search on kx are superior, the process is considerably

more computationally intensive.

The results show that varying kr yields much more significant gains than

varying kx, in terms of both convergence and average error rates. Even though


Case                              IEC Motors                     NEMA Motors
                                  Convergence   Average Error^2  Convergence   Average Error^2
kx = 1, kr = 0.5                  685 (17.1%)   0.5411           751 (31.6%)   0.2514
kx = 0.5, kr = 1                  974 (24.3%)   0.9261           934 (39.3%)   0.1425
kx = 1, kr = 1                    893 (22.3%)   0.3096           866 (36.4%)   0.2086
kx = 0.5, kr = 0.5                782 (19.5%)   0.3939           804 (33.8%)   0.1846
kx interval search, kr = 0.5      816 (20.4%)   0.2057           829 (34.9%)   0.0865
kx interval search, kr = 1        1068 (26.7%)  0.0449           955 (40.2%)   0.0567
kx heuristic, kr = 0.5            777 (19.4%)   0.3924           797 (33.5%)   0.1803
kx heuristic, kr = 1              1000 (25.0%)  0.1762           917 (38.6%)   0.1361
kx = 1, kr interval search        1140 (28.5%)  0.0979           1003 (42.2%)  0.0787
kx = 0.5, kr interval search      1218 (30.4%)  0.0572           1048 (44.1%)  0.0738
kx heuristic, kr interval search  1230 (30.7%)  0.0556           1120 (47.1%)  0.0611

Table 4.3: Conventional Newton-Raphson algorithm results for the double cage model using different methods for selecting linear restrictions

the convergence rate is still under 50% in both the IEC and NEMA data sets,

the errors are relatively low for the majority of motors.

4.7 Selection of Initial Conditions

So far, the descent algorithms presented in this chapter have used the baseline

initial conditions suggested by Pedra [5] and described in section 4.2.3. In this

section, alternative approaches for calculating initial estimates are reviewed and their performance against the baseline methodology is investigated. A consolidated

methodology is then adopted and incorporated into the baseline descent algo-

rithms.


4.7.1 Methods for Calculating Initial Conditions

Stator Resistance Rs

Rogers and Shirmohammadi suggested that the losses in an induction machine

can be formulated as follows (ignoring core losses) [2]:

Pin (1 - η) = Is^2 Rs + Ir^2 Rr    (4.10)

At rated full load, Pin = cos φfl, η = ηfl, Is = 1.0 pu and Tn = Ir^2 Rr / sf.
Substituting these quantities into the equation above:

cos φfl (1 - ηfl) = Rs + sf Tn    (4.11)

Rated torque is also equivalent to Tn = ηfl cos φfl / nfl. Substituting this in yields:

cos φfl (1 - ηfl) = Rs + sf ηfl cos φfl / nfl    (4.12)

Finally, solving for Rs yields:

Rs = cos φfl (1 - ηfl) / nfl    (4.13)

Stator Reactance Xs

Pedra assumes that the stator reactance is calculated based on the magnetising

reactance [5]:

Xs = Xm / 20    (4.14)

In the formulation of Rogers and Shirmohammadi, the total leakage reac-

tance of the motor is first calculated. Neglecting the magnetising branch, we can


approximate the equivalent circuit at locked rotor (s = 1) as per Figure 4.2.

Figure 4.2: Simplified motor equivalent circuit at locked rotor

Applying Kirchhoff’s voltage law yields the following equation:

U - Ilr √((Rs + Rr)^2 + XL^2) = 0    (4.15)

Assuming nominal voltage U = 1.0 pu and solving for XL yields:

XL = √(1/Ilr^2 - (Rs + Rr)^2)    (4.16)

Assuming that the leakage reactances are split evenly across the stator and rotor circuits, then:

Xs = XL / 2    (4.17)

Rotor Resistances Rr, Rr1 and Rr2

Pedra estimates the rotor resistance by using the following approximation for

motor active power [5]:

Pm,fl ≈ U^2 / (Rr / sf)    (4.18)

Based on nominal voltage U = 1.0 pu and solving for Rr:


Figure 4.3: Approximate breakdown of stator current

Rr ≈ sf / Pm,fl    (4.19)

For the double cage model, Pedra assumes that Rr1 = Rr and Rr2 = 5Rr1.

Rogers and Shirmohammadi estimate the rotor resistance by first assuming

that the stator current can approximately be broken down into a real rotor current

component and a reactive magnetising current component (see Figure 4.3), i.e.

Ir = Is cosφ (4.20)

Im = Is sinφ (4.21)

Knowing that:

Pm,fl = ηfl cos φfl = Tn nfl = (Ir^2 Rr / sf) nfl    (4.22)

We substitute Ir = Is cosφ to yield:

ηfl cos φfl = (Is cos φ)^2 (Rr / sf) nfl    (4.23)

At full load, Is = 1.0 pu. Therefore, solving for Rr:


Rr = ηfl sf / (cos φ nfl)    (4.24)

For the double cage model, the locked rotor characteristics of the motor are

used. At locked rotor s = 1, the torque developed by the machine can be given

by Tlr = Ilr^2 Rlr, where Rlr is the equivalent rotor resistance at locked rotor.

Therefore:

Rlr = Tlr / Ilr^2    (4.25)

Using a machine design factor m defined as follows:

m = (Rr1 + Rr2) / Xr    (4.26)

The rotor resistances can then be calculated as follows:

Rr1 = Rlr (1 + m^2) - Rr m^2    (4.27)

Rr2 = Rr1 Rr / (Rr1 - Rr)    (4.28)

A typical value of m = 0.5 … 0.7 is suggested by Rogers and Shirmohammadi [2].
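Equations 4.25, 4.27 and 4.28 combine into a short routine; the default m = 0.6 sits inside the suggested 0.5-0.7 range, and the function interface is an assumption of this sketch:

```python
def rotor_resistances(Tlr, Ilr, Rr, m=0.6):
    """Split the rotor resistance into inner/outer cage values using the
    locked rotor torque/current and the machine design factor m
    (eqs. 4.25, 4.27, 4.28), all quantities in per unit."""
    Rlr = Tlr / Ilr ** 2                     # equivalent resistance at locked rotor
    Rr1 = Rlr * (1 + m ** 2) - Rr * m ** 2   # inner cage resistance
    Rr2 = Rr1 * Rr / (Rr1 - Rr)              # outer cage resistance
    return Rr1, Rr2
```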

Rotor Reactances Xr1 and Xr2

Pedra assumes that the inner cage rotor reactance is related to the stator reac-

tance as follows [5]:

Xr1 = 1.2Xs (4.29)


The outer cage reactance is calculated by the linear restriction Xr2 = kxXs.

Similarly, Rogers and Shirmohammadi estimate the total rotor reactance Xr

based on the leakage reactance being evenly split between stator and rotor, i.e.

[2]

Xr = (1/2) √(1/Ilr^2 - (Rs + Rlr)^2)    (4.30)

Magnetising Reactance Xm

Pedra estimates the magnetising reactance by using the following approximation

for motor reactive power [5]:

Qfl ≈ U^2 / Xm    (4.31)

Based on nominal voltage U = 1.0 pu and solving for Xm:

Xm ≈ 1 / Qfl    (4.32)

Using the same assumption shown in Figure 4.3, we can see that:

tan φ = Im / Ir = Rr / (sf Xm)    (4.33)

Substituting Ir = Is cosφ and Im = Is sinφ, we get:

(Is sin φ) / (Is cos φ) = Rr / (sf Xm)    (4.34)

Solving for Xm:


Xm = ηfl / (sin φ nfl)    (4.35)

Core Loss Resistance Rc

The full-load efficiency of the motor can be expressed as follows:

ηfl = Pm,fl / Pin,fl    (4.36)

Where Pin,fl is the full-load input power of the motor. Assuming that the core

losses make up the bulk of the motor losses, the input power can be approximated

as Pin,fl = Pm,fl + Pc, where Pc is the core power loss. Substituting this into the

efficiency equation:

ηfl = Pm,fl / (Pm,fl + Pc)    (4.37)

The core loss power can be approximated by Pc = U^2 / Rc. At nominal voltage, U = 1.0 pu. Substituting this into equation 4.37 and solving for Rc:

Rc = ηfl / (Pm,fl (1 - ηfl))    (4.38)

4.7.2 Sets of Initial Conditions

Pedra Formulation

As described earlier in Section 4.2.3, the set of initial conditions suggested by

Pedra is as follows [5]:


Rr1 = Un sf / Pm,fl

Xm = Un / Qfl

Xs = 0.05 Xm

Rs = kr Rr1

Rr2 = 5 Rr1

Xr1 = 1.2 Xs

Xr2 = kx Xs

Johnson and Willis Formulation

Johnson and Willis suggested the following constant values ("flat start") for the initial conditions [1]:

Rs = 0.01

Xs = 0.05

Xm = 2.5

Rr1 = 0.01

Rr2 = 0.05

Xr1 = 0.05

Xr2 = 0.01


Rogers and Shirmohammadi Formulation

The set of initial conditions proposed by Rogers and Shirmohammadi is as follows [2]:

Rs = cosφfl (1 − ηfl)/nfl

Rr = ηfl sf/(cosφ nfl)

Rlr = Tlr/Ilr²

Xm = ηfl/(sinφ nfl)

Xs = Xr = (1/2)√(1/Ilr² − (Rs + Rlr)²)

Rr1 = Rlr(1 + m²) − Rr m²

Rr2 = Rr1 Rr/(Rr1 − Rr)

Where m = 0.5 . . . 0.7.

This formulation is also used by Lindenmeyer et al. [4].

Consolidated Formulation

A consolidated set of initial conditions comprising a mixture of methods for

calculating initial conditions is presented as follows:


Rs = cosφfl (1 − ηfl)/nfl

Rlr = Tlr/Ilr²

Xs = (1/2)√(1/Ilr² − (Rs + Rlr)²)

Xm = Un²/Qfl

Rr1 = Un² sf/Pm,fl

Xr1 = 2 Xs

Rr2 = 10 Rr1

Xr2 = 2 Xs

Rc = ηfl/(Pm,fl(1 − ηfl))
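The consolidated initial conditions can be collected into a single routine. This is an illustrative sketch under the per-unit convention Un = 1.0 pu, not the code used in this project, and the input values in the usage example are made up:

```python
import math

def consolidated_initial_conditions(cos_phi_fl, eta_fl, n_fl, s_f,
                                    q_fl, p_m_fl, t_lr, i_lr):
    """Consolidated initial estimates for the double cage model
    (Section 4.7.2). Inputs (all pu): full-load power factor,
    efficiency, speed, slip, reactive power, mechanical power;
    locked rotor torque and current."""
    rs = cos_phi_fl * (1.0 - eta_fl) / n_fl
    r_lr = t_lr / i_lr**2
    xs = 0.5 * math.sqrt(1.0 / i_lr**2 - (rs + r_lr)**2)
    xm = 1.0 / q_fl              # Un^2 / Qfl with Un = 1.0 pu
    rr1 = s_f / p_m_fl           # Un^2 * sf / Pm,fl with Un = 1.0 pu
    rc = eta_fl / (p_m_fl * (1.0 - eta_fl))
    return {
        "Rs": rs, "Xs": xs, "Xm": xm, "Rr1": rr1,
        "Xr1": 2.0 * xs, "Rr2": 10.0 * rr1, "Xr2": 2.0 * xs, "Rc": rc,
    }

# Illustrative usage (made-up motor data, all pu):
ic = consolidated_initial_conditions(0.85, 0.9, 0.985, 0.015,
                                     0.5, 1.0, 2.0, 6.0)
```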

4.7.3 Performance of Initial Conditions

The sets of initial conditions described in the previous section are tested by

themselves on the EURODEEM motor data set (see Appendix A). The results

of the tests are evaluated in terms of the squared errors and presented in Table 4.4 and Table 4.5 for the IEC and NEMA motors respectively.

As can be expected, a flat start (i.e. Johnson and Willis method) yields very

poor results. The Pedra formulation is simulated for various values of the linear

restrictions kr and kx, and, similar to the results in Section 4.6, linear restrictions of kr = 1 and kx = 0.5 give the best results. The performance of the Rogers

and Shirmohammadi method is generally inferior to that of Pedra’s formulation.

However, the best performance comes from the consolidated method, which is

essentially a mixture of the Pedra and Rogers and Shirmohammadi approaches.


Case                                 Squared Error (IEC Motors)
                                     Average    Variance   Minimum   Maximum
Johnson and Willis                   29.6398    1608.1     0.3595    540.2915
Pedra (kx = 1, kr = 0.5)             0.7505     1.6739     0.0120    19.769
Pedra (kx = 1, kr = 1)               0.4153     0.2226     0.0081    7.0569
Pedra (kx = 0.5, kr = 1)             0.3458     0.2445     0.013     7.8627
Pedra (kx = 0.5, kr = 0.5)           0.702      1.8053     0.014     20.1632
Pedra (kx = 1.5, kr = 1)             0.4778     0.2219     0.0086    6.9272
Rogers and Shirmohammadi (m = 0.5)   0.6899     0.8317     0.0109    7.4035
Rogers and Shirmohammadi (m = 0.7)   0.6642     0.7395     0.0108    14.2079
Consolidated Formulation             0.2757     0.0954     0.0153    2.309

Table 4.4: Squared error performance of different initial conditions for IEC motors

Case                                 Squared Error (NEMA Motors)
                                     Average    Variance   Minimum   Maximum
Johnson and Willis                   13.0493    123.3466   1.5787    105.8328
Pedra (kx = 1, kr = 0.5)             0.5542     1.3249     0.0059    20.2474
Pedra (kx = 1, kr = 1)               0.4119     0.3021     0.0119    8.3059
Pedra (kx = 0.5, kr = 1)             0.3934     0.4122     0.0126    8.9798
Pedra (kx = 0.5, kr = 0.5)           0.5678     1.5901     0.0083    20.7859
Pedra (kx = 1.5, kr = 1)             0.4773     0.2674     0.0085    7.6698
Rogers and Shirmohammadi (m = 0.5)   1.3129     3.9838     0.0135    29.1587
Rogers and Shirmohammadi (m = 0.7)   1.3744     5.7699     0.0126    31.1414
Consolidated Formulation             0.2555     0.1795     0.0238    8.7821

Table 4.5: Squared error performance of different initial conditions for NEMA motors


4.7.4 Incorporation of Initial Conditions into Descent Algorithms

It was shown earlier that the consolidated set of initial condition estimates yields

the best performance on the IEC and NEMA data sets. It would be beneficial to

use this set of improved initial conditions in the descent algorithms, but incorporating different initial estimates is complicated by the use of linear restrictions

in the descent algorithms.

New initial estimates for Xs, Xm, Rr1, Rr2, Xr1 and Rc can be incorporated

directly into the descent algorithms. However, since Rs and Xr2 are subject to

linear restrictions, new initial estimates for these parameters need to be treated

in a special manner.

These parameters would either need to be treated as fixed values or, alternatively, the linear restrictions kr and kx could be calculated based on the initial

estimates, i.e.

kr = Rs,initial/Rr1,initial

kx = Xr2,initial/Xs,initial

Both of these options are explored in computer simulations on the EURODEEM

data set using the conventional NR algorithm and the consolidated set of initial

conditions described in Section 4.7.2. The results of the simulations are shown

in Table 4.6.

It can be seen that both options yield very poor convergence, although error rates are improved relative to the base case. As shown in Section 4.6, the

NR algorithm is sensitive to the choice of linear restrictions and these results

indicate that the selection of linear restrictions based on the initial conditions or

using fixed parameters for Rs and Xr2 leads to significantly worse convergence

performance.


Case                                              IEC Motors                  NEMA Motors
                                                  Convergence    Avg Error²   Convergence   Avg Error²
Base case (kx = 1, kr = 0.5)                      685 (17.1%)    0.5411       751 (31.6%)   0.2514
Revised initial estimates (Rs and Xr2 fixed)      0 (0.0%)       0.2450       0 (0.0%)      0.1918
Revised initial estimates (kx and kr calculated)  60 (1.5%)      0.2731       52 (2.2%)     0.2108

Table 4.6: Conventional Newton-Raphson algorithm results for double cage model with revised initial conditions

Rather than trying to incorporate all of the revised initial conditions in the

consolidated set, it would be interesting to investigate the performance of the

conventional NR algorithm when only the revised initial estimates for Xs and Rc

are applied, i.e.

Rlr = Tlr/Ilr²

Xs = (1/2)√(1/Ilr² − (Rs + Rlr)²)

Rc = ηfl/(Pm,fl(1 − ηfl))

The simulation results are shown in Table 4.7. The use of the revised initial

estimates Xs and Rc yields improvements in the convergence of the conventional

NR algorithm, as well as the average squared error in the IEC motors. Interestingly, the average squared errors in the NEMA motors are worse using the revised initial estimates, even though convergence is increased.


Case                        IEC Motors                  NEMA Motors
                            Convergence    Avg Error²   Convergence   Avg Error²
Base case                   685 (17.1%)    0.5411       751 (31.6%)   0.2514
Revised initial estimates   808 (20.2%)    0.1935       770 (32.4%)   0.3297

Table 4.7: Conventional Newton-Raphson algorithm results for double cage model with revised initial estimates for Xs and Rc (with kx = 1, kr = 0.5)

4.8 Conclusions about Descent Algorithms

Based on the simulation results and investigations conducted in this section, the

following conclusions can be made about the performance of descent algorithms:

• Conventional NR and LM algorithms exhibit problems with ill-conditioned

and near singular Jacobian matrices

• There is a trade-off between low average error rates and high convergence in the different descent algorithms. The LM algorithm yields higher convergence but higher average error rates, while the damped NR algorithm yields the opposite.

• Error rates can go out of control in the LM algorithm, i.e. when it fails, it

can fail spectacularly.

• Descent algorithm performance is significantly influenced by the selection

of linear restrictions.

• The linear restriction for the outer cage reactance kx has a less significant

effect on convergence and error rates than the linear restriction for the

stator resistance kr.

• There are discernible patterns for the selection of kx based on the characteristics of the motor (e.g. locked rotor current, breakdown torque, etc.). A crude heuristic can be formed to select an appropriate value of kx.

• The optimal linear restriction for the stator resistance kr appears to be randomly distributed in motors that converge. This is consistent with the fact

that there is no physical relationship between stator and rotor resistances.

• The choice of initial conditions can affect the convergence and error rates

of descent algorithms, but there are complications incorporating the initial

conditions for parameters that are subject to linear restrictions (i.e. Rs and

Xr2).


CHAPTER 5

Natural Optimisation Algorithms

5.1 Genetic Algorithm

The genetic algorithm (GA) is part of the class of evolutionary algorithms modelled on natural selection and evolutionary processes to optimise non-linear cost

functions. It was developed in the 1960s and 1970s, but only gained widespread

popularity in the late 1980s when advances in computational processing power

made the algorithm more practical to apply on desktop computers [13].

The goal of the genetic algorithm is to minimise a non-linear cost function.

For non-linear least squares problems, this can be interpreted as minimising the

squared error residuals. The general methodology can be summarised under

four broad headings: 1) Initialisation, 2) Fitness and Selection, 3) Breeding and Inheritance, and 4) Termination.

1. Initialisation: an initial population of candidate solutions to minimise the

cost function is first generated, typically via random sampling. The size of

the population is an algorithm setting and largely depends on the nature

of the problem.

2. Fitness and Selection: the population is ranked according to the fitness


of its members. The fitness of each member is normally calculated from

the cost function, where lower values signal higher fitness. The fittest set

of members are selected to evolve / breed the next generation.

3. Breeding and Inheritance: the members chosen in the selection stage

are designated as "parents" and evolved to create the next generation of candidate solutions ("children"). There are three common methods for generating children: elite children, crossover and mutation.

Elite children are simply clones of the fittest-ranked parents. In the crossover

operation, pairs of parents are bred together by randomly selecting traits

from each parent that are passed on to the children. In the mutation operation, the traits of a parent are randomly altered (mutated) and then

passed on to the children. Crossover and mutation are obviously inspired

by nature and in the genetic algorithm, they can either be implemented

simultaneously or as separate processes.

4. Termination: once the next generation of candidate solutions has been

produced, the population is then ranked again according to their fitness.

Since the least fit members of the previous generation have been discarded,

the average fitness of the new generation should be higher. Successive

generations are bred until either a converging solution is found (i.e. a

squared error < 1×10−5) or the maximum number of generations is reached.

5.1.1 Application of GA to Motor Parameter Estimation

In the context of motor parameter estimation, the genetic algorithm is used to

minimise the squared error of the problem formulation vector F, described earlier

in Section 3.1. In GA terminology, the squared error is the fitness function and

is calculated as follows:


fitness = FF′ (5.1)

where F = (f1, f2, f3, f4, f5, f6)

Genetic algorithms can be binary coded, where the solution parameters are quantized into binary strings (for example, in [14], [15] and [16]). However, the equivalent circuit parameters in a motor are continuous parameters and not naturally quantized. Thus, binary coding necessarily imposes limits on the precision of the parameters (i.e. due to the chosen length of the binary string). For this reason, a continuous parameter algorithm is used instead.

An initial population of npop parameter estimates are randomly sampled from

a uniform distribution with upper and lower limits as shown in Table 5.1.

Parameter   Lower Bound (pu)   Upper Bound (pu)
Rs          0                  0.15
Xs          0                  0.15
Xm          0                  5
Rr1         0                  0.15
Xr1         0                  0.30
Rr2         0                  0.15
Xr2         0                  0.15
Rc          0                  100

Table 5.1: Range of initial parameter estimates

The fitness of each member in the population is then calculated and ranked.

The lowest fitness members are discarded and the rest are retained to form the

mating pool for the next generation (there are npool members in the mating pool).

The fittest ne members in the mating pool are retained for the next generation

as elite children.


Of the remaining npop − ne children to be created for the next generation, a fraction cf will be produced by crossover and the rest (1 − cf) by mutation. The proportion cf is called the crossover fraction.

1. Crossover: in the crossover process, two members of the mating pool

are randomly selected and combined by taking a random blend of each

member’s parameters, e.g. the crossover of parameter Rs:

Rs,child = αRs,parent1 + (1− α)Rs,parent2 (5.2)

where α is a random variable selected from a uniform distribution over the

interval [0, 1]

2. Mutation: in the mutation process, a member of the mating pool is randomly selected and its parameters are mutated by adding Gaussian noise

with parameter-dependent standard deviations (see Table 5.2).

Parameter   Standard Deviation (σ)
Rs          0.01
Xs          0.01
Xm          0.33
Rr1         0.01
Xr1         0.01
Rr2         0.01
Xr2         0.01
Rc          6.67

Table 5.2: Standard deviations for mutation noise

The fitness of the next generation is then calculated and the process repeats

itself for ngen generations.
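The continuous-parameter GA described above can be sketched as follows. This is an illustrative implementation, not the code used in this project: the parameter bounds follow Table 5.1, the mutation noise follows Table 5.2, the control settings follow Table 5.3, and the `fitness` callable stands in for the squared error FF′ of the problem formulation vector:

```python
import random

# Parameter upper bounds (Table 5.1) and mutation noise (Table 5.2), in pu
BOUNDS = {"Rs": 0.15, "Xs": 0.15, "Xm": 5.0, "Rr1": 0.15,
          "Xr1": 0.30, "Rr2": 0.15, "Xr2": 0.15, "Rc": 100.0}
SIGMA = {"Rs": 0.01, "Xs": 0.01, "Xm": 0.33, "Rr1": 0.01,
         "Xr1": 0.01, "Rr2": 0.01, "Xr2": 0.01, "Rc": 6.67}

def genetic_algorithm(fitness, n_pop=20, n_pool=15, n_elite=2,
                      cf=0.8, n_gen=30, tol=1e-5, rng=None):
    """Continuous-parameter GA (sketch). `fitness` maps a parameter
    dict to a squared error; lower values are fitter."""
    rng = rng or random.Random(0)
    keys = list(BOUNDS)
    pop = [{k: rng.uniform(0.0, BOUNDS[k]) for k in keys}
           for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=fitness)                 # rank by fitness
        if fitness(pop[0]) < tol:             # converging solution found
            break
        pool = pop[:n_pool]                   # mating pool
        children = [dict(p) for p in pool[:n_elite]]   # elite children
        while len(children) < n_pop:
            if rng.random() < cf:             # crossover blend (eq. 5.2)
                p1, p2 = rng.sample(pool, 2)
                a = rng.random()
                children.append({k: a * p1[k] + (1 - a) * p2[k]
                                 for k in keys})
            else:                             # Gaussian mutation
                p = rng.choice(pool)
                children.append({k: p[k] + rng.gauss(0.0, SIGMA[k])
                                 for k in keys})
        pop = children
    return min(pop, key=fitness)
```

With a well-behaved toy fitness function, elitism guarantees that the best member never gets worse from one generation to the next, which is why the best fitness decreases monotonically.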


The default settings for the genetic algorithm implemented in this project for

motor parameter estimation are shown in Table 5.3. A flowchart of the genetic

algorithm implemented in this project is shown in Figure 5.1.

Setting   Description                            Default Value
npop      Population of each generation          20
npool     Number of members in the mating pool   15
ne        Number of elite children               2
cf        Crossover fraction                     80%
ngen      Maximum number of generations          30

Table 5.3: Default settings for genetic algorithm

5.1.2 Computer Simulation

The genetic algorithm was tested on the EURODEEM motor data set (see Appendix A) for the double cage model (with core losses). In the simulations,

convergence is defined as an error rate of < 1 × 10−5. The results of the simulations on the double cage model are shown in Table 5.4 with the results of the

conventional Newton-Raphson algorithm shown for comparison.

The simulation results show that the error rates of the genetic algorithms are never low enough to qualify for "convergence". However, the average squared error rates are significantly lower than in the conventional NR algorithm, even when the maximum number of generations is just 10.

Case              IEC Motors                  NEMA Motors
                  Convergence    Avg Error²   Convergence   Avg Error²
Newton-Raphson    685 (17.1%)    0.5411       751 (31.6%)   0.2514
10 generations    0 (0%)         0.1530       0 (0%)        0.1032
30 generations    0 (0%)         0.0471       0 (0%)        0.0403
50 generations    0 (0%)         0.0281       0 (0%)        0.0328
100 generations   0 (0%)         0.0186       0 (0%)        0.0268

Table 5.4: Results of the genetic algorithm for the double cage model

Figure 5.1: Flowchart for genetic algorithm

Figure 5.2: Error rates vs maximum number of generations

As expected, increasing the number of generations leads to corresponding

decreases in the error rate. Figure 5.2 shows the minimum, maximum and average

squared error rates versus the maximum number of generations in the genetic

algorithm. From the figure, we see that the error rate falls sharply when the

maximum number of generations is increased from 10 to 30, but begins to exhibit

diminishing returns thereafter. The minimum error also appears to bottom out at

a maximum number of generations of 30, even though the maximum and average

errors keep falling.


5.2 Other Natural Optimisation Algorithms

Since the development and popularisation of the genetic algorithm, a number

of other natural optimisation algorithms have been introduced. All of these

algorithms are inspired by natural processes and all share the basic methodology

of injecting randomness and selecting for fitness in order to iteratively reach an

optimal point in the search-space.

The literature abounds with examples of natural optimisation algorithms

adapted for the estimation of induction motor parameters, for instance the following works:

• Simulated annealing [17]

• Particle swarm optimisation [18] [19]

• Artificial immune system [20]

• Bacterial foraging technique [21]

• Ant colony optimisation [22]

• Harmony search [23]

The results of these investigations indicate that, while some of the alternative algorithms offer faster convergence than the genetic algorithm, they tend to converge to the same solution and error rates. As discussed in the previous

section, these error rates are not low enough to qualify for convergence as defined

by squared errors < 1×10−5. For this reason, the alternative natural optimisation

algorithms are not explored in this project, though they could be examined in future investigations.


5.3 Conclusions about Natural Optimisation Algorithms

Based on the simulation results and investigations conducted in this section, the

following conclusions can be made about the performance of natural optimisation

algorithms:

• Natural optimisation methods do not yield solutions with very low error

rates, unlike steepest descent methods when they converge. The simulations

suggest that squared error rates bottom out at approximately 4× 10−5.

• At a high number of generations, the genetic algorithm tends to produce

results with low average and maximum errors. For example, the maximum

squared error with 100 generations is 0.8062 compared to 5.9996 for the

conventional NR algorithm, 19.771 for the LM algorithm and 9.2314 for

the damped NR algorithm.

• Increasing the number of generations has diminishing returns with respect

to the error rates. From the simulation results, the error rates begin to

flatten out after 30 generations.

• Unlike descent algorithms, natural optimisation algorithms are not sensitive

to the choice of initial conditions. Indeed in most natural optimisation

algorithms, the initial conditions are randomly selected.


CHAPTER 6

Hybrid Algorithms

6.1 Motivation for Hybrid Algorithms

It was shown in Section 4.6 that linear restrictions imposed on Rs and Xr2 have

a significant influence on the convergence and error rates of descent algorithms.

The parameters Rs and Xr2 are also difficult to estimate accurately based solely

on commonly available manufacturer data. Moreover, the selection of initial

conditions can also affect the performance of descent algorithms.

On the other hand, natural optimisation algorithms can yield lower average

error rates, but never low enough to qualify for convergence (as defined by a

squared error of < 1× 10−5). However, the performance of natural optimisation

algorithms is unaffected by the choice of initial conditions.

Therefore, hybrid algorithms are proposed, which are the combinations of

natural optimisation and descent algorithms. The goal of combining descent and

natural optimisation algorithms is to accentuate the advantages of each algorithm, while at the same time mitigating the disadvantages.


6.2 Proposed Hybrid Algorithm

The proposed hybrid algorithm attempts to overcome the limitations of descent

algorithms by applying a genetic algorithm to select Rs and Xr2. In other words,

a baseline descent algorithm (e.g. NR, Damped NR, LM, etc) is run with fixed

values for Rs and Xr2, which are in turn iteratively selected using a genetic

algorithm in an outer loop. A flowchart of the proposed hybrid algorithm is

shown in Figure 6.1. A more detailed description of the proposed algorithm

follows.

An initial population of npop estimates for Rs and Xr2 are randomly sampled

from a uniform distribution with upper and lower limits as shown in Table 6.1.

Each pair of estimates is referred to as a member of the population.

Parameter   Lower Bound (pu)   Upper Bound (pu)
Rs          0                  0.15
Xr2         0                  0.15

Table 6.1: Range of initial parameter estimates

The descent algorithm is then run on each member of the population. The

fitness of each member (in terms of the squared error F′F) is calculated and

ranked. The lowest fitness members are discarded and the rest are retained to

form the mating pool for the next generation (there are npool members in the

mating pool).

The fittest ne members in the mating pool are retained for the next generation

as elite children. Of the remaining npop − ne children to be created for the next generation, a fraction cf will be produced by crossover and the rest (1 − cf) by mutation. The proportion cf is called the crossover fraction.

1. Crossover: in the crossover process, two members of the mating pool


Figure 6.1: Flowchart for hybrid algorithm (with natural selection of Rs and Xr2)


are randomly selected and combined by taking a random blend of each

member’s parameters, e.g. the crossover of parameter Rs:

Rs,child = αRs,parent1 + (1− α)Rs,parent2 (6.1)

where α is a random variable selected from a uniform distribution over the

interval [0, 1]

2. Mutation: in the mutation process, a member of the mating pool is randomly selected and its parameters are mutated by adding Gaussian noise

with standard deviations of 0.01.

The descent algorithm is then run for the next generation of estimates for

Rs and Xr2. The fitness is calculated and the process repeats itself for ngen

generations. If at any point during the process the descent algorithm converges,

then the hybrid algorithm stops and selects the parameter estimates from the

converged descent algorithm as the solution. Otherwise, the parameter estimates

yielding the best fitness after ngen generations are selected.
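The outer loop described above can be sketched as follows. This is illustrative, not the code used in this project: the `descent` callable stands in for the inner solver (e.g. NR, damped NR or LM) and is assumed to return the squared error F′F achieved with the given (Rs, Xr2) pair held fixed; the bounds follow Table 6.1 and the settings follow Table 6.2:

```python
import random

def hybrid_descent_ga(descent, n_pop=15, n_pool=10, n_elite=2,
                      cf=0.8, n_gen=10, tol=1e-5, rng=None):
    """Hybrid algorithm sketch: a GA in the outer loop evolves fixed
    (Rs, Xr2) pairs, each evaluated by running the inner `descent`
    solver with those values held fixed."""
    rng = rng or random.Random(0)
    # Initial population of (Rs, Xr2) pairs (Table 6.1 bounds)
    pop = [(rng.uniform(0.0, 0.15), rng.uniform(0.0, 0.15))
           for _ in range(n_pop)]
    best = None
    for _ in range(n_gen):
        scored = sorted(pop, key=descent)       # run descent per member
        if best is None or descent(scored[0]) < descent(best):
            best = scored[0]
        if descent(best) < tol:                 # inner solver converged
            return best
        pool = scored[:n_pool]                  # mating pool
        children = list(pool[:n_elite])         # elite children
        while len(children) < n_pop:
            if rng.random() < cf:               # crossover blend
                (a1, b1), (a2, b2) = rng.sample(pool, 2)
                t = rng.random()
                children.append((t * a1 + (1 - t) * a2,
                                 t * b1 + (1 - t) * b2))
            else:                               # Gaussian mutation
                a, b = rng.choice(pool)
                children.append((a + rng.gauss(0.0, 0.01),
                                 b + rng.gauss(0.0, 0.01)))
        pop = children
    return best
```

Note that each fitness evaluation is a full run of the inner descent solver, which is the source of the computational cost discussed below.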

The default settings for the hybrid algorithm implemented in this project are shown in Table 6.2.

Setting   Description                            Default Value
npop      Population of each generation          15
npool     Number of members in the mating pool   10
ne        Number of elite children               2
cf        Crossover fraction                     80%
ngen      Maximum number of generations          10

Table 6.2: Default settings for hybrid algorithm


Case                                      IEC Motors                  NEMA Motors
                                          Convergence    Avg Error²   Convergence    Avg Error²
Baseline NR algorithm (kx = 1, kr = 0.5)  685 (17.1%)    0.5411       751 (31.6%)    0.2514
Hybrid NR-GA                              1363 (34.1%)   0.0625       1159 (48.7%)   0.0282
Hybrid LM-GA                              1388 (34.7%)   1.0941       1181 (49.7%)   0.5334
Hybrid DNR-GA                             1373 (34.3%)   0.0174       1168 (49.1%)   0.4228

Table 6.3: Simulation results for baseline NR and hybrid algorithms

6.3 Computer Simulation

The following variations of the hybrid algorithm were tested on the EURODEEM

data set (see Appendix A):

• NR-GA - Conventional NR and GA selection of Rs and Xr2

• LM-GA - Levenberg-Marquardt (with an error term lambda adjustment)

and GA selection of Rs and Xr2

• DNR-GA - Damped NR (with a maximum number of 30 iterations for each

damped NR step) and GA selection of Rs and Xr2

Table 6.3 shows the results of the simulations, presenting the rates of convergence and average squared errors of the proposed hybrid algorithms compared

with the baseline NR algorithm.

It can be seen from Table 6.3 that the proposed hybrid algorithms significantly

outperform the baseline NR algorithm, both in terms of convergence rates and

squared errors. In the IEC data set, the convergence rates are almost doubled

when using the hybrid algorithms, while in the NEMA data set, there is around

a 55% improvement. The average squared errors in the NR-GA and DNR-GA

hybrid algorithms are also significantly lower than the baseline NR algorithm for


the IEC and NEMA data sets. However, the LM-GA hybrid algorithm has a poor

average error performance despite having the best convergence rate, indicating

that the LM-GA algorithm either converges or fails badly.

There is a computational cost for the improvement in performance since the

hybrid algorithm is considerably more computationally intensive than any of the

descent algorithms. This is because the evolutionary part of the hybrid algorithm

must run the descent algorithm multiple times for each generation. For example,

based on the default settings shown in Table 6.2, the hybrid algorithm may have to perform up to npop × ngen = 15 × 10 = 150 descent algorithm runs. This would occur in the worst case, when the hybrid algorithm fails to converge.

The average computation times for the algorithms are discussed in more detail

in Section 7.4.

As motor parameter estimation for the purpose of system studies is not a

particularly time-critical task, it could be argued that algorithm performance is

a much larger driver than computational burden and solution time.


Comparative Analysis of Algorithms

In this chapter, the descent, natural optimisation and hybrid algorithms described

previously are compared and analysed across the following dimensions:

• Algorithm performance (convergence and error rates)

• Convergence vs error tolerance

• Algorithm performance vs motor rated power

• Algorithm computation / solution times

7.1 Comparison of Algorithm Performance

An overall summary of the simulation results obtained in this project is presented

in Table 7.1. The table shows the convergence rate and average and maximum

errors for each of the key algorithms described earlier in Chapters 4, 5 and 6.

Purely in terms of convergence and error rates, it can be seen from Table

7.1 that the hybrid algorithms are superior to the other algorithms. However,

there is no hybrid algorithm that clearly stands out as the best option. The LM-GA has the highest number of converging solutions, but when it fails, it yields

poor results (as evidenced by the high average error rate). On the other hand,


Case                                     IEC Motors                                              NEMA Motors
                                         Convergence    Avg Error²   Max Error²                  Convergence    Avg Error²   Max Error²
Newton-Raphson (kx = 1, kr = 0.5)        685 (17.1%)    0.5411       5.9996                      751 (31.6%)    0.2514       5.9902
Levenberg-Marquardt (kx = 1, kr = 0.5)   740 (18.5%)    0.9114       6                           770 (32.4%)    0.2867       6
Damped NR (kx = 1, kr = 0.5)             628 (15.7%)    0.2058       9.2314                      568 (23.9%)    0.0899       4.6188
Newton-Raphson (kx = 0.5, kr = 1)        974 (24.3%)    0.9261       120.08                      934 (39.3%)    0.1425       5.999
Levenberg-Marquardt (kx = 0.5, kr = 1)   1035 (25.9%)   2.691        6                           945 (39.7%)    1.7268       6
Damped NR (kx = 0.5, kr = 1)             1006 (25.1%)   0.04         4.4584                      935 (39.3%)    0.05054      4.6574
Genetic Algorithm (max gens = 30)        0 (0.0%)       0.0471       3.6459                      0 (0.0%)       0.04029      0.50916
Genetic Algorithm (max gens = 50)        0 (0.0%)       0.0281       2.031                       0 (0.0%)       0.03279      0.45363
Genetic Algorithm (max gens = 100)       0 (0.0%)       0.01861      0.8062                      0 (0.0%)       0.02676      0.37716
Hybrid NR-GA                             1363 (34.1%)   0.0625       5.9998                      1159 (48.7%)   0.0282       0.9682
Hybrid LM-GA                             1388 (34.7%)   1.0941       6                           1181 (49.7%)   0.53387     6
Hybrid DNR-GA                            1373 (34.3%)   0.0174       5.9995                      1168 (49.1%)   0.01948     0.42282

Table 7.1: Summary of simulation results for the double cage model


the DNR-GA is more consistent in yielding low error rates, but the convergence rate is also lower. Moreover, the conventional genetic algorithm run with 100 generations produces solutions with low average and maximum errors (though it never converges).

Figure 7.1: Convergence rate versus error tolerance plot for IEC motors

7.2 Convergence and Error Tolerance

Throughout this project, the default criterion for convergence (i.e. error tolerance) has been a squared error of 1 × 10−5. When the criterion for convergence is

relaxed, one would expect a corresponding increase in the convergence rate and

this is in fact what is observed in the results.

Figures 7.1 and 7.2 show the convergence rate as a function of the error tolerance for the conventional NR and hybrid NR-GA algorithms (Figure 7.1 is for the IEC motors and Figure 7.2 is for the NEMA motors). As expected, convergence rates increase as the error tolerance is increased. The slope of the curve starts off relatively flat at low error tolerances, but increases rapidly as the error tolerance is raised above 1×10−3. Note also that the hybrid NR-GA algorithm outperforms the conventional NR algorithm in all cases.


Figure 7.2: Convergence rate versus error tolerance plot for NEMA motors

It can be seen that the choice of error tolerance affects the convergence rate, but what do the error tolerances actually signify and how should one select an appropriate value? Figures 7.3 and 7.4 depict visually what choosing different error tolerance values means in terms of how well the estimate matches the performance parameters. In the figures, the parameters of a 75kW motor are estimated at different error tolerances ranging from < 1×10−5 (the default setting for convergence) to 0.4. The red stars indicate the motor performance parameters that the equivalent circuit parameters are being fitted to.

It can be seen from the figures that the torque-speed and current-speed curves begin to diverge from the performance parameters when the squared error reaches 5×10−3. When the squared error is > 0.1, it is visually evident that the estimated curves do not correspond well at all with the performance parameters.


Figure 7.3: Visual depiction of error tolerance - Torque-speed curve of 75kW motor

Figure 7.4: Visual depiction of error tolerance - Current-speed curve of 75kW motor


7.3 Algorithm Performance and Motor Rated Power

In this section, the performance of the algorithms is analysed with the data sets broken down by motor rated power. Table 7.2 presents the breakdown of the IEC and NEMA motor data sets, showing the quantity of motors in various nominal power ranges.

Motor Rating      No. IEC Motors   No. NEMA Motors
0.37kW - 3.6kW    1208             630
4kW - 15kW        963              598
18.5kW - 75kW     973              741
90kW - 185kW      477              284
200kW - 630kW     355              123
>630kW            26               2
TOTAL             4002             2378

Table 7.2: Breakdown of motor data sets by motor rated power

Case                   0.37 - 3.6kW        4 - 15kW            18.5 - 75kW         90 - 185kW          200 - 630kW         >630kW
                       Conv.      Avg Err² Conv.      Avg Err² Conv.      Avg Err² Conv.      Avg Err² Conv.      Avg Err² Conv.     Avg Err²
Newton-Raphson         4 (0.33%)  0.8323   52 (5.4%)  0.4891   305 (31.4%) 0.5880  166 (34.8%) 0.1788  137 (38.6%) 0.0887  21 (80.8%) 0.0092
  (kx = 1, kr = 0.5)
Levenberg-Marquardt    14 (1.16%) 1.9423   62 (6.44%) 0.6743   336 (34.5%) 0.1055  190 (39.8%) 0.3229  122 (34.4%) 0.9954  16 (61.5%) 1.6178
  (kx = 1, kr = 0.5)
Damped NR              4 (0.33%)  0.5279   36 (3.7%)  0.1132   262 (26.9%) 0.0447  160 (33.5%) 0.0402  143 (40.3%) 0.0401  23 (88.5%) 0.0047
  (kx = 1, kr = 0.5)
Genetic Algorithm      0 (0.0%)   0.0830   0 (0.0%)   0.0196   0 (0.0%)    0.0266  0 (0.0%)    0.0469  0 (0.0%)    0.0550  0 (0.0%)   0.0569
  (Max gens = 30)
Genetic Algorithm      0 (0.0%)   0.0446   0 (0.0%)   0.0157   0 (0.0%)    0.0180  0 (0.0%)    0.0251  0 (0.0%)    0.0362  0 (0.0%)   0.0313
  (Max gens = 50)
Hybrid NR-GA           48 (3.97%) 0.1842   245 (25.4%) 0.0219  559 (57.5%) 0.0035  281 (58.9%) 0.0032  207 (58.3%) 0.0040  23 (88.5%) 0.0014
Hybrid LM-GA           149 (12.3%) 2.147   274 (28.5%) 1.196   405 (41.6%) 0.5132  303 (63.5%) 0.1923  240 (67.6%) 0.1185  17 (65.4%) 0.0005
Hybrid DNR-GA          140 (11.6%) 0.037   265 (27.5%) 0.014   404 (41.5%) 0.0104  303 (63.5%) 0.0021  244 (68.7%) 0.0014  17 (65.4%) 0.0014

Table 7.3: Algorithm performance broken down by rated power (IEC motors)

Case                   0.37 - 3.6kW        4 - 15kW            18.5 - 75kW         90 - 185kW          200 - 630kW         >630kW
                       Conv.      Avg Err² Conv.      Avg Err² Conv.      Avg Err² Conv.      Avg Err² Conv.      Avg Err² Conv.     Avg Err²
Newton-Raphson         9 (1.43%)  0.3225   100 (16.7%) 0.3254  415 (56.0%) 0.1455  162 (57.0%) 0.1052  65 (52.9%) 0.0469   0 (0.0%)  0.2338
  (kx = 1, kr = 0.5)
Levenberg-Marquardt    83 (13.2%) 0.2788   158 (26.4%) 0.2833  395 (53.3%) 0.3086  102 (35.9%) 0.3288  31 (25.2%) 0.1653   1 (50.0%) 0.0765
  (kx = 1, kr = 0.5)
Damped NR              58 (9.21%) 0.0820   110 (18.4%) 0.0751  297 (40.1%) 0.0722  74 (26.1%)  0.1683  28 (22.8%) 0.0863   1 (50.0%) 0.2711
  (kx = 1, kr = 0.5)
Genetic Algorithm      0 (0.0%)   0.0366   0 (0.0%)    0.0362  0 (0.0%)    0.0449  0 (0.0%)    0.0515  0 (0.0%)   0.0493   0 (0.0%)  0.0926
  (Max gens = 30)
Genetic Algorithm      0 (0.0%)   0.0306   0 (0.0%)    0.0303  0 (0.0%)    0.0374  0 (0.0%)    0.0405  0 (0.0%)   0.0344   0 (0.0%)  0.0872
  (Max gens = 50)
Hybrid NR-GA           161 (25.6%) 0.0317  311 (52.0%) 0.0245  512 (69.1%) 0.0226  123 (43.3%) 0.0294  51 (41.5%) 0.0059   1 (50.0%) 0.0416
Hybrid LM-GA           237 (37.6%) 0.519   308 (51.5%) 0.5374  413 (55.7%) 0.4339  153 (53.9%) 0.5693  69 (56.1%) 0.5659   1 (50.0%) 0.0554
Hybrid DNR-GA          235 (37.3%) 0.02    307 (51.3%) 0.0181  405 (54.7%) 0.0192  151 (53.2%) 0.0141  69 (56.1%) 0.0125   1 (50.0%) 0.0586

Table 7.4: Algorithm performance broken down by rated power (NEMA motors)


Tables 7.3 and 7.4 show the convergence and average squared error rates for

the IEC and NEMA motor data sets respectively, with the data sets subdivided

by rated power. It is observed that the convergence and average error rates are

not uniformly distributed across the full range of motor rated powers.

Of interest is the poor performance for smaller motors, particularly motors

rated below 4kW, where convergence rates are in the order of 0.3% to 12.3%

for IEC motors and 1.4% to 37.3% for NEMA motors. Performance begins to

improve in all algorithms as the motor size is increased. For motors ≥90kW, the

convergence rates of the hybrid algorithms improve to >60% for the IEC motors

and >50% for the NEMA motors.

The poor performance for smaller rated motors suggests that the double cage

model may not be sufficient to characterise small motors. It should also be noted

that dynamic modelling is least likely to be performed on individual small motors,

since they are often aggregated as lumped loads in power system studies.

Lastly, the NEMA motors were categorised by NEMA design type (e.g. A, B,

C, etc) and the simulation results were analysed. It was found that convergence

and error rates were uniformly distributed across all NEMA design types and no

discernible patterns relating to NEMA design types were uncovered.

7.4 Comparison of Algorithm Computation Time

Indicative computation times for the different algorithms in this project are shown

in Table 7.5. The computation times were obtained from simulations performed

on a 2.1GHz Intel dual core processor with 2GB RAM and are presented here

primarily for comparison.

From Table 7.5, it can be seen that the hybrid algorithms have average solution times between 50 and 100 times longer than those of the conventional descent

Algorithm                       Solution Time (s)
                                Average    Maximum
Newton-Raphson                  0.257      0.742
Levenberg-Marquardt             0.162      0.332
Damped NR                       0.241      0.427
Genetic Algorithm (10 Gens)     0.257      0.328
Genetic Algorithm (30 Gens)     0.778      0.947
Genetic Algorithm (50 Gens)     1.356      1.758
Genetic Algorithm (100 Gens)    2.753      3.036
Hybrid NR-GA                    24.395     53.290
Hybrid LM-GA                    14.289     42.198
Hybrid DNR-GA                   29.050     64.568

Table 7.5: Average algorithm solution time

algorithms (i.e. the NR, LM and DNR algorithms). This was expected, since the hybrid algorithms may perform up to 150 descent algorithm runs (for further discussion, refer to Section 6.3).

The genetic algorithm's average solution time depends on the maximum number of generations to be simulated. For a low number of generations (e.g. 10), the GA solution times are comparable to those of the descent algorithms. The solution times increase more or less linearly as the maximum number of generations is increased.
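The near-linear growth can be checked against the GA rows of Table 7.5 with an ordinary least-squares line through the (generations, average time) points; a rough sketch:

```python
# Average GA solution times from Table 7.5: (max generations, seconds)
points = [(10, 0.257), (30, 0.778), (50, 1.356), (100, 2.753)]

# Ordinary least-squares fit of time = a * gens + b
n = len(points)
mean_x = sum(x for x, _ in points) / n
mean_y = sum(y for _, y in points) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in points)
den = sum((x - mean_x) ** 2 for x, _ in points)
a = num / den            # slope: roughly 0.028 s per generation
b = mean_y - a * mean_x  # intercept: close to zero
```

The small residuals around this line are consistent with the "more or less linear" scaling observed above.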


CHAPTER 8

Conclusions and Future Work

8.1 Conclusions

In this project, a number of algorithms were investigated for the estimation of

induction motor parameters based on manufacturer data. Simulations on a large

data set of IEC and NEMA motors showed that for the single cage model, the

conventional Newton-Raphson algorithm with fixed linear restrictions (kx = 1

and kr = 0.5) was robust and exhibited excellent convergence.

However, when applied to the double cage model, the conventional Newton-

Raphson algorithm performed very poorly. In this project, hybrid algorithms

were proposed as an alternative to the conventional NR algorithm. Simulation

results suggest that the proposed hybrid algorithms show promise as a parameter

estimation tool, with large improvements in convergence and error rates over the

conventional algorithms.

The key drawback of the hybrid algorithms is their computation time, which, depending on the algorithm settings, can be significantly longer than that of the conventional descent or genetic algorithms. In any case, it is argued that the parameter estimation task is not particularly time critical and a slow computation time can be tolerated in return for better performance.


Based on the investigations conducted in this project, the following general workflow is recommended for solving induction motor parameter estimation problems using double cage models:

• As an initial attempt, use the damped NR algorithm with fixed linear restrictions kr = 1 and kx = 0.5

• If there is no convergence, try the LM algorithm with fixed linear restrictions kr = 1 and kx = 0.5

• If there is no convergence, try the hybrid DNR-GA algorithm

• If there is no convergence, try the hybrid LM-GA algorithm

• Finally, if there is still no convergence, use the genetic algorithm with 50 to 100 generations to give a solution with an adequately low error
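This fallback sequence can be sketched as a simple driver. The solver functions below are hypothetical stand-ins for the MATLAB routines in Appendix B (each is assumed to return a parameter estimate and a convergence flag), not the thesis implementation itself:

```python
def estimate_parameters(p, solvers):
    """Try each (name, solver) in order; return the first converged
    result, otherwise fall back to the last solver's best estimate."""
    result = None
    for name, solver in solvers:
        x, converged = solver(p)
        result = (name, x)
        if converged:
            return name, x, True
    return result[0], result[1], False

# Hypothetical stand-ins for the Appendix B solvers:
def dnr(p):    return None, False   # damped NR (kr=1, kx=0.5) fails to converge
def lm(p):     return None, False   # LM (kr=1, kx=0.5) fails to converge
def dnr_ga(p): return [0.02, 0.1, 2.4, 0.03, 0.12, 0.15, 0.05, 12], True

solvers = [("DNR", dnr), ("LM", lm), ("DNR-GA", dnr_ga)]
name, x, ok = estimate_parameters([0.02, 0.92, 0.85, 2.4, 1.9, 6.5], solvers)
```

Here the cascade falls through the two descent solvers and stops at the first hybrid that reports convergence.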

8.2 Contributions

The main contributions of this project can be summarised as follows:

• Proposal of a new hybrid descent and natural optimisation algorithm for estimating induction motor parameters

• Comparison of Newton-Raphson, Levenberg-Marquardt, Damped Newton-Raphson and Genetic Algorithms using a consistent, structured framework

• Simulation of the parameter estimation algorithms on a large data set of 4,002 IEC and 2,378 NEMA motors

The following publication arose from the work presented in this project:


• Susanto, J. and Islam, S., "Estimation of Induction Motor Parameters using Hybrid Algorithms for Power System Dynamic Studies," Australasian Universities Power Engineering Conference (AUPEC), Hobart, Australia, October 2013

8.3 Future Work

The hybrid algorithms explored in this project were aimed at using evolutionary

methods to optimise for two double-cage model parameters (Rs and Xr2) in an

underdetermined system of equations. It is anticipated that hybrid approaches

could also be implemented in different ways, for example:

• A natural optimisation algorithm to estimate the initial conditions and then

a descent algorithm to refine the initial conditions for convergence

• The use of a descent algorithm to find a baseline estimate and then natural

optimisation algorithms to tune higher order models

These and other hybrid approaches could be explored in future work where

the general goal would be to find parameter estimation techniques that yield high

convergence rates and / or very low errors.

Another avenue for future investigation is to explore different types of natural optimisation algorithms. In this project, only the genetic algorithm was implemented, but improvements in algorithm performance could potentially be found with other natural optimisation algorithms, e.g. particle swarm optimisation, ant colony optimisation, etc.


Bibliography

[1] B. K. Johnson and J. R. Willis, "Tailoring induction motor analytical models to fit known performance characteristics and satisfy particular study needs," IEEE Transactions on Power Systems, vol. 6, no. 3, 1991.

[2] G. Rogers and D. Shirmohammadi, "Induction machine modelling for electromagnetic transient program," IEEE Transactions on Energy Conversion, vol. EC-2, no. 4, 1987.

[3] S. S. Waters and R. D. Willoughby, "Modeling induction motors for system studies," IEEE Transactions on Industry Applications, vol. IA-19, no. 5, 1983.

[4] D. Lindenmeyer, H. W. Dommel, A. Moshref, and P. Kundur, "An induction motor parameter estimation method," Electrical Power and Energy Systems, vol. 23, pp. 251–262, 2001.

[5] J. Pedra, "Estimation of induction motor double-cage model parameters from manufacturer data," IEEE Transactions on Energy Conversion, vol. 19, no. 2, 2004.

[6] I. Boldea and S. Nasar, The Induction Machine Handbook, CRC Press, 2002.

[7] M. Haque, "Determination of NEMA design induction motor parameters from manufacturer data," IEEE Transactions on Energy Conversion, vol. 23, no. 4, 2008.

[8] F. Corcoles, J. Pedra, M. Salichs, and L. Sainz, "Analysis of the induction machine parameter identification," IEEE Transactions on Energy Conversion, vol. 17, no. 2, 2002.

[9] J. Pedra, "On the determination of induction motor parameters from manufacturer data for electromagnetic transient programs," IEEE Transactions on Power Systems, vol. 23, no. 4, 2008.

[10] J. Pedra, "Estimation of typical squirrel-cage induction motor parameters for dynamic performance simulation," IEE Proceedings - Generation, Transmission and Distribution, vol. 153, no. 2, 2006.

[11] K. Levenberg, "A method for the solution of certain non-linear problems in least squares," The Quarterly of Applied Mathematics, vol. 2, 1944.

[12] D. W. Marquardt, "An algorithm for least-squares estimation of non-linear parameters," Journal of the Society for Industrial and Applied Mathematics, vol. 11, no. 2, 1963.

[13] R. L. Haupt and S. E. Haupt, Practical Genetic Algorithms, John Wiley and Sons, 1998.

[14] P. Nangsue, P. Pillay, and S. E. Conry, "Evolutionary algorithms for induction motor parameter determination," IEEE Transactions on Energy Conversion, vol. 14, no. 3, 1999.

[15] H. H. Weatherford and C. W. Brice, "Estimation of induction motor parameters by a genetic algorithm," in Conference Record of the 2003 Annual Pulp and Paper Industry Technical Conference, 2003, pp. 21–28.

[16] R. Nolan, P. Pillay, and T. Haque, "Application of genetic algorithms to motor parameter determination," in Proceedings of the 1994 IEEE Industry Applications Society Annual Meeting, 1994, pp. 47–54.

[17] R. Bhuvaneswari and S. Subramanian, "Optimization of three-phase induction motor design using simulated annealing algorithm," Electric Power Components and Systems, vol. 33, no. 9, pp. 947–956, 2005.

[18] V. P. Sakthivel, R. Bhuvaneswari, and S. Subramanian, "Multi-objective parameter estimation of induction motor using particle swarm optimization," Engineering Applications of Artificial Intelligence, vol. 23, no. 3, pp. 302–312, 2010.

[19] V. P. Sakthivel, R. Bhuvaneswari, and S. Subramanian, "An improved particle swarm optimization for induction motor parameter determination," International Journal of Computer Applications, vol. 1, no. 2, pp. 71–76, 2010.

[20] V. P. Sakthivel, R. Bhuvaneswari, and S. Subramanian, "Artificial immune system for parameter estimation of induction motor," Expert Systems with Applications, vol. 37, no. 8, pp. 6109–6115, 2010.

[21] V. P. Sakthivel, R. Bhuvaneswari, and S. Subramanian, "Bacterial foraging technique based parameter estimation of induction motor from manufacturer data," Electric Power Components and Systems, vol. 38, no. 6, pp. 657–674, 2010.

[22] Z. Chen, Y. Zhong, and J. Li, "Parameter identification of induction motors using ant colony optimization," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), 2008, pp. 1611–1616.

[23] J. R. Marques, I. F. Machado, and J. R. Cardoso, "Induction motor parameter determination using the harmony search algorithm to power, torque and speed estimation," in IECON 2012 - 38th Annual Conference of the IEEE Industrial Electronics Society, 2012, pp. 1835–1840.

[24] EuroDEEM, "EuroDEEM: The European database of efficient electric motors," http://sunbird.jrc.it/energyefficiency/eurodeem/, 2007.


APPENDIX A

Motor Data Set

A large data set from the EuroDEEM and MotorMaster databases (version 1.0.17 - 4 April 2007) was used to test the parameter estimation algorithms with a mixture of IEC and NEMA type motors [24]. From the original set, the motor data was conditioned by eliminating duplicate records, removing motors without power factor, efficiency or torque data, and removing motors with strange or inconsistent data (e.g. full load torque greater than breakdown torque, asynchronous speed greater than synchronous speed, etc). After data cleansing, the final data set consisted of 6,380 motors with nominal ratings from 0.37kW to 1000kW, and the following total quantities:

• 4,002 IEC 50Hz motors

• 2,378 NEMA 60Hz motors
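The consistency checks described above can be sketched as a simple record filter. The field names and sample records here are illustrative assumptions, not the exact rules or data used in the project:

```python
def is_consistent(m):
    """Reject records with missing or physically inconsistent nameplate
    data; m is a dict of motor data (field names are illustrative)."""
    required = ("pf", "eff", "T_fl", "T_b", "speed", "sync_speed")
    if any(m.get(k) is None for k in required):
        return False            # missing power factor, efficiency or torque data
    if m["T_fl"] > m["T_b"]:
        return False            # full load torque greater than breakdown torque
    if m["speed"] >= m["sync_speed"]:
        return False            # asynchronous speed at or above synchronous speed
    return True

# Two hypothetical records: the second has T_fl > T_b and is rejected
motors = [
    {"pf": 0.85, "eff": 0.92, "T_fl": 1.0, "T_b": 2.4, "speed": 1470, "sync_speed": 1500},
    {"pf": 0.80, "eff": 0.90, "T_fl": 2.6, "T_b": 2.4, "speed": 1475, "sync_speed": 1500},
]
clean = [m for m in motors if is_consistent(m)]
```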


APPENDIX B

MATLAB Source Code

This appendix contains MATLAB source code listings for all of the double cage algorithms described in this project. All source code is available online on GitHub at www.github.com/susantoj/asm_toolkit.

B.1 Common Auxiliary Functions

B.1.1 calc_pqt

% CALC_PQT - Calculates motor mechanical power, reactive power, breakdown
% torque and efficiency from equivalent circuit parameters (used for double
% cage model with core losses)
%
% Usage: calc_pqt(sf, x)
%
% Where sf is the full load slip (pu)
%       x is a 8 x 1 vector of motor equivalent parameters:
%       x = [Rs Xs Xm Rr1 Xr1 Rr2 Xr2 Rc]
%       x(1) = Rs = stator resistance
%       x(2) = Xs = stator reactance
%       x(3) = Xm = magnetising reactance
%       x(4) = Rr1 = rotor / inner cage resistance
%       x(5) = Xr1 = rotor / inner cage reactance
%       x(6) = Rr2 = outer cage resistance
%       x(7) = Xr2 = outer cage reactance
%       x(8) = Rc = core resistance
%
% Outputs: y is a vector [Pm Q_fl T_b T_lr I_lr eff_fl]
%
function [y] = calc_pqt(sf, x)

x = abs(x);
[T_fl, i_s] = get_torque(sf, x);         % Calculate full-load torque and current
Pm = T_fl * (1 - sf);                    % Calculate mechanical power (at FL)
Sn = complex(1,0) * conj(i_s);
Q_fl = abs(imag(Sn));                    % Calculate reactive power input (at FL)
i_c = 1 / complex(x(8), 0);              % Calculate core current (at FL)
i_in = i_s + i_c;                        % Calculate total input current (at FL)
p_in = real(complex(1,0) * conj(i_in));  % Calculate input power (at FL)
eff_fl = Pm / p_in;                      % Calculate efficiency (at FL)

% Estimate breakdown torque by an interval search
T_b = 0;
for i = 0.01:0.01:1
    T_i = get_torque(i, x);
    if T_i > T_b
        T_b = T_i;                       % Estimated breakdown torque
    end
end

[T_lr, i_lr] = get_torque(1, x);
y = [Pm Q_fl T_b T_lr abs(i_lr + i_c) eff_fl];

end

B.1.2 get_torque

% GET_TORQUE - Calculates double cage motor torque
%
% Usage: get_torque(slip, x)
%
% Where slip is the motor slip (pu)
%       x is a vector of motor equivalent parameters:
%       x = [Rs Xs Xm Rr1 Xr1 Rr2 Xr2]
%       Rs = stator resistance
%       Xs = stator reactance
%       Xm = magnetising reactance
%       Rr1 = rotor / inner cage resistance
%       Xr1 = rotor / inner cage reactance
%       Rr2 = outer cage resistance
%       Xr2 = outer cage reactance
%
% Outputs: motor torque (pu) as a real number and stator current (as a
% complex number)
%
function [torque, i_st] = get_torque(slip, x)

% Calculate admittances
Ys = 1 / complex(x(1), x(2));
Ym = 1 / complex(0, x(3));
Yr1 = 1 / complex(x(4)/slip, x(5));
Yr2 = 1 / complex(x(6)/slip, x(7));

% Calculate voltage and currents
u1 = Ys / (Ys + Ym + Yr1 + Yr2);
ir1 = abs(u1 * Yr1);
ir2 = abs(u1 * Yr2);
torque = x(4)/slip * ir1^2 + x(6)/slip * ir2^2;
i_st = (1 - u1) * Ys;

end

B.2 Descent Algorithms

B.2.1 nr_solver

% NR_SOLVER - Newton-Raphson solver for double cage model with core losses
%   Solves for 6 circuit parameters [Xs Xm Rr1 Xr1 Rr2 Rc]
%   Rs and Xr2 are computed by linear restrictions
%   Includes change of variables
%   Includes adaptive step size (as per Pedra 2008)
%   Includes determinant check of jacobian matrix
%
% Author: Julius Susanto
%
% Usage: nr_solver(p, kx, kr, max_iter)
%
% Where p is a vector of motor performance parameters:
%       p = [sf eff pf T_b T_lr I_lr]
%       sf = full-load slip
%       eff = full-load efficiency
%       pf = full-load power factor
%       T_b = breakdown torque (as % of FL torque)
%       T_lr = locked rotor torque (as % of FL torque)
%       I_lr = locked rotor current
%       kx and kr are linear restrictions
%       max_iter is the maximum number of iterations
%
% OUTPUT: z is a vector of motor equivalent parameters:
%       z = [Rs Xs Xm Rr1 Xr1 Rr2 Xr2 Rc]
%       z(1) = Rs = stator resistance
%       z(2) = Xs = stator reactance
%       z(3) = Xm = magnetising reactance
%       z(4) = Rr1 = rotor / inner cage resistance
%       z(5) = Xr1 = rotor / inner cage reactance
%       z(6) = Rr2 = outer cage resistance
%       z(7) = Xr2 = outer cage reactance
%       z(8) = Rc = core resistance
%       iter is the number of iterations
%       err is the squared error of the objective function
%       conv is a true/false flag indicating convergence

function [z, iter, err, conv] = nr_solver(p, kx, kr, max_iter)

% Human-readable motor performance parameters
% and base value initialisation
sf = p(1);                      % Full-load slip (pu)
eff = p(2);                     % Full-load efficiency (pu)
pf = p(3);                      % Full-load power factor (pu)
T_fl = pf * eff / (1 - sf);     % Full-load torque (pu)
T_b = p(4) * T_fl;              % Breakdown torque (pu)
T_lr = p(5) * T_fl;             % Locked rotor torque (pu)
i_lr = p(6);                    % Locked rotor current (pu)
Pm_fl = pf * eff;               % Mechanical power (at FL)
Q_fl = sin(acos(pf));           % Full-load reactive power (pu)

% Set initial conditions
z(3) = 1 / Q_fl;                % Xm
z(2) = 0.05 * z(3);             % Xs
z(4) = 1 / Pm_fl * sf;          % Rr1
z(1) = kr * z(4);               % Rs
z(5) = 1.2 * z(2);              % Xr1
z(6) = 5 * z(4);                % Rr2
z(7) = kx * z(2);               % Xr2
z(8) = 12;                      % Rc

% Change of variables to constrained parameters (with initial values)
x(1) = z(4);
x(2) = z(6) - z(4);
x(3) = z(3);
x(4) = z(2);
x(5) = z(5) - kx * z(2);
x(6) = z(8);

% Formulate solution
pqt = [Pm_fl Q_fl T_b T_lr i_lr eff];

% Set up NR algorithm parameters
h = 0.00001;
n = 0;
hn = 1;
hn_min = 0.0000001;
err_tol = 0.000001;
err = 1;
iter = 0;
conv = false;

% Run NR algorithm
while (err > err_tol) && (iter < max_iter)

    % Evaluate objective function for current iteration
    y = (pqt - calc_pqt(sf, z)) ./ pqt;
    err0 = y * y';

    % Construct Jacobian matrix
    for i = 1:6
        x(i) = x(i) + h;

        % Change of variables back to equivalent circuit parameters
        z(2) = x(4);
        z(3) = x(3);
        z(4) = x(1);
        z(5) = kx * x(4) + x(5);
        z(6) = x(1) + x(2);
        z(8) = x(6);
        z(1) = kr * z(4);
        z(7) = kx * z(2);

        j(:,i) = ((pqt - calc_pqt(sf, z)) ./ pqt - y) / h;
        x(i) = x(i) - h;
    end

    % Check if jacobian matrix is singular and exit function if so
    if (det(j) == 0)
        return;
    end

    x_reset = x;
    y_reset = y;
    iter0 = iter;

    % Inner loop (descent direction check and step size adjustment)
    while (iter == iter0)

        % Calculate next iteration and update x
        delta_x = j \ y';
        x = abs(x - hn * delta_x');

        % Change of variables back to equivalent circuit parameters
        z(2) = x(4);
        z(3) = x(3);
        z(4) = x(1);
        z(5) = kx * x(4) + x(5);
        z(6) = x(1) + x(2);
        z(8) = x(6);
        z(1) = kr * z(4);
        z(7) = kx * z(2);

        % Calculate squared error terms
        y = (pqt - calc_pqt(sf, z)) ./ pqt;
        err = y * y';

        % Descent direction check and step size adjustment
        if (abs(err) >= abs(err0))
            n = n + 1;
            hn = 2^(-n);
            x = x_reset;
            y = y_reset;
        else
            n = 0;
            iter = iter + 1;
        end

        % If descent direction isn't minimising, then there is no convergence
        if (hn < hn_min)
            return;
        end
    end
end

if err < err_tol
    conv = true;
end

end

B.2.2 lm solver (Error Term Adjustment)

% LM SOLVER − Levenberg−Marquadt s o l v e r f o r double cage model with core l o s s e s

% So lve s f o r 6 c i r c u i t parameters [ Xs Xm Rr1 Xr1 Rr2 Rc ]

% Rs and Xr2 are computed by l i n e a r r e s t r i c t i o n s

% Inc lude s change o f v a r i a b l e s

% Basic e r r o r adjustment o f damping parameter

%

% Usage : lm so l v e r (p , kr , kx , lambda 0 , lambda max , max iter )

%

% Where p i s a vec to r o f motor performance parameters :

% p = [ s f e f f p f Tb Tlr I l r ]

% s f = f u l l −load s l i p

% e f f = f u l l −load e f f i c i e n c y

% pf = f u l l −load power f a c t o r

% T b = breakdown torque ( as % o f FL torque )

% T lr = locked ro to r torque ( as % o f FL torque )

% I l r = locked ro to r cur rent

% kr and kx are l i n e a r r e s t r i c t i o n s

% lambda 0 i s i n i t i a l damping parameter

% lambda max i s maximum damping parameter

% max iter i s the maximum number o f i t e r a t i o n s

%

% OUTPUT: x i s a vec to r o f motor equ iva l en t parameters :

% x = [ Rs Xs Xm Rr1 Xr1 Rr2 Xr2 Rc ]

% x (1) = Rs = s t a t o r r e s i s t a n c e

% x (2) = Xs = s t a t o r r eac tance

% x (3) = Xm = magnet i s ing reac tance

% x (4) = Rr1 = ro to r / inner cage r e s i s t a n c e

% x (5) = Xr1 = ro to r / inner cage reac tance

% x (6) = Rr2 = outer cage r e s i s t a n c e

% x (7) = Xr2 = outer cage reac tance

% x (8) = Rc = core r e s i s t a n c e

% i t e r i s the number o f i t e r a t i o n s

% e r r i s the squared e r r o r o f the ob j e c t i v e func t i on

% conv i s a t rue / f a l s e f l a g i nd i c a t i n g convergence

func t i on [ z i t e r e r r conv ] = lm so l v e r (p , kr , kx , lambda 0 , lambda max , max iter )

Page 99: Improved Parameter Estimation Algorithms for Induction …susanto.info/wp-content/uploads/2016/11/MSc_JSusanto.pdf · Improved Parameter Estimation Algorithms for Induction Motors

B. MATLAB Source Code 90

% Human-readable motor performance parameters
% and base value initialisation
sf = p(1);                   % Full-load slip (pu)
eff = p(2);                  % Full-load efficiency (pu)
pf = p(3);                   % Full-load power factor (pu)
T_fl = pf * eff / (1 - sf);  % Full-load torque (pu)
T_b = p(4) * T_fl;           % Breakdown torque (pu)
T_lr = p(5) * T_fl;          % Locked rotor torque (pu)
i_lr = p(6);                 % Locked rotor current (pu)
Pm_fl = pf * eff;            % Mechanical power at full load (pu)
Q_fl = sin(acos(pf));        % Full-load reactive power (pu)

% Set initial conditions
z(3) = 1 / Q_fl;             % Xm
z(2) = 0.05 * z(3);          % Xs
z(4) = sf / Pm_fl;           % Rr1
z(1) = z(4);                 % Rs
z(5) = 1.2 * z(2);           % Xr1
z(6) = 5 * z(4);             % Rr2
z(7) = kx * z(2);            % Xr2
z(8) = 12;                   % Rc

% Change of variables to constrained parameters (with initial values)
x(1) = z(4);
x(2) = z(6) - z(4);
x(3) = z(3);
x(4) = z(2);
x(5) = z(5) - kx * z(2);
x(6) = z(8);

% Formulate solution
pqt = [Pm_fl Q_fl T_b T_lr i_lr eff];

% Set up LM algorithm parameters
h = 0.00001;        % finite difference perturbation
err_tol = 0.00001;  % convergence tolerance on squared error
err = 1;
iter = 0;
lambda = lambda_0;
conv = false;
beta = 3;           % damping increase factor
gamma = 3;          % damping decrease factor

% Run LM algorithm
while (err > err_tol) && (iter < max_iter)

    % Evaluate objective function for current iteration
    y = (pqt - calc_pqt(sf, z)) ./ pqt;
    err0 = y * y';

    % Construct Jacobian matrix by forward differences
    for i = 1:6
        x(i) = x(i) + h;

        % Change of variables back to equivalent circuit parameters
        z(2) = x(4);
        z(3) = x(3);
        z(4) = x(1);
        z(5) = kx * x(4) + x(5);
        z(6) = x(1) + x(2);
        z(8) = x(6);
        z(1) = kr * z(4);
        z(7) = kx * z(2);

        j(:,i) = ((pqt - calc_pqt(sf, z)) ./ pqt - y) / h;
        x(i) = x(i) - h;
    end

    x_reset = x;
    y_reset = y;
    iter0 = iter;

    % Inner loop (lambda adjustments)
    while (iter == iter0)

        % Calculate next iteration and update x
        delta_x = (j'*j + lambda .* diag(diag(j'*j))) \ (j' * y');
        x = abs(x - delta_x');

        % Change of variables back to equivalent circuit parameters
        z(2) = x(4);
        z(3) = x(3);
        z(4) = x(1);
        z(5) = kx * x(4) + x(5);
        z(6) = x(1) + x(2);
        z(8) = x(6);
        z(1) = kr * z(4);
        z(7) = kx * z(2);

        % Calculate squared error terms
        y = (pqt - calc_pqt(sf, z)) ./ pqt;
        err = y * y';
        if isnan(err)
            err = 6;
        end

        % Error adjustment of lambda: increase damping and retry if the
        % step failed to reduce the error, otherwise accept the step
        if (abs(err) >= abs(err0)) && (iter > 0)

            lambda = lambda * beta;
            x = x_reset;
            y = y_reset;
        else
            lambda = lambda / gamma;
            iter = iter + 1;
        end

        % If descent direction isn't minimising, then there is no convergence
        if (lambda > lambda_max)
            return;
        end
    end
end

if err < err_tol
    conv = true;
end

end
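For reference, the inner-loop update implemented above is the standard Levenberg-Marquardt step, written here in the code's notation ($J$ is the finite-difference Jacobian, $y$ the normalised error vector and $\lambda$ the damping parameter):

```latex
\Delta x = \left( J^{\mathsf{T}} J + \lambda \, \mathrm{diag}\!\left( J^{\mathsf{T}} J \right) \right)^{-1} J^{\mathsf{T}} y
```

The error adjustment scheme multiplies $\lambda$ by $\beta$ and rejects the step whenever the squared error fails to decrease, and divides $\lambda$ by $\gamma$ when the step is accepted.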

B.2.3 lm_solver2 (Gain Ratio Adjustment)

% LM_SOLVER2 - Levenberg-Marquardt solver for double cage model with core losses
% Solves for 6 circuit parameters [Xs Xm Rr1 Xr1 Rr2 Rc]
% Rs and Xr2 are computed by linear restrictions
% Includes change of variables
% Gain ratio adjustment of damping parameter
%
% Usage: lm_solver2(p, kr, kx, lambda_0, lambda_max, max_iter)
%
% Where p is a vector of motor performance parameters:
%   p = [sf eff pf T_b T_lr I_lr]
%   sf    = full-load slip
%   eff   = full-load efficiency
%   pf    = full-load power factor
%   T_b   = breakdown torque (as % of FL torque)
%   T_lr  = locked rotor torque (as % of FL torque)
%   I_lr  = locked rotor current
%   kr and kx are linear restrictions
%   lambda_0 is the initial damping parameter
%   lambda_max is the maximum damping parameter
%   max_iter is the maximum number of iterations
%
% OUTPUT: z is a vector of motor equivalent circuit parameters:
%   z = [Rs Xs Xm Rr1 Xr1 Rr2 Xr2 Rc]
%   z(1) = Rs  = stator resistance
%   z(2) = Xs  = stator reactance
%   z(3) = Xm  = magnetising reactance
%   z(4) = Rr1 = rotor / inner cage resistance
%   z(5) = Xr1 = rotor / inner cage reactance
%   z(6) = Rr2 = outer cage resistance
%   z(7) = Xr2 = outer cage reactance
%   z(8) = Rc  = core resistance
%
%   iter is the number of iterations
%   err is the squared error of the objective function
%   conv is a true/false flag indicating convergence

function [z, iter, err, conv] = lm_solver2(p, kr, kx, lambda_0, lambda_max, max_iter)


% Human-readable motor performance parameters
% and base value initialisation
sf = p(1);                   % Full-load slip (pu)
eff = p(2);                  % Full-load efficiency (pu)
pf = p(3);                   % Full-load power factor (pu)
T_fl = pf * eff / (1 - sf);  % Full-load torque (pu)
T_b = p(4) * T_fl;           % Breakdown torque (pu)
T_lr = p(5) * T_fl;          % Locked rotor torque (pu)
i_lr = p(6);                 % Locked rotor current (pu)
Pm_fl = pf * eff;            % Mechanical power at full load (pu)
Q_fl = sin(acos(pf));        % Full-load reactive power (pu)

% Set initial conditions
z(3) = 1 / Q_fl;             % Xm
z(2) = 0.05 * z(3);          % Xs
z(4) = sf / Pm_fl;           % Rr1
z(1) = z(4);                 % Rs
z(5) = 1.2 * z(2);           % Xr1
z(6) = 5 * z(4);             % Rr2
z(7) = kx * z(2);            % Xr2
z(8) = 12;                   % Rc

% Change of variables to constrained parameters (with initial values)
x(1) = z(4);
x(2) = z(6) - z(4);
x(3) = z(3);
x(4) = z(2);
x(5) = z(5) - kx * z(2);
x(6) = z(8);

% Formulate solution
pqt = [Pm_fl Q_fl T_b T_lr i_lr eff];

% Set up LM algorithm parameters
h = 0.00001;        % finite difference perturbation
err_tol = 0.00001;  % convergence tolerance on squared error
err = 1;
iter = 0;
lambda = lambda_0;
conv = false;
rho1 = 0.25;        % gain ratio threshold below which damping is increased
rho2 = 0.75;        % gain ratio threshold above which damping is decreased
beta = 2;           % damping increase factor
gamma = 3;          % damping decrease factor

% Run LM algorithm
while (err > err_tol) && (iter < max_iter)

    % Evaluate objective function for current iteration
    y = (pqt - calc_pqt(sf, z)) ./ pqt;
    err0 = y * y';

    % Construct Jacobian matrix by forward differences
    for i = 1:6
        x(i) = x(i) + h;

        % Change of variables back to equivalent circuit parameters
        z(2) = x(4);
        z(3) = x(3);
        z(4) = x(1);
        z(5) = kx * x(4) + x(5);
        z(6) = x(1) + x(2);
        z(8) = x(6);
        z(1) = kr * z(4);
        z(7) = kx * z(2);

        j(:,i) = ((pqt - calc_pqt(sf, z)) ./ pqt - y) / h;
        x(i) = x(i) - h;
    end

    x_reset = x;
    y_reset = y;
    iter0 = iter;

    % Inner loop (lambda adjustments)
    while (iter == iter0)

        % Calculate next iteration and update x
        delta_x = (j'*j + lambda .* diag(diag(j'*j))) \ (j' * y');
        x = abs(x - delta_x');

        % Change of variables back to equivalent circuit parameters
        z(2) = x(4);
        z(3) = x(3);
        z(4) = x(1);
        z(5) = kx * x(4) + x(5);
        z(6) = x(1) + x(2);
        z(8) = x(6);
        z(1) = kr * z(4);
        z(7) = kx * z(2);

        % Calculate squared error terms
        y = (pqt - calc_pqt(sf, z)) ./ pqt;
        err = y * y';
        if isnan(err)
            err = 6;
        end

        % Gain ratio: actual error reduction vs. the reduction
        % predicted by the linearised model
        dE = err0 - err;
        dL = -delta_x' * (-lambda * delta_x - j' * y') / 2;
        rho = dE / dL;
        if isnan(rho)
            rho = -1;
        end

        % Gain ratio adjustment of lambda: accept the step only if the
        % error actually decreased (rho > 0)
        if (rho > 0)
            iter = iter + 1;
        else
            x = x_reset;
            y = y_reset;
        end

        if (rho < rho1)
            lambda = lambda * beta;
        end
        if (rho > rho2)
            lambda = lambda / gamma;
        end

        % If descent direction isn't minimising, then there is no convergence
        if (lambda > lambda_max)
            return;
        end
    end
end

if err < err_tol
    conv = true;
end

end
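In the code's notation, the gain ratio computed above compares the actual reduction in squared error with the reduction predicted by the linearised model for the step $\Delta x$:

```latex
\rho = \frac{E(x) - E(x + \Delta x)}
           {\tfrac{1}{2}\,\Delta x^{\mathsf{T}} \left( \lambda \Delta x + J^{\mathsf{T}} y \right)}
```

The step is accepted only when $\rho > 0$; the damping $\lambda$ is multiplied by $\beta$ when $\rho < \rho_1 = 0.25$ and divided by $\gamma$ when $\rho > \rho_2 = 0.75$.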

B.2.4 dnr_solver

% DNR_SOLVER - Damped Newton-Raphson solver for double cage model with core losses
% Solves for 6 circuit parameters [Xs Xm Rr1 Xr1 Rr2 Rc]
% Rs and Xr2 are computed by linear restrictions
% Includes change of variables
% Includes adaptive step size (as per Pedra 2008)
% Includes determinant check of Jacobian matrix
%
% Usage: dnr_solver(p, kr, kx, lambda, max_iter)
%
% Where p is a vector of motor performance parameters:
%   p = [sf eff pf T_b T_lr I_lr]
%   sf    = full-load slip
%   eff   = full-load efficiency
%   pf    = full-load power factor
%   T_b   = breakdown torque (as % of FL torque)
%   T_lr  = locked rotor torque (as % of FL torque)
%   I_lr  = locked rotor current
%   kr and kx are linear restrictions
%   lambda is the initial damping parameter
%   max_iter is the maximum number of iterations
%
% OUTPUT: z is a vector of motor equivalent circuit parameters:
%   z = [Rs Xs Xm Rr1 Xr1 Rr2 Xr2 Rc]
%   z(1) = Rs  = stator resistance
%   z(2) = Xs  = stator reactance
%   z(3) = Xm  = magnetising reactance
%   z(4) = Rr1 = rotor / inner cage resistance
%   z(5) = Xr1 = rotor / inner cage reactance
%   z(6) = Rr2 = outer cage resistance
%   z(7) = Xr2 = outer cage reactance
%   z(8) = Rc  = core resistance
%
%   iter is the number of iterations
%   err is the squared error of the objective function
%   conv is a true/false flag indicating convergence

function [z, iter, err, conv] = dnr_solver(p, kr, kx, lambda, max_iter)

% Human-readable motor performance parameters
% and base value initialisation
sf = p(1);                   % Full-load slip (pu)
eff = p(2);                  % Full-load efficiency (pu)
pf = p(3);                   % Full-load power factor (pu)
T_fl = pf * eff / (1 - sf);  % Full-load torque (pu)
T_b = p(4) * T_fl;           % Breakdown torque (pu)
T_lr = p(5) * T_fl;          % Locked rotor torque (pu)
i_lr = p(6);                 % Locked rotor current (pu)
Pm_fl = pf * eff;            % Mechanical power at full load (pu)
Q_fl = sin(acos(pf));        % Full-load reactive power (pu)

% Set initial conditions
z(3) = 1 / Q_fl;             % Xm
z(2) = 0.05 * z(3);          % Xs
z(4) = sf / Pm_fl;           % Rr1
z(1) = z(4);                 % Rs
z(5) = 1.2 * z(2);           % Xr1
z(6) = 5 * z(4);             % Rr2
z(7) = kx * z(2);            % Xr2
z(8) = 12;                   % Rc

% Change of variables to constrained parameters (with initial values)
x(1) = z(4);
x(2) = z(6) - z(4);
x(3) = z(3);
x(4) = z(2);
x(5) = z(5) - kx * z(2);
x(6) = z(8);

% Formulate solution
pqt = [Pm_fl Q_fl T_b T_lr i_lr eff];

% Set up NR algorithm parameters
h = 0.00001;          % finite difference perturbation
n = 0;                % step size halving counter
hn = 1;               % adaptive step size (halved on failed steps)
hn_min = 0.0000001;   % minimum step size before giving up
err_tol = 0.00001;    % convergence tolerance on squared error
err = 1;
iter = 0;
conv = false;
gamma = 3;            % damping decrease factor
beta = 3;             % damping increase factor

% Run NR algorithm
while (err > err_tol) && (iter < max_iter)

    % Evaluate objective function for current iteration
    y = (pqt - calc_pqt(sf, z)) ./ pqt;
    err0 = y * y';

    % Construct Jacobian matrix by forward differences
    for i = 1:6
        x(i) = x(i) + h;

        % Change of variables back to equivalent circuit parameters
        z(2) = x(4);
        z(3) = x(3);
        z(4) = x(1);
        z(5) = kx * x(4) + x(5);
        z(6) = x(1) + x(2);
        z(8) = x(6);
        z(1) = kr * z(4);
        z(7) = kx * z(2);

        j(:,i) = ((pqt - calc_pqt(sf, z)) ./ pqt - y) / h;
        x(i) = x(i) - h;
    end

    % Check if Jacobian matrix is singular and exit function if so
    if (det(j) == 0)
        return;
    end

    x_reset = x;
    y_reset = y;
    iter0 = iter;

    % Inner loop (descent direction check and step size adjustment)
    while (iter == iter0)

        % Calculate next iteration and update x
        delta_x = (j + lambda .* eye(6)) \ y';
        x = abs(x - hn * delta_x');

        % Change of variables back to equivalent circuit parameters
        z(2) = x(4);
        z(3) = x(3);
        z(4) = x(1);
        z(5) = kx * x(4) + x(5);
        z(6) = x(1) + x(2);
        z(8) = x(6);
        z(1) = kr * z(4);
        z(7) = kx * z(2);

        % Calculate squared error terms
        y = (pqt - calc_pqt(sf, z)) ./ pqt;
        err = y * y';

        % Descent direction check and step size adjustment
        if (abs(err) >= abs(err0))
            n = n + 1;
            hn = 2^(-n);
            lambda = lambda * beta;
            x = x_reset;
            y = y_reset;
        else
            n = 0;
            lambda = lambda / gamma;
            iter = iter + 1;
        end

        % If descent direction isn't minimising, then there is no convergence
        if (hn < hn_min)
            return;
        end
    end
end

if err < err_tol
    conv = true;
end

end
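In the code's notation, the damped Newton-Raphson update above is

```latex
\Delta x = \left( J + \lambda I \right)^{-1} y, \qquad
x_{k+1} = \left| \, x_k - h_n \, \Delta x \, \right|, \qquad h_n = 2^{-n}
```

where $n$ counts consecutive failed steps, so the step size $h_n$ is halved (and the damping $\lambda$ increased) whenever the squared error fails to decrease.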

B.3 Genetic Algorithm

B.3.1 ga_solver

% GA_SOLVER - Standard genetic algorithm solver
%
% Usage: ga_solver(p, n_gen)
%
% Where p is a vector of motor performance parameters:
%   p = [sf eff pf T_b T_lr I_lr]
%   sf    = full-load slip
%   eff   = full-load efficiency
%   pf    = full-load power factor
%   T_b   = breakdown torque (as % of FL torque)
%   T_lr  = locked rotor torque (as % of FL torque)
%   I_lr  = locked rotor current
%   n_gen is the maximum number of generations
%
% OUTPUT: z is a vector of motor equivalent circuit parameters:
%   z = [Rs Xs Xm Rr1 Xr1 Rr2 Xr2 Rc]

function [z, err, conv] = ga_solver(p, n_gen)

% Settings
pop = 20;    % population in each generation
n_r = 15;    % number of members retained for mating
n_e = 2;     % number of elite children per generation
c_f = 0.8;   % crossover fraction

% Standard deviation weighting vector for mutation noise
sigma = [0.01 0.01 0.33 0.01 0.01 0.01 0.01 6.67];

w = [0.15 0.15 5 0.15 0.3 0.15 0.15 100];  % weighting vector for initial population

n_gen = 30;  % maximum number of generations (note: overrides the input argument)

err_tol = 0.00001;  % error tolerance

% Human-readable motor performance parameters
% and base value initialisation
sf = p(1);                   % Full-load slip (pu)
eff = p(2);                  % Full-load efficiency (pu)
pf = p(3);                   % Full-load power factor (pu)
T_fl = pf * eff / (1 - sf);  % Full-load torque (pu)
T_b = p(4) * T_fl;           % Breakdown torque (pu)
T_lr = p(5) * T_fl;          % Locked rotor torque (pu)
i_lr = p(6);                 % Locked rotor current (pu)
Pm_fl = pf * eff;            % Mechanical power at full load (pu)
Q_fl = sin(acos(pf));        % Full-load reactive power (pu)

% Formulate solution
pqt = [Pm_fl Q_fl T_b T_lr i_lr eff];

% Create initial population
x = (w' * ones(1, pop))' .* rand(pop, 8);
gen = 1;

% Evaluate fitness of initial population
for i = 1:pop
    y = (pqt - calc_pqt(sf, x(i,:))) ./ pqt;
    err(i) = y * y';
    if err(i) < err_tol
        z = x(i,:);
        conv = 1;
        err = err(i);
        return
    end
end

% Genetic algorithm
for gen = 2:n_gen

    % Select for fitness
    [fitness, index] = sort(err', 1);

    % Create next generation
    x_mate = x(index(1:n_r), :);  % select mating pool

    % Elite children (select best "n_e" children for next generation)
    x_new = x(index(1:n_e), :);

    % Crossover (random weighted average of parents)
    n_c = round((pop - n_e) * c_f);  % number of crossover children
    for j = 1:n_c
        i_pair = ceil(n_r * rand(2, 1));  % generate random pair of parents
        weight = rand(1, 8);              % generate random weighting

        % Crossover parents by weighted blend to generate new child
        x_new((n_e + j), :) = weight .* x_mate(i_pair(1), :) + (ones(1, 8) - weight) .* x_mate(i_pair(2), :);
    end

    % Mutation (Gaussian noise added to parents)
    n_m = pop - n_e - n_c;  % number of mutation children
    for k = 1:n_m
        % Select random parent from mating pool and add white noise
        x_new((n_e + n_c + k), :) = abs(x_mate(ceil(n_r * rand()), :) + sigma .* randn(1, 8));
    end

    x = x_new;

    % Evaluate fitness of new generation
    for i = 1:pop
        y = (pqt - calc_pqt(sf, x(i,:))) ./ pqt;
        err(i) = y * y';
        if err(i) < err_tol
            z = x(i,:);
            conv = 1;
            err = err(i);
            return
        end
    end

    % If the last generation, then output best results
    if gen == n_gen
        [fitness, index] = sort(err', 1);
        z = x(index(1), :);
        conv = 0;
        err = fitness(1);
    end
end

end
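In the notation of the code above, the two variation operators act element-wise on parent vectors $p_1, p_2$ drawn from the mating pool: blend crossover with a random weight vector $w \in [0,1]^8$, and Gaussian mutation with the per-parameter standard deviations $\sigma$:

```latex
x_{\mathrm{cross}} = w \circ p_1 + (\mathbf{1} - w) \circ p_2, \qquad
x_{\mathrm{mut}} = \left| \, p + \sigma \circ \mathcal{N}(0, I) \, \right|
```

where $\circ$ denotes the element-wise product and the absolute value keeps all circuit parameters non-negative.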


B.4 Hybrid Algorithms

B.4.1 hybrid_nr

% HYBRID_NR - Hybrid genetic algorithm and NR solver to optimise Rs and Xr2
% Uses NR_SOLVER2 as the base Newton-Raphson solver
% Note that NR_SOLVER2 is simply the conventional NR_SOLVER
% with Rs and Xr2 as known inputs
% Optimises Rs and Xr2 with a genetic algorithm
%
function [z, err, conv] = hybrid_nr(p, max_iter)

% Settings
pop = 15;      % population in each generation
n_r = 10;      % number of members retained for mating
n_e = 2;       % number of elite children per generation
c_f = 0.8;     % crossover fraction
sigma = 0.01;  % standard deviation for mutation noise
n_gen = 10;    % maximum number of generations

% Create initial population: column 1 holds Rs candidates, column 2 Xr2 candidates
Rs = 0.15 .* rand(pop, 2);
gen = 1;

for i = 1:pop
    [x(i,:), iter(i), err(i), conv(i)] = nr_solver2(p, Rs(i,1), Rs(i,2), max_iter);
    if conv(i) == 1
        z = x(i,:);
        conv = 1;
        err = err(i);
        return
    end
end

for gen = 2:n_gen

    gen  % display current generation (progress output)

    % Select for fitness
    [fitness, index] = sort(err', 1);

    % Create next generation
    Rs_mate = Rs(index(1:n_r), :);  % select mating pool

    % Elite children (select best "n_e" children for next generation)
    Rs_new = Rs(index(1:n_e), :);

    % Crossover (random weighted average of parents)
    n_c = round((pop - n_e) * c_f);  % number of crossover children
    for j = 1:n_c
        i_pair = ceil(n_r * rand(2, 1));  % generate random pair of parents
        weight = rand();                  % generate random weighting

        % Crossover parents by weighted blend to generate new child
        Rs_new((n_e + j), :) = weight .* Rs_mate(i_pair(1), :) + (1 - weight) .* Rs_mate(i_pair(2), :);
    end

    % Mutation (Gaussian noise added to parents)
    n_m = pop - n_e - n_c;  % number of mutation children
    for k = 1:n_m
        % Select random parent from mating pool and add white noise
        Rs_new((n_e + n_c + k), :) = abs(Rs_mate(ceil(n_r * rand()), :) + sigma * randn(1, 2));
    end

    Rs = Rs_new;

    for i = 1:pop
        [x(i,:), iter(i), err(i), conv(i)] = nr_solver2(p, Rs(i,1), Rs(i,2), max_iter);
        if conv(i) == 1
            z = x(i,:);
            conv = 1;
            err = err(i);
            return
        end
    end

    % If the last generation, then output best results
    if gen == n_gen
        [fitness, index] = sort(err', 1);
        z = x(index(1), :);
        conv = 0;
        err = fitness(1);
        Rs_best = Rs(index(1), 1)   % display best Rs estimate
        Xr2_best = Rs(index(1), 2)  % display best Xr2 estimate
    end
end

end

B.4.2 hybrid_lm

% HYBRID_LM - Hybrid genetic algorithm and LM solver to optimise Rs and Xr2
% Uses LM_SOLVER1a as the base Levenberg-Marquardt solver
% Note that LM_SOLVER1a is simply the conventional LM_SOLVER
% (error term adjustment) with Rs and Xr2 as known inputs.
% Optimises Rs and Xr2 with a genetic algorithm
%
function [z, err, conv] = hybrid_lm(p, max_iter)

% Settings
pop = 15;      % population in each generation
n_r = 10;      % number of members retained for mating
n_e = 2;       % number of elite children per generation
c_f = 0.8;     % crossover fraction
sigma = 0.01;  % standard deviation for mutation noise
n_gen = 10;    % maximum number of generations

% Create initial population: column 1 holds Rs candidates, column 2 Xr2 candidates
Rs = 0.15 .* rand(pop, 2);
gen = 1;

for i = 1:pop
    [x(i,:), iter(i), err(i), conv(i)] = lm_solver1a(p, Rs(i,1), Rs(i,2), 1e-7, 5, max_iter);
    if conv(i) == 1
        z = x(i,:);
        conv = 1;
        err = err(i);
        return
    end
end

for gen = 2:n_gen

    gen  % display current generation (progress output)

    % Select for fitness
    [fitness, index] = sort(err', 1);

    % Create next generation
    Rs_mate = Rs(index(1:n_r), :);  % select mating pool

    % Elite children (select best "n_e" children for next generation)
    Rs_new = Rs(index(1:n_e), :);

    % Crossover (random weighted average of parents)
    n_c = round((pop - n_e) * c_f);  % number of crossover children
    for j = 1:n_c
        i_pair = ceil(n_r * rand(2, 1));  % generate random pair of parents
        weight = rand();                  % generate random weighting

        % Crossover parents by weighted blend to generate new child
        Rs_new((n_e + j), :) = weight .* Rs_mate(i_pair(1), :) + (1 - weight) .* Rs_mate(i_pair(2), :);
    end

    % Mutation (Gaussian noise added to parents)
    n_m = pop - n_e - n_c;  % number of mutation children
    for k = 1:n_m
        % Select random parent from mating pool and add white noise
        Rs_new((n_e + n_c + k), :) = abs(Rs_mate(ceil(n_r * rand()), :) + sigma * randn(1, 2));
    end

    Rs = Rs_new;

    for i = 1:pop
        [x(i,:), iter(i), err(i), conv(i)] = lm_solver1a(p, Rs(i,1), Rs(i,2), 1e-7, 5, max_iter);
        if conv(i) == 1
            z = x(i,:);
            conv = 1;
            err = err(i);
            return
        end
    end

    % If the last generation, then output best results
    if gen == n_gen
        [fitness, index] = sort(err', 1);
        z = x(index(1), :);
        conv = 0;
        err = fitness(1);
        Rs_best = Rs(index(1), 1)   % display best Rs estimate
        Xr2_best = Rs(index(1), 2)  % display best Xr2 estimate
    end
end

end

B.4.3 hybrid_dnr

% HYBRID_DNR - Hybrid genetic algorithm and DNR solver to optimise Rs and Xr2
% Uses DNR_SOLVER2 as the base damped Newton-Raphson solver
% Note that DNR_SOLVER2 is simply the conventional DNR_SOLVER
% with Rs and Xr2 as known inputs
% Optimises Rs and Xr2 with a genetic algorithm
%
function [z, err, conv] = hybrid_dnr(p, max_iter)

% Settings
pop = 15;      % population in each generation
n_r = 10;      % number of members retained for mating
n_e = 2;       % number of elite children per generation
c_f = 0.8;     % crossover fraction
sigma = 0.01;  % standard deviation for mutation noise
n_gen = 10;    % maximum number of generations

% Create initial population: column 1 holds Rs candidates, column 2 Xr2 candidates
Rs = 0.15 .* rand(pop, 2);
gen = 1;

for i = 1:pop
    [x(i,:), iter(i), err(i), conv(i)] = dnr_solver2(p, Rs(i,1), Rs(i,2), 1e-7, max_iter);
    if conv(i) == 1
        z = x(i,:);
        conv = 1;
        err = err(i);
        return
    end
end

for gen = 2:n_gen

    gen  % display current generation (progress output)

    % Select for fitness
    [fitness, index] = sort(err', 1);

    % Create next generation
    Rs_mate = Rs(index(1:n_r), :);  % select mating pool

    % Elite children (select best "n_e" children for next generation)
    Rs_new = Rs(index(1:n_e), :);

    % Crossover (random weighted average of parents)
    n_c = round((pop - n_e) * c_f);  % number of crossover children
    for j = 1:n_c
        i_pair = ceil(n_r * rand(2, 1));  % generate random pair of parents
        weight = rand();                  % generate random weighting

        % Crossover parents by weighted blend to generate new child
        Rs_new((n_e + j), :) = weight .* Rs_mate(i_pair(1), :) + (1 - weight) .* Rs_mate(i_pair(2), :);
    end

    % Mutation (Gaussian noise added to parents)
    n_m = pop - n_e - n_c;  % number of mutation children
    for k = 1:n_m
        % Select random parent from mating pool and add white noise
        Rs_new((n_e + n_c + k), :) = abs(Rs_mate(ceil(n_r * rand()), :) + sigma * randn(1, 2));
    end

    Rs = Rs_new;

    for i = 1:pop
        [x(i,:), iter(i), err(i), conv(i)] = dnr_solver2(p, Rs(i,1), Rs(i,2), 1e-7, max_iter);
        if conv(i) == 1
            z = x(i,:);
            conv = 1;
            err = err(i);
            return
        end
    end

    % If the last generation, then output best results
    if gen == n_gen
        [fitness, index] = sort(err', 1);
        z = x(index(1), :);
        conv = 0;
        err = fitness(1);
        Rs_best = Rs(index(1), 1)   % display best Rs estimate
        Xr2_best = Rs(index(1), 2)  % display best Xr2 estimate
    end
end

end