
EXP NO : 1 DATE: 06/01/2012 FUZZY CONTROLLER

AIM:

To design a fuzzy controller for the first order system

TOOLS USED:

Matlab

THEORY:

FUZZY SETS:

Fuzzy logic starts with the concept of a fuzzy set. A fuzzy set is a set without a crisp, clearly defined boundary. It can contain elements with only a partial degree of membership.

Any statement can be fuzzy. The major advantage that fuzzy reasoning offers is the ability to reply to a yes-no question with a not-quite-yes-or-no answer. Humans do this kind of thing all the time (think how rarely you get a straight answer to a seemingly simple question), but it is a rather new trick for computers.

MEMBERSHIP FUNCTIONS:

A membership function (MF) is a curve that defines how each point in the input space is mapped to a membership value (or degree of membership) between 0 and 1. The input space is sometimes referred to as the universe of discourse, a fancy name for a simple concept.
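As a small illustration (an assumption on my part that the Fuzzy Logic Toolbox is available; the universe of discourse and parameters below are purely illustrative, not taken from this experiment), two common membership functions can be evaluated and plotted as follows:

x = 0:0.1:10;                    % universe of discourse
mu_tri   = trimf(x, [2 5 8]);    % triangular MF: feet at 2 and 8, peak at 5
mu_gauss = gaussmf(x, [1.5 5]);  % Gaussian MF: sigma = 1.5, centre = 5
plot(x, mu_tri, x, mu_gauss)
legend('trimf', 'gaussmf'), ylabel('degree of membership')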


LOGICAL OPERATIONS:

The classical two-valued logical operations (AND, OR, NOT) extend to fuzzy sets: when the truth values are restricted to 0 and 1, the fuzzy operations reproduce the familiar truth tables, and they also operate over a continuously varying range of truth values A and B according to the fuzzy operations you have defined (most commonly min(A, B) for AND, max(A, B) for OR and 1 − A for NOT).
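A minimal command-line sketch of these operations, using the common min/max/complement definitions (the fuzzy sets and the range below are illustrative assumptions):

x = 0:0.01:10;
A = trimf(x, [0 3 6]);   % membership values of fuzzy set A
B = trimf(x, [3 6 9]);   % membership values of fuzzy set B
andAB = min(A, B);       % fuzzy AND (intersection)
orAB  = max(A, B);       % fuzzy OR (union)
notA  = 1 - A;           % fuzzy NOT (complement)
plot(x, andAB, x, orAB, x, notA)
legend('A AND B', 'A OR B', 'NOT A')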

PROCEDURE:

You can use five primary GUI tools for building, editing, and observing fuzzy inference systems in the toolbox:

1. Fuzzy Inference System (FIS) Editor

2. Membership Function Editor

3. Rule Editor

4. Rule Viewer

5. Surface Viewer

To build a fuzzy inference system using custom functions in the GUI:

1. Open the FIS Editor by typing fuzzy at the MATLAB prompt.

2. Specify the number of inputs and outputs of the fuzzy system, as described in The FIS Editor.


3. Create custom membership functions, and replace the built-in membership functions with them, as described in Specifying Custom Membership Functions.

Membership functions define how each point in the input space is mapped to a membership value between 0 and 1.

4. Create rules using the Rule Editor, as described in The Rule Editor.

Rules define the logical relationship between the inputs and the outputs.

5. Create custom inference functions, and replace the built in inference functions with them, as described in Specifying Custom Inference Functions.

Inference methods include the AND, OR, implication, aggregation and defuzzification methods. This action generates the output values for the fuzzy system.

Select View > Surface to view the output of the fuzzy inference system in the Surface Viewer, as described in The Surface Viewer.

6. Fuzzy Logic Toolbox software is designed to work in the Simulink environment. After you have created your fuzzy system using the GUI tools or some other method, you are ready to embed your system directly into a simulation.
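For reference, a fuzzy inference system of this kind can also be built from the command line using the classic Fuzzy Logic Toolbox functions. The sketch below is only illustrative: the variable names, ranges, membership functions and rules are assumptions, not the ones used in this experiment.

fis = newfis('firstorder');                  % new Mamdani FIS
fis = addvar(fis, 'input', 'error', [-1 1]);
fis = addmf(fis, 'input', 1, 'negative', 'trimf', [-1 -1 0]);
fis = addmf(fis, 'input', 1, 'zero',     'trimf', [-0.5 0 0.5]);
fis = addmf(fis, 'input', 1, 'positive', 'trimf', [0 1 1]);
fis = addvar(fis, 'output', 'control', [-1 1]);
fis = addmf(fis, 'output', 1, 'decrease', 'trimf', [-1 -1 0]);
fis = addmf(fis, 'output', 1, 'hold',     'trimf', [-0.5 0 0.5]);
fis = addmf(fis, 'output', 1, 'increase', 'trimf', [0 1 1]);
% each rule row: [input MF, output MF, rule weight, AND(1)/OR(2) connective]
fis = addrule(fis, [1 1 1 1; 2 2 1 1; 3 3 1 1]);
u = evalfis(0.3, fis)                        % defuzzified control action for error = 0.3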

BLOCK DIAGRAM:


GRAPH:


RESULT:

Thus the fuzzy controller is successfully designed for the first order system.

EXP NO : 2 DATE: 13/01/2012

NEURAL NETWORK CONTROLLER

AIM:

To design a neural controller for the second order system

TOOLS USED:

Matlab

THEORY:

Neural Networks (NNs) are networks of neurons, for example, as found in real (i.e. biological) brains. Artificial Neural Networks (ANNs) are networks of artificial neurons, and hence constitute crude approximations to parts of real brains. They may be physical devices, or simulated on conventional computers. From a practical point of view, an ANN is just a parallel computational system consisting of many simple processing elements connected together in a specific way in order to perform a particular task. One should never lose sight of how crude the approximations are, and how over-simplified our ANNs are compared to real brains.

They are extremely powerful computational devices (Turing equivalent, universal computers).

Massive parallelism makes them very efficient.

They can learn and generalize from training data – so there is no need for enormous feats of programming.

They are particularly fault tolerant – this is equivalent to the “graceful degradation” found in biological systems.

They are very noise tolerant – so they can cope with situations where normal symbolic systems would have difficulty.

In principle, they can do anything a symbolic/logic system can do, and more. (In practice, getting them to do it can be rather difficult…)


PROCEDURE

1. Start MATLAB®.

2. Double-click the Model Reference Control block. This brings up the following window for training the model reference controller.

3. The next step would normally be to select Plant Identification, which opens the Plant Identification window. You would then train the plant model. Because the Plant Identification window is identical to the one used with the previous controllers, that process is omitted here.

4. Select Generate Data. The program starts generating the data for training the controller. After the data is generated, the following window appears.

5. Select Accept Data. Return to the Model Reference Control window and select Train Controller. The program presents one segment of data to the network and trains the network for a specified number of iterations (five in this case). This process continues, one segment at a time, until the entire training set has been presented to the network. Controller training can be significantly more time consuming than plant model training. This is because the controller must be trained using dynamic backpropagation (see [HaJe99]). After the training is complete, the response of the resulting closed loop system is displayed.

6. Go back to the Model Reference Control window. If the performance of the controller is not accurate, then you can select Train Controller again, which continues the controller training with the same data set. If you would like to use a new data set to continue training, select Generate Data or Import Data before you select Train Controller. (Be sure that Use Current Weights is selected if you want to continue training with the same weights.) It might also be necessary to retrain the plant model. If the plant model is not accurate, it can affect the controller training. For this demonstration, the controller should be accurate enough, so select OK. This loads the controller weights into the Simulink model.

7. Return to the Simulink model and start the simulation by selecting the Start command from the Simulation menu. As the simulation runs, the plant output and the reference signal are displayed.
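As a command-line complement to the GUI workflow above, the sketch below simply trains a small feedforward network to reproduce simulated plant input/output data; the plant model, network size and data are assumptions made only for illustration.

u = rand(1, 500);                 % random excitation applied to the plant
y = filter(0.1, [1 -0.9], u);     % assumed discrete first-order plant response
net = feedforwardnet(10);         % one hidden layer with 10 neurons
net = train(net, u, y);           % Levenberg-Marquardt training by default
yhat = net(u);                    % network prediction on the training input
plot(1:500, y, 1:500, yhat), legend('plant output', 'network output')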


BLOCK DIAGRAM:

GRAPH

Set point = 0.5


Set point = 0.4

RESULT:

Thus the neural controller is successfully designed for a second order system.


EXP NO : 3 DATE: 20/01/2012 MODEL PREDICTIVE CONTROLLER

AIM:

To design a model predictive controller for a first order system.

TOOLS USED:

Matlab

THEORY:

MPC is based on iterative, finite horizon optimization of a plant model. At time t the current plant state is sampled and a cost-minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future, [t, t + T]. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and find (via the solution of Euler-Lagrange equations) a cost-minimizing control strategy until time t + T. Only the first step of the control strategy is implemented; then the plant state is sampled again and the calculations are repeated starting from the now current state, yielding a new control and new predicted state path. The prediction horizon keeps being shifted forward, and for this reason MPC is also called receding horizon control. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler-Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method. To some extent the theoreticians have been trying to catch up with the control engineers when it comes to MPC.


ALGORITHM:

Model Predictive Control (MPC) is a multivariable control algorithm that uses:

an internal dynamic model of the process,

a history of past control moves, and

an optimization cost function J over the receding prediction horizon,

to calculate the optimum control moves.

The optimization cost function is given by:

J = Σi wxi (ri − xi)² + Σi wui (Δui)²

without violating constraints (low/high limits), with:

xi = i-th controlled variable (e.g. measured temperature)

ri = i-th reference variable (e.g. required temperature)

ui = i-th manipulated variable (e.g. control valve)

wxi = weighting coefficient reflecting the relative importance of xi

wui = weighting coefficient penalizing relatively big changes in ui


PROCEDURE:

1. Type mpctool

The design tool is part of the Control and Estimation Tools Manager. When invoked as shown above, the design tool opens and creates a new project named MPCdesign

2. In the MPC task we should rename the manipulated variable, control variable and disturbance.

3. Then provide the values for the control interval, prediction horizon and control horizon.

4. In the constraints we should select the range for control variable and measured variable.

5. Assign the value for robustness and overall estimator gain.

6. Select the type of input for measured disturbance and control variable.

7. Click the simulation button and get the graph to note down the response for different disturbance and control variable.
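The same kind of design can also be sketched from the command line with an mpc object (this assumes the MPC Toolbox; the plant model, horizons, constraints and set point below are illustrative assumptions, not the values used in this experiment):

Ts = 2;                                % control interval in seconds (assumed)
plant = tf(1, [50 1]);                 % assumed first order plant model
mpcobj = mpc(plant, Ts, 10, 2);        % prediction horizon 10, control horizon 2
mpcobj.MV(1).Min = 0;                  % manipulated variable constraints
mpcobj.MV(1).Max = 100;
r = 50*ones(100, 1);                   % set point of 50 held for 100 steps
y = sim(mpcobj, 100, r);               % closed-loop simulation against the plant model
plot(y), ylabel('controlled variable')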

BLOCK DIAGRAM:

REAL PLANT:


OVERALL BLOCK DIAGRAM:

GRAPH:

Set point = 50

Disturbance = 10, Sampling time = 100


Set point = 50

Disturbance = 30, Sampling time = 200

RESULT:

Thus the model predictive controller is successfully designed for the first order system.


EXP NO : 4 DATE: 03/02/2012 KALMAN FILTER

AIM:

To design a Kalman filter in order to estimate the states of the system amidst noise.

TOOLS USED:

Matlab.

THEORY:

The Kalman filters are based on linear dynamic systems discretized in the time domain. They are modelled on a Markov chain built on linear operators perturbed by Gaussian noise. The state of the system is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the observed outputs from the true ("hidden") state.

In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the framework of the Kalman filter. This means specifying the following matrices: Fk, the state-transition model; Hk, the observation model; Qk, the covariance of the process noise; Rk, the covariance of the observation noise; and sometimes Bk, the control-input model, for each time-step, k, as described below

The Kalman filter model assumes the true state at time k is evolved from the state at (k − 1) according to

xk = Fk xk−1 + Bk uk + wk

where

Fk is the state transition model which is applied to the previous state xk−1;

Bk is the control-input model which is applied to the control vector uk;

wk is the process noise which is assumed to be drawn from a zero mean multivariate normal distribution with covariance Qk.

At time k an observation (or measurement) zk of the true state xk is made according to

zk = Hk xk + vk

where Hk is the observation model which maps the true state space into the observed space and vk is the observation noise which is assumed to be zero mean Gaussian white noise with covariance Rk.


The initial state, and the noise vectors at each step {x0, w1, ..., wk, v1 ... vk} are all assumed to be mutually independent.

The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation x̂(n|m) represents the estimate of x at time n given observations up to, and including, time m.

The state of the filter is represented by two variables:

x̂(k|k), the a posteriori state estimate at time k given observations up to and including time k;

P(k|k), the a posteriori error covariance matrix (a measure of the estimated accuracy of the state estimate).

PREDICT:

x̂(k|k−1) = Fk x̂(k−1|k−1) + Bk uk (predicted (a priori) state estimate)

P(k|k−1) = Fk P(k−1|k−1) Fk' + Qk (predicted (a priori) estimate covariance)

UPDATE:

ỹk = zk − Hk x̂(k|k−1) (innovation or measurement residual)

Sk = Hk P(k|k−1) Hk' + Rk (innovation (or residual) covariance)

Kk = P(k|k−1) Hk' Sk⁻¹ (optimal Kalman gain)

x̂(k|k) = x̂(k|k−1) + Kk ỹk (updated (a posteriori) state estimate)

P(k|k) = (I − Kk Hk) P(k|k−1) (updated (a posteriori) estimate covariance)

PROCEDURE:

1. Get the state matrix.

2. Describe the noise.

3. Estimate the state before the measurement at time k.

4. Predict the noise covariance before the measurement at time k.

5. Update the estimate and noise covariance after the measurement has been taken at time k.

CODE:


% define system
% x+ = x + u * dt + n
% y = x + v
dt = 1;
F_x = 1; F_u = dt; F_n = 1;
H = 1;
Q = 1; R = 1;

% simulated variables
X = 7; u = 1;

% estimated variables
x = 0; P = 1e4;

% trajectories
tt = 0:dt:40;
XX = zeros(1, size(tt,2));
xx = zeros(1, size(tt,2));
yy = zeros(1, size(tt,2));
PP = zeros(1, size(tt,2));

% perturbation levels
q = sqrt(Q); r = sqrt(R);

% start loop
i = 1;
for t = tt
    % simulate
    n = q * randn;
    X = F_x * X + F_u * u + F_n * n;
    v = r * randn;
    y = H*X + v;

    % estimate - prediction
    x = F_x * x + F_u * u;
    P = F_x * P * F_x' + F_n * Q * F_n';

    % correction
    e = H * x;
    E = H * P * H';
    z = y - e;
    Z = R + E;
    K = P * H' * Z^-1;
    x = x + K * z;
    P = P - K * H * P;

    % collect data
    XX(:,i) = X;
    xx(:,i) = x;
    yy(:,i) = y;
    PP(:,i) = diag(P);

    % update index
    i = i + 1;
end

% plot
plot(tt,XX,'r', tt,xx,'c', tt,yy,'b', tt,XX+3*sqrt(PP),'g', tt,XX-3*sqrt(PP),'g');
legend('truth','estimate','measurement','+/- 3 sigma')

GRAPH:


RESULT:

A Kalman filter for estimating the state amidst noise has been designed and its performance was studied.

EXP NO : 5 DATE: 10/02/2012 CASCADE CONTROL

AIM:


To determine the response of the process using cascade control.

TOOLS USED:

Matlab

THEORY:

Cascade control is one of the most successful methods for enhancing single loop control performance. It can dramatically improve the performance of control strategies, reducing both the maximum deviation and the integral error for disturbance responses. Cascade control has one manipulated variable and more than one measured variable.

The calculation can be implemented with a wide variety of analog and digital equipment. This combination of ease of implementation and potentially large control performance improvement has led to the widespread application of cascade control for many decades.

As explained above, single-loop enhancement takes advantage of extra information to improve on the performance of the PID feedback control system. The selection of this extra measurement, which is based on information about the most common disturbance and about the process dynamic response, is critical to the success of the cascade controller. Disturbances in the secondary loop are corrected by the secondary controller before they affect the value of the primary controlled output.

PROCEDURE:

1. Open Matlab and create a new model.

2. Drag the blocks from the Simulink browser and place them in the model.

3. The desired output (i.e. settling time) is reached by changing the values of the PID controller by the trial and error method.

4. Study the response of the process without using cascade control and with using cascade control.
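A minimal command-line sketch of the comparison (assuming the Control System Toolbox; the process models and controller gains below are illustrative assumptions, not the values used in this experiment):

Gp2 = tf(1, [1 1]);                      % assumed inner (secondary) process
Gp1 = tf(1, [10 1]);                     % assumed outer (primary) process
C2  = pid(5);                            % secondary controller (P only)
C1  = pid(2, 0.5);                       % primary controller (PI)
inner      = feedback(C2*Gp2, 1);        % closed secondary loop
cascade_cl = feedback(C1*inner*Gp1, 1);  % overall cascade loop
single_cl  = feedback(C1*Gp2*Gp1, 1);    % conventional single-loop feedback
step(single_cl, cascade_cl), legend('single loop', 'cascade')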

INFERENCE:

1. By using cascade control, the overshoot is reduced, whereas in conventional feedback control a large overshoot is present.


2. Load disturbance is reduced.

3. The inner loop has the effect of reducing the time lag in the outer loop, with the result that the system responds more quickly with a higher frequency oscillation.

BLOCK DIAGRAM:

PROCESS WITH FEEDBACK CONTROL:

PROCESS WITH CASCADE CONTROL:

GRAPH:


RESULT:

Thus the response of the process using cascade control is studied.

EXP NO : 6 DATE: 17/02/2012 FEED FORWARD CONTROL


AIM:

To determine the response of the process using feed forward control.

TOOLS USED:

Matlab

THEORY:

Combined feedforward plus feedback control can significantly improve performance over simple feedback control whenever there is a major disturbance that can be measured before it affects the process output. In the most ideal situation, feedforward control can entirely eliminate the effect of the measured disturbance on the process output. Even when there are modeling errors, feedforward control can often reduce the effect of the measured disturbance on the output better than that achievable by feedback control alone. However, the decision as to whether or not to use feedforward control depends on whether the degree of improvement in the response to the measured disturbance justifies the added costs of implementation and maintenance. The economic benefits of feedforward control can come from lower operating costs and/or increased salability of the product due to its more consistent quality. Feedforward control is always used along with feedback control, because a feedback system is required to track set point changes and to suppress unmeasured disturbances.

PROCEDURE:

1. Open Matlab and create a new model.

2. Drag the blocks from the Simulink browser and place them in the model.

3. The desired output (i.e. settling time) is reached by changing the values of the PID controller by the trial and error method.

4. Study the response of the process without using feedforward control and with using feedforward control.
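A minimal command-line sketch of the disturbance-rejection comparison (assuming the Control System Toolbox; the process, disturbance and controller models are illustrative assumptions). With the ideal compensator Gff = -Gd/Gp the measured disturbance is cancelled exactly:

Gp  = tf(1, [10 1]);          % assumed process transfer function
Gd  = tf(1, [5 1]);           % assumed disturbance transfer function
C   = pid(2, 0.4);            % feedback PI controller (illustrative gains)
Gff = -Gd/Gp;                 % ideal feedforward compensator
S       = 1/(1 + Gp*C);       % sensitivity of the feedback loop
fb_only = Gd*S;               % disturbance-to-output, feedback alone
fb_ff   = (Gd + Gp*Gff)*S;    % feedback plus feedforward
step(fb_only, fb_ff), legend('feedback only', 'feedback + feedforward')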

INFERENCE:

1. Using feedforward and feedback control, some oscillation will be present, but the desired output is reached.

2. Using only feedforward control, a deviation will remain, so the desired output will not be reached.

3. The stability characteristics of a feedback system will not change with the addition of a feedforward control.

BLOCK DIAGRAM:

PROCESS WITH FEEDBACK:


PROCESS WITH FEED FORWARD:

GRAPH:


RESULT:

Thus, the response of the process using feed forward control is studied.


EXP NO : 7 DATE: 02/03/2012 ROBUST CONTROLLER – CASE STUDY

AIM:

To make a detailed case study on Robust Control.

THEORY:

Robust control is a branch of control theory that explicitly deals with uncertainty in its approach to controller design. Robust control methods are designed to function properly so long as uncertain parameters or disturbances are within some (typically compact) set. Robust methods aim to achieve robust performance and/or stability in the presence of bounded modeling errors. In contrast with an adaptive control policy, a robust control policy is static; rather than adapting to measurements of variations, the controller is designed to work assuming that certain variables will be unknown but, for example, bounded. Informally, a controller designed for a particular set of parameters is said to be robust if it would also work well under a different set of assumptions. High-gain feedback is a simple example of a robust control method; with sufficiently high gain, the effect of any parameter variations will be negligible. High-gain feedback is the principle that allows simplified models of operational amplifiers and emitter-degenerated bipolar transistors to be used in a variety of different settings.

THE MODERN THEORY OF ROBUST CONTROL:

Probably the most important example of a robust control technique is H-infinity loop-shaping. This method minimizes the sensitivity of a system over its frequency spectrum, and this guarantees that the system will not greatly deviate from expected trajectories when disturbances enter the system. H-infinity loop-shaping is a design methodology in modern control theory. It combines the traditional intuition of classical control methods, such as Bode's sensitivity integral, with H-infinity optimization techniques to achieve controllers whose stability and performance properties hold good in spite of bounded differences between the nominal plant assumed in design and the true plant encountered in practice. Essentially, the control system designer describes the desired responsiveness and noise-suppression properties by weighting the plant transfer function in the frequency domain; the resulting 'loop-shape' is then 'robustified' through optimization. Robustification usually has little effect at high and low frequencies, but the response around unity-gain crossover is adjusted to maximise the system's stability margins. H-infinity loop-shaping can be applied to multiple-input multiple-output (MIMO) systems. The technique has the following benefits:

Easy to apply – commercial software handles the hard math.

Easy to implement – standard transfer functions and state-space methods can be used.

Plug and play – no need for re-tuning on an installation-by-installation basis.

"Robust control refers to the control of unknown plants with unknown dynamics subject to unknown disturbances". Clearly, the key issue with robust control systems is uncertainty and how the control system can deal with this problem. Uncertainty is shown entering the system in three places. There is uncertainty in the model of the plant. There are disturbances that occur in the plant system. Also there is noise which is read on the sensor inputs. Each of these uncertainties can have an additive or multiplicative component.

Figure : Plant control loop with uncertainty

The figure above also shows the separation of the computer control system with that of the plant. It is important to understand that the control system designer has little control of the uncertainty in the plant. The designer creates a control system that is based on a model of the plant. However, the implemented control system must interact with the actual plant, not the model of the plant.
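To make the weighting-and-optimization idea concrete, the following is a minimal mixed-sensitivity H-infinity sketch (it assumes the Robust Control Toolbox; the plant and weights are illustrative assumptions, not part of this case study):

G  = tf(4, [1 0.5 1]);                 % assumed nominal plant model
W1 = makeweight(100, 1, 0.1);          % performance weight on the sensitivity S
W3 = makeweight(0.1, 10, 100);         % robustness weight on the complementary sensitivity T
[K, CL, gam] = mixsyn(G, W1, [], W3);  % H-infinity controller minimizing the mixed-sensitivity norm
L = G*K;
bodemag(feedback(1, L), feedback(L, 1))  % resulting S and T shapes
legend('S', 'T')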

EFFECTS OF UNCERTAINTY:

Control system engineers are concerned with three main topics: observability, controllability and stability. Observability is the ability to observe all of the parameters or state variables in the system. Controllability is the ability to move a system from any given state to any desired state. Stability is often phrased as the bounded response of the system to any bounded input. Any successful control system will have and maintain all three of these properties. Uncertainty presents a challenge to the control system engineer who tries to maintain these properties using limited information. Robust control methods seek to bound the uncertainty rather than express it in the form of a distribution. Given a bound on the uncertainty, the control can deliver results that meet the control system requirements in all cases. Therefore robust control theory might be stated as a worst-case analysis method rather than a typical case method. It must be recognized that some performance may be sacrificed in order to guarantee that the system meets certain requirements. However, this seems to be a common theme when dealing with safety critical embedded systems. One of the most difficult parts of designing a good control system is modeling the behavior of the plant.


One technique for handling the model uncertainty that often occurs at high frequencies is to balance performance and robustness in the system through gain scheduling. A high gain means that the system will respond quickly to differences between the desired state and the actual state of the plant. At low frequencies where the plant is accurately modeled, this high gain results in high performance of the system. This region of operation is called the performance band. At high frequencies where the plant is not modeled accurately, the gain is lower. A low gain at high frequencies results in a larger error term between the measured output and the reference signal. This region is called the robustness band. In this region the feedback from the output is essentially ignored. The method for changing the gain over different frequencies is through the transfer function. This involves setting the poles and zeros of the transfer function to achieve a filter. Between these two regions, performance and robustness, there is a transition region. In this region the controller does not perform well for either performance or robustness. The transition region cannot be made arbitrarily small because it depends on the number of poles and zeros of the transfer function. Adding terms to the transfer function increases the complexity of the control system. Thus, there is a trade-off between the simplicity of the model and the minimal size of the transition band.

AVAILABLE TOOLS, TECHNIQUES, AND METRICS:

There are a variety of techniques that have been developed for robust control.

Adaptive control - An adaptive control system sets up observers for each significant state variable in the system. The system can adjust each observer to account for time varying parameters of the system. In an adaptive system, there is always a dual role of the control system. The output is to be brought closer to the desired input while, at the same time, the system continues to learn about changes in the system parameters. This method sometimes suffers from problems in convergence for the system parameters.

H2 and H-infinity - Hankel norms are used to measure control system properties. A norm is an abstraction of the concept of length. Both of these techniques are frequency domain techniques. H2 control seeks to bound the power gain of the system while H-infinity control seeks to bound the energy gain of the system. Gains in power or energy in the system indicate operation of the system near a pole in the transfer function. These situations are unstable.

Parameter Estimation - Parameter estimation techniques establish boundaries in the frequency domain that cannot be crossed to maintain stability. These boundaries are evaluated by given uncertainty vectors. This technique is graphical. It has some similarities to the root locus technique. The advancement of this technique is based upon computational simplifications in evaluating whether multiple uncertainties cause the system to cross a stability boundary. These techniques claim to give the user clues on how to change the system to make it more insensitive to uncertainties.

Lyapunov - This is claimed to be the only universal technique for assessing non-linear systems. The technique focuses on stability. Lyapunov functions are constructed, which are described as energy-like functions, that model the behavior of real systems. These functions are evaluated along the system trajectory to see if the first derivative is always dissipative in energy. Any gain in energy indicates that the system is operating near a pole and will therefore be unstable.

Fuzzy Control - Fuzzy control is based upon the construction of fuzzy sets to describe the uncertainty inherent in all variables and a method of combining these variables called fuzzy logic. Fuzzy control is applicable to robust control because it is a method of handling the uncertainty of the system. Fuzzy control is a controversial issue. Its proponents claim the ability to control without the requirement for complex mathematical modeling. It appears to have applications where there are a large number of variables to be controlled and it is intuitively obvious (but not mathematically obvious) how to control the system.

INFERENCES:

There is a concern for the extremes of operation in an embedded control system that has safety implications. It is in these extremes that uncertainty is high and robust control methods can be of service. Good models of systems are difficult to construct. They require a variety of skills from physics, electrical, mechanical and computer engineering to design and implement.

To bring the techniques to use by the general industry, a variety of tools have been developed. However, there is always an issue of the correctness of the tools especially when they are used to simplify a very complex technique.

With the high level of research devoted to robust control the gap between robust control theory and its application may be closing.

RESULT:

A detailed case study has been made on the robust controller.

EXP NO : 8 DATE: 09/03/2012 ADAPTIVE CONTROLLER – CASE STUDY


AIM:

To make a case study on adaptive control and its types.

THEORY:

Adaptive control is the control method used by a controller which must adapt to a controlled system with parameters which vary, or are initially uncertain. For example, as an aircraft flies, its mass will slowly decrease as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. Adaptive control is different from robust control in that it does not need a priori information about the bounds on these uncertain or time-varying parameters; robust control guarantees that if the changes are within given bounds the control law need not be changed, while adaptive control is concerned with control law changes themselves.

PARAMETER ESTIMATION:

The foundation of adaptive control is parameter estimation. Common methods of estimation include recursive least squares and gradient descent. Both of these methods provide update laws which are used to modify estimates in real time (i.e., as the system operates). Lyapunov stability is used to derive these update laws and show convergence criteria (typically persistent excitation). Projection and normalization are commonly used to improve the robustness of estimation algorithms.
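A minimal recursive least squares sketch (with a forgetting factor; the regressor, noise level and parameter values are illustrative assumptions) shows how such an update law modifies the estimates in real time:

theta_true = [0.8; 1.5];          % "unknown" parameters to be identified
theta_hat  = zeros(2, 1);         % initial estimate
P = 100*eye(2);                   % large initial covariance
lambda = 0.98;                    % forgetting factor
for k = 1:200
    phi = [sin(0.1*k); 1];                           % regressor (persistently exciting signal + bias)
    y   = phi.'*theta_true + 0.05*randn;             % noisy measurement y = phi'*theta + noise
    K   = P*phi/(lambda + phi.'*P*phi);              % estimator gain
    theta_hat = theta_hat + K*(y - phi.'*theta_hat); % update law
    P   = (P - K*phi.'*P)/lambda;                    % covariance update
end
disp(theta_hat)                   % should be close to theta_true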

CLASSIFICATION OF ADAPTIVE CONTROL TECHNIQUES:

In general one should distinguish between:

1. Feedforward Adaptive Control

2. Feedback Adaptive Control

as well as between

1. Direct Methods and

2. Indirect Methods

Direct methods are ones wherein the estimated parameters are those directly used in the adaptive controller. In contrast, indirect methods are those in which the estimated parameters are used to calculate the required controller parameters [1].

There are several broad categories of feedback adaptive control (classification can vary):

Dual Adaptive Controllers [based on Dual control theory]

o Optimal Dual Controllers [difficult to design]

o Suboptimal Dual Controllers

Nondual Adaptive Controllers

o Adaptive Pole Placement


o Extremum Seeking Controllers

o Iterative learning control

o Gain scheduling

o Model Reference Adaptive Controllers (MRACs) [incorporate a reference model defining desired closed loop performance]


APPLICATIONS:

When designing adaptive control systems, special consideration is necessary of convergence and robustness issues. Lyapunov stability is typically used to derive control adaptation laws and show convergence.

Typical applications of adaptive control are (in general):

Self-tuning of subsequently fixed linear controllers during the implementation phase for one operating point;

Self-tuning of subsequently fixed robust controllers during the implementation phase for whole range of operating points;

Self-tuning of fixed controllers on request if the process behaviour changes due to ageing, drift, wear etc.;

Adaptive control of linear controllers for nonlinear or time-varying processes;

Adaptive control or self-tuning control of nonlinear controllers for nonlinear processes;

Adaptive control or self-tuning control of multivariable controllers for multivariable processes (MIMO systems);

Usually these methods adapt the controllers to both the process statics and dynamics. In special cases the adaptation can be limited to the static behavior alone, leading to adaptive control based on characteristic curves for the steady-states or to extremum value control, optimizing the steady state. Hence, there are several ways to apply adaptive control algorithms.

RESULT:

Thus a case study has been made on the adaptive controller.
