
Matlab Optimization for Dummies (Matlab R2012a)

Find the parameters that minimize the root of the sum of squared differences between the observed values and a user-defined equation.

A good starting point to avoid problems in MATLAB is to identify which algorithm to use. For this purpose, search the help documentation for "Choosing a Solver". There you will find the optimization decision table.

Author: Mauricio Bedoya ([email protected])
Version: 01/2013

fminsearch

First, read the documentation of this function. In the command window type:

>> doc fminsearch

Now, let's do several examples.

Example 1
f(x) = a * (x^b)
x = [1 2 3 4 5];
y = [2 8 18 32 50]; % y = f(x)

Find the values of "a" and "b" that minimize the difference.

Here we have 2 parameters, but fminsearch allows only one input argument for the unknowns. The trick is to pack the parameters into a vector. The decision table recommends the lsqcurvefit algorithm for a least-squares objective with no constraints; however, I will proceed with fminsearch.

step 1 "f(x) function"

In an m-file (File/New Script):

function my_f = f_x(parameters)
global x
a = parameters(1);
b = parameters(2);
my_f = a * (x.^b);

step 2 "objective function"

Minimize the root of the sum of squared differences between y and f(x). In another m-file:

function objective = f_objective(parameters)
global y
objective = sqrt(sum((y - f_x(parameters)).^2));

step 3 "implement fminsearch"

In the command window:

global x y
x = [1 2 3 4 5];
y = [2 8 18 32 50];
parameters0 = [2, 1];
options = optimset('LargeScale', 'off', 'MaxIter', 2500, ...
    'MaxFunEvals', 3500, 'Display', 'iter', 'TolFun', 1e-40, 'TolX', 1e-40);
[x, feval] = fminsearch(@(parameters) f_objective(parameters), parameters0, options)

a = x(1);
b = x(2);

If you try to run the model again, you may get an error, because the fminsearch output x overwrites the global data vector x. To avoid this error, run step 3 from a fresh script (m-file).
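For readers who want to check the result outside MATLAB, here is a rough Python equivalent of Example 1 using scipy.optimize.minimize with the Nelder-Mead simplex method (the same family of algorithm fminsearch uses). The variable names mirror the MATLAB code and the tolerance values are illustrative, not prescriptive.

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 8.0, 18.0, 32.0, 50.0])  # observed data, y = 2 * x^2

def f_x(parameters):
    a, b = parameters
    return a * x**b

def f_objective(parameters):
    # root of the sum of squared differences between y and f(x)
    return np.sqrt(np.sum((y - f_x(parameters))**2))

parameters0 = [2.0, 1.0]
result = minimize(f_objective, parameters0, method='Nelder-Mead',
                  options={'maxiter': 2500, 'maxfev': 3500,
                           'xatol': 1e-10, 'fatol': 1e-10})
a, b = result.x  # both parameters should approach 2
```

Because the data are exactly y = 2*x^2, both fitted parameters converge to 2.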

Example 2
f1(x) = a * (x^b)
f2(x) = (b*exp(a)) * x
x = [1 2 3 4 5];
y = [2 8 18 32 50]; % y = f1(x) or y = f2(x)

Find the values of "a" and "b" that minimize the difference.

In this case, we have 2 equations. The first approach is to solve them independently, as in Example 1. The approach I will develop is to add an extra argument to the objective function that selects which f(x) to fit (user defined).

step 1 "f(x) function"

Write each function in its own m-file:

function my_f1 = f1_x(params)
global x
a = params(1);
b = params(2);
my_f1 = a * (x.^b);

In a different m-file write:

function my_f2 = f2_x(params)
global x
a = params(1);
b = params(2);
my_f2 = (b * exp(a)) * x;

step 2 "objective function"

Minimize the root of the sum of squared differences between y and f(x). Here the objective function takes two arguments and lets the user choose which function to fit.

function objective = f_objective(params, model)
global y
switch model
    case 'first'
        objective = sqrt(sum((y - f1_x(params)).^2));
    case 'second'
        objective = sqrt(sum((y - f2_x(params)).^2));
end

step 3 "implement fminsearch"

In a different m-file write:

global x y
x = [1 2 3 4 5];
y = [2 8 18 32 50];
parameters0 = [2, 1];
model = 'second';
options = optimset('LargeScale', 'off', 'MaxIter', 2500, ...
    'MaxFunEvals', 3500, 'Display', 'iter', 'TolFun', 1e-40, 'TolX', 1e-40);
[x, feval] = fminsearch(@(params) f_objective(params, model), parameters0, options)
a = x(1);
b = x(2);

Notes: The expression @(params) defines the values to be searched over in the minimization. As you can see, all other parameters are fixed before fminsearch is called. I recommend implementing everything in m-files, and the m-files you create must be in the current folder. Other examples can be found in the function documentation (in the command window type >> doc fminsearch).
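The model-selection trick of Example 2 can be sketched the same way in Python, with a dictionary playing the role of the MATLAB switch statement. This is only an illustration of the idea, not part of the original code; note that f2 cannot fit the quadratic data exactly, so the fit is only a least-squares compromise.

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 8.0, 18.0, 32.0, 50.0])

def f1_x(params):
    a, b = params
    return a * x**b

def f2_x(params):
    a, b = params
    return (b * np.exp(a)) * x

def f_objective(params, model):
    # dictionary dispatch instead of MATLAB's switch/case
    models = {'first': f1_x, 'second': f2_x}
    return np.sqrt(np.sum((y - models[model](params))**2))

model = 'second'
result = minimize(lambda p: f_objective(p, model), [2.0, 1.0],
                  method='Nelder-Mead')
a, b = result.x
```

For model = 'second' the minimizer is not unique (only the product b*exp(a) is identified), but the residual converges to the best linear fit through the origin.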

lsqcurvefit

First, read the documentation of this function. In the command window type:

>> doc lsqcurvefit


Now, let's do several examples. I'm going to continue with the example provided in the MATLAB documentation.

Example 1
x1 = [0 1 2 3 4 5];
f(x1) = [0 6 20 42 72 110];
f(x1) = a*x1^2 + b*x1 + x1^c;

What are the values of "a", "b" and "c" that minimize the squared difference? The first thing to do is choose the right algorithm. For this purpose, go to the Optimization Decision Table in the MATLAB help documentation. Because the objective function minimizes a sum of squared differences, a good starting point is lsqcurvefit.

step 1 "f(x) function"

Write the function in an m-file:

function my_f = f_1(params, x1)
a = params(1);
b = params(2);
c = params(3);
my_f = a * x1.^2 + b * x1 + x1.^c;

step 2 "implement lsqcurvefit"

Write in an m-file:

global x1 Y
x1 = [0 1 2 3 4 5];
Y = [0 6 20 42 72 110];
params0 = [2 1 1.5];
[x, resnorm, residual, exitflag] = lsqcurvefit(@f_1, params0, x1, Y);
a = x(1);
b = x(2);
c = x(3);
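A hedged SciPy sketch of the same fit: curve_fit plays a role similar to lsqcurvefit, except that it expects the model as f(xdata, param1, param2, ...) rather than f(params, xdata). The data admit an exact fit, so the test below checks the predictions rather than individual parameters (which are not unique here, since a*x1^2 and x1^c overlap when c = 2).

```python
import numpy as np
from scipy.optimize import curve_fit

x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([0.0, 6.0, 20.0, 42.0, 72.0, 110.0])

def f_1(x1, a, b, c):
    # curve_fit passes xdata first, then the unpacked parameters
    return a * x1**2 + b * x1 + x1**c

params0 = [2.0, 1.0, 1.5]
popt, pcov = curve_fit(f_1, x1, Y, p0=params0)
a, b, c = popt
```

The fitted curve should reproduce Y essentially exactly.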

Example 2
x1 = [0 1 2 3 4 5];
x2 = [6 7 8 9 10 11];
f(x1,x2) = [36 66 108 162 228 306];
f(x1,x2) = a*x1^2 + b*x1*x2 + x2^c;

lsqcurvefit applies here too. However, the function inputs are xdata and ydata, and here there are two x vectors (x1 and x2). To solve this, organize the x data as a matrix.

step 1 "f(x) function"

Write the function in an m-file:

function my_f = f_1(params, x)
a = params(1);
b = params(2);
c = params(3);
my_f = a * x(1,:).^2 + b * x(1,:).*x(2,:) + x(2,:).^c;

step 2 "implement lsqcurvefit"

Write in an m-file:

global x1 x2 Y
x1 = [0 1 2 3 4 5];
x2 = [6 7 8 9 10 11];
x_data = [x1; x2];
Y = [36 66 108 162 228 306];
params0 = [2 1 1.5];
[x, resnorm, residual, exitflag] = lsqcurvefit(@f_1, params0, x_data, Y)
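The matrix trick carries over to SciPy as well: curve_fit passes the xdata argument through to the model unchanged, so the two input vectors can be stacked as rows of one array. A minimal sketch, again checking the predictions because the exact-fit parameters need not be unique:

```python
import numpy as np
from scipy.optimize import curve_fit

x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([6.0, 7.0, 8.0, 9.0, 10.0, 11.0])
x_data = np.vstack([x1, x2])  # pack both inputs as rows of one matrix
Y = np.array([36.0, 66.0, 108.0, 162.0, 228.0, 306.0])

def f_1(x, a, b, c):
    # row 0 is x1, row 1 is x2, mirroring the MATLAB x(1,:) / x(2,:)
    return a * x[0]**2 + b * x[0] * x[1] + x[1]**c

popt, pcov = curve_fit(f_1, x_data, Y, p0=[2.0, 1.0, 1.5])
```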

fmincon

First, read the documentation of this function. In the command window type:

>> doc fmincon

Example 1


Max 5 - (x(1) - 2)^2 - 2*(x(2) - 1)^2
constraint: x(1) + 4*x(2) = 3

First, identify the algorithm to use. The objective function is smooth and nonlinear and the constraint is linear. From the optimization decision table, the appropriate algorithm is fmincon.

step 1 "objective function"

Write in an m-file:

function my_f = f_1(params)
x1 = params(1);
x2 = params(2);
my_f = -(5 - (x1 - 2)^2 - 2 * (x2 - 1)^2); % minus sign because MATLAB always minimizes

step 2 "implement fmincon"

A = [];
b = [];
Ae = [1 4];
be = 3;
lb = [];
ub = [];
nonlcon = [];
params0 = [1 1];
options = optimset('Display', 'iter');
[params, fval, exitflag, output, lambda, grad, hessian] = fmincon(@(params) f_1(params), params0, A, b, Ae, be, lb, ub, nonlcon, options);
x1 = params(1);
x2 = params(2);
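A sketch of the same constrained problem in Python, using scipy.optimize.minimize with the SLSQP method; the linear equality x(1) + 4*x(2) = 3 is supplied as a constraint dictionary. Working the Lagrange conditions by hand gives the maximizer x1 = 5/3, x2 = 1/3 with maximum value 4, which the test checks.

```python
from scipy.optimize import minimize

def f_1(params):
    x1, x2 = params
    # minus sign: we maximize by minimizing the negative
    return -(5 - (x1 - 2)**2 - 2 * (x2 - 1)**2)

cons = {'type': 'eq', 'fun': lambda p: p[0] + 4 * p[1] - 3}
res = minimize(f_1, [1.0, 1.0], method='SLSQP', constraints=cons)
x1, x2 = res.x
```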

Example 2
f(x) = -(a - (x - 2).^b)

x = [4 5 6 7 8];
Y = [0 1 2 3 4];

The function is smooth and nonlinear and we want to minimize the least-squares error. In this case we can use fminunc, fminsearch, lsqcurvefit, or lsqnonlin. I will implement lsqcurvefit again.

step 1 "f(x) function"

function my_f = f_1(params, x)
a = params(1);
b = params(2);
my_f = -(a - (x - 2).^b);

step 2 "implement lsqcurvefit"

X = [4 5 6 7 8];
Y = [0 1 2 3 4];
params0 = [0 0];
lb = [];
ub = [];
options = optimset('Display', 'iter');
[params, resnorm, residual, exitflag] = lsqcurvefit(@f_1, params0, X, Y, lb, ub, options);
a = params(1);
b = params(2);
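The unconstrained fit translates directly to SciPy's curve_fit. The data satisfy the model exactly at a = 2, b = 1 (since -(2 - (x-2)^1) = x - 4), so the fitted curve should match Y; the test checks the predictions.

```python
import numpy as np
from scipy.optimize import curve_fit

X = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
Y = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

def f_1(x, a, b):
    return -(a - (x - 2)**b)

popt, _ = curve_fit(f_1, X, Y, p0=[0.0, 0.0])
a, b = popt
```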

step 3 "implement fmincon"

Let's suppose we have constraints equal to:
a*X + b >= 2
(a+b)*X = 3
0 <= a <= 3
0 <= b <= 4
In this case it is appropriate to use fmincon. First, we need to define the objective function.

step 3.1 "objective function"

function objective = f_objective(params)
global X Y
objective = sqrt(sum((f_1(params, X) - Y).^2)); % minimize the least-squares error


step 3.2 organize the constraints into standard form (the unknowns are a and b)

-X*a - b <= -2
X*a + X*b = 3
0 <= a <= 3
0 <= b <= 4

step 3.3 "implement fmincon"

global X Y
X = [4 5 6 7 8];
Y = [0 1 2 3 4];
params0 = [1 1];
A = [-X' -ones(numel(X), 1)]; % a*X + b >= 2 at every data point
b = -2 * ones(numel(X), 1);
Aeq = [X(1) X(1)];            % (a+b)*X = 3 imposed at X(1); it cannot hold at every X at once
beq = 3;
lb = [0, 0];
ub = [3, 4];
nonlcon = [];
options = optimset('LargeScale', 'off', 'MaxIter', 2500, ...
    'MaxFunEvals', 3500, 'Display', 'iter', 'TolFun', 1e-40, 'TolX', 1e-40);
[params, fval, exitflag, output, lambda, grad, hessian] = fmincon(@(params) f_objective(params), params0, A, b, Aeq, beq, lb, ub, nonlcon, options);
a_ = params(1);
b_ = params(2);
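The constrained fit can be sketched in Python with scipy.optimize.minimize and SLSQP, where bounds and constraint dictionaries replace lb/ub and A/b/Aeq/beq. One assumption to flag loudly: the equality (a+b)*X = 3 cannot hold at every element of X simultaneously (it would force a+b to take several different values), so this sketch, like the MATLAB code above, imposes it at the first data point X = 4 only; the inequality a*X + b >= 2 is applied elementwise.

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
Y = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

def f_1(params, x):
    a, b = params
    return -(a - (x - 2)**b)

def f_objective(params):
    # root of the sum of squared residuals
    return np.sqrt(np.sum((f_1(params, X) - Y)**2))

cons = [
    # a*X + b >= 2 at every data point (vector-valued inequality)
    {'type': 'ineq', 'fun': lambda p: p[0] * X + p[1] - 2},
    # assumption: (a+b)*X = 3 imposed at X = 4 only
    {'type': 'eq', 'fun': lambda p: (p[0] + p[1]) * 4 - 3},
]
res = minimize(f_objective, [1.0, 1.0], method='SLSQP',
               bounds=[(0, 3), (0, 4)], constraints=cons)
a_, b_ = res.x
```

The solver should return a feasible point (a + b = 0.75, all inequalities satisfied), although the constrained fit is necessarily much worse than the unconstrained one.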