MTAEA – Unconstrained Optimization in Euclidean Space

Scott McCracken
School of Economics, Australian National University
February 17, 2010



Some Notation.

- Given any two vectors x = (x_1, ..., x_n) and y = (y_1, ..., y_n) in R^n, we write

    x = y  if x_i = y_i for i = 1, ..., n;
    x ≥ y  if x_i ≥ y_i for i = 1, ..., n;
    x > y  if x ≥ y and x ≠ y;
    x ≫ y  if x_i > y_i for i = 1, ..., n.

- Note that
    - x ≥ y does not exclude the possibility that x = y, and
    - for n > 1, the vectors x and y may not be comparable under any of the categories above. For example, the vectors x = (2,1) and y = (1,2) in R^2 do not satisfy x ≥ y or y ≥ x.
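- These componentwise relations are easy to check numerically. The following is a minimal Python sketch (the helper names are ours, for illustration only); it also confirms that x = (2,1) and y = (1,2) are not comparable under ≥.

    import numpy as np

    def vec_eq(x, y):   # x = y
        return bool(np.all(x == y))

    def vec_geq(x, y):  # x >= y: every component at least as large
        return bool(np.all(x >= y))

    def vec_gt(x, y):   # x > y: x >= y and x != y
        return vec_geq(x, y) and not vec_eq(x, y)

    def vec_gg(x, y):   # x >> y: every component strictly larger
        return bool(np.all(x > y))

    x, y = np.array([2, 1]), np.array([1, 2])
    print(vec_geq(x, y), vec_geq(y, x))   # False False: x and y are not comparable under >=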


- The nonnegative and positive orthants of R^n, denoted by R^n_+ and R^n_++ respectively, are defined as

    R^n_+ = {x ∈ R^n | x ≥ 0}   and   R^n_++ = {x ∈ R^n | x ≫ 0}.


Optimization Problems.

- First we introduce the notation used to represent abstract optimization problems and their solutions.

Definition. An optimization problem (in R^n) is one in which the values of a given function f : R^n → R are to be maximized or minimized over a given set D ⊆ R^n. The function f is called the objective function, and the set D the constraint set or feasible set.

- Notationally, we represent a maximization problem by

    Maximize f(x) subject to x ∈ D,

  or, more compactly,

    max{f(x) | x ∈ D}.


- Similarly, for a minimization problem, we write

    Minimize f(x) subject to x ∈ D,

  or

    min{f(x) | x ∈ D}.

- A solution to a maximization problem is a point x ∈ D such that

    f(x) ≥ f(y) for all y ∈ D.

  We say that f attains a maximum on D at x, and refer to x as a maximizer of f on D.

- Similarly, a solution to a minimization problem is a point x ∈ D such that

    f(x) ≤ f(y) for all y ∈ D.

  We say that f attains a minimum on D at x, and refer to x as a minimizer of f on D.


- The set of attainable values of f on D, f(D), is defined by

    f(D) = {w ∈ R | f(x) = w for some x ∈ D}.

  We also refer to f(D) as the image of D under f.

- The following examples show that a given optimization problem may have no solution, and that even when a solution exists it need not be unique.

Example.
(1) Let D = R_+ and f(x) = x for all x ∈ D. Then f(D) = R_+ and sup f(D) = +∞, so the problem max{f(x) | x ∈ D} has no solution.
(2) Let D = [0,1] and f(x) = x(1 − x) for all x ∈ D. Then the problem of maximizing f on D has exactly one solution, x = 1/2.
(3) Let D = [−1,1] and f(x) = x^2 for all x ∈ D. Then the problem of maximizing f on D has two solutions, x = −1 and x = 1.


- Now we introduce some terminology to refer to the sets of maximizers and minimizers.

Definition. The set of all maximizers of f on D, denoted by arg max{f(x) | x ∈ D}, is the set given by

    arg max{f(x) | x ∈ D} = {x ∈ D | f(x) ≥ f(y) for all y ∈ D}.

Similarly, the set of all minimizers of f on D, denoted by arg min{f(x) | x ∈ D}, is the set given by

    arg min{f(x) | x ∈ D} = {x ∈ D | f(x) ≤ f(y) for all y ∈ D}.

Example. In part (1) of the previous example, the set arg max{f(x) | x ∈ D} is ∅. In part (2) this set is {1/2}, and in part (3) this set is {−1, 1}.
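- When the feasible set D is finite (for instance, a grid used in a numerical experiment), arg max and arg min can be computed by direct enumeration. A minimal Python sketch, with a helper name of our own choosing, applied to a discretized version of part (3):

    def argmax_set(f, D):
        """Set of maximizers of f on a finite feasible set D."""
        best = max(f(x) for x in D)
        return {x for x in D if f(x) == best}

    # Part (3) of the example, with [-1, 1] replaced by a finite grid that contains -1 and 1.
    D = [k / 10 for k in range(-10, 11)]
    print(argmax_set(lambda x: x**2, D))   # {-1.0, 1.0}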


- The following result allows us to apply all the results about maxima to minima.

Theorem. Let −f denote the function on D given by (−f)(x) = −f(x) for all x ∈ D. Then
(i) x is a maximizer of f on D iff x is a minimizer of −f on D;
(ii) x is a minimizer of f on D iff x is a maximizer of −f on D.

- The following theorem says that a strictly increasing transformation of a function does not change the optimum points.

Theorem. Let g be a strictly increasing function. Then
(i) x is a maximizer of f on D iff x is a maximizer of g ∘ f on D;
(ii) x is a minimizer of f on D iff x is a minimizer of g ∘ f on D.
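- As a quick numerical sanity check of the second theorem (not a proof), take g = exp, which is strictly increasing, and compare the maximizers of f and of g ∘ f on a finite grid:

    import math

    def argmax_set(f, D):                         # same helper as in the previous sketch
        best = max(f(x) for x in D)
        return {x for x in D if f(x) == best}

    D = [k / 100 for k in range(0, 101)]          # grid on [0, 1]
    f = lambda x: x * (1 - x)                     # maximized at x = 1/2
    g_of_f = lambda x: math.exp(f(x))             # strictly increasing transformation of f

    print(argmax_set(f, D))                       # {0.5}
    print(argmax_set(f, D) == argmax_set(g_of_f, D))   # True: the maximizers coincide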


Example.
(1) Consider the model in consumer theory in which a single agent consumes n commodities in nonnegative quantities.
- The agent's utility from consuming x_i ≥ 0 units of commodity i (i = 1, ..., n) is given by u(x_1, ..., x_n), where u : R^n → R is the agent's utility function.
- The agent has an income I ≥ 0, and faces the price vector p = (p_1, ..., p_n), where p_i ≥ 0 denotes the price per unit of the i-th commodity.
- The agent's budget set – the set of feasible or affordable consumption bundles, given income and prices – denoted by B(p, I), is given by

    B(p, I) = {x ∈ R^n_+ | p · x ≤ I}.


- The agent's objective is to maximize the level of utility over the set of affordable commodity bundles, i.e. to solve

    Maximize u(x) subject to x ∈ B(p, I).

(2) An agent's expenditure minimization problem is dual to the utility maximization problem.
- The agent's problem is to minimize the amount of income needed to attain a utility level of at least ū, where ū is some fixed utility level, given a price vector p ∈ R^n_+.
- In this problem,

    X(ū) = {x ∈ R^n_+ | u(x) ≥ ū}

  is the constraint set, and the problem to solve is

    Minimize p · x subject to x ∈ X(ū).


The Weierstrass Theorem.

- We would like to know under what conditions on f and D we can guarantee the existence of optima.
- The idea behind the Weierstrass theorem is as follows.
    - If the constraint set D is a compact set and the function f is continuous, then the image f(D) is also a compact set.
    - Since a nonempty compact (closed and bounded) subset of R contains a maximum and a minimum element, f must attain a maximum and a minimum on D.

Theorem (Weierstrass Theorem). Let D ⊆ R^n be compact and let f : D → R be a continuous function on D. Then f attains a maximum and a minimum on D; that is, there exist points z_1, z_2 ∈ D such that f(z_1) ≥ f(x) ≥ f(z_2) for all x ∈ D.

- The Weierstrass theorem only provides sufficient conditions for the existence of optima.
- As we shall see in the following examples, we cannot say anything in general about what happens if these conditions are not satisfied.


Example.
(1) Let D = R and f(x) = x^3 for all x ∈ R. Then f is continuous, but D is not compact (it is closed but not bounded). Since f(D) = R, we can see that f attains neither a maximum nor a minimum on D.
(2) Let D = (0,1) and f(x) = x for all x ∈ (0,1). Then f is continuous, but D is not compact (it is bounded but not closed). The set f(D) is the open interval (0,1) and so, again, f attains neither a maximum nor a minimum on D.
(3) Let D = [−1,1] and let f be given by

    f(x) = 0, if x = −1 or x = 1;
    f(x) = x, if −1 < x < 1.

Here D is compact, but f is not continuous at the points −1 and 1. In this case, f(D) is the open interval (−1,1). Thus f fails to attain either a maximum or a minimum on D.


(4) Let D = R_++ and let f : D → R be defined by

    f(x) = 1, if x ∈ Q;
    f(x) = 0, otherwise.

Then D is not compact, and f is discontinuous at every point in R_++. But f does attain a maximum (at every rational number) and a minimum (at every irrational number).

- We now apply the Weierstrass theorem to the consumer's utility maximization problem.

Example. Again, consider the utility maximization problem

    Maximize u(x) subject to x ∈ B(p, I) = {x ∈ R^n_+ | p · x ≤ I}.

- Assume that the utility function u : R^n_+ → R is continuous on its domain.


- By the Weierstrass theorem, a solution to the utility maximization problem will always exist provided the budget set B(p, I) is compact.
- We "show" that compactness holds if p ≫ 0.
- Notice that even if the agent spent all the income I on commodity j, consumption of this commodity cannot exceed I/p_j. Let

    ψ = max{I/p_1, ..., I/p_n}.

  Then, for any x ∈ B(p, I), x_i ≤ ψ for all i ∈ {1, ..., n}, and we have 0 ≤ x ≤ (ψ, ..., ψ). Thus B(p, I) is bounded.
- A longer argument shows that B(p, I) is also closed (try showing this!). Thus the budget set is compact, and so a solution to the utility maximization problem exists.
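- A small numerical illustration of the bound ψ, using hypothetical data of our own (two goods, p = (2, 5), I = 10, and a square-root utility function that is not part of the slides): we discretize [0, ψ]^2, keep the affordable bundles, and maximize by enumeration. Because the discretized budget set is finite, a maximizer trivially exists, mirroring the role compactness plays in the continuous problem.

    import numpy as np

    p = np.array([2.0, 5.0])                      # hypothetical prices, p >> 0
    I = 10.0                                      # hypothetical income
    psi = float(np.max(I / p))                    # psi = max{I/p_1, ..., I/p_n} = 5.0 here

    def u(x1, x2):                                # a continuous utility function, chosen for illustration
        return np.sqrt(x1) + np.sqrt(x2)

    grid = np.linspace(0.0, psi, 201)             # discretize [0, psi] for each good
    budget = [(x1, x2) for x1 in grid for x2 in grid
              if p[0] * x1 + p[1] * x2 <= I]      # affordable bundles: p . x <= I

    best = max(budget, key=lambda b: u(*b))
    print(psi, best, u(*best))                    # bound, approximate maximizer, utility value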


Unconstrained Optima.

- We now look at identifying necessary conditions that the derivative of f must satisfy at an optimum.
- We begin by looking at what are called unconstrained or interior optima. That is, optima at which the constraints are not binding.

Definition. Let f be a function on D ⊆ R^n.
- A point x ∈ D is an unconstrained (global) maximizer of f on D if x ∈ int D and f(x) ≥ f(y) for all y ∈ D.
- A point x ∈ D is a local maximizer of f on D if there is some r > 0 such that f(x) ≥ f(y) for all y ∈ D ∩ B_r(x).
- A point x ∈ D is an unconstrained local maximizer of f on D if there is some r > 0 such that B_r(x) ⊆ D and f(x) ≥ f(y) for all y ∈ B_r(x).

- Similar definitions apply to local and global minimizers of f on D.


Figure: The points x, y, and z are local maximizers of f on D; the points y and z are unconstrained local maximizers of f on D; while the point z is a (global) unconstrained maximizer of f on D.


First Order Conditions.

- Our first result states that the derivative of f must be zero at every unconstrained local maximum or minimum.
- The result is easy to see in the one-dimensional case. Suppose x is, say, a local maximum and f′(x) ≠ 0.
    - If f′(x) > 0, it would be possible to increase the value of f by moving a small amount to the right.
    - If f′(x) < 0, the value of f could be increased by moving slightly to the left.
    - At an unconstrained maximizer, movement in either direction is feasible.
    - But then the definition of a local maximum would be violated.

Theorem. Let f be differentiable on D ⊆ R^n and let x ∈ int D be a local maximizer or local minimizer of f on D. Then Df(x) = 0.


- This theorem only provides a necessary condition for an unconstrained local optimum. It is not sufficient.
- The theorem does not say that if Df(x) = 0 for some x in the interior of the feasible set, then x must be an unconstrained local optimizer. The following example illustrates this fact.

Example. Let D = R, and let f : R → R be given by f(x) = x^3 for all x ∈ R.
- Then we have f′(0) = 0, but 0 is neither a local maximizer nor a local minimizer of f on R.
- Any open ball about 0 must contain a point x > 0 and a point z < 0. At x > 0 we have f(x) = x^3 > 0 = f(0), while at z < 0 we have f(z) = z^3 < 0 = f(0).


Definition. We call a point x ∈ D that satisfies the first order condition Df(x) = 0 a critical point of f on D.

- So the previous theorem says that every unconstrained local optimizer must be a critical point.
- But as we saw in our example, there may exist critical points that are neither local maximizers nor local minimizers.
- The first order conditions for unconstrained local optima do not distinguish between maxima and minima.
- To do so we need to look at the behaviour of the second derivative, i.e. D^2 f (the Hessian of f).


Second Order Conditions.

- Just as we look at a linear approximation when we want to say something about the slope of a function, we look at a quadratic approximation when we want to say something about the curvature of a function.
- A (very) sketch(y) proof of the second order conditions we will see later is as follows.
- Let x be a critical point of f. We can represent the value of a C^2 function, at a point x + h close to x, as the sum of the Taylor polynomial of order two about x and the remainder term R(h):

    f(x + h) = f(x) + Df(x)h + (1/2) h^T D^2 f(x) h + R(h).

  As h goes to zero, the remainder term goes to 0 very quickly, so we will ignore this negligibly small term.


- Now, since x is a critical point, the derivative of f at x, i.e. Df(x), is zero. Thus

    f(x + h) − f(x) ≈ (1/2) h^T D^2 f(x) h

  for small h.
- Thus if h^T D^2 f(x) h < 0 for all h ≠ 0, it follows that for all small h ≠ 0 we have f(x + h) < f(x). Hence x is a (strict) local maximizer of f.
- We say that the n × n Hessian matrix D^2 f(x) is negative definite if y^T D^2 f(x) y < 0 for all y ∈ R^n with y ≠ 0.
- Thus the definiteness, or otherwise, of D^2 f(x) will provide the conditions we are after.
- So before we present the second order conditions, we need to look at the definiteness of general quadratic forms.


Definiteness of Quadratic Forms.

- We now take a detour to look at the definiteness of quadratic forms.

Definition. Let A be an n × n symmetric matrix. Then A is said to be
- positive definite if x^T A x > 0 for all x in R^n, x ≠ 0;
- positive semidefinite if x^T A x ≥ 0 for all x in R^n;
- negative definite if x^T A x < 0 for all x in R^n, x ≠ 0;
- negative semidefinite if x^T A x ≤ 0 for all x in R^n;
- indefinite if x^T A x > 0 and y^T A y < 0 for some x and y in R^n.

- Note that a matrix that is positive definite (negative definite) is always positive semidefinite (negative semidefinite).
- Also, notice that A is positive (semi)definite iff the matrix −A is negative (semi)definite.


Example.
(1) The quadratic form defined by the matrix

    A = [1 0; 0 1]

is positive definite, since for any (x_1, x_2) ∈ R^2 we have x^T A x = x_1^2 + x_2^2, and this is positive whenever x ≠ 0.
(2) Consider the quadratic form defined by

    A = [1 0; 0 0].

For any (x_1, x_2) ∈ R^2, we have x^T A x = x_1^2, which can be zero even if x ≠ 0. Hence A is not positive definite. However, it is true that we always have x^T A x ≥ 0, so A is positive semidefinite.


(3) As an example of an indefinite quadratic form, consider

    A = [0 1; 1 0].

For any (x_1, x_2) ∈ R^2, we have x^T A x = 2 x_1 x_2. When x = (1,1), x^T A x = 2 > 0, so A is not negative semidefinite. But when x = (−1,1), x^T A x = −2 < 0, so A is not positive semidefinite either.
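- These classifications can also be checked numerically, either by evaluating x^T A x at suitable vectors or, for a symmetric matrix, by looking at the signs of its eigenvalues (a standard equivalent test, though not the route taken in these slides). A short sketch:

    import numpy as np

    def q(A, x):
        """Evaluate the quadratic form x^T A x."""
        return float(x @ A @ x)

    A1 = np.array([[1.0, 0.0], [0.0, 1.0]])       # example (1): positive definite
    A2 = np.array([[1.0, 0.0], [0.0, 0.0]])       # example (2): positive semidefinite only
    A3 = np.array([[0.0, 1.0], [1.0, 0.0]])       # example (3): indefinite

    print(q(A2, np.array([0.0, 1.0])))            # 0.0 at x = (0,1) != 0, so A2 is not positive definite
    print(q(A3, np.array([1.0, 1.0])),            # 2.0 > 0 ...
          q(A3, np.array([-1.0, 1.0])))           # ... and -2.0 < 0, so A3 is indefinite

    print(np.linalg.eigvalsh(A1), np.linalg.eigvalsh(A3))   # eigenvalues: all positive vs. mixed signs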

- But, in practice, how do we determine the definiteness of a matrix?
- To answer this question, we need some more terminology.


Definition. Let A be an n × n matrix.
- The k-th order leading principal submatrix of A, denoted by A_k, is the submatrix obtained by retaining only the first k rows and columns of A.
- The k-th order leading principal minor of A, denoted by |A_k|, is the determinant of the k-th order leading principal submatrix of A.

Theorem. Let A be an n × n symmetric matrix. Then
(i) A is positive definite iff |A_k| > 0 for all k ∈ {1, ..., n};
(ii) A is negative definite iff (−1)^k |A_k| > 0 for all k ∈ {1, ..., n}.
Furthermore, a positive (negative) semidefinite matrix A is positive (negative) definite iff |A| ≠ 0.


- So a symmetric matrix is positive definite iff all its leading principal minors are positive.
- A symmetric matrix is negative definite iff its leading principal minors alternate in sign, starting with negative, i.e. |A_1| < 0, |A_2| > 0, |A_3| < 0, etc.
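- A minimal Python sketch of this test (our own helper functions; it assumes A is symmetric and makes no attempt at numerical robustness):

    import numpy as np

    def leading_minors(A):
        """[|A_1|, |A_2|, ..., |A_n|] for a square matrix A."""
        n = A.shape[0]
        return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

    def classify(A):
        """'positive definite', 'negative definite', or 'neither', via leading principal minors."""
        m = leading_minors(A)
        if all(d > 0 for d in m):
            return "positive definite"
        if all((-1) ** k * d > 0 for k, d in enumerate(m, start=1)):
            return "negative definite"
        return "neither"

    A = np.array([[-2.0, 1.0], [1.0, -2.0]])      # |A_1| = -2 < 0, |A_2| = 3 > 0
    print(leading_minors(A), classify(A))         # negative definite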

Definition. Let A be an n × n matrix.
- A k-th order principal submatrix of A is a submatrix obtained by retaining the same k rows and columns of A.
- A k-th order principal minor of A is the determinant of a k-th order principal submatrix of A.


Example. Consider the matrix

    A = [1 2 3; 4 5 6; 7 8 9].

The principal submatrices of order 1 are (1), (5), and (9). The first of these is the leading principal submatrix A_1. The principal submatrices of order two are

    [1 2; 4 5],   [1 3; 7 9],   and   [5 6; 8 9],

with the first being the leading second order principal submatrix A_2. The third order (leading) principal submatrix is A_3 = A itself.


Theorem. Let A be an n × n symmetric matrix. Then
(i) A is positive semidefinite iff every principal minor of A is nonnegative;
(ii) A is negative semidefinite iff every principal minor of odd order is nonpositive and every principal minor of even order is nonnegative.

- In our example above, there were 1 + 3 + 3 = 7 principal submatrices. In general there will be

    ∑_{k=1}^{n} (n choose k) = (n choose 1) + (n choose 2) + ... + (n choose n) = 2^n − 1

  principal submatrices of an n × n matrix.
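- This test requires all 2^n − 1 principal minors, which can be enumerated directly by looping over the nonempty subsets of row/column indices. A sketch in Python (our own helper names; floating-point determinants mean the sign checks are only approximate):

    from itertools import combinations
    import numpy as np

    def principal_minors(A):
        """Yield (order k, minor) for every principal submatrix of A."""
        n = A.shape[0]
        for k in range(1, n + 1):
            for rows in combinations(range(n), k):
                yield k, np.linalg.det(A[np.ix_(rows, rows)])

    def is_psd(A):
        return all(m >= 0 for _, m in principal_minors(A))

    def is_nsd(A):
        return all((m <= 0 if k % 2 == 1 else m >= 0) for k, m in principal_minors(A))

    A = np.array([[1.0, 0.0], [0.0, 0.0]])        # the semidefinite example from earlier
    print(sum(1 for _ in principal_minors(A)))    # 2^2 - 1 = 3 principal minors
    print(is_psd(A), is_nsd(A))                   # True False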


Second Order Conditions.

- Before we state the second order conditions, another definition.

Definition. A point x ∈ D is a strict local maximizer of f on D if there is some r > 0 such that f(x) > f(y) for all y ∈ D ∩ B_r(x), y ≠ x.

Theorem. Let f be a C^2 function on D ⊆ R^n and let x ∈ int D.
(i) If f has a local maximum at x, then D^2 f(x) is negative semidefinite.
(ii) If f has a local minimum at x, then D^2 f(x) is positive semidefinite.
(iii) If Df(x) = 0 and D^2 f(x) is negative definite, then x is a strict local maximizer of f on D.
(iv) If Df(x) = 0 and D^2 f(x) is positive definite, then x is a strict local minimizer of f on D.
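- A sketch of how the theorem is applied, using a hypothetical function f(x, y) = x^3 − 3x + y^2 of our own choosing. Its critical points are (1, 0) and (−1, 0), with Hessian [[6x, 0], [0, 2]]; the sketch checks definiteness via eigenvalue signs, which for a symmetric matrix is equivalent to the definitions above.

    import numpy as np

    def hessian(x, y):
        # For f(x, y) = x**3 - 3*x + y**2: Df = (3x^2 - 3, 2y), D^2 f = [[6x, 0], [0, 2]].
        return np.array([[6.0 * x, 0.0], [0.0, 2.0]])

    def classify(H):
        eig = np.linalg.eigvalsh(H)
        if np.all(eig > 0):
            return "positive definite: strict local minimizer"
        if np.all(eig < 0):
            return "negative definite: strict local maximizer"
        if np.any(eig > 0) and np.any(eig < 0):
            return "indefinite: saddle point, not a local optimum"
        return "semidefinite: the test is inconclusive"

    for cp in [(1.0, 0.0), (-1.0, 0.0)]:          # the two points where Df = 0
        print(cp, classify(hessian(*cp)))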


- The first two parts of the theorem give necessary conditions which the second derivative D^2 f must satisfy at interior local maxima and minima.
- Parts (i) and (ii) of the theorem therefore allow us to rule out from contention as optima any interior critical points at which the Hessian is indefinite.
- Such points, which are maximizers of f in some directions and minimizers of f in other directions, are called saddle points.
- The second two parts give sufficient conditions on the second derivative that identify specific points as being local optimizers.
- However, the sufficient conditions, requiring definiteness at the optimal point, are different from the necessary conditions, which require only semidefiniteness.
- This can be problematic when we come to apply the theorem.


- Suppose that x is a critical point at which D^2 f(x) is positive semidefinite (but not positive definite).
- Part (iv) of the theorem does not allow us to conclude that x must be a local minimizer.
- Neither can we use part (ii) to rule out the possibility of x being a local minimizer, because part (ii) only requires D^2 f(x) to be positive semidefinite at a local minimizer.
- The next example shows that we cannot strengthen parts (i) and (ii) of the theorem to say
    - D^2 f must be negative (positive) definite at a local maximizer (minimizer) x.

Example. Let D = R, and let f : R → R and g : R → R be defined, respectively, by f(x) = x^4 and g(x) = −x^4 for all x ∈ R.
- Clearly, since x^4 ≥ 0 everywhere and f(0) = g(0) = 0, 0 is a global minimizer of f on D and a global maximizer of g on D.


- However, f′′(0) = g′′(0) = 0, so f′′(0) is positive semidefinite but not positive definite, while g′′(0) is negative semidefinite but not negative definite.

- Now we show by example that we cannot strengthen parts (iii) and (iv) of the theorem to say
    - If Df(x) = 0 and D^2 f(x) is negative (positive) semidefinite, then x is a local maximizer (minimizer) of f on D.

Example. Let D = R, and let f : R → R be given by f(x) = x^3 for all x ∈ R.
- Then f′(x) = 3x^2 and f′′(x) = 6x, so f′(0) = f′′(0) = 0, and so f′′(0) is both positive semidefinite and negative semidefinite (but not positive or negative definite).
- However, 0 is neither a local maximizer nor a local minimizer of f on D.


Using the First and Second Order Conditions.

- The first order condition on its own is of limited use in finding solutions to optimization problems, for two reasons.
    - The condition only applies to cases where the optimum occurs at an interior point, but in most applications some or all constraints will bind.
    - The condition is only a necessary one.
- Even when we combine the first order condition with the second order conditions, we do not gain a lot.
    - First, parts (i) and (ii) are only necessary conditions.
    - Second, although parts (iii) and (iv) give sufficient conditions, they are sufficient only for unconstrained local optima, not for global optima.
- At best, the second order conditions help determine if a point is a local optimum; they are of no use in determining whether a local optimum is a global optimum or not.
- Global optima may fail to exist even if several critical points, including local optima, exist.


Example. Let D = R and let f : R → R be given by f(x) = 2x^3 − 3x^2.
- First, f is a C^2 function on R.
- There are exactly two critical points, at x = 0 and x = 1.
- We find that f′′(0) = −6 < 0 while f′′(1) = 6 > 0.
- Thus the point 0 is a strict local maximizer of f on D, while 1 is a strict local minimizer of f on D.
- However, the first and second order conditions do not help determine whether these are global optima. Clearly they are not – there are no global optima in this problem, since lim_{x→+∞} f(x) = +∞ and lim_{x→−∞} f(x) = −∞.
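- The same calculations in a few lines of plain Python (no symbolic algebra; the derivatives are written out by hand):

    f   = lambda x: 2 * x**3 - 3 * x**2
    fp  = lambda x: 6 * x**2 - 6 * x              # f'(x) = 6x(x - 1): critical points at 0 and 1
    fpp = lambda x: 12 * x - 6                    # f''(x)

    for c in (0.0, 1.0):
        print(c, fp(c), fpp(c))                   # f'(c) = 0; f''(0) = -6 < 0, f''(1) = 6 > 0

    print(f(10.0), f(-10.0))                      # 1700.0 and -2300.0: f is unbounded above and below on R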


- On the other hand, in problems in which we know beforehand that a global optimum exists, we can sometimes use the first order conditions to compute the solution.
- For example, say we know there is an unconstrained solution.
- In this case the set of all points that meet the first order conditions must contain the optimum point, and the problem is reduced to finding the point that optimizes the objective function over this set.
- More generally, if we do not know that the optimum is in the interior of the set, we can modify the above procedure as long as the set of boundary points of the constraint set is small.
- The optimum must either occur on the boundary or in the interior of the constraint set. In the latter case the first order conditions must be satisfied.
- Thus, to identify the optimum, simply compare the optimum value of the objective function on the boundary with the value of the function at those points in the interior that satisfy the first order conditions.


Example. Consider the problem of maximizing f(x) = 4x^3 − 5x^2 + 2x over the unit interval [0,1].
- Since [0,1] is compact and f is continuous on this interval, the Weierstrass theorem tells us that f has a (global) maximum on this interval.
- Either the maximum occurs at one of the boundary points 0 or 1, or it is an unconstrained maximum.
- If it is an unconstrained maximum, it must meet the first order condition f′(x) = 12x^2 − 10x + 2 = 0. The only points satisfying this condition are x = 1/2 and x = 1/3.
- Evaluating f at the four points 0, 1/3, 1/2 and 1 shows that x = 1 is the point where f attains its maximum on [0,1]. Similarly we can show that x = 0 is the point where f is minimized on [0,1].
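- The comparison step can be written out directly: only the two boundary points and the two interior critical points need to be evaluated. A short sketch using exact fractions:

    from fractions import Fraction

    f = lambda x: 4 * x**3 - 5 * x**2 + 2 * x
    candidates = [Fraction(0), Fraction(1, 3), Fraction(1, 2), Fraction(1)]   # boundary and critical points

    for x in candidates:
        print(x, f(x))                            # f(0) = 0, f(1/3) = 7/27, f(1/2) = 1/4, f(1) = 1

    print(max(candidates, key=f), min(candidates, key=f))   # 1 (maximizer) and 0 (minimizer)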


- In most economic applications, the set of boundary points is large, and so carrying out the comparisons, as in the previous example, is a nontrivial task.
- For example, in the utility maximization problem, the entire set

    {x ∈ R^n | p · x = I, x ≥ 0}

  is part of the boundary of the budget set B(p, I).
- Still, the procedure used in the example indicates how global optima may be identified by combining knowledge of the existence of a solution with first order necessary conditions for local optima.
- As we saw, if a solution is known to exist, it can be identified by simply comparing the value of the objective function at points on the boundary and at points satisfying the first order conditions.


- The only role second order conditions can play is to cut down on the number of points at which the comparison has to be carried out.
- If, say, we are seeking a maximum of f, then all critical points that also satisfy the second order conditions for a strict local minimum can be ruled out.
- The most that second order conditions can do is help identify local optima.
