Pre Session Math 2009-09-16


    Presession Mathematics Seminar

    Áron Tóbiás

    Ph.D. Student

    Central European University, Department of Economics

    September 16, 2009

    Presession Mathematics Seminar 1 / 54

    Constrained Optimization


    General Formalization

    Consider a twice differentiable function f : ℝⁿ → ℝ. Unconstrained optimization, which we have exhaustively discussed, amounted to solving the following problem (s.t. stands for "subject to" or "such that"):

        f(x) → max / min
        s.t. x ∈ ℝⁿ.

    Now we are considering constrained optimization, which consists of solving the following problem:

        f(x) → max / min
        s.t. x ∈ S ⊆ ℝⁿ.

    S is called the constraint set pertaining to the optimization problem.

    General Formalization

    The constraint set is typically represented by means of some equalities and/or weak inequalities that x has to satisfy. The general form of a constrained optimization problem is the following:

        f(x) → max / min
        s.t.  g1(x) = b1, …, gK(x) = bK,
              h1(x) ≤ c1, …, hM(x) ≤ cM,

    where b1, …, bK and c1, …, cM are constants. That is, the constraint set is determined by K equalities and M weak inequalities. Formally:

        S = {x ∈ ℝⁿ : g1(x) = b1} ∩ … ∩ {x ∈ ℝⁿ : gK(x) = bK}
            ∩ {x ∈ ℝⁿ : h1(x) ≤ c1} ∩ … ∩ {x ∈ ℝⁿ : hM(x) ≤ cM}.

    We generally assume that the functions g1, …, gK and h1, …, hM are also twice continuously differentiable.
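    Since S is just an intersection of solution sets of equalities and weak inequalities, membership can be checked mechanically. A minimal sketch (the constraint functions below are made-up illustrations, not from the slides):

```python
def in_constraint_set(x, eqs, ineqs, tol=1e-9):
    """Return True iff x satisfies every g_k(x) = b_k and every h_m(x) <= c_m."""
    return (all(abs(g(x) - b) <= tol for g, b in eqs) and
            all(h(x) <= c + tol for h, c in ineqs))

# Hypothetical example: S = {x in R^2 : x1 + x2 = 1 and x1^2 + x2^2 <= 1}.
eqs = [(lambda x: x[0] + x[1], 1.0)]           # one equality constraint (K = 1)
ineqs = [(lambda x: x[0]**2 + x[1]**2, 1.0)]   # one weak inequality (M = 1)

print(in_constraint_set((1.0, 0.0), eqs, ineqs))  # True: both constraints hold
print(in_constraint_set((1.0, 1.0), eqs, ineqs))  # False: violates both
```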


    General Formalization

    Greater-than-or-equal-to constraints can also be incorporated in this framework. For example, if we require that h_m(x) ≥ c_m, we could as well require that −h_m(x) ≤ −c_m.

    The whole process of solving such a constrained optimization problem is alternatively called nonlinear programming. The function f to be maximized or minimized is the objective function.

    In the special case when the objective function and all of the constraints are linear, we have a linear programming problem. Such problems can be analyzed more efficiently using methods different from those we are going to discuss. This is beyond our scope.


    Kuhn Tucker Conditions

    How do we solve such a problem? For now, consider the case where there are only weak inequality constraints. Also, let's consider maximization only. This case is the most typical in economic analysis.

        f(x) → max
        s.t. h1(x) ≤ c1, …, hM(x) ≤ cM.

    The first step is to set up the Lagrangian function:

        L(x) = f(x) − Σ_{m=1}^{M} λ_m [h_m(x) − c_m].

    The variables λ_m (m = 1, …, M) are the Lagrange multipliers corresponding to the respective inequality constraints. Informally, each multiplier measures how restrictive its constraint is: in which direction and to what extent the optimal value of the objective function would change if the constraint were slightly relaxed (or tightened).


    Kuhn Tucker Conditions

    The second step is to find the critical point(s) of the Lagrangian, that is, the point(s) x* which solve(s) the following equations (i = 1, …, n):

        ∂L(x*)/∂x_i = ∂f(x*)/∂x_i − Σ_{m=1}^{M} λ_m ∂h_m(x*)/∂x_i = 0.

    The third step is to prescribe the so-called complementarity conditions, which require the following three things: (i) Each of the multipliers must be nonnegative: λ_m ≥ 0. (ii) If any constraint m is non-binding at (any of) the critical point(s) of the Lagrangian [that is, it holds with strict inequality: h_m(x*) < c_m], then the respective multiplier must be zero: λ_m = 0. (iii) If any of the multipliers is positive, λ_m > 0, then the respective constraint must be binding at the critical point: h_m(x*) = c_m. In short:

        λ_m ≥ 0,   λ_m [h_m(x*) − c_m] = 0.

    The above two formulae are referred to as the Kuhn Tucker conditions.
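    The three-step recipe (stationarity plus complementarity) can be sketched as a numerical check. The helper below is an illustration, not a standard library routine; it takes the gradient of f, a list of (h_m, grad h_m, c_m) triples, a candidate point, and candidate multipliers, and tests the displayed conditions. The data in the example are those of Problem 1 below.

```python
def kuhn_tucker_ok(grad_f, constraints, x, lam, tol=1e-8):
    """True iff (x, lam) satisfies stationarity, lam_m >= 0,
    and lam_m * (h_m(x) - c_m) = 0 for every constraint m."""
    for i in range(len(x)):
        # Stationarity: dL/dx_i = df/dx_i - sum_m lam_m * dh_m/dx_i = 0.
        resid = grad_f(x)[i] - sum(l * gh(x)[i]
                                   for (_, gh, _), l in zip(constraints, lam))
        if abs(resid) > tol:
            return False
    # Complementarity: lam_m >= 0 and lam_m * (h_m(x) - c_m) = 0.
    return all(l >= -tol and abs(l * (h(x) - c)) <= tol
               for (h, _, c), l in zip(constraints, lam))

grad_f = lambda x: (2 * x[0], 2 * x[1] + 1)   # f = x1^2 + x2^2 + x2 - 1
cons = [(lambda x: x[0]**2 + x[1]**2,         # h(x) = x1^2 + x2^2
         lambda x: (2 * x[0], 2 * x[1]),      # grad h
         1.0)]                                # c = 1
print(kuhn_tucker_ok(grad_f, cons, (0.0, 1.0), [1.5]))  # True
```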


    Kuhn Tucker Conditions

    [Figure omitted: contour lines of f(x) and the constraint region h(x) ≤ c, with the unconstrained maximum x** lying inside the region.]

    Figure 1: Function f(x1, x2) has a global maximum at x**. The constraint h(x1, x2) ≤ c is non-binding at x**, thus the constrained maximum coincides with the unconstrained one and λ = 0.


    Kuhn Tucker Conditions

    [Figure omitted: contour lines of f(x) and the constraint region h(x) ≤ c, with x** outside the region and x* on its boundary.]

    Figure 2: Function f(x1, x2) has a global maximum at x**, but the constrained maximum is x*. The constraint h(x1, x2) ≤ c is binding at x*. Any relaxation of the constraint increases the objective function and λ > 0.


    Constraints with Equality

    Suppose that there are also equality constraints: g1(x) = b1, …, gK(x) = bK. The usual notation for the multipliers corresponding to these constraints is μ1, …, μK. The Kuhn Tucker conditions remain the same in the presence of equality constraints, with the only modification that we do not need complementarity conditions for these constraints. In particular, this means that any μ_k can be negative as well.

    Try to explain graphically the relevance of equality constraints based on Figures 1 and 2.


    Optimality of the Kuhn Tucker Solution

    The following result is very important in practice: If the constraint set S is nonempty and convex, and the objective function f is strictly quasiconcave, then any solution x* to the constrained maximization problem is unique. Moreover, provided that a technical condition, which we discuss later, holds, the Kuhn Tucker conditions are both necessary and sufficient for determining this unique maximum. A special case when this holds is when f is strictly concave and the h's are convex. (Why?) If these convexity/concavity conditions on the constraint set and on the objective function do not hold, then the Kuhn Tucker conditions are necessary but may not be sufficient.

    Again, try to argue graphically. What happens if the maximization problem admits no solution at all?


    Existence of a Solution

    We have just concluded that if a solution exists to a nonlinear programming problem, then it has to satisfy the Kuhn Tucker conditions. But when does a solution exist? Weierstrass's theorem says that if the constraint set is bounded and closed (the latter is always satisfied in our context; why?), then any continuous function takes both maximum and minimum values on this set.


    Problem 1

    Solve the following nonlinear programming problem:

        f(x1, x2) = x1² + x2² + x2 − 1 → max
        s.t. h(x1, x2) = x1² + x2² ≤ 1.

    Note that even though the constraint set is convex, the objective function is strictly convex (you should check these), so the Kuhn Tucker conditions may not be sufficient. They are still necessary, however. Set up the Lagrangian:

        L(x1, x2) = x1² + x2² + x2 − 1 − λ(x1² + x2² − 1).

    Any maximum point x* (at least one exists by Weierstrass's theorem, since the constraint set is bounded) must satisfy the first-order conditions:

        ∂L(x*)/∂x1 = 2x1* − 2λx1* = 2x1*(1 − λ) = 0,
        ∂L(x*)/∂x2 = 2x2* + 1 − 2λx2* = 2x2*(1 − λ) + 1 = 0.


    Problem 1

    Cont'd:

        ∂L(x*)/∂x1 = 2x1* − 2λx1* = 2x1*(1 − λ) = 0,
        ∂L(x*)/∂x2 = 2x2* + 1 − 2λx2* = 2x2*(1 − λ) + 1 = 0.

    Also, we have to have the complementarity condition satisfied:

        λ ≥ 0,   λ = 0 if (x1*)² + (x2*)² < 1.

    From the first equation, either x1* = 0 or λ = 1. Suppose that λ = 1. Then the second equation yields a contradiction (1 = 0). Therefore, it must be that x1* = 0. Now suppose that the constraint (x1*)² + (x2*)² ≤ 1 holds with equality. (Give it a thought: could λ be zero in this case? Yes, it possibly could!)


    Problem 1

    Maintaining the supposition that (x1*)² + (x2*)² = 1, the fact we've already established that x1* = 0 yields that either x2* = 1 or x2* = −1. Assume for the moment that x2* = 1. From the second equation, 2x2*(1 − λ) + 1 = 0, we conclude that λ = 3/2, which clearly satisfies the complementarity condition. Hence x1* = 0, x2* = 1, and λ = 3/2 satisfies the Kuhn Tucker conditions. But we're not done here. Now suppose that the constraint still holds with equality but x2* = −1. By the second equation, λ = 1/2 ≥ 0. Therefore, the triple consisting of x1* = 0, x2* = −1, and λ = 1/2 also satisfies the Kuhn Tucker conditions. Finally, suppose that (x1*)² + (x2*)² < 1. The complementarity condition yields that λ = 0. Since x1* = 0 must still hold, we have from the second equation that x2* = −1/2. Therefore, x1* = 0, x2* = −1/2, and λ = 0 satisfies the Kuhn Tucker conditions, too.


    Problem 1

    We have three candidates for the maximum point:

                       x1*    x2*     λ
        Candidate #1    0      1     3/2
        Candidate #2    0     −1     1/2
        Candidate #3    0    −1/2     0

    Let's evaluate the objective function f(x1, x2) = x1² + x2² + x2 − 1 at each of them:

        f(0, 1) = 1,
        f(0, −1) = −1,
        f(0, −1/2) = −5/4.

    Therefore, the only solution is x1* = 0 and x2* = 1 with λ = 3/2. Note, incidentally, that the point (0, −1/2) is a global minimum of the objective function. (Why?)


    Problem 1

    [Figure omitted: contour lines of f(x) and the disk x1² + x2² ≤ 1, with the points (0, 1) and (0, −1/2) and the direction of growth marked.]

    Figure 3: Graphical representation of Problem 1.


    Problem 2

    Solve the following nonlinear programming problem:

        f(x1, x2, x3) = −x1² − x2² − x3² + 4x3 → max
        s.t. h1(x1, x2, x3) = x3 − x1x2 ≤ 0,
             h2(x1, x2, x3) = x1² + x2² + x3² − 3 ≤ 0.

    The first-order conditions and the complementarity conditions are as follows:

        −2x1*(1 + λ2) + λ1x2* = 0,
        −2x2*(1 + λ2) + λ1x1* = 0,
        4 − 2x3*(1 + λ2) − λ1 = 0,
        λ1 ≥ 0,   λ1 = 0 if x3* − x1*x2* < 0,
        λ2 ≥ 0,   λ2 = 0 if (x1*)² + (x2*)² + (x3*)² < 3.


    Problem 2

    What we are going to do now is contemplate every possible variation corresponding to each of the constraints being either binding or non-binding. It'll be tedious, but it's worth it for practice.

    Case 1: Both constraints are non-binding. From the fourth and the fifth conditions, λ1 = λ2 = 0. From the first three equations, x1* = x2* = 0 and x3* = 2. But then (x1*)² + (x2*)² + (x3*)² = 4 > 3, which violates the second constraint. Contradiction.

    Case 2: The first constraint is non-binding, the second one is binding. Then λ1 = 0 and (x1*)² + (x2*)² + (x3*)² = 3. By the first equation, x1* = 0 (since 1 + λ2 ≥ 1 > 0). Similarly, by the second equation, x2* = 0. Therefore, by the second constraint, x3* = ±√3. But the first constraint requires that x3* ≤ x1*x2* = 0, so that x3* = −√3. Now, from the third equation, λ2 = −2/√3 − 1 < 0, which is impossible.


    Problem 2

    Case 3: The first constraint is binding, the second one is non-binding. Then x1*x2* = x3* and λ2 = 0. Suppose first that x1* ≠ 0. Then, by the second equation, λ1 = 2x2*/x1*. Substitute this into the first equation to conclude that (x1*)² = (x2*)², or x2* = ±x1*. Therefore, λ1 = ±2. But λ1 ≥ 0 by the complementarity condition, so λ1 = 2 and x1* = x2*. By the third equation, we have that x3* = 1. Since the first constraint is binding, it must be that (x1*)² = (x2*)² = 1. Therefore, (x1*)² + (x2*)² + (x3*)² = 3, which violates the assumption that the second constraint is non-binding. Thus we can only have x1* = 0. By the second equation, x2* = 0. Therefore, by the first constraint, x3* = 0. Now from the third equation, we have λ1 = 4. To sum up: x1* = x2* = x3* = 0 with λ1 = 4 and λ2 = 0 satisfies the Kuhn Tucker conditions.


    Problem 2

    Case 4: Both constraints are binding: (x1*)² + (x2*)² + (x3*)² = 3 and x1*x2* = x3*. Subtract the second first-order equation from the first to conclude that (x2* − x1*)(2 + λ1 + 2λ2) = 0. The second factor of the product is strictly positive by the complementarity conditions, so it must be that x1* = x2*. Substituting this into the first constraint, we have x3* = (x1*)². Substituting again into the second constraint, we obtain (x1*)⁴ + 2(x1*)² − 3 = 0. The solutions to this quadratic equation in (x1*)² are (x1*)² = −1 ± 2. Since x1* must be real, the only possibility is (x1*)² = −1 + 2 = 1, that is, x1* = ±1. Then x2* = x1* = ±1 and x3* = 1. Substituting this into the first equation, we have (−2 + λ1 − 2λ2)x1* = 0, which implies that −2 + λ1 − 2λ2 = 0, since x1* ≠ 0. But the third equation with x3* = 1 implies that 2 − λ1 − 2λ2 = 0. Adding these two equations gives −4λ2 = 0, or λ2 = 0. Moreover, λ1 = 2. Therefore, λ1 = 2 and λ2 = 0 with either x1* = x2* = 1 and x3* = 1, or x1* = x2* = −1 and x3* = 1, both satisfy the Kuhn Tucker conditions.


    Problem 2

    To sum up, we have three candidates for the maximum point:

                       x1*   x2*   x3*   λ1   λ2
        Candidate #1    0     0     0     4    0
        Candidate #2    1     1     1     2    0
        Candidate #3   −1    −1     1     2    0

    Let's evaluate the objective function f(x1, x2, x3) = −x1² − x2² − x3² + 4x3 at each of them:

        f(0, 0, 0) = 0,
        f(1, 1, 1) = 1,
        f(−1, −1, 1) = 1.

    Therefore, Candidates #2 and #3 both solve the constrained maximization problem.


    Problem 3

    Relax, this problem is going to be much less tiresome. Solve the following nonlinear programming problem:

        f(x1, x2) = 2 − (x1 − 1)² − exp(x2²) → max
        s.t. h(x1, x2) = x1² + x2² ≤ 1/2.

    Now the constraint set is convex and the objective function is strictly concave (you should check these). Therefore, if a solution exists, it is unique and completely characterized by the Kuhn Tucker conditions. Moreover, the constraint set is also bounded, which means by Weierstrass's theorem that a solution indeed exists.


    Problem 3

    The first-order conditions, along with the complementarity condition, are as follows:

        x1*(1 + λ) = 1,
        x2*[λ + exp((x2*)²)] = 0,
        λ ≥ 0,   λ = 0 if (x1*)² + (x2*)² < 1/2.

    The second equation implies that x2* = 0, since λ + exp((x2*)²) > 0. By the first equation, x1* = (1 + λ)⁻¹. If the constraint were non-binding, the complementarity condition would give λ = 0, so that (x1*)² + (x2*)² = (1 + 0)⁻² = 1 > 1/2. This is impossible. Thus, the only possibility is that (1 + λ)⁻² = 1/2, or λ = √2 − 1, and x1* = (1 + λ)⁻¹ = 1/√2.

    The quadratic equation (1 + λ)⁻² = 1/2 has another root for λ. Why did we neglect it?
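    As a quick sanity check (an addition to the slides, not part of them), the closed-form solution can be plugged back into the first-order and complementarity conditions:

```python
import math

lam = math.sqrt(2) - 1           # multiplier from the binding constraint
x1, x2 = 1 / math.sqrt(2), 0.0   # candidate solution of Problem 3

assert abs(x1 * (1 + lam) - 1) < 1e-12            # first-order condition in x1
assert abs(x2 * (lam + math.exp(x2**2))) < 1e-12  # first-order condition in x2
assert lam >= 0                                   # multiplier nonnegative
assert abs(x1**2 + x2**2 - 0.5) < 1e-12           # constraint binds, so lam*(h-c)=0
print("Kuhn Tucker conditions hold at (1/sqrt(2), 0) with lam = sqrt(2) - 1")
```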


    Problem 4

    Solve the following nonlinear programming problem:

        f(x1, x2) = 2 − (x1 − 1)² − exp(x2²) → max
        s.t. h(x1, x2) = x1² + x2² ≤ 2.

    This is the same as Problem 3, with the only modification that the right-hand side of the constraint is now 2 instead of 1/2.


    Problem 4

    The first-order conditions, along with the complementarity condition, are as follows:

        x1*(1 + λ) = 1,
        x2*[λ + exp((x2*)²)] = 0,
        λ ≥ 0,   λ = 0 if (x1*)² + (x2*)² < 2.

    The second equation implies that x2* = 0. By the first equation, x1* = (1 + λ)⁻¹. Now use the complementarity condition. It tells us that if (x1*)² + (x2*)² = (1 + λ)⁻² < 2, then λ = 0, which is equivalent to 2 > (1 + 0)⁻² = 1. This sounds reasonable. Thus, λ = 0, x1* = (1 + λ)⁻¹ = 1, and x2* = 0 satisfies the Kuhn Tucker conditions. We have already seen that the solution in this case is unique, so no further search for other optima is needed.


    Problem 4

    Note that the point x1* = 1 and x2* = 0 is also a global maximum of the objective function f(x1, x2) = 2 − (x1 − 1)² − exp(x2²). (You should check this.) This maximum point is now contained in the interior of the constraint set, which is tantamount to saying that the constraint is non-binding and the corresponding multiplier is thus zero. Note also that this maximum point was not contained in the constraint set in Problem 3, and that there the function took its constrained maximum on the boundary of the constraint set, where the constraint binds.


    Problem 4

    [Figure omitted: contour lines of f(x), the circles x1² + x2² = 1/2 and x1² + x2² = 2, the point (1/√2, 0), and the direction of growth marked.]

    Figure 4: Graphical representation of Problems 3 and 4.


    Value Function and the Envelope Theorem

    Now we formalize the idea embodied in the multipliers. Consider again the general problem:

        f(x) → max / min
        s.t.  g1(x) = b1, …, gK(x) = bK,
              h1(x) ≤ c1, …, hM(x) ≤ cM.

    Suppose now that the parameters corresponding to the constraints, b ∈ ℝᴷ and c ∈ ℝᴹ, are allowed to vary. That is, for every possible set of values of the parameters, we solve the corresponding nonlinear programming problem and calculate the optimum value of the objective function. The function that assigns this optimum value of the objective function to any set of admissible parameter values is called the value function v(b, c).


    Value Function and the Envelope Theorem

    The envelope theorem says the following: Suppose that the value function is differentiable. Assume also that we have a set of parameter values b and c such that if we change any of them by a sufficiently small amount, then at the new, altered values every binding constraint remains binding and every non-binding constraint also remains so. Then the partial derivatives of the value function are exactly the multipliers at the actual optimum point. Formally, for any k = 1, …, K and m = 1, …, M, we have that

        ∂v(b, c)/∂b_k = μ_k,   ∂v(b, c)/∂c_m = λ_m.


    Value Function and the Envelope Theorem

    As an example, let's solve Problems 3 and 4 again, but now let's allow the right-hand side of the inequality constraint to be any real number c ∈ ℝ. That is, our task is to solve the following nonlinear programming problem:

        f(x1, x2) = 2 − (x1 − 1)² − exp(x2²) → max
        s.t. h(x1, x2) = x1² + x2² ≤ c.

    Our ultimate goal is to determine the value function corresponding to this problem. Thus we have to solve this problem for every possible value of c. First of all, observe that if c < 0, then the constraint set is empty. Therefore, there exists no solution to the problem and the value function is not defined for these values of c. If c = 0, then the constraint set consists of the single point x1 = x2 = 0, and this is trivially the unique solution to the problem. That is, v(0) = f(0, 0) = 0.


    Value Function and the Envelope Theorem

    Next, we contemplate values c > 0. From Problems 3 and 4, we already know the Kuhn Tucker conditions:

        x1*(1 + λ) = 1,
        x2*[λ + exp((x2*)²)] = 0,
        λ ≥ 0,   λ = 0 if (x1*)² + (x2*)² < c.

    The second equation implies that x2* = 0. By the first equation, x1* = (1 + λ)⁻¹. Now use the complementarity condition. It tells us that if (x1*)² + (x2*)² = (1 + λ)⁻² < c, then λ = 0, which is equivalent to c > (1 + 0)⁻² = 1. This makes sense if and only if c > 1. Therefore, if c > 1, then x1* = 1, x2* = 0 with λ = 0 is the unique solution and v(c) = f(1, 0) = 1. However, if c ≤ 1, then we run into a contradiction if we assume that the constraint is non-binding. In this case, the constraint must bind and we have x1* = (1 + λ)⁻¹ = √c and λ = 1/√c − 1. Also,

        v(c) = f(√c, 0) = 1 − (1 − √c)².


    Value Function and the Envelope Theorem

    To sum up:

                     x1*    x2*      λ           v(c)
        c < 0        N/A    N/A     N/A          N/A
        c = 0         0      0      N/A           0
        0 < c ≤ 1    √c      0    1/√c − 1   1 − (1 − √c)²
        c > 1         1      0       0            1

    Let's calculate v′(c) wherever the value function is differentiable and compare its value with λ. What is our conclusion?
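    The suggested comparison can be automated. The sketch below (a numerical illustration of the envelope theorem, not a proof) takes a central finite difference of v(c) and compares it with the multiplier λ(c) from the table: for 0 < c < 1 it should reproduce 1/√c − 1, and for c > 1 it should give 0.

```python
import math

def v(c):
    """Value function derived above (defined for c >= 0)."""
    return 1 - (1 - math.sqrt(c))**2 if c <= 1 else 1.0

def lam(c):
    """Multiplier at the optimum for the given c > 0."""
    return 1 / math.sqrt(c) - 1 if c <= 1 else 0.0

h = 1e-6
for c in (0.25, 0.64, 2.0):
    dv = (v(c + h) - v(c - h)) / (2 * h)  # central difference approximating v'(c)
    print(c, dv, lam(c))                  # v'(c) matches lambda(c)
```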


    Value Function and the Envelope Theorem

    [Figure omitted: plot of the value function v(c).]

    Figure 5: Value function of the generalized version of Problems 3 and 4.


    Problem 5

    Let's solve Problem 4 yet again, but now let's prescribe that the constraint hold with equality:

        f(x1, x2) = 2 − (x1 − 1)² − exp(x2²) → max
        s.t. h(x1, x2) = x1² + x2² = 2.

    The Lagrangian pertaining to this problem is the following:

        L(x1, x2) = 2 − (x1 − 1)² − exp(x2²) − μ(x1² + x2² − 2).

    What can we say about the existence and uniqueness of the maximum point? Think twice: the constraint set is not convex now!


    Problem 5

    The first-order conditions are the same as in Problem 4 (except that we have μ instead of λ), but now we do not require any complementarity condition:

        x1*(1 + μ) = 1,
        x2*[μ + exp((x2*)²)] = 0.

    By the second equation, one possibility is that x2* = 0 and, from the constraint (x1*)² + (x2*)² = 2, x1* = ±√2. From the first equation, the respective multipliers pertaining to these cases are μ = ±1/√2 − 1. Both values are negative. Since μ, as we have just seen, can also take negative values, we cannot know for sure that the only possibility for the second equation to hold is x2* = 0, as it was in Problems 3 and 4. So we have to explore the possibility that μ + exp((x2*)²) = 0, too.


    Problem 5

    So suppose that μ + exp((x2*)²) = 0. From the first equation, μ = 1/x1* − 1. Substitute this into the previous equation to yield 1/x1* − 1 + exp((x2*)²) = 0. From the constraint we know that (x2*)² = 2 − (x1*)². We conclude that we have to solve the following (univariate) equation:

        1/x1* + exp(2 − (x1*)²) − 1 = 0.

    By numerical methods (you don't have to be able to calculate these), we find three solutions: −1.1768, −0.1613, and 1.6994. The third one is inadmissible, since it would yield no real value for x2*. Each of the first two roots admits two values for x2*: ±0.7842 and ±1.4050, respectively.
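    The two admissible roots can be reproduced with plain bisection (a sketch added here for illustration; the bracketing intervals were chosen by checking the sign of the left-hand side at the endpoints):

```python
import math

def phi(x1):
    """Left-hand side of the univariate equation: 1/x1 + exp(2 - x1^2) - 1."""
    return 1 / x1 + math.exp(2 - x1**2) - 1

def bisect(f, a, b, tol=1e-10):
    """Standard bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m           # root lies in [a, m]
        else:
            a, fa = m, f(m)  # root lies in [m, b]
    return (a + b) / 2

r1 = bisect(phi, -1.3, -1.0)  # root near -1.1768
r2 = bisect(phi, -0.3, -0.1)  # root near -0.1613
x2_1 = math.sqrt(2 - r1**2)   # corresponding |x2|, near 0.7842
x2_2 = math.sqrt(2 - r2**2)   # corresponding |x2|, near 1.4050
print(r1, r2, x2_1, x2_2)     # matches the roots quoted above
```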


    Problem 5

    To sum up, we have six candidates (once we have x1* and x2*, the multiplier μ and the value of the objective function are easily calculated):

                        x1*       x2*       μ        f(x1*, x2*)
        Candidate #1   1.4142    0.0000   −0.2929     0.8284
        Candidate #2  −1.4142    0.0000   −1.7071    −4.8284
        Candidate #3  −1.1768    0.7842   −1.8497    −4.5884
        Candidate #4  −1.1768   −0.7842   −1.8497    −4.5884
        Candidate #5  −0.1613    1.4050   −7.1993    −6.5479
        Candidate #6  −0.1613   −1.4050   −7.1993    −6.5479

    Therefore, we have a unique solution to the constrained maximization problem: x1* = √2 ≈ 1.4142 and x2* = 0.


    Problem 5

    [Figure omitted: contour lines of f(x) and the circle x1² + x2² = 2, with the six candidate points (±1.4142, 0), (−1.1768, ±0.7842), (−0.1613, ±1.4050) and the direction of growth marked.]

    Figure 6: Graphical representation of Problem 5.


    Regularity Condition

    Remember that when we were discussing the optimality of the Kuhn Tucker solution, we argued that the conditions are necessary (but sufficient only under further special conditions) for a point to be a maximum, provided that a technical condition is satisfied. This is called the regularity condition or constraint qualification. Be careful: if this condition fails, then it may be the case that there are some points which satisfy the Kuhn Tucker conditions but none of them is a maximum point, and the maximum is elsewhere, at a point which does not satisfy the Kuhn Tucker conditions!


    Regularity Condition

    Formally, consider the following nonlinear optimization problem:

        f(x) → max
        s.t. h1(x) ≤ c1, …, hM(x) ≤ cM.

    We say that a point x ∈ S satisfies the regularity condition if the gradients of the constraint functions corresponding to the binding constraints are linearly independent at this point. More formally, suppose that the first M̂ ≤ M constraints are the ones which are binding. Then the following matrix must have rank M̂:

        [ ∂h1(x)/∂x1   …   ∂h1(x)/∂xn ]
        [     ⋮        ⋱       ⋮      ]
        [ ∂hM̂(x)/∂x1   …   ∂hM̂(x)/∂xn ]
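    The rank test can be sketched numerically. The helper and the example constraints below are hypothetical illustrations (not taken from the slides): suppose h1(x) = x1² + x2² and h2(x) = x1 both bind at x = (1, 0) (say c1 = c2 = 1); their gradients (2, 0) and (1, 0) are collinear there, so the regularity condition fails.

```python
def matrix_rank(rows, tol=1e-10):
    """Rank of a small dense matrix via Gaussian elimination with partial pivoting."""
    a = [list(r) for r in rows]
    m, n = len(a), len(a[0])
    rank, col = 0, 0
    while rank < m and col < n:
        piv = max(range(rank, m), key=lambda r: abs(a[r][col]))  # pick pivot row
        if abs(a[piv][col]) < tol:
            col += 1
            continue
        a[rank], a[piv] = a[piv], a[rank]
        for r in range(rank + 1, m):
            factor = a[r][col] / a[rank][col]
            for j in range(col, n):
                a[r][j] -= factor * a[rank][j]  # eliminate below the pivot
        rank += 1
        col += 1
    return rank

# Gradients of the binding constraints at x = (1, 0): rank 1 < 2, regularity fails.
print(matrix_rank([(2.0, 0.0), (1.0, 0.0)]))  # 1
# At x = (0, 1) the gradients would be (0, 2) and (1, 0): rank 2, regularity holds.
print(matrix_rank([(0.0, 2.0), (1.0, 0.0)]))  # 2
```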


    Regularity Condition

    So far, we have not been concerned about the regularity condition when solving problems. However, as the following problem suggests, its failure to hold may well cause complications. Therefore, the general method of solving a nonlinear optimization problem must be complemented by a second step:

    1. Determine all points in the constraint set for which the Kuhn Tucker conditions are satisfied. (We've been doing this until now.)

    2. Determine all points in the constraint set for which the regularity condition does not hold.

    Then, if a maximum point exists (or several maxima exist), it (they) must be among the points that we can determine following these two steps.

    Constrained Optimization Regularity Condition

    Regularity Condition

    http://find/
  • 8/2/2019 Pre Session Math 2009-09-16

    43/56

    Regularity Condition

    So far, we were not concerned about the regularity condition when we were

    solving problems. However, as the following problem suggests, its failureto hold may well cause complications. Therefore, the general methodof solving a nonlinear optimization problem must be complemented by asecond step:

    1 Determine all points in the constraint set for which the Kuhn Tucker

    conditions are satisfied. (Weve been doing this until now.)2 Determine all points in the constraint set for which the regularity

    condition does not hold.

    Then, if a maximum point exists (or several maxima exist), it (they) must

    be among the points that we can determine following these two steps.

    Presession Mathematics Seminar 42 / 54

    Constrained Optimization Regularity Condition

    Regularity Condition

    http://find/http://goback/
  • 8/2/2019 Pre Session Math 2009-09-16

    44/56

    gu y C

    So far, we were not concerned about the regularity condition when we were

    solving problems. However, as the following problem suggests, its failureto hold may well cause complications. Therefore, the general methodof solving a nonlinear optimization problem must be complemented by asecond step:

    1 Determine all points in the constraint set for which the Kuhn Tucker

    conditions are satisfied. (Weve been doing this until now.)2 Determine all points in the constraint set for which the regularity

    condition does not hold.

    Then, if a maximum point exists (or several maxima exist), it (they) must

    be among the points that we can determine following these two steps.

    Presession Mathematics Seminar 42 / 54

    Constrained Optimization Problem 6

    Problem 6


    Solve the following nonlinear programming problem, and this time be careful about the regularity condition:

    f(x1, x2) = 3x1 + x2 → max
    s.t. h1(x1, x2) = (x1 − 1)³ + x2 ≤ 0,
    x1 ≥ 0,
    x2 ≥ 0.

    First of all, let's rewrite the nonnegativity constraints into the following less-than-or-equal-to forms: h2(x1, x2) = −x1 ≤ 0 and h3(x1, x2) = −x2 ≤ 0. The Lagrangian is thus as follows:

    L(x1, x2) = 3x1 + x2 − λ1[(x1 − 1)³ + x2] + λ2x1 + λ3x2.


    The first-order conditions, along with the complementarity conditions (reminder: what do we call these two sets of conditions collectively?), are the following (with a slight abuse of notation, the usual asterisks are omitted for simplicity):

    3 − 3λ1(x1 − 1)² + λ2 = 0,
    1 − λ1 + λ3 = 0,
    λ1 ≥ 0, λ1 = 0 if (x1 − 1)³ + x2 < 0,
    λ2 ≥ 0, λ2 = 0 if x1 > 0,
    λ3 ≥ 0, λ3 = 0 if x2 > 0.

    By either the first or the second equation, λ1 > 0 must hold (why?). The first constraint must thus hold with equality: (x1 − 1)³ + x2 = 0. Now suppose that λ3 > 0. Then x2 = 0, and the binding first constraint gives x1 = 1, which leads us to a contradiction through the first equation (it would force λ2 = −3 < 0). Therefore, it must be that λ3 = 0.


    But if λ3 = 0, then λ1 = 1 by the second equation. Substituting this into the first equation, we have 3 − 3(x1 − 1)² = −λ2 ≤ 0, which is equivalent to 1 ≤ (x1 − 1)². Therefore, either x1 ≥ 2 or x1 ≤ 0. But if we substitute any x1 ≥ 2 into the first constraint, which, as we have just seen, must hold with equality, (x1 − 1)³ + x2 = 0, then we obtain a negative value for x2, which violates the nonnegativity condition. So the only possibility left is that x1 ≤ 0. Together with the nonnegativity constraint, this means that x1 = 0. Putting this into the first constraint, we have that x2 = 1. Moreover, λ1 = 1 and λ3 = 0 (we have already established these), and, from the first equation, λ2 = 0. This set of values clearly satisfies the Kuhn–Tucker conditions; moreover, it is the only set of values that does so. We thus have a unique Kuhn–Tucker solution. The value of the objective function is f(0, 1) = 1.
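    As a quick sanity check (a sketch; the condition list mirrors the slides, the function name is my own), one can verify mechanically that x = (0, 1) with multipliers (λ1, λ2, λ3) = (1, 0, 0) satisfies every Kuhn–Tucker condition of this problem:

    ```python
    # Verify the Kuhn-Tucker conditions of Problem 6 at a candidate point.

    def kt_satisfied(x1, x2, l1, l2, l3, tol=1e-9):
        h1 = (x1 - 1) ** 3 + x2                       # first constraint, <= 0
        stationarity = (abs(3 - 3 * l1 * (x1 - 1) ** 2 + l2) <= tol
                        and abs(1 - l1 + l3) <= tol)  # both FOCs hold
        feasible = h1 <= tol and x1 >= -tol and x2 >= -tol
        nonneg = l1 >= 0 and l2 >= 0 and l3 >= 0      # multiplier signs
        slackness = (abs(l1 * h1) <= tol and abs(l2 * x1) <= tol
                     and abs(l3 * x2) <= tol)          # complementarity
        return stationarity and feasible and nonneg and slackness
    ```

    Here kt_satisfied(0, 1, 1, 0, 0) is True, while, for example, kt_satisfied(1, 0, 1, 0, 0) is False: at x1 = 1 the first FOC reduces to 3 + λ2 = 0, which no nonnegative λ2 can satisfy.
    
    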


    Now let's determine the points in the constraint set where the regularity condition is not satisfied. Remember, it fails whenever the gradients of the binding constraints are not linearly independent. The constraints are represented by the following functions: h1(x1, x2) = (x1 − 1)³ + x2, h2(x1, x2) = −x1, and h3(x1, x2) = −x2. The gradients are given as follows:

    ∂h1(x)/∂x1 = 3(x1 − 1)²,   ∂h1(x)/∂x2 = 1,
    ∂h2(x)/∂x1 = −1,           ∂h2(x)/∂x2 = 0,
    ∂h3(x)/∂x1 = 0,            ∂h3(x)/∂x2 = −1.

    We should systematically go through every possible combination of each constraint being either binding or non-binding (there are eight possibilities). Again, it will be tedious, but we have to practice.


    Case 1 None of the constraints is binding. Then the regularity condition is trivially satisfied.

    Case 2 Only the first constraint is binding. That is: (x1 − 1)³ + x2 = 0, x1 > 0, and x2 > 0. Since the second element of the gradient of this constraint is always 1, irrespective of x, the regularity condition holds. (Remember that if a set of vectors contains only one element, then linear independence holds whenever this single vector is not 0.)

    Case 3 Only the second constraint is binding. That is: (x1 − 1)³ + x2 < 0, x1 = 0, and x2 > 0. Since the first element of the gradient of this constraint is always −1, the regularity condition holds.

    Case 4 Only the third constraint is binding. That is: (x1 − 1)³ + x2 < 0, x1 > 0, and x2 = 0. Analogous to Case 3.


    Case 5 The first two constraints are binding, but the third is not. That is: (x1 − 1)³ + x2 = 0, x1 = 0, and x2 > 0. Therefore, for the regularity condition to hold, the following matrix (containing the gradients of the first two constraints as rows) must have rank 2:

    3(x1 − 1)²    1
       −1         0

    But this matrix is always of rank 2: its determinant is 3(x1 − 1)² · 0 − 1 · (−1) = 1 ≠ 0 for every x1.


    Case 6 The second and the third constraints are binding, but the first is not. That is: (x1 − 1)³ + x2 < 0, x1 = 0, and x2 = 0. For the regularity condition to hold, the following matrix must be of rank 2:

    −1    0
     0   −1

    This matrix is always of rank 2.


    Case 7 All three constraints are binding. That is: (x1 − 1)³ + x2 = 0, x1 = 0, and x2 = 0. For the regularity condition to hold, the following matrix would have to be of rank 3:

    3(x1 − 1)²    1
       −1         0
        0        −1

    But this 3 × 2 matrix can never be of rank 3. (Why? Hint: think of the fundamental theorem of linear algebra; the rank can be at most min{3, 2} = 2.) So, if such a point were to exist, we would have to conclude that the regularity condition is violated there. But, fortunately, no point exists at which all three constraints are simultaneously binding. (Why? Because x1 = 0 and x2 = 0 imply (x1 − 1)³ + x2 = −1 < 0, so the first constraint is slack.)


    Case 8 The first and the third constraints are binding, but the second is not. That is: (x1 − 1)³ + x2 = 0, x1 > 0, and x2 = 0. For the regularity condition to hold, the following matrix must be of rank 2:

    3(x1 − 1)²    1
        0        −1

    We can easily show that this matrix has rank 2 if and only if 3(x1 − 1)² ≠ 0. If 3(x1 − 1)² = 0, then the matrix is only of rank 1 and the regularity condition is violated. This occurs if and only if x1 = 1. The third constraint being binding then gives x2 = 0, and note that the first constraint is indeed binding with this choice of values: (1 − 1)³ + 0 = 0. We thus have the regularity condition violated at x1 = 1 and x2 = 0. Note that f(1, 0) = 3. Let's call this point an irregular point.
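    Numerically (a sketch; the function name is illustrative), the determinant of the Case 8 gradient matrix is −3(x1 − 1)², which vanishes exactly at x1 = 1, so the rank drops to 1 there and nowhere else:

    ```python
    # Determinant of the Case 8 matrix of binding-constraint gradients:
    # | 3(x1 - 1)^2    1 |
    # |     0         -1 |
    def case8_det(x1):
        return 3 * (x1 - 1) ** 2 * (-1) - 1 * 0  # = -3(x1 - 1)^2
    ```
    
    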


    Now let's compare the Kuhn–Tucker solution and the irregular point:

                            x1   x2   f(x1, x2)
    Kuhn–Tucker solution     0    1       1
    Irregular point          1    0       3

    We now see that the Kuhn–Tucker solution is not a maximum! Moreover, there is no way to assign values to the multipliers so that the irregular point satisfies the Kuhn–Tucker conditions. We conclude that in this case the unique solution to the maximization problem is the irregular point.
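    A brute-force check supports this conclusion (a sketch; the grid resolution and search box are arbitrary choices of mine): scanning a fine grid over the feasible set, the best value found is f = 3 at the irregular point (1, 0), not at the Kuhn–Tucker point (0, 1):

    ```python
    # Grid-search the feasible set of Problem 6 to confirm which candidate
    # actually maximizes the objective.

    def f(x1, x2):
        return 3 * x1 + x2

    def feasible(x1, x2):
        return (x1 - 1) ** 3 + x2 <= 0 and x1 >= 0 and x2 >= 0

    # Every feasible point has x1 <= 1 and x2 <= 1, so [0, 2] x [0, 2] is ample.
    grid = [(i / 100, j / 100) for i in range(201) for j in range(201)]
    best = max((p for p in grid if feasible(*p)), key=lambda p: f(*p))
    # best == (1.0, 0.0) and f(*best) == 3.0
    ```
    
    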


    [Figure omitted: a plot in the (x1, x2) plane of the constraint set defined by (x1 − 1)³ + x2 ≤ 0, x1 ≥ 0, x2 ≥ 0, showing the contour lines of the objective function (direction of growth: upward and to the right), the gradients of the first constraint and of the nonnegativity constraint, the Kuhn–Tucker solution, and the irregular point.]

    Figure 7: Example where the Kuhn–Tucker method does not work due to the regularity condition failing to hold.


    Thank you for your attention.
