Constrained optimization is the problem of optimizing an objective function with respect to some variables when there are constraints on those variables.

Example: Maximize $f(x, y)$ on the set $g(x, y) = c$. See the contour map below.

Process

(Guided Examples)

  1. Substitute the constraint into the objective function and solve the resulting unconstrained problem (a sketch follows this list)
  2. Lagrange multipliers
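
A minimal sketch of the substitution approach in SymPy, assuming a hypothetical problem (maximize $f(x, y) = xy$ subject to $x + y = 10$) that is not taken from the guided examples:

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)

# Hypothetical problem: maximize f(x, y) = x*y subject to x + y = 10.
f = x * y
constraint = sp.Eq(x + y, 10)

# Solve the constraint for y and substitute into f, leaving an
# unconstrained single-variable function of x.
y_sub = sp.solve(constraint, y)[0]           # y = 10 - x
f_unconstrained = f.subs(y, y_sub)           # x*(10 - x)

# Solve the now-unconstrained problem by setting the derivative to zero.
critical_x = sp.solve(sp.diff(f_unconstrained, x), x)
print(critical_x, f_unconstrained.subs(x, critical_x[0]))   # [5] 25
```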

Process with Example

  1. Make sure the problem follows this form:
    1. The objective can be represented by a differentiable, multivariate function $f(\mathbf{x})$ with an $n$-dimensional input $\mathbf{x}$
    2. There are $m$ constraint functions, each taking the form $g_i(\mathbf{x}) = c_i$, where $c_i$ is a constant and the input of $g_i$ has the same dimension as the input of $f$
    3. Both $f$ and each $g_i$ are twice-differentiable on an open neighborhood of the optimizer
  2. For each constraint function $g_i$, introduce a new variable $\lambda_i$, its Lagrange multiplier. Then define the Lagrangian function as follows: $$\mathcal{L}(\mathbf{x}, \lambda_1, \dots, \lambda_m) = f(\mathbf{x}) - \sum_{i=1}^{m} \lambda_i \bigl(g_i(\mathbf{x}) - c_i\bigr)$$
  3. Set the gradient of $\mathcal{L}$ to the zero vector to find the critical points of $\mathcal{L}$. All components (partial derivatives) of $\nabla \mathcal{L}$ must equal 0.
    1. This leads to a system of $n + m$ equations in $n + m$ unknowns
  4. Each candidate solution looks like $(\mathbf{x}^*, \lambda_1^*, \dots, \lambda_m^*)$. Remove the multipliers, and we have found the critical points of $f$ on the constraint set, namely the points $\mathbf{x}^*$ (see the sketch after this list)
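
A sketch of steps 2–4 in SymPy, using the same hypothetical problem as above (maximize $f(x, y) = xy$ subject to $x + y = 10$); the problem and variable names are illustrative assumptions, not part of the original note:

```python
import sympy as sp

x, y, lam = sp.symbols("x y lambda", real=True)

# Hypothetical problem: f(x, y) = x*y with one constraint g(x, y) = x + y = 10.
f = x * y
g = x + y
c = 10

# Step 2: define the Lagrangian L = f - lambda*(g - c).
L = f - lam * (g - c)

# Step 3: set the gradient of L (w.r.t. x, y, and lambda) to the zero vector.
gradient = [sp.diff(L, v) for v in (x, y, lam)]
solutions = sp.solve(gradient, [x, y, lam], dict=True)

# Step 4: drop lambda from each candidate to get critical points of f.
critical_points = [{x: s[x], y: s[y]} for s in solutions]
print(critical_points)   # [{x: 5, y: 5}]
```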

To classify these critical points, there are two ways:

  1. Plug the candidate values $\mathbf{x}^*$ back into $f$ and compare the function values to determine the global maximum and minimum over the feasible region. Easy and recommended (see the sketch after this list).
  2. Check the bordered Hessian $\overline{H}$ of $\mathcal{L}$ at each critical point: with $n$ variables and $m$ constraints, examine the signs of the last $n - m$ leading principal minors of $\overline{H}$. If they alternate in sign, with the determinant of the full matrix having sign $(-1)^n$, the point is a local maximum; if they all have sign $(-1)^m$, it is a local minimum.
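
A sketch of both classification checks, continuing the hypothetical example above; the candidate point and the bordered-Hessian layout for $n = 2$, $m = 1$ are assumptions carried over from the earlier sketches:

```python
import sympy as sp

x, y, lam = sp.symbols("x y lambda", real=True)
f, g, c = x * y, x + y, 10
L = f - lam * (g - c)
candidate = {x: 5, y: 5, lam: 5}   # critical point from the previous sketch

# Way 1: plug the candidate back into f and compare values across candidates.
print(f.subs(candidate))           # 25

# Way 2: bordered Hessian for n = 2 variables and m = 1 constraint:
# [[0, g_x, g_y], [g_x, L_xx, L_xy], [g_y, L_yx, L_yy]].
H = sp.Matrix([
    [0,             sp.diff(g, x),    sp.diff(g, y)],
    [sp.diff(g, x), sp.diff(L, x, x), sp.diff(L, x, y)],
    [sp.diff(g, y), sp.diff(L, y, x), sp.diff(L, y, y)],
])
# With n - m = 1, only the full determinant needs checking; a sign of
# (-1)^n = +1 indicates a local maximum.
print(H.subs(candidate).det())     # 2 > 0, so (5, 5) is a local maximum
```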

Examples

Guided Example
