A partial derivative is the result of multivariable differentiation, where we differentiate with respect to one variable and hold the others CONSTANT. Other than that, all the usual differentiation rules still apply.
limit
Given a multivariable function $f(x, y)$, the partial derivative with respect to $x$ is defined as the limit
$$\frac{\partial f}{\partial x}(x, y) = \lim_{h \to 0} \frac{f(x + h, y) - f(x, y)}{h}$$
Partial vs. implicit differentiation
With implicit differentiation, both variables are differentiated, but at the end of the problem, the derivative of one variable (e.g., $\frac{dy}{dx}$) is isolated by itself on one side of the equation. If the two variables are independent, implicit differentiation is inappropriate.
With partial differentiation, one variable is differentiated, but the other is held constant.
Tip: In implicit differentiation, we consider $y$ as an implicit function of $x$ (or vice versa), while in partial differentiation, we only consider the partial rate of change of a multivariable function.
Example Calculations
In each case, the variables not being differentiated are treated as constants.
- Given , what are their partial derivatives at the point ?
The tangent line to
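The expressions for the example calculations above were lost in extraction. As a sketch of how such partial derivatives are computed, here is a sympy version using an assumed placeholder function $f(x, y) = x^2 y + \sin(y)$ (not the original):

```python
import sympy as sp

x, y = sp.symbols('x y')
# Assumed example function (the original was lost): f(x, y) = x**2 * y + sin(y)
f = x**2 * y + sp.sin(y)

# Partial derivative with respect to x: y is treated as a constant
fx = sp.diff(f, x)  # 2*x*y
# Partial derivative with respect to y: x is treated as a constant
fy = sp.diff(f, y)  # x**2 + cos(y)

print(fx, fy)
```

Evaluating `fx.subs({x: 1, y: 2})` then gives the slope at a specific point.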
High-order partial derivatives
For a function with two variables $f(x, y)$, there are four second-order partial derivatives: $f_{xx}$, $f_{xy}$, $f_{yx}$, and $f_{yy}$. Note that there is a mismatch between the subscript and Leibniz notations:
- In $f_{xy} = \dfrac{\partial^2 f}{\partial y\,\partial x}$, we first differentiate with respect to $x$ and then to $y$.
- In $f_{yx} = \dfrac{\partial^2 f}{\partial x\,\partial y}$, we first differentiate with respect to $y$ and then to $x$.
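A minimal sympy sketch of the ordering, using an assumed placeholder function $f(x, y) = x^3 y^2$ (not from the original). It also checks the standard fact that the two mixed partials agree when they are continuous (Clairaut's theorem):

```python
import sympy as sp

x, y = sp.symbols('x y')
# Assumed example: f(x, y) = x**3 * y**2
f = x**3 * y**2

f_xy = sp.diff(f, x, y)  # differentiate with respect to x first, then y
f_yx = sp.diff(f, y, x)  # differentiate with respect to y first, then x

# Clairaut's theorem: continuous mixed partials are equal regardless of order
assert f_xy == f_yx
print(f_xy)  # 6*x**2*y
```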
Example
Application

tangent plane
A plane that touches the graph of a function $f(x, y)$ at a point and has the same slope as the surface in every direction at that point. At the point $(a, b)$, its equation is
$$z = f(a, b) + f_x(a, b)\,(x - a) + f_y(a, b)\,(y - b)$$
Critical point
In multivariable functions, a critical point occurs where ALL of the function's partial derivatives are simultaneously 0. We classify those critical points using the second partial derivative test.
second partial derivative test
A test to determine whether a critical point of a twice-differentiable multivariate function $f$ is a local minimum or a local maximum. Suppose all second partial derivatives are defined and continuous on a neighborhood around the critical point $(a, b)$ of a multivariate function $f(x, y)$. Define:
$$D = f_{xx}(a, b)\,f_{yy}(a, b) - \big(f_{xy}(a, b)\big)^2$$
- If $D > 0$ and $f_{xx}(a, b) > 0$, $(a, b)$ is a local minimum
- If $D > 0$ and $f_{xx}(a, b) < 0$, $(a, b)$ is a local maximum
- If $D < 0$, $(a, b)$ is a saddle point
- If $D = 0$, the test is inconclusive

Examples
Example 1: multivariable optimization
Example 2: The scalar field has a critical point at . How does the second partial derivative test classify this point?
- Calculate the first-order partial derivatives
- Calculate all second-order partial derivatives
- Perform the second partial derivative test:
The test is inconclusive.
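The original scalar field is missing, so here is a sketch of the test on an assumed function, $f(x, y) = x^4 + y^4$, whose critical point at the origin also produces an inconclusive result:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Assumed example (the original scalar field was lost): f(x, y) = x**4 + y**4
f = x**4 + y**4

fx, fy = sp.diff(f, x), sp.diff(f, y)
# (0, 0) is a critical point: both first partials vanish there
assert fx.subs({x: 0, y: 0}) == 0 and fy.subs({x: 0, y: 0}) == 0

fxx = sp.diff(f, x, x)
fyy = sp.diff(f, y, y)
fxy = sp.diff(f, x, y)

# Discriminant D = fxx*fyy - fxy**2 evaluated at the critical point
D = (fxx * fyy - fxy**2).subs({x: 0, y: 0})
print(D)  # 0 -> the test is inconclusive
```

(Here the origin is in fact a minimum, which shows the test failing to classify it rather than the point being degenerate in behavior.)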
global extremum
Multivariable calculus
Linear constraints
1. Define the function's domain to get its endpoints
2. Compute the function values at critical points
    - Find all first partial derivatives
    - Set the partial derivatives to 0 to find critical points
        - Only consider those that are within the domain
        - (Optional) Use the second partial derivative test to classify the critical points (in case they are saddle points)
    - Compute the function values at those critical points
3. Compute the function values at the boundaries of the domain. For each boundary:
    - Substitute it into the function to simplify it into a single-variable function
    - Differentiate the new single-variable function
    - Set the computed derivative to 0 to find the value of the other variable
    - Compute the function value
4. Compute the function values at corner points
5. Compare all values from steps 2, 3, and 4 to find the absolute extrema
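The procedure above can be sketched with sympy on an assumed example (not from the original workbook): $f(x, y) = x^2 + y^2 - x - y$ on the unit square $0 \le x, y \le 1$.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# Assumed example: f(x, y) = x**2 + y**2 - x - y on the square 0 <= x, y <= 1
f = x**2 + y**2 - x - y

candidates = []

# Step 2: interior critical points (both first partials equal 0)
for s in sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True):
    if 0 <= s[x] <= 1 and 0 <= s[y] <= 1:
        candidates.append(f.subs(s))

# Step 3: boundaries -- substitute each edge to get a one-variable problem
for edge, var in [(f.subs(x, 0), y), (f.subs(x, 1), y),
                  (f.subs(y, 0), x), (f.subs(y, 1), x)]:
    for s in sp.solve(sp.diff(edge, var), var):
        if 0 <= s <= 1:
            candidates.append(edge.subs(var, s))

# Step 4: corner points
for cx in (0, 1):
    for cy in (0, 1):
        candidates.append(f.subs({x: cx, y: cy}))

# Step 5: compare all candidate values
print(min(candidates), max(candidates))  # -1/2 0
```

The global minimum $-1/2$ comes from the interior critical point $(1/2, 1/2)$; the maximum $0$ is attained at the corners.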
Find the local and global extrema of the function such that (Minerva Uni) (Workbook)
Step 1: Sketch the domain
Step 2: Critical points
Taking the partial derivatives of the function and setting them to 0, only one case lies inside the domain. Using the second derivative test, we confirm that it is a local minimum.

Step 3: Boundaries
Because we have only found local extrema inside the domain, we must also substitute values along the boundaries:
- Along the boundary
, while :
- Along the boundary
, while :

Step 4: Corner points
Step 5: Compare the function values of the local extremum, the boundaries, and the corner points
From previous steps, we list 5 candidates:
And corner points:
When both the objective function and the constraints are linear, we can use linear programming instead.
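A minimal sketch of why linear programming works: a linear objective over a bounded polygonal feasible region attains its optimum at a vertex, so a tiny problem can be solved by enumerating vertices. The problem data below are assumed for illustration (maximize $x + 2y$ subject to $x \ge 0$, $y \ge 0$, $x + y \le 4$, $y \le 3$):

```python
from itertools import combinations

# Each constraint is written as a*x + b*y <= c
constraints = [(-1, 0, 0), (0, -1, 0), (1, 1, 4), (0, 1, 3)]

def objective(x, y):
    return x + 2 * y

vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue  # parallel boundary lines never intersect
    # Intersection of the two boundary lines (Cramer's rule)
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    # keep the intersection only if it satisfies every constraint
    if all(a * x + b * y <= c + 1e-9 for a, b, c in constraints):
        vertices.append((x, y))

best = max(vertices, key=lambda v: objective(*v))
print(best, objective(*best))  # (1.0, 3.0) 7.0
```

Real solvers (e.g. the simplex method) walk between vertices instead of enumerating all of them, but the optimum-at-a-vertex principle is the same.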
KKT conditions
Transclude of KKT-conditions
Optimization
Transclude of mathematical-optimization#process
Constrained optimization - Lagrange multipliers
constrained optimization
constrained optimization is the problem of optimizing an objective function with respect to some variables when there are constraints on those variables
Example: Maximize
on the set . See the contour map below.

Process
- Substitute the constraint function into the objective function and solve the now-unconstrained problem
- Lagrange multipliers
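The first approach (substitution) can be sketched in sympy on an assumed example (not from the original): maximize $f(x, y) = xy$ subject to $x + y = 1$.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Assumed example: maximize f(x, y) = x*y subject to x + y = 1
f = x * y

# Substitute the constraint y = 1 - x to get an unconstrained one-variable problem
g = f.subs(y, 1 - x)               # x*(1 - x)
crit = sp.solve(sp.diff(g, x), x)  # [1/2]
print(crit[0], g.subs(x, crit[0]))
```

The single critical point $x = 1/2$ (so $y = 1/2$) gives the maximum value $1/4$.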
Process with Example
- Make sure the problem follows this form:
    - The problem can be represented by a differentiable, multivariate function $f(\mathbf{x})$ with $n$-dimensional input
    - There are $m$ constraint functions, each taking the form of a multivariate function $g_i(\mathbf{x}) = c_i$, where $c_i$ is a constant and $\mathbf{x}$ has the same dimension as the input of $f$
    - Both $f$ and $g_i$ are twice-differentiable around an open neighborhood of the optimizer
- For each constraint function $g_i$, introduce a new variable $\lambda_i$, the Lagrange multiplier. Then define the Lagrangian function as follows:
$$\mathcal{L}(\mathbf{x}, \boldsymbol{\lambda}) = f(\mathbf{x}) - \sum_{i=1}^{m} \lambda_i \big(g_i(\mathbf{x}) - c_i\big)$$
- Set the gradient of $\mathcal{L}$ to the zero vector to find the critical points of $\mathcal{L}$: all components (partial derivatives) of $\nabla \mathcal{L}$ must equal 0.
- This leads to a system of equations
- Each candidate solution looks like $(\mathbf{x}^*, \boldsymbol{\lambda}^*)$. Remove $\boldsymbol{\lambda}^*$, and we have found the critical points of $f$, namely $\mathbf{x}^*$. To classify these critical points, there are two ways:
    - Plug the values $\mathbf{x}^*$ back into the function $f$ and compare the function values to determine the global maximum and minimum over the feasible region. Easy and recommended.
    - Check the bordered Hessian of $\mathcal{L}$ and classify each point by the signs of its leading principal minors.

Examples
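The original worked examples are not reproduced here; as a sketch of the process, here is an assumed problem (not from the original): optimize $f(x, y) = x + y$ on the circle $x^2 + y^2 = 2$.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
# Assumed example: optimize f(x, y) = x + y subject to x**2 + y**2 = 2
f = x + y
g = x**2 + y**2 - 2  # constraint written as g(x, y) = 0

# Lagrangian L = f - lambda * g; set its gradient to the zero vector
L = f - lam * g
eqs = [sp.diff(L, v) for v in (x, y, lam)]
sols = sp.solve(eqs, [x, y, lam], dict=True)

# Drop lambda, then compare function values to classify the critical points
values = sorted(f.subs(s) for s in sols)
print(values)  # [-2, 2]: minimum at (-1, -1), maximum at (1, 1)
```

Comparing the function values (the first classification method above) identifies the minimum and maximum over the constraint set directly.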