Given a function \(f(x,y)\text{,}\) we are often interested in points where \(z=f(x,y)\) takes on the largest or smallest values. For instance, if \(f\) represents a cost function, we would likely want to know what \((x,y)\) values minimize the cost. If \(f\) represents the ratio of a volume to surface area, we would likely want to know where \(f\) is greatest. This leads to the following definition.
Let \(z=f(x,y)\) be defined on a set \(S\) containing the point \(P=(x_0,y_0)\text{.}\)
If \(f(x_0,y_0)\geq f(x,y)\) for all \((x,y)\) in \(S\text{,}\) then \(f\) has an absolute maximum at \(P\text{.}\) If \(f(x_0,y_0)\leq f(x,y)\) for all \((x,y)\) in \(S\text{,}\) then \(f\) has an absolute minimum at \(P\text{.}\)
If there is an open disk \(D\) containing \(P\) such that \(f(x_0,y_0) \geq f(x,y)\) for all points \((x,y)\) that are in both \(D\) and \(S\text{,}\) then \(f\) has a relative maximum at \(P\text{.}\) If there is an open disk \(D\) containing \(P\) such that \(f(x_0,y_0) \leq f(x,y)\) for all points \((x,y)\) that are in both \(D\) and \(S\text{,}\) then \(f\) has a relative minimum at \(P\text{.}\)
If \(f\) has an absolute maximum or minimum at \(P\text{,}\) then \(f\) has an absolute extremum at \(P\text{.}\) If \(f\) has a relative maximum or minimum at \(P\text{,}\) then \(f\) has a relative extremum at \(P\text{.}\)
If \(f\) has a relative or absolute maximum at \(P=(x_0,y_0)\text{,}\) then every curve on the graph of \(f\) through \((x_0,y_0,f(x_0,y_0))\) also has a relative or absolute maximum at that point. Recalling what we learned in Section 3.1, the slopes of the tangent lines to these curves there must be 0 or undefined. Since directional derivatives are computed using \(f_x\) and \(f_y\text{,}\) we are led to the following definition and theorem: we call \(P\) a critical point of \(f\) when \(f_x(x_0,y_0)\) and \(f_y(x_0,y_0)\) are both \(0\text{,}\) or when at least one of them is undefined.
Theorem 14.5.4. Critical Points and Relative Extrema.
Let \(z=f(x,y)\) be defined on an open set \(S\) containing \(P=(x_0,y_0)\text{.}\) If \(f\) has a relative extremum at \(P\text{,}\) then \(P\) is a critical point of \(f\text{.}\)
Therefore, to find relative extrema, we find the critical points of \(f\) and determine which correspond to relative maxima, relative minima, or neither. The following examples demonstrate this process.
Neither partial derivative is ever undefined. A critical point occurs where \(f_x\) and \(f_y\) are simultaneously 0, leading us to solve the following system of linear equations:
The graph in Figure 14.5.7 shows \(f\) along with this critical point. It is clear from the graph that this is a relative minimum; further consideration of the function shows that this is actually the absolute minimum.
It is clear that \(f_x=0\) when \(x=0\) and \(y\neq0\text{,}\) and that \(f_y=0\) when \(y=0\) and \(x\neq0\text{.}\) At \((0,0)\text{,}\) neither \(f_x\) nor \(f_y\) is \(0\text{;}\) rather, both are undefined. Since critical points also occur where the partial derivatives are undefined, \((0,0)\) is a critical point; in fact, it is the only critical point of \(f\text{.}\)
The graph of \(f\) is plotted in Figure 14.5.9 along with the point \((0,0,2)\text{.}\) The graph shows that this point is the absolute maximum of \(f\text{.}\)
In each of the previous two examples, we found a critical point of \(f\) and then determined, by graphing, whether it was a relative (or absolute) maximum or minimum. It would be nice to be able to determine whether a critical point corresponds to a max or a min without a graph. Before we develop such a test, we do one more example that sheds more light on the issues our test needs to consider.
We have two critical points: \((-1,2)\) and \((1,2)\text{.}\) To determine if they correspond to a relative maximum or minimum, we consider the graph of \(f\) in Figure 14.5.11.
The critical point \((-1,2)\) clearly corresponds to a relative maximum. However, the critical point at \((1,2)\) is neither a maximum nor a minimum, displaying a different, interesting characteristic.
If one walks parallel to the \(y\)-axis towards this critical point, then this point becomes a relative maximum along this path. But if one walks towards this point parallel to the \(x\)-axis, this point becomes a relative minimum along this path. A point that seems to act as both a max and a min is a saddle point. A formal definition follows.
Let \(P=(x_0,y_0)\) be in the domain of \(f\) where \(f_x=0\) and \(f_y=0\) at \(P\text{.}\) We say \(P\) is a saddle point of \(f\) if, for every open disk \(D\) containing \(P\text{,}\) there are points \((x_1,y_1)\) and \((x_2,y_2)\) in \(D\) such that \(f(x_0,y_0) \gt f(x_1,y_1)\) and \(f(x_0,y_0)\lt f(x_2,y_2)\text{.}\)
At a saddle point, the instantaneous rate of change in all directions is 0 and there are points nearby with \(z\)-values both less than and greater than the \(z\)-value of the saddle point.
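For instance, consider \(f(x,y) = x^2-y^2\) (a standard illustration, not one of this section's examples). Both partial derivatives are \(0\) at the origin, yet
\begin{equation*}
f(x,0) = x^2 \geq 0 = f(0,0) \qquad \text{and} \qquad f(0,y) = -y^2 \leq 0 = f(0,0)\text{,}
\end{equation*}
so every open disk containing \((0,0)\) contains points with \(z\)-values greater than \(f(0,0)\) and points with \(z\)-values less than \(f(0,0)\text{.}\) Thus \((0,0)\) is a saddle point of this function.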
Before Example 14.5.10 we mentioned the need for a test to differentiate between relative maxima and minima. We now recognize that our test also needs to account for saddle points. To do so, we consider the second partial derivatives of \(f\text{.}\)
Recall that with single variable functions, such as \(y=f(x)\text{,}\) if \(\fp(c) =0\) and \(\fp'(c) \gt 0\text{,}\) then \(f\) is concave up at \(c\) and has a relative minimum at \(x=c\text{.}\) (We called this the Second Derivative Test.) Note that at a saddle point, it seems the graph is “both” concave up and concave down, depending on which direction you are considering.
One might hope that if both \(f_{xx} \gt 0\) and \(f_{yy} \gt 0\) at a critical point, then the graph is concave up in every direction and the point must be a relative minimum. However, this is not the case. Functions \(f\) exist where \(f_{xx}\) and \(f_{yy}\) are both positive but a saddle point still exists. In such a case, while the concavity in the \(x\)-direction is up (i.e., \(f_{xx} \gt 0\)) and the concavity in the \(y\)-direction is also up (i.e., \(f_{yy} \gt 0\)), the concavity switches somewhere in between the \(x\)- and \(y\)-directions.
To account for this, consider \(D = f_{xx}f_{yy}-f_{xy}f_{yx}\text{.}\) Since \(f_{xy}\) and \(f_{yx}\) are equal when they are continuous (refer back to Theorem 11.3.15), we can rewrite this as \(D = f_{xx}f_{yy}-f_{xy}^{\,2}\text{.}\) \(D\) can be used to test whether the concavity at a point changes depending on direction. If \(D \gt 0\text{,}\) the concavity does not switch (i.e., at that point, the graph is concave up or down in all directions). If \(D\lt 0\text{,}\) the concavity does switch. If \(D=0\text{,}\) our test fails to determine whether concavity switches or not. We state the use of \(D\) in the following theorem.
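As an illustration (again, a function we introduce here, not one of the section's examples), let \(f(x,y) = x^2+4xy+y^2\text{,}\) which has a critical point at \((0,0)\) with \(f_{xx} = f_{yy} = 2 \gt 0\text{.}\) Along the line \(y=x\) the function reduces to \(f(x,x) = 6x^2\text{,}\) which is concave up, while along \(y=-x\) it reduces to \(f(x,-x) = -2x^2\text{,}\) which is concave down, so \((0,0)\) is a saddle point. The quantity \(D\) detects this:
\begin{equation*}
D = f_{xx}f_{yy}-f_{xy}^{\,2} = (2)(2) - 4^2 = -12 \lt 0\text{.}
\end{equation*}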
Let \(R\) be an open set on which a function \(z=f(x,y)\) and all its first and second partial derivatives are defined and continuous, let \(P = (x_0,y_0)\) be a critical point of \(f\) in \(R\text{,}\) and let
\begin{equation*}
D = f_{xx}(x_0,y_0)f_{yy}(x_0,y_0)-f_{xy}^{\,2}(x_0,y_0)\text{.}
\end{equation*}
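If \(D \gt 0\) and \(f_{xx}(x_0,y_0) \gt 0\text{,}\) then \(f\) has a relative minimum at \(P\text{.}\) If \(D \gt 0\) and \(f_{xx}(x_0,y_0) \lt 0\text{,}\) then \(f\) has a relative maximum at \(P\text{.}\) If \(D \lt 0\text{,}\) then \(f\) has a saddle point at \(P\text{.}\) If \(D = 0\text{,}\) the test is inconclusive.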
We first practice using this test with the function in the previous example, where we visually determined we had a relative maximum and a saddle point.
Let \(f(x,y) = x^3-3x-y^2+4y\) as in Example 14.5.10. Determine whether the function has a relative minimum, maximum, or saddle point at each critical point.
We determined previously that the critical points of \(f\) are \((-1,2)\) and \((1,2)\text{.}\) To use the Second Derivative Test, we must find the second partial derivatives of \(f\text{:}\)
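Since \(f_x = 3x^2-3\) and \(f_y = -2y+4\text{,}\) we have
\begin{equation*}
f_{xx} = 6x, \qquad f_{yy} = -2 \qquad \text{and} \qquad f_{xy} = 0\text{,}
\end{equation*}
so \(D(x,y) = f_{xx}f_{yy}-f_{xy}^{\,2} = (6x)(-2)-0 = -12x\text{.}\)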
At \((-1,2)\text{:}\) \(D(-1,2) = 12 \gt 0\) and \(f_{xx}(-1,2) = -6 \lt 0\text{.}\) By the Second Derivative Test, \(f\) has a relative maximum at \((-1,2)\text{.}\) At \((1,2)\text{:}\) \(D(1,2) = -12 \lt 0\text{.}\) By the Second Derivative Test, \(f\) has a saddle point at \((1,2)\text{,}\) as the graph suggested.
We find the critical points by finding where \(f_x\) and \(f_y\) are simultaneously 0 (neither is ever undefined). Setting \(f_x=0\text{,}\) we have:
Figure 14.5.17 shows a graph of \(f\) and the three critical points. Note how this function does not vary much near the critical points — that is, visually it is difficult to determine whether a point is a saddle point or relative minimum (or even a critical point at all!). This is one reason why the Second Derivative Test is so important to have.
When optimizing functions of one variable such as \(y=f(x)\text{,}\) we made use of Theorem 3.1.4, the Extreme Value Theorem, which states that a continuous function on a closed interval \(I=[a,b]\) has both a maximum and a minimum value. To find these maximum and minimum values, we evaluated \(f\) at all critical points in the interval, as well as at the endpoints (the “boundary”) of the interval.
A similar theorem and procedure applies to functions of two variables. A continuous function over a closed, bounded set also attains a maximum and a minimum value (see the following theorem). We can find these values by evaluating the function at the critical points in the set and along the boundary of the set. After formally stating this extreme value theorem, we give examples.
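Let \(z=f(x,y)\) be a continuous function on a closed, bounded set \(S\text{.}\) Then \(f\) attains both a maximum value and a minimum value on \(S\text{.}\)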
Let \(f(x,y) = x^2-y^2+5\) and let \(S\) be the triangle with vertices \((-1,-2)\text{,}\) \((0,1)\) and \((2,-2)\text{.}\) Find the maximum and minimum values of \(f\) on \(S\text{.}\)
It can help to see a graph of \(f\) along with the set \(S\text{.}\) In Figure 14.5.21.(a) the triangle defining \(S\) is shown in the \(xy\)-plane in a dashed line. Above it is the graph of \(f\text{;}\) we are only concerned with the portion of the surface \(z=f(x,y)\) enclosed by the “triangle”.
We begin by finding the critical points of \(f\text{.}\) With \(f_x = 2x\) and \(f_y = -2y\text{,}\) we find only one critical point, at \((0,0)\text{,}\) where \(f(0,0)=5\text{.}\)
We now find the maximum and minimum values that \(f\) attains along the boundary of \(S\text{,}\) that is, along the edges of the triangle. In Figure 14.5.21.(b) we see the triangle sketched in the plane with the equations of the lines forming its edges labeled.
Start with the bottom edge, along the line \(y=-2\text{.}\) If \(y=-2\text{,}\) then on the surface we are considering the points \((x,-2,f(x,-2))\text{;}\) that is, our function reduces to \(f(x,-2) = x^2-(-2)^2+5 = x^2+1=f_1(x)\text{.}\) We want to maximize/minimize \(f_1(x)=x^2+1\) on the interval \([-1,2]\text{.}\) To do so, we evaluate \(f_1(x)\) at its critical points and at the endpoints.
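Since \(f_1'(x) = 2x\text{,}\) the only critical point of \(f_1\) is \(x=0\text{.}\) Evaluating at this critical point and at the endpoints gives
\begin{equation*}
f_1(-1) = 2, \qquad f_1(0) = 1 \qquad \text{and} \qquad f_1(2) = 5\text{,}
\end{equation*}
so along the bottom edge the smallest value of \(f\) is \(1\text{,}\) at \((0,-2)\text{,}\) and the largest is \(5\text{,}\) at \((2,-2)\text{.}\)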
Along the left edge of the triangle, the line \(y=3x+1\text{,}\) the function reduces to \(f_2(x) = f(x,3x+1) = x^2-(3x+1)^2+5 = -8x^2-6x+4\text{.}\) We want the maximum and minimum values of \(f_2\) on the interval \([-1,0]\text{,}\) so we evaluate \(f_2\) at its critical points and at the endpoints of the interval.
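Since \(f_2'(x) = -16x-6\text{,}\) the only critical point of \(f_2\) is \(x=-3/8\text{,}\) corresponding to the point \((-3/8,-1/8)\) on the edge. Evaluating gives
\begin{equation*}
f_2(-3/8) = 5.125, \qquad f_2(-1) = 2 \qquad \text{and} \qquad f_2(0) = 4\text{.}
\end{equation*}
The remaining edge runs from \((0,1)\) to \((2,-2)\) along the line \(y=1-\frac{3}{2}x\text{;}\) there the function reduces to (calling it \(f_3\))
\begin{equation*}
f_3(x) = f\!\left(x,1-\tfrac{3}{2}x\right) = x^2-\left(1-\tfrac{3}{2}x\right)^2+5 = -\tfrac{5}{4}x^2+3x+4, \qquad 0\leq x\leq 2\text{.}
\end{equation*}
Since \(f_3'(x) = -\tfrac{5}{2}x+3\text{,}\) the only critical point of \(f_3\) is \(x=1.2\text{,}\) corresponding to the point \((1.2,-0.8)\text{.}\) Evaluating gives
\begin{equation*}
f_3(1.2) = 5.8, \qquad f_3(0) = 4 \qquad \text{and} \qquad f_3(2) = 5\text{.}
\end{equation*}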
We have evaluated \(f\) at a total of 7 different places, all shown in Figure 14.5.21.(b). We checked each vertex of the triangle twice, as each showed up as the endpoint of an interval twice. Of all the \(z\)-values found, the maximum is 5.8, found at \((1.2,-0.8)\text{;}\) the minimum is 1, found at \((0,-2)\text{.}\)
This portion of the text is entitled “Constrained Optimization” because we want to optimize a function (i.e., find its maximum and/or minimum values) subject to a constraint — some limit to what values the function can attain. In the previous example, we constrained ourselves by considering a function only within the boundary of a triangle. This was largely arbitrary; the function and the boundary were chosen just as an example, with no real “meaning” behind the function or the chosen constraint.
However, solving constrained optimization problems is a very important topic in applied mathematics. The techniques developed here are the basis for solving larger problems, where more than two variables are involved.
The U.S. Postal Service states that the girth plus the length of a Standard Post package must not exceed 130 inches. Given a rectangular box, the “length” is the longest side, and the “girth” is twice the sum of the width and height.
Given a rectangular box where the width and height are equal, what are the dimensions of the box that give the maximum volume subject to the constraint of the size of a Standard Post Package?
Let \(w\text{,}\) \(h\) and \(\ell\) denote the width, height and length of a rectangular box; we assume here that \(w=h\text{.}\) The girth is then \(2(w+h) = 4w\text{.}\) The volume of the box is \(V(w,\ell) = wh\ell = w^2\ell\text{.}\) We wish to maximize this volume subject to the constraint \(4w+\ell\leq 130\text{,}\) or \(\ell\leq 130-4w\text{.}\) (Common sense also indicates that \(\ell \gt 0, w \gt 0\text{.}\))
We begin by finding the critical values of \(V\text{.}\) We find that \(V_w = 2w\ell\) and \(V_\ell = w^2\text{;}\) these are simultaneously 0 at points of the form \((0,\ell)\text{.}\) These give a volume of 0, so we can ignore these critical points.
We found two critical values: when \(w=0\) and when \(w=65/3\approx 21.67\text{.}\) We again ignore the \(w=0\) solution; the maximum volume, subject to the constraint, comes at \(w=h=21.67\text{,}\) \(\ell = 130-4(21.67) \approx 43.33\text{.}\) This gives a volume of \(V(21.67,43.33) \approx 20,343\) in\(^3\text{.}\)
The volume function \(V(w,\ell)\) is shown in Figure 14.5.25 along with the constraint \(\ell = 130-4w\text{.}\) As done previously, the constraint is drawn dashed in the \(xy\)-plane and also along the graph of the function. The point where the volume is maximized is indicated.
It is hard to overemphasize the importance of optimization. In “the real world,” we routinely seek to make something better. By expressing the something as a mathematical function, “making something better” means “optimize some function.”
The techniques shown here are only the beginning of an incredibly important field. Many of the functions we seek to optimize are far more complex, making the step of “find the gradient and set it equal to \(\vec 0\)” highly nontrivial. Mastery of the principles here is key to being able to tackle these more complicated problems.
Find the critical points of the given function. Use the Second Derivative Test to determine if each critical point corresponds to a relative maximum, minimum, or saddle point.