
APEX Calculus: for University of Lethbridge

Section 14.6 The Derivative as a Linear Transformation

We defined what it means for a real-valued function of two variables to be differentiable in Definition 14.1.3 in Section 14.1.
The definition there easily extends to real-valued functions of three or more variables, but it leaves unanswered a couple of natural questions:
  1. What about vector-valued functions of several variables? (That is, functions \(f\) with a domain \(D\subseteq \mathbb{R}^n\) and range in \(\mathbb{R}^m\) for some \(m >1\text{.}\))
  2. What is the derivative of a function of several variables? After all, we know how to define \(\fp(x)\) and \(\vrp(t)\) for real or vector-valued functions of one variable.
One might be tempted at first to simply mimic the definition of the derivative from Chapter 2, but we quickly run into trouble, for a reason that is immediately obvious.
Let \(\vec{a}\) be a fixed point in \(\mathbb{R}^n\text{,}\) and let \(\vec{h}\) represent a point \((h_1,h_2,\ldots, h_n)\text{.}\) Since we’re treating \(\vec{h}\) and \(\vec{a}\) as vectors, we can add them, and write down the limit
\begin{equation*} \lim_{\vec{h}\to\vec{0}}\frac{f(\vec{a}+\vec{h})-f(\vec{a})}{\norm{\vec{h}}}. \end{equation*}
(Note that division by a vector is nonsense, so we must divide by \(\norm{\vec{h}}\text{,}\) not \(\vec{h}\text{.}\)) But of course, we know that this limit does not exist, because it depends on the direction in which \(\vec{h}\) approaches \(\vec{0}\text{!}\) Indeed, if \(\vec{h} = h\vec{i}\) or \(h\vec{j}\text{,}\) we get a partial derivative, and for any unit vector \(\vec{u}\text{,}\) setting \(\vec{h}=h\vec{u}\) gives us a directional derivative, and we know from Section 14.3 that a directional derivative depends on \(\vec{u}\text{.}\) It seems this approach is doomed to failure. What can we try instead?
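For the skeptical reader, here is a minimal numerical sketch (using NumPy, with a function of our own choosing rather than one from the text) of the problem: along different directions of approach, the quotient above settles on different values, so the naive limit cannot define a single derivative.

```python
import numpy as np

# f is an arbitrary differentiable function of two variables (our choice).
def f(x, y):
    return x**2 + 3*y

a = np.array([1.0, 1.0])
directions = [np.array([1.0, 0.0]),                  # along i: gives f_x(a)
              np.array([0.0, 1.0]),                  # along j: gives f_y(a)
              np.array([1.0, 1.0]) / np.sqrt(2)]     # a general unit vector u

for u in directions:
    h = 1e-6 * u                                     # a small step in direction u
    quotient = (f(*(a + h)) - f(*a)) / np.linalg.norm(h)
    print(u, quotient)   # roughly 2, 3, and 3.54: the value depends on the direction
```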
Figure 14.6.1. Defining differentiability in general

Subsection 14.6.1 The Definition of the Derivative

The key to generalizing the definition of the derivative given in Definition 2.1.7 in Chapter 2 is remembering the following essential property of the derivative: the derivative \(\fp(a)\) is used to compute the best linear approximation to \(f\) at \(a\text{.}\) Indeed, the linearization of \(f\) at \(a\) is the linear function
\begin{equation} L_a(x) = f(a) +\fp(a)(x-a)\text{.}\tag{14.6.1} \end{equation}
That this is the best linear approximation of \(f\) at \(a\) can be understood as follows: first, note that the graph \(y=L_a(x)\) is simply the equation of the tangent line to \(y=f(x)\) at \(a\text{.}\) Second, note that the difference between \(f(x)\) and \(L_a(x)\) vanishes faster than the difference \(x-a\) as \(x\) approaches \(a\text{:}\)
\begin{align*} \lim_{x\to a}\frac{f(x)-L_a(x)}{x-a} \amp= \lim_{x\to a}\frac{f(x)-(f(a)+\fp(a)(x-a))}{x-a}\\ \amp = \lim_{x\to a}\left(\frac{f(x)-f(a)}{x-a}-\fp(a)\frac{x-a}{x-a}\right)\\ \amp = \fp(a)-\fp(a)=0\text{.} \end{align*}
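As a quick numerical illustration (with \(f(x)=\sin(x)\) and \(a=0.5\) as our own choices, not taken from the text), the ratio \((f(x)-L_a(x))/(x-a)\) shrinks toward zero as \(x\) approaches \(a\text{:}\)

```python
import numpy as np

f, fp = np.sin, np.cos                 # f and its derivative (our choice of example)
a = 0.5
L = lambda x: f(a) + fp(a) * (x - a)   # the linearization of f at a

for dx in [1e-1, 1e-2, 1e-3, 1e-4]:
    x = a + dx
    print(dx, (f(x) - L(x)) / (x - a))   # the ratio tends to 0 as x -> a
```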
While the definition of the derivative doesn’t generalize well to several variables, the notion of linear approximation does. Recall from your first course in linear algebra that, given any \(m\times n\) matrix \(A\text{,}\) we can define a function \(T\text{,}\) called a linear transformation, that takes an \(n\times 1\) column vector as input, and produces an \(m\times 1\) column vector as output:
\begin{equation*} T(\vec{x}) = A\vec{x} = \begin{bmatrix} a_{11} \amp \cdots \amp a_{1n}\\ \vdots \amp \ddots \amp \vdots \\ a_{m1} \amp \cdots \amp a_{mn} \end{bmatrix} \begin{bmatrix}x_1\\x_2\\ \vdots\\x_n\end{bmatrix} = \begin{bmatrix}y_1\\y_2\\ \vdots \\y_m\end{bmatrix}=\vec{y}. \end{equation*}
In the above definition, the product \(A\vec{x}\) is the usual matrix product of the \(m\times n\) matrix \(A\) with the \(n\times 1\) matrix \(\vec{x}\text{.}\) In this text, we generally do not write our vectors as columns, so for a vector \(\vec{x}=\langle x_1,\ldots, x_n\rangle\) we will use the notation
\begin{equation*} A\cdot \vec{x} =\langle a_{11}x_1+\cdots + a_{1n}x_n, \ldots, a_{m1}x_1+\cdots +a_{mn}x_n\rangle \end{equation*}
to represent the same product in our notation. (And yes, the dot in this product is intended to remind you of the dot product between vectors: recall that the \((i,j)\)-entry of a matrix product \(AB\) is the dot product of the \(i^{\text{th}}\) row of \(A\) with the \(j^{\text{th}}\) column of \(B\text{.}\)) We can now make the following definition.
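The following short sketch (NumPy, with a matrix and vector of our own choosing) illustrates both points: the matrix product \(A\vec{x}\) and the row-by-row dot products give the same result.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])       # a 2x3 matrix: T maps R^3 to R^2
x = np.array([2.0, 1.0, 4.0])

y = A @ x                              # the matrix product A·x
row_dots = np.array([np.dot(A[i], x) for i in range(A.shape[0])])
print(y)          # [ 4. 11.]
print(row_dots)   # the same: entry i is (row i of A)·x
```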

Definition 14.6.2. Linear function.

A function \(\ell\) from \(\mathbb{R}^n\) to \(\mathbb{R}^m\) will be called a linear function if \(\ell\) is of the form
\begin{equation*} \ell(\vec{x}) = M\cdot \vec{x}+\vec{b} \end{equation*}
for some \(m\times n\) matrix \(M\) and vector \(\vec{b}\in\mathbb{R}^m\text{.}\)
If we apply the convention of representing points in terms of their position vectors to the codomain as well as the domain, we can express a function \(f:D\subseteq\mathbb{R}^n\to\mathbb{R}^m\) as \(f=\langle f_1,\ldots, f_m\rangle\text{,}\) where each component function \(f_i\) is a real-valued function of \(n\) variables. We want differentiability of \(f\) at a point \(\vec{a}\) to mean that \(f\) has a linear approximation \(\ell\) that agrees with \(f\) to first order at \(\vec{a}\text{.}\) Since \(f(\vec{x})\) and \(\ell(\vec{x})\) are now vectors, saying that \(\ell\) is a good approximation of \(f\) requires that the magnitude \(\norm{f(\vec{x})-\ell(\vec{x})}\) be small relative to \(\norm{\vec{x}-\vec{a}}\text{.}\)

Definition 14.6.3. General definition of differentiability.

Let \(D\) be an open subset of \(\mathbb{R}^n\) and let \(f\) be a function with domain \(D\) and values in \(\mathbb{R}^m\text{.}\) We say that \(f\) is differentiable at a point \(\vec{a}\in D\) if there exists a linear function \(\ell:\mathbb{R}^n\to\mathbb{R}^m\) that agrees with \(f\) to first order at \(\vec{a}\text{;}\) that is, if
\begin{equation*} \lim_{\vec{x}\to\vec{a}}\frac{\norm{f(\vec{x})-\ell(\vec{x})}}{\norm{\vec{x}-\vec{a}}} = 0. \end{equation*}
This definition is going to take a lot of unpacking. First of all, what is this function \(\ell\text{?}\) How do we compute it? Does this definition include Definition 14.1.3 from Section 14.1 as a special case? What about differentiability for vector-valued functions of one variable, or real-valued functions of one variable?
We will answer the first two questions in due course. The answer to the rest is, “Yes.” The above definition generalizes all the definitions of differentiability we’ve encountered so far. As a first step, let us note that for \(\ell(\vec{x})=M\cdot \vec{x}+\vec{b}\text{,}\) we must have \(\ell(\vec{a})=f(\vec{a})\text{,}\) or the limit above will not exist. Thus \(M\cdot \vec{a}+\vec{b} = f(\vec{a})\text{,}\) so \(\vec{b}=f(\vec{a})-M\cdot \vec{a}\text{.}\) This tells us that \(\ell\) must have the following form:
\begin{align} \ell(\vec{x}) \amp= M\cdot \vec{x} + \vec{b}\notag\\ \amp= M\cdot \vec{x} + (f(\vec{a}) - M\cdot \vec{a})\notag\\ \amp = f(\vec{a})+M\cdot (\vec{x}-\vec{a})\text{.}\tag{14.6.2} \end{align}
This should ring some bells: the form of \(\ell\) is very similar to that of the linearization given for a function of one variable in (14.6.1) above, with the matrix \(M\) playing the role of \(\fp(a)\text{.}\) Perhaps this matrix is the derivative we seek?

Subsection 14.6.2 Real-valued functions of several variables

Let \(f:D\subseteq \mathbb{R}^n\to \mathbb{R}\) be a given function of \(n\) variables (you can assume \(n=1, 2\) or 3 if you prefer). Let us denote a point \((x_1,x_2,\ldots, x_n)\in\mathbb{R}^n\) using the vector \(\vec x = \langle x_1,x_2,\ldots, x_n\rangle\text{,}\) so that \(f(\vec x) = f(x_1,x_2,\ldots,x_n)\text{.}\) Let \(\vec a = \langle a_1,a_2,\ldots, a_n\rangle\) denote a fixed point \((a_1,a_2,\ldots, a_n)\in D\text{.}\)
In Section 14.1, we saw that differentiability means that the difference \(\ddz = f(x+dx,y+dy) - f(x,y)\) can be approximated by the differential \(dz = f_x(x,y)\,dx+f_y(x,y)\,dy\text{.}\) Differentiability was defined to mean that the error functions \(E_x\) and \(E_y\text{,}\) defined by
\begin{equation*} E_x\,dx+E_y\,dy = \ddz -dz\text{,} \end{equation*}
go to zero as \(\langle dx,dy\rangle\) goes to zero. Let’s rephrase this so that it works for any number of variables. Recall that the gradient of \(f\) at \(\vec{a}\in D\) is the vector \(\nabla f(\vec{a})\) defined by
\begin{equation*} \nabla f(\vec{a}) = \left\langle \frac{\partial f}{\partial x_1}(\vec{a}),\frac{\partial f}{\partial x_2}(\vec{a}),\ldots, \frac{\partial f}{\partial x_n}(\vec{a})\right\rangle\text{.} \end{equation*}

Definition 14.6.4. The linearization of a function of several variables.

Let \(f\) be continuously differentiable on some open set \(D\subseteq\mathbb{R}^n\text{,}\) and let \(\vec{a}\in D\text{.}\) The linearization of \(f\) at \(\vec{a}\) is the function \(L_{\vec{a}}(\vec{x})\) defined by
\begin{equation*} L_{\vec{a}}(\vec{x}) = f(\vec{a}) + \nabla f(\vec{a})\cdot (\vec{x}-\vec{a})\text{.} \end{equation*}
When \(n=1\text{,}\) we get the linearization \(L_a(x) = f(a)+f'(a)(x-a)\text{,}\) which is the usual linearization from Calculus I. (You might also notice that \(L_a(x)\) is the first-degree Taylor polynomial of \(f\) about \(x=a\text{.}\) The same is true of the linearization of \(f\) for more than one variable, although we will not be considering Taylor polynomials in several variables.)
For \(n=2\text{,}\) we get the linear approximation associated to the total differential:
\begin{align*} L_{(a,b)}(x,y) \amp= f(a,b)+\langle f_x(a,b),f_y(a,b)\rangle\cdot \langle x-a,y-b\rangle\\ \amp = f(a,b) + f_x(a,b)(x-a)+f_y(a,b)(y-b)\text{.} \end{align*}
Compare this with Equation (14.6.2) above. It seems that the gradient \(\nabla f (\vec{a})\) is our matrix \(M\) in this case: for a real-valued function, \(m=1\text{,}\) so we expect a \(1\times n\) row matrix, and the gradient certainly can be interpreted to fit that description.
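Here is a hedged numerical sketch (the function \(f(x,y)=x^2y\) and the point \((1,2)\) are our own choices) showing how closely \(L_{(a,b)}\) tracks \(f\) near \((a,b)\text{:}\)

```python
import numpy as np

f  = lambda x, y: x**2 * y             # our example function
fx = lambda x, y: 2 * x * y            # its partial derivatives
fy = lambda x, y: x**2

a, b = 1.0, 2.0
L = lambda x, y: f(a, b) + fx(a, b) * (x - a) + fy(a, b) * (y - b)

for dx, dy in [(0.1, 0.1), (0.01, -0.02), (0.001, 0.001)]:
    x, y = a + dx, b + dy
    # the error shrinks much faster than the distance to (a, b)
    print(f(x, y), L(x, y), abs(f(x, y) - L(x, y)))
```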
For real-valued functions, Definition 14.6.3 becomes the following:

Definition 14.6.5. Differentiability of real-valued functions.

We say that \(f\) is differentiable at \(\vec{a}\in D\) if \(\nabla f(\vec{a})\) exists, and \(f(\vec{x})\) and \(L_{\vec{a}}(\vec{x})\) agree to first order at \(\vec{a}\text{;}\) that is, if
\begin{equation*} \lim_{\vec{x}\to \vec{a}}\frac{\lvert f(\vec{x})-L_{\vec{a}}(\vec{x})\rvert}{\lVert \vec{x}-\vec{a}\rVert} = \lim_{\vec{x}\to\vec{a}}\frac{\lvert f(\vec{x})-f(\vec{a})-\nabla f(\vec{a})\cdot (\vec{x}-\vec{a})\rvert}{\lVert\vec{x} - \vec{a}\rVert} = 0\text{.} \end{equation*}
What this definition says is that the linearization \(L_{\vec{a}}(\vec{x})\) is a good linear approximation to \(f\) at \(\vec{a}\text{.}\) In fact, it’s the only (and hence, best) linear approximation: if a linear approximation exists, it has to be \(L_{\vec{a}}(\vec{x})\text{.}\)
If you want to see why this has to be true, recall that since the above limit exists, we have to be able to evaluate it along any path we like. Suppose we choose the path
\begin{equation*} \vec{x} = \langle a_1+h,a_2,\ldots, a_n\rangle\text{.} \end{equation*}
Then \(\vec{x}-\vec{a} = \langle h,0,\ldots, 0\rangle = h\vec{i}\text{,}\) and our definition becomes:
\begin{equation*} \lim_{h\to 0}\left\lvert\frac{f(a_1+h,a_2,\ldots, a_n)-f(a_1,a_2,\ldots, a_n)}{h} - \frac{\partial f}{\partial x_1}(a_1,a_2,\ldots, a_n)\right\rvert = 0\text{,} \end{equation*}
which is just another way of stating the definition of the partial derivative with respect to \(x_1\text{.}\) Of course, approaching along any of the other coordinate directions will similarly produce the other partial derivatives.
Recall that in one variable, the derivative is often written instead in terms of \(h=x-a\text{,}\) so that
\begin{equation*} \fp(a) = \lim_{h\to 0}\frac{f(a+h)-f(a)}{h}\text{.} \end{equation*}
In more than one variable, we can define \(h_i = x_i-a_i\text{,}\) for \(i=1,\ldots, n\text{,}\) or the corresponding vector \(\vec{h} = \vec{x}-\vec{a}\text{.}\) The definition of differentiability then can be written as
\begin{equation} \lim_{\vec{h}\to \mathbf{0}}\frac{\lvert f(\vec{a}+\vec{h})-f(\vec{a})-\nabla f(\vec{a})\cdot\vec{h}\rvert}{\lVert \vec{h}\rVert} = 0\text{.}\tag{14.6.3} \end{equation}
Note that we want the difference between \(f(\vec{a}+\vec{h})\) and \(L_{\vec{a}}(\vec{h})\) to go to zero faster than \(\lVert \vec{h}\rVert\) goes to zero, and that it only makes sense to divide by the length of \(\vec{h}\text{,}\) since division by a vector (or the corresponding point) is not defined.
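A numerical check of Equation (14.6.3) (with a function and point of our own choosing, and random directions for \(\vec{h}\)) looks like this; the ratio tends to zero however \(\vec{h}\) approaches \(\vec{0}\text{:}\)

```python
import numpy as np

f      = lambda v: v[0]**2 * np.sin(v[1])               # our example f(x, y)
grad_f = lambda v: np.array([2*v[0]*np.sin(v[1]),       # f_x
                             v[0]**2 * np.cos(v[1])])   # f_y

a = np.array([1.0, 0.5])
rng = np.random.default_rng(0)
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    h = t * rng.standard_normal(2)       # a small step in a random direction
    ratio = abs(f(a + h) - f(a) - grad_f(a) @ h) / np.linalg.norm(h)
    print(np.linalg.norm(h), ratio)      # the ratio goes to 0 with ||h||
```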
Let’s return to \(n=2\) and Definition 14.1.3 from Section 14.1. If we write \(\vec{h} = \langle dx, dy\rangle\text{,}\) then \(f(\vec{a}+\vec{h})-f(\vec{a}) = \ddz\text{,}\) and \(\nabla f(\vec{a})\cdot \vec{h} = dz\text{,}\) and Equation (14.6.3) becomes
\begin{equation*} \lim_{\vec{h}\to\vec{0}}\frac{\lvert \ddz-dz\rvert}{\norm{\vec{h}}} = \lim_{\vec{h}\to\vec{0}}\frac{\lvert E_x\,dx+E_y\,dy\rvert}{\norm{\langle dx,dy\rangle}} = 0\text{,} \end{equation*}
which is another way of saying that the error terms \(E_x,E_y\) must vanish as \(dx\) and \(dy\) approach zero. Success! Definition 14.6.3 is indeed a generalization of Definition 14.1.3.
Note that we’ve also generalized Definition 2.1.7 for functions of one variable as well: Equation (14.6.3) becomes
\begin{equation*} \lim_{h\to 0}\left\lvert \frac{f(a+h)-f(a)}{h}-f'(a)\right\rvert = 0\text{,} \end{equation*}
which is just another way of re-writing the usual definition of the derivative. In fact, we’ve also generalized Definition 13.2.10 from Chapter 13 for differentiability of vector-valued functions: all we have to do is write our vector-valued function as a column matrix.
For
\begin{equation*} \vec{r}(t) = \begin{bmatrix}x_1(t)\\x_2(t)\\\vdots \\x_m(t)\end{bmatrix} \quad \text{ and } \quad \vrp(t) = \begin{bmatrix}x_1\primeskip'(t)\\x_2\primeskip'(t)\\\vdots \\x_m\primeskip'(t)\end{bmatrix}\text{,} \end{equation*}
we have
\begin{equation*} \lim_{h\to 0}\left\lVert \frac{1}{h}(\vec{r}(a+h)-\vec{r}(a))- \vrp(a)\right\rVert = 0\text{,} \end{equation*}
which again reproduces the definition of \(\vrp(a)\text{.}\)
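The same kind of check works numerically for a vector-valued function of one variable (the curve below is our own example, not one from the text): the norm in the displayed limit shrinks with \(h\text{.}\)

```python
import numpy as np

r  = lambda t: np.array([np.cos(t), np.sin(t), t**2])      # our example curve in R^3
rp = lambda t: np.array([-np.sin(t), np.cos(t), 2*t])      # its derivative

a = 1.0
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(h, np.linalg.norm((r(a + h) - r(a)) / h - rp(a)))   # tends to 0
```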
One of the results we learn in Calculus I is that differentiability implies continuity. The situation is no different in general, and with our new definition of differentiability, an easy proof is possible.

Theorem 14.6.6. Differentiability implies continuity.

Let \(D\subseteq \mathbb{R}^n\) be an open set, and suppose \(f:D\to\mathbb{R}\) is differentiable at \(\vec{a}\in D\text{.}\) Then \(f\) is continuous at \(\vec{a}\text{;}\) that is, \(\displaystyle \lim_{\vec{x}\to\vec{a}}f(\vec{x}) = f(\vec{a})\text{.}\)

Proof.

Suppose that \(f\) is differentiable at \(\vec{a}\text{.}\) Then we know that
\begin{equation*} \lim_{\vec{x}\to \vec{a}}\frac{f(\vec{x})-L_{\vec{a}}(\vec{x})}{\lVert \vec{x}-\vec{a}\rVert} = \lim_{\vec{x}\to\vec{a}}\frac{f(\vec{x})-f(\vec{a})-\nabla f(\vec{a})\cdot (\vec{x}-\vec{a})}{\lVert\vec{x} - \vec{a}\rVert} = 0\text{.} \end{equation*}
By the definition of continuity, we need to show that \(\displaystyle \lim_{\vec{x}\to\vec{a}}f(\vec{x}) = f(\vec{a})\text{.}\) We have that
\begin{align*} f(\vec{x}) \amp = f(\vec{a}) + (f(\vec{x})-f(\vec{a}))\\ \amp = f(\vec{a}) + \left(f(\vec{x}) - f(\vec{a}) - \nabla f(\vec{a})\cdot (\vec{x}-\vec{a})\right) + \nabla f(\vec{a})\cdot (\vec{x}-\vec{a})\\ \amp = f(\vec{a}) +\left(\frac{f(\vec{x})-f(\vec{a})-\nabla f(\vec{a})\cdot (\vec{x}-\vec{a})}{\lVert\vec{x}-\vec{a}\rVert}\right)(\lVert\vec{x}-\vec{a}\rVert) + \nabla f(\vec{a})\cdot (\vec{x}-\vec{a})\text{.} \end{align*}
Thus, taking limits of the above as \(\vec{x}\to\vec{a}\text{,}\) we find \(\displaystyle \lim_{\vec{x}\to\vec{a}}f(\vec{x}) = f(\vec{a})\text{,}\) since the first term is a constant (\(f(\vec{a})\)), the second is the product of two terms that both go to zero (the first term is zero by the definition of differentiability, and clearly \(\lim_{\vec{x}\to\vec{a}}\lVert\vec{x}-\vec{a}\rVert = 0\)), and the last term vanishes since it’s linear (and thus continuous) in \(\vec{x}\text{,}\) and so, by direct substitution,
\begin{equation*} \lim_{\vec{x}\to\vec{a}}\nabla f(\vec{a})\cdot(\vec{x}-\vec{a}) = \nabla f(\vec{a})\cdot(\vec{a}-\vec{a}) = 0\text{.} \end{equation*}

Subsection 14.6.3 Vector-valued functions of several variables

Let us now consider Definition 14.6.3 for general functions \(f:D\subseteq \mathbb{R}^n\to \mathbb{R}^m\text{.}\) If \(f\) is differentiable at \(\vec{a}\text{,}\) then we must have
\begin{equation*} \lim_{\vec{x}\to\vec{a}}\frac{\norm{f(\vec{x})-\ell(\vec{x})}}{\norm{\vec{x}-\vec{a}}} = 0 \end{equation*}
for some linear function \(\ell(\vec{x})\text{.}\) Moreover, we’ll see below that (a) the matrix \(M\) is uniquely defined, and (b) \(M\) is deserving of the title of “the” derivative of \(f\text{.}\)
We saw in Equation (14.6.2) above that \(\ell\) must have the form of a linear approximation:
\begin{equation*} \ell(\vec{x})=L_{\vec{a}}(\vec{x}) = f(\vec{a})+M\cdot (\vec{x}-\vec{a})\text{.} \end{equation*}
Let’s compare again to the one variable case: \(L_a(x)=f(a)+f'(a)(x-a)\text{.}\) With this in mind, the matrix \(M\text{,}\) whatever it is, certainly seems to play the role of the derivative for general functions from \(\mathbb{R}^n\) to \(\mathbb{R}^m\text{.}\) It remains to determine the matrix \(M\text{,}\) and see that there can only be one possibility. To that end, let us write
\begin{equation*} M = \begin{bmatrix} c_{11} \amp c_{12} \amp \cdots \amp c_{1n}\\ c_{21} \amp c_{22} \amp \cdots \amp c_{2n}\\ \vdots \amp \vdots \amp \ddots \amp \vdots\\ c_{m1} \amp c_{m2} \amp \cdots \amp c_{mn}\end{bmatrix}\text{,} \end{equation*}
and consider what happens when we let \(\vec{x}\to\vec{a}\) along different paths.
If we consider the path \(x_1 = a_1+t, x_2=a_2, \ldots, x_n=a_n\) (that is, varying \(x_1\) while holding the other variables constant) then
\begin{equation*} \vec{x} - \vec{a} = \langle a_1+t,a_2,\ldots, a_n\rangle - \langle a_1,a_2,\ldots ,a_n\rangle = \langle t, 0, \ldots, 0\rangle\text{,} \end{equation*}
so \(M\cdot (\vec{x}-\vec{a})\) gives us \(t\) times the first column of \(M\text{,}\) since for each row of \(M\text{,}\) the first entry is multiplied by \(t\text{,}\) and the remaining entries are multiplied by zero. Thus,
\begin{equation*} M\cdot (\vec{x}-\vec{a}) = t\,\langle c_{11}, c_{21}, \ldots, c_{m1}\rangle \end{equation*}
along this path.
Now consider the limit in Definition 14.6.3 as \(t\to 0\) along this path. Since \(\norm{\vec{x}-\vec{a}} = \lvert t\rvert\text{,}\) and dividing by \(\lvert t\rvert\) rather than \(t\) changes at most the sign of the vector inside the norm, the condition becomes
\begin{equation*} \lim_{t\to 0}\left\lVert\frac{f(a_1+t,a_2,\ldots, a_n)-f(a_1,a_2,\ldots, a_n)}{t} - \langle c_{11}, c_{21}, \ldots, c_{m1}\rangle\right\rVert = 0\text{.} \end{equation*}
Since \(\langle c_{11}, c_{21}, \ldots, c_{m1}\rangle\) is a constant vector, from differentiability of \(f\text{,}\) together with Definition 14.6.3, we get
\begin{equation*} \lim_{t\to 0}\frac{f(a_1+t,a_2,\ldots, a_n)-f(a_1,a_2,\ldots, a_n)}{t} = \langle c_{11}, c_{21}, \ldots, c_{m1}\rangle\text{.} \end{equation*}
But this limit on the left is just the partial derivative of \(f\) with respect to \(x_1\text{!}\) If we write \(f(\vec{x}) = \langle f_1(\vec{x}),f_2(\vec{x}),\ldots, f_m(\vec{x})\rangle\text{,}\) then we have
\begin{equation*} \lim_{t\to 0}\frac{f(a_1+t,a_2,\ldots, a_n)-f(a_1,a_2,\ldots, a_n)}{t} = \left\langle\frac{\partial f_1}{\partial x_1}(\vec{a}), \frac{\partial f_2}{\partial x_1}(\vec{a}), \ldots, \frac{\partial f_m}{\partial x_1}(\vec{a})\right\rangle\text{,} \end{equation*}
and this gives us the first column of \(M\text{!}\) Repeating this for each variable, we see that the matrix \(M\) is exactly the matrix of all the partial derivatives of \(f\text{.}\) This matrix is important enough to have a name:

Definition 14.6.7. The Jacobian matrix of a differentiable function.

Let \(D\subseteq \mathbb{R}^n\) be an open subset, and let \(f:D\to \mathbb{R}^m\) be a differentiable function. At any point \(\vec{a}\in D\text{,}\) the Jacobian matrix of \(f\) at \(\vec{a}\text{,}\) denoted \(Df(\vec{a})\text{,}\) is the \(m\times n\) matrix defined by
\begin{equation*} Df(\vec{a}) = \begin{bmatrix}\frac{\partial f_1}{\partial x_1} \amp \cdots \amp \frac{\partial f_1}{\partial x_n}\\ \vdots \amp \ddots \amp \vdots \\ \frac{\partial f_m}{\partial x_1} \amp \cdots \amp \frac{\partial f_m}{\partial x_n} \end{bmatrix}\text{.} \end{equation*}
The linear transformation \(T_{f,\vec{a}}:\mathbb{R}^n\to \mathbb{R}^m\) defined by \(T_{f,\vec{a}}(\vec{x})=Df(\vec{a})\cdot \vec{x}\) is defined to be the derivative of \(f\) at \(\vec{a}\text{.}\)
Notice that if \(f\) is differentiable, the Jacobian matrix is the only matrix that can fit the definition: the fact that the limit must be zero along a path parallel to one of the coordinate axes forces the matrix \(M\) to contain the partial derivatives of \(f\text{.}\)
In particular, note that for a function \(f:\mathbb{R}^n\to \mathbb{R}\text{,}\) we recover the gradient vector. Technically, the derivative in this sense is a row vector (some might say dual vector), not a column vector. Note that multiplying a row vector by a column vector is the same as taking the dot product of two column vectors.
This definition also accounts for parametric curves, viewed as vector-valued functions of one variable. If \(\mathbf{r}:\mathbb{R}\to \mathbb{R}^n\) defines a parametric curve, then the derivative \(\mathbf{r}'(t) = \begin{bmatrix}x_1'(t)\\x_2'(t)\\\vdots \\x_n'(t)\end{bmatrix}\) as introduced in Chapter 13 is the same as the one obtained using this definition.
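In practice the Jacobian matrix can be approximated exactly as the derivation above suggests: perturb one variable at a time and difference. The sketch below (finite differences, with a test function of our own choosing) builds \(Df(\vec{a})\) column by column and compares it with the exact matrix of partial derivatives.

```python
import numpy as np

def f(v):                                   # our test function f: R^3 -> R^2
    x, y, z = v
    return np.array([x*y + z, np.sin(x) * z])

def numerical_jacobian(f, a, eps=1e-6):
    """Approximate Df(a) one column at a time by forward differences."""
    m, n = len(f(a)), len(a)
    J = np.zeros((m, n))
    for j in range(n):                      # vary x_j, hold the other variables fixed
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (f(a + e) - f(a)) / eps   # column j holds the partials w.r.t. x_j
    return J

a = np.array([1.0, 2.0, 3.0])
exact = np.array([[2.0, 1.0, 1.0],                      # [y, x, 1] at a
                  [3*np.cos(1.0), 0.0, np.sin(1.0)]])   # [z cos(x), 0, sin(x)] at a
print(numerical_jacobian(f, a))
print(exact)   # the two matrices agree to about 1e-6
```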

Subsection 14.6.4 The General Chain Rule

One of the big advantages of representing the derivative of a function of several variables in terms of its Jacobian matrix is that the Chain Rule becomes completely transparent. Arguably, the version of the Chain Rule we’re about to present is even more intuitive than the single-variable version!
Recall that the Chain Rule is all about derivatives of composite functions. In one variable, given \(h=f\circ g\text{,}\) if \(b=g(a)\text{,}\) we have
\begin{equation*} h'(a) = f'(g(a))g'(a) = f'(b)g'(a)\text{.} \end{equation*}
The derivative of the composition is the product of the derivatives of the functions being composed, as long as we take care to evaluate them at the appropriate points.
In Section 14.2 we saw that in several variables, the Chain Rule comes in various flavours, depending on the number of variables involved in each function being composed. If we think of derivatives in terms of the Jacobian matrix, then each of these flavours says exactly the same thing as the original Chain Rule above!

Theorem 14.6.8. The General Chain Rule.

Let \(g:D\subseteq\mathbb{R}^n\to\mathbb{R}^m\) be differentiable at \(\vec{a}\in D\text{,}\) and let \(f:U\subseteq\mathbb{R}^m\to\mathbb{R}^p\) be differentiable at \(\vec{b}=g(\vec{a})\in U\text{.}\) Then the composite function \(h=f\circ g\) is differentiable at \(\vec{a}\text{,}\) and
\begin{equation*} Dh(\vec{a}) = Df(g(\vec{a}))\,Dg(\vec{a})\text{,} \end{equation*}
where the product on the right-hand side is the product of the two Jacobian matrices.

This is a remarkable result. Let's unpack it in a couple of examples.

Example 14.6.9. Applying the general chain rule.

Let \(f:U\subseteq \mathbb{R}^3\to\mathbb{R}\) be a differentiable function of three variables, and let \(\vec{r}(t) = \la x(t),y(t),z(t)\ra\) be a vector-valued function of one variable. Use Theorem 14.6.8 to determine a formula for the derivative of \(h(t) = f(\vec{r}(t))\text{.}\)
Solution.
We already know what this derivative should look like from Section 14.2. The point is to confirm that this is a special case of Theorem 14.6.8. The Jacobian matrix of \(f\) is a \(1\times 3\) matrix and the Jacobian matrix of \(\vec{r}\) is a \(3\times 1\) matrix. They are given, respectively, by
\begin{equation*} Df(\vec{x}) = \begin{bmatrix} f_x(\vec{x}) \amp f_y(\vec{x}) \amp f_z(\vec{x}) \end{bmatrix} \quad \text{ and } \quad D\vec{r}(t) = \begin{bmatrix}x'(t)\\y'(t)\\z'(t)\end{bmatrix}\text{.} \end{equation*}
Theorem 14.6.8 then gives us
\begin{equation*} h'(t) = Df(\vec{r}(t))D\vec{r}(t) = f_x(\vec{r}(t))x'(t)+f_y(\vec{r}(t))y'(t)+f_z(\vec{r}(t))z'(t)\text{,} \end{equation*}
as before. Of course, in this context we usually write \(Df(\vec{x})\) as \(\nabla f(\vec{x})\) and \(D\vec{r}(t)\) as \(\vrp(t)\text{,}\) and instead of a matrix product, we write a dot product. But this is simply a shift in notation — the quantities involved are no different than before.
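A brief numerical confirmation (with \(f\) and \(\vec{r}\) chosen by us) of the formula in this example: the matrix (or dot) product agrees with a direct difference quotient for \(h'(t)\text{.}\)

```python
import numpy as np

f      = lambda v: v[0]*v[1] + v[2]**2                   # our f(x, y, z)
grad_f = lambda v: np.array([v[1], v[0], 2*v[2]])        # its gradient
r      = lambda t: np.array([np.cos(t), np.sin(t), t])   # our curve r(t)
rp     = lambda t: np.array([-np.sin(t), np.cos(t), 1.0])

h = lambda t: f(r(t))
t0, eps = 0.7, 1e-6
chain  = grad_f(r(t0)) @ rp(t0)                  # Df(r(t0)) D r(t0), written as a dot product
direct = (h(t0 + eps) - h(t0 - eps)) / (2*eps)   # centred difference estimate of h'(t0)
print(chain, direct)                             # the two values agree
```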

Example 14.6.10. Applying the general chain rule.

Let \(f:U\subseteq \mathbb{R}^2\to\mathbb{R}\) be a function of 2 variables, and let \(g:V\subseteq \mathbb{R}^2\to\mathbb{R}^2\) be given by
\begin{equation*} g(u,v)=(x(u,v),y(u,v))\text{.} \end{equation*}
Given \(h = f\circ g\text{,}\) use Theorem 14.6.8 to determine \(h_u\) and \(h_v\text{.}\)
Solution.
First we compute the Jacobian matrices for \(f\) and \(g\text{.}\) We have
\begin{equation*} Df(x,y) = \begin{bmatrix}f_x(x,y) \amp f_y(x,y)\end{bmatrix} \quad \text{ and } \quad Dg(u,v) = \begin{bmatrix}x_u(u,v) \amp x_v(u,v)\\y_u(u,v) \amp y_v(u,v)\end{bmatrix}\text{.} \end{equation*}
The Chain Rule then gives
\begin{align*} Dh(u,v) \amp= \begin{bmatrix} h_u(u,v) \amp h_v(u,v) \end{bmatrix} = Df(g(u,v))Dg(u,v)\\ \amp=\begin{bmatrix}f_x(g(u,v)) \amp f_y(g(u,v))\end{bmatrix}\begin{bmatrix}x_u(u,v) \amp x_v(u,v)\\y_u(u,v) \amp y_v(u,v)\end{bmatrix}\\ \amp = \begin{bmatrix}f_x(g(u,v))x_u(u,v)+f_y(g(u,v))y_u(u,v) \amp f_x(g(u,v))x_v(u,v)+f_y(g(u,v))y_v(u,v)\end{bmatrix}\text{.} \end{align*}
Equating entries of the first and last matrices, we have, in Leibniz notation,
\begin{align*} \frac{\partial h}{\partial u} \amp = \frac{\partial f}{\partial x}\frac{\partial x}{\partial u}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial u}\\ \frac{\partial h}{\partial v} \amp = \frac{\partial f}{\partial x}\frac{\partial x}{\partial v}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial v}\text{.} \end{align*}
Again, this reproduces another instance of the Chain Rule from Section 14.2.
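Here is a matrix-product version of the same computation (with \(f\) and \(g\) of our own choosing): the \(1\times 2\) row \(Df(g(u,v))\) times the \(2\times 2\) matrix \(Dg(u,v)\) reproduces \(h_u\) and \(h_v\text{.}\)

```python
import numpy as np

f  = lambda x, y: x**2 + x*y                   # our f(x, y)
Df = lambda x, y: np.array([[2*x + y, x]])     # its 1x2 Jacobian (the gradient as a row)

g  = lambda u, v: (u*v, u + v**2)              # our g(u, v) = (x(u,v), y(u,v))
Dg = lambda u, v: np.array([[v,   u  ],        # [x_u, x_v]
                            [1.0, 2*v]])       # [y_u, y_v]

h = lambda u, v: f(*g(u, v))
u0, v0, eps = 1.5, 0.5, 1e-6

chain = Df(*g(u0, v0)) @ Dg(u0, v0)            # [h_u, h_v] via the Chain Rule
h_u = (h(u0 + eps, v0) - h(u0 - eps, v0)) / (2*eps)
h_v = (h(u0, v0 + eps) - h(u0, v0 - eps)) / (2*eps)
print(chain, h_u, h_v)                         # the entries match the difference quotients
```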
With additional experimentation, you will find that every instance of the Chain Rule you have previously encountered can be interpreted as a special case of Theorem 14.6.8. Moreover, a slight shift in interpretation makes this version of the Chain Rule even more obvious! (There’s another detour coming, but stick with us.)
Let us digress briefly and discuss the progression of mathematics from Calculus to higher math. If you continue on to upper-level undergraduate mathematics, you will encounter courses in Analysis and Topology. Analysis deals with the theoretical underpinnings of Calculus: this is where you see all the careful proofs of theorems that have been omitted from this text. Topology is a further abstraction of Analysis. In Topology, one studies continuity (and its consequences) at its most fundamental, abstract level.
The corresponding successors to Calculus in several variables are known as differential geometry and differential topology. You probably won’t encounter these unless you continue on to graduate studies in mathematics. One of the core philosophies in these two (closely related) subjects is the following:
Functions map points. Derivatives map tangent vectors.
This can be understood in our context. At any point \(\vec{a}\) in \(\mathbb{R}^n\text{,}\) we can attach a copy of the vector space \(\mathbb{R}^n\text{,}\) thought of as all the possible tangent vectors to curves passing through that point.
Let \(\vec{r}:(a,b)\to \mathbb{R}^n\) be such a curve, and let \(f:\mathbb{R}^n\to \mathbb{R}^m\) be a differentiable function. The composite function \(\vec{s}=f\circ \vec{r}\) is then a curve in \(\mathbb{R}^m\text{.}\) The point \(\vec{a} = \vec{r}(t_0)\) on our first curve in \(\mathbb{R}^n\) becomes a point
\begin{equation*} \vec{b} = f(\vec{a}) = f(\vec{r}(t_0)) = \vec{s}(t_0) \end{equation*}
on our new curve in \(\mathbb{R}^m\text{.}\) What about tangent vectors?
At the point \(\vec{a}\text{,}\) we have the tangent vector \(\vec{v} = \vrp(t_0)\text{.}\) What is the tangent vector to \(\vec{s}(t)\) at the point \(\vec{b}\text{?}\) On the one hand, by definition, we have the tangent vector
\begin{equation*} \vec{w} = \vec{s}\primeskip ' (t_0)\text{.} \end{equation*}
On the other hand, the Chain Rule gives us
\begin{equation*} \vec{s}\primeskip ' (t_0) = (f\circ \vec{r})\primeskip '(t_0) = Df(\vec{r}(t_0))\vrp(t_0)\text{.} \end{equation*}
But \(\vrp(t_0)=\vec{v}\text{,}\) so we have
\begin{equation*} \vec{w} = Df(\vec{a})\cdot \vec{v}\text{.} \end{equation*}
Multiplying the original tangent vector by the derivative of \(f\) gives us the new tangent vector. Cool!
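The following sketch (with a map \(f:\mathbb{R}^2\to\mathbb{R}^2\) and a curve chosen by us) makes this concrete: multiplying the tangent vector \(\vec{v}=\vrp(t_0)\) by \(Df(\vec{a})\) gives the tangent vector of the image curve \(\vec{s}=f\circ\vec{r}\text{.}\)

```python
import numpy as np

f  = lambda p: np.array([p[0]**2 - p[1], p[0]*p[1]])   # our map f: R^2 -> R^2
Df = lambda p: np.array([[2*p[0], -1.0],               # its Jacobian matrix
                         [p[1],   p[0]]])

r  = lambda t: np.array([np.cos(t), np.sin(t)])        # a curve through a = r(t0)
rp = lambda t: np.array([-np.sin(t), np.cos(t)])

t0, eps = 0.3, 1e-6
a, v = r(t0), rp(t0)
s = lambda t: f(r(t))                                  # the image curve in R^2

w_matrix = Df(a) @ v                                   # Df(a)·v: the pushed-forward tangent vector
w_direct = (s(t0 + eps) - s(t0 - eps)) / (2*eps)       # s'(t0) by centred differences
print(w_matrix, w_direct)                              # the same tangent vector
```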
What’s more, we can view this as a linear transformation. Let \(V\) denote the vector space of all tangent vectors at the point \(\vec{a}\) in \(\mathbb{R}^n\) (this is just a copy of \(\mathbb{R}^n\)) and let \(W\) denote the space of all tangent vectors in \(\mathbb{R}^m\) at the point \(\vec{b}\text{.}\) Then we have the linear transformation \(T:V\to W\) given by
\begin{equation*} T(\vec{v}) = Df(\vec{a})\cdot \vec{v}\text{.} \end{equation*}
In more advanced Calculus, or Differential Geometry, we view this linear transformation as the derivative of \(f\) at \(\vec{a}\text{.}\) Now, recall from Linear Algebra that matrix multiplication corresponds to the composition of the corresponding linear transformations: if \(S(\vec{v}) = A\vec{v}\) and \(T(\vec{w}) = B\vec{w}\text{,}\) and the matrices \(A\) and \(B\) are of the appropriate sizes, then
\begin{equation*} S\circ T(\vec{w}) = S(T(\vec{w}))= A(B\vec{w}) = (AB)\vec{w}\text{.} \end{equation*}
Suppose we have differentiable functions \(f:\mathbb{R}^n\to \mathbb{R}^m\) and \(g:\mathbb{R}^m\to \mathbb{R}^p\text{.}\) Let \(T_f:\mathbb{R}^n\to \mathbb{R}^m\) be the linear function given by the derivative of \(f\text{,}\) and let \(T_g:\mathbb{R}^m\to \mathbb{R}^p\) be the linear function given by the derivative of \(g\text{.}\) The chain rule is then essentially telling us that the derivative of a composition is the composition of the derivatives: we have
\begin{equation*} T_g\circ T_f(\vec{v}) = T_g(T_f(\vec{v})) = D g(\vec{y})(D f(\vec{x})\vec{v}) = (D g(f(\vec{x}))D f(\vec{x}))\vec{v} = T_{g\circ f}(\vec{v})\text{,} \end{equation*}
where \(\vec{y}=f(\vec{x})\text{.}\)
In other words, given the composition of functions
\begin{equation*} \mathbb{R}^n \xrightarrow{\;f\;} \mathbb{R}^m \xrightarrow{\;g\;} \mathbb{R}^p\text{,} \end{equation*}
we have the corresponding composition of derivatives
\begin{equation*} \mathbb{R}^n \xrightarrow{\;T_f\;} \mathbb{R}^m \xrightarrow{\;T_g\;} \mathbb{R}^p\text{.} \end{equation*}
(But beware of the dual usage of \(\mathbb{R}^n\) here. In the first composition, we're thinking of it as a set of points in the domain of a function. In the second composition, we're thinking of it as the set of tangent vectors at a point.)
This turns out to be an extremely powerful way of looking at derivatives and the Chain Rule. You may want to keep this in mind in later sections, such as when we consider change of variables in multiple integrals at the end of Chapter 15, and when we define integrals over curves and surfaces in Chapter 16. We won’t use this language when we get there, but many of the results in those sections (for example, the formula for surface area of a parametric surface) can be understood according to the two principles we have just seen: functions map points, while derivatives map tangent vectors, and the derivative of a composition is the composition of the derivatives.