Tensor Analysis: Differential and Integral Calculus with Tensors
"We do not fuss over smoothness assumptions: functions and boundaries of regions are presumed to have continuity and differentiability properties sufficient to make meaningful the underlying analysis..."
Morton Gurtin, et al.
MetaData
1. The first chapter provided the necessary background to study tensors. Chapter 2 explored the properties of tensors.
2. This third chapter begins the applications by introducing the concept of differentiation for objects larger than scalars.
3. The gradient of a scalar, vector or tensor is connected to the extension of the concept of the directional derivative called the Gateaux differential.
4. The complexity in differentiating large objects comes, not from the objects themselves, but from the domain of differentiation. Differentiation rules with respect to scalar arguments follow closely the similar laws for scalar fields.
5. The integral theorems of Stokes, Green and Gauss are introduced with computational examples.
6. Orthogonal curvilinear systems are dealt with in the addendum. Important results and the use of Computer Algebra are given.
In the calculus we are embarking upon, there will be tensor-valued functions and tensor domains. In the most complex cases, the domains and function values are both tensors. Most common is the case where the domain itself is the three-dimensional Euclidean point space. Even though this space is closely tied to the vector space containing the position vectors, its points are not themselves tensors. Tensor-valued functions defined on such domains are called tensor fields.
Differentiation & Large Objects
Differentiating with Respect to Scalar Arguments
We are already familiar with the techniques of differentiation of simple objects like scalars with respect to other scalars. These scalars are defined on scalar domains. In the simplest case, $x, h \in \mathbb{R}$, the derivative $f'(x)$ of the function $f:\mathbb{R}\to\mathbb{R}$ is defined as

$$f'(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}.$$
Things become more complex when we handle tensors. A second-order tensor, as we have seen, when expressed in component form, contains nine scalars. A vector (a first-order tensor), as we know, is made of three scalar components.
The complication does not actually arise from the size of the objects themselves. In fact, the differentiation of tensor objects with respect to scalar domains, with some adjustments, basically conforms to the same rules as the above differentiation of scalars.

Division of a tensor by a scalar is accomplished by multiplying the tensor by the inverse of the scalar. This operation is defined in all the vector spaces to which our vectors and tensors belong. Consequently, the derivative of the tensor $\mathbf{T}(t)$ with respect to a scalar argument, such as time, can be defined as

$$\frac{d}{dt}\mathbf{T}(t) = \lim_{h\to 0}\frac{\mathbf{T}(t+h)-\mathbf{T}(t)}{h} \equiv \lim_{h\to 0}\frac{1}{h}\big(\mathbf{T}(t+h)-\mathbf{T}(t)\big).$$
If $\alpha(t)\in\mathbb{R}$ and the tensor $\mathbf{T}(t)\in\mathbb{T}$ are both functions of time $t\in\mathbb{R}$, we find

$$\begin{aligned}
\frac{d}{dt}(\alpha\mathbf{T}) &= \lim_{h\to 0}\frac{\alpha(t+h)\mathbf{T}(t+h)-\alpha(t)\mathbf{T}(t)}{h}\\
&= \lim_{h\to 0}\frac{\alpha(t+h)\mathbf{T}(t+h)-\alpha(t)\mathbf{T}(t+h)+\alpha(t)\mathbf{T}(t+h)-\alpha(t)\mathbf{T}(t)}{h}\\
&= \lim_{h\to 0}\frac{\alpha(t+h)\mathbf{T}(t+h)-\alpha(t)\mathbf{T}(t+h)}{h}+\lim_{h\to 0}\frac{\alpha(t)\mathbf{T}(t+h)-\alpha(t)\mathbf{T}(t)}{h}\\
&= \Big(\lim_{h\to 0}\frac{\alpha(t+h)-\alpha(t)}{h}\Big)\Big(\lim_{h\to 0}\mathbf{T}(t+h)\Big)+\alpha(t)\lim_{h\to 0}\frac{\mathbf{T}(t+h)-\mathbf{T}(t)}{h}\\
&= \alpha\frac{d\mathbf{T}}{dt}+\frac{d\alpha}{dt}\mathbf{T},
\end{aligned}$$

showing that the familiar product rule applies as expected.
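The product rule just derived lends itself to a quick numerical check. The following is a small illustrative sketch (not from the text) using NumPy, with an arbitrarily chosen scalar function $\alpha(t)$ and tensor function $\mathbf{T}(t)$, and central differences for the derivatives:

```python
import numpy as np

def alpha(t):           # an arbitrary smooth scalar function (illustration only)
    return np.sin(t) + 2.0

def T(t):               # an arbitrary smooth tensor (matrix) function
    return np.array([[t, t**2], [np.cos(t), 1.0]])

def ddt(f, t, h=1e-6):  # central-difference derivative of a scalar/matrix function
    return (f(t + h) - f(t - h)) / (2 * h)

t0 = 0.7
lhs = ddt(lambda s: alpha(s) * T(s), t0)                  # d/dt (alpha T)
rhs = alpha(t0) * ddt(T, t0) + ddt(alpha, t0) * T(t0)     # alpha dT/dt + (dalpha/dt) T
print(np.allclose(lhs, rhs, atol=1e-6))                   # True
```

The same `ddt` helper verifies every entry of the rules table below in the same way.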
Proceeding in a similar fashion, for $\alpha(t)\in\mathbb{R}$, $\mathbf{u}(t),\mathbf{v}(t)\in\mathbb{V}$, and $\mathbf{S}(t),\mathbf{T}(t)\in\mathbb{T}$, all functions of a scalar variable $t$, the following results hold as expected.

$$\frac{d}{dt}(\alpha\mathbf{u}) = \alpha\frac{d\mathbf{u}}{dt}+\frac{d\alpha}{dt}\mathbf{u}$$
Each term on the RHS retains the commutative property of multiplication by a scalar.

$$\frac{d}{dt}(\mathbf{u}\cdot\mathbf{v}) = \frac{d\mathbf{u}}{dt}\cdot\mathbf{v}+\mathbf{u}\cdot\frac{d\mathbf{v}}{dt}$$
Each term on the RHS retains the commutative property of the scalar product.

$$\frac{d}{dt}(\mathbf{u}\times\mathbf{v}) = \frac{d\mathbf{u}}{dt}\times\mathbf{v}+\mathbf{u}\times\frac{d\mathbf{v}}{dt}$$
The original product order must be maintained.

$$\frac{d}{dt}(\mathbf{u}\otimes\mathbf{v}) = \frac{d\mathbf{u}}{dt}\otimes\mathbf{v}+\mathbf{u}\otimes\frac{d\mathbf{v}}{dt}$$
The original product order must be maintained.

$$\frac{d}{dt}(\mathbf{S}+\mathbf{T}) = \frac{d\mathbf{S}}{dt}+\frac{d\mathbf{T}}{dt}$$
Sum of tensors.

$$\frac{d}{dt}(\mathbf{S}\mathbf{T}) = \frac{d\mathbf{S}}{dt}\mathbf{T}+\mathbf{S}\frac{d\mathbf{T}}{dt}$$
Product of tensors. Note that we must maintain the order of the product as shown: in general, $\mathbf{S}\dfrac{d\mathbf{T}}{dt}\neq\dfrac{d\mathbf{T}}{dt}\mathbf{S}$.

$$\frac{d}{dt}(\mathbf{S}:\mathbf{T}) = \frac{d\mathbf{S}}{dt}:\mathbf{T}+\mathbf{S}:\frac{d\mathbf{T}}{dt}$$
Scalar product of tensors. The order is not important here: $\mathbf{S}:\dfrac{d\mathbf{T}}{dt}=\dfrac{d\mathbf{T}}{dt}:\mathbf{S}$.

$$\frac{d}{dt}(\alpha\mathbf{T}) = \alpha\frac{d\mathbf{T}}{dt}+\frac{d\alpha}{dt}\mathbf{T}$$
Multiplication of a tensor by a scalar. Order is not important in multiplication by a scalar.

$$\frac{d}{dt}(\mathbf{T}\mathbf{u}) = \frac{d\mathbf{T}}{dt}\mathbf{u}+\mathbf{T}\frac{d\mathbf{u}}{dt}$$
The original operation order must be maintained in each term of the derivatives.

$$\frac{d}{dt}\mathbf{T}^{\mathsf T} = \Big(\frac{d\mathbf{T}}{dt}\Big)^{\mathsf T}$$
The derivative of a transpose is the transpose of the derivative.
Scalar Argument: Time Derivatives of Tensors
1. Consequences of Differentiating the Identity Tensor
Differentiating a tensor with respect to time, or any other scalar parameter, follows the expected pattern with little modification. Here are some further results.
The Inverse. Certain important results come from the obvious fact that the identity tensor is constant, so that

$$\frac{d\mathbf{I}}{dt} = \mathbf{O}.$$
We recognize the fact that the derivative of a tensor with respect to a scalar must give a tensor; the value here is the annihilator (zero) tensor $\mathbf{O}$. For any invertible tensor-valued scalar function $\mathbf{T}(t)$, we differentiate the equation $\mathbf{T}^{-1}(t)\,\mathbf{T}(t) = \mathbf{I}$ to obtain

$$\frac{d\mathbf{T}^{-1}}{dt}\mathbf{T}+\mathbf{T}^{-1}\frac{d\mathbf{T}}{dt} = \mathbf{O}
\;\Rightarrow\;\frac{d\mathbf{T}^{-1}}{dt}\mathbf{T} = -\mathbf{T}^{-1}\frac{d\mathbf{T}}{dt}.$$

If we post-multiply both sides by $\mathbf{T}^{-1}$, the following important expression results for the derivative of the inverse tensor with respect to a scalar parameter, in terms of the derivative of the original tensor function:

$$\frac{d\mathbf{T}^{-1}}{dt} = -\mathbf{T}^{-1}\frac{d\mathbf{T}}{dt}\mathbf{T}^{-1}.$$

Conversely,

$$\frac{d\mathbf{T}}{dt} = -\mathbf{T}\frac{d\mathbf{T}^{-1}}{dt}\mathbf{T}.$$
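This formula for the derivative of the inverse can be verified numerically. The sketch below is an illustration only: the invertible tensor function is an arbitrary choice, and the derivative of $\mathbf{T}^{-1}(t)$ is taken by finite differences:

```python
import numpy as np

def T(t):  # an arbitrary, invertible tensor-valued function of t (illustration)
    return np.array([[2 + np.sin(t), t],
                     [0.5 * t, 3 + np.cos(t)]])

def ddt(f, t, h=1e-6):  # central-difference derivative
    return (f(t + h) - f(t - h)) / (2 * h)

t0 = 1.2
Tinv = np.linalg.inv(T(t0))
lhs = ddt(lambda s: np.linalg.inv(T(s)), t0)   # d(T^-1)/dt, numerically
rhs = -Tinv @ ddt(T, t0) @ Tinv                # -T^-1 (dT/dt) T^-1
print(np.allclose(lhs, rhs, atol=1e-5))        # True
```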
Orthogonal Tensors. An orthogonal tensor, as well as its transpose, can each be a function of a scalar parameter. Their product must still equal the identity tensor at any value of the argument, so that $\mathbf{Q}(t)\mathbf{Q}^{\mathsf T}(t) = \mathbf{I}$. One consequence of this relationship is that the tensor-valued function

$$\boldsymbol{\Omega}(t) \equiv \frac{d\mathbf{Q}(t)}{dt}\mathbf{Q}^{\mathsf T}(t)$$

of the same scalar parameter must be skew. This is another result from differentiating the identity tensor: from $\mathbf{Q}\mathbf{Q}^{\mathsf T} = \mathbf{I}$,

$$\frac{d}{dt}(\mathbf{Q}\mathbf{Q}^{\mathsf T}) = \frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathsf T}+\mathbf{Q}\frac{d\mathbf{Q}^{\mathsf T}}{dt} = \frac{d\mathbf{I}}{dt} = \mathbf{O}.$$

Consequently,

$$\frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathsf T} = -\mathbf{Q}\frac{d\mathbf{Q}^{\mathsf T}}{dt} = -\Big(\frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathsf T}\Big)^{\mathsf T},$$

so we have that the tensor $\boldsymbol{\Omega} = \dfrac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathsf T}$ is the negative of its own transpose; hence it is skew.
Angular Velocity. Consider a rigid body fixed at one point $O$, for example the spinning top shown. It is given a rotation $\mathbf{Q}(t)$ from rest so that each point $P$ is at a position vector $\mathbf{r}(t)$ at time $t$, related to the original position $\mathbf{r}_0$ by the equation

$$\mathbf{r}(t) = \mathbf{Q}(t)\mathbf{r}_0.$$

We can find the velocity by differentiating the position vector:

$$\frac{d\mathbf{r}}{dt} = \frac{d\mathbf{Q}}{dt}\mathbf{r}_0 = \frac{d\mathbf{Q}}{dt}\mathbf{Q}^{-1}\mathbf{r}.$$

The rotation is an orthogonal tensor, hence its inverse is its transpose, so that

$$\mathbf{v} = \frac{d\mathbf{r}}{dt} = \frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathsf T}\mathbf{r} = \boldsymbol{\Omega}\mathbf{r}.$$

And $\boldsymbol{\Omega}$, as we have seen above, is a skew tensor (every rotation is a proper orthogonal tensor), hence it is associated with an axial vector $\boldsymbol{\omega}$ with $(\boldsymbol{\omega}\times) = \boldsymbol{\Omega}$. From this fact we can see that every point in the body has the velocity

$$\mathbf{v} = \boldsymbol{\omega}\times\mathbf{r}.$$

The vector $\boldsymbol{\omega}$ defined by this expression, the axial vector of $\dfrac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathsf T}$ where $\mathbf{Q}(t)$ is the rotation function, is called the angular velocity.
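These facts can be illustrated numerically. In the sketch below, the rotation through angle $t$ about the $x_3$ axis is a choice made purely for illustration; $\boldsymbol{\Omega} = \dot{\mathbf{Q}}\mathbf{Q}^{\mathsf T}$ comes out skew, and its axial vector reproduces the velocity as $\boldsymbol{\omega}\times\mathbf{r}$:

```python
import numpy as np

def Q(t):  # rotation through angle t about the x3 axis (illustrative choice)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def ddt(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

t0 = 0.9
W = ddt(Q, t0) @ Q(t0).T                        # Omega = dQ/dt Q^T
skew_ok = np.allclose(W, -W.T, atol=1e-6)       # Omega is skew
omega = np.array([W[2, 1], W[0, 2], W[1, 0]])   # axial vector of Omega

r0 = np.array([1.0, 2.0, 3.0])
v = ddt(lambda s: Q(s) @ r0, t0)                # velocity of r(t) = Q(t) r0
cross_ok = np.allclose(v, np.cross(omega, Q(t0) @ r0), atol=1e-5)
print(skew_ok, cross_ok)                        # True True
```

For this rotation the axial vector is the expected $\boldsymbol{\omega} = (0,0,1)$, the spin rate about $x_3$.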
2. The Magnitude of a Tensor.
We saw in the previous chapter that a tensor belongs to its own Euclidean vector space, which is equipped with a scalar product and, consequently, a scalar magnitude. Consider the magnitude

$$m(t) = \sqrt{\mathbf{T}(t):\mathbf{T}(t)} = \|\mathbf{T}(t)\|,$$

so that $m^2 = \mathbf{T}:\mathbf{T}$. Differentiating this scalar equation, and remembering that the scalar operand here is just a product, we have

$$\frac{d}{dt}m^2 = 2m\frac{dm}{dt} = \frac{d\mathbf{T}}{dt}:\mathbf{T}+\mathbf{T}:\frac{d\mathbf{T}}{dt} = 2\frac{d\mathbf{T}}{dt}:\mathbf{T}.$$

This simplifies to

$$\frac{dm}{dt} = \frac{d}{dt}\|\mathbf{T}(t)\| = \frac{\dfrac{d\mathbf{T}(t)}{dt}:\mathbf{T}(t)}{\|\mathbf{T}(t)\|}.$$
3. Tensor Invariants.
Consider, as before, a tensor $\mathbf{T}(t)$ that varies with a scalar parameter $t$. What are the responses of the three invariants of $\mathbf{T}$? The invariants, as was shown in Chapter 2, are the trace $\operatorname{tr}\mathbf{T}$, the trace of the cofactor $\operatorname{tr}\mathbf{T}^{c}$, and the determinant $\det\mathbf{T}$.

The trace is a linear operator. It follows immediately that

$$\frac{d}{dt}\operatorname{tr}\mathbf{T} = \operatorname{tr}\frac{d\mathbf{T}}{dt},$$

a fact that can also be established easily by observing that

$$\frac{d}{dt}I_1(\mathbf{T}) = \frac{d}{dt}\operatorname{tr}\mathbf{T}
= \frac{\Big[\dfrac{d\mathbf{T}}{dt}\mathbf{a},\mathbf{b},\mathbf{c}\Big]+\Big[\mathbf{a},\dfrac{d\mathbf{T}}{dt}\mathbf{b},\mathbf{c}\Big]+\Big[\mathbf{a},\mathbf{b},\dfrac{d\mathbf{T}}{dt}\mathbf{c}\Big]}{[\mathbf{a},\mathbf{b},\mathbf{c}]}
= \operatorname{tr}\frac{d\mathbf{T}}{dt}.$$

The second invariant is NOT a linear scalar-valued function of its tensor argument. However, we have the expression

$$\mathbf{T}^{c} = \mathbf{T}^{-\mathsf T}\det\mathbf{T}\;\Rightarrow\;\operatorname{tr}\mathbf{T}^{c} = \operatorname{tr}(\mathbf{T}^{-\mathsf T}\det\mathbf{T}).$$

Differentiating with respect to $t$,

$$\frac{d}{dt}\operatorname{tr}\mathbf{T}^{c} = \operatorname{tr}\frac{d}{dt}(\mathbf{T}^{-\mathsf T}\det\mathbf{T}),$$

not a very useful quantity. The derivative of the third invariant with respect to a scalar argument is of momentous importance: it is the basis of Liouville's theorem and is fundamental to the study of continuum flow in general. Just like the second invariant, the third invariant is not a linear function of its tensor argument. From

$$I_3(\mathbf{T}) = \frac{[\mathbf{T}\mathbf{a},\mathbf{T}\mathbf{b},\mathbf{T}\mathbf{c}]}{[\mathbf{a},\mathbf{b},\mathbf{c}]} = \det\mathbf{T},$$

we have $[\mathbf{a},\mathbf{b},\mathbf{c}]\det\mathbf{T} = [\mathbf{T}\mathbf{a},\mathbf{T}\mathbf{b},\mathbf{T}\mathbf{c}]$. Differentiating, we have

$$\begin{aligned}
[\mathbf{a},\mathbf{b},\mathbf{c}]\frac{d}{dt}\det\mathbf{T}
&= \Big[\frac{d\mathbf{T}}{dt}\mathbf{a},\mathbf{T}\mathbf{b},\mathbf{T}\mathbf{c}\Big]+\Big[\mathbf{T}\mathbf{a},\frac{d\mathbf{T}}{dt}\mathbf{b},\mathbf{T}\mathbf{c}\Big]+\Big[\mathbf{T}\mathbf{a},\mathbf{T}\mathbf{b},\frac{d\mathbf{T}}{dt}\mathbf{c}\Big]\\
&= \Big[\frac{d\mathbf{T}}{dt}\mathbf{T}^{-1}\mathbf{T}\mathbf{a},\mathbf{T}\mathbf{b},\mathbf{T}\mathbf{c}\Big]+\Big[\mathbf{T}\mathbf{a},\frac{d\mathbf{T}}{dt}\mathbf{T}^{-1}\mathbf{T}\mathbf{b},\mathbf{T}\mathbf{c}\Big]+\Big[\mathbf{T}\mathbf{a},\mathbf{T}\mathbf{b},\frac{d\mathbf{T}}{dt}\mathbf{T}^{-1}\mathbf{T}\mathbf{c}\Big]\\
&= \operatorname{tr}\Big(\frac{d\mathbf{T}}{dt}\mathbf{T}^{-1}\Big)[\mathbf{T}\mathbf{a},\mathbf{T}\mathbf{b},\mathbf{T}\mathbf{c}],
\end{aligned}$$

so that

$$\frac{d}{dt}\det\mathbf{T} = \operatorname{tr}\Big(\frac{d\mathbf{T}}{dt}\mathbf{T}^{-1}\Big)\det\mathbf{T}.$$
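This determinant-derivative formula can be spot-checked numerically. The sketch below is an illustration only (the tensor function is an arbitrary choice); it compares a finite-difference derivative of $\det\mathbf{T}$ against $\operatorname{tr}\big(\dot{\mathbf{T}}\mathbf{T}^{-1}\big)\det\mathbf{T}$:

```python
import numpy as np

def T(t):  # an arbitrary smooth, invertible tensor function (illustration)
    return np.array([[2 + t, np.sin(t), 0.0],
                     [0.0, 3.0, t**2],
                     [t, 0.0, 4 + np.cos(t)]])

def ddt(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

t0 = 0.5
lhs = ddt(lambda s: np.linalg.det(T(s)), t0)   # d(det T)/dt, numerically
rhs = np.trace(ddt(T, t0) @ np.linalg.inv(T(t0))) * np.linalg.det(T(t0))
print(np.isclose(lhs, rhs, atol=1e-5))         # True
```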
Differentiation: Vector & Tensor Arguments
So far, the domain of differentiation has remained in the real space. The objects we have differentiated were larger in the sense that we dealt with vectors and tensors. The evaluation of derivatives of larger objects presented a small difficulty but remained essentially the same so long as the domain remained real. When the domain of differentiation is itself made up of large objects, the task of differentiation becomes a little more demanding. Such problems are standard in Continuum Mechanics. For example, the energy function is a scalar, yet we can obtain the strains from it by differentiating with respect to the stress; we are dealing there with the differentiation of a scalar function of a tensor, the stress. We may want to compute the velocity gradient from the velocity function; here, we are differentiating a vector field defined on the Euclidean point space $\mathcal{E}$ with respect to the position vectors of the points in $\mathcal{E}$. In these and several other derivatives of interest, the domains are no longer in the simple real space.
The approach to this challenge is twofold:
1. Recognize that the vectors and tensors live in their respective Euclidean VECTOR spaces, where the concept of length is already defined.
2. Use the above to extend the concept of the directional derivative to include the derivative of any object from a given Euclidean space with respect to objects from another.

Such a generalization is the Gateaux differential. Consider a map

$$f: V \to W,$$

which maps from the domain $V$ to $W$, both of which are Euclidean vector spaces. The concepts of limit and continuity carry naturally from the real space to any Euclidean vector space.

Let $\mathbf{v}_0 \in V$ and $\mathbf{w}_0 \in W$. As usual, we can say that the limit

$$\lim_{\mathbf{v}\to\mathbf{v}_0} f(\mathbf{v}) = \mathbf{w}_0$$

holds if, for any preassigned real number $\epsilon>0$, no matter how small, we can always find a real number $\delta>0$ such that $\|f(\mathbf{v})-\mathbf{w}_0\| \le \epsilon$ whenever $\|\mathbf{v}-\mathbf{v}_0\| < \delta$. The function is said to be continuous at $\mathbf{v}_0$ if $f(\mathbf{v}_0)$ exists and $f(\mathbf{v}_0) = \mathbf{w}_0$.
The Gateaux Differential
Specifically, for $\alpha\in\mathbb{R}$, let this map be

$$Df(\mathbf{x},\mathbf{h}) \equiv \lim_{\alpha\to 0}\frac{f(\mathbf{x}+\alpha\mathbf{h})-f(\mathbf{x})}{\alpha} = \frac{d}{d\alpha}f(\mathbf{x}+\alpha\mathbf{h})\Big|_{\alpha=0}.$$

We focus attention on the second variable $\mathbf{h}$ while we allow the dependency on $\mathbf{x}$ to be as general as possible. We shall show that while the above function can be any given function of $\mathbf{x}$ (linear or nonlinear), the above map is always linear in $\mathbf{h}$, irrespective of what kind of Euclidean space we are mapping from or into. It is called the Gateaux differential.
Real Functions in Real Domains. Let us make the Gateaux differential a little more familiar in real space in two steps. First, we move to the real space and allow $\mathbf{h}\to dx$, obtaining

$$DF(x,dx) = \lim_{\alpha\to 0}\frac{F(x+\alpha\,dx)-F(x)}{\alpha} = \frac{d}{d\alpha}F(x+\alpha\,dx)\Big|_{\alpha=0}.$$

Next, letting $\alpha\,dx \to \Delta x$, the middle term becomes

$$\lim_{\Delta x\to 0}\frac{F(x+\Delta x)-F(x)}{\Delta x}\,dx = \frac{dF}{dx}\,dx,$$

from which it is obvious that the Gateaux differential is a generalization of the well-known differential from elementary calculus. The Gateaux differential helps to compute a local linear approximation of any function (linear or nonlinear).
Linearity. It is easily shown that the Gateaux differential is linear in its second argument; that is, for $\alpha\in\mathbb{R}$,

$$Df(\mathbf{x},\alpha\mathbf{h}) = \alpha\,Df(\mathbf{x},\mathbf{h}).$$

Furthermore,

$$\begin{aligned}
Df(\mathbf{x},\mathbf{g}+\mathbf{h}) &= \lim_{\alpha\to 0}\frac{f(\mathbf{x}+\alpha(\mathbf{g}+\mathbf{h}))-f(\mathbf{x})}{\alpha}\\
&= \lim_{\alpha\to 0}\frac{f(\mathbf{x}+\alpha(\mathbf{g}+\mathbf{h}))-f(\mathbf{x}+\alpha\mathbf{g})+f(\mathbf{x}+\alpha\mathbf{g})-f(\mathbf{x})}{\alpha}\\
&= \lim_{\alpha\to 0}\frac{f(\mathbf{y}+\alpha\mathbf{h})-f(\mathbf{y})}{\alpha}+\lim_{\alpha\to 0}\frac{f(\mathbf{x}+\alpha\mathbf{g})-f(\mathbf{x})}{\alpha}\\
&= Df(\mathbf{x},\mathbf{h})+Df(\mathbf{x},\mathbf{g}),
\end{aligned}$$

as the variable $\mathbf{y} \equiv \mathbf{x}+\alpha\mathbf{g} \to \mathbf{x}$ as $\alpha\to 0$. For $\alpha,\beta\in\mathbb{R}$, using similar arguments, we can also show that

$$Df(\mathbf{x},\alpha\mathbf{g}+\beta\mathbf{h}) = \alpha\,Df(\mathbf{x},\mathbf{g})+\beta\,Df(\mathbf{x},\mathbf{h}).$$
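The Gateaux differential and its linearity in the direction argument can be checked numerically. This sketch is illustrative only: the function $f$ and the directions are arbitrary choices, and $Df(\mathbf{x},\mathbf{h})$ is evaluated by central differences in $\alpha$; linearity and agreement with the hand-computed partial derivatives both hold:

```python
import numpy as np

def f(x):  # a scalar-valued function of a vector argument (arbitrary choice)
    return x[0] * np.sin(x[1]) + x[2] ** 2

def gateaux(f, x, h, eps=1e-6):
    # Df(x, h) = d/d(alpha) f(x + alpha h) at alpha = 0, by central differences
    return (f(x + eps * h) - f(x - eps * h)) / (2 * eps)

x = np.array([1.0, 0.5, 2.0])
# partial derivatives of f, worked out by hand
g = np.array([np.sin(x[1]), x[0] * np.cos(x[1]), 2 * x[2]])

h1 = np.array([1.0, -2.0, 0.5])
h2 = np.array([0.3, 0.0, 1.0])
lin_ok = np.isclose(gateaux(f, x, 2.0 * h1 + h2),
                    2.0 * gateaux(f, x, h1) + gateaux(f, x, h2), atol=1e-5)
dir_ok = np.isclose(gateaux(f, x, h1), g @ h1, atol=1e-5)
print(lin_ok, dir_ok)   # True True
```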
Points to Note:
- The Gateaux differential is not unique to the point of evaluation. Rather, at each point $\mathbf{x}$ there is a Gateaux differential for each "vector" $\mathbf{h}$. If the domain is a vector space, then we have a Gateaux differential for each of the infinitely many directions at each point. In two or more dimensions, there are infinitely many Gateaux differentials at each point! Remember, however, that $\mathbf{h}$ may not even be a vector, but a second- or higher-order tensor. It does not matter, as the tensors themselves are in a Euclidean space that defines magnitude and direction as a result of the embedded inner product.
- The Gateaux differential is a one-dimensional calculation along a specified direction $\mathbf{h}$. Because it is one-dimensional, you can use ordinary one-dimensional calculus to compute it. Product rules and other constructs for differentiation in real domains apply.
Gradient or Fréchet Derivatives
A scalar-, vector- or tensor-valued function on a scalar, vector or tensor domain is said to be Fréchet differentiable if a subdomain exists in which we can find $\operatorname{grad} f(\mathbf{x})$ such that

$$(\operatorname{grad} f(\mathbf{x}))\bullet\mathbf{h} = Df(\mathbf{x},\mathbf{h}).$$

This equation defines the gradient of a function in terms of its operating on the domain object to obtain the Gateaux differential.

The nature of $\operatorname{grad} f(\mathbf{x})$, as well as the kind of product "$\bullet$" between the gradient and the differential, depends on the value type of the function and the type of argument involved. In the simple case of a scalar-valued function of a scalar argument, we are back to the regular derivative, as can be seen in the first row of the table, and the product involved is simply the multiplication of two scalars. The Gateaux differential here is the regular differential.

For a scalar-valued function, when the argument type is a vector, we get, for the Fréchet derivative, the familiar gradient operation $\operatorname{grad} f(\mathbf{x})$. The Gateaux differential here is the directional derivative, $(\operatorname{grad} f(\mathbf{x}))\cdot d\mathbf{x}$, in the direction given by the differential $d\mathbf{x}$. Notice two things here:
1. The function value is a scalar;
2. The function differential, $Df(\mathbf{x},d\mathbf{x}) = (\operatorname{grad} f(\mathbf{x}))\cdot d\mathbf{x}$, is also a scalar.

The product between $\operatorname{grad} f(\mathbf{x})$ and the vector differential is a scalar product, so that the gradient of a scalar-valued function of a vector argument is itself a vector. In the table, the product of the Fréchet derivative in column 2 with the argument in column 3 is a scalar; hence the correct product here is the scalar product.
| No | $\operatorname{grad} f(\mathbf{x})$ | Argument | Product $\bullet$ | $f(\mathbf{x})$ | Gateaux-Fréchet Example |
|----|----|----|----|----|----|
| 1 | Scalar | Scalar | Multiplication | Scalar | $DF(x,dx) = \dfrac{dF(x)}{dx}\,dx$ |
| 2 | Vector | Vector | Scalar product | Scalar | $Df(\mathbf{x},d\mathbf{x}) = (\operatorname{grad} f(\mathbf{x}))\cdot d\mathbf{x}$ |
| 3 | Tensor | Tensor | Scalar product | Scalar | $Df(\mathbf{T},d\mathbf{T}) = \dfrac{\partial f(\mathbf{T})}{\partial\mathbf{T}}:d\mathbf{T}$ |
| 4 | Tensor | Vector | Contraction | Vector | $D\mathbf{v}(\mathbf{x},d\mathbf{x}) = (\operatorname{grad}\mathbf{v}(\mathbf{x}))\,d\mathbf{x}$ |
| 5 | Tensor | Scalar | Scalar multiplication | Tensor | $D\mathbf{T}(x,dx) = \dfrac{d\mathbf{T}(x)}{dx}\,dx$ |
| 6 | Tensor (3rd order) | Vector | Contraction | Tensor | $D\mathbf{T}(\mathbf{x},d\mathbf{x}) = (\operatorname{grad}\mathbf{T}(\mathbf{x}))\,d\mathbf{x}$ |
| 7 | Tensor (4th order) | Tensor | Contraction | Tensor | $D\mathbf{f}(\mathbf{T},d\mathbf{T}) = (\operatorname{grad}\mathbf{f}(\mathbf{T}))\,d\mathbf{T}$ |
For a scalar-valued function of a tensor argument (row 3), the Gateaux differential is a scalar. Notice it will necessarily be of the same type as the function value, as it is simply a change in value: the change in a scalar is a scalar, the change in a vector is a vector, and the change in a tensor is a tensor. The gradient here is a second-order tensor. The proper product to recover the scalar value from the product of these tensors is the tensor scalar product. On rows six and seven, the tensor order of the Fréchet derivative is higher than two, as stated.
We will rely on a set of worked examples to demonstrate these.
Scalar-valued Functions of Tensors
Some of the most important functions you will differentiate are scalar-valued functions that take tensor arguments. Here are some examples:
1. Principal invariants of tensors and related functions. These could include invariants of other tensors, such as the deviatoric part of the original tensor, etc.
2. Traces of powers of the tensor; traces of products and transposes, etc. These results are powerful because we are often able to convert scalar-valued functions to sums and products of traces and their powers.
3. Magnitudes of tensors.
(a) Direct Application of the Gateaux Differential
Given any integer $n>0$, consider the scalar-valued tensor function $f(\mathbf{S}) = \operatorname{tr}\mathbf{S}^{n}$. Show that $\dfrac{\partial}{\partial\mathbf{S}}\operatorname{tr}\mathbf{S} = \mathbf{I}$, and that $\dfrac{\partial}{\partial\mathbf{S}}\operatorname{tr}\mathbf{S}^2 = 2\mathbf{S}^{\mathsf T}$.

For $n=1$, compute the Gateaux differential directly:

$$Df(\mathbf{S},d\mathbf{S}) = \frac{d}{d\alpha}f(\mathbf{S}+\alpha\,d\mathbf{S})\Big|_{\alpha=0}
= \frac{d}{d\alpha}\operatorname{tr}(\mathbf{S}+\alpha\,d\mathbf{S})\Big|_{\alpha=0}
= \operatorname{tr}(d\mathbf{S}) = \mathbf{I}:d\mathbf{S} = \frac{\partial f(\mathbf{S})}{\partial\mathbf{S}}:d\mathbf{S}.$$

So we have found the function that multiplies the differential argument to give us the Gateaux differential. That is the Fréchet derivative, or gradient. As you can see here, it is the identity tensor:

$$\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}(\mathbf{S}) = \mathbf{I}.$$
When $n = 2$, the Gateaux differential in this case is

$$\begin{aligned}
Df(\mathbf{S},d\mathbf{S}) &= \frac{d}{d\alpha}f(\mathbf{S}+\alpha\,d\mathbf{S})\Big|_{\alpha=0}
= \frac{d}{d\alpha}\operatorname{tr}(\mathbf{S}+\alpha\,d\mathbf{S})^2\Big|_{\alpha=0}\\
&= \frac{d}{d\alpha}\operatorname{tr}\{(\mathbf{S}+\alpha\,d\mathbf{S})(\mathbf{S}+\alpha\,d\mathbf{S})\}\Big|_{\alpha=0}\\
&= \operatorname{tr}\Big[\frac{d}{d\alpha}(\mathbf{S}+\alpha\,d\mathbf{S})(\mathbf{S}+\alpha\,d\mathbf{S})\Big]\Big|_{\alpha=0}\\
&= \operatorname{tr}\big[d\mathbf{S}(\mathbf{S}+\alpha\,d\mathbf{S})+(\mathbf{S}+\alpha\,d\mathbf{S})\,d\mathbf{S}\big]\big|_{\alpha=0}\\
&= \operatorname{tr}[d\mathbf{S}\,\mathbf{S}+\mathbf{S}\,d\mathbf{S}] = 2\mathbf{S}^{\mathsf T}:d\mathbf{S} = \frac{\partial f(\mathbf{S})}{\partial\mathbf{S}}:d\mathbf{S},
\end{aligned}$$

so that

$$\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}\mathbf{S}^2 = 2\mathbf{S}^{\mathsf T}.$$
Using these two results and the linearity of the trace operation, we can proceed to find the derivative of the second principal invariant of the tensor $\mathbf{S}$:

$$\begin{aligned}
\frac{\partial}{\partial\mathbf{S}}I_2(\mathbf{S}) &= \frac{1}{2}\frac{\partial}{\partial\mathbf{S}}\big[\operatorname{tr}^2(\mathbf{S})-\operatorname{tr}(\mathbf{S}^2)\big]\\
&= \frac{1}{2}\big[2\operatorname{tr}(\mathbf{S})\,\mathbf{I}-2\mathbf{S}^{\mathsf T}\big]\\
&= \operatorname{tr}(\mathbf{S})\,\mathbf{I}-\mathbf{S}^{\mathsf T},
\end{aligned}$$

using the fact that differentiating $\operatorname{tr}^2(\mathbf{S})$ with respect to $\operatorname{tr}(\mathbf{S})$ is a scalar derivative of a scalar argument. To find the derivative of the third principal invariant of the tensor $\mathbf{S}$, we appeal to the Cayley-Hamilton theorem, which expresses the determinant in terms of traces only:

$$I_3(\mathbf{S}) = \frac{1}{6}\big[\operatorname{tr}^3(\mathbf{S})-3\operatorname{tr}(\mathbf{S})\operatorname{tr}(\mathbf{S}^2)+2\operatorname{tr}(\mathbf{S}^3)\big],$$

$$\begin{aligned}
\frac{\partial}{\partial\mathbf{S}}I_3(\mathbf{S}) &= \frac{1}{6}\frac{\partial}{\partial\mathbf{S}}\big[\operatorname{tr}^3(\mathbf{S})-3\operatorname{tr}(\mathbf{S})\operatorname{tr}(\mathbf{S}^2)+2\operatorname{tr}(\mathbf{S}^3)\big]\\
&= \frac{1}{6}\big[3\operatorname{tr}^2(\mathbf{S})\,\mathbf{I}-3\operatorname{tr}(\mathbf{S}^2)\,\mathbf{I}-3\operatorname{tr}(\mathbf{S})\,2\mathbf{S}^{\mathsf T}+2\times 3(\mathbf{S}^2)^{\mathsf T}\big]\\
&= I_2\,\mathbf{I}-I_1(\mathbf{S})\,\mathbf{S}^{\mathsf T}+(\mathbf{S}^2)^{\mathsf T}.
\end{aligned}$$
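This gradient of the third invariant can be spot-checked numerically by perturbing each component of the tensor in turn. The sketch is an illustration only; the random tensor is an arbitrary choice, and $I_3(\mathbf{S}) = \det\mathbf{S}$ is used directly:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(3, 3))     # an arbitrary tensor (illustration)

def I3(S):                      # third principal invariant
    return np.linalg.det(S)

# Numerical gradient (dI3/dS)_{ij} = dI3/dS_{ij}, component by component
num = np.zeros((3, 3))
eps = 1e-6
for i in range(3):
    for j in range(3):
        dS = np.zeros((3, 3)); dS[i, j] = eps
        num[i, j] = (I3(S + dS) - I3(S - dS)) / (2 * eps)

I1 = np.trace(S)
I2 = 0.5 * (I1**2 - np.trace(S @ S))
analytic = I2 * np.eye(3) - I1 * S.T + (S @ S).T   # I2 I - I1 S^T + (S^2)^T
print(np.allclose(num, analytic, atol=1e-5))       # True
```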
Given that $\mathbf{A}$ is a constant tensor, show that $\dfrac{\partial}{\partial\mathbf{B}}\operatorname{tr}(\mathbf{A}\mathbf{B}) = \mathbf{A}^{\mathsf T}$.

For this scalar-valued function, the Gateaux differential is

$$\begin{aligned}
Df(\mathbf{B},d\mathbf{B}) &= \frac{d}{d\alpha}\operatorname{tr}(\mathbf{A}\mathbf{B}+\alpha\mathbf{A}\,d\mathbf{B})\Big|_{\alpha=0} = \frac{\partial f(\mathbf{B})}{\partial\mathbf{B}}:d\mathbf{B}\\
&= \frac{d}{d\alpha}\operatorname{tr}(\mathbf{A}\mathbf{B})\Big|_{\alpha=0}+\frac{d}{d\alpha}\,\alpha\operatorname{tr}(\mathbf{A}\,d\mathbf{B})\Big|_{\alpha=0}\\
&= \operatorname{tr}(\mathbf{A}\,d\mathbf{B}) = \mathbf{A}^{\mathsf T}:d\mathbf{B},
\end{aligned}$$

giving us

$$\frac{\partial f(\mathbf{B})}{\partial\mathbf{B}} = \mathbf{A}^{\mathsf T}.$$
Finally, we look at the derivative of the magnitude. We have found this by a simpler means earlier; it is instructive to look at it more rigorously, using the Gateaux differential. Given a scalar variable $\alpha$, the derivative of a scalar function $f(\mathbf{S})$ of a tensor $\mathbf{S}$ satisfies

$$\frac{\partial f(\mathbf{S})}{\partial\mathbf{S}}:\mathbf{T} = \lim_{\alpha\to 0}\frac{d}{d\alpha}f(\mathbf{S}+\alpha\mathbf{T})$$

for any arbitrary tensor $\mathbf{T}$. In the case of $f(\mathbf{S}) = \|\mathbf{S}\|$,

$$\frac{\partial\|\mathbf{S}\|}{\partial\mathbf{S}}:\mathbf{T} = \lim_{\alpha\to 0}\frac{d}{d\alpha}\|\mathbf{S}+\alpha\mathbf{T}\|,$$

where

$$\|\mathbf{S}+\alpha\mathbf{T}\| = \sqrt{\operatorname{tr}\big[(\mathbf{S}+\alpha\mathbf{T})(\mathbf{S}+\alpha\mathbf{T})^{\mathsf T}\big]} = \sqrt{\operatorname{tr}(\mathbf{S}\mathbf{S}^{\mathsf T}+\alpha\mathbf{T}\mathbf{S}^{\mathsf T}+\alpha\mathbf{S}\mathbf{T}^{\mathsf T}+\alpha^2\mathbf{T}\mathbf{T}^{\mathsf T})}.$$

Note that everything under the root sign here is scalar and that the trace operation is linear. Consequently, we can write

$$\lim_{\alpha\to 0}\frac{d}{d\alpha}\|\mathbf{S}+\alpha\mathbf{T}\| = \lim_{\alpha\to 0}\frac{\operatorname{tr}(\mathbf{T}\mathbf{S}^{\mathsf T})+\operatorname{tr}(\mathbf{S}\mathbf{T}^{\mathsf T})+2\alpha\operatorname{tr}(\mathbf{T}\mathbf{T}^{\mathsf T})}{2\sqrt{\operatorname{tr}(\mathbf{S}\mathbf{S}^{\mathsf T}+\alpha\mathbf{T}\mathbf{S}^{\mathsf T}+\alpha\mathbf{S}\mathbf{T}^{\mathsf T}+\alpha^2\mathbf{T}\mathbf{T}^{\mathsf T})}} = \frac{2\,\mathbf{S}:\mathbf{T}}{2\sqrt{\mathbf{S}:\mathbf{S}}} = \frac{\mathbf{S}}{\|\mathbf{S}\|}:\mathbf{T},$$

so that

$$\frac{\partial\|\mathbf{S}\|}{\partial\mathbf{S}}:\mathbf{T} = \frac{\mathbf{S}}{\|\mathbf{S}\|}:\mathbf{T},\qquad\text{or}\qquad \frac{\partial\|\mathbf{S}\|}{\partial\mathbf{S}} = \frac{\mathbf{S}}{\|\mathbf{S}\|},$$

as required, since $\mathbf{T}$ is arbitrary.
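The same component-by-component finite-difference check confirms this gradient of the magnitude (an illustrative sketch; the random tensor is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(3, 3))     # an arbitrary tensor (illustration)

def norm(S):                    # |S| = sqrt(S : S)
    return np.sqrt(np.sum(S * S))

# Numerical gradient d|S|/dS, component by component
num = np.zeros((3, 3))
eps = 1e-6
for i in range(3):
    for j in range(3):
        dS = np.zeros((3, 3)); dS[i, j] = eps
        num[i, j] = (norm(S + dS) - norm(S - dS)) / (2 * eps)

print(np.allclose(num, S / norm(S), atol=1e-6))   # True: d|S|/dS = S / |S|
```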
Differentiation of Fields
Euclidean Point Space in Concrete Terms
In Continuum Mechanics, we are often concerned with quantities that are scalars (density, mass, temperature, etc.), vectors (e.g., displacements, velocities, etc.), or second-order tensors (e.g., stresses, strains, etc.). There are other quantities that are derivatives, in space, of these basic quantities. Some of these are temperature gradients, velocity gradients, and the deformation gradient (which, some have argued, is not even a gradient in the strict sense).

One other fact is that these quantities of interest are often associated with points in space. A set of examples will clarify this before we provide an abstract definition:

1. A Velocity Map. The arrows here show the velocity at each point; the color gives other information about the distribution.
2. A Weather Map. The meteorologist has many ways to describe the weather and to predict it. There are radar maps that give the pressure distribution, wind velocities, temperature, precipitation, etc.
3. Simulation Maps. The diagram below shows field functions (displacement, stress and factor of safety) displayed in the simulation of a vertically loaded curved bar that is supported at one end.

These and many other maps in our daily experience are concrete examples of point functions or field functions. The domains here are a flow field $\mathcal{F}$, a geographical region $\mathcal{G}$, and the body of a curved bar $\mathcal{B}$. Each represents a subset of the Euclidean point space in concrete terms. More precisely, $\mathcal{F},\mathcal{G},\mathcal{B}\subset\mathcal{E}$, and

$$\mathbf{v}:\mathcal{F}\to\mathbb{V},\qquad p:\mathcal{G}\to\mathbb{R},\qquad \mathbf{T}:\mathcal{B}\to\mathbb{T},$$

expressing the fact that the velocity $\mathbf{v}(\mathbf{x})$ is defined on the subset $\mathcal{F}$ of the Euclidean point space, with a vector object created at each point. Those objects themselves reside in the vector space, which we already understand how to deal with, in that the structure and rules governing them are well defined. The same goes for the pressure $p(\mathbf{x})$ and the stress $\mathbf{T}(\mathbf{x})$, as they have all been defined on their respective subsets.

When we talk about temperature, density and strain functions as fields, these are different kinds of objects that share the common feature of being defined on the Euclidean point space. Furthermore, there are other functions that are derived from these base functions. Some examples of such derived fields are the temperature gradient, the velocity gradient and the deformation gradient. In this section, it will become clear that these derived point functions are one rank higher than the functions they are derived from. The temperature gradient, for example, is a vector quantity derived from the scalar temperature, while the velocity gradient is a second-order tensor obtained as the Fréchet derivative of the vector velocity.
Components of Gradient in ONB Systems
We now express the gradients measuring changes in the fields, beginning with

$$Df(\mathbf{x},d\mathbf{x}) = (\operatorname{grad} f(\mathbf{x}))\cdot d\mathbf{x},$$

with a scalar-valued function whose vector arguments are now the position vectors in the Euclidean point space. For the point $\{x_1,x_2,x_3\}$, consider a neighboring point at $\{x_1+\alpha\,dx_1,\;x_2+\alpha\,dx_2,\;x_3+\alpha\,dx_3\}$. Then

$$Df(\mathbf{x},d\mathbf{x}) = \lim_{\alpha\to 0}\frac{f(\mathbf{x}+\alpha\,d\mathbf{x})-f(\mathbf{x})}{\alpha}
= \left(\frac{\partial f(\mathbf{x})}{\partial x_1}\mathbf{e}_1+\frac{\partial f(\mathbf{x})}{\partial x_2}\mathbf{e}_2+\frac{\partial f(\mathbf{x})}{\partial x_3}\mathbf{e}_3\right)\cdot d\mathbf{x},$$

so that, for a scalar-valued function in an orthonormal system,

$$\operatorname{grad} f(\mathbf{x}) = \frac{\partial f(\mathbf{x})}{\partial x_1}\mathbf{e}_1+\frac{\partial f(\mathbf{x})}{\partial x_2}\mathbf{e}_2+\frac{\partial f(\mathbf{x})}{\partial x_3}\mathbf{e}_3.$$

Without further ado, we shall generalize this expression and thus provide the results for more general cases. We use the comma notation to denote partial derivatives, applied as a postfix, so that we can write

$$\operatorname{grad} f(\mathbf{x}) = \frac{\partial f(\mathbf{x})}{\partial x_i}\mathbf{e}_i = f,_i\,\mathbf{e}_i.$$
Addition, product and other rules apply to the gradient in the comma notation as follows:

$$\operatorname{grad}\big(f(\mathbf{x})g(\mathbf{x})\big) = (fg),_i\,\mathbf{e}_i = (f,_i\,g+f\,g,_i)\,\mathbf{e}_i = g\operatorname{grad} f(\mathbf{x})+f\operatorname{grad} g(\mathbf{x}).$$

If we define the gradient operator as a postfix operator,

$$\operatorname{grad}(\cdot) = (\cdot),_\alpha\otimes\mathbf{e}_\alpha,$$

where (as long as the coordinate system of reference is ONB) the comma signifies the partial derivative with respect to the $\alpha$ coordinate. The tensor product applies in all cases except for a scalar function, where there is no existing basis vector to take the product with; it therefore stands for the ordinary product in that case.
Gradient of a Vector Function.
Consider a vector function $\mathbf{v}(\mathbf{x})$ defined in a Euclidean space spanned by an ONB. It can be written in terms of its components,

$$\mathbf{v}(\mathbf{x}) = v_i(\mathbf{x})\,\mathbf{e}_i.$$

Then,

$$\operatorname{grad}\mathbf{v}(\mathbf{x}) = (v_i(\mathbf{x})\,\mathbf{e}_i),_\alpha\otimes\mathbf{e}_\alpha = v_i,_\alpha(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_\alpha
= \begin{bmatrix}\mathbf{e}_1&\mathbf{e}_2&\mathbf{e}_3\end{bmatrix}
\begin{pmatrix}
\dfrac{\partial v_1}{\partial x_1}&\dfrac{\partial v_1}{\partial x_2}&\dfrac{\partial v_1}{\partial x_3}\\[6pt]
\dfrac{\partial v_2}{\partial x_1}&\dfrac{\partial v_2}{\partial x_2}&\dfrac{\partial v_2}{\partial x_3}\\[6pt]
\dfrac{\partial v_3}{\partial x_1}&\dfrac{\partial v_3}{\partial x_2}&\dfrac{\partial v_3}{\partial x_3}
\end{pmatrix}
\otimes\begin{pmatrix}\mathbf{e}_1\\\mathbf{e}_2\\\mathbf{e}_3\end{pmatrix}.$$

For a tensor field $\mathbf{T}(\mathbf{x})$, the gradient can be obtained in the same way:

$$\operatorname{grad}\mathbf{T}(\mathbf{x}) = (T_{ij}(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j),_\alpha\otimes\mathbf{e}_\alpha = T_{ij},_\alpha(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_\alpha,$$

which is a third-order tensor containing 27 terms. Each set of nine terms can be written out as we have done for the tensor gradient of a vector-valued field above.
Now, a temperature field is a scalar field. The foregoing shows that the gradient of such a field is itself a vector field. A velocity field is a vector field; the gradient of such a field is a second-order tensor. This is a well-known field called the velocity gradient.
The Divergence
Gradients of objects larger than scalars are at least second-order tensors. Such derived fields can be contracted by taking the trace of the last two bases (when there are more than two). Such a contraction is called the divergence, not of the derived field, but of the original field.

Temperature and other scalar fields cannot have a divergence, because their gradients are vectors and therefore cannot be contracted (you cannot take a trace). The gradient of a vector field, such as the velocity gradient, can be contracted in only one way (it has only one possible trace). Gradients of larger objects, such as tensor fields, can be contracted (traced) in more than one way; in that case, the disambiguation rule for the contraction that yields a divergence is to contract with the basis that came from the derivative. For example, $\operatorname{grad}\mathbf{v}(\mathbf{x}) = v_i,_\alpha(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_\alpha$, and the divergence of the same field is

$$\begin{aligned}
\operatorname{div}\mathbf{v}(\mathbf{x}) &= \operatorname{tr}\operatorname{grad}\mathbf{v}(\mathbf{x}) = v_i,_\alpha(\mathbf{x})\,\mathbf{e}_i\cdot\mathbf{e}_\alpha = v_i,_\alpha(\mathbf{x})\,\delta_{i\alpha}\\
&= v_i,_i(\mathbf{x}) = \frac{\partial v_i(\mathbf{x})}{\partial x_i} = \frac{\partial v_1(\mathbf{x})}{\partial x_1}+\frac{\partial v_2(\mathbf{x})}{\partial x_2}+\frac{\partial v_3(\mathbf{x})}{\partial x_3}.
\end{aligned}$$
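In a computer algebra system, the definition $\operatorname{div}\mathbf{v} = \operatorname{tr}\operatorname{grad}\mathbf{v}$ can be applied verbatim. A short SymPy sketch (the field $\mathbf{v}$ is an arbitrary choice for illustration):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
# an arbitrary vector field v(x), chosen for illustration
v = [x1**2 * x2, sp.sin(x2) * x3, x1 + x3**2]

grad_v = sp.Matrix(3, 3, lambda i, j: sp.diff(v[i], X[j]))  # (grad v)_{i alpha}
div_v = grad_v.trace()                                      # div v = tr grad v

print(sp.simplify(div_v))   # 2*x1*x2 + x3*cos(x2) + 2*x3
```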
Similarly, for the tensor $\mathbf{T}(\mathbf{x})$, the divergence is

$$\begin{aligned}
\operatorname{div}\mathbf{T}(\mathbf{x}) &= \operatorname{tr}\operatorname{grad}\mathbf{T}(\mathbf{x}) = T_{ij},_\alpha(\mathbf{x})\,\mathbf{e}_i(\mathbf{e}_j\cdot\mathbf{e}_\alpha) = T_{ij},_\alpha(\mathbf{x})\,\mathbf{e}_i\,\delta_{j\alpha} = T_{ij},_j(\mathbf{x})\,\mathbf{e}_i\\
&= \left(\frac{\partial T_{11}}{\partial x_1}+\frac{\partial T_{12}}{\partial x_2}+\frac{\partial T_{13}}{\partial x_3}\right)\mathbf{e}_1
+\left(\frac{\partial T_{21}}{\partial x_1}+\frac{\partial T_{22}}{\partial x_2}+\frac{\partial T_{23}}{\partial x_3}\right)\mathbf{e}_2
+\left(\frac{\partial T_{31}}{\partial x_1}+\frac{\partial T_{32}}{\partial x_2}+\frac{\partial T_{33}}{\partial x_3}\right)\mathbf{e}_3.
\end{aligned}$$
The Laplacian or "grad squared".
If we take the gradient of a scalar twice, we obtain a second-order tensor. The trace of this field creates the well-known Laplacian operator, wrongly called "grad squared": the "square of the gradient" of a scalar is really the second-order tensor from which the trace is taken. The Laplacian is a scalar operator that appears frequently in the governing equations of many systems. Let us find the expression for it for a typical scalar field.

You will recall that, for a scalar field $\phi(\mathbf{x})$,

$$\operatorname{grad}\phi(\mathbf{x}) = \frac{\partial\phi(\mathbf{x})}{\partial x_i}\mathbf{e}_i = \phi,_i\,\mathbf{e}_i.$$

Taking the gradient of this expression a second time, we have

$$\operatorname{grad}\operatorname{grad}\phi(\mathbf{x}) = (\phi,_i\,\mathbf{e}_i),_j\otimes\mathbf{e}_j = \phi,_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j.$$

The trace of this tensor can be found by contracting the dyad bases:

$$\Delta\phi(\mathbf{x}) = \operatorname{tr}\operatorname{grad}\operatorname{grad}\phi(\mathbf{x}) = \phi,_{ij}\,\mathbf{e}_i\cdot\mathbf{e}_j = \phi,_{ij}\,\delta_{ij} = \phi,_{ii}
= \frac{\partial^2\phi(\mathbf{x})}{\partial x_1^2}+\frac{\partial^2\phi(\mathbf{x})}{\partial x_2^2}+\frac{\partial^2\phi(\mathbf{x})}{\partial x_3^2}.$$
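The same construction, traced symbolically, yields the Laplacian directly (a SymPy sketch; the scalar field is an arbitrary choice for illustration):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
phi = x1**2 * x2 + sp.sin(x3)        # an arbitrary scalar field (illustration)

grad_phi = sp.Matrix([sp.diff(phi, xi) for xi in X])                   # phi,_i
grad_grad = sp.Matrix(3, 3, lambda i, j: sp.diff(grad_phi[i], X[j]))   # phi,_{ij}
lap = grad_grad.trace()              # Laplacian = phi,_{ii}

print(sp.simplify(lap))   # 2*x2 - sin(x3)
```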
The trace of the double gradient of any field can be taken in this way. The observation one can make here is that while the gradient raises the order of its operand by one and the divergence decreases it by one, the Laplacian operation makes no change of order. As an example, $\operatorname{grad}\mathbf{v}(\mathbf{x}) = v_i,_j(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j$, so that

$$\operatorname{grad}\operatorname{grad}\mathbf{v}(\mathbf{x}) = v_i,_{jk}(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k,$$

and the Laplacian of $\mathbf{v}(\mathbf{x})$ is

$$\begin{aligned}
\Delta\mathbf{v}(\mathbf{x}) &= \operatorname{tr}\operatorname{grad}\operatorname{grad}\mathbf{v}(\mathbf{x}) = v_i,_{jk}(\mathbf{x})\,\mathbf{e}_i\,\delta_{jk} = v_i,_{jj}(\mathbf{x})\,\mathbf{e}_i\\
&= \left(\frac{\partial^2 v_1}{\partial x_1^2}+\frac{\partial^2 v_1}{\partial x_2^2}+\frac{\partial^2 v_1}{\partial x_3^2}\right)\mathbf{e}_1
+\left(\frac{\partial^2 v_2}{\partial x_1^2}+\frac{\partial^2 v_2}{\partial x_2^2}+\frac{\partial^2 v_2}{\partial x_3^2}\right)\mathbf{e}_2
+\left(\frac{\partial^2 v_3}{\partial x_1^2}+\frac{\partial^2 v_3}{\partial x_2^2}+\frac{\partial^2 v_3}{\partial x_3^2}\right)\mathbf{e}_3.
\end{aligned}$$

We can continue and find the trace of $\operatorname{grad}\operatorname{grad}$ of a tensor object. The process is similar, and the result of the trace, giving the Laplacian, will also be a tensor of the same order.
Curl of Vector and Tensor Fields.
The third-order alternating tensor, $\mathbf{E} \equiv e_{ijk}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k$, was introduced in the last chapter. Compositions of this tensor with vectors and other tensors yield some useful constructs in Continuum Mechanics. We have already seen its action resulting in the axial vector of a skew tensor. We are about to see that the well-known curl of vectors and tensors can be neatly defined by the divergence of products with this tensor.

1. Curl of a vector. Given any vector $\mathbf{u}(\mathbf{x}) = u_\alpha(\mathbf{x})\,\mathbf{e}_\alpha$, the composition $\mathbf{E}\mathbf{u}$ is a skew second-order tensor, the transpose of the vector cross of $\mathbf{u}$:

$$\mathbf{E}\mathbf{u} = e_{ijk}u_\alpha(\mathbf{x})(\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k)\mathbf{e}_\alpha
= e_{ijk}u_\alpha(\mathbf{x})(\mathbf{e}_i\otimes\mathbf{e}_j)\,\mathbf{e}_k\cdot\mathbf{e}_\alpha
= e_{ijk}u_\alpha(\mathbf{x})(\mathbf{e}_i\otimes\mathbf{e}_j)\,\delta_{k\alpha}
= e_{ijk}u_k(\mathbf{x})(\mathbf{e}_i\otimes\mathbf{e}_j).$$

The gradient of $\mathbf{E}\mathbf{u}$ is

$$\operatorname{grad}(\mathbf{E}\mathbf{u}) = e_{ijk}u_k,_\alpha(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_\alpha,$$

and the trace gives us the divergence,

$$\operatorname{curl}\mathbf{u} \equiv \operatorname{div}(\mathbf{E}\mathbf{u}) = \operatorname{tr}\operatorname{grad}(\mathbf{E}\mathbf{u})
= e_{ijk}u_k,_\alpha\,\mathbf{e}_i(\mathbf{e}_j\cdot\mathbf{e}_\alpha)
= e_{ijk}u_k,_j\,\mathbf{e}_i
= \begin{vmatrix}\mathbf{e}_1&\mathbf{e}_2&\mathbf{e}_3\\[4pt]\dfrac{\partial}{\partial x_1}&\dfrac{\partial}{\partial x_2}&\dfrac{\partial}{\partial x_3}\\[4pt]u_1&u_2&u_3\end{vmatrix},$$

which is the curl of the vector field $\mathbf{u}(\mathbf{x})$. The curl of a vector has zero divergence:

$$\operatorname{grad}\operatorname{curl}\mathbf{u} = e_{ijk}u_k,_{j\beta}\,\mathbf{e}_i\otimes\mathbf{e}_\beta.$$

The trace of this expression is

$$\operatorname{div}\operatorname{curl}\mathbf{u} = \operatorname{tr}\operatorname{grad}\operatorname{curl}\mathbf{u}
= e_{ijk}u_k,_{j\beta}\,\mathbf{e}_i\cdot\mathbf{e}_\beta
= e_{ijk}u_k,_{j\beta}\,\delta_{i\beta}
= e_{ijk}u_k,_{ji} = 0,$$

which vanishes because $e_{ijk}$ is antisymmetric, while $u_k,_{ji}$ is symmetric, in the indices $i$ and $j$.
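The identity $\operatorname{div}\operatorname{curl}\mathbf{u} = 0$ can be confirmed symbolically for any particular field. This SymPy sketch (the field $\mathbf{u}$ is an arbitrary choice for illustration) builds the curl from $e_{ijk}\,u_k,_j$ directly:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
# an arbitrary vector field, chosen for illustration
u = [x1 * x2 * x3, sp.exp(x2) * x1, sp.cos(x1) + x2**2]

def curl(u):
    # (curl u)_i = e_{ijk} u_k,_j
    return [sum(sp.LeviCivita(i, j, k) * sp.diff(u[k], X[j])
                for j in range(3) for k in range(3)) for i in range(3)]

def div(w):
    return sum(sp.diff(w[i], X[i]) for i in range(3))

c = curl(u)
print(sp.simplify(div(c)))   # 0: div curl u vanishes identically
```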
2. Curl of a Tensor Field. Given a tensor $\mathbf{T}(\mathbf{x}) = T_{ij}(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j$, the composition

$$\mathbf{T}\mathbf{E} = (T_{\alpha\beta}(\mathbf{x})\,\mathbf{e}_\alpha\otimes\mathbf{e}_\beta)(e_{ijk}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k)
= T_{\alpha\beta}(\mathbf{x})\,e_{ijk}(\mathbf{e}_\alpha\otimes\mathbf{e}_j\otimes\mathbf{e}_k)\,\delta_{\beta i}
= T_{\alpha i}e_{ijk}\,\mathbf{e}_\alpha\otimes\mathbf{e}_j\otimes\mathbf{e}_k.$$

This third-order tensor, $T_{\alpha i}e_{ijk}\,\mathbf{e}_\alpha\otimes\mathbf{e}_j\otimes\mathbf{e}_k$, is skew in its indices $j$ and $k$. The transpose of the divergence of this tensor is

$$\operatorname{curl}\mathbf{T}(\mathbf{x}) = (\operatorname{div}(\mathbf{T}\mathbf{E}))^{\mathsf T} = e_{ijk}T_{\alpha i},_k\,\mathbf{e}_j\otimes\mathbf{e}_\alpha
= \begin{bmatrix}\mathbf{e}_1&\mathbf{e}_2&\mathbf{e}_3\end{bmatrix}
\begin{pmatrix}
\dfrac{\partial T_{13}}{\partial x_2}-\dfrac{\partial T_{12}}{\partial x_3}&\dfrac{\partial T_{23}}{\partial x_2}-\dfrac{\partial T_{22}}{\partial x_3}&\dfrac{\partial T_{33}}{\partial x_2}-\dfrac{\partial T_{32}}{\partial x_3}\\[6pt]
\dfrac{\partial T_{11}}{\partial x_3}-\dfrac{\partial T_{13}}{\partial x_1}&\dfrac{\partial T_{21}}{\partial x_3}-\dfrac{\partial T_{23}}{\partial x_1}&\dfrac{\partial T_{31}}{\partial x_3}-\dfrac{\partial T_{33}}{\partial x_1}\\[6pt]
\dfrac{\partial T_{12}}{\partial x_1}-\dfrac{\partial T_{11}}{\partial x_2}&\dfrac{\partial T_{22}}{\partial x_1}-\dfrac{\partial T_{21}}{\partial x_2}&\dfrac{\partial T_{32}}{\partial x_1}-\dfrac{\partial T_{31}}{\partial x_2}
\end{pmatrix}
\otimes\begin{pmatrix}\mathbf{e}_1\\\mathbf{e}_2\\\mathbf{e}_3\end{pmatrix}.$$

The curl of a symmetric tensor is traceless:

$$\operatorname{tr}\operatorname{curl}\mathbf{T} = e_{ijk}T_{\alpha i},_k\,\mathbf{e}_j\cdot\mathbf{e}_\alpha = e_{ijk}T_{ji},_k,$$

which vanishes when $\mathbf{T}$ is symmetric.
Show that, for a constant vector $\mathbf{a}$, $\operatorname{curl}(\mathbf{T}^{\mathsf T}\mathbf{a}) = (\operatorname{curl}\mathbf{T})\mathbf{a}$.

With $\mathbf{T}(\mathbf{x}) = T_{ij}(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j$,

$$\mathbf{T}^{\mathsf T}\mathbf{a} = (T_{ij}\,\mathbf{e}_j\otimes\mathbf{e}_i)\,a_\alpha\mathbf{e}_\alpha = T_{\alpha i}a_\alpha\,\mathbf{e}_i.$$

The curl of this vector is

$$\operatorname{curl}(\mathbf{T}^{\mathsf T}\mathbf{a}) = e_{ijk}(T_{\alpha k}a_\alpha),_j\,\mathbf{e}_i = e_{ijk}T_{\alpha k},_j\,a_\alpha\,\mathbf{e}_i,$$

since $\mathbf{a}$ is a constant vector. For the tensor $\mathbf{T} = T_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j$ itself, we have that $\operatorname{curl}\mathbf{T} = e_{ijk}T_{\alpha i},_k\,\mathbf{e}_j\otimes\mathbf{e}_\alpha$. Hence

$$(\operatorname{curl}\mathbf{T})\mathbf{a} = (e_{ijk}T_{\alpha i},_k\,\mathbf{e}_j\otimes\mathbf{e}_\alpha)\,a_\beta\mathbf{e}_\beta = e_{ijk}T_{\alpha i},_k\,a_\alpha\,\mathbf{e}_j,$$

which, after a cyclic relabelling of the dummy indices, is the same expression, so that $\operatorname{curl}(\mathbf{T}^{\mathsf T}\mathbf{a}) = (\operatorname{curl}\mathbf{T})\mathbf{a}$ as required. Note that this relationship is often used as the definition of the curl of a second-order tensor.
Integral Field Theorems
Integral field theorems are needed to derive the governing equations of continua. The most relevant results, known by several names in the literature, are often associated with the names of Stokes, Green and Gauss. These theorems are related to one another in several ways. Practical uses are emphasized here rather than the establishment of proofs, which are available in mathematical texts. We begin with Stokes' theorem.
Stokesâ Theorem
Consider a surface with a boundary curve as shown in the picture. The outward-drawn normal $\mathbf{n}$ is oriented in such a way that a traversal of the boundary curve in the direction shown keeps the surface on the left-hand side. Such a surface is said to be positively oriented. Stokes' theorem relates the integral of the curl of a vector or tensor field over this surface $\mathcal{S}$ in the Euclidean point space to the line integral of the vector field $\mathbf{v}(\mathbf{x},t)$ over its boundary curve $\Gamma$. In this case, Stokes' theorem states that, given a positively oriented surface $\mathcal{S}$, bounded as shown by a path $\Gamma$, the line integral is

$$\int_\Gamma \mathbf{v}(\mathbf{x},t)\cdot d\mathbf{x} = \iint_{\mathcal{S}}(\operatorname{curl}\mathbf{v})\cdot d\mathbf{s}.$$

That is, the line integral taken over the path shown is equal to the surface integral of the curl of the same field over the entire surface. For tensor fields, the theorem states

$$\int_\Gamma \mathbf{T}(\mathbf{x},t)\,d\mathbf{x} = \iint_{\mathcal{S}}(\operatorname{curl}\mathbf{T})^{\mathsf T}\,d\mathbf{s} = \iint_{\mathcal{S}}\operatorname{div}(\mathbf{T}\mathbf{E})\,d\mathbf{s},$$

where $\mathbf{E} = e_{ijk}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k$. Notice that the transpose of the curl is taken in the case of the tensor field; the curl of a vector field is a vector and admits no transpose. It is instructive to demonstrate the meaning of these integrals by a simple computation, aided by Mathematica.
Computation Issues
The line integrals in equations ___ and ___ are easily computed by taking a parametrized position vector, bound to the integration path, $\mathbf{x} = \mathbf{r}(s)$ along the path. We therefore have

$$\int_\Gamma \mathbf{v}(\mathbf{x},t)\cdot d\mathbf{x} = \int_{s_B}^{s_E}\mathbf{v}(\mathbf{r}(s),t)\cdot\frac{d\mathbf{r}}{ds}\,ds,$$

as the path is traversed beginning at $s_B$ and ending at $s_E$. Here $\dfrac{d\mathbf{r}}{ds}$ is the vector pointing along the tangent to the path.
24
On the RHS, we are dealing with a surface integral. For a surface parametrized by the scalars ξ₁, ξ₂, differentiating the position vector 𝐫(ξ₁, ξ₂) that spans the surface with respect to each parameter again yields the surface base vectors. These vectors are not guaranteed to be orthogonal to each other, but their cross product,

(∂𝐫(ξ₁, ξ₂)/∂ξ₁) × (∂𝐫(ξ₁, ξ₂)/∂ξ₂),

gives a vector in the direction normal to the surface, since the two vectors defined by the derivatives lie on the surface. The orientation of the surface must be checked to confirm that a traversal of the path puts the right-handed system in such a way that the surface normal is outward. If not, the order of the cross product is reversed.
The surface integral then becomes,

∬_𝒮 (curl 𝐯) · d𝐬 = ∫_{ξ₂B}^{ξ₂E} ∫_{ξ₁B}^{ξ₁E} (curl 𝐯) · (∂𝐫(ξ₁, ξ₂)/∂ξ₁ × ∂𝐫(ξ₁, ξ₂)/∂ξ₂) dξ₁ dξ₂.
Instead of offering a proof of this important result, we proceed to a simple demonstration of it, computing both sides of the above expressions for the vector case. Consider a closed counter-clockwise circular path, centered on the x₃-axis and perpendicular to it on the plane x₃ = 1, so that a typical position vector on it is inclined at angle Ξ to the vertical as shown in figure ___. We parametrize the path with the familiar spherical coordinates, x = r sin Ξ cos φ, y = r sin Ξ sin φ and z = r cos Ξ. On the circle, only φ varies. If we take {r = √2, Ξ = π/4}, these parameters describe a unit circle at unit distance along the x₃-axis.
The line integral of a vector field, according to Stokes, equals the surface integral over any surface with this curve as its edge.
The line integral can be computed as stated earlier;

𝐫(φ) = cos φ 𝐞₁ + sin φ 𝐞₂ + 𝐞₃
which is the position vector for all the points on this path, with φ_B = 0 and φ_E = 2π. Differentiating this,

d𝐫(φ) = (d𝐫(φ)/dφ) dφ = (−sin φ 𝐞₁ + cos φ 𝐞₂) dφ

For any prescribed vector field, 𝐯(𝐱, t) = 𝐯(𝐫(φ), t). A Mathematica implementation of this integral with the function,

𝐯(𝐱) = sin x 𝐞₁ + (cos y + x + x²/2 + x³/3) 𝐞₂ + z cos x cos y 𝐞₃

is given here:
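The Mathematica code itself is not reproduced here; the following is an equivalent Python sketch of the line-integral computation for the same field and path, using a plain trapezoidal rule (accurate for smooth periodic integrands).

```python
import math

# Path: unit circle at height x3 = 1, r(phi) = (cos phi, sin phi, 1).
def v(x, y, z):
    # The vector field used in the text.
    return (math.sin(x),
            math.cos(y) + x + x**2/2 + x**3/3,
            z * math.cos(x) * math.cos(y))

def line_integral(n=20000):
    total = 0.0
    for k in range(n):
        phi = 2 * math.pi * k / n
        x, y, z = math.cos(phi), math.sin(phi), 1.0
        dr = (-math.sin(phi), math.cos(phi), 0.0)   # dr/dphi on the path
        vx, vy, vz = v(x, y, z)
        total += vx*dr[0] + vy*dr[1] + vz*dr[2]
    return total * (2 * math.pi / n)

print(line_integral())   # close to 5*pi/4, the value quoted later in the text
```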
There are at least three easy ways we can realize such a surface:
1. A flat circular region on the same plane as the path. This is the region x₁² + x₂² ≀ 1 on the plane x₃ = 1. It has the parametric equations,

x₁ = ρ cos φ, x₂ = ρ sin φ, x₃ = 1

By inspection, we can see that the normal to this plane surface is along 𝐞₃. However, this can also be computed directly using equation 1.__, which gives two vectors along the coordinate lines on the plane. In the plane region,

𝐫(ρ, φ) = ρ cos φ 𝐞₁ + ρ sin φ 𝐞₂ + 𝐞₃

with 0 ≀ ρ ≀ 1, 0 ≀ φ ≀ 2π covering the region. The normal to the surface is

(∂𝐫(ρ, φ)/∂ρ) × (∂𝐫(ρ, φ)/∂φ)

since both derivatives represent vectors lying on the surface. We can now integrate the curl of any vector field with the domain set to this plane surface.
2. The cone surface x₃² = x₁² + x₂², given that x₁² + x₂² ≀ 1. The radius vector constrained to this region is,

𝐫(q, φ) = q sin(π/4) cos φ 𝐞₁ + q sin(π/4) sin φ 𝐞₂ + q cos(π/4) 𝐞₃

contained within the surface for 0 ≀ q ≀ √2, 0 ≀ φ ≀ 2π. Here the two surface vectors are ∂𝐫(q, φ)/∂q and ∂𝐫(q, φ)/∂φ, whose cross product gives the normal to the surface.
3. Lastly, we can take the sphere centered at the origin with radius √2, as shown in Figure ___. The radius vector constrained to the surface is,

𝐫(Ξ, φ) = √2 sin Ξ cos φ 𝐞₁ + √2 sin Ξ sin φ 𝐞₂ + √2 cos Ξ 𝐞₃

with π/4 ≀ Ξ ≀ π, 0 ≀ φ ≀ 2π. The surface normal can be evaluated from (∂𝐫(Ξ, φ)/∂Ξ) × (∂𝐫(Ξ, φ)/∂φ), as in the Mathematica code below.
Computations using the three surfaces give the same value of 5π/4, which is also the value of the field integrated over the closed path shown earlier. For hand calculations, simpler functions may be used.
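The surface-integral side can be checked the same way. The sketch below uses the first (flat disk) surface; the curl of the field in the text, computed by hand, is curl 𝐯 = (−z cos x sin y, z sin x cos y, 1 + x + x²).

```python
import math

def curl_v(x, y, z):
    # Hand-computed curl of the field used in the text.
    return (-z*math.cos(x)*math.sin(y),
            z*math.sin(x)*math.cos(y),
            1 + x + x**2)

def disk_integral(nr=400, nphi=400):
    # Flat disk x1^2 + x2^2 <= 1 on x3 = 1; normal e3, dS = rho drho dphi,
    # so only the e3 component of the curl contributes.  Midpoint rule.
    total = 0.0
    for i in range(nr):
        rho = (i + 0.5) / nr
        for j in range(nphi):
            phi = 2*math.pi*(j + 0.5)/nphi
            x, y = rho*math.cos(phi), rho*math.sin(phi)
            total += curl_v(x, y, 1.0)[2] * rho
    return total * (1.0/nr) * (2*math.pi/nphi)

print(disk_integral())   # close to 5*pi/4, matching the line integral
```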
Green's Theorem
Green's theorem can be viewed as a two-dimensional version of the 3-D Stokes' theorem: Given two scalar fields f₁(x₁, x₂), f₂(x₁, x₂), defined over a two-dimensional Euclidean point space, we can form the vector field,

𝐯(𝐱) = f₁(x₁, x₂) 𝐞₁ + f₂(x₁, x₂) 𝐞₂

whose curl is

curl 𝐯(𝐱) = | 𝐞₁        𝐞₂        𝐞₃     |
            | ∂/∂x₁     ∂/∂x₂     ∂/∂x₃  |
            | f₁(x₁,x₂) f₂(x₁,x₂) 0      |
= (∂f₂/∂x₁ − ∂f₁/∂x₂) 𝐞₃
From Stokes' theorem,

∮_Γ 𝐯(𝐱) · d𝐫 = ∬_𝒮 (curl 𝐯) · d𝐬

On the boundary, d𝐫 = dx₁ 𝐞₁ + dx₂ 𝐞₂, and on the plane surface d𝐬 = 𝐞₃ dx₁ dx₂, so that,

∮_Γ (f₁(x₁, x₂) dx₁ + f₂(x₁, x₂) dx₂) = ∬_𝒮 (∂f₂/∂x₁ − ∂f₁/∂x₂) dx₁ dx₂
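Green's theorem can likewise be checked numerically. The sketch below uses the hypothetical pair f₁ = −x₂³, f₂ = x₁³ on the unit disk, for which both sides equal 3π/2.

```python
import math

def boundary(n=20000):
    # Line integral of f1 dx1 + f2 dx2 around the unit circle.
    total = 0.0
    for k in range(n):
        t = 2*math.pi*k/n
        x1, x2 = math.cos(t), math.sin(t)
        dx1, dx2 = -math.sin(t), math.cos(t)   # d(x1)/dt, d(x2)/dt
        total += (-x2**3)*dx1 + (x1**3)*dx2
    return total * (2*math.pi/n)

def area(nr=400, nphi=400):
    # Area integral of df2/dx1 - df1/dx2 = 3(x1^2 + x2^2) = 3 r^2 over the disk.
    total = 0.0
    for i in range(nr):
        r = (i + 0.5)/nr
        for j in range(nphi):
            total += 3*r**2 * r     # integrand times the Jacobian r
    return total * (1.0/nr) * (2*math.pi/nphi)

print(boundary(), area())   # both close to 3*pi/2
```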
Gauss Divergence Theorem
The divergence theorem is central to several other results in continuum mechanics. It relates the volume integral of a field to the surface integral of the tensor product of the field and the unit normal. We present here a generalized form [Ogden], which states that, for a tensor field 𝐓, the volume integral in the region Ω ⊂ ℰ,

∫_Ω (grad 𝐓) dv = ∫_∂Ω 𝐓 ⊗ 𝐧 ds

where 𝐧 is the outward-drawn normal to ∂Ω, the boundary of Ω. The tensor product applies when the field is of order one or higher: vectors, second- and higher-order tensors. For scalars, it degenerates, as expected, to a simple multiplication by a scalar. We will now derive several more familiar statements of this theorem from equation ____ above:
Vector field.
Replacing the tensor with the vector field 𝐟 and taking the trace on both sides, remembering that the integral operator is linear and therefore the trace of an integral equals the integral of its trace, we have,

tr ∫_Ω (grad 𝐟) dv = tr ∫_∂Ω 𝐟 ⊗ 𝐧 ds

so that,

∫_Ω (div 𝐟) dv = ∫_∂Ω 𝐟 · 𝐧 ds
This states that the integral of the divergence of a vector field over a volume equals the integral, over the bounding surface, of the same field contracted with the outward-drawn normal. This is the usual form of the Gauss theorem. We can proceed immediately to the form of this theorem that applies to a scalar function:
Scalar Fields
Consider a scalar field φ and a constant vector 𝐚. We apply the vector form of the divergence theorem to the product φ𝐚:

∫_Ω div(φ𝐚) dv = ∫_∂Ω φ𝐚 · 𝐧 ds = 𝐚 · ∫_∂Ω φ𝐧 ds

For the LHS, note that div(φ𝐚) = tr grad(φ𝐚), and

grad(φ𝐚) = (φa_i),_j 𝐞_i ⊗ 𝐞_j = a_i φ,_j 𝐞_i ⊗ 𝐞_j

the trace of which is,

a_i φ,_j 𝐞_i · 𝐞_j = a_i φ,_j ÎŽ_ij = a_i φ,_i = 𝐚 · grad φ

For the arbitrary constant vector 𝐚, we therefore have that,

∫_Ω div(φ𝐚) dv = 𝐚 · ∫_Ω grad φ dv = 𝐚 · ∫_∂Ω φ𝐧 ds

so that,

∫_Ω grad φ dv = ∫_∂Ω φ𝐧 ds
Second-Order Tensors
We can apply the general form above to a second-order tensor field, 𝐓(𝐱) = T_ij(𝐱) 𝐞_i ⊗ 𝐞_j. Applying equation ___ as before,

∫_Ω T_ij,_k 𝐞_i ⊗ 𝐞_j ⊗ 𝐞_k dv = ∫_∂Ω T_ij n_k 𝐞_i ⊗ 𝐞_j ⊗ 𝐞_k ds

Contracting, we have

∫_Ω T_ij,_k (𝐞_i ⊗ 𝐞_j) 𝐞_k dv = ∫_∂Ω T_ij n_k (𝐞_i ⊗ 𝐞_j) 𝐞_k ds

∫_Ω T_ij,_k ÎŽ_jk 𝐞_i dv = ∫_∂Ω T_ij n_k ÎŽ_jk 𝐞_i ds

∫_Ω T_ij,_j 𝐞_i dv = ∫_∂Ω T_ij n_j 𝐞_i ds
which is the same as,

∫_Ω (div 𝐓) dv = ∫_∂Ω 𝐓𝐧 ds

relating the volume integral of the divergence of a tensor field to the same field, contracted with the unit normal to the boundary, integrated over the surface of the same volume.
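A quick numerical check of the vector form is possible. The sketch below uses the hypothetical field 𝐯 = (x², y², z²) on the unit cube, where the surface integral can be read off by hand: 𝐯 · 𝐧 vanishes on the three faces through the origin and equals 1 on each of the faces x = 1, y = 1, z = 1.

```python
def volume_integral(n=50):
    # div v = 2(x + y + z); midpoint rule is exact for a linear integrand.
    h = 1.0/n
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x, y, z = (i+0.5)*h, (j+0.5)*h, (k+0.5)*h
                total += 2*(x + y + z)
    return total * h**3

def surface_integral():
    # v.n is 1 on each of the three unit-area faces x=1, y=1, z=1.
    return 3.0

print(volume_integral(), surface_integral())   # both close to 3
```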
Curvilinear Coordinates
Tensors provide a unified way to express most of the quantities and objects we come across in continuum mechanics. The material objects we deal with abide in the Euclidean 3D space. For computational purposes, we refer to this space using a global chart, or coordinate system. Apart from excursions into the advanced-topics section of each chapter, we have so far restricted consideration to the simplest of such systems: the Cartesian system with orthonormal basis vectors (ONB).
There are oblique Cartesian coordinates, which are the same as the ONB except that their axes do not necessarily meet at right angles. These oblique systems are not our main concern. The Cartesian system is useful and easy to use. Reasons for this ease of use include:
1. The base vectors meet at right angles. They are not only mutually independent, as base vectors must be, but they also have no components along one another: their scalar products with one another vanish.
2. The basis vectors are spatial constants. They remain normal to each other and always point in the same directions at every point.
3. Because of these, derivatives of tensors expressed in terms of such bases only concern the coefficients, as the base vectors are constants.
4. In addition, the position vectors in Cartesian coordinates are linear in the coordinate variables.
In curvilinear coordinate systems, the basis vectors are spatial variables; they are not necessarily unit vectors, and they are not necessarily orthogonal.
For the purpose of this work, we restrict attention to coordinate systems that are orthogonal. There are very useful and practical systems that meet this criterion: for example, the cylindrical, spherical and spheroidal systems we have already seen are curvilinear. While they may not necessarily meet the normality criterion, they are orthogonal. Yet we have to cope with the fact that the basis vectors are variables.
In the next section, we derive the derivatives of tensor fields with such varying bases. However, it is no longer necessary to go through the tedium of such computations unless a specific application compels one to do so.
In this section, we shall show some of the principal results and principles for working with non-Cartesian systems that obviate the necessity of computing derivatives with varying basis vectors.
The Covariant Derivative
We defined the gradient in Cartesian systems as,

grad(∎) = (∎),_α ⊗ 𝐞_α

The comma in the above equation has been interpreted as a partial derivative thus far. In fact, this equation is valid in every coordinate system, including the orthogonal curvilinear coordinates we are considering. All we need to do is change our interpretation of the equation.
From now on, when we refer to coordinate systems other than Cartesian, the comma operator will be interpreted as the covariant derivative. The full definition of the covariant derivative is in the advanced section; the explanation given here should suffice:
A covariant derivative is simply one in which all the terms that would have arisen from the spatial variability of the basis vectors are already added. Consequently, when we write,

grad(∎) = (∎),_α ⊗ 𝐠^α
we have an expression that is valid everywhere. Specifically we can represent a second-order
tensor and write,
grad ð = (ðððð ðâð ð),ðŒâð ðŒ = (ð ð
ð ð ðâð ð),ðŒâð ðŒ = (ðð ðð ðâð ð),ðŒâð ðŒ
= (ðððð ðâð ð),ðŒâð ðŒ
which are four different ways of expressing the same gradient in curvilinear systems, they work
exactly in the same way as the ordinary Cartesian experience with fixed vectors. The covariant
derivative has so taken care of the variability of base issues that the vector bases behave like
constants under covariant differentiation! It does it by computing extra terms, called Christoffel
33
symbols, that add more terms to the expression to account for the variation of the basis vectors.
Just like in the Cartesian system, therefore, there are no further terms to worry about.
The heavy lifting therefore transfers to the computation of the covariant derivative and the evaluation of the physical components of the tensor. We do not emphasize these computations, as they are easily programmed. One advantage of a symbolic algebra system is that it performs such computation just for the asking; you do not need to do much programming. A few examples will be given shortly. Remember, all you need to worry about are the equations as they exist in Cartesian systems. With correct interpretation, they are valid in every system. And when you need to compute actual values for particular coordinate systems, a symbolic algebra system like Mathematica delivers them easily.
The consequence is that many textbooks, including modern ones, devote a large number of pages to items that are handled completely by software, to an extent far beyond what can reasonably be placed in textbooks.
If there is any need (very doubtful) to compute the Christoffel symbols, that too can be programmed; there is no need for manual computation. If you want manual computations despite all this, read the advanced part of this chapter, where they are explained.
Gradient, Divergence and Curl in Curvilinear Coordinates
In this section we will show how to find these differential operations in the common cylindrical, spherical and oblate spheroidal systems. The extension to any popular coordinate system is immediate. Furthermore, you will see that Mathematica allows you to define a completely new coordinate system yourself. And, if your definition is correctly specified, it will compute all the differential operators you need for you!
The gradient is easy to compute manually, and we will do so right away:
For a scalar field f(x₁, x₂, x₃) in Cartesian coordinates, we already know that,

grad f = (∂f/∂x₁) 𝐞₁ + (∂f/∂x₂) 𝐞₂ + (∂f/∂x₃) 𝐞₃
The same expression can be obtained in cylindrical coordinates by observing that, in that system, the position vector is,

𝐫 = r cos φ 𝐞₁ + r sin φ 𝐞₂ + z 𝐞_z = r 𝐞_r + z 𝐞_z

so that

𝐠_r = ∂𝐫/∂r = 𝐞_r;  𝐠_φ = ∂𝐫/∂φ = r 𝐞_φ;  𝐠_z = ∂𝐫/∂z = 𝐞_z

The natural basis vectors,

𝐠_i ≡ ∂𝐫/∂Ξ^i

obtained by differentiating the position vector with respect to the coordinates Ξ^i (in this case Ξ¹ = r, Ξ² = φ and Ξ³ = z) are all unit vectors except the middle one. Suppose we did not know that. For each system, to obtain the physically consistent gradient in physical components, we divide each term by the magnitude of the corresponding basis vector. In this case, therefore,

h₁ = √(𝐠₁ · 𝐠₁) = 1
h₂ = √(𝐠₂ · 𝐠₂) = r
h₃ = √(𝐠₃ · 𝐠₃) = 1
If we write 𝐠'_α, α = 1, 2, 3 for the normalized basis vectors in the curvilinear system, the gradient is therefore,

grad f = ∑_{α=1}^{3} (1/h_α) f,_α 𝐠'_α
= (1/h₁)(∂f/∂Ξ¹) 𝐠'₁ + (1/h₂)(∂f/∂Ξ²) 𝐠'₂ + (1/h₃)(∂f/∂Ξ³) 𝐠'₃
= (∂f/∂r) 𝐞_r + (1/r)(∂f/∂φ) 𝐞_φ + (∂f/∂z) 𝐞_z.

Notice that the summation convention cannot be used here, because the index α appears more than twice; we therefore re-introduce the explicit summation symbol. This is the gradient in the normalized natural basis. Furthermore, we have so far dealt only with a scalar field. For such a field, the covariant derivative coincides with the partial derivative; hence all we need do here is normalize the basis vectors and divide each term by the magnitude of the corresponding basis vector to obtain the physically consistent expression.
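The cylindrical formula can be checked against the Cartesian gradient. The sketch below uses a hypothetical scalar field f = x₁²x₂ + x₃ and finite differences, comparing the physical cylindrical components against the Cartesian gradient projected onto (𝐞_r, 𝐞_φ, 𝐞_z).

```python
import math

def f_cart(x, y, z):
    # Hypothetical scalar field for the check.
    return x*x*y + z

def grad_cart(x, y, z, h=1e-6):
    # Cartesian gradient by central differences.
    return ((f_cart(x+h,y,z) - f_cart(x-h,y,z))/(2*h),
            (f_cart(x,y+h,z) - f_cart(x,y-h,z))/(2*h),
            (f_cart(x,y,z+h) - f_cart(x,y,z-h))/(2*h))

def grad_cyl(r, phi, z, h=1e-6):
    g = lambda r, phi, z: f_cart(r*math.cos(phi), r*math.sin(phi), z)
    # physical components: (df/dr, (1/r) df/dphi, df/dz)
    return ((g(r+h,phi,z) - g(r-h,phi,z))/(2*h),
            (g(r,phi+h,z) - g(r,phi-h,z))/(2*h)/r,
            (g(r,phi,z+h) - g(r,phi,z-h))/(2*h))

r, phi, z = 1.3, 0.7, 0.4
gx, gy, gz = grad_cart(r*math.cos(phi), r*math.sin(phi), z)
# project the Cartesian gradient onto the unit vectors e_r, e_phi, e_z
er = (math.cos(phi), math.sin(phi), 0.0)
ep = (-math.sin(phi), math.cos(phi), 0.0)
proj = (gx*er[0] + gy*er[1], gx*ep[0] + gy*ep[1], gz)
print(proj, grad_cyl(r, phi, z))   # the two triples agree
```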
In spherical coordinates, with no further ado, we can write:

grad f = (1/h₁)(∂f/∂Ξ¹) 𝐠'₁ + (1/h₂)(∂f/∂Ξ²) 𝐠'₂ + (1/h₃)(∂f/∂Ξ³) 𝐠'₃
= (∂f/∂r) 𝐞_r + (1/r)(∂f/∂Ξ) 𝐞_Ξ + (1/(r sin Ξ))(∂f/∂φ) 𝐞_φ.
To find the gradient in any coordinate system, simply invoke the command as follows:
Comparing the above output to the previous result, it is obvious that, in cylindrical coordinates,

∂f/∂r = f^(1,0,0)(r, φ, z),  ∂f/∂φ = f^(0,1,0)(r, φ, z),  ∂f/∂z = f^(0,0,1)(r, φ, z)

and that the result is given as a list, with the understanding that we are referring to the normalized basis vectors in each case.
Gradient of a vector
The gradient of a vector is a tensor. To compute it manually, we would have to carry out the covariant derivative with the attendant Christoffel symbols. How about doing it by simply writing the function in the Mathematica notebook as follows?
You are able to get any vector operation on a tensor function with such simple commands.
Examples
3.1 Given that α(t) ∈ ℝ, 𝐮(t), 𝐯(t) ∈ 𝕍, and 𝐒(t), 𝐓(t) ∈ 𝕋 are all functions of a scalar variable t, show that
(a) d(α𝐮)/dt = α d𝐮/dt + (dα/dt) 𝐮,
(b) d(𝐮 · 𝐯)/dt = (d𝐮/dt) · 𝐯 + 𝐮 · (d𝐯/dt),
(c) d(𝐮 × 𝐯)/dt = (d𝐮/dt) × 𝐯 + 𝐮 × (d𝐯/dt), and
(d) d(𝐮 ⊗ 𝐯)/dt = (d𝐮/dt) ⊗ 𝐯 + 𝐮 ⊗ (d𝐯/dt)
For (d),

d(𝐮 ⊗ 𝐯)/dt = lim_{h→0} [𝐮(t+h) ⊗ 𝐯(t+h) − 𝐮(t) ⊗ 𝐯(t)] / h
= lim_{h→0} [𝐮(t+h) ⊗ 𝐯(t+h) − 𝐮(t) ⊗ 𝐯(t+h) + 𝐮(t) ⊗ 𝐯(t+h) − 𝐮(t) ⊗ 𝐯(t)] / h
= lim_{h→0} [𝐮(t+h) ⊗ 𝐯(t+h) − 𝐮(t) ⊗ 𝐯(t+h)] / h + lim_{h→0} [𝐮(t) ⊗ 𝐯(t+h) − 𝐮(t) ⊗ 𝐯(t)] / h
= (lim_{h→0} [𝐮(t+h) − 𝐮(t)] / h) ⊗ (lim_{h→0} 𝐯(t+h)) + 𝐮(t) ⊗ lim_{h→0} [𝐯(t+h) − 𝐯(t)] / h
= (d𝐮/dt) ⊗ 𝐯 + 𝐮 ⊗ (d𝐯/dt).
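These product rules are easy to confirm numerically. The sketch below checks rule (b), d(𝐮 · 𝐯)/dt = 𝐮' · 𝐯 + 𝐮 · 𝐯', by finite differences for hypothetical curves u(t), v(t).

```python
import math

def u(t):
    # Hypothetical vector-valued function of t.
    return (math.cos(t), t*t, math.exp(0.3*t))

def v(t):
    # Another hypothetical vector-valued function of t.
    return (math.sin(t), 1.0/(1 + t*t), t)

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def d_dt(f, t, h=1e-6):
    # central-difference derivative of a scalar or component function
    return (f(t+h) - f(t-h)) / (2*h)

t = 0.9
lhs = d_dt(lambda s: dot(u(s), v(s)), t)
du = tuple(d_dt(lambda s: u(s)[i], t) for i in range(3))
dv = tuple(d_dt(lambda s: v(s)[i], t) for i in range(3))
rhs = dot(du, v(t)) + dot(u(t), dv)
print(lhs, rhs)   # the two values agree closely
```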
3.2 Given that α(t) ∈ ℝ, 𝐮(t), 𝐯(t) ∈ 𝕍, and 𝐒(t), 𝐓(t) ∈ 𝕋 are all functions of a scalar variable t, show that
(a) d(𝐒 + 𝐓)/dt = d𝐒/dt + d𝐓/dt,
(b) d(𝐒𝐓)/dt = (d𝐒/dt) 𝐓 + 𝐒 (d𝐓/dt),
(c) d(𝐒𝐮)/dt = (d𝐒/dt) 𝐮 + 𝐒 (d𝐮/dt), and
(d) d(𝐓ᵀ)/dt = (d𝐓/dt)ᵀ
ðð¡(ðð) = lim
ââ0
ð(ð¡ + â)ð(ð¡ + â) â ð(ð¡)ð(ð¡)
ð¡
= limââ0
ð(ð¡ + â)ð(ð¡ + â) â ð(ð¡)ð(ð¡ + â) + ð(ð¡)ð(ð¡ + â) â ð(ð¡)ð(ð¡)
ð¡
37
= limââ0
ð(ð¡ + â)ð(ð¡ + â) â ð(ð¡)ð(ð¡ + â)
ð¡+ limââ0
ð(ð¡)ð(ð¡ + â) â ð(ð¡)ð(ð¡)
ð¡
= (limââ0
ð(ð¡ + â) â ð(ð¡)
ð¡) (lim
ââ0ð(ð¡ + â)) + ð(ð¡) lim
ââ0
ð(ð¡ + â) â ð(ð¡)
ð¡
=ð
ðð¡(ðð) =
ðð
ðð¡ð + ð
ðð
ðð¡.
3.3 Given that 𝐐(t)𝐐ᵀ(t) = 𝐈, the identity tensor, show (a) that (d𝐐/dt) 𝐐ᵀ is an antisymmetric tensor, and (b) that 𝐐ᵀ (d𝐐/dt) is an antisymmetric tensor.
Differentiating 𝐐𝐐ᵀ = 𝐈,

d(𝐐𝐐ᵀ)/dt = (d𝐐/dt) 𝐐ᵀ + 𝐐 (d𝐐ᵀ/dt) = d𝐈/dt = 𝟎

Consequently,

(d𝐐/dt) 𝐐ᵀ = −𝐐 (d𝐐ᵀ/dt) = −((d𝐐/dt) 𝐐ᵀ)ᵀ

So the tensor 𝛀 = (d𝐐/dt) 𝐐ᵀ is the negative of its own transpose; hence it is skew.
Similarly,

𝐐𝐐ᵀ = 𝐈 = 𝐐ᵀ𝐐

Differentiating as before,

d(𝐐ᵀ𝐐)/dt = (d𝐐ᵀ/dt) 𝐐 + 𝐐ᵀ (d𝐐/dt) = 𝟎

so that,

𝐐ᵀ (d𝐐/dt) = −(d𝐐ᵀ/dt) 𝐐 = −(𝐐ᵀ (d𝐐/dt))ᵀ

The tensor 𝛀 = 𝐐ᵀ (d𝐐/dt) is the negative of its own transpose, hence skew, as required.
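A concrete illustration: for a rotation 𝐐(t) about the x₃-axis, (d𝐐/dt)𝐐ᵀ should come out skew. A minimal sketch with the analytic derivative:

```python
import math

def Q(t):
    # rotation by angle t about the x3 axis
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def dQ(t):
    # analytic derivative dQ/dt
    c, s = math.cos(t), math.sin(t)
    return [[-s, -c, 0.0], [c, -s, 0.0], [0.0, 0.0, 0.0]]

def matmul_T(A, B):
    # computes A @ B^T for 3x3 nested lists
    return [[sum(A[i][k]*B[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]

t = 0.6
W = matmul_T(dQ(t), Q(t))
print(W)   # skew: W = -W^T, with W[1][0] = 1 and W[0][1] = -1
```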
3.4 Show that ∂ tr(𝐒ⁿ)/∂𝐒 = n(𝐒ⁿ⁻¹)ᵀ
The cases n = 1, n = 2 are already provided in the text. When n = 3,

Df(𝐒, d𝐒) = d/dα f(𝐒 + α d𝐒)|_{α=0} = d/dα tr{(𝐒 + α d𝐒)³}|_{α=0}
= d/dα tr{(𝐒 + α d𝐒)(𝐒 + α d𝐒)(𝐒 + α d𝐒)}|_{α=0}
= tr [d/dα (𝐒 + α d𝐒)(𝐒 + α d𝐒)(𝐒 + α d𝐒)]|_{α=0}
= tr[d𝐒 (𝐒 + α d𝐒)(𝐒 + α d𝐒) + (𝐒 + α d𝐒) d𝐒 (𝐒 + α d𝐒) + (𝐒 + α d𝐒)(𝐒 + α d𝐒) d𝐒]|_{α=0}
= tr[d𝐒 𝐒 𝐒 + 𝐒 d𝐒 𝐒 + 𝐒 𝐒 d𝐒] = 3(𝐒²)ᵀ : d𝐒

It easily follows by induction that ∂ tr(𝐒ⁿ)/∂𝐒 = n(𝐒ⁿ⁻¹)ᵀ.
3.5 Define the magnitude of the tensor 𝐒 as |𝐒| = √tr(𝐒𝐒ᵀ). Show that ∂|𝐒|/∂𝐒 = 𝐒/|𝐒|
For a scalar variable α, the derivative of a scalar function f(𝐒) of a tensor satisfies

(∂f(𝐒)/∂𝐒) : 𝐓 = lim_{α→0} d/dα f(𝐒 + α𝐓)

for any arbitrary tensor 𝐓. In the case of f(𝐒) = |𝐒|,

(∂|𝐒|/∂𝐒) : 𝐓 = lim_{α→0} d/dα |𝐒 + α𝐓|

and

|𝐒 + α𝐓| = √tr((𝐒 + α𝐓)(𝐒 + α𝐓)ᵀ) = √tr(𝐒𝐒ᵀ + α𝐓𝐒ᵀ + α𝐒𝐓ᵀ + α²𝐓𝐓ᵀ)

Note that everything under the root sign here is scalar and that the trace operation is linear. Consequently, we can write,

lim_{α→0} d/dα |𝐒 + α𝐓| = lim_{α→0} [tr(𝐓𝐒ᵀ) + tr(𝐒𝐓ᵀ) + 2α tr(𝐓𝐓ᵀ)] / [2√tr(𝐒𝐒ᵀ + α𝐓𝐒ᵀ + α𝐒𝐓ᵀ + α²𝐓𝐓ᵀ)] = (2𝐒 : 𝐓)/(2√(𝐒 : 𝐒)) = (𝐒/|𝐒|) : 𝐓

so that,

(∂|𝐒|/∂𝐒) : 𝐓 = (𝐒/|𝐒|) : 𝐓

or,

∂|𝐒|/∂𝐒 = 𝐒/|𝐒|

as required, since 𝐓 is arbitrary.
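Since |𝐒| is simply the square root of the sum of squares of the components, this result is easy to confirm by entrywise finite differences on a hypothetical 3×3 tensor:

```python
import math

S = [[1.0, 2.0, 0.5],
     [0.3, -1.0, 4.0],
     [2.0, 0.0, 1.5]]   # hypothetical tensor for the check

def norm(M):
    # |S| = sqrt(tr(S S^T)) = sqrt of the sum of squared entries
    return math.sqrt(sum(M[i][j]**2 for i in range(3) for j in range(3)))

def fd_grad(M, h=1e-7):
    # central-difference derivative of |S| with respect to each entry
    G = [[0.0]*3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            P = [row[:] for row in M]; P[i][j] += h
            Q = [row[:] for row in M]; Q[i][j] -= h
            G[i][j] = (norm(P) - norm(Q)) / (2*h)
    return G

G = fd_grad(S)
n = norm(S)
print(G)   # each entry close to S[i][j] / |S|
```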
3.6 Without breaking down into components, use Liouville's theorem, d det(𝐒)/dα = det(𝐒) tr((d𝐒/dα) 𝐒⁻¹), to establish the fact that ∂ det(𝐒)/∂𝐒 = 𝐒ᶜ
Start from Liouville's theorem, given a scalar parameter such that 𝐒 = 𝐒(α),

d(det 𝐒)/dα = det 𝐒 tr[(d𝐒/dα) 𝐒⁻¹] = [(det 𝐒) 𝐒⁻ᵀ] : (d𝐒/dα)

By the chain rule,

d(det 𝐒)/dα = [∂(det 𝐒)/∂𝐒] : (d𝐒/dα)

It therefore follows that,

[∂(det 𝐒)/∂𝐒 − (det 𝐒) 𝐒⁻ᵀ] : (d𝐒/dα) = 0

Hence

∂(det 𝐒)/∂𝐒 = (det 𝐒) 𝐒⁻ᵀ = 𝐒ᶜ
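The identity ∂(det 𝐒)/∂𝐒 = 𝐒ᶜ can be checked entrywise: perturbing a single component of a hypothetical 3×3 tensor and differencing the determinant should reproduce the corresponding cofactor.

```python
S = [[2.0, 1.0, 0.0],
     [0.5, 3.0, 1.0],
     [1.0, 0.0, 4.0]]   # hypothetical tensor for the check

def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion along the first row
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def cofactor(M):
    C = [[0.0]*3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            minor = [[M[r][c] for c in range(3) if c != j]
                     for r in range(3) if r != i]
            C[i][j] = (-1)**(i+j) * (minor[0][0]*minor[1][1]
                                     - minor[0][1]*minor[1][0])
    return C

def fd(M, h=1e-6):
    # central-difference derivative of det with respect to each entry
    G = [[0.0]*3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            P = [row[:] for row in M]; P[i][j] += h
            Q = [row[:] for row in M]; Q[i][j] -= h
            G[i][j] = (det3(P) - det3(Q)) / (2*h)
    return G

print(fd(S))   # entrywise close to cofactor(S)
```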
3.7 If 𝐒 is invertible, show that ∂(log det 𝐒)/∂𝐒 = 𝐒⁻ᵀ

∂(log det 𝐒)/∂𝐒 = [∂(log det 𝐒)/∂ det 𝐒] [∂ det 𝐒/∂𝐒]
= (1/det 𝐒) 𝐒ᶜ = (1/det 𝐒)(det 𝐒) 𝐒⁻ᵀ
= 𝐒⁻ᵀ
3.8 (a) Use the Cayley–Hamilton theorem to express the third invariant in terms of traces only. (b) Use this result and the fact that 𝐒ᶜ = (𝐒² − I₁𝐒 + I₂𝐈)ᵀ (2.19) to show that the derivative of the determinant is the cofactor.
(a) Given a tensor 𝐒, by the Cayley–Hamilton theorem,

𝐒³ − I₁𝐒² + I₂𝐒 − I₃𝐈 = 𝟎

Note that I₁ = I₁(𝐒), I₂ = I₂(𝐒) and I₃ = I₃(𝐒), the first, second and third invariants of 𝐒, are all scalar functions of the tensor 𝐒. Taking the trace of the above equation,

I₁(𝐒³) = I₁(𝐒) I₁(𝐒²) − I₂(𝐒) I₁(𝐒) + 3I₃(𝐒)
= I₁(𝐒)(I₁²(𝐒) − 2I₂(𝐒)) − I₁(𝐒) I₂(𝐒) + 3I₃(𝐒)
= I₁³(𝐒) − 3I₁(𝐒) I₂(𝐒) + 3I₃(𝐒)

or,

I₃(𝐒) = (1/3)(I₁(𝐒³) − I₁³(𝐒) + 3I₁(𝐒) I₂(𝐒))
= (1/3)(I₁(𝐒³) − I₁³(𝐒) + (3/2) I₁(𝐒)(I₁²(𝐒) − I₁(𝐒²)))
= (1/6)(2I₁(𝐒³) + I₁³(𝐒) − 3I₁(𝐒) I₁(𝐒²))

which shows that the third invariant is itself expressible in terms of traces only. It is therefore invariant in value under coordinate transformation.
(b) Taking the Fréchet derivative of the third-invariant expression above, noting that ∂ tr(𝐒ⁿ)/∂𝐒 = n(𝐒ⁿ⁻¹)ᵀ,

6 ∂I₃(𝐒)/∂𝐒 = 6𝐒²ᵀ + 3I₁²(𝐒) 𝐈 − 6I₁(𝐒) 𝐒ᵀ − 3I₁(𝐒²) 𝐈

so that

∂I₃(𝐒)/∂𝐒 = 𝐒²ᵀ − I₁(𝐒) 𝐒ᵀ + (1/2)(I₁²(𝐒) − I₁(𝐒²)) 𝐈
= 𝐒²ᵀ − I₁(𝐒) 𝐒ᵀ + I₂(𝐒) 𝐈
= (𝐒² − I₁(𝐒) 𝐒 + I₂(𝐒) 𝐈)ᵀ
= 𝐒ᶜ
3.9 If 𝐒 is invertible, show that ∂(log det(𝐒⁻¹))/∂𝐒 = −𝐒⁻ᵀ

∂(log det(𝐒⁻¹))/∂𝐒 = [∂(log det(𝐒⁻¹))/∂ det(𝐒⁻¹)] [∂ det(𝐒⁻¹)/∂𝐒⁻¹] : [∂𝐒⁻¹/∂𝐒]
= −(1/det(𝐒⁻¹)) (𝐒⁻¹)ᶜ (𝐒⁻¹ ⊠ 𝐒⁻ᵀ)
= −(1/det(𝐒⁻¹)) det(𝐒⁻¹) 𝐒ᵀ (𝐒⁻¹ ⊠ 𝐒⁻ᵀ)
= −𝐒⁻ᵀ 𝐒ᵀ 𝐒⁻ᵀ = −𝐒⁻ᵀ

An easier method that does not require the evaluation of a fourth-order tensor:

∂(log det(𝐒⁻¹))/∂𝐒 = [∂(log det(𝐒⁻¹))/∂ det(𝐒⁻¹)] [∂ det(𝐒⁻¹)/∂ det(𝐒)] [∂ det(𝐒)/∂𝐒]
= (1/det(𝐒⁻¹)) [∂(1/det 𝐒)/∂ det 𝐒] 𝐒ᶜ
= −(det 𝐒 / det²(𝐒)) 𝐒ᶜ
= −𝐒⁻ᵀ
3.10 Given that 𝐀 and 𝐁 are constant tensors, show that ∂ tr(𝐀𝐒𝐁ᵀ)/∂𝐒 = 𝐀ᵀ𝐁
First observe that tr(𝐀𝐒𝐁ᵀ) = tr(𝐁ᵀ𝐀𝐒). If we write 𝐂 ≡ 𝐁ᵀ𝐀, it is obvious from the above that ∂ tr(𝐂𝐒)/∂𝐒 = 𝐂ᵀ. Therefore,

∂ tr(𝐀𝐒𝐁ᵀ)/∂𝐒 = (𝐁ᵀ𝐀)ᵀ = 𝐀ᵀ𝐁
3.11 Given that 𝐀 and 𝐁 are constant tensors, show that ∂ tr(𝐀𝐒ᵀ𝐁ᵀ)/∂𝐒 = 𝐁ᵀ𝐀
Observe that

tr(𝐀𝐒ᵀ𝐁ᵀ) = tr(𝐁ᵀ𝐀𝐒ᵀ)
= tr[𝐒(𝐁ᵀ𝐀)ᵀ]
= tr[(𝐁ᵀ𝐀)ᵀ𝐒]

[Transposition does not alter the trace; neither does a cyclic permutation. Ensure you understand why each equality here is true.] Consequently,

∂ tr(𝐀𝐒ᵀ𝐁ᵀ)/∂𝐒 = ∂ tr[(𝐁ᵀ𝐀)ᵀ𝐒]/∂𝐒 = [(𝐁ᵀ𝐀)ᵀ]ᵀ = 𝐁ᵀ𝐀
3.12 Let 𝐒 be a symmetric and positive-definite tensor, and let I₁(𝐒), I₂(𝐒) and I₃(𝐒) be the three principal invariants of 𝐒. Show, using components, that (a) ∂I₁(𝐒)/∂𝐒 = 𝐈, the identity tensor, (b) ∂I₂(𝐒)/∂𝐒 = I₁(𝐒)𝐈 − 𝐒 and (c) ∂I₃(𝐒)/∂𝐒 = I₃(𝐒) 𝐒⁻¹
(a) ∂I₁(𝐒)/∂𝐒 can be written in the invariant component form,

∂I₁(𝐒)/∂𝐒 = (∂I₁(𝐒)/∂S_ij) 𝐞_i ⊗ 𝐞_j

Recall that I₁(𝐒) = tr 𝐒 = S_αα, hence

∂I₁(𝐒)/∂𝐒 = (∂S_αα/∂S_ij) 𝐞_i ⊗ 𝐞_j = ÎŽ_iα ÎŽ_αj 𝐞_i ⊗ 𝐞_j = ÎŽ_ij 𝐞_i ⊗ 𝐞_j = 𝐈

which is the identity tensor, as expected.
(b) ∂I₂(𝐒)/∂𝐒 can similarly be written in the invariant component form,

∂I₂(𝐒)/∂𝐒 = (1/2)(∂/∂S_ij)[S_αα S_ÎČÎČ − S_αÎČ S_ÎČα] 𝐞_i ⊗ 𝐞_j

where we have utilized the fact that I₂(𝐒) = (1/2)[tr²𝐒 − tr 𝐒²]. Consequently,

∂I₂(𝐒)/∂𝐒 = (1/2)[ÎŽ_iα ÎŽ_αj S_ÎČÎČ + ÎŽ_iÎČ ÎŽ_ÎČj S_αα − ÎŽ_ÎČi ÎŽ_αj S_αÎČ − ÎŽ_αi ÎŽ_ÎČj S_ÎČα] 𝐞_i ⊗ 𝐞_j
= (1/2)[ÎŽ_ij S_ÎČÎČ + ÎŽ_ij S_αα − S_ji − S_ji] 𝐞_i ⊗ 𝐞_j
= I₁(𝐒)𝐈 − 𝐒

by the symmetry of 𝐒.
(c) det 𝐒 = (1/3!) e_ijk e_rst S_ir S_js S_kt. Differentiating with respect to S_αÎČ, we obtain,

(∂ det 𝐒/∂S_αÎČ) 𝐞_α ⊗ 𝐞_ÎČ = (1/3!) e_ijk e_rst [(∂S_ir/∂S_αÎČ) S_js S_kt + S_ir (∂S_js/∂S_αÎČ) S_kt + S_ir S_js (∂S_kt/∂S_αÎČ)] 𝐞_α ⊗ 𝐞_ÎČ
= (1/3!) e_ijk e_rst [ÎŽ_iα ÎŽ_rÎČ S_js S_kt + S_ir ÎŽ_jα ÎŽ_sÎČ S_kt + S_ir S_js ÎŽ_kα ÎŽ_tÎČ] 𝐞_α ⊗ 𝐞_ÎČ
= (1/3!) e_αjk e_ÎČst [S_js S_kt + S_js S_kt + S_js S_kt] 𝐞_α ⊗ 𝐞_ÎČ
= (1/2!) e_αjk e_ÎČst S_js S_kt 𝐞_α ⊗ 𝐞_ÎČ ≡ [𝐒ᶜ]_αÎČ 𝐞_α ⊗ 𝐞_ÎČ

which is the cofactor of [S_αÎČ]; since 𝐒 is symmetric and invertible, this equals I₃(𝐒) 𝐒⁻¹.
3.13 Divergence of a product: Given that φ is a scalar field and 𝐯 a vector field, show that div(φ𝐯) = (grad φ) · 𝐯 + φ div 𝐯

grad(φ𝐯) = (φv_i),_j 𝐞_i ⊗ 𝐞_j
= φ,_j v_i 𝐞_i ⊗ 𝐞_j + φ v_i,_j 𝐞_i ⊗ 𝐞_j
= 𝐯 ⊗ (grad φ) + φ grad 𝐯

Now, div(φ𝐯) = tr(grad(φ𝐯)). Taking the trace of the above, we have:

div(φ𝐯) = 𝐯 · (grad φ) + φ div 𝐯
3.14 Show that grad(𝐮 · 𝐯) = (grad 𝐮)ᵀ𝐯 + (grad 𝐯)ᵀ𝐮
𝐮 · 𝐯 = u_i v_i is a scalar sum of components.

grad(𝐮 · 𝐯) = (u_i v_i),_j 𝐞_j = u_i,_j v_i 𝐞_j + u_i v_i,_j 𝐞_j

Now grad 𝐮 = u_i,_j 𝐞_i ⊗ 𝐞_j; swapping the bases, we have that,

(grad 𝐮)ᵀ = u_i,_j (𝐞_j ⊗ 𝐞_i).

Writing 𝐯 = v_k 𝐞_k, we have that,

(grad 𝐮)ᵀ𝐯 = u_i,_j v_k (𝐞_j ⊗ 𝐞_i) 𝐞_k = u_i,_j v_k 𝐞_j ÎŽ_ik = u_i,_j v_i 𝐞_j

It is easy to show similarly that u_i v_i,_j 𝐞_j = (grad 𝐯)ᵀ𝐮. Clearly,

grad(𝐮 · 𝐯) = (u_i v_i),_j 𝐞_j = u_i,_j v_i 𝐞_j + u_i v_i,_j 𝐞_j = (grad 𝐮)ᵀ𝐯 + (grad 𝐯)ᵀ𝐮

as required.
3.16 Show that grad(𝐮 × 𝐯) = (𝐮 ×) grad 𝐯 − (𝐯 ×) grad 𝐮

𝐮 × 𝐯 = e_ijk u_j v_k 𝐞_i

Recall that the gradient of this vector is the tensor,

grad(𝐮 × 𝐯) = (e_ijk u_j v_k),_l 𝐞_i ⊗ 𝐞_l
= e_ijk u_j,_l v_k 𝐞_i ⊗ 𝐞_l + e_ijk u_j v_k,_l 𝐞_i ⊗ 𝐞_l
= −(𝐯 ×) grad 𝐮 + (𝐮 ×) grad 𝐯
3.17 Show that div(𝐮 × 𝐯) = 𝐯 · curl 𝐮 − 𝐮 · curl 𝐯
We already have the expression for grad(𝐮 × 𝐯) above; remember that

div(𝐮 × 𝐯) = tr[grad(𝐮 × 𝐯)]
= e_ijk u_j,_l v_k 𝐞_i · 𝐞_l + e_ijk u_j v_k,_l 𝐞_i · 𝐞_l
= e_ijk u_j,_i v_k + e_ijk u_j v_k,_i
= 𝐯 · curl 𝐮 − 𝐮 · curl 𝐯
3.18 Given a scalar point function φ and a vector field 𝐯, show that curl(φ𝐯) = φ curl 𝐯 + (grad φ) × 𝐯.

curl(φ𝐯) = e_ijk (φv_k),_j 𝐞_i
= e_ijk (φ,_j v_k + φ v_k,_j) 𝐞_i
= e_ijk φ,_j v_k 𝐞_i + e_ijk φ v_k,_j 𝐞_i
= (grad φ) × 𝐯 + φ curl 𝐯
3.19 Show that div(𝐮 ⊗ 𝐯) = (div 𝐯)𝐮 + (grad 𝐮)𝐯
𝐮 ⊗ 𝐯 is the tensor u_i v_j 𝐞_i ⊗ 𝐞_j. The gradient of this is the third-order tensor,

grad(𝐮 ⊗ 𝐯) = (u_i v_j),_k 𝐞_i ⊗ 𝐞_j ⊗ 𝐞_k

and by divergence, we mean the contraction of the last basis vector:

div(𝐮 ⊗ 𝐯) = (u_i v_j),_k (𝐞_i ⊗ 𝐞_j) 𝐞_k
= (u_i v_j),_k 𝐞_i ÎŽ_jk = (u_i v_j),_j 𝐞_i
= u_i,_j v_j 𝐞_i + u_i v_j,_j 𝐞_i
= (grad 𝐮)𝐯 + (div 𝐯)𝐮
3.20 For a scalar field φ and a tensor field 𝐓, show that grad(φ𝐓) = φ grad 𝐓 + 𝐓 ⊗ grad φ. Also show that div(φ𝐓) = φ div 𝐓 + 𝐓 grad φ.

grad(φ𝐓) = (φT_ij),_k 𝐞_i ⊗ 𝐞_j ⊗ 𝐞_k
= (φ,_k T_ij + φ T_ij,_k) 𝐞_i ⊗ 𝐞_j ⊗ 𝐞_k
= 𝐓 ⊗ grad φ + φ grad 𝐓

Furthermore, we can contract the last two bases and obtain,

div(φ𝐓) = (φ,_k T_ij + φ T_ij,_k) 𝐞_i (𝐞_j · 𝐞_k)
= (φ,_k T_ij + φ T_ij,_k) 𝐞_i ÎŽ_jk
= T_ij φ,_j 𝐞_i + φ T_ij,_j 𝐞_i
= 𝐓 grad φ + φ div 𝐓
3.21 For two arbitrary tensors 𝐓 and 𝐒, show that grad(𝐓𝐒) = (grad 𝐓ᵀ)ᵀ𝐒 + 𝐓 grad 𝐒

grad(𝐓𝐒) = (T_im S_mj),_α 𝐞_i ⊗ 𝐞_j ⊗ 𝐞_α
= (T_im,_α S_mj + T_im S_mj,_α) 𝐞_i ⊗ 𝐞_j ⊗ 𝐞_α
= (grad 𝐓ᵀ)ᵀ𝐒 + 𝐓 grad 𝐒
3.22 Show that curl(grad 𝐯)ᵀ = grad(curl 𝐯)
From a previous derivation, we can see that curl 𝐓 = e_ijk T_αk,_j 𝐞_i ⊗ 𝐞_α. Clearly,

curl 𝐓ᵀ = e_ijk T_kα,_j 𝐞_i ⊗ 𝐞_α

so that curl(grad 𝐯)ᵀ = e_ijk v_k,_αj 𝐞_i ⊗ 𝐞_α. But curl 𝐯 = e_ijk v_k,_j 𝐞_i. The gradient of this is,

grad(curl 𝐯) = (e_ijk v_k,_j),_α 𝐞_i ⊗ 𝐞_α = e_ijk v_k,_jα 𝐞_i ⊗ 𝐞_α = curl(grad 𝐯)ᵀ
3.23 For a second-order tensor 𝐓 = T_ij 𝐞_i ⊗ 𝐞_j, write the explicit expression for curl 𝐓 in Cartesian coordinates.
curl 𝐓 = e_ijk T_αk,_j 𝐞_i ⊗ 𝐞_α. The nine components are:

i α   e_ijk T_αk,_j                 𝐞_i ⊗ 𝐞_α
1 1   ∂T₁₃/∂x₂ − ∂T₁₂/∂x₃          𝐞₁ ⊗ 𝐞₁
1 2   ∂T₂₃/∂x₂ − ∂T₂₂/∂x₃          𝐞₁ ⊗ 𝐞₂
1 3   ∂T₃₃/∂x₂ − ∂T₃₂/∂x₃          𝐞₁ ⊗ 𝐞₃
2 1   ∂T₁₁/∂x₃ − ∂T₁₃/∂x₁          𝐞₂ ⊗ 𝐞₁
2 2   ∂T₂₁/∂x₃ − ∂T₂₃/∂x₁          𝐞₂ ⊗ 𝐞₂
2 3   ∂T₃₁/∂x₃ − ∂T₃₃/∂x₁          𝐞₂ ⊗ 𝐞₃
3 1   ∂T₁₂/∂x₁ − ∂T₁₁/∂x₂          𝐞₃ ⊗ 𝐞₁
3 2   ∂T₂₂/∂x₁ − ∂T₂₁/∂x₂          𝐞₃ ⊗ 𝐞₂
3 3   ∂T₃₂/∂x₁ − ∂T₃₁/∂x₂          𝐞₃ ⊗ 𝐞₃
3.24 For two arbitrary tensors 𝐓 and 𝐒, show that div(𝐓𝐒) = (grad 𝐓) : 𝐒 + 𝐓 div 𝐒

grad(𝐓𝐒) = (T_im S_mj),_α 𝐞_i ⊗ 𝐞_j ⊗ 𝐞_α = (T_im,_α S_mj + T_im S_mj,_α) 𝐞_i ⊗ 𝐞_j ⊗ 𝐞_α

div(𝐓𝐒) = (T_im,_α S_mj + T_im S_mj,_α) 𝐞_i (𝐞_j · 𝐞_α)
= (T_im,_α S_mj + T_im S_mj,_α) 𝐞_i ÎŽ_jα
= (T_im,_j S_mj + T_im S_mj,_j) 𝐞_i
= (grad 𝐓) : 𝐒 + 𝐓 div 𝐒
3.25 For two arbitrary vectors 𝐮 and 𝐯, show that grad(𝐮 × 𝐯) = (𝐮 ×) grad 𝐯 − (𝐯 ×) grad 𝐮

grad(𝐮 × 𝐯) = (e_ijk u_j v_k),_l 𝐞_i ⊗ 𝐞_l
= (e_ijk u_j,_l v_k + e_ijk u_j v_k,_l) 𝐞_i ⊗ 𝐞_l
= −(𝐯 ×) grad 𝐮 + (𝐮 ×) grad 𝐯
3.26 For a vector field 𝐮, show that grad(𝐮 ×) is a third-ranked tensor. Hence or otherwise show that div(𝐮 ×) = −curl 𝐮.
The second-order tensor (𝐮 ×) is defined as e_ijk u_j 𝐞_i ⊗ 𝐞_k. Taking the derivative with an independent base, we have

grad(𝐮 ×) = e_ijk u_j,_l 𝐞_i ⊗ 𝐞_k ⊗ 𝐞_l

This gives a third-order tensor, as we have seen. Contracting on the last two bases,

div(𝐮 ×) = e_ijk u_j,_l 𝐞_i (𝐞_k · 𝐞_l)
= e_ijk u_j,_l 𝐞_i ÎŽ_kl
= e_ijk u_j,_k 𝐞_i
= −curl 𝐮
3.27 Show that div(φ𝐈) = grad φ
Note that φ𝐈 = (φΎ_αÎČ) 𝐞_α ⊗ 𝐞_ÎČ. Also note that

grad(φ𝐈) = (φΎ_αÎČ),_i 𝐞_α ⊗ 𝐞_ÎČ ⊗ 𝐞_i

The divergence of this third-order tensor is the contraction of the last two bases:

div(φ𝐈) = (φΎ_αÎČ),_i (𝐞_α ⊗ 𝐞_ÎČ) 𝐞_i = (φΎ_αÎČ),_i 𝐞_α ÎŽ_ÎČi
= φ,_i ÎŽ_αÎČ 𝐞_α ÎŽ_ÎČi
= φ,_i ÎŽ_αi 𝐞_α = φ,_i 𝐞_i = grad φ
3.28 Show that curl(φ𝐈) = (grad φ) ×
Note that φ𝐈 = (φΎ_αÎČ) 𝐞_α ⊗ 𝐞_ÎČ, and that curl 𝐓 = e_ijk T_αk,_j 𝐞_i ⊗ 𝐞_α, so that,

curl(φ𝐈) = e_ijk (φΎ_αk),_j 𝐞_i ⊗ 𝐞_α
= e_ijk (φ,_j ÎŽ_αk) 𝐞_i ⊗ 𝐞_α = e_ijk φ,_j 𝐞_i ⊗ 𝐞_k
= (grad φ) ×
3.29 Show that ((𝐮 × 𝐯) ×) = 𝐯 ⊗ 𝐮 − 𝐮 ⊗ 𝐯, so that the dyad 𝐮 ⊗ 𝐯 is NOT, in general, symmetric.

𝐮 × 𝐯 = e_ijk u_j v_k 𝐞_i

((𝐮 × 𝐯) ×) = e_αiÎČ e_ijk u_j v_k 𝐞_α ⊗ 𝐞_ÎČ
= −(ÎŽ_αj ÎŽ_ÎČk − ÎŽ_αk ÎŽ_ÎČj) u_j v_k 𝐞_α ⊗ 𝐞_ÎČ
= (−u_α v_ÎČ + u_ÎČ v_α) 𝐞_α ⊗ 𝐞_ÎČ
= 𝐯 ⊗ 𝐮 − 𝐮 ⊗ 𝐯
3.30 Show that div(𝐮 × 𝐯) = 𝐯 · curl 𝐮 − 𝐮 · curl 𝐯

div(𝐮 × 𝐯) = (e_ijk u_j v_k),_i

Noting that the tensor e_ijk behaves as a constant under differentiation, we can write,

div(𝐮 × 𝐯) = e_ijk u_j,_i v_k + e_ijk u_j v_k,_i
= 𝐯 · curl 𝐮 − 𝐮 · curl 𝐯
3.31 Given a scalar point function φ and a vector field 𝐯, show that curl(φ𝐯) = φ curl 𝐯 + (grad φ) × 𝐯.

curl(φ𝐯) = e_ijk (φv_k),_j 𝐞_i
= e_ijk (φ,_j v_k + φ v_k,_j) 𝐞_i
= e_ijk φ,_j v_k 𝐞_i + e_ijk φ v_k,_j 𝐞_i
= (grad φ) × 𝐯 + φ curl 𝐯
3.32 Show that curl(𝐯 ×) = (div 𝐯)𝐈 − grad 𝐯

(𝐯 ×) = e_αÎČi v_ÎČ 𝐞_α ⊗ 𝐞_i and curl 𝐓 = e_ijk T_αk,_j 𝐞_i ⊗ 𝐞_α

so that

curl(𝐯 ×) = e_ijk e_αÎČk v_ÎČ,_j 𝐞_i ⊗ 𝐞_α
= (ÎŽ_iα ÎŽ_jÎČ − ÎŽ_iÎČ ÎŽ_jα) v_ÎČ,_j 𝐞_i ⊗ 𝐞_α
= v_j,_j 𝐞_α ⊗ 𝐞_α − v_i,_α 𝐞_i ⊗ 𝐞_α
= (div 𝐯)𝐈 − grad 𝐯
3.33 Show that curl(grad 𝐯) = 𝟎
For any tensor 𝐓 = T_αÎČ 𝐞_α ⊗ 𝐞_ÎČ,

curl 𝐓 = e_ijk T_αk,_j 𝐞_i ⊗ 𝐞_α

Let 𝐓 = grad 𝐯. Clearly, in this case, T_αk = v_α,_k so that T_αk,_j = v_α,_kj. It therefore follows that,

curl(grad 𝐯) = e_ijk v_α,_kj 𝐞_i ⊗ 𝐞_α = 𝟎.

The contraction of the symmetric array of second derivatives with the antisymmetric e_ijk leads to this conclusion. Note that this presupposes that the order of differentiation in the vector field is immaterial, which is true only if the vector field is twice continuously differentiable, a proposition we have assumed in the above.
3.34 Show that curl(grad φ) = 𝟎
For any vector 𝐯 = v_α 𝐞_α,

curl 𝐯 = e_ijk v_k,_j 𝐞_i

Let 𝐯 = grad φ. Clearly, in this case, v_k = φ,_k so that v_k,_j = φ,_kj. It therefore follows that,

curl(grad φ) = e_ijk φ,_kj 𝐞_i = 𝟎.

The contraction of the symmetric array of second derivatives with the antisymmetric e_ijk leads to this conclusion. Note that this presupposes that the order of differentiation of the scalar field is immaterial, which is true only if the scalar field is twice continuously differentiable, a proposition we have assumed in the above.
3.35 Show that div(grad φ × grad Ξ) = 0

grad φ × grad Ξ = e_ijk φ,_j Ξ,_k 𝐞_i

The gradient of this vector is the tensor,

grad(grad φ × grad Ξ) = (e_ijk φ,_j Ξ,_k),_l 𝐞_i ⊗ 𝐞_l
= e_ijk φ,_jl Ξ,_k 𝐞_i ⊗ 𝐞_l + e_ijk φ,_j Ξ,_kl 𝐞_i ⊗ 𝐞_l

The trace of the above result is the divergence we are seeking:

div(grad φ × grad Ξ) = tr[grad(grad φ × grad Ξ)]
= e_ijk φ,_ji Ξ,_k + e_ijk φ,_j Ξ,_ki = 0

each term vanishing on account of the contraction of a symmetric array with an antisymmetric one.
3.36 Show that curl curl 𝐯 = grad(div 𝐯) − grad²𝐯
Let 𝐰 = curl 𝐯 ≡ e_ijk v_k,_j 𝐞_i, and recall curl 𝐰 ≡ e_αÎČγ w_γ,_ÎČ 𝐞_α. Upon inspection, we find that w_γ = ÎŽ_γi e_ijk v_k,_j, so that

curl 𝐰 = e_αÎČγ (ÎŽ_γi e_ijk v_k,_j),_ÎČ 𝐞_α = ÎŽ_γi e_αÎČγ e_ijk v_k,_jÎČ 𝐞_α

Now, it can be shown that ÎŽ_γi e_αÎČγ e_ijk = ÎŽ_αj ÎŽ_ÎČk − ÎŽ_αk ÎŽ_ÎČj, so that,

curl 𝐰 = (ÎŽ_αj ÎŽ_ÎČk − ÎŽ_αk ÎŽ_ÎČj) v_k,_jÎČ 𝐞_α
= v_ÎČ,_αÎČ 𝐞_α − v_α,_ÎČÎČ 𝐞_α
= grad(div 𝐯) − grad²𝐯

Also recall that the Laplacian (grad²) of a scalar field φ is grad²φ = g_ij φ,_ij. In Cartesian coordinates, this becomes,

grad²φ = ÎŽ_ij φ,_ij = φ,_ii

as the unit (metric) tensor degenerates to the Kronecker delta in this special case. For a vector field, grad²𝐯 = v_α,_ÎČÎČ 𝐞_α.
Also note that while grad is a vector operator, the Laplacian (grad²) is a scalar operator.
3.37 For a second-order tensor 𝐓, define curl 𝐓 ≡ e_ijk T_αk,_j 𝐞_i ⊗ 𝐞_α. Show that, for any constant vector 𝐜, (curl 𝐓) 𝐜 = curl(𝐓ᵀ𝐜)
Express the vector 𝐜 in the invariant form 𝐜 = c_ÎČ 𝐞_ÎČ. It follows that

(curl 𝐓) 𝐜 = e_ijk T_αk,_j (𝐞_i ⊗ 𝐞_α) 𝐜
= e_ijk T_αk,_j c_ÎČ (𝐞_i ⊗ 𝐞_α) 𝐞_ÎČ
= e_ijk T_αk,_j c_ÎČ 𝐞_i ÎŽ_αÎČ
= e_ijk T_αk,_j c_α 𝐞_i
= e_ijk (T_αk c_α),_j 𝐞_i

the last equality resulting from the fact that 𝐜 is a constant vector. Since (𝐓ᵀ𝐜)_k = T_αk c_α, clearly,

(curl 𝐓) 𝐜 = curl(𝐓ᵀ𝐜)
3.38 For any two vectors 𝐮 and 𝐯, show that curl(𝐮 ⊗ 𝐯) = [(grad 𝐮)(𝐯 ×)]ᵀ + (curl 𝐯) ⊗ 𝐮, where (𝐯 ×) is the skew tensor e_ijk v_j 𝐞_i ⊗ 𝐞_k.
Recall that the curl of a tensor 𝐓 is defined by curl 𝐓 ≡ e_ijk T_αk,_j 𝐞_i ⊗ 𝐞_α. Clearly, therefore,

curl(𝐮 ⊗ 𝐯) = e_ijk (u_α v_k),_j 𝐞_i ⊗ 𝐞_α
= e_ijk (u_α,_j v_k + u_α v_k,_j) 𝐞_i ⊗ 𝐞_α
= e_ijk u_α,_j v_k 𝐞_i ⊗ 𝐞_α + e_ijk u_α v_k,_j 𝐞_i ⊗ 𝐞_α
= −(𝐯 ×)(grad 𝐮)ᵀ + (curl 𝐯) ⊗ 𝐮
= [(grad 𝐮)(𝐯 ×)]ᵀ + (curl 𝐯) ⊗ 𝐮

upon noting that the vector cross (𝐯 ×) is a skew tensor.
3.39 For a second-order tensor field 𝐓, show that div(curl 𝐓) = curl(div 𝐓ᵀ)
Define the second-order tensor 𝐖 as

𝐖 ≡ curl 𝐓 ≡ e_ijk T_αk,_j 𝐞_i ⊗ 𝐞_α = W_iα 𝐞_i ⊗ 𝐞_α

The gradient of 𝐖 is W_iα,_ÎČ 𝐞_i ⊗ 𝐞_α ⊗ 𝐞_ÎČ = e_ijk T_αk,_jÎČ 𝐞_i ⊗ 𝐞_α ⊗ 𝐞_ÎČ. Clearly,

div(curl 𝐓) = e_ijk T_αk,_jÎČ (𝐞_i ⊗ 𝐞_α) 𝐞_ÎČ
= e_ijk T_αk,_jÎČ 𝐞_i ÎŽ_αÎČ
= e_ijk T_αk,_jα 𝐞_i = curl(div 𝐓ᵀ)
3.40 Show that $((\operatorname{curl}\mathbf{u})\times) = \operatorname{grad}\mathbf{u} - \operatorname{grad}^{\mathrm T}\mathbf{u}$.

$$((\operatorname{curl}\mathbf{u})\times) = \epsilon_{\alpha m\beta}\left(\epsilon^{mnk}\,u_{k},_{n}\right)\mathbf{g}^{\alpha}\otimes\mathbf{g}^{\beta} = \left(\delta^{k}_{\alpha}\delta^{n}_{\beta} - \delta^{n}_{\alpha}\delta^{k}_{\beta}\right)u_{k},_{n}\,\mathbf{g}^{\alpha}\otimes\mathbf{g}^{\beta}$$
$$= u_{\alpha},_{\beta}\,\mathbf{g}^{\alpha}\otimes\mathbf{g}^{\beta} - u_{\beta},_{\alpha}\,\mathbf{g}^{\alpha}\otimes\mathbf{g}^{\beta} = \operatorname{grad}\mathbf{u} - \operatorname{grad}^{\mathrm T}\mathbf{u}$$
3.41 For any two vectors $\mathbf{u}$ and $\mathbf{v}$, show that $\operatorname{curl}(\mathbf{u}\times\mathbf{v}) = \operatorname{div}(\mathbf{u}\otimes\mathbf{v} - \mathbf{v}\otimes\mathbf{u})$.

The vector $\mathbf{w} \equiv \mathbf{u}\times\mathbf{v} = w^{i}\,\mathbf{g}_i = \epsilon^{i\alpha\beta}u_{\alpha}v_{\beta}\,\mathbf{g}_i$, and $\operatorname{curl}\mathbf{w} = \epsilon^{ijk}\,w_{k},_{j}\,\mathbf{g}_i$. Therefore,
$$\operatorname{curl}(\mathbf{u}\times\mathbf{v}) = \epsilon^{ijk}\left(\epsilon_{k\alpha\beta}\,u^{\alpha}v^{\beta}\right),_{j}\,\mathbf{g}_i = \left(\delta^{i}_{\alpha}\delta^{j}_{\beta} - \delta^{i}_{\beta}\delta^{j}_{\alpha}\right)\left(u^{\alpha},_{j}\,v^{\beta} + u^{\alpha}\,v^{\beta},_{j}\right)\mathbf{g}_i$$
$$= \left[u^{i},_{j}\,v^{j} + u^{i}\,v^{j},_{j} - \left(u^{j},_{j}\,v^{i} + u^{j}\,v^{i},_{j}\right)\right]\mathbf{g}_i = \left[(u^{i}v^{j}),_{j} - (u^{j}v^{i}),_{j}\right]\mathbf{g}_i = \operatorname{div}(\mathbf{u}\otimes\mathbf{v} - \mathbf{v}\otimes\mathbf{u})$$
since $\operatorname{div}(\mathbf{u}\otimes\mathbf{v}) = (u^{i}v^{j}),_{\alpha}\,(\mathbf{g}_i\otimes\mathbf{g}_j)\,\mathbf{g}^{\alpha} = (u^{i}v^{j}),_{j}\,\mathbf{g}_i$.
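A quick componentwise check of this identity, sketched with sympy (the field choices below are my own, purely illustrative), taking the divergence of the tensor on its second index:

```python
# Componentwise check of curl(u x v) = div(u (x) v - v (x) u), divergence on the second index.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
u = [y*z, x**2, sp.cos(y)]        # illustrative fields (my choice)
v = [x + z, y*z, x*y]

def curl(w):
    return [sp.diff(w[2], y) - sp.diff(w[1], z),
            sp.diff(w[0], z) - sp.diff(w[2], x),
            sp.diff(w[1], x) - sp.diff(w[0], y)]

w = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
lhs = curl(w)
rhs = [sum(sp.diff(u[i]*v[j] - v[i]*u[j], X[j]) for j in range(3)) for i in range(3)]
assert all(sp.simplify(a - b) == 0 for a, b in zip(lhs, rhs))
```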
3.43 Given a scalar point function $\phi$ and a second-order tensor field $\mathbf{T}$, show that $\operatorname{curl}(\phi\mathbf{T}) = \phi\operatorname{curl}\mathbf{T} + ((\operatorname{grad}\phi)\times)\,\mathbf{T}^{\mathrm T}$, where $[(\operatorname{grad}\phi)\times]$ is the skew tensor $\epsilon^{ijk}\,\phi,_{j}\,\mathbf{g}_i\otimes\mathbf{g}_k$.

$$\operatorname{curl}(\phi\mathbf{T}) \equiv \epsilon^{ijk}(\phi\,T_{\alpha k}),_{j}\,\mathbf{g}_i\otimes\mathbf{g}^{\alpha} = \epsilon^{ijk}\left(\phi,_{j}\,T_{\alpha k} + \phi\,T_{\alpha k},_{j}\right)\mathbf{g}_i\otimes\mathbf{g}^{\alpha}$$
$$= \left(\epsilon^{ijk}\,\phi,_{j}\,\mathbf{g}_i\otimes\mathbf{g}_k\right)\left(T_{\alpha\beta}\,\mathbf{g}^{\beta}\otimes\mathbf{g}^{\alpha}\right) + \phi\,\epsilon^{ijk}\,T_{\alpha k},_{j}\,\mathbf{g}_i\otimes\mathbf{g}^{\alpha} = \phi\operatorname{curl}\mathbf{T} + ((\operatorname{grad}\phi)\times)\,\mathbf{T}^{\mathrm T}$$
3.44 Given a tensor field $\mathbf{T}$, obtain the vector $\mathbf{w} \equiv \mathbf{T}^{\mathrm T}\mathbf{v}$ and show that its divergence is $\mathbf{T}:(\operatorname{grad}\mathbf{v}) + \mathbf{v}\cdot\operatorname{div}\mathbf{T}$.

The gradient of $\mathbf{w}$ is the tensor $(T_{ji}v_j),_{k}\,\mathbf{g}_i\otimes\mathbf{g}_k$. The divergence of $\mathbf{w}$ (the trace of this gradient) is the scalar sum
$$\operatorname{div}(\mathbf{T}^{\mathrm T}\mathbf{v}) = T_{ji}\,v_{j},_{k}\,\delta_{ik} + T_{ji},_{k}\,v_{j}\,\delta_{ik} = T_{ji},_{i}\,v_{j} + T_{ji}\,v_{j},_{i} = (\operatorname{div}\mathbf{T})\cdot\mathbf{v} + \operatorname{tr}(\mathbf{T}^{\mathrm T}\operatorname{grad}\mathbf{v}) = (\operatorname{div}\mathbf{T})\cdot\mathbf{v} + \mathbf{T}:(\operatorname{grad}\mathbf{v})$$
Recall that the scalar product of two vectors is commutative, so that
$$\operatorname{div}(\mathbf{T}^{\mathrm T}\mathbf{v}) = \mathbf{T}:(\operatorname{grad}\mathbf{v}) + \mathbf{v}\cdot\operatorname{div}\mathbf{T}$$
3.53 Show that if $\phi$ is a scalar field in the Euclidean space spanned by orthogonal coordinates $x_i$, with unit basis vectors $\mathbf{e}_i$, then (a) for the position vector $\mathbf{r}$, $\operatorname{div}\operatorname{grad}(\mathbf{r}\phi) = 2\operatorname{grad}\phi + \mathbf{r}\operatorname{div}\operatorname{grad}\phi$. (b) Will this expression be valid in the spherical coordinate system?

(a) In such a coordinate system, the radius vector is given by $\mathbf{r} = x_i\,\mathbf{e}_i$, where $\mathbf{e}_i$ is the unit vector along coordinate $x_i$.
$$\operatorname{grad}(\mathbf{r}\phi) = (x_i\phi),_{j}\,\mathbf{e}_i\otimes\mathbf{e}_j$$
$$\operatorname{grad}(\operatorname{grad}(\mathbf{r}\phi)) = \left((x_i\phi),_{j}\right),_{k}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k$$
$$\operatorname{div}(\operatorname{grad}(\mathbf{r}\phi)) = \operatorname{tr}\left(\operatorname{grad}(\operatorname{grad}(\mathbf{r}\phi))\right) = \left((x_i\phi),_{j}\right),_{k}\,\delta_{jk}\,\mathbf{e}_i = \left(\delta_{ij}\phi + x_i\,\phi,_{j}\right),_{j}\,\mathbf{e}_i$$
$$= \left(\delta_{ij}\,\phi,_{j} + x_{i},_{j}\,\phi,_{j} + x_i\,\phi,_{jj}\right)\mathbf{e}_i = \left(2\,\phi,_{i} + x_i\,\phi,_{jj}\right)\mathbf{e}_i$$
since $x_{i},_{j} = \delta_{ij}$. Hence,
$$\operatorname{div}\operatorname{grad}(\mathbf{r}\phi) = 2\operatorname{grad}\phi + \mathbf{r}\operatorname{div}\operatorname{grad}\phi$$
(b) The position vector in spherical coordinates is non-linear in the coordinate variables. This form of the expression will not be correct; it works only for Cartesian systems.
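The result in (a) can be checked one Cartesian component at a time; the sketch below does this with sympy for a scalar field of my own choosing:

```python
# Check of div grad(r*phi) = 2 grad(phi) + r div grad(phi), one Cartesian component at a time.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
phi = sp.exp(x)*sp.sin(y) + z**3                  # illustrative scalar field (my choice)
lap = lambda f: sum(sp.diff(f, c, 2) for c in X)  # Cartesian Laplacian

for i in range(3):
    # i-th component: laplacian(x_i * phi) = 2 phi,_i + x_i * laplacian(phi)
    assert sp.simplify(lap(X[i]*phi) - 2*sp.diff(phi, X[i]) - X[i]*lap(phi)) == 0
```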
3.54 In Cartesian coordinates let $x$ denote the magnitude of the position vector $\mathbf{r} = x_i\,\mathbf{e}_i$. Show that (a) $x,_{i} = \dfrac{x_i}{x}$, (b) $x,_{ij} = \dfrac{1}{x}\delta_{ij} - \dfrac{x_ix_j}{x^3}$, (c) $x,_{ii} = \dfrac{2}{x}$, (d) if $\phi = \dfrac{1}{x}$, then $\phi,_{ij} = \dfrac{-\delta_{ij}}{x^3} + \dfrac{3x_ix_j}{x^5}$, $\phi,_{ii} = 0$, and $\operatorname{div}\left(\dfrac{\mathbf{r}}{x}\right) = \dfrac{2}{x}$.

(a) $x = \sqrt{x_jx_j}$, so that
$$x,_{i} = \frac{\partial\sqrt{x_jx_j}}{\partial x_i} = \frac{\partial\sqrt{x_jx_j}}{\partial(x_kx_k)}\times\frac{\partial(x_jx_j)}{\partial x_i} = \frac{1}{2\sqrt{x_jx_j}}\left[x_j\delta_{ij} + x_j\delta_{ij}\right] = \frac{x_i}{x}$$
(b)
$$x,_{ij} = \frac{\partial}{\partial x_j}\left(\frac{x_i}{x}\right) = \frac{x\,\delta_{ij} - x_i\,\dfrac{x_j}{x}}{x^2} = \frac{\delta_{ij}}{x} - \frac{x_ix_j}{x^3}$$
(c)
$$x,_{ii} = \frac{\delta_{ii}}{x} - \frac{x_ix_i}{x^3} = \frac{3}{x} - \frac{x^2}{x^3} = \frac{2}{x}$$
(d) $\phi = \dfrac{1}{x}$ so that
$$\phi,_{i} = \frac{\partial(1/x)}{\partial x_i} = \frac{\partial(1/x)}{\partial x}\times\frac{\partial x}{\partial x_i} = -\frac{1}{x^2}\,\frac{x_i}{x} = -\frac{x_i}{x^3}$$
Consequently,
$$\phi,_{ij} = \frac{\partial}{\partial x_j}\left(\phi,_{i}\right) = -\frac{\partial}{\partial x_j}\left(\frac{x_i}{x^3}\right) = -\frac{x^3\,\delta_{ij} - x_i\left(3x^2\,\dfrac{x_j}{x}\right)}{x^6} = -\frac{\delta_{ij}}{x^3} + \frac{3x_ix_j}{x^5}$$
$$\phi,_{ii} = -\frac{\delta_{ii}}{x^3} + \frac{3x_ix_i}{x^5} = -\frac{3}{x^3} + \frac{3x^2}{x^5} = 0$$
$$\operatorname{div}\left(\frac{\mathbf{r}}{x}\right) = \left(\frac{x_i}{x}\right),_{i} = \frac{1}{x}\,x_{i},_{i} + x_i\left(\frac{1}{x}\right),_{i} = \frac{3}{x} + x_i\left(-\frac{1}{x^2}\,\frac{x_i}{x}\right) = \frac{3}{x} - \frac{x_ix_i}{x^3} = \frac{3}{x} - \frac{1}{x} = \frac{2}{x}$$
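All four identities can be verified symbolically; a minimal sympy sketch (my own tooling choice):

```python
# Checks of the identities in 3.54 for x = |r|.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
X = (x1, x2, x3)
r = sp.sqrt(x1**2 + x2**2 + x3**2)   # the magnitude of the position vector

# (a) x,_i = x_i/x
assert all(sp.simplify(sp.diff(r, X[i]) - X[i]/r) == 0 for i in range(3))
# (c) x,_ii = 2/x
assert sp.simplify(sum(sp.diff(r, c, 2) for c in X) - 2/r) == 0
# (d) (1/x),_ii = 0 (away from the origin)
assert sp.simplify(sum(sp.diff(1/r, c, 2) for c in X)) == 0
# div(r/x) = 2/x
assert sp.simplify(sum(sp.diff(X[i]/r, X[i]) for i in range(3)) - 2/r) == 0
```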
3.55 Show that $\operatorname{curl}(\mathbf{u}\times\mathbf{v}) = (\operatorname{grad}\mathbf{u})\mathbf{v} - (\operatorname{div}\mathbf{u})\mathbf{v} - (\operatorname{grad}\mathbf{v})\mathbf{u} + (\operatorname{div}\mathbf{v})\mathbf{u}$.

$\mathbf{u}\times\mathbf{v} = \epsilon_{ijk}u_jv_k\,\mathbf{e}_i$, and for any vector $\mathbf{w}$, $\operatorname{curl}\mathbf{w} = \epsilon_{\alpha\beta i}\,w_{i},_{\beta}\,\mathbf{e}_{\alpha}$. Clearly,
$$\operatorname{curl}(\mathbf{u}\times\mathbf{v}) = \epsilon_{\alpha\beta i}\left(\epsilon_{ijk}u_jv_k\right),_{\beta}\,\mathbf{e}_{\alpha} = \epsilon_{\alpha\beta i}\,\epsilon_{ijk}\left(u_{j},_{\beta}\,v_k + u_j\,v_{k},_{\beta}\right)\mathbf{e}_{\alpha}$$
$$= \left(\delta_{\alpha j}\delta_{\beta k} - \delta_{\alpha k}\delta_{\beta j}\right)\left(u_{j},_{\beta}\,v_k + u_j\,v_{k},_{\beta}\right)\mathbf{e}_{\alpha} = \left(u_{\alpha},_{\beta}\,v_{\beta} - u_{\beta},_{\beta}\,v_{\alpha} + u_{\alpha}\,v_{\beta},_{\beta} - u_{\beta}\,v_{\alpha},_{\beta}\right)\mathbf{e}_{\alpha}$$
$$= (\operatorname{grad}\mathbf{u})\mathbf{v} - (\operatorname{div}\mathbf{u})\mathbf{v} + (\operatorname{div}\mathbf{v})\mathbf{u} - (\operatorname{grad}\mathbf{v})\mathbf{u}$$
3.56 For a scalar function $\phi$ and a vector $\mathbf{v}$, show that the divergence of the vector $\mathbf{v}\phi$ is equal to $\mathbf{v}\cdot\operatorname{grad}\phi + \phi\operatorname{div}\mathbf{v}$.

$$\operatorname{grad}(\mathbf{v}\phi) = (v_i\phi),_{j}\,\mathbf{e}_i\otimes\mathbf{e}_j = \left(\phi\,v_{i},_{j} + v_i\,\phi,_{j}\right)\mathbf{e}_i\otimes\mathbf{e}_j$$
Taking the trace of this equation,
$$\operatorname{div}(\mathbf{v}\phi) = \operatorname{tr}\left(\operatorname{grad}(\mathbf{v}\phi)\right) = \left(\phi\,v_{i},_{j} + v_i\,\phi,_{j}\right)\mathbf{e}_i\cdot\mathbf{e}_j = \left(\phi\,v_{i},_{j} + v_i\,\phi,_{j}\right)\delta_{ij} = \phi\,v_{i},_{i} + v_i\,\phi,_{i} = \mathbf{v}\cdot\operatorname{grad}\phi + \phi\operatorname{div}\mathbf{v}$$
Hence the result.
3.57 When $\mathbf{T}$ is symmetric, show that $\operatorname{tr}(\operatorname{curl}\mathbf{T})$ vanishes.

$$\operatorname{curl}\mathbf{T} = \epsilon^{ijk}\,T_{\beta k},_{j}\,\mathbf{g}_i\otimes\mathbf{g}^{\beta}$$
$$\operatorname{tr}(\operatorname{curl}\mathbf{T}) = \epsilon^{ijk}\,T_{\beta k},_{j}\,\mathbf{g}_i\cdot\mathbf{g}^{\beta} = \epsilon^{ijk}\,T_{\beta k},_{j}\,\delta^{\beta}_{i} = \epsilon^{ijk}\,T_{ik},_{j}$$
which obviously vanishes on account of the symmetry of $T_{ik}$ in $i$ and $k$ and the antisymmetry of $\epsilon^{ijk}$ in the same pair of indices.
3.58 For a general tensor field $\mathbf{T}$ show that,
$$\operatorname{curl}(\operatorname{curl}\mathbf{T}) = \left[\operatorname{grad}^2(\operatorname{tr}\mathbf{T}) - \operatorname{div}(\operatorname{div}\mathbf{T})\right]\mathbf{I} + \operatorname{grad}(\operatorname{div}\mathbf{T}) + \left(\operatorname{grad}(\operatorname{div}\mathbf{T})\right)^{\mathrm T} - \operatorname{grad}(\operatorname{grad}(\operatorname{tr}\mathbf{T})) - \operatorname{grad}^2\mathbf{T}^{\mathrm T}$$

In Cartesian components, $\operatorname{curl}\mathbf{T} = \epsilon_{\alpha st}\,T_{\beta t},_{s}\,\mathbf{g}_{\alpha}\otimes\mathbf{g}_{\beta} = S_{\alpha\beta}\,\mathbf{g}_{\alpha}\otimes\mathbf{g}_{\beta}$, so that
$$\operatorname{curl}(\operatorname{curl}\mathbf{T}) = \epsilon_{ijk}\,S_{\alpha k},_{j}\,\mathbf{g}_i\otimes\mathbf{g}_{\alpha} = \epsilon_{ijk}\,\epsilon_{\alpha st}\,T_{kt},_{sj}\,\mathbf{g}_i\otimes\mathbf{g}_{\alpha}$$
The product of the alternating tensors is the determinant
$$\epsilon_{ijk}\,\epsilon_{\alpha st} = \begin{vmatrix} \delta_{i\alpha} & \delta_{is} & \delta_{it} \\ \delta_{j\alpha} & \delta_{js} & \delta_{jt} \\ \delta_{k\alpha} & \delta_{ks} & \delta_{kt} \end{vmatrix} = \delta_{i\alpha}\left(\delta_{js}\delta_{kt} - \delta_{jt}\delta_{ks}\right) + \delta_{is}\left(\delta_{jt}\delta_{k\alpha} - \delta_{j\alpha}\delta_{kt}\right) + \delta_{it}\left(\delta_{j\alpha}\delta_{ks} - \delta_{js}\delta_{k\alpha}\right)$$
Contracting each of the six terms with $T_{kt},_{sj}$ and collecting,
$$\operatorname{curl}(\operatorname{curl}\mathbf{T}) = \left[\operatorname{grad}^2(\operatorname{tr}\mathbf{T}) - \operatorname{div}(\operatorname{div}\mathbf{T})\right]\mathbf{I} + \left(\operatorname{grad}(\operatorname{div}\mathbf{T})\right)^{\mathrm T} - \operatorname{grad}(\operatorname{grad}(\operatorname{tr}\mathbf{T})) + \operatorname{grad}(\operatorname{div}\mathbf{T}) - \operatorname{grad}^2\mathbf{T}^{\mathrm T}$$
3.59 For any Euclidean coordinate system, show that $\operatorname{div}(\mathbf{u}\times\mathbf{v}) = \mathbf{v}\cdot\operatorname{curl}\mathbf{u} - \mathbf{u}\cdot\operatorname{curl}\mathbf{v}$.

$$\mathbf{u}\times\mathbf{v} = \epsilon_{ijk}u_jv_k\,\mathbf{e}_i, \qquad \operatorname{grad}(\mathbf{u}\times\mathbf{v}) = \left(\epsilon_{ijk}u_jv_k\right),_{l}\,\mathbf{e}_i\otimes\mathbf{e}_l$$
$$\operatorname{div}(\mathbf{u}\times\mathbf{v}) = \operatorname{tr}\left(\operatorname{grad}(\mathbf{u}\times\mathbf{v})\right) = \left(\epsilon_{ijk}\,u_{j},_{l}\,v_k + \epsilon_{ijk}\,u_j\,v_{k},_{l}\right)\delta_{il} = \epsilon_{ijk}\,u_{j},_{i}\,v_k + \epsilon_{ijk}\,u_j\,v_{k},_{i} = \mathbf{v}\cdot\operatorname{curl}\mathbf{u} - \mathbf{u}\cdot\operatorname{curl}\mathbf{v}$$
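A short symbolic confirmation, with illustrative fields of my own:

```python
# Check of div(u x v) = v . curl u - u . curl v.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
u = [y**2, x*z, sp.exp(y)]     # illustrative fields (my choice)
v = [z, x**2*y, sp.cos(x)]

def curl(w):
    return [sp.diff(w[2], y) - sp.diff(w[1], z),
            sp.diff(w[0], z) - sp.diff(w[2], x),
            sp.diff(w[1], x) - sp.diff(w[0], y)]

w = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
div_w = sum(sp.diff(w[i], X[i]) for i in range(3))
rhs = sum(v[i]*curl(u)[i] - u[i]*curl(v)[i] for i in range(3))
assert sp.simplify(div_w - rhs) == 0
```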
3.60 In Cartesian coordinates, if the volume $V$ is enclosed by the surface $S$, the position vector is $\mathbf{r} = x_i\,\mathbf{e}_i$ and $\mathbf{n}$ is the external unit normal to each surface element, show that $\dfrac{1}{6}\displaystyle\int_S \operatorname{grad}(\mathbf{r}\cdot\mathbf{r})\cdot\mathbf{n}\,dS$ equals the volume contained in $S$.

$$\mathbf{r}\cdot\mathbf{r} = x_ix_j\,\mathbf{e}_i\cdot\mathbf{e}_j = x_ix_j\,\delta_{ij} = x_jx_j$$
By the Divergence Theorem,
$$\int_S \operatorname{grad}(\mathbf{r}\cdot\mathbf{r})\cdot\mathbf{n}\,dS = \int_V \operatorname{div}\left[\operatorname{grad}(\mathbf{r}\cdot\mathbf{r})\right]dV = \int_V \left[(x_jx_j),_{i}\right],_{i}\,dV = \int_V \left(2x_j\,\delta_{ij}\right),_{i}\,dV = \int_V 2\,\delta_{ii}\,dV = 6\int_V dV$$
so that $\dfrac{1}{6}\displaystyle\int_S \operatorname{grad}(\mathbf{r}\cdot\mathbf{r})\cdot\mathbf{n}\,dS = V$, as required.
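The surface-integral statement can also be checked numerically. The sketch below (my own construction) evaluates the flux of $\operatorname{grad}(\mathbf{r}\cdot\mathbf{r}) = 2\mathbf{r}$ through the faces of the unit cube; on each face the normal component of $2\mathbf{r}$ is constant, so the face averages are exact:

```python
# (1/6) * flux of grad(r.r) = 2r through the unit cube's surface should equal its volume, 1.
import numpy as np

rng = np.random.default_rng(1)
n_pts = 10_000
flux = 0.0
for axis in range(3):
    for side, sign in ((1.0, 1.0), (0.0, -1.0)):   # faces x_i = 1 (n = +e_i) and x_i = 0 (n = -e_i)
        pts = rng.random((n_pts, 3))
        pts[:, axis] = side
        flux += sign * np.mean(2.0*pts[:, axis]) * 1.0   # each face has unit area
vol = flux/6.0
assert abs(vol - 1.0) < 1e-12                            # the cube's volume
```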
Fields in Curvilinear Coordinates
Given a vector point function $\mathbf{u}(x^1,x^2,x^3)$ and its covariant components $u_\alpha(x^1,x^2,x^3)$, $\alpha = 1,2,3$, with $\mathbf{g}^\alpha$ as the reciprocal basis vectors, let $\mathbf{u} = u_\alpha\,\mathbf{g}^\alpha$. Then
$$d\mathbf{u} = \frac{\partial}{\partial x^i}\left(u_\alpha\,\mathbf{g}^\alpha\right)dx^i = \left(\frac{\partial u_\alpha}{\partial x^i}\,\mathbf{g}^\alpha + \frac{\partial\mathbf{g}^\alpha}{\partial x^i}\,u_\alpha\right)dx^i$$
Clearly,
$$\frac{\partial\mathbf{u}}{\partial x^i} = \frac{\partial u_\alpha}{\partial x^i}\,\mathbf{g}^\alpha + \frac{\partial\mathbf{g}^\alpha}{\partial x^i}\,u_\alpha$$
And the projection of this quantity on the $\mathbf{g}_j$ direction is,
$$\frac{\partial\mathbf{u}}{\partial x^i}\cdot\mathbf{g}_j = \left(\frac{\partial u_\alpha}{\partial x^i}\,\mathbf{g}^\alpha + \frac{\partial\mathbf{g}^\alpha}{\partial x^i}\,u_\alpha\right)\cdot\mathbf{g}_j = \frac{\partial u_\alpha}{\partial x^i}\,\delta^{\alpha}_{j} + \frac{\partial\mathbf{g}^\alpha}{\partial x^i}\cdot\mathbf{g}_j\,u_\alpha$$
Now $\mathbf{g}^\alpha\cdot\mathbf{g}_j = \delta^{\alpha}_{j}$, so that
$$\frac{\partial\mathbf{g}^\alpha}{\partial x^i}\cdot\mathbf{g}_j + \mathbf{g}^\alpha\cdot\frac{\partial\mathbf{g}_j}{\partial x^i} = 0, \qquad\text{hence}\qquad \frac{\partial\mathbf{g}^\alpha}{\partial x^i}\cdot\mathbf{g}_j = -\,\mathbf{g}^\alpha\cdot\frac{\partial\mathbf{g}_j}{\partial x^i} \equiv -\left\{{\alpha \atop ij}\right\}$$
This important quantity, necessary to quantify the derivative of a tensor in general coordinates, is called the Christoffel symbol of the second kind.
Using this, we can now write that,
$$\frac{\partial\mathbf{u}}{\partial x^i}\cdot\mathbf{g}_j = \frac{\partial u_\alpha}{\partial x^i}\,\delta^{\alpha}_{j} + \frac{\partial\mathbf{g}^\alpha}{\partial x^i}\cdot\mathbf{g}_j\,u_\alpha = \frac{\partial u_j}{\partial x^i} - \left\{{\alpha \atop ij}\right\}u_\alpha$$
The quantity on the RHS is the component, along the $\mathbf{g}_j$ direction, of the derivative of the vector $\mathbf{u}$ using covariant components. It is the covariant derivative of $\mathbf{u}$. Using contravariant components, we could write,
$$d\mathbf{u} = \left(\frac{\partial u^\alpha}{\partial x^i}\,\mathbf{g}_\alpha + \frac{\partial\mathbf{g}_\alpha}{\partial x^i}\,u^\alpha\right)dx^i = \left(\frac{\partial u^\alpha}{\partial x^i}\,\mathbf{g}_\alpha + \left\{{\beta \atop \alpha i}\right\}\mathbf{g}_\beta\,u^\alpha\right)dx^i$$
So that,
$$\frac{\partial\mathbf{u}}{\partial x^i} = \frac{\partial u^\alpha}{\partial x^i}\,\mathbf{g}_\alpha + \frac{\partial\mathbf{g}_\alpha}{\partial x^i}\,u^\alpha$$
The components of this in the direction of $\mathbf{g}^j$ can be obtained by taking a dot product as before:
$$\frac{\partial\mathbf{u}}{\partial x^i}\cdot\mathbf{g}^j = \left(\frac{\partial u^\alpha}{\partial x^i}\,\mathbf{g}_\alpha + \frac{\partial\mathbf{g}_\alpha}{\partial x^i}\,u^\alpha\right)\cdot\mathbf{g}^j = \frac{\partial u^\alpha}{\partial x^i}\,\delta^{j}_{\alpha} + \frac{\partial\mathbf{g}_\alpha}{\partial x^i}\cdot\mathbf{g}^j\,u^\alpha = \frac{\partial u^j}{\partial x^i} + \left\{{j \atop \alpha i}\right\}u^\alpha$$
The two results above are represented symbolically as,
$$u_{j,i} = \frac{\partial u_j}{\partial x^i} - \left\{{\alpha \atop ij}\right\}u_\alpha$$
for covariant components, and
$$u^{j}{}_{,i} = \frac{\partial u^j}{\partial x^i} + \left\{{j \atop \alpha i}\right\}u^\alpha$$
for contravariant components. These are the components of the covariant derivative in terms of the covariant and contravariant components respectively.
It now becomes useful to establish the fact that our definition of the Christoffel symbols here conforms to the definition you find in the books that use the transformation rules to define tensor quantities.
60
We observe that the derivative of the covariant basis, ð ð (=ðð
ðð¥ð),
ðð ððð¥ð
=ð2ð«
ðð¥ððð¥ð
=ð2ð«
ðð¥ððð¥ð=ðð ð
ðð¥ð
Taking the dot product with ð ð
ðð ððð¥ð
â ð ð =1
2(ðð ð
ðð¥ðâ ð ð +
ðð ððð¥ð
â ð ð)
=1
2(ð
ðð¥ð[ð ð â ð ð] +
ð
ðð¥ð[ð ð â ð ð] â ð ð â
ðð ððð¥ð
â ð ð âðð ððð¥ð
)
=1
2(ð
ðð¥ð[ð ð â ð ð] +
ð
ðð¥ð[ð ð â ð ð] â ð ð â
ðð ððð¥ð
â ð ð âðð ð
ðð¥ð)
=1
2(ð
ðð¥ð[ð ð â ð ð] +
ð
ðð¥ð[ð ð â ð ð] â
ð
ðð¥ð[ð ð â ð ð])
=1
2(ðððð
ðð¥ð+ðððððð¥ð
âðððð
ðð¥ð)
Which is the quantity defined as the Christoffel symbols of the first kind in the textbooks. It is
therefore possible for us to write,
[ðð, ð] â¡ ðð ððð¥ð
â ð ð
=ðð ð
ðð¥ðâ ð ð
=1
2(ðððð
ðð¥ð+ðððððð¥ð
âðððð
ðð¥ð)
It should be emphasized that the Christoffel symbols, even though the play a critical role in
several tensor relationships, are themselves NOT tensor quantities. (Prove this). However, notice
their symmetry in the ð and ð. The extension of this definition to the Christoffel symbols of the
second kind is immediate:
Contract the above equation with the conjugate metric tensor, we have,
ðððŒ[ðð, ðŒ] â¡ ðððŒ ðð ððð¥ð
â ð ðŒ
= ðððŒðð ð
ðð¥ðâ ð ðŒ =
ðð ððð¥ð
â ð ð
61
= {ððð} = â
ðð ð
ðð¥ðâ ð ð
Which connects the common definition of the second Christoffel symbol with the one defined in
the above derivation. The relationship,
ðððŒ[ðð, ðŒ] = {ððð}
apart from defining the relationship between the Christoffel symbols of the first kind and that
second kind, also highlights, once more, the index-raising property of the conjugate metric
tensor.
We contract the above equation with ðððœand obtain,
ðððœðððŒ[ðð, ðŒ] = ðððœ {
ððð} ð¿ðœ
ðŒ [ðð, ðŒ]
= [ðð, ðœ] = ðððœ {ððð}
so that,
ðððŒ {ðŒðð} = [ðð, ð]
Showing that the metric tensor can be used to lower the contravariant index of the Christoffel
symbol of the second kind to obtain the Christoffel symbol of the first kind.
We are now in a position to express the derivatives of higher order tensor fields in terms of the
Christoffel symbols.
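Both kinds of Christoffel symbols, and the raising/lowering relationship between them, can be computed mechanically from any metric. A sympy sketch (my tooling choice) for the spherical-polar metric:

```python
# Christoffel symbols of both kinds from the spherical metric, with the raising/lowering check.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
q = (r, th, ph)
g = sp.diag(1, r**2, r**2*sp.sin(th)**2)   # spherical-polar metric tensor
ginv = g.inv()

def first_kind(i, j, k):                   # [ij, k]
    return (sp.diff(g[j, k], q[i]) + sp.diff(g[i, k], q[j]) - sp.diff(g[i, j], q[k]))/2

def second_kind(k, i, j):                  # {k over ij} = g^{k a} [ij, a]
    return sp.simplify(sum(ginv[k, a]*first_kind(i, j, a) for a in range(3)))

assert second_kind(0, 1, 1) == -r          # {r over theta theta}
assert second_kind(1, 0, 1) == 1/r         # {theta over r theta}
# lowering the upper index with the metric recovers the first kind
assert all(sp.simplify(sum(g[k, a]*second_kind(a, i, j) for a in range(3)) - first_kind(i, j, k)) == 0
           for i in range(3) for j in range(3) for k in range(3))
```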
Higher Order Tensors
For a second-order tensor $\mathbf{T}$, we can express the components in dyadic form along the product bases as follows:
$$\mathbf{T} = T^{ij}\,\mathbf{g}_i\otimes\mathbf{g}_j = T_{ij}\,\mathbf{g}^i\otimes\mathbf{g}^j = T^{i}{}_{j}\,\mathbf{g}_i\otimes\mathbf{g}^j = T_{i}{}^{j}\,\mathbf{g}^i\otimes\mathbf{g}_j$$
This is perfectly analogous to our expanding vectors in terms of basis and reciprocal bases. Derivatives of the tensor may therefore be expressed in any of these product bases. As an example, take the product covariant bases. We have:
$$\frac{\partial\mathbf{T}}{\partial x^k} = \frac{\partial T^{ij}}{\partial x^k}\,\mathbf{g}_i\otimes\mathbf{g}_j + T^{ij}\,\frac{\partial\mathbf{g}_i}{\partial x^k}\otimes\mathbf{g}_j + T^{ij}\,\mathbf{g}_i\otimes\frac{\partial\mathbf{g}_j}{\partial x^k}$$
Recall that $\dfrac{\partial\mathbf{g}_i}{\partial x^k}\cdot\mathbf{g}^j = \left\{{j \atop ik}\right\}$. It follows therefore that,
$$\frac{\partial\mathbf{g}_i}{\partial x^k}\cdot\mathbf{g}^j - \left\{{\alpha \atop ik}\right\}\delta^{j}_{\alpha} = \left(\frac{\partial\mathbf{g}_i}{\partial x^k} - \left\{{\alpha \atop ik}\right\}\mathbf{g}_\alpha\right)\cdot\mathbf{g}^j = 0$$
Since this holds for every vector $\mathbf{g}^j$ of the reciprocal basis, the vector in parentheses must itself vanish. Clearly,
$$\frac{\partial\mathbf{g}_i}{\partial x^k} = \left\{{\alpha \atop ik}\right\}\mathbf{g}_\alpha$$
Therefore,
$$\frac{\partial\mathbf{T}}{\partial x^k} = \frac{\partial T^{ij}}{\partial x^k}\,\mathbf{g}_i\otimes\mathbf{g}_j + T^{ij}\left\{{\alpha \atop ik}\right\}\mathbf{g}_\alpha\otimes\mathbf{g}_j + T^{ij}\,\mathbf{g}_i\otimes\left\{{\alpha \atop jk}\right\}\mathbf{g}_\alpha$$
$$= \left(\frac{\partial T^{ij}}{\partial x^k} + T^{\alpha j}\left\{{i \atop \alpha k}\right\} + T^{i\alpha}\left\{{j \atop \alpha k}\right\}\right)\mathbf{g}_i\otimes\mathbf{g}_j = T^{ij}{}_{,k}\,\mathbf{g}_i\otimes\mathbf{g}_j$$
where
$$T^{ij}{}_{,k} = \frac{\partial T^{ij}}{\partial x^k} + T^{\alpha j}\left\{{i \atop \alpha k}\right\} + T^{i\alpha}\left\{{j \atop \alpha k}\right\} = \frac{\partial T^{ij}}{\partial x^k} + T^{\alpha j}\,\Gamma^{i}_{\alpha k} + T^{i\alpha}\,\Gamma^{j}_{\alpha k}$$
are the components of the covariant derivative of the tensor $\mathbf{T}$ in terms of contravariant components on the product covariant bases, as shown.
In the same way, by taking the tensor expression in the dyadic form of its contravariant product bases, we can write,
$$\frac{\partial\mathbf{T}}{\partial x^k} = \frac{\partial T_{ij}}{\partial x^k}\,\mathbf{g}^i\otimes\mathbf{g}^j + T_{ij}\,\frac{\partial\mathbf{g}^i}{\partial x^k}\otimes\mathbf{g}^j + T_{ij}\,\mathbf{g}^i\otimes\frac{\partial\mathbf{g}^j}{\partial x^k}$$
Again, notice from the previous derivation that $\left\{{i \atop jk}\right\} = -\dfrac{\partial\mathbf{g}^i}{\partial x^k}\cdot\mathbf{g}_j$, so that $\dfrac{\partial\mathbf{g}^i}{\partial x^k} = -\left\{{i \atop \alpha k}\right\}\mathbf{g}^\alpha = -\Gamma^{i}_{\alpha k}\,\mathbf{g}^\alpha$. (The literature has the two representations for the Christoffel symbols.) Therefore,
$$\frac{\partial\mathbf{T}}{\partial x^k} = \frac{\partial T_{ij}}{\partial x^k}\,\mathbf{g}^i\otimes\mathbf{g}^j - T_{ij}\,\Gamma^{i}_{\alpha k}\,\mathbf{g}^\alpha\otimes\mathbf{g}^j - T_{ij}\,\mathbf{g}^i\otimes\Gamma^{j}_{\alpha k}\,\mathbf{g}^\alpha = \left(\frac{\partial T_{ij}}{\partial x^k} - T_{\alpha j}\,\Gamma^{\alpha}_{ik} - T_{i\alpha}\,\Gamma^{\alpha}_{jk}\right)\mathbf{g}^i\otimes\mathbf{g}^j = T_{ij,k}\,\mathbf{g}^i\otimes\mathbf{g}^j$$
so that
$$T_{ij,k} = \frac{\partial T_{ij}}{\partial x^k} - T_{\alpha j}\,\Gamma^{\alpha}_{ik} - T_{i\alpha}\,\Gamma^{\alpha}_{jk}$$
Two other expressions can be found for the covariant-derivative components in terms of the mixed tensor components using the mixed product bases defined above. It is a good exercise to derive these.
The formulas for covariant differentiation of higher-order tensors follow the same logic as the above definitions. Each contravariant index produces an additional Christoffel term with a positive sign, a dummy index supplanting the appropriate contravariant index; each covariant index produces an additional Christoffel term with a negative sign, a dummy index supplanting the appropriate covariant index.
The covariant derivative of the mixed tensor $A^{i_1 i_2\ldots i_m}{}_{j_1 j_2\ldots j_n}$ is the most general case for the covariant derivative of an absolute tensor:
$$A^{i_1\ldots i_m}{}_{j_1\ldots j_n,\,k} = \frac{\partial A^{i_1\ldots i_m}{}_{j_1\ldots j_n}}{\partial x^k} + \left\{{i_1 \atop \beta k}\right\}A^{\beta i_2\ldots i_m}{}_{j_1\ldots j_n} + \cdots + \left\{{i_m \atop \beta k}\right\}A^{i_1\ldots i_{m-1}\beta}{}_{j_1\ldots j_n} - \left\{{\alpha \atop j_1 k}\right\}A^{i_1\ldots i_m}{}_{\alpha j_2\ldots j_n} - \cdots - \left\{{\alpha \atop j_n k}\right\}A^{i_1\ldots i_m}{}_{j_1\ldots j_{n-1}\alpha}$$
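A good sanity check of this general rule is metric compatibility: applying the covariant-derivative formula to $g_{ij}$ (two covariant indices, hence two negative correction terms) must give zero in every component. A sympy sketch in plane polar coordinates (my own choice of example):

```python
# The general covariant-derivative rule applied to g_ij: every component must vanish.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
q = (r, th)
g = sp.diag(1, r**2)          # plane-polar metric
ginv = g.inv()

def gamma(k, i, j):           # Christoffel symbol of the second kind
    return sp.simplify(sum(ginv[k, a]*(sp.diff(g[j, a], q[i]) + sp.diff(g[i, a], q[j])
                                       - sp.diff(g[i, j], q[a]))/2 for a in range(2)))

def cov_deriv_g(i, j, k):     # g_ij,k with one negative correction term per covariant index
    return sp.simplify(sp.diff(g[i, j], q[k])
                       - sum(gamma(a, i, k)*g[a, j] for a in range(2))
                       - sum(gamma(a, j, k)*g[i, a] for a in range(2)))

assert all(cov_deriv_g(i, j, k) == 0 for i in range(2) for j in range(2) for k in range(2))
```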
Physical Components in Orthogonal Systems
The components of a vector or tensor in a Cartesian system are projections of the vector on directions that are dimensionless and of unit magnitude. These components therefore have the same units as the vectors themselves. It is natural therefore to expect that the components of a tensor always have the same dimensions. In general, this is not so. In curvilinear coordinates, components of tensors do not necessarily have a direct physical meaning. This comes from the fact that the base vectors are not guaranteed to have unit magnitudes ($\sqrt{g_{ii}} \neq 1$ in general) and may not be dimensionless.
For example, in orthogonal spherical polar coordinates the base vectors are $\mathbf{g}_1$, $\mathbf{g}_2$ and $\mathbf{g}_3$. These can be expressed in terms of dimensionless unit vectors as $r\,\mathbf{e}_\theta$, $r\sin\theta\,\mathbf{e}_\phi$ and $\mathbf{e}_r$, since the magnitudes of the basis vectors, $(\sqrt{g_{11}}, \sqrt{g_{22}}, \sqrt{g_{33}})$, are $r$, $r\sin\theta$ and $1$ respectively.
As an example, consider a force with the contravariant components $F^1$, $F^2$ and $F^3$:
$$\mathbf{F} = F^1\mathbf{g}_1 + F^2\mathbf{g}_2 + F^3\mathbf{g}_3 = rF^1\,\mathbf{e}_\theta + r\sin\theta\,F^2\,\mathbf{e}_\phi + F^3\,\mathbf{e}_r$$
which may also be expressed in terms of physical components,
$$\mathbf{F} = F_\theta\,\mathbf{e}_\theta + F_\phi\,\mathbf{e}_\phi + F_r\,\mathbf{e}_r$$
While these physical components $\{F_\theta, F_\phi, F_r\}$ have the dimensions of force, for the expansion in terms of the unit vectors along these axes to be consistent, $\{rF^1,\ r\sin\theta\,F^2,\ F^3\}$ must each be in the units of a force. Hence $F^1$, $F^2$ and $F^3$ may not themselves be in force units. The consistency requirement implies,
$$F_\theta = rF^1, \qquad F_\phi = r\sin\theta\,F^2, \qquad F_r = F^3$$
For the same reasons, had we used covariant components, the relationships would be
$$F_\theta = \frac{F_1}{r}, \qquad F_\phi = \frac{F_2}{r\sin\theta}, \qquad F_r = F_3$$
since the magnitudes of the reciprocal base vectors are $\dfrac{1}{r}$, $\dfrac{1}{r\sin\theta}$ and $1$. While the physical components have the dimensions of force, $F_1 = rF_\theta$ and $F_2 = r\sin\theta\,F_\phi$ have the dimensions of moment, while $F^1 = \dfrac{F_\theta}{r}$ and $F^2 = \dfrac{F_\phi}{r\sin\theta}$ are in dimensions of force per unit length. Only the third components in both cases are given in the dimensions of force.
For a second-order tensor $\mathbf{T}$ in this spherical system, with scale factors $h_1 = r$, $h_2 = r\sin\theta$, $h_3 = 1$, the physical components are $T(ij) = h_ih_j\,T^{ij} = \dfrac{T_{ij}}{h_ih_j} = \dfrac{h_i}{h_j}\,T^{i}{}_{j}$ (no sum), giving:

| Physical component | Contravariant | Covariant | Mixed |
| $T(\theta\theta)$ | $T^{11}h_1h_1 = T^{11}r^2$ | $\dfrac{T_{11}}{h_1h_1} = \dfrac{T_{11}}{r^2}$ | $\dfrac{h_1}{h_1}T^{1}{}_{1} = T^{1}{}_{1}$ |
| $T(\theta\phi)$ | $T^{12}h_1h_2 = T^{12}r^2\sin\theta$ | $\dfrac{T_{12}}{h_1h_2} = \dfrac{T_{12}}{r^2\sin\theta}$ | $\dfrac{h_1}{h_2}T^{1}{}_{2} = \dfrac{T^{1}{}_{2}}{\sin\theta}$ |
| $T(\phi\phi)$ | $T^{22}h_2h_2 = T^{22}r^2\sin^2\theta$ | $\dfrac{T_{22}}{h_2h_2} = \dfrac{T_{22}}{r^2\sin^2\theta}$ | $\dfrac{h_2}{h_2}T^{2}{}_{2} = T^{2}{}_{2}$ |
| $T(\phi r)$ | $T^{23}h_2h_3 = T^{23}r\sin\theta$ | $\dfrac{T_{23}}{h_2h_3} = \dfrac{T_{23}}{r\sin\theta}$ | $\dfrac{h_2}{h_3}T^{2}{}_{3} = r\sin\theta\,T^{2}{}_{3}$ |
| $T(rr)$ | $T^{33}h_3h_3 = T^{33}$ | $\dfrac{T_{33}}{h_3h_3} = T_{33}$ | $\dfrac{h_3}{h_3}T^{3}{}_{3} = T^{3}{}_{3}$ |
| $T(r\theta)$ | $T^{31}h_3h_1 = T^{31}r$ | $\dfrac{T_{31}}{h_3h_1} = \dfrac{T_{31}}{r}$ | $\dfrac{h_3}{h_1}T^{3}{}_{1} = \dfrac{T^{3}{}_{1}}{r}$ |
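The consistency requirement above — that the physical components obtained from the contravariant and the covariant components agree — is easy to verify symbolically. A sympy sketch with the scale factors of this example:

```python
# Physical components from contravariant and covariant components must agree.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
h = (r, r*sp.sin(th), sp.Integer(1))     # basis-vector magnitudes from the example above
g = sp.diag(*[hi**2 for hi in h])        # orthogonal metric, g_ii = h_i^2
F1, F2, F3 = sp.symbols('F1 F2 F3')
Fcontra = (F1, F2, F3)
Fcov = [g[i, i]*Fcontra[i] for i in range(3)]   # index lowering
for i in range(3):
    # F(i) = h_i F^i must equal F_i / h_i
    assert sp.simplify(h[i]*Fcontra[i] - Fcov[i]/h[i]) == 0
```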
The transformation equations from the Cartesian to the oblate spheroidal coordinates $\xi$, $\eta$ and $\theta$ are: $x = f\xi\eta\sin\theta$, $y = f\sqrt{(\xi^2-1)(1-\eta^2)}$, and $z = f\xi\eta\cos\theta$, where $f$ is a constant representing half the distance between the foci of a family of confocal ellipses. Find the components of the metric tensor in this system.

The metric tensor components are:
$$g_{\xi\xi} = \left(\frac{\partial x}{\partial\xi}\right)^2 + \left(\frac{\partial y}{\partial\xi}\right)^2 + \left(\frac{\partial z}{\partial\xi}\right)^2 = (f\eta\sin\theta)^2 + f^2\xi^2\,\frac{1-\eta^2}{\xi^2-1} + (f\eta\cos\theta)^2 = f^2\,\frac{\xi^2-\eta^2}{\xi^2-1}$$
$$g_{\eta\eta} = \left(\frac{\partial x}{\partial\eta}\right)^2 + \left(\frac{\partial y}{\partial\eta}\right)^2 + \left(\frac{\partial z}{\partial\eta}\right)^2 = f^2\,\frac{\xi^2-\eta^2}{1-\eta^2}$$
$$g_{\theta\theta} = \left(\frac{\partial x}{\partial\theta}\right)^2 + \left(\frac{\partial y}{\partial\theta}\right)^2 + \left(\frac{\partial z}{\partial\theta}\right)^2 = (f\xi\eta)^2$$
$$g_{\xi\eta} = \frac{\partial x}{\partial\xi}\frac{\partial x}{\partial\eta} + \frac{\partial y}{\partial\xi}\frac{\partial y}{\partial\eta} + \frac{\partial z}{\partial\xi}\frac{\partial z}{\partial\eta} = (f\eta\sin\theta)(f\xi\sin\theta) - f^2\xi\eta\,\sqrt{\frac{1-\eta^2}{\xi^2-1}}\sqrt{\frac{\xi^2-1}{1-\eta^2}} + (f\eta\cos\theta)(f\xi\cos\theta)$$
$$= f^2\xi\eta\sin^2\theta - f^2\xi\eta + f^2\xi\eta\cos^2\theta = 0 = g_{\xi\theta} = g_{\eta\theta}$$
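These metric components can be reproduced mechanically from the transformation. A sympy sketch (the map below is the one assumed in this example):

```python
# Metric components of the oblate spheroidal map above, computed from first principles.
import sympy as sp

f = sp.symbols('f', positive=True)
xi, eta, th = sp.symbols('xi eta theta', positive=True)
q = (xi, eta, th)
Xmap = [f*xi*eta*sp.sin(th),                       # x
        f*sp.sqrt((xi**2 - 1)*(1 - eta**2)),       # y
        f*xi*eta*sp.cos(th)]                       # z

def g(i, j):
    return sp.simplify(sum(sp.diff(c, q[i])*sp.diff(c, q[j]) for c in Xmap))

assert sp.simplify(g(0, 0) - f**2*(xi**2 - eta**2)/(xi**2 - 1)) == 0
assert sp.simplify(g(1, 1) - f**2*(xi**2 - eta**2)/(1 - eta**2)) == 0
assert sp.simplify(g(2, 2) - f**2*xi**2*eta**2) == 0
assert all(g(i, j) == 0 for i in range(3) for j in range(3) if i != j)   # orthogonality
```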
Find an expression for the divergence of a vector in orthogonal curvilinear coordinates.

The gradient of a vector $\mathbf{F} = F^i\,\mathbf{g}_i$ is $\operatorname{grad}\mathbf{F} = F^{i}{}_{,j}\,\mathbf{g}_i\otimes\mathbf{g}^j$, and the divergence is the contraction of the gradient. While we may use this to evaluate the divergence directly, it is often easier to use the computation formula in equation Ex 15:
$$\operatorname{div}\mathbf{F} = F^{i}{}_{,j}\,\mathbf{g}_i\cdot\mathbf{g}^j = F^{i}{}_{,i} = \frac{1}{\sqrt g}\,\frac{\partial(\sqrt g\,F^i)}{\partial x^i} = \frac{1}{h_1h_2h_3}\left[\frac{\partial}{\partial x^1}\left(h_1h_2h_3\,F^1\right) + \frac{\partial}{\partial x^2}\left(h_1h_2h_3\,F^2\right) + \frac{\partial}{\partial x^3}\left(h_1h_2h_3\,F^3\right)\right]$$
Recall that the physical components (components having the same units as the tensor in question) of a contravariant tensor are not equal to the tensor components unless the coordinate system is Cartesian. The physical component $F(i) = F^i h_i$ (no sum on $i$). In terms of the physical components, therefore, the divergence becomes,
$$\operatorname{div}\mathbf{F} = \frac{1}{h_1h_2h_3}\left[\frac{\partial}{\partial x^1}\left(h_2h_3\,F(1)\right) + \frac{\partial}{\partial x^2}\left(h_1h_3\,F(2)\right) + \frac{\partial}{\partial x^3}\left(h_1h_2\,F(3)\right)\right]$$
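The compact formula agrees with the direct contraction of the covariant derivative, $F^{i}{}_{,i} = \partial_i F^i + \Gamma^{i}_{\alpha i}F^\alpha$. A sympy sketch in plane polar coordinates, with an illustrative field of my own:

```python
# Direct contraction of the covariant derivative versus (1/sqrt(g)) d_i(sqrt(g) F^i), plane polar.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
q = (r, th)
g = sp.diag(1, r**2)
ginv = g.inv()
sg = sp.sqrt(g.det())                    # sqrt(g) = r

def gamma(k, i, j):
    return sum(ginv[k, a]*(sp.diff(g[j, a], q[i]) + sp.diff(g[i, a], q[j])
                           - sp.diff(g[i, j], q[a]))/2 for a in range(2))

F = [r**2*sp.cos(th), sp.sin(th)/r]      # illustrative contravariant components (my choice)
direct = sum(sp.diff(F[i], q[i]) for i in range(2)) \
       + sum(gamma(i, a, i)*F[a] for i in range(2) for a in range(2))
compact = sum(sp.diff(sg*F[i], q[i]) for i in range(2))/sg
assert sp.simplify(direct - compact) == 0
```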
Find an expression for the Laplacian operator in orthogonal coordinates.

For the contravariant component of a vector, $F^i$,
$$F^{i}{}_{,i} = \frac{1}{\sqrt g}\,\frac{\partial(\sqrt g\,F^i)}{\partial x^i}$$
Now the contravariant component of the gradient of a scalar $\varphi$ is $F^i = g^{ij}\varphi,_{j}$. Using this in place of the vector $F^i$, we can write,
$$g^{ij}\varphi,_{ji} = \frac{1}{\sqrt g}\,\frac{\partial}{\partial x^i}\left(\sqrt g\,g^{ij}\,\frac{\partial\varphi}{\partial x^j}\right)$$
Given a scalar $\varphi$, the Laplacian $\nabla^2\varphi$ is defined as $g^{ij}\varphi,_{ij}$, so that,
$$\nabla^2\varphi = g^{ij}\varphi,_{ij} = \frac{1}{\sqrt g}\,\frac{\partial}{\partial x^i}\left(\sqrt g\,g^{ij}\,\frac{\partial\varphi}{\partial x^j}\right)$$
When the coordinates are orthogonal, $g^{ij} = g_{ij} = 0$ whenever $i \neq j$, while $g^{ii} = 1/h_i^2$ (no sum) and $\sqrt g = h_1h_2h_3$. Expanding the computation formula, therefore, we can write,
$$\nabla^2\varphi = \frac{1}{h_1h_2h_3}\left[\frac{\partial}{\partial x^1}\left(\frac{h_1h_2h_3}{h_1^2}\,\frac{\partial\varphi}{\partial x^1}\right) + \frac{\partial}{\partial x^2}\left(\frac{h_1h_2h_3}{h_2^2}\,\frac{\partial\varphi}{\partial x^2}\right) + \frac{\partial}{\partial x^3}\left(\frac{h_1h_2h_3}{h_3^2}\,\frac{\partial\varphi}{\partial x^3}\right)\right]$$
$$= \frac{1}{h_1h_2h_3}\left[\frac{\partial}{\partial x^1}\left(\frac{h_2h_3}{h_1}\,\frac{\partial\varphi}{\partial x^1}\right) + \frac{\partial}{\partial x^2}\left(\frac{h_1h_3}{h_2}\,\frac{\partial\varphi}{\partial x^2}\right) + \frac{\partial}{\partial x^3}\left(\frac{h_1h_2}{h_3}\,\frac{\partial\varphi}{\partial x^3}\right)\right]$$
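A quick sympy check of this formula with the spherical scale factors $(1, r, r\sin\theta)$, applied to functions whose Laplacians are known:

```python
# The orthogonal-coordinate Laplacian formula with spherical scale factors (1, r, r sin(theta)).
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
q = (r, th, ph)
h = (sp.Integer(1), r, r*sp.sin(th))
H = h[0]*h[1]*h[2]

def lap(f):
    return sp.simplify(sum(sp.diff(H/h[i]**2*sp.diff(f, q[i]), q[i]) for i in range(3))/H)

assert lap(r**2) == 6                       # Laplacian of x.x, in spherical form
assert lap(1/r) == 0                        # 1/r is harmonic away from the origin
assert sp.simplify(lap(r*sp.cos(th))) == 0  # z is harmonic
```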
Show that the oblate spheroidal coordinate system is orthogonal. Find an expression for the Laplacian of a scalar function in this system.

The example above shows that $g_{\xi\eta} = g_{\xi\theta} = g_{\eta\theta} = 0$. This is the required proof of orthogonality. The scale factors are $h_\xi = f\sqrt{\dfrac{\xi^2-\eta^2}{\xi^2-1}}$, $h_\eta = f\sqrt{\dfrac{\xi^2-\eta^2}{1-\eta^2}}$ and $h_\theta = f\xi\eta$. Using the computation formula in example 11, we may write for the oblate spheroidal coordinates that,
$$\nabla^2\Phi = \frac{\sqrt{(\xi^2-1)(1-\eta^2)}}{f^3\xi\eta\,(\xi^2-\eta^2)}\left[\frac{\partial}{\partial\xi}\left(f\xi\eta\sqrt{\frac{\xi^2-1}{1-\eta^2}}\,\frac{\partial\Phi}{\partial\xi}\right) + \frac{\partial}{\partial\eta}\left(f\xi\eta\sqrt{\frac{1-\eta^2}{\xi^2-1}}\,\frac{\partial\Phi}{\partial\eta}\right) + \frac{\partial}{\partial\theta}\left(\frac{f(\xi^2-\eta^2)}{\xi\eta\sqrt{(\xi^2-1)(1-\eta^2)}}\,\frac{\partial\Phi}{\partial\theta}\right)\right]$$
3.61 Given tensors $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$, we define the tensor square product, with operator $\boxtimes$, in the equation $(\mathbf{A}\boxtimes\mathbf{B})\,\mathbf{C} = \mathbf{A}\mathbf{C}\mathbf{B}^{\mathrm T}$. Show that the square product of two tensors $\mathbf{A}$ and $\mathbf{B}$ has the component form $\mathbf{A}\boxtimes\mathbf{B} = A^{ik}B^{jl}\,\mathbf{g}_i\otimes\mathbf{g}_j\otimes\mathbf{g}_k\otimes\mathbf{g}_l$.

Let $\mathbf{A} = A^{ij}\,\mathbf{g}_i\otimes\mathbf{g}_j$, $\mathbf{B} = B^{ij}\,\mathbf{g}_i\otimes\mathbf{g}_j$, $\mathbf{C} = C_{\alpha\beta}\,\mathbf{g}^\alpha\otimes\mathbf{g}^\beta$. If we assume that $\mathbf{A}\boxtimes\mathbf{B} = A^{ik}B^{jl}\,\mathbf{g}_i\otimes\mathbf{g}_j\otimes\mathbf{g}_k\otimes\mathbf{g}_l$, then the product,
$$(\mathbf{A}\boxtimes\mathbf{B})\,\mathbf{C} = A^{ik}B^{jl}\left(\mathbf{g}_i\otimes\mathbf{g}_j\otimes\mathbf{g}_k\otimes\mathbf{g}_l\right)\left(C_{\alpha\beta}\,\mathbf{g}^\alpha\otimes\mathbf{g}^\beta\right) = A^{ik}B^{jl}C_{\alpha\beta}\,\delta^{\alpha}_{k}\,\delta^{\beta}_{l}\,\mathbf{g}_i\otimes\mathbf{g}_j = A^{ik}B^{jl}C_{kl}\,\mathbf{g}_i\otimes\mathbf{g}_j$$
To complete the proof, we only need to show that the above result equals $\mathbf{A}\mathbf{C}\mathbf{B}^{\mathrm T}$. To do so, we change the basis order in $\mathbf{B}$ so that,
$$\mathbf{A}\mathbf{C}\mathbf{B}^{\mathrm T} = \left(A^{ik}\,\mathbf{g}_i\otimes\mathbf{g}_k\right)\left(C_{\alpha\beta}\,\mathbf{g}^\alpha\otimes\mathbf{g}^\beta\right)\left(B^{jl}\,\mathbf{g}_l\otimes\mathbf{g}_j\right) = A^{ik}C_{\alpha\beta}B^{jl}\,\delta^{\alpha}_{k}\,\delta^{\beta}_{l}\,\mathbf{g}_i\otimes\mathbf{g}_j = A^{ik}B^{jl}C_{kl}\,\mathbf{g}_i\otimes\mathbf{g}_j = (\mathbf{A}\boxtimes\mathbf{B})\,\mathbf{C}$$
We can therefore conclude that $\mathbf{A}\boxtimes\mathbf{B} = A^{ik}B^{jl}\,\mathbf{g}_i\otimes\mathbf{g}_j\otimes\mathbf{g}_k\otimes\mathbf{g}_l$.
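The component form above can be checked numerically in a Cartesian setting with random matrices (numpy and `einsum` are my tooling choices, not the text's):

```python
# The component form of the square product, checked numerically against A C B^T.
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
# (A box B)_{ijkl} = A_{ik} B_{jl}; acting on C contracts the last two indices
sq = np.einsum('ik,jl->ijkl', A, B)
lhs = np.einsum('ijkl,kl->ij', sq, C)
rhs = A @ C @ B.T
assert np.allclose(lhs, rhs)
```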
References
Abramowitz, M & Stegun, I, Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables, Dover Publications, Inc., New York, 1970
Bertram, A, Elasticity and Plasticity of Large Deformations: An Introduction, Springer, 2008
Bower, AF, Applied Mechanics of Solids, CRC Press, NY, 2010
Brannon, RM, Functional and Structured Tensor Analysis for Engineers, UNM BOOK DRAFT, 2006, pp 177-184
Dill, EH, Continuum Mechanics: Elasticity, Plasticity, Viscoelasticity, CRC Press, 2007
Gurtin, ME, Fried, E & Anand, L, The Mechanics and Thermodynamics of Continua, Cambridge University Press, 2010
Heinbockel, JH, Introduction to Tensor Calculus and Continuum Mechanics, Trafford, 2003
Holzapfel, GA, Nonlinear Solid Mechanics, Wiley, NY, 2007
Humphrey, JD, Cardiovascular Solid Mechanics: Cells, Tissues and Organs, Springer-Verlag, NY, 2002
Itskov, M, Tensor Algebra and Tensor Analysis for Engineers with Applications to Continuum Mechanics, Springer, 2010
Jog, CS, Continuum Mechanics: Foundations and Applications of Mechanics, Cambridge-IISc Series, 2015
Kay, DC, Tensor Calculus, Schaum Outline Series, McGraw-Hill, 1988
Mase, GE, Theory and Problems of Continuum Mechanics, Schaum Outline Series, McGraw-Hill, 1970
McConnell, AJ, Applications of Tensor Analysis, Dover Publications, NY, 1951
Ogden, RW, Nonlinear Elastic Deformations, Dover Publications, Inc., NY, 1997
Romano, A, Lancellotta, R & Marasco, A, Continuum Mechanics using Mathematica, Birkhauser, 2006
Sokolnikoff, IS, Tensor Analysis with Applications to Geometry and the Mechanics of Continua, John Wiley and Sons, Inc., New York, NY, 1964
Spiegel, MR, Theory and Problems of Vector Analysis, Schaum Outline Series, McGraw-Hill, 1974
Taber, LA, Nonlinear Theory of Elasticity, World Scientific, 2008
Wegner, JL & Haddow, JB, Elements of Continuum Mechanics and Thermodynamics, Cambridge University Press, 2009