Tensor Analysis: Differential and Integral Calculus with Tensors
"We do not fuss over smoothness assumptions: Functions and boundaries of regions are presumed to have continuity and differentiability properties sufficient to make meaningful the underlying analysis…" Morton Gurtin, et al.
Metadata
1. The first chapter provided the necessary background to study tensors. Chapter 2 explored the properties of tensors.
2. This third chapter begins the applications by introducing the concept of differentiation for objects larger than scalars.
3. The gradient of a scalar, vector or tensor is connected to the extension of the concept of the directional derivative called the Gateaux differential.
4. The complexity in the differentiation of large objects comes, not from the objects themselves, but from the domain of differentiation. Differentiation rules with respect to scalar arguments follow closely the corresponding laws for scalar fields.
5. Integral theorems of Stokes, Green and Gauss are introduced with computational examples.
6. Orthogonal curvilinear systems are dealt with in the addendum. Important results and the use of Computer Algebra are given.
In the calculus we are embarking upon, there will be tensor-valued functions and tensor domains. In the most complex cases, the domains and the function values are both tensors. Most common is the case where the domain is the three-dimensional Euclidean point space. Even though the points of this space are located by position vectors, the points themselves are not tensors. Tensor-valued functions defined on such domains are called tensor fields.
Differentiation & Large Objects
Differentiating with Respect to Scalar Arguments
We are already familiar with the techniques of differentiation of simple objects such as scalars with respect to other scalars. These scalars are defined in scalar domains. In the simplest case, $x, h \in \mathbb{R}$, the derivative $f'(x)$ of the function $f:\mathbb{R}\to\mathbb{R}$ is defined as,

$$f'(x) = \lim_{h\to 0} \frac{f(x+h) - f(x)}{h}.$$
Things become more complex when we handle tensors. A second-order tensor, as we have seen, when expressed in component form, contains nine scalars. A vector, a first-order tensor, is made of three scalar components.
The complication does not actually arise from the size of the objects themselves. In fact, the differentiation of tensor objects with respect to scalar arguments, with some adjustments, basically conforms to the same rules as the above differentiation of scalars:

Division of a tensor by a scalar is accomplished by multiplying the tensor by the inverse of the scalar. This operation is defined in all vector spaces to which our vectors and tensors belong. Consequently, the derivative of the tensor $\mathbf{T}(t)$, with respect to a scalar argument such as time, for example, can be defined as,

$$\frac{d}{dt}\mathbf{T}(t) = \lim_{h\to 0}\frac{\mathbf{T}(t+h) - \mathbf{T}(t)}{h} \equiv \lim_{h\to 0}\frac{1}{h}\left(\mathbf{T}(t+h) - \mathbf{T}(t)\right)$$
If $\alpha(t) \in \mathbb{R}$ and the tensor $\mathbf{T}(t) \in \mathbb{T}$ are both functions of time $t \in \mathbb{R}$, we find,

$$\begin{aligned}
\frac{d}{dt}(\alpha\mathbf{T}) &= \lim_{h\to 0}\frac{\alpha(t+h)\mathbf{T}(t+h) - \alpha(t)\mathbf{T}(t)}{h}\\
&= \lim_{h\to 0}\frac{\alpha(t+h)\mathbf{T}(t+h) - \alpha(t)\mathbf{T}(t+h) + \alpha(t)\mathbf{T}(t+h) - \alpha(t)\mathbf{T}(t)}{h}\\
&= \lim_{h\to 0}\frac{\alpha(t+h)\mathbf{T}(t+h) - \alpha(t)\mathbf{T}(t+h)}{h} + \lim_{h\to 0}\frac{\alpha(t)\mathbf{T}(t+h) - \alpha(t)\mathbf{T}(t)}{h}\\
&= \left(\lim_{h\to 0}\frac{\alpha(t+h) - \alpha(t)}{h}\right)\left(\lim_{h\to 0}\mathbf{T}(t+h)\right) + \alpha(t)\lim_{h\to 0}\frac{\mathbf{T}(t+h) - \mathbf{T}(t)}{h}
\end{aligned}$$

so that

$$\frac{d}{dt}(\alpha\mathbf{T}) = \alpha\frac{d\mathbf{T}}{dt} + \frac{d\alpha}{dt}\mathbf{T}$$

showing that the familiar product rule applies as expected.
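This limit argument can be checked symbolically. The sketch below (Python with sympy, using an arbitrary illustrative scalar function and $2\times 2$ matrix function, not ones from the text) confirms the product rule $\frac{d}{dt}(\alpha\mathbf{T}) = \alpha\dot{\mathbf{T}} + \dot{\alpha}\mathbf{T}$:

```python
import sympy as sp

t = sp.symbols('t')
# illustrative choices; any differentiable alpha(t) and T(t) would do
alpha = sp.sin(t)
T = sp.Matrix([[t**2, sp.exp(t)],
               [sp.cos(t), t]])

lhs = sp.diff(alpha * T, t)                          # d/dt (alpha T)
rhs = alpha * sp.diff(T, t) + sp.diff(alpha, t) * T  # product rule
assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
```

The same pattern verifies every entry of the table that follows.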
Proceeding in a similar fashion, for $\alpha(t)\in\mathbb{R}$, $\mathbf{u}(t), \mathbf{v}(t)\in\mathbb{V}$, and $\mathbf{S}(t), \mathbf{T}(t)\in\mathbb{T}$, all being functions of a scalar variable $t$, the following results hold as expected.
| Expression | Note |
|---|---|
| $\frac{d}{dt}(\alpha\mathbf{u}) = \alpha\frac{d\mathbf{u}}{dt} + \frac{d\alpha}{dt}\mathbf{u}$ | Each term on the RHS retains the commutative property of multiplication by a scalar. |
| $\frac{d}{dt}(\mathbf{u}\cdot\mathbf{v}) = \frac{d\mathbf{u}}{dt}\cdot\mathbf{v} + \mathbf{u}\cdot\frac{d\mathbf{v}}{dt}$ | Each term on the RHS retains the commutative property of the scalar product. |
| $\frac{d}{dt}(\mathbf{u}\times\mathbf{v}) = \frac{d\mathbf{u}}{dt}\times\mathbf{v} + \mathbf{u}\times\frac{d\mathbf{v}}{dt}$ | Original product order must be maintained. |
| $\frac{d}{dt}(\mathbf{u}\otimes\mathbf{v}) = \frac{d\mathbf{u}}{dt}\otimes\mathbf{v} + \mathbf{u}\otimes\frac{d\mathbf{v}}{dt}$ | Original product order must be maintained. |
| $\frac{d}{dt}(\mathbf{S}+\mathbf{T}) = \frac{d\mathbf{S}}{dt} + \frac{d\mathbf{T}}{dt}$ | Sum of tensors. |
| $\frac{d}{dt}(\mathbf{S}\mathbf{T}) = \frac{d\mathbf{S}}{dt}\mathbf{T} + \mathbf{S}\frac{d\mathbf{T}}{dt}$ | Product of tensors. Note that we must maintain the order of the product as shown: $\mathbf{S}\frac{d\mathbf{T}}{dt} \ne \frac{d\mathbf{T}}{dt}\mathbf{S}$. |
| $\frac{d}{dt}(\mathbf{S}:\mathbf{T}) = \frac{d\mathbf{S}}{dt}:\mathbf{T} + \mathbf{S}:\frac{d\mathbf{T}}{dt}$ | Scalar product of tensors. The order is not important here: $\mathbf{S}:\frac{d\mathbf{T}}{dt} = \frac{d\mathbf{T}}{dt}:\mathbf{S}$. |
| $\frac{d}{dt}(\alpha\mathbf{T}) = \alpha\frac{d\mathbf{T}}{dt} + \frac{d\alpha}{dt}\mathbf{T}$ | Scalar multiplication of a tensor. Order is not important in multiplication by a scalar. |
| $\frac{d}{dt}(\mathbf{T}\mathbf{u}) = \frac{d\mathbf{T}}{dt}\mathbf{u} + \mathbf{T}\frac{d\mathbf{u}}{dt}$ | The original operation order must be maintained in each term of the derivatives. |
| $\frac{d}{dt}\mathbf{T}^{\mathrm{T}} = \left(\frac{d\mathbf{T}}{dt}\right)^{\mathrm{T}}$ | Derivative of a transpose is the transpose of the derivative. |
Scalar Argument: Time Derivatives of Tensors
1. Consequences of Differentiating the Identity Tensor
Differentiating a tensor with respect to time, or any other scalar parameter, follows the expected pattern with little modification. Here are some further results.

The Inverse. Certain important results come from the obvious fact that the identity tensor is a constant, so that

$$\frac{d\mathbf{I}}{dt} = \mathbf{0}.$$
We recognize the fact that the derivative of a tensor with respect to a scalar must give a tensor. The value here is the annihilator tensor, $\mathbf{0}$. For any invertible tensor-valued scalar function $\mathbf{T}(t)$, we differentiate the equation $\mathbf{T}^{-1}(t)\,\mathbf{T}(t) = \mathbf{I}$ to obtain,

$$\frac{d\mathbf{T}^{-1}}{dt}\mathbf{T} + \mathbf{T}^{-1}\frac{d\mathbf{T}}{dt} = \mathbf{0}
\;\Rightarrow\; \frac{d\mathbf{T}^{-1}}{dt}\mathbf{T} = -\mathbf{T}^{-1}\frac{d\mathbf{T}}{dt}$$

If we post-multiply both sides by $\mathbf{T}^{-1}$, the following important expression results for the derivative of the inverse tensor with respect to a scalar parameter, in terms of the derivative of the original tensor function:

$$\frac{d\mathbf{T}^{-1}}{dt} = -\mathbf{T}^{-1}\frac{d\mathbf{T}}{dt}\mathbf{T}^{-1}$$

Conversely,

$$\frac{d\mathbf{T}}{dt} = -\mathbf{T}\frac{d\mathbf{T}^{-1}}{dt}\mathbf{T}$$
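As a quick symbolic sanity check, the sketch below (an arbitrary invertible matrix function chosen for illustration, not from the text) confirms the formula for the derivative of the inverse:

```python
import sympy as sp

t = sp.symbols('t')
# an arbitrary matrix function of t; det = 2 + t**2, so it is always invertible
T = sp.Matrix([[1 + t**2, t],
               [t, 2]])

lhs = sp.diff(T.inv(), t)
rhs = -T.inv() * sp.diff(T, t) * T.inv()
assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
```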
Orthogonal Tensors. An orthogonal tensor, as well as its transpose, can each be a function of a scalar parameter. Their product must still equal the identity tensor at any value of the argument, so that $\mathbf{Q}(t)\mathbf{Q}^{\mathrm{T}}(t) = \mathbf{I}$. One consequence of this relationship is that the tensor-valued function,

$$\mathbf{\Omega}(t) \equiv \frac{d\mathbf{Q}(t)}{dt}\mathbf{Q}^{\mathrm{T}}(t)$$

of the same scalar parameter must be skew. This is another result from differentiating the identity tensor $\mathbf{Q}\mathbf{Q}^{\mathrm{T}} = \mathbf{I}$:

$$\frac{d}{dt}\left(\mathbf{Q}\mathbf{Q}^{\mathrm{T}}\right) = \frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathrm{T}} + \mathbf{Q}\frac{d\mathbf{Q}^{\mathrm{T}}}{dt} = \frac{d\mathbf{I}}{dt} = \mathbf{0}$$

Consequently,

$$\frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathrm{T}} = -\mathbf{Q}\frac{d\mathbf{Q}^{\mathrm{T}}}{dt} = -\left(\frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathrm{T}}\right)^{\mathrm{T}}$$

So we have that the tensor $\mathbf{\Omega} = \frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathrm{T}}$ is the negative of its own transpose; hence it is skew.
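The skewness of $\mathbf{\Omega} = \dot{\mathbf{Q}}\mathbf{Q}^{\mathrm{T}}$ is easy to verify numerically. The sketch below uses a hypothetical rotation history $\theta(t) = t^2$ about the $x_3$-axis and a central-difference approximation to $d\mathbf{Q}/dt$:

```python
import numpy as np

def rot_z(theta):
    """Rotation about the x3-axis by angle theta (a proper orthogonal tensor)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

theta = lambda t: t**2          # hypothetical rotation history
t, h = 0.7, 1e-6
dQdt = (rot_z(theta(t + h)) - rot_z(theta(t - h))) / (2 * h)
Omega = dQdt @ rot_z(theta(t)).T

# Omega should be the negative of its own transpose, i.e. skew
assert np.allclose(Omega, -Omega.T, atol=1e-6)
```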
Angular Velocity. Consider a rigid body fixed at one end $O$, for example the spinning top shown. It is given a rotation $\mathbf{Q}(t)$ from rest so that each point $P$ is at a position vector $\mathbf{r}(t)$ at time $t$, related to the original position $\mathbf{r}_0$ by the equation,

$$\mathbf{r}(t) = \mathbf{Q}(t)\mathbf{r}_0$$

We can find the velocity by differentiating the position vector,

$$\frac{d\mathbf{r}}{dt} = \frac{d\mathbf{Q}}{dt}\mathbf{r}_0 = \frac{d\mathbf{Q}}{dt}\mathbf{Q}^{-1}\mathbf{r}$$

The rotation is an orthogonal tensor, hence its inverse is its transpose, so that,

$$\mathbf{v} = \frac{d\mathbf{r}}{dt} = \frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathrm{T}}\mathbf{r} = \mathbf{\Omega}\mathbf{r}$$

As we have seen above, $\mathbf{\Omega}$ is a skew tensor (every rotation is a proper orthogonal tensor), hence it is associated with an axial vector $\boldsymbol{\omega}$ such that $(\boldsymbol{\omega}\times) = \mathbf{\Omega}$. From this fact we can see that every point in the body has a velocity,

$$\mathbf{v} = \boldsymbol{\omega}\times\mathbf{r}$$

$\boldsymbol{\omega}$, defined by this expression as the axial vector of $\frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathrm{T}}$, where $\mathbf{Q}(t)$ is the rotation function, is called the angular velocity.
2. The Magnitude of a Tensor.
We saw in the previous chapter that a tensor belongs to its own Euclidean vector space, which is equipped with a scalar product and, consequently, a scalar magnitude. Consider the magnitude,

$$m(t) = \sqrt{\mathbf{T}(t):\mathbf{T}(t)} = \left|\mathbf{T}(t)\right|$$

so that $m^2 = \mathbf{T}:\mathbf{T}$. Differentiating this scalar equation, and remembering that the scalar operand here is just a product, we have,

$$\frac{d}{dt}m^2 = 2m\frac{dm}{dt} = \frac{d\mathbf{T}}{dt}:\mathbf{T} + \mathbf{T}:\frac{d\mathbf{T}}{dt} = 2\frac{d\mathbf{T}}{dt}:\mathbf{T}.$$

This simplifies to

$$\frac{dm}{dt} = \frac{d}{dt}\left|\mathbf{T}(t)\right| = \frac{\dfrac{d\mathbf{T}(t)}{dt}:\mathbf{T}(t)}{\left|\mathbf{T}(t)\right|}$$
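This result can also be checked symbolically. The sketch below uses an arbitrary $2\times 2$ matrix function (the Frobenius norm plays the role of the tensor magnitude $|\mathbf{T}| = \sqrt{\mathbf{T}:\mathbf{T}}$):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
T = sp.Matrix([[t, 1],
               [sp.exp(t), t**2]])

# |T| = sqrt(T:T), built from the components
mag = sp.sqrt(sum(T[i, j]**2 for i in range(2) for j in range(2)))
Tdot = sp.diff(T, t)
inner = sum(Tdot[i, j] * T[i, j] for i in range(2) for j in range(2))  # Tdot : T

assert sp.simplify(sp.diff(mag, t) - inner / mag) == 0
```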
3. Tensor Invariants.
Consider, as before, that the tensor $\mathbf{S}(t)$ varies with a scalar parameter $t$. What are the responses of the three invariants of $\mathbf{S}$? The invariants, as was shown in Chapter 2, are the trace, $\operatorname{tr}\mathbf{S}$, the trace of the cofactor, $\operatorname{tr}\mathbf{S}^{\mathrm{c}}$, and the determinant, $\det\mathbf{S}$.

The trace is a linear operator. It follows immediately that

$$\frac{d}{dt}\operatorname{tr}\mathbf{S} = \operatorname{tr}\frac{d\mathbf{S}}{dt}$$

a fact that can also be established easily by observing that,

$$\frac{d}{dt}I_1(\mathbf{S}) = \frac{d}{dt}\operatorname{tr}\mathbf{S} = \frac{\left[\dfrac{d\mathbf{S}}{dt}\mathbf{a}, \mathbf{b}, \mathbf{c}\right] + \left[\mathbf{a}, \dfrac{d\mathbf{S}}{dt}\mathbf{b}, \mathbf{c}\right] + \left[\mathbf{a}, \mathbf{b}, \dfrac{d\mathbf{S}}{dt}\mathbf{c}\right]}{[\mathbf{a}, \mathbf{b}, \mathbf{c}]} = \operatorname{tr}\frac{d\mathbf{S}}{dt}$$

The second invariant is NOT a linear scalar-valued function of its tensor argument. However, we have the expression,

$$\mathbf{S}^{\mathrm{c}} = \mathbf{S}^{-\mathrm{T}}\det\mathbf{S} \;\Rightarrow\; \operatorname{tr}\mathbf{S}^{\mathrm{c}} = \operatorname{tr}\left(\mathbf{S}^{-\mathrm{T}}\det\mathbf{S}\right)$$

Differentiating with respect to $t$,

$$\frac{d}{dt}\operatorname{tr}\mathbf{S}^{\mathrm{c}} = \operatorname{tr}\frac{d}{dt}\left(\mathbf{S}^{-\mathrm{T}}\det\mathbf{S}\right)$$

Not a very useful quantity. The derivative of the third invariant with respect to a scalar argument is of momentous importance. It is the basis of Liouville's theorem and is fundamental to the study of continuum flow in general. Just like the second invariant, the third invariant is not a linear function of its tensor argument.

$$I_3(\mathbf{S}) = \frac{[\mathbf{S}\mathbf{a}, \mathbf{S}\mathbf{b}, \mathbf{S}\mathbf{c}]}{[\mathbf{a}, \mathbf{b}, \mathbf{c}]} = \det\mathbf{S}$$

so that $[\mathbf{a}, \mathbf{b}, \mathbf{c}]\det\mathbf{S} = [\mathbf{S}\mathbf{a}, \mathbf{S}\mathbf{b}, \mathbf{S}\mathbf{c}]$. Differentiating, we have,

$$\begin{aligned}
[\mathbf{a}, \mathbf{b}, \mathbf{c}]\frac{d}{dt}\det\mathbf{S} &= \left[\frac{d\mathbf{S}}{dt}\mathbf{a}, \mathbf{S}\mathbf{b}, \mathbf{S}\mathbf{c}\right] + \left[\mathbf{S}\mathbf{a}, \frac{d\mathbf{S}}{dt}\mathbf{b}, \mathbf{S}\mathbf{c}\right] + \left[\mathbf{S}\mathbf{a}, \mathbf{S}\mathbf{b}, \frac{d\mathbf{S}}{dt}\mathbf{c}\right]\\
&= \left[\frac{d\mathbf{S}}{dt}\mathbf{S}^{-1}\mathbf{S}\mathbf{a}, \mathbf{S}\mathbf{b}, \mathbf{S}\mathbf{c}\right] + \left[\mathbf{S}\mathbf{a}, \frac{d\mathbf{S}}{dt}\mathbf{S}^{-1}\mathbf{S}\mathbf{b}, \mathbf{S}\mathbf{c}\right] + \left[\mathbf{S}\mathbf{a}, \mathbf{S}\mathbf{b}, \frac{d\mathbf{S}}{dt}\mathbf{S}^{-1}\mathbf{S}\mathbf{c}\right]\\
&= \operatorname{tr}\left(\frac{d\mathbf{S}}{dt}\mathbf{S}^{-1}\right)[\mathbf{S}\mathbf{a}, \mathbf{S}\mathbf{b}, \mathbf{S}\mathbf{c}]
\end{aligned}$$

So that,

$$\frac{d}{dt}\det\mathbf{S} = \operatorname{tr}\left(\frac{d\mathbf{S}}{dt}\mathbf{S}^{-1}\right)\det\mathbf{S}.$$
Differentiation: Vector & Tensor Arguments
So far, the domain of differentiation has remained in the real space. The objects we have differentiated were larger in the sense that we dealt with vectors and tensors. The evaluation of derivatives of larger objects presented a small difficulty but remained essentially the same so long as the domain remained real. When the domain of differentiation itself is made up of large objects, the task of differentiation becomes a little more demanding. Such problems are standard in continuum mechanics. For example, the energy function is a scalar, yet we can obtain the strains from it by differentiating with respect to the stress. We are dealing there with the differentiation of a scalar function of a tensor: the stress. We may want to compute the velocity gradient from the velocity function. Here, we are differentiating a vector field defined on the Euclidean point space, $\mathcal{E}$, with respect to the position vector of the points in $\mathcal{E}$. In these and several other derivatives of interest, the domains are no longer in the simple real space.
The approach to this challenge is twofold:

1. Recognize that the vectors and tensors live in their respective Euclidean VECTOR spaces where the concept of length is already defined.
2. Use the above to extend the concept of directional derivative to include the derivative of any object from a given Euclidean space with respect to objects from another.

Such a generalization is the Gateaux differential. Consider a map,

$$f: \mathbb{V} \to \mathbb{W}$$

This maps from the domain $\mathbb{V}$ to $\mathbb{W}$, both of which are Euclidean vector spaces. The concepts of limit and continuity carry naturally from the real space to any Euclidean vector space.

Let $\mathbf{v}_0 \in \mathbb{V}$ and $\mathbf{w}_0 \in \mathbb{W}$. As usual, we say that

$$\lim_{\mathbf{v}\to\mathbf{v}_0} f(\mathbf{v}) = \mathbf{w}_0$$

if for any pre-assigned real number $\varepsilon > 0$, no matter how small, we can always find a real number $\delta > 0$ such that $|f(\mathbf{v}) - \mathbf{w}_0| \le \varepsilon$ whenever $|\mathbf{v} - \mathbf{v}_0| < \delta$. The function is said to be continuous at $\mathbf{v}_0$ if $f(\mathbf{v}_0)$ exists and $f(\mathbf{v}_0) = \mathbf{w}_0$.
The Gateaux Differential
Specifically, for $\alpha \in \mathbb{R}$, let this map be:

$$Df(\mathbf{x}, \mathbf{h}) \equiv \lim_{\alpha\to 0}\frac{f(\mathbf{x} + \alpha\mathbf{h}) - f(\mathbf{x})}{\alpha} = \left.\frac{d}{d\alpha}f(\mathbf{x} + \alpha\mathbf{h})\right|_{\alpha=0}$$

We focus attention on the second variable while we allow the dependency on $\mathbf{x}$ to be as general as possible. We shall show that while the above function can be any given function of $\mathbf{x}$ (linear or nonlinear), the above map is always linear in $\mathbf{h}$, irrespective of what kind of Euclidean space we are mapping from or into. It is called the Gateaux differential.
Real Functions in Real Domains. Let us make the Gateaux differential a little more familiar in real space in two steps: First, we move to the real space and let $\mathbf{h} \to dx$, obtaining,

$$DF(x, dx) = \lim_{\alpha\to 0}\frac{F(x + \alpha\,dx) - F(x)}{\alpha} = \left.\frac{d}{d\alpha}F(x + \alpha\,dx)\right|_{\alpha=0}$$

Next, let $\alpha\,dx \to \Delta x$; the middle term becomes,

$$\lim_{\Delta x\to 0}\frac{F(x + \Delta x) - F(x)}{\Delta x}\,dx = \frac{dF}{dx}\,dx$$
from which it is obvious that the Gateaux differential is a generalization of the well-known differential from elementary calculus. The Gateaux differential helps to compute a local linear approximation of any function (linear or nonlinear).
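A one-line symbolic check makes this reduction concrete; the function below is an arbitrary (hypothetical) choice:

```python
import sympy as sp

x, dx, alpha = sp.symbols('x dx alpha')
F = sp.exp(x) * sp.sin(x)  # arbitrary real function for illustration

# Gateaux differential: d/d(alpha) of F(x + alpha dx), evaluated at alpha = 0
gateaux = sp.diff(F.subs(x, x + alpha * dx), alpha).subs(alpha, 0)

# it coincides with the elementary differential F'(x) dx
assert sp.simplify(gateaux - sp.diff(F, x) * dx) == 0
```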
Linearity. It is easily shown that the Gateaux differential is linear in its second argument; that is, for $\alpha \in \mathbb{R}$,

$$Df(\mathbf{x}, \alpha\mathbf{h}) = \alpha\,Df(\mathbf{x}, \mathbf{h})$$

Furthermore,

$$\begin{aligned}
Df(\mathbf{x}, \mathbf{g} + \mathbf{h}) &= \lim_{\alpha\to 0}\frac{f(\mathbf{x} + \alpha(\mathbf{g} + \mathbf{h})) - f(\mathbf{x})}{\alpha}\\
&= \lim_{\alpha\to 0}\frac{f(\mathbf{x} + \alpha\mathbf{g} + \alpha\mathbf{h}) - f(\mathbf{x} + \alpha\mathbf{g}) + f(\mathbf{x} + \alpha\mathbf{g}) - f(\mathbf{x})}{\alpha}\\
&= \lim_{\alpha\to 0}\frac{f(\mathbf{y} + \alpha\mathbf{h}) - f(\mathbf{y})}{\alpha} + \lim_{\alpha\to 0}\frac{f(\mathbf{x} + \alpha\mathbf{g}) - f(\mathbf{x})}{\alpha}\\
&= Df(\mathbf{x}, \mathbf{h}) + Df(\mathbf{x}, \mathbf{g})
\end{aligned}$$

as the variable $\mathbf{y} \equiv \mathbf{x} + \alpha\mathbf{g} \to \mathbf{x}$ as $\alpha \to 0$. For $\alpha, \beta \in \mathbb{R}$, using similar arguments, we can also show that,

$$Df(\mathbf{x}, \alpha\mathbf{g} + \beta\mathbf{h}) = \alpha\,Df(\mathbf{x}, \mathbf{g}) + \beta\,Df(\mathbf{x}, \mathbf{h})$$
Points to Note:
- The Gateaux differential is not unique to the point of evaluation. Rather, at each point $\mathbf{x}$ there is a Gateaux differential for each "vector" $\mathbf{h}$. If the domain is a vector space, then we have a Gateaux differential for each of the infinitely many directions at each point. In two or more dimensions, there are infinitely many Gateaux differentials at each point! Remember, however, that $\mathbf{h}$ may not even be a vector, but a second- or higher-order tensor. It does not matter, as the tensors themselves are in a Euclidean space that defines magnitude and direction as a result of the embedded inner product.
- The Gateaux differential is a one-dimensional calculation along a specified direction $\mathbf{h}$. Because it is one-dimensional, you can use ordinary one-dimensional calculus to compute it. Product rules and other constructs for differentiation in real domains apply.
Gradient or Frรฉchet Derivatives
A scalar-, vector- or tensor-valued function in a scalar, vector or tensor domain is said to be Fréchet differentiable if a subdomain exists in which we can find $\operatorname{grad} f(\mathbf{x})$ such that,

$$\left(\operatorname{grad} f(\mathbf{x})\right) \bullet \mathbf{h} = Df(\mathbf{x}, \mathbf{h})$$

This equation defines the gradient of a function in terms of its operating on the domain object to obtain the Gateaux differential.

The nature of $\operatorname{grad} f(\mathbf{x})$, as well as the kind of product, "$\bullet$", between the gradient and the differential, depend on the value type of the function and the type of argument involved. In the simple case of a scalar-valued function of a scalar argument, we are back to the regular derivative, as can be seen in the first row of the table, and the product involved is simply the multiplication of two scalars. The Gateaux differential here is the regular differential.

For a scalar-valued function, when the argument type is a vector, we get, for the Fréchet derivative, the familiar gradient operation, $\operatorname{grad} f(\mathbf{x})$. The Gateaux differential here is the directional derivative, $(\operatorname{grad} f(\mathbf{x}))\cdot d\mathbf{x}$, in the direction given by the differential, $d\mathbf{x}$. Notice two things here:

1. The function value is a scalar;
2. The function differential, $Df(\mathbf{x}, d\mathbf{x}) = (\operatorname{grad} f(\mathbf{x}))\cdot d\mathbf{x}$, is also a scalar.

The product between $\operatorname{grad} f(\mathbf{x})$ and the vector differential is a scalar product, so that the gradient of a scalar-valued function of a vector argument is itself a vector. In the table, the product of the Fréchet derivative in column 2 with the argument in column 3 is a scalar, hence the correct product here is the scalar product.
| No | $\operatorname{grad} f(\mathbf{x})$ | Argument | Product $\bullet$ | $f(\mathbf{x})$ | Gateaux–Fréchet Example |
|---|---|---|---|---|---|
| 1 | Scalar | Scalar | Multiplication | Scalar | $DF(x, dx) = \frac{dF(x)}{dx}\,dx$ |
| 2 | Vector | Vector | Scalar product | Scalar | $Df(\mathbf{x}, d\mathbf{x}) = (\operatorname{grad} f(\mathbf{x}))\cdot d\mathbf{x}$ |
| 3 | Tensor | Tensor | Scalar product | Scalar | $Df(\mathbf{S}, d\mathbf{S}) = \frac{\partial f(\mathbf{S})}{\partial\mathbf{S}}:d\mathbf{S}$ |
| 4 | Tensor | Vector | Contraction | Vector | $D\mathbf{f}(\mathbf{x}, d\mathbf{x}) = (\operatorname{grad} \mathbf{f}(\mathbf{x}))\,d\mathbf{x}$ |
| 5 | Tensor | Scalar | Scalar multiplication | Tensor | $D\mathbf{T}(x, dx) = \frac{d\mathbf{T}(x)}{dx}\,dx$ |
| 6 | Tensor (3rd order) | Vector | Contraction | Tensor | $D\mathbf{f}(\mathbf{x}, d\mathbf{x}) = (\operatorname{grad} \mathbf{f}(\mathbf{x}))\,d\mathbf{x}$ |
| 7 | Tensor (4th order) | Tensor | Contraction | Tensor | $D\mathbf{f}(\mathbf{S}, d\mathbf{S}) = (\operatorname{grad} \mathbf{f}(\mathbf{S}))\,d\mathbf{S}$ |
For a scalar-valued function of a tensor argument, row 3, the Gateaux differential is a scalar. Notice that it will necessarily be of the same type as the function value, as it is simply a change in value: a change in a scalar is a scalar, a change in a vector is a vector, and a change in a tensor is a tensor. The gradient here is a second-order tensor. The proper product to recover the scalar value from the product of these tensors is the tensor scalar product. On rows six and seven, the tensor order of the Fréchet derivative is higher than two, as stated.

We will rely on a set of worked examples to demonstrate these.
Scalar-Valued Functions of Tensors
Some of the most important functions you will differentiate are scalar-valued functions that take tensor arguments. Here are some examples:

1. Principal invariants of tensors and related functions. These could include invariants of other tensors, such as the deviatoric parts of the original tensor, etc.
2. Traces of powers of the tensor; traces of products and transposes, etc. These results are powerful because we are often able to convert scalar-valued functions to sums and products of traces and their powers.
3. Magnitudes of tensors.
(a) Direct Application of the Gateaux Differential
For any integer $n > 0$, the scalar-valued tensor function $\operatorname{tr}\mathbf{S}^n$ has the gradient $\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}\mathbf{S}^n = n\left(\mathbf{S}^{n-1}\right)^{\mathrm{T}}$. Show this for $n = 1$ and $n = 2$; that is, $\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}\mathbf{S} = \mathbf{I}$ and $\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}\mathbf{S}^2 = 2\mathbf{S}^{\mathrm{T}}$.

Compute the Gateaux differential directly here:

$$\begin{aligned}
Df(\mathbf{S}, d\mathbf{S}) &= \left.\frac{d}{d\alpha}f(\mathbf{S} + \alpha\,d\mathbf{S})\right|_{\alpha=0}\\
&= \left.\frac{d}{d\alpha}\operatorname{tr}(\mathbf{S} + \alpha\,d\mathbf{S})\right|_{\alpha=0} = \operatorname{tr}(d\mathbf{S})\\
&= \mathbf{I}:d\mathbf{S} = \frac{\partial f(\mathbf{S})}{\partial\mathbf{S}}:d\mathbf{S}
\end{aligned}$$

So we have found a function that multiplies the differential argument to give us the Gateaux differential. That is the Fréchet derivative, or gradient. As you can see here, it is the identity tensor:

$$\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}(\mathbf{S}) = \mathbf{I}.$$
When $n = 2$, the Gateaux differential in this case,

$$\begin{aligned}
Df(\mathbf{S}, d\mathbf{S}) &= \left.\frac{d}{d\alpha}f(\mathbf{S} + \alpha\,d\mathbf{S})\right|_{\alpha=0} = \left.\frac{d}{d\alpha}\operatorname{tr}(\mathbf{S} + \alpha\,d\mathbf{S})^2\right|_{\alpha=0}\\
&= \left.\frac{d}{d\alpha}\operatorname{tr}\left\{(\mathbf{S} + \alpha\,d\mathbf{S})(\mathbf{S} + \alpha\,d\mathbf{S})\right\}\right|_{\alpha=0}\\
&= \left.\operatorname{tr}\left[\frac{d}{d\alpha}(\mathbf{S} + \alpha\,d\mathbf{S})(\mathbf{S} + \alpha\,d\mathbf{S})\right]\right|_{\alpha=0}\\
&= \operatorname{tr}\left[d\mathbf{S}\,(\mathbf{S} + \alpha\,d\mathbf{S}) + (\mathbf{S} + \alpha\,d\mathbf{S})\,d\mathbf{S}\right]\Big|_{\alpha=0}\\
&= \operatorname{tr}\left[d\mathbf{S}\,\mathbf{S} + \mathbf{S}\,d\mathbf{S}\right] = 2\mathbf{S}^{\mathrm{T}}:d\mathbf{S}\\
&= \frac{\partial f(\mathbf{S})}{\partial\mathbf{S}}:d\mathbf{S}
\end{aligned}$$

So that,

$$\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}\mathbf{S}^2 = 2\mathbf{S}^{\mathrm{T}}$$
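Both gradients can be verified numerically by comparing a finite-difference Gateaux differential against the claimed Fréchet derivative; a sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
dS = rng.standard_normal((3, 3))
h = 1e-6

# Gateaux differential of tr(S^2) in the direction dS, by central differences
D = (np.trace((S + h * dS) @ (S + h * dS))
     - np.trace((S - h * dS) @ (S - h * dS))) / (2 * h)

# should match the gradient 2 S^T double-contracted with dS
assert np.isclose(D, np.sum(2 * S.T * dS), atol=1e-6)
```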
Using these two results and the linearity of the trace operation, we can proceed to find the derivative of the second principal invariant of the tensor $\mathbf{S}$:

$$\begin{aligned}
\frac{\partial}{\partial\mathbf{S}}I_2(\mathbf{S}) &= \frac{1}{2}\frac{\partial}{\partial\mathbf{S}}\left[\operatorname{tr}^2(\mathbf{S}) - \operatorname{tr}(\mathbf{S}^2)\right]\\
&= \frac{1}{2}\left[2\operatorname{tr}(\mathbf{S})\,\mathbf{I} - 2\mathbf{S}^{\mathrm{T}}\right]\\
&= \operatorname{tr}(\mathbf{S})\,\mathbf{I} - \mathbf{S}^{\mathrm{T}}
\end{aligned}$$

using the fact that differentiating $\operatorname{tr}^2(\mathbf{S})$ with respect to $\operatorname{tr}(\mathbf{S})$ is a scalar derivative of a scalar argument. To find the derivative of the third principal invariant of the tensor $\mathbf{S}$, we appeal to the Cayley–Hamilton theorem, which expresses the determinant in terms of traces only,

$$I_3(\mathbf{S}) = \frac{1}{6}\left[\operatorname{tr}^3(\mathbf{S}) - 3\operatorname{tr}(\mathbf{S})\operatorname{tr}(\mathbf{S}^2) + 2\operatorname{tr}(\mathbf{S}^3)\right]$$

$$\begin{aligned}
\frac{\partial}{\partial\mathbf{S}}I_3(\mathbf{S}) &= \frac{1}{6}\frac{\partial}{\partial\mathbf{S}}\left[\operatorname{tr}^3(\mathbf{S}) - 3\operatorname{tr}(\mathbf{S})\operatorname{tr}(\mathbf{S}^2) + 2\operatorname{tr}(\mathbf{S}^3)\right]\\
&= \frac{1}{6}\left[3\operatorname{tr}^2(\mathbf{S})\,\mathbf{I} - 3\operatorname{tr}(\mathbf{S}^2)\,\mathbf{I} - 3\operatorname{tr}(\mathbf{S})\cdot 2\mathbf{S}^{\mathrm{T}} + 2\times 3\left(\mathbf{S}^2\right)^{\mathrm{T}}\right]\\
&= I_2\,\mathbf{I} - I_1(\mathbf{S})\,\mathbf{S}^{\mathrm{T}} + \left(\mathbf{S}^2\right)^{\mathrm{T}}.
\end{aligned}$$
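Since the derivative of the determinant is also the cofactor, $\frac{\partial}{\partial\mathbf{S}}\det\mathbf{S} = \mathbf{S}^{\mathrm{c}} = \det(\mathbf{S})\,\mathbf{S}^{-\mathrm{T}}$, the trace formula above can be cross-checked numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))

I1 = np.trace(S)
I2 = 0.5 * (I1**2 - np.trace(S @ S))
grad = I2 * np.eye(3) - I1 * S.T + (S @ S).T   # I2 I - I1 S^T + (S^2)^T

# agrees with det(S) S^{-T}, the cofactor of S
assert np.allclose(grad, np.linalg.det(S) * np.linalg.inv(S).T)
```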
Given that $\mathbf{B}$ is a constant tensor, show that $\dfrac{\partial}{\partial\mathbf{A}}\operatorname{tr}(\mathbf{A}\mathbf{B}) = \mathbf{B}^{\mathrm{T}}$.

For this scalar-valued function, the Gateaux differential,

$$\begin{aligned}
Df(\mathbf{A}, d\mathbf{A}) &= \left.\frac{d}{d\alpha}\operatorname{tr}(\mathbf{A}\mathbf{B} + \alpha\,d\mathbf{A}\,\mathbf{B})\right|_{\alpha=0} = \frac{\partial f(\mathbf{A})}{\partial\mathbf{A}}:d\mathbf{A}\\
&= \left.\frac{d}{d\alpha}\operatorname{tr}(\mathbf{A}\mathbf{B})\right|_{\alpha=0} + \left.\frac{d}{d\alpha}\alpha\operatorname{tr}(d\mathbf{A}\,\mathbf{B})\right|_{\alpha=0}\\
&= \operatorname{tr}(d\mathbf{A}\,\mathbf{B}) = \mathbf{B}^{\mathrm{T}}:d\mathbf{A}
\end{aligned}$$

giving us,

$$\frac{\partial f(\mathbf{A})}{\partial\mathbf{A}} = \mathbf{B}^{\mathrm{T}}.$$
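A quick finite-difference check of $\frac{\partial}{\partial\mathbf{A}}\operatorname{tr}(\mathbf{A}\mathbf{B}) = \mathbf{B}^{\mathrm{T}}$, sketched with random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, dA = (rng.standard_normal((3, 3)) for _ in range(3))
h = 1e-6

# Gateaux differential of tr(AB) in the direction dA
D = (np.trace((A + h * dA) @ B) - np.trace((A - h * dA) @ B)) / (2 * h)

# should equal B^T double-contracted with dA
assert np.isclose(D, np.sum(B.T * dA), atol=1e-6)
```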
Finally, we look at the derivative of the magnitude. We found this by a simpler means earlier. It is instructive to look at it more rigorously, using the Gateaux differential. Given a scalar variable $\alpha$, the derivative of a scalar function $f(\mathbf{S})$ of a tensor satisfies

$$\frac{\partial f(\mathbf{S})}{\partial\mathbf{S}}:\mathbf{T} = \lim_{\alpha\to 0}\frac{d}{d\alpha}f(\mathbf{S} + \alpha\mathbf{T})$$

for any arbitrary tensor $\mathbf{T}$. In the case of $f(\mathbf{S}) = |\mathbf{S}|$,

$$\frac{\partial|\mathbf{S}|}{\partial\mathbf{S}}:\mathbf{T} = \lim_{\alpha\to 0}\frac{d}{d\alpha}\left|\mathbf{S} + \alpha\mathbf{T}\right|$$

where

$$\left|\mathbf{S} + \alpha\mathbf{T}\right| = \sqrt{\operatorname{tr}\left[(\mathbf{S} + \alpha\mathbf{T})(\mathbf{S} + \alpha\mathbf{T})^{\mathrm{T}}\right]} = \sqrt{\operatorname{tr}\left(\mathbf{S}\mathbf{S}^{\mathrm{T}} + \alpha\mathbf{T}\mathbf{S}^{\mathrm{T}} + \alpha\mathbf{S}\mathbf{T}^{\mathrm{T}} + \alpha^2\mathbf{T}\mathbf{T}^{\mathrm{T}}\right)}$$

Note that everything under the root sign here is scalar and that the trace operation is linear. Consequently, we can write,

$$\lim_{\alpha\to 0}\frac{d}{d\alpha}\left|\mathbf{S} + \alpha\mathbf{T}\right| = \lim_{\alpha\to 0}\frac{\operatorname{tr}\left(\mathbf{T}\mathbf{S}^{\mathrm{T}}\right) + \operatorname{tr}\left(\mathbf{S}\mathbf{T}^{\mathrm{T}}\right) + 2\alpha\operatorname{tr}\left(\mathbf{T}\mathbf{T}^{\mathrm{T}}\right)}{2\sqrt{\operatorname{tr}\left(\mathbf{S}\mathbf{S}^{\mathrm{T}} + \alpha\mathbf{T}\mathbf{S}^{\mathrm{T}} + \alpha\mathbf{S}\mathbf{T}^{\mathrm{T}} + \alpha^2\mathbf{T}\mathbf{T}^{\mathrm{T}}\right)}} = \frac{2\,\mathbf{S}:\mathbf{T}}{2\sqrt{\mathbf{S}:\mathbf{S}}} = \frac{\mathbf{S}}{|\mathbf{S}|}:\mathbf{T}$$

So that,

$$\frac{\partial|\mathbf{S}|}{\partial\mathbf{S}}:\mathbf{T} = \frac{\mathbf{S}}{|\mathbf{S}|}:\mathbf{T}
\qquad\text{or}\qquad
\frac{\partial|\mathbf{S}|}{\partial\mathbf{S}} = \frac{\mathbf{S}}{|\mathbf{S}|}$$

as required, since $\mathbf{T}$ is arbitrary.
Differentiation of Fields
Euclidean Point Space in Concrete Terms
In continuum mechanics, we are often concerned with quantities that are scalars (density, mass, temperature, etc.), vectors (e.g., displacements, velocities, etc.) or second-order tensors (e.g., stresses, strains, etc.). There are other quantities that are derivatives, in space, of these basic quantities. Some of these are temperature gradients, velocity gradients and the deformation gradient (which, some have argued, is not even a gradient in the strict sense).

One other fact is that these quantities of interest are often associated with points in space. A set of examples will clarify this before we provide an abstract definition:

1. A Velocity Map
The arrows here show the velocity at each point. The color gives other information about the distribution.
2. Weather Map
The meteorologist has many ways of describing the weather and predicting it. There are radar maps that give the pressure distribution, wind velocities, temperature, precipitation, etc.
3. Simulation Maps
The diagram below shows field functions: displacement, stress and factor of safety, displayed in the simulation of a vertically loaded curved bar that is supported at one end.

These and many other maps in our daily experience are concrete examples of point functions or field functions. The domains here are a flow field $\mathcal{F}$, a geographical region $\mathcal{G}$ and the body of a curved bar $\mathcal{B}$. Each represents a subset of the Euclidean point space in concrete terms. More precisely, $\mathcal{F}, \mathcal{G}, \mathcal{B} \subset \mathcal{E}$, and

$$\mathbf{v}:\mathcal{F}\to\mathbb{V},\qquad p:\mathcal{G}\to\mathbb{R},\qquad \boldsymbol{\sigma}:\mathcal{B}\to\mathbb{T}$$
expressing the fact that the velocity $\mathbf{v}(\mathbf{x})$ is defined on the subset $\mathcal{F}$ of the Euclidean point space, with a vector object created at each point. Those objects themselves reside in the vector space, which we already understand how to deal with, in that the structure and rules governing them are well defined. The same goes for the pressure $p(\mathbf{x})$ and the stress $\boldsymbol{\sigma}(\mathbf{x})$, as they have all been defined on their respective subsets.

When we talk about temperature, density or strain functions as fields, these are different kinds of objects that share the common feature of being defined in the Euclidean point space. Furthermore, there are other functions that are derived from these base functions. Some examples of such derived fields are the temperature gradient, the velocity gradient and the deformation gradient. In this section, it will become clear that these derived point functions are one rank higher than the ones they are derived from. The temperature gradient, for example, is a vector quantity derived from the scalar temperature, while the velocity gradient is a second-order tensor derived as the Fréchet derivative of the vector velocity.
Components of Gradient in ONB Systems
We now express the gradients measuring changes in the fields, beginning with

$$Df(\mathbf{x}, d\mathbf{x}) = \left(\operatorname{grad} f(\mathbf{x})\right)\cdot d\mathbf{x}$$

with a scalar-valued function whose vector arguments are now the position vectors in the Euclidean point space. Given a parameter $t$, for the point $\{x_1, x_2, x_3\}$, consider the neighboring point at $\{x_1 + th_1, x_2 + th_2, x_3 + th_3\}$:

$$Df(\mathbf{x}, d\mathbf{x}) = \lim_{\alpha\to 0}\frac{f(\mathbf{x} + \alpha\,d\mathbf{x}) - f(\mathbf{x})}{\alpha}
= \left(\frac{\partial f(\mathbf{x})}{\partial x_1}\mathbf{e}_1 + \frac{\partial f(\mathbf{x})}{\partial x_2}\mathbf{e}_2 + \frac{\partial f(\mathbf{x})}{\partial x_3}\mathbf{e}_3\right)\cdot d\mathbf{x}$$

So that, for a scalar-valued function in an orthonormal system,

$$\operatorname{grad} f(\mathbf{x}) = \frac{\partial f(\mathbf{x})}{\partial x_1}\mathbf{e}_1 + \frac{\partial f(\mathbf{x})}{\partial x_2}\mathbf{e}_2 + \frac{\partial f(\mathbf{x})}{\partial x_3}\mathbf{e}_3.$$

Without further ado, we shall generalize this expression and thus provide the results for more general cases. We use the comma notation to denote partial derivatives, applied as a postfix, so that we can write,

$$\operatorname{grad} f(\mathbf{x}) = \frac{\partial f(\mathbf{x})}{\partial x_i}\mathbf{e}_i = f,_i\,\mathbf{e}_i$$
Addition, product and other rules apply to the gradient in the comma notation as follows:

$$\operatorname{grad}\left(f(\mathbf{x})g(\mathbf{x})\right) = (fg),_i\,\mathbf{e}_i = \left(f,_i\,g + f\,g,_i\right)\mathbf{e}_i = g\operatorname{grad} f(\mathbf{x}) + f\operatorname{grad} g(\mathbf{x})$$

If we define the gradient operator as a postfix operator,

$$\operatorname{grad}(\circ) = (\circ),_\alpha\otimes\,\mathbf{e}_\alpha$$

where (as long as the coordinate system of reference is ONB) the comma signifies the partial derivative with respect to the $\alpha$ coordinate. The tensor product applies in all cases except for the scalar function, where there is no existing basis vector to take the product with; it therefore stands for the ordinary product in that case.
Gradient of a Vector Function.
Consider a vector function, $\mathbf{v}(\mathbf{x})$, defined on a Euclidean space spanned by an ONB. This can be written in terms of its components,

$$\mathbf{v}(\mathbf{x}) = v_i(\mathbf{x})\,\mathbf{e}_i$$

Then,

$$\begin{aligned}
\operatorname{grad}\mathbf{v}(\mathbf{x}) &= \left(v_i(\mathbf{x})\,\mathbf{e}_i\right),_\alpha\otimes\,\mathbf{e}_\alpha = v_i,_\alpha(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_\alpha\\
&= \begin{bmatrix}\mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3\end{bmatrix}
\begin{pmatrix}
\dfrac{\partial v_1}{\partial x_1} & \dfrac{\partial v_1}{\partial x_2} & \dfrac{\partial v_1}{\partial x_3}\\
\dfrac{\partial v_2}{\partial x_1} & \dfrac{\partial v_2}{\partial x_2} & \dfrac{\partial v_2}{\partial x_3}\\
\dfrac{\partial v_3}{\partial x_1} & \dfrac{\partial v_3}{\partial x_2} & \dfrac{\partial v_3}{\partial x_3}
\end{pmatrix}
\otimes
\begin{pmatrix}\mathbf{e}_1\\ \mathbf{e}_2\\ \mathbf{e}_3\end{pmatrix}
\end{aligned}$$
For a tensor field $\mathbf{T}(\mathbf{x})$, the gradient can be obtained in the same way:

$$\operatorname{grad}\mathbf{T}(\mathbf{x}) = \left(T_{ij}(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j\right),_\alpha\otimes\,\mathbf{e}_\alpha = T_{ij},_\alpha(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_\alpha$$

which is a third-order tensor containing 27 terms. Each set of nine terms can be written out as we have done for the tensor gradient of a vector-valued field above.

Now, a temperature field is a scalar field. The foregoing shows that the gradient of such a field is a vector field in its own right. A velocity field is a vector field; the gradient of such a field is a second-order tensor. This is a well-known field called the velocity gradient.
The Divergence
Gradients of objects larger than scalars are at least second-order tensors. Such derived fields can be contracted by taking the trace of the last two bases (when there are more than two). Such a contraction is called the divergence, not of the derived field, but of the original field.

Temperature and other scalar fields cannot have a divergence because their gradients are vectors and therefore cannot be contracted (you cannot take the trace). The gradient of a vector field, such as the velocity gradient, can only be contracted in one way (it has only one possible trace). Gradients of larger objects such as tensor fields can be contracted (traced) in more than one way. In that case, the disambiguation rule for contraction to obtain a divergence is to contract with the basis that came from the derivative. For example, $\operatorname{grad}\mathbf{v}(\mathbf{x}) = v_i,_\alpha(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_\alpha$. The divergence of the same field is

$$\begin{aligned}
\operatorname{div}\mathbf{v}(\mathbf{x}) = \operatorname{tr}\operatorname{grad}\mathbf{v}(\mathbf{x})
&= v_i,_\alpha(\mathbf{x})\,\mathbf{e}_i\cdot\mathbf{e}_\alpha = v_i,_\alpha(\mathbf{x})\,\delta_{i\alpha}\\
&= v_i,_i(\mathbf{x}) = \frac{\partial v_i(\mathbf{x})}{\partial x_i}
= \frac{\partial v_1(\mathbf{x})}{\partial x_1} + \frac{\partial v_2(\mathbf{x})}{\partial x_2} + \frac{\partial v_3(\mathbf{x})}{\partial x_3}
\end{aligned}$$
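In components this is routine to compute. A short sympy sketch, with a hypothetical vector field chosen purely for illustration:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
v = sp.Matrix([x1**2 * x2, x2 * x3, x1 * sp.sin(x3)])  # hypothetical field

# grad v = v_i,j e_i (x) e_j ; its trace is the divergence v_i,i
grad_v = sp.Matrix(3, 3, lambda i, j: sp.diff(v[i], X[j]))
div_v = grad_v.trace()

assert sp.simplify(div_v - (2*x1*x2 + x3 + x1*sp.cos(x3))) == 0
```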
Similarly, for the tensor $\mathbf{T}(\mathbf{x})$, the divergence,

$$\begin{aligned}
\operatorname{div}\mathbf{T}(\mathbf{x}) = \operatorname{tr}\operatorname{grad}\mathbf{T}(\mathbf{x})
&= T_{ij},_\alpha(\mathbf{x})\,\mathbf{e}_i\left(\mathbf{e}_j\cdot\mathbf{e}_\alpha\right)\\
&= T_{ij},_\alpha(\mathbf{x})\,\mathbf{e}_i\,\delta_{j\alpha} = T_{ij},_j(\mathbf{x})\,\mathbf{e}_i\\
&= \left(\frac{\partial T_{11}}{\partial x_1} + \frac{\partial T_{12}}{\partial x_2} + \frac{\partial T_{13}}{\partial x_3}\right)\mathbf{e}_1
+ \left(\frac{\partial T_{21}}{\partial x_1} + \frac{\partial T_{22}}{\partial x_2} + \frac{\partial T_{23}}{\partial x_3}\right)\mathbf{e}_2
+ \left(\frac{\partial T_{31}}{\partial x_1} + \frac{\partial T_{32}}{\partial x_2} + \frac{\partial T_{33}}{\partial x_3}\right)\mathbf{e}_3
\end{aligned}$$
The Laplacian or "Grad Squared".
If we take the gradient of a scalar twice, we obtain a second-order tensor. The trace of this field creates the well-known Laplacian operator, wrongly called "grad squared". The "square" of the gradient of a scalar is the second-order tensor from which the trace is taken. The Laplacian is a scalar operator that appears frequently in the governing equations of many systems. Let us find the expression for it for a typical scalar field.

You will recall that, for a scalar field $f(\mathbf{x})$,

$$\operatorname{grad} f(\mathbf{x}) = \frac{\partial f(\mathbf{x})}{\partial x_i}\mathbf{e}_i = f,_i\,\mathbf{e}_i.$$

Taking the gradient of this expression a second time, we have,

$$\operatorname{grad}\operatorname{grad} f(\mathbf{x}) = \left(f,_i\,\mathbf{e}_i\right),_j\otimes\,\mathbf{e}_j = f,_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j$$

The trace of this tensor can be found by contracting the dyad bases:

$$\begin{aligned}
\Delta f(\mathbf{x}) = \operatorname{tr}\operatorname{grad}\operatorname{grad} f(\mathbf{x})
&= f,_{ij}\,\mathbf{e}_i\cdot\mathbf{e}_j = f,_{ij}\,\delta_{ij} = f,_{ii}\\
&= \frac{\partial^2 f(\mathbf{x})}{\partial x_1^2} + \frac{\partial^2 f(\mathbf{x})}{\partial x_2^2} + \frac{\partial^2 f(\mathbf{x})}{\partial x_3^2}.
\end{aligned}$$
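As a check, the trace of the double gradient agrees with the sum of the unmixed second partials. The scalar field below is an arbitrary illustration (its exponential part is harmonic, so only the $x_3^2$ term survives):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
f = sp.exp(x1) * sp.sin(x2) + x3**2   # arbitrary scalar field

# grad grad f = f,ij e_i (x) e_j ; the Laplacian is its trace f,ii
hessian = sp.Matrix(3, 3, lambda i, j: sp.diff(f, X[i], X[j]))
laplacian = hessian.trace()

assert sp.simplify(laplacian - 2) == 0
```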
The trace of the double gradient of any field can be taken in this way. The observation one can make here is that while the gradient raises the order of its operand by one and the divergence decreases it by one, the Laplacian operation makes no order change. As an example, $\operatorname{grad}\mathbf{v}(\mathbf{x}) = v_i,_j(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j$, so that,

$$\operatorname{grad}\operatorname{grad}\mathbf{v}(\mathbf{x}) = v_i,_{jk}(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k$$

and the Laplacian of $\mathbf{v}(\mathbf{x})$ is,

$$\begin{aligned}
\Delta\mathbf{v}(\mathbf{x}) = \operatorname{tr}\operatorname{grad}\operatorname{grad}\mathbf{v}(\mathbf{x})
&= v_i,_{jk}(\mathbf{x})\,\mathbf{e}_i\,\delta_{jk} = v_i,_{jj}(\mathbf{x})\,\mathbf{e}_i\\
&= \left(\frac{\partial^2 v_1}{\partial x_1^2} + \frac{\partial^2 v_1}{\partial x_2^2} + \frac{\partial^2 v_1}{\partial x_3^2}\right)\mathbf{e}_1
+ \left(\frac{\partial^2 v_2}{\partial x_1^2} + \frac{\partial^2 v_2}{\partial x_2^2} + \frac{\partial^2 v_2}{\partial x_3^2}\right)\mathbf{e}_2
+ \left(\frac{\partial^2 v_3}{\partial x_1^2} + \frac{\partial^2 v_3}{\partial x_2^2} + \frac{\partial^2 v_3}{\partial x_3^2}\right)\mathbf{e}_3
\end{aligned}$$

We can continue and find the trace of $\operatorname{grad}\operatorname{grad}(\circ)$ of a tensor object. The process is similar, and the result of the trace, giving the Laplacian, will also be a tensor of the same order as the operand.
Curl of Vector and Tensor Fields.
The third-order alternating tensor, $\mathbb{E} \equiv e_{ijk}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k$, was introduced in the last chapter. Compositions of this tensor with vectors and other tensors yield some useful constructs in continuum mechanics. We have already seen its action resulting in the axial vector for a skew tensor. We are about to see that the well-known curl of vectors and tensors can be neatly defined by the divergence of products with this tensor.

1. Curl of a Vector. Given any vector $\mathbf{u}(\mathbf{x}) = u_\alpha(\mathbf{x})\,\mathbf{e}_\alpha$, the composition $\mathbb{E}\mathbf{u}$ is a second-order tensor; it is skew and is the transpose of the vector cross of $\mathbf{u}$:

$$\begin{aligned}
\mathbb{E}\mathbf{u} &= e_{ijk}\,u_\alpha(\mathbf{x})\left(\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\right)\mathbf{e}_\alpha\\
&= e_{ijk}\,u_\alpha(\mathbf{x})\left(\mathbf{e}_i\otimes\mathbf{e}_j\right)\mathbf{e}_k\cdot\mathbf{e}_\alpha\\
&= e_{ijk}\,u_\alpha(\mathbf{x})\left(\mathbf{e}_i\otimes\mathbf{e}_j\right)\delta_{k\alpha} = e_{ijk}\,u_k(\mathbf{x})\left(\mathbf{e}_i\otimes\mathbf{e}_j\right)
\end{aligned}$$

The gradient of $\mathbb{E}\mathbf{u}$ is,

$$\operatorname{grad}(\mathbb{E}\mathbf{u}) = e_{ijk}\,u_k,_\alpha(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_\alpha$$

and the trace gives us the divergence,

$$\begin{aligned}
\operatorname{curl}\mathbf{u} \equiv \operatorname{div}(\mathbb{E}\mathbf{u}) = \operatorname{tr}\operatorname{grad}(\mathbb{E}\mathbf{u})
&= e_{ijk}\,u_k,_\alpha\,\mathbf{e}_i\left(\mathbf{e}_j\cdot\mathbf{e}_\alpha\right)\\
&= e_{ijk}\,u_k,_j\,\mathbf{e}_i
= \begin{vmatrix}
\mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3\\
\dfrac{\partial}{\partial x_1} & \dfrac{\partial}{\partial x_2} & \dfrac{\partial}{\partial x_3}\\
u_1 & u_2 & u_3
\end{vmatrix}
\end{aligned}$$

which is the curl of the vector field $\mathbf{u}(\mathbf{x})$. The curl of a vector has zero divergence. Indeed,

$$\operatorname{grad}\operatorname{curl}\mathbf{u} = e_{ijk}\,u_k,_{jl}\,\mathbf{e}_i\otimes\mathbf{e}_l$$

and the trace of this expression,

$$\operatorname{div}\operatorname{curl}\mathbf{u} = \operatorname{tr}\operatorname{grad}\operatorname{curl}\mathbf{u} = e_{ijk}\,u_k,_{jl}\,\mathbf{e}_i\cdot\mathbf{e}_l = e_{ijk}\,u_k,_{ji} = 0$$

vanishes because $e_{ijk}$ is skew in $i$ and $j$ while $u_k,_{ji}$ is symmetric in them.
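The identity $\operatorname{div}\operatorname{curl}\mathbf{u} = 0$ can be confirmed for any smooth field; a sketch with a hypothetical field:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
u = sp.Matrix([x2 * x3, sp.sin(x1) * x3, x1**2 * x2])  # hypothetical field

# curl u = e_ijk u_k,j e_i, written out component by component
curl = sp.Matrix([sp.diff(u[2], x2) - sp.diff(u[1], x3),
                  sp.diff(u[0], x3) - sp.diff(u[2], x1),
                  sp.diff(u[1], x1) - sp.diff(u[0], x2)])

div_curl = sum(sp.diff(curl[i], X[i]) for i in range(3))
assert sp.simplify(div_curl) == 0
```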
2. Curl of a Tensor Field. Given a tensor $\mathbf{T}(\mathbf{x}) = T_{ij}(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j$, the composition,

$$\begin{aligned}
\mathbf{T}\mathbb{E} &= \left(T_{\alpha\beta}(\mathbf{x})\,\mathbf{e}_\alpha\otimes\mathbf{e}_\beta\right)\left(e_{ijk}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\right)\\
&= T_{\alpha\beta}(\mathbf{x})\,e_{ijk}\,\delta_{\beta i}\,\mathbf{e}_\alpha\otimes\mathbf{e}_j\otimes\mathbf{e}_k\\
&= T_{\alpha i}\,e_{ijk}\,\mathbf{e}_\alpha\otimes\mathbf{e}_j\otimes\mathbf{e}_k
\end{aligned}$$

This third-order tensor, $T_{\alpha i}\,e_{ijk}\,\mathbf{e}_\alpha\otimes\mathbf{e}_j\otimes\mathbf{e}_k$, is skew in its indices $j$ and $k$. The transpose of the divergence of this tensor is

$$\begin{aligned}
\operatorname{curl}\mathbf{T}(\mathbf{x}) &= \left(\operatorname{div}(\mathbf{T}\mathbb{E})\right)^{\mathrm{T}} = e_{ijk}\,T_{\alpha k},_j\,\mathbf{e}_i\otimes\mathbf{e}_\alpha\\
&= \begin{bmatrix}\mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3\end{bmatrix}
\begin{pmatrix}
\dfrac{\partial T_{13}}{\partial x_2} - \dfrac{\partial T_{12}}{\partial x_3} & \dfrac{\partial T_{23}}{\partial x_2} - \dfrac{\partial T_{22}}{\partial x_3} & \dfrac{\partial T_{33}}{\partial x_2} - \dfrac{\partial T_{32}}{\partial x_3}\\
\dfrac{\partial T_{11}}{\partial x_3} - \dfrac{\partial T_{13}}{\partial x_1} & \dfrac{\partial T_{21}}{\partial x_3} - \dfrac{\partial T_{23}}{\partial x_1} & \dfrac{\partial T_{31}}{\partial x_3} - \dfrac{\partial T_{33}}{\partial x_1}\\
\dfrac{\partial T_{12}}{\partial x_1} - \dfrac{\partial T_{11}}{\partial x_2} & \dfrac{\partial T_{22}}{\partial x_1} - \dfrac{\partial T_{21}}{\partial x_2} & \dfrac{\partial T_{32}}{\partial x_1} - \dfrac{\partial T_{31}}{\partial x_2}
\end{pmatrix}
\otimes
\begin{pmatrix}\mathbf{e}_1\\ \mathbf{e}_2\\ \mathbf{e}_3\end{pmatrix}
\end{aligned}$$

The curl of a symmetric tensor is traceless:

$$\operatorname{tr}\operatorname{curl}\mathbf{T} = e_{ijk}\,T_{\alpha k},_j\,\mathbf{e}_i\cdot\mathbf{e}_\alpha = e_{ijk}\,T_{ik},_j$$

which vanishes when $\mathbf{T}$ is symmetric, since $e_{ijk}$ is skew in $i$ and $k$.
Show that for a constant vector $\mathbf{c}$, $\operatorname{curl}(\mathbf{T}^{\mathrm{T}}\mathbf{c}) = (\operatorname{curl}\mathbf{T})\,\mathbf{c}$.

$$\mathbf{T}(\mathbf{x}) = T_{ij}(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j,
\qquad
\mathbf{T}^{\mathrm{T}}\mathbf{c} = \left(T_{ij}\,\mathbf{e}_j\otimes\mathbf{e}_i\right)c_\alpha\,\mathbf{e}_\alpha = T_{ij}\,c_i\,\mathbf{e}_j = T_{\alpha k}\,c_\alpha\,\mathbf{e}_k$$

The curl of this vector is,

$$e_{ijk}\left(T_{\alpha k}\,c_\alpha\right),_j\,\mathbf{e}_i = e_{ijk}\,T_{\alpha k},_j\,c_\alpha\,\mathbf{e}_i$$

since $\mathbf{c}$ is a constant vector. For the tensor $\mathbf{T} = T_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j$ itself, we have that $\operatorname{curl}\mathbf{T} = e_{ijk}\,T_{\alpha k},_j\,\mathbf{e}_i\otimes\mathbf{e}_\alpha$. Hence,

$$(\operatorname{curl}\mathbf{T})\,\mathbf{c} = \left(e_{ijk}\,T_{\alpha k},_j\,\mathbf{e}_i\otimes\mathbf{e}_\alpha\right)c_\beta\,\mathbf{e}_\beta = e_{ijk}\,T_{\alpha k},_j\,c_\alpha\,\mathbf{e}_i$$

So that $\operatorname{curl}(\mathbf{T}^{\mathrm{T}}\mathbf{c}) = (\operatorname{curl}\mathbf{T})\,\mathbf{c}$, as required. Note that this relationship is often used as the definition of the curl of a second-order tensor.
Integral Field Theorems
Integral field theorems are needed to derive the governing equations of continua. The most relevant results, known by several names in the literature, are often associated with the names of Stokes, Green and Gauss. These theorems are related to one another in several ways. Practical uses are emphasized here rather than the establishment of proofs, which are available in mathematical texts. We begin with Stokes' theorem:
Stokesโ Theorem
Consider a surface with a boundary curve as shown in the picture. The outward-drawn normal, $\mathbf{n}$, is oriented in such a way that a traversal of the boundary curve in the direction shown keeps the surface on the left-hand side. Such a surface is said to be positively oriented. Stokes' theorem relates the integral of the curl of a vector or tensor field over this surface $\mathcal{S}$ in the Euclidean point space to the line integral of the field over its boundary curve $\Gamma$. For a vector field $\mathbf{v}(\mathbf{x}, t)$, Stokes' theorem states that, given a positively oriented surface $\mathcal{S}$, bounded as shown by a path $\Gamma$, the line integral,

$$\oint_\Gamma \mathbf{v}(\mathbf{x}, t)\cdot d\mathbf{x} = \iint_{\mathcal{S}} (\operatorname{curl}\mathbf{v})\cdot d\mathbf{s}.$$

That is, the line integral taken over the path shown is equal to the surface integral of the curl of the same field over the entire surface. For tensor fields, the theorem states:

$$\oint_\Gamma \mathbf{T}(\mathbf{x}, t)\,d\mathbf{x} = \iint_{\mathcal{S}} (\operatorname{curl}\mathbf{T})^{\mathrm{T}}\,d\mathbf{s} = \iint_{\mathcal{S}} \operatorname{div}(\mathbf{T}\mathbb{E})\,d\mathbf{s}$$

where $\mathbb{E} = e_{ijk}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k$. Notice that the transpose of the curl is taken in the case of the tensor field. The curl of a vector field is a vector and admits no transpose. It is instructive to demonstrate the meaning of these integrals by a simple computation, aided by Mathematica.
Computation Issues
The line integrals in equations ___ and ___ are easily computed by taking a parametrized position vector, bound to the integration path, $\mathbf{x} = \mathbf{r}(\xi)$, along the path. We therefore have,

$$\oint_\Gamma \mathbf{v}(\mathbf{x}, t)\cdot d\mathbf{x} = \int_{\xi_B}^{\xi_E} \mathbf{v}(\mathbf{r}(\xi), t)\cdot\frac{d\mathbf{r}}{d\xi}\,d\xi$$

as the path is traversed beginning at $\xi_B$ and ending at $\xi_E$. Here $\frac{d\mathbf{r}}{d\xi}$ is the vector pointing along the tangent to the path.
On the RHS we are dealing with a surface integral. For a surface parametrized by the scalars $\xi_1,\xi_2$, differentiating the position vector $\mathbf{r}(\xi_1,\xi_2)$ that spans the surface with respect to each parameter again yields the surface base vectors. These two vectors are not guaranteed to be orthogonal to each other, but their cross product,

$$\frac{\partial \mathbf{r}(\xi_1,\xi_2)}{\partial \xi_1}\times\frac{\partial \mathbf{r}(\xi_1,\xi_2)}{\partial \xi_2},$$

gives a vector in the direction normal to the surface, since the two vectors defined by the derivatives lie on the surface. The orientation of the surface must be checked to confirm that a traversal of the path puts the right-handed system in such a way that the surface normal is outward; if not, the order of the cross product is reversed.
The surface integral then becomes,

$$\iint_{\mathcal{S}}(\operatorname{curl}\mathbf{v})\cdot d\mathbf{s} = \int_{\xi_{2B}}^{\xi_{2E}}\!\int_{\xi_{1B}}^{\xi_{1E}} (\operatorname{curl}\mathbf{v})\cdot\left(\frac{\partial \mathbf{r}(\xi_1,\xi_2)}{\partial \xi_1}\times\frac{\partial \mathbf{r}(\xi_1,\xi_2)}{\partial \xi_2}\right) d\xi_1\, d\xi_2.$$
Instead of offering a proof of this important result, we proceed to a simple demonstration of it, computing both sides of the above expressions for the vector case. Consider a closed counter-clockwise circular path, centered on the $x_3$-axis and perpendicular to it on the plane $x_3 = 1$, so that a typical position vector to it is inclined at an angle $\theta$ to the vertical, as shown in figure ___. We parametrize the path with the familiar spherical coordinates: $x = r\sin\theta\cos\phi$, $y = r\sin\theta\sin\phi$ and $z = r\cos\theta$. On the circle, only $\phi$ varies. If we take $r = \sqrt2,\ \theta = \pi/4$, these parameters describe a unit circle at unit distance along the $x_3$-axis.
The line integral of a vector field, according to Stokes, equals the surface integral over any surface with this curve as its edge.
The line integral can be computed as stated earlier;

$$\mathbf{r}(\phi) = \cos\phi\,\mathbf{e}_1 + \sin\phi\,\mathbf{e}_2 + \mathbf{e}_3,$$
which is the position vector for all points on this path, with $\phi_B = 0,\ \phi_E = 2\pi$. Differentiating this,

$$d\mathbf{r}(\phi) = \frac{d\mathbf{r}(\phi)}{d\phi}\,d\phi = (-\sin\phi\,\mathbf{e}_1 + \cos\phi\,\mathbf{e}_2)\,d\phi.$$

For any prescribed vector field, $\mathbf{v}(\mathbf{x},t) = \mathbf{v}(\mathbf{r}(\phi),t)$. A Mathematica® implementation of this integral with the function

$$\mathbf{v}(\mathbf{x}) = \sin x\,\mathbf{e}_1 + \left(\cos y + x + \frac{x^2}{2} + \frac{x^3}{3}\right)\mathbf{e}_2 + z\cos x\cos y\,\mathbf{e}_3$$

is given here:
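The Mathematica code itself did not survive this extraction; as a stand-in, here is a minimal numeric sketch in Python (our construction, not the book's code) that evaluates the same line integral from the parametrization above. Because the integrand is smooth and periodic, a plain Riemann sum over the closed path converges very quickly:

```python
import math

def v(x, y, z):
    # the test field from the text
    return (math.sin(x),
            math.cos(y) + x + x**2/2 + x**3/3,
            z * math.cos(x) * math.cos(y))

def line_integral(n=2000):
    """Integrate v . dr around the unit circle on the plane x3 = 1."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        x, y, z = math.cos(t), math.sin(t), 1.0       # r(phi) on the path
        dx, dy, dz = -math.sin(t), math.cos(t), 0.0   # dr/dphi
        vx, vy, vz = v(x, y, z)
        total += (vx*dx + vy*dy + vz*dz) * (2 * math.pi / n)
    return total

print(line_integral())   # approximately 5*pi/4 = 3.9269908...
```

The value agrees with the $5\pi/4$ quoted below for the surface integrals.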
There are at least three easy ways to realize such a surface:
1. A flat circular region on the same plane as the path. This is the region $x_1^2 + x_2^2 \le 1$ on the plane $x_3 = 1$. It has the parametric equations,
$$x_1 = \rho\cos\phi,\qquad x_2 = \rho\sin\phi,\qquad x_3 = 1.$$
By inspection, we can see that the normal to this plane surface is along $\mathbf{e}_3$. However, this can be computed directly by using equation 1.__, giving us two vectors on the coordinate lines of the plane. In the plane region,
$$\mathbf{r}(\rho,\phi) = \rho\cos\phi\,\mathbf{e}_1 + \rho\sin\phi\,\mathbf{e}_2 + \mathbf{e}_3,$$
with $\rho \le 1,\ 0\le\phi\le2\pi$ covering the region. The normal to the surface is
$$\frac{\partial \mathbf{r}(\rho,\phi)}{\partial \rho}\times\frac{\partial \mathbf{r}(\rho,\phi)}{\partial \phi},$$
since both derivatives represent vectors lying on the surface. We can now integrate the curl of any vector field with the domain set to this plane surface.
2. The cone surface $x_3^2 = x_1^2 + x_2^2$, given that $x_1^2 + x_2^2 \le 1$. The radius vector constrained to this region is,
$$\mathbf{r}(\rho,\phi) = \rho\sin\frac{\pi}{4}\cos\phi\,\mathbf{e}_1 + \rho\sin\frac{\pi}{4}\sin\phi\,\mathbf{e}_2 + \rho\cos\frac{\pi}{4}\,\mathbf{e}_3,$$
contained within the surface for $0\le\rho\le\sqrt2,\ 0\le\phi\le2\pi$. Here the two surface vectors are $\partial\mathbf{r}(\rho,\phi)/\partial\rho$ and $\partial\mathbf{r}(\rho,\phi)/\partial\phi$, the cross product of which gives the normal to the surface.
3. Lastly, we can take the sphere centered at the origin with radius $\sqrt2$, as shown in Figure ___. The radius vector constrained to the surface is,
$$\mathbf{r}(\theta,\phi) = \sqrt2\sin\theta\cos\phi\,\mathbf{e}_1 + \sqrt2\sin\theta\sin\phi\,\mathbf{e}_2 + \sqrt2\cos\theta\,\mathbf{e}_3,$$
with $\pi/4\le\theta\le\pi,\ 0\le\phi\le2\pi$. The surface normal can be evaluated from $\partial\mathbf{r}(\theta,\phi)/\partial\theta \times \partial\mathbf{r}(\theta,\phi)/\partial\phi$, as in the Mathematica® code below.
Computations using the three surfaces give the same value of $5\pi/4$, which is also the value of the field integrated over the closed path shown earlier. For hand calculations, simpler functions may be used.
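As a check on the quoted value, the following sketch (ours, not the book's notebook) integrates the hand-computed curl of the same test field over the first of the three surfaces, the flat disc on the plane $x_3 = 1$, using a midpoint rule in polar coordinates:

```python
import math

def curl_v(x, y, z):
    # curl of v = (sin x, cos y + x + x^2/2 + x^3/3, z cos x cos y), by hand:
    return (-z * math.cos(x) * math.sin(y),
             z * math.sin(x) * math.cos(y),
             1 + x + x**2)

def disc_integral(nr=400, nphi=400):
    """Integrate (curl v) . e3 over the unit disc at x3 = 1, dS = r dr dphi."""
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) / nr                          # midpoint rule in r
        for j in range(nphi):
            phi = 2 * math.pi * (j + 0.5) / nphi
            x, y = r * math.cos(phi), r * math.sin(phi)
            cz = curl_v(x, y, 1.0)[2]               # outward normal is e3
            total += cz * r * (1.0 / nr) * (2 * math.pi / nphi)
    return total

print(disc_integral())   # approximately 5*pi/4, matching the line integral
```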
Greenโs Theorem
Green's theorem can be viewed as a two-dimensional version of the 3-D Stokes theorem. Given two scalar fields $f_1(x_1,x_2),\ f_2(x_1,x_2)$, defined over a two-dimensional Euclidean point space, we can form the vector field,
$$\mathbf{v}(\mathbf{x}) = f_1(x_1,x_2)\,\mathbf{e}_1 + f_2(x_1,x_2)\,\mathbf{e}_2$$
$$\operatorname{curl}\mathbf{v}(\mathbf{x}) = \det\begin{pmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3\\[2pt] \dfrac{\partial}{\partial x_1} & \dfrac{\partial}{\partial x_2} & \dfrac{\partial}{\partial x_3}\\[2pt] f_1(x_1,x_2) & f_2(x_1,x_2) & 0 \end{pmatrix} = \left(\frac{\partial f_2}{\partial x_1} - \frac{\partial f_1}{\partial x_2}\right)\mathbf{e}_3,$$
since neither $f_1$ nor $f_2$ depends on $x_3$.
From Stokes' theorem,
$$\oint_\Gamma \mathbf{v}(\mathbf{x})\cdot d\mathbf{r} = \iint_{\mathcal{S}}(\operatorname{curl}\mathbf{v})\cdot d\mathbf{s}.$$
But $d\mathbf{r} = dx_1\,\mathbf{e}_1 + dx_2\,\mathbf{e}_2$ and $d\mathbf{s} = \mathbf{e}_3\,dx_1\,dx_2$, so that,
$$\oint_\Gamma \big(f_1(x_1,x_2)\,dx_1 + f_2(x_1,x_2)\,dx_2\big) = \iint_{\mathcal{S}}\left(\frac{\partial f_2}{\partial x_1} - \frac{\partial f_1}{\partial x_2}\right)\mathbf{e}_3\cdot\mathbf{e}_3\,dx_1\,dx_2 = \iint_{\mathcal{S}}\left(\frac{\partial f_2}{\partial x_1} - \frac{\partial f_1}{\partial x_2}\right)dx_1\,dx_2.$$
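A symbolic spot-check of Green's theorem with SymPy, using sample fields of our own choosing on the unit disc, can be sketched as:

```python
import sympy as sp

x, y, t, r, phi = sp.symbols('x y t r phi')
f1, f2 = -y**3, x**3                  # sample fields (our choice)

# boundary: unit circle x = cos t, y = sin t, traversed counter-clockwise
sub = {x: sp.cos(t), y: sp.sin(t)}
lhs = sp.integrate(f1.subs(sub) * sp.diff(sp.cos(t), t)
                   + f2.subs(sub) * sp.diff(sp.sin(t), t), (t, 0, 2*sp.pi))

# region: (df2/dx - df1/dy) in polar coordinates, area element r dr dphi
integrand = (sp.diff(f2, x) - sp.diff(f1, y)).subs({x: r*sp.cos(phi),
                                                    y: r*sp.sin(phi)})
rhs = sp.integrate(integrand * r, (r, 0, 1), (phi, 0, 2*sp.pi))

print(lhs, rhs)   # both equal 3*pi/2
```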
Gauss Divergence Theorem
The divergence theorem is central to several other results in continuum mechanics. It relates the volume integral of a field to the surface integral of the tensor product of the field and the unit normal. We present here a generalized form [Ogden] which states that, for a tensor field $\mathbf{T}$, the volume integral in the region $\Omega\subset\mathcal{E}$,
$$\int_\Omega (\operatorname{grad}\mathbf{T})\,dv = \int_{\partial\Omega} \mathbf{T}\otimes\mathbf{n}\,ds,$$
where $\mathbf{n}$ is the outward-drawn normal to $\partial\Omega$, the boundary of $\Omega$. The tensor product applies when the field is of order one or higher: vectors, second- and higher-order tensors. For scalars it degenerates, as expected, to a simple multiplication by a scalar. We now derive several more familiar statements of this theorem from equation ____ above:
Vector field.
Replacing the tensor with the vector field $\mathbf{f}$ and taking the trace on both sides, remembering that the integral operator is linear and therefore the trace of an integral equals the integral of its trace, we have,
$$\operatorname{tr}\int_\Omega(\operatorname{grad}\mathbf{f})\,dv = \operatorname{tr}\int_{\partial\Omega}\mathbf{f}\otimes\mathbf{n}\,ds$$
so that,
$$\int_\Omega(\operatorname{div}\mathbf{f})\,dv = \int_{\partial\Omega}\mathbf{f}\cdot\mathbf{n}\,ds,$$
stating that the integral of the divergence of a vector field over the volume equals the integral, over the surface, of the same field contracted with the outwardly drawn normal. This is the usual form of the Gauss theorem. We can proceed immediately to the form of this theorem that applies to a scalar function:
Scalar Fields
Consider a scalar field $\phi$ and a constant vector $\mathbf{c}$. We apply the vector form of the divergence theorem to the product $\phi\mathbf{c}$:
$$\int_\Omega\operatorname{div}[\phi\mathbf{c}]\,dv = \int_{\partial\Omega}\phi\,\mathbf{c}\cdot\mathbf{n}\,ds = \mathbf{c}\cdot\int_{\partial\Omega}\phi\,\mathbf{n}\,ds.$$
For the LHS, note that $\operatorname{div}[\phi\mathbf{c}] = \operatorname{tr}\operatorname{grad}[\phi\mathbf{c}]$, and
$$\operatorname{grad}[\phi\mathbf{c}] = (\phi c_i)_{,j}\,\mathbf{e}_i\otimes\mathbf{e}_j = c_i\phi_{,j}\,\mathbf{e}_i\otimes\mathbf{e}_j,$$
the trace of which is,
$$c_i\phi_{,j}\,\mathbf{e}_i\cdot\mathbf{e}_j = c_i\phi_{,j}\,\delta_{ij} = c_i\phi_{,i} = \mathbf{c}\cdot\operatorname{grad}\phi.$$
For the arbitrary constant vector $\mathbf{c}$, we therefore have,
$$\int_\Omega\operatorname{div}[\phi\mathbf{c}]\,dv = \mathbf{c}\cdot\int_\Omega\operatorname{grad}\phi\,dv = \mathbf{c}\cdot\int_{\partial\Omega}\phi\,\mathbf{n}\,ds$$
so that,
$$\int_\Omega\operatorname{grad}\phi\,dv = \int_{\partial\Omega}\phi\,\mathbf{n}\,ds.$$
Second-Order Tensors
We can apply the general form above to a second-order tensor field, $\mathbf{T}(\mathbf{x}) = T_{ij}(\mathbf{x})\,\mathbf{e}_i\otimes\mathbf{e}_j$. Applying equation ___ as before,
$$\int_\Omega T_{ij,k}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\,dv = \int_{\partial\Omega} T_{ij}n_k\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\,ds.$$
Contracting, we have
$$\int_\Omega T_{ij,k}\,(\mathbf{e}_j\cdot\mathbf{e}_k)\,\mathbf{e}_i\,dv = \int_{\partial\Omega} T_{ij}n_k\,(\mathbf{e}_j\cdot\mathbf{e}_k)\,\mathbf{e}_i\,ds$$
$$\int_\Omega T_{ij,k}\,\delta_{jk}\,\mathbf{e}_i\,dv = \int_{\partial\Omega} T_{ij}n_k\,\delta_{jk}\,\mathbf{e}_i\,ds$$
$$\int_\Omega T_{ij,j}\,\mathbf{e}_i\,dv = \int_{\partial\Omega} T_{ij}n_j\,\mathbf{e}_i\,ds,$$
which is the same as,
$$\int_\Omega(\operatorname{div}\mathbf{T})\,dv = \int_{\partial\Omega}\mathbf{T}\mathbf{n}\,ds,$$
relating the volume integral of the divergence of a tensor field to the same field, contracted with the unit normal to the boundary, integrated over the surface of the same volume.
Curvilinear Coordinates
Tensors provide a unified way to express most of the quantities and objects we come across in Continuum Mechanics. The material objects we deal with abide in the Euclidean 3D space. For computational purposes, we refer to this space using a global chart or coordinate system. Apart from readers who have ventured into the advanced-topics section of each chapter, we have restricted consideration so far to the simplest of such systems: the Cartesian system with orthonormal basis vectors (ONB).
There are oblique Cartesian coordinates that are the same as the ONB except that their axes do not necessarily meet at right angles. These oblique systems are not our main concern. The Cartesian system is useful and easy to use. Reasons for this ease of use include:
1. The base vectors meet at right angles. Not only are they mutually independent, as base vectors must be, but each also has zero components along the other base vectors: their scalar products with one another vanish.
2. The basis vectors are spatial constants. They remain normal to each other and always point in the same directions at every point.
3. Because of these, derivatives of tensors expressed in terms of such bases involve only the coefficients, as the base vectors are constants.
4. In addition, the position vectors in Cartesian coordinates are linear in the coordinate variables.
In curvilinear coordinate systems, the basis vectors are spatial variables; they are not necessarily unit vectors, and they are not necessarily orthogonal.
For the purpose of this work, we restrict attention to coordinate systems that are orthogonal. There are very useful and practical systems that meet this criterion: the cylindrical, spherical and spheroidal systems we have already seen are curvilinear. While they may not necessarily meet the normalization criterion, they are orthogonal. Yet we have to cope with the fact that their basis vectors are variables.
In the next section, we derive the derivatives of tensor fields with such varying bases. However, it is no longer necessary to go through the tedium of doing such computations unless a specific application compels one to go through the labor.
In this section, we shall show some of the principal results and principles for working with non-Cartesian systems that obviate the necessity of computing the derivatives with varying basis vectors.
The Covariant Derivative
We defined the gradient in Cartesian systems as,
$$\operatorname{grad}(\square) = (\square)_{,\alpha}\otimes\mathbf{g}^{\alpha}.$$
The comma in the above equation has been interpreted as a partial derivative thus far. The fact is that this equation is actually valid in every coordinate system, including the orthogonal curvilinear coordinates we are considering! All we need do is change our interpretation of this equation.
From now on, when we are referring to coordinate systems other than Cartesian, the comma operator will be interpreted as the covariant derivative. The full definition of the covariant derivative is in the advanced section; however, the explanation given here should suffice:
A covariant derivative is simply one in which all the terms that would have arisen from the spatial variability of the basis vectors are already added. Consequently, when we write,
$$\operatorname{grad}(\square) = (\square)_{,\alpha}\otimes\mathbf{g}^{\alpha},$$
we have an expression that is valid everywhere. Specifically, we can represent a second-order tensor and write,
$$\operatorname{grad}\mathbf{T} = (T_{ij}\,\mathbf{g}^{i}\otimes\mathbf{g}^{j})_{,\alpha}\otimes\mathbf{g}^{\alpha} = (T^{i}{}_{j}\,\mathbf{g}_{i}\otimes\mathbf{g}^{j})_{,\alpha}\otimes\mathbf{g}^{\alpha} = (T_{i}{}^{j}\,\mathbf{g}^{i}\otimes\mathbf{g}_{j})_{,\alpha}\otimes\mathbf{g}^{\alpha} = (T^{ij}\,\mathbf{g}_{i}\otimes\mathbf{g}_{j})_{,\alpha}\otimes\mathbf{g}^{\alpha},$$
which are four different ways of expressing the same gradient in curvilinear systems. They work exactly the same way as in the ordinary Cartesian experience with fixed base vectors. The covariant derivative has so taken care of the variability of the bases that the base vectors behave like constants under covariant differentiation! It does this by computing extra terms, built from the Christoffel symbols, that account for the variation of the basis vectors. Just as in the Cartesian system, therefore, there are no further terms to worry about.
The heavy lifting therefore transfers to the computation of the covariant derivative and the evaluation of the physical components of the tensor. We do not emphasize these computations as they are easily programmed. One of the advantages of a symbolic algebra system is that it facilitates such computation just for the asking! You do not need to do much programming! A few examples will be given shortly. Remember, all you need worry about are the equations as they exist in Cartesian systems. With the correct interpretation, they are valid in every system. And, when you need to compute actual values for those coordinate systems, a symbolic algebra system like Mathematica® delivers them easily.
The consequence of this is that many textbooks, including modern ones, devote a large number of pages to items that are handled completely by software, to an extent far beyond what can reasonably be placed in a textbook.
If there is any need (very doubtful) to compute the Christoffel symbols themselves, that too can be programmed; there is no need for manual computation. If you want manual computations despite all this, read the advanced part of this chapter, where they are explained.
Gradient, Divergence and Curl in Curvilinear Coordinates
In this section we show how to find these differential operations in the common cylindrical, spherical and oblate spheroidal systems. The extension to any popular coordinate system is immediate. Furthermore, you will see that Mathematica allows you to define a completely new coordinate system by yourself; if your definition is correctly specified, it will compute all the differential operators you need!
The gradient is easy to compute manually, and we do so right away. For a scalar field $f(x_1,x_2,x_3)$ in Cartesian coordinates, we already know that,
$$\operatorname{grad} f = \frac{\partial f}{\partial x_1}\mathbf{e}_1 + \frac{\partial f}{\partial x_2}\mathbf{e}_2 + \frac{\partial f}{\partial x_3}\mathbf{e}_3.$$
The corresponding expression can be obtained in cylindrical coordinates by observing that, in that system, the position vector is,
$$\mathbf{r} = r\cos\phi\,\mathbf{e}_1 + r\sin\phi\,\mathbf{e}_2 + z\,\mathbf{e}_z = r\,\mathbf{e}_r + z\,\mathbf{e}_z$$
so that
$$\mathbf{g}_r = \frac{\partial\mathbf{r}}{\partial r} = \mathbf{e}_r;\qquad \mathbf{g}_\phi = \frac{\partial\mathbf{r}}{\partial\phi} = r\,\mathbf{e}_\phi;\qquad \mathbf{g}_z = \frac{\partial\mathbf{r}}{\partial z} = \mathbf{e}_z.$$
The natural basis vectors,
$$\mathbf{g}_i \equiv \frac{\partial\mathbf{r}}{\partial q^i},$$
obtained by differentiating the position vector with respect to the coordinates $q^i$, in this case $q^1 = r,\ q^2 = \phi,\ q^3 = z$, are all unit vectors except the middle one. Suppose we did not know that. To obtain the physically consistent gradient in physical components, we need, for each system, to divide each term by the magnitude of the corresponding basis vector. In this case, therefore,
$$h_1 = \sqrt{\mathbf{g}_1\cdot\mathbf{g}_1} = 1,\qquad h_2 = \sqrt{\mathbf{g}_2\cdot\mathbf{g}_2} = r,\qquad h_3 = \sqrt{\mathbf{g}_3\cdot\mathbf{g}_3} = 1.$$
If we write $\mathbf{g}'_\alpha,\ \alpha = 1,2,3$, for the normalized basis vectors in the curvilinear system, the gradient is therefore,
$$\operatorname{grad} f = \sum_{\alpha=1}^{3}\frac{1}{h_\alpha}f_{,\alpha}\,\mathbf{g}'_\alpha = \frac{1}{h_1}\frac{\partial f}{\partial q^1}\mathbf{g}'_1 + \frac{1}{h_2}\frac{\partial f}{\partial q^2}\mathbf{g}'_2 + \frac{1}{h_3}\frac{\partial f}{\partial q^3}\mathbf{g}'_3 = \frac{\partial f}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial f}{\partial\phi}\mathbf{e}_\phi + \frac{\partial f}{\partial z}\mathbf{e}_z.$$
Notice that the summation convention cannot be used here, since the index $\alpha$ appears more than twice in a term; we therefore revert to the explicit summation symbol. This is the gradient in the normalized natural basis. Furthermore, we have dealt here only with a scalar field; for such a field the covariant derivative coincides with the partial derivative. Hence all we need do is normalize the basis vectors, dividing by their magnitudes, to obtain the physically consistent expression.
In spherical coordinates, with no further ado, we can write:
$$\operatorname{grad} f = \frac{1}{h_1}\frac{\partial f}{\partial q^1}\mathbf{g}'_1 + \frac{1}{h_2}\frac{\partial f}{\partial q^2}\mathbf{g}'_2 + \frac{1}{h_3}\frac{\partial f}{\partial q^3}\mathbf{g}'_3 = \frac{\partial f}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial f}{\partial\theta}\mathbf{e}_\theta + \frac{1}{r\sin\theta}\frac{\partial f}{\partial\phi}\mathbf{e}_\phi.$$
To find the gradient in any coordinate system, simply invoke the command as follows:
Comparing the above output to the previous result, it is obvious that, in cylindrical coordinates,
$$\frac{\partial f}{\partial r} = f^{(1,0,0)}(r,\phi,z),\qquad \frac{\partial f}{\partial\phi} = f^{(0,1,0)}(r,\phi,z),\qquad \frac{\partial f}{\partial z} = f^{(0,0,1)}(r,\phi,z),$$
and that the result is given as a list, with the understanding that it refers to the normalized basis vectors in each case.
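The same computation is available outside Mathematica. For instance, SymPy's vector module can be asked for the gradient directly in the normalized cylindrical basis; a sketch, assuming SymPy 1.2 or later and a sample field of our own choosing:

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient

# cylindrical system (rho, phi, z); sympy supplies the Lame coefficients
C = CoordSys3D('C', transformation='cylindrical',
               variable_names=('rho', 'phi', 'z'))

f = C.rho**2 * sp.sin(C.phi)        # sample scalar field (our choice)
g = gradient(f)                     # physical components on the unit basis
print(g)                            # 2*rho*sin(phi) e_rho + rho*cos(phi) e_phi
```

The $1/r$ factor on the $\mathbf{e}_\phi$ component appears automatically, just as in the hand computation above.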
Gradient of a vector
The gradient of a vector is a tensor. To compute it manually, we would have to carry out the covariant derivative with the attendant Christoffel symbols. How about doing it by simply writing the function in the Mathematica notebook as follows?
You are able to get any vector operation on a tensor function with such simple commands.
Examples
3.1 Given that $\alpha(t)\in\mathbb{R}$, $\mathbf{u}(t),\mathbf{v}(t)\in\mathbb{V}$, and $\mathbf{T}(t),\mathbf{S}(t)\in\mathbb{T}$ are all functions of a scalar variable $t$, show that (a) $\frac{d}{dt}(\alpha\mathbf{u}) = \alpha\frac{d\mathbf{u}}{dt} + \frac{d\alpha}{dt}\mathbf{u}$; (b) $\frac{d}{dt}(\mathbf{u}\cdot\mathbf{v}) = \frac{d\mathbf{u}}{dt}\cdot\mathbf{v} + \mathbf{u}\cdot\frac{d\mathbf{v}}{dt}$; (c) $\frac{d}{dt}(\mathbf{u}\times\mathbf{v}) = \frac{d\mathbf{u}}{dt}\times\mathbf{v} + \mathbf{u}\times\frac{d\mathbf{v}}{dt}$; and (d) $\frac{d}{dt}(\mathbf{u}\otimes\mathbf{v}) = \frac{d\mathbf{u}}{dt}\otimes\mathbf{v} + \mathbf{u}\otimes\frac{d\mathbf{v}}{dt}$.

Taking the dyad product (d) as representative,
$$\frac{d}{dt}(\mathbf{u}\otimes\mathbf{v}) = \lim_{h\to0}\frac{\mathbf{u}(t+h)\otimes\mathbf{v}(t+h) - \mathbf{u}(t)\otimes\mathbf{v}(t)}{h}$$
$$= \lim_{h\to0}\frac{\mathbf{u}(t+h)\otimes\mathbf{v}(t+h) - \mathbf{u}(t)\otimes\mathbf{v}(t+h) + \mathbf{u}(t)\otimes\mathbf{v}(t+h) - \mathbf{u}(t)\otimes\mathbf{v}(t)}{h}$$
$$= \left(\lim_{h\to0}\frac{\mathbf{u}(t+h)-\mathbf{u}(t)}{h}\right)\otimes\left(\lim_{h\to0}\mathbf{v}(t+h)\right) + \mathbf{u}(t)\otimes\lim_{h\to0}\frac{\mathbf{v}(t+h)-\mathbf{v}(t)}{h}$$
$$= \frac{d\mathbf{u}}{dt}\otimes\mathbf{v} + \mathbf{u}\otimes\frac{d\mathbf{v}}{dt}.$$
The remaining parts follow by the same splitting argument with the appropriate product.
3.2 Given that $\alpha(t)\in\mathbb{R}$, $\mathbf{u}(t),\mathbf{v}(t)\in\mathbb{V}$, and $\mathbf{T}(t),\mathbf{S}(t)\in\mathbb{T}$ are all functions of a scalar variable $t$, show that (a) $\frac{d}{dt}(\mathbf{T}+\mathbf{S}) = \frac{d\mathbf{T}}{dt} + \frac{d\mathbf{S}}{dt}$; (b) $\frac{d}{dt}(\mathbf{T}\mathbf{S}) = \frac{d\mathbf{T}}{dt}\mathbf{S} + \mathbf{T}\frac{d\mathbf{S}}{dt}$; (c) $\frac{d}{dt}(\mathbf{T}\mathbf{u}) = \frac{d\mathbf{T}}{dt}\mathbf{u} + \mathbf{T}\frac{d\mathbf{u}}{dt}$; and (d) $\frac{d}{dt}\mathbf{T}^{\mathrm T} = \left(\frac{d\mathbf{T}}{dt}\right)^{\mathrm T}$.

(b)
$$\frac{d}{dt}(\mathbf{T}\mathbf{S}) = \lim_{h\to0}\frac{\mathbf{T}(t+h)\mathbf{S}(t+h) - \mathbf{T}(t)\mathbf{S}(t)}{h}$$
$$= \lim_{h\to0}\frac{\mathbf{T}(t+h)\mathbf{S}(t+h) - \mathbf{T}(t)\mathbf{S}(t+h) + \mathbf{T}(t)\mathbf{S}(t+h) - \mathbf{T}(t)\mathbf{S}(t)}{h}$$
$$= \left(\lim_{h\to0}\frac{\mathbf{T}(t+h)-\mathbf{T}(t)}{h}\right)\left(\lim_{h\to0}\mathbf{S}(t+h)\right) + \mathbf{T}(t)\lim_{h\to0}\frac{\mathbf{S}(t+h)-\mathbf{S}(t)}{h}$$
$$= \frac{d\mathbf{T}}{dt}\mathbf{S} + \mathbf{T}\frac{d\mathbf{S}}{dt}.$$
3.3 Given that $\mathbf{Q}(t)\mathbf{Q}^{\mathrm T}(t) = \mathbf{I}$, the identity tensor, show (a) that $\frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathrm T}$ is an antisymmetric tensor, and (b) that $\mathbf{Q}^{\mathrm T}\frac{d\mathbf{Q}}{dt}$ is an antisymmetric tensor.

Differentiating $\mathbf{Q}\mathbf{Q}^{\mathrm T} = \mathbf{I}$,
$$\frac{d}{dt}(\mathbf{Q}\mathbf{Q}^{\mathrm T}) = \frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathrm T} + \mathbf{Q}\frac{d\mathbf{Q}^{\mathrm T}}{dt} = \frac{d\mathbf{I}}{dt} = \mathbf{O}.$$
Consequently,
$$\frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathrm T} = -\mathbf{Q}\frac{d\mathbf{Q}^{\mathrm T}}{dt} = -\left(\frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathrm T}\right)^{\mathrm T}.$$
So the tensor $\mathbf{T} = \frac{d\mathbf{Q}}{dt}\mathbf{Q}^{\mathrm T}$ is the negative of its own transpose, hence it is skew. Similarly,
$$\mathbf{Q}\mathbf{Q}^{\mathrm T} = \mathbf{I} = \mathbf{Q}^{\mathrm T}\mathbf{Q}.$$
Differentiating as before,
$$\frac{d}{dt}(\mathbf{Q}^{\mathrm T}\mathbf{Q}) = \frac{d\mathbf{Q}^{\mathrm T}}{dt}\mathbf{Q} + \mathbf{Q}^{\mathrm T}\frac{d\mathbf{Q}}{dt} = \mathbf{O}$$
so that,
$$\frac{d\mathbf{Q}^{\mathrm T}}{dt}\mathbf{Q} = \left(\mathbf{Q}^{\mathrm T}\frac{d\mathbf{Q}}{dt}\right)^{\mathrm T} = -\mathbf{Q}^{\mathrm T}\frac{d\mathbf{Q}}{dt}.$$
The tensor $\mathbf{S} = \mathbf{Q}^{\mathrm T}\frac{d\mathbf{Q}}{dt}$ is the negative of its own transpose, hence skew as required.
3.4 Show that $\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}(\mathbf{S}^n) = n(\mathbf{S}^{n-1})^{\mathrm T}$.

The cases $n = 1,\ n = 2$ are already provided in the text. When $n = 3$,
$$D\Pi(\mathbf{S}, d\mathbf{S}) = \frac{d}{d\alpha}\Pi(\mathbf{S}+\alpha\,d\mathbf{S})\Big|_{\alpha=0} = \frac{d}{d\alpha}\operatorname{tr}\{(\mathbf{S}+\alpha\,d\mathbf{S})^3\}\Big|_{\alpha=0}$$
$$= \frac{d}{d\alpha}\operatorname{tr}\{(\mathbf{S}+\alpha\,d\mathbf{S})(\mathbf{S}+\alpha\,d\mathbf{S})(\mathbf{S}+\alpha\,d\mathbf{S})\}\Big|_{\alpha=0}$$
$$= \operatorname{tr}\big[d\mathbf{S}\,(\mathbf{S}+\alpha\,d\mathbf{S})(\mathbf{S}+\alpha\,d\mathbf{S}) + (\mathbf{S}+\alpha\,d\mathbf{S})\,d\mathbf{S}\,(\mathbf{S}+\alpha\,d\mathbf{S}) + (\mathbf{S}+\alpha\,d\mathbf{S})(\mathbf{S}+\alpha\,d\mathbf{S})\,d\mathbf{S}\big]\Big|_{\alpha=0}$$
$$= \operatorname{tr}[d\mathbf{S}\,\mathbf{S}\,\mathbf{S} + \mathbf{S}\,d\mathbf{S}\,\mathbf{S} + \mathbf{S}\,\mathbf{S}\,d\mathbf{S}] = 3(\mathbf{S}^2)^{\mathrm T} : d\mathbf{S}.$$
It easily follows by induction that $\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}(\mathbf{S}^n) = n(\mathbf{S}^{n-1})^{\mathrm T}$.
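The result can be checked numerically by comparing the claimed gradient against central finite differences of $\operatorname{tr}(\mathbf{S}^3)$ at a random matrix. A NumPy sketch of our own:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))

F = lambda A: np.trace(np.linalg.matrix_power(A, 3))   # Pi(S) = tr(S^3)
G = 3 * np.linalg.matrix_power(S, 2).T                 # claimed: n (S^(n-1))^T

h = 1e-6
num = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        E = np.zeros((3, 3)); E[i, j] = h
        num[i, j] = (F(S + E) - F(S - E)) / (2 * h)    # d Pi / d S_ij

print(np.max(np.abs(num - G)))   # small: the two gradients agree
```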
3.5 Define the magnitude of tensor $\mathbf{S}$ as $|\mathbf{S}| = \sqrt{\operatorname{tr}(\mathbf{S}\mathbf{S}^{\mathrm T})}$. Show that $\frac{\partial|\mathbf{S}|}{\partial\mathbf{S}} = \frac{\mathbf{S}}{|\mathbf{S}|}$.

Given a scalar variable $\alpha$, the derivative of a scalar function of a tensor $\Pi(\mathbf{S})$ is defined by
$$\frac{\partial\Pi(\mathbf{S})}{\partial\mathbf{S}} : \mathbf{T} = \lim_{\alpha\to0}\frac{d}{d\alpha}\Pi(\mathbf{S}+\alpha\mathbf{T})$$
for any arbitrary tensor $\mathbf{T}$. In the case of $\Pi(\mathbf{S}) = |\mathbf{S}|$,
$$\frac{\partial|\mathbf{S}|}{\partial\mathbf{S}} : \mathbf{T} = \lim_{\alpha\to0}\frac{d}{d\alpha}|\mathbf{S}+\alpha\mathbf{T}|,$$
$$|\mathbf{S}+\alpha\mathbf{T}| = \sqrt{\operatorname{tr}\big((\mathbf{S}+\alpha\mathbf{T})(\mathbf{S}+\alpha\mathbf{T})^{\mathrm T}\big)} = \sqrt{\operatorname{tr}(\mathbf{S}\mathbf{S}^{\mathrm T} + \alpha\mathbf{T}\mathbf{S}^{\mathrm T} + \alpha\mathbf{S}\mathbf{T}^{\mathrm T} + \alpha^2\mathbf{T}\mathbf{T}^{\mathrm T})}.$$
Note that everything under the root sign here is scalar and that the trace operation is linear. Consequently, we can write,
$$\lim_{\alpha\to0}\frac{d}{d\alpha}|\mathbf{S}+\alpha\mathbf{T}| = \lim_{\alpha\to0}\frac{\operatorname{tr}(\mathbf{T}\mathbf{S}^{\mathrm T}) + \operatorname{tr}(\mathbf{S}\mathbf{T}^{\mathrm T}) + 2\alpha\operatorname{tr}(\mathbf{T}\mathbf{T}^{\mathrm T})}{2\sqrt{\operatorname{tr}(\mathbf{S}\mathbf{S}^{\mathrm T} + \alpha\mathbf{T}\mathbf{S}^{\mathrm T} + \alpha\mathbf{S}\mathbf{T}^{\mathrm T} + \alpha^2\mathbf{T}\mathbf{T}^{\mathrm T})}} = \frac{2\,\mathbf{S}:\mathbf{T}}{2\sqrt{\mathbf{S}:\mathbf{S}}} = \frac{\mathbf{S}}{|\mathbf{S}|} : \mathbf{T}$$
so that,
$$\frac{\partial|\mathbf{S}|}{\partial\mathbf{S}} : \mathbf{T} = \frac{\mathbf{S}}{|\mathbf{S}|} : \mathbf{T},\qquad\text{or}\qquad \frac{\partial|\mathbf{S}|}{\partial\mathbf{S}} = \frac{\mathbf{S}}{|\mathbf{S}|},$$
as required, since $\mathbf{T}$ is arbitrary.
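The same finite-difference check works here (a NumPy sketch of our own): the gradient of the Frobenius-type magnitude should match $\mathbf{S}/|\mathbf{S}|$ component by component.

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))

norm = lambda A: np.sqrt(np.trace(A @ A.T))   # |S| = sqrt(tr(S S^T))
G = S / norm(S)                               # claimed gradient S/|S|

h = 1e-6
num = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        E = np.zeros((3, 3)); E[i, j] = h
        num[i, j] = (norm(S + E) - norm(S - E)) / (2 * h)

print(np.max(np.abs(num - G)))   # small: the gradients agree
```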
3.6 Without breaking down into components, use Liouville's theorem, $\frac{\partial}{\partial\alpha}\det(\mathbf{S}) = \det(\mathbf{S})\operatorname{tr}(\dot{\mathbf{S}}\mathbf{S}^{-1})$, to establish the fact that $\frac{\partial\det(\mathbf{S})}{\partial\mathbf{S}} = \mathbf{S}^{\mathrm c}$.

Start from Liouville's theorem, given a scalar parameter such that $\mathbf{S} = \mathbf{S}(\alpha)$,
$$\frac{d}{d\alpha}(\det\mathbf{S}) = \det\mathbf{S}\,\operatorname{tr}\left[\left(\frac{d\mathbf{S}}{d\alpha}\right)\mathbf{S}^{-1}\right] = \big[(\det\mathbf{S})\,\mathbf{S}^{-\mathrm T}\big] : \left(\frac{d\mathbf{S}}{d\alpha}\right).$$
By the chain rule,
$$\frac{d}{d\alpha}(\det\mathbf{S}) = \left[\frac{\partial}{\partial\mathbf{S}}(\det\mathbf{S})\right] : \left(\frac{d\mathbf{S}}{d\alpha}\right).$$
It therefore follows that,
$$\left[\frac{\partial}{\partial\mathbf{S}}(\det\mathbf{S}) - (\det\mathbf{S})\,\mathbf{S}^{-\mathrm T}\right] : \left(\frac{d\mathbf{S}}{d\alpha}\right) = 0.$$
Hence
$$\frac{\partial}{\partial\mathbf{S}}(\det\mathbf{S}) = (\det\mathbf{S})\,\mathbf{S}^{-\mathrm T} = \mathbf{S}^{\mathrm c}.$$
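Since the determinant is linear in each single entry, a central difference along one component reproduces its partial derivative essentially exactly, so this claim is also easy to test numerically (our sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3))

G = np.linalg.det(S) * np.linalg.inv(S).T     # cofactor: det(S) S^(-T)

h = 1e-6
num = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        E = np.zeros((3, 3)); E[i, j] = h
        num[i, j] = (np.linalg.det(S + E) - np.linalg.det(S - E)) / (2 * h)

print(np.max(np.abs(num - G)))   # small: d(det S)/dS is the cofactor
```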
3.7 If $\mathbf{S}$ is invertible, show that $\frac{\partial}{\partial\mathbf{S}}\big(\log\det(\mathbf{S})\big) = \mathbf{S}^{-\mathrm T}$.
$$\frac{\partial}{\partial\mathbf{S}}(\log\det\mathbf{S}) = \frac{\partial(\log\det\mathbf{S})}{\partial\det\mathbf{S}}\,\frac{\partial\det\mathbf{S}}{\partial\mathbf{S}} = \frac{1}{\det\mathbf{S}}\,\mathbf{S}^{\mathrm c} = \frac{1}{\det\mathbf{S}}(\det\mathbf{S})\,\mathbf{S}^{-\mathrm T} = \mathbf{S}^{-\mathrm T}.$$
3.8 (a) Use the Cayley-Hamilton theorem to express the third invariant in terms of traces only. (b) Use this result and the fact that $\mathbf{S}^{\mathrm c} = (\mathbf{S}^2 - I_1\mathbf{S} + I_2\mathbf{I})^{\mathrm T}$ (2.19) to show that the derivative of the determinant is the cofactor.

(a) Given a tensor $\mathbf{S}$, by the Cayley-Hamilton theorem,
$$\mathbf{S}^3 - I_1\mathbf{S}^2 + I_2\mathbf{S} - I_3\mathbf{I} = \mathbf{O}.$$
Note that $I_1 = I_1(\mathbf{S}),\ I_2 = I_2(\mathbf{S})$ and $I_3 = I_3(\mathbf{S})$, the first, second and third invariants of $\mathbf{S}$, are all scalar functions of the tensor $\mathbf{S}$. Taking the trace of the above equation,
$$I_1(\mathbf{S}^3) = I_1(\mathbf{S})\,I_1(\mathbf{S}^2) - I_2(\mathbf{S})\,I_1(\mathbf{S}) + 3I_3(\mathbf{S})$$
$$= I_1(\mathbf{S})\big(I_1^2(\mathbf{S}) - 2I_2(\mathbf{S})\big) - I_1(\mathbf{S})\,I_2(\mathbf{S}) + 3I_3(\mathbf{S}) = I_1^3(\mathbf{S}) - 3I_1(\mathbf{S})\,I_2(\mathbf{S}) + 3I_3(\mathbf{S}).$$
Or,
$$I_3(\mathbf{S}) = \frac{1}{3}\big(I_1(\mathbf{S}^3) - I_1^3(\mathbf{S}) + 3I_1(\mathbf{S})\,I_2(\mathbf{S})\big)$$
$$= \frac{1}{3}\left(I_1(\mathbf{S}^3) - I_1^3(\mathbf{S}) + \frac{3}{2}I_1(\mathbf{S})\big(I_1^2(\mathbf{S}) - I_1(\mathbf{S}^2)\big)\right) = \frac{1}{6}\big(2I_1(\mathbf{S}^3) + I_1^3(\mathbf{S}) - 3I_1(\mathbf{S})\,I_1(\mathbf{S}^2)\big),$$
which shows that the third invariant is itself expressible in terms of traces only. It is therefore invariant in value under coordinate transformation.

(b) Taking the Fréchet derivative of the third-invariant expression above, noting that $\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}(\mathbf{S}^n) = n(\mathbf{S}^{n-1})^{\mathrm T}$,
$$6\,\frac{\partial}{\partial\mathbf{S}}I_3(\mathbf{S}) = 6\,\mathbf{S}^{2\mathrm T} + 3I_1^2(\mathbf{S})\,\mathbf{I} - 6I_1(\mathbf{S})\,\mathbf{S}^{\mathrm T} - 3I_1(\mathbf{S}^2)\,\mathbf{I}$$
so that
$$\frac{\partial}{\partial\mathbf{S}}I_3(\mathbf{S}) = \mathbf{S}^{2\mathrm T} - I_1(\mathbf{S})\,\mathbf{S}^{\mathrm T} + \frac{1}{2}\big(I_1^2(\mathbf{S}) - I_1(\mathbf{S}^2)\big)\mathbf{I} = \big(\mathbf{S}^2 - I_1(\mathbf{S})\,\mathbf{S} + I_2(\mathbf{S})\,\mathbf{I}\big)^{\mathrm T} = \mathbf{S}^{\mathrm c}.$$
3.9 If $\mathbf{S}$ is invertible, show that $\frac{\partial}{\partial\mathbf{S}}\big(\log\det(\mathbf{S}^{-1})\big) = -\mathbf{S}^{-\mathrm T}$.
$$\frac{\partial}{\partial\mathbf{S}}\big(\log\det(\mathbf{S}^{-1})\big) = \frac{\partial\big(\log\det(\mathbf{S}^{-1})\big)}{\partial\det(\mathbf{S}^{-1})}\,\frac{\partial\det(\mathbf{S}^{-1})}{\partial\mathbf{S}^{-1}}\,\frac{\partial\mathbf{S}^{-1}}{\partial\mathbf{S}}$$
$$= -\frac{1}{\det(\mathbf{S}^{-1})}\,\mathbf{S}^{-\mathrm c}\big(\mathbf{S}^{-1}\boxtimes\mathbf{S}^{-\mathrm T}\big) = -\frac{1}{\det(\mathbf{S}^{-1})}\det(\mathbf{S}^{-1})\,\mathbf{S}^{\mathrm T}\big(\mathbf{S}^{-1}\boxtimes\mathbf{S}^{-\mathrm T}\big) = -\mathbf{S}^{-\mathrm T}\mathbf{S}^{\mathrm T}\mathbf{S}^{-\mathrm T} = -\mathbf{S}^{-\mathrm T}.$$
An easier method that does not require the evaluation of a fourth-order tensor:
$$\frac{\partial}{\partial\mathbf{S}}\big(\log\det(\mathbf{S}^{-1})\big) = \frac{\partial\big(\log\det(\mathbf{S}^{-1})\big)}{\partial\det(\mathbf{S}^{-1})}\,\frac{\partial\det(\mathbf{S}^{-1})}{\partial\det(\mathbf{S})}\,\frac{\partial\det(\mathbf{S})}{\partial\mathbf{S}}$$
$$= \frac{1}{\det(\mathbf{S}^{-1})}\,\frac{\partial\big(1/\det(\mathbf{S})\big)}{\partial\det(\mathbf{S})}\,\mathbf{S}^{\mathrm c} = -\frac{\det(\mathbf{S})}{\det^2(\mathbf{S})}\,\mathbf{S}^{\mathrm c} = -\mathbf{S}^{-\mathrm T}.$$
3.10 Given that $\mathbf{A}$ and $\mathbf{B}$ are constant tensors, show that $\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}(\mathbf{A}\mathbf{S}\mathbf{B}^{\mathrm T}) = \mathbf{A}^{\mathrm T}\mathbf{B}$.

First observe that $\operatorname{tr}(\mathbf{A}\mathbf{S}\mathbf{B}^{\mathrm T}) = \operatorname{tr}(\mathbf{B}^{\mathrm T}\mathbf{A}\mathbf{S})$. If we write $\mathbf{C} \equiv \mathbf{B}^{\mathrm T}\mathbf{A}$, it is obvious from the above that $\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}(\mathbf{C}\mathbf{S}) = \mathbf{C}^{\mathrm T}$. Therefore,
$$\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}(\mathbf{A}\mathbf{S}\mathbf{B}^{\mathrm T}) = (\mathbf{B}^{\mathrm T}\mathbf{A})^{\mathrm T} = \mathbf{A}^{\mathrm T}\mathbf{B}.$$
3.11 Given that $\mathbf{A}$ and $\mathbf{B}$ are constant tensors, show that $\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}(\mathbf{A}\mathbf{S}^{\mathrm T}\mathbf{B}^{\mathrm T}) = \mathbf{B}^{\mathrm T}\mathbf{A}$.

Observe that
$$\operatorname{tr}(\mathbf{A}\mathbf{S}^{\mathrm T}\mathbf{B}^{\mathrm T}) = \operatorname{tr}(\mathbf{B}^{\mathrm T}\mathbf{A}\mathbf{S}^{\mathrm T}) = \operatorname{tr}\big[\mathbf{S}(\mathbf{B}^{\mathrm T}\mathbf{A})^{\mathrm T}\big] = \operatorname{tr}\big[(\mathbf{B}^{\mathrm T}\mathbf{A})^{\mathrm T}\mathbf{S}\big].$$
[Transposition does not alter the trace; neither does a cyclic permutation. Ensure you understand why each equality here is true.] Consequently,
$$\frac{\partial}{\partial\mathbf{S}}\operatorname{tr}(\mathbf{A}\mathbf{S}^{\mathrm T}\mathbf{B}^{\mathrm T}) = \frac{\partial}{\partial\mathbf{S}}\operatorname{tr}\big[(\mathbf{B}^{\mathrm T}\mathbf{A})^{\mathrm T}\mathbf{S}\big] = \big[(\mathbf{B}^{\mathrm T}\mathbf{A})^{\mathrm T}\big]^{\mathrm T} = \mathbf{B}^{\mathrm T}\mathbf{A}.$$
3.12 Let $\mathbf{S}$ be a symmetric and positive definite tensor and let $I_1(\mathbf{S}),\ I_2(\mathbf{S})$ and $I_3(\mathbf{S})$ be the three principal invariants of $\mathbf{S}$. Show, using components, that (a) $\frac{\partial I_1(\mathbf{S})}{\partial\mathbf{S}} = \mathbf{I}$, the identity tensor, (b) $\frac{\partial I_2(\mathbf{S})}{\partial\mathbf{S}} = I_1(\mathbf{S})\,\mathbf{I} - \mathbf{S}$ and (c) $\frac{\partial I_3(\mathbf{S})}{\partial\mathbf{S}} = I_3(\mathbf{S})\,\mathbf{S}^{-1}$.

(a) $\frac{\partial I_1(\mathbf{S})}{\partial\mathbf{S}}$ can be written in the invariant component form as,
$$\frac{\partial I_1(\mathbf{S})}{\partial\mathbf{S}} = \frac{\partial I_1(\mathbf{S})}{\partial S_{ij}}\,\mathbf{e}_i\otimes\mathbf{e}_j.$$
Recall that $I_1(\mathbf{S}) = \operatorname{tr}\mathbf{S} = S_{\alpha\alpha}$, hence
$$\frac{\partial I_1(\mathbf{S})}{\partial\mathbf{S}} = \frac{\partial S_{\alpha\alpha}}{\partial S_{ij}}\,\mathbf{e}_i\otimes\mathbf{e}_j = \delta_{i\alpha}\delta_{\alpha j}\,\mathbf{e}_i\otimes\mathbf{e}_j = \delta_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j = \mathbf{I},$$
which is the identity tensor, as expected.

(b) $\frac{\partial I_2(\mathbf{S})}{\partial\mathbf{S}}$ can similarly be written in the invariant component form,
$$\frac{\partial I_2(\mathbf{S})}{\partial\mathbf{S}} = \frac{1}{2}\frac{\partial}{\partial S_{ij}}\big[S_{\alpha\alpha}S_{\beta\beta} - S_{\alpha\beta}S_{\beta\alpha}\big]\,\mathbf{e}_i\otimes\mathbf{e}_j,$$
where we have utilized the fact that $I_2(\mathbf{S}) = \frac{1}{2}\big[\operatorname{tr}^2\mathbf{S} - \operatorname{tr}\mathbf{S}^2\big]$. Consequently,
$$\frac{\partial I_2(\mathbf{S})}{\partial\mathbf{S}} = \frac{1}{2}\big[\delta_{i\alpha}\delta_{\alpha j}S_{\beta\beta} + \delta_{i\beta}\delta_{\beta j}S_{\alpha\alpha} - \delta_{\beta i}\delta_{j\alpha}S_{\alpha\beta} - \delta_{\alpha i}\delta_{j\beta}S_{\beta\alpha}\big]\,\mathbf{e}_i\otimes\mathbf{e}_j$$
$$= \frac{1}{2}\big[\delta_{ij}S_{\beta\beta} + \delta_{ij}S_{\alpha\alpha} - S_{ji} - S_{ji}\big]\,\mathbf{e}_i\otimes\mathbf{e}_j = I_1(\mathbf{S})\,\mathbf{I} - \mathbf{S},$$
the last step using the symmetry of $\mathbf{S}$.

(c) $\det\mathbf{S} = \dfrac{1}{3!}\,\epsilon_{ijk}\epsilon_{rst}\,S_{ir}S_{js}S_{kt}$. Differentiating with respect to $S_{\alpha\beta}$, we obtain,
$$\frac{\partial\det\mathbf{S}}{\partial S_{\alpha\beta}}\,\mathbf{e}_\alpha\otimes\mathbf{e}_\beta = \frac{1}{3!}\,\epsilon_{ijk}\epsilon_{rst}\left[\frac{\partial S_{ir}}{\partial S_{\alpha\beta}}S_{js}S_{kt} + S_{ir}\frac{\partial S_{js}}{\partial S_{\alpha\beta}}S_{kt} + S_{ir}S_{js}\frac{\partial S_{kt}}{\partial S_{\alpha\beta}}\right]\mathbf{e}_\alpha\otimes\mathbf{e}_\beta$$
$$= \frac{1}{3!}\,\epsilon_{ijk}\epsilon_{rst}\big[\delta_{i\alpha}\delta_{r\beta}S_{js}S_{kt} + S_{ir}\delta_{j\alpha}\delta_{s\beta}S_{kt} + S_{ir}S_{js}\delta_{k\alpha}\delta_{t\beta}\big]\mathbf{e}_\alpha\otimes\mathbf{e}_\beta$$
$$= \frac{1}{3!}\big[\epsilon_{\alpha jk}\epsilon_{\beta st}S_{js}S_{kt} + \epsilon_{i\alpha k}\epsilon_{r\beta t}S_{ir}S_{kt} + \epsilon_{ij\alpha}\epsilon_{rs\beta}S_{ir}S_{js}\big]\mathbf{e}_\alpha\otimes\mathbf{e}_\beta$$
$$= \frac{1}{2!}\,\epsilon_{\alpha jk}\epsilon_{\beta st}\,S_{js}S_{kt}\,\mathbf{e}_\alpha\otimes\mathbf{e}_\beta \equiv [\mathbf{S}^{\mathrm c}]_{\alpha\beta}\,\mathbf{e}_\alpha\otimes\mathbf{e}_\beta,$$
which is the cofactor of $[S_{\alpha\beta}]$, that is, $\mathbf{S}^{\mathrm c}$. Since $\mathbf{S}^{\mathrm c} = \det(\mathbf{S})\,\mathbf{S}^{-\mathrm T}$ and $\mathbf{S}$ is symmetric, this equals $I_3(\mathbf{S})\,\mathbf{S}^{-1}$.
3.13 Divergence of a product: given that $\phi$ is a scalar field and $\mathbf{v}$ a vector field, show that $\operatorname{div}(\phi\mathbf{v}) = (\operatorname{grad}\phi)\cdot\mathbf{v} + \phi\operatorname{div}\mathbf{v}$.
$$\operatorname{grad}(\phi\mathbf{v}) = (\phi v_i)_{,j}\,\mathbf{e}_i\otimes\mathbf{e}_j = \phi_{,j}v_i\,\mathbf{e}_i\otimes\mathbf{e}_j + \phi\,v_{i,j}\,\mathbf{e}_i\otimes\mathbf{e}_j = \mathbf{v}\otimes(\operatorname{grad}\phi) + \phi\operatorname{grad}\mathbf{v}.$$
Now, $\operatorname{div}(\phi\mathbf{v}) = \operatorname{tr}\big(\operatorname{grad}(\phi\mathbf{v})\big)$. Taking the trace of the above, we have:
$$\operatorname{div}(\phi\mathbf{v}) = \mathbf{v}\cdot(\operatorname{grad}\phi) + \phi\operatorname{div}\mathbf{v}.$$
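This identity, like most of the identities in this set, can be confirmed with SymPy's Cartesian vector calculus. A sketch with sample fields of our own choosing:

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence, gradient

C = CoordSys3D('C')
x, y, z = C.x, C.y, C.z

phi = x * y * sp.sin(z)                           # sample scalar field
v = x*z*C.i + y**2*C.j + sp.cos(x)*C.k            # sample vector field

lhs = divergence(phi * v)
rhs = gradient(phi).dot(v) + phi * divergence(v)
print(sp.simplify(lhs - rhs))   # 0
```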
3.14 Show that $\operatorname{grad}(\mathbf{u}\cdot\mathbf{v}) = (\operatorname{grad}\mathbf{u})^{\mathrm T}\mathbf{v} + (\operatorname{grad}\mathbf{v})^{\mathrm T}\mathbf{u}$.

$\mathbf{u}\cdot\mathbf{v} = u_iv_i$ is a scalar sum of components.
$$\operatorname{grad}(\mathbf{u}\cdot\mathbf{v}) = (u_iv_i)_{,j}\,\mathbf{e}_j = u_{i,j}v_i\,\mathbf{e}_j + u_iv_{i,j}\,\mathbf{e}_j.$$
Now $\operatorname{grad}\mathbf{u} = u_{i,j}\,\mathbf{e}_i\otimes\mathbf{e}_j$; swapping the bases, we have that,
$$(\operatorname{grad}\mathbf{u})^{\mathrm T} = u_{i,j}\,\mathbf{e}_j\otimes\mathbf{e}_i.$$
Writing $\mathbf{v} = v_k\mathbf{e}_k$, we have that,
$$(\operatorname{grad}\mathbf{u})^{\mathrm T}\mathbf{v} = u_{i,j}v_k\,(\mathbf{e}_j\otimes\mathbf{e}_i)\mathbf{e}_k = u_{i,j}v_k\,\mathbf{e}_j\,\delta_{ik} = u_{i,j}v_i\,\mathbf{e}_j.$$
It is easy to similarly show that $u_iv_{i,j}\,\mathbf{e}_j = (\operatorname{grad}\mathbf{v})^{\mathrm T}\mathbf{u}$. Clearly,
$$\operatorname{grad}(\mathbf{u}\cdot\mathbf{v}) = u_{i,j}v_i\,\mathbf{e}_j + u_iv_{i,j}\,\mathbf{e}_j = (\operatorname{grad}\mathbf{u})^{\mathrm T}\mathbf{v} + (\operatorname{grad}\mathbf{v})^{\mathrm T}\mathbf{u},$$
as required.
3.16 Show that $\operatorname{grad}(\mathbf{u}\times\mathbf{v}) = (\mathbf{u}\times)\operatorname{grad}\mathbf{v} - (\mathbf{v}\times)\operatorname{grad}\mathbf{u}$.
$$\mathbf{u}\times\mathbf{v} = \epsilon_{ijk}u_jv_k\,\mathbf{e}_i.$$
Recall that the gradient of this vector is the tensor,
$$\operatorname{grad}(\mathbf{u}\times\mathbf{v}) = (\epsilon_{ijk}u_jv_k)_{,l}\,\mathbf{e}_i\otimes\mathbf{e}_l = \epsilon_{ijk}u_{j,l}v_k\,\mathbf{e}_i\otimes\mathbf{e}_l + \epsilon_{ijk}u_jv_{k,l}\,\mathbf{e}_i\otimes\mathbf{e}_l$$
$$= -\epsilon_{ijk}v_ju_{k,l}\,\mathbf{e}_i\otimes\mathbf{e}_l + \epsilon_{ijk}u_jv_{k,l}\,\mathbf{e}_i\otimes\mathbf{e}_l = -(\mathbf{v}\times)\operatorname{grad}\mathbf{u} + (\mathbf{u}\times)\operatorname{grad}\mathbf{v},$$
where the first term follows on relabeling the dummy indices $j\leftrightarrow k$ and using the antisymmetry of $\epsilon_{ijk}$.
3.17 Show that $\operatorname{div}(\mathbf{u}\times\mathbf{v}) = \mathbf{v}\cdot\operatorname{curl}\mathbf{u} - \mathbf{u}\cdot\operatorname{curl}\mathbf{v}$.

We already have the expression for $\operatorname{grad}(\mathbf{u}\times\mathbf{v})$ above; remember that
$$\operatorname{div}(\mathbf{u}\times\mathbf{v}) = \operatorname{tr}\big[\operatorname{grad}(\mathbf{u}\times\mathbf{v})\big] = \epsilon_{ijk}u_{j,l}v_k\,\mathbf{e}_i\cdot\mathbf{e}_l + \epsilon_{ijk}u_jv_{k,l}\,\mathbf{e}_i\cdot\mathbf{e}_l$$
$$= \epsilon_{ijk}u_{j,i}v_k + \epsilon_{ijk}u_jv_{k,i} = \mathbf{v}\cdot\operatorname{curl}\mathbf{u} - \mathbf{u}\cdot\operatorname{curl}\mathbf{v},$$
since $\epsilon_{ijk}u_{j,i} = (\operatorname{curl}\mathbf{u})_k$ while $\epsilon_{ijk}v_{k,i} = -(\operatorname{curl}\mathbf{v})_j$.
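A quick SymPy confirmation of this identity (our sample fields):

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence, curl

C = CoordSys3D('C')
x, y, z = C.x, C.y, C.z

u = y*z*C.i + x**2*C.j + sp.sin(y)*C.k            # sample fields (our choice)
v = z**2*C.i + x*y*C.j + y*C.k

lhs = divergence(u.cross(v))
rhs = v.dot(curl(u)) - u.dot(curl(v))
print(sp.simplify(lhs - rhs))   # 0
```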
3.18 Given a scalar point function $\phi$ and a vector field $\mathbf{v}$, show that $\operatorname{curl}(\phi\mathbf{v}) = \phi\operatorname{curl}\mathbf{v} + (\operatorname{grad}\phi)\times\mathbf{v}$.
$$\operatorname{curl}(\phi\mathbf{v}) = \epsilon_{ijk}(\phi v_k)_{,j}\,\mathbf{e}_i = \epsilon_{ijk}(\phi_{,j}v_k + \phi v_{k,j})\,\mathbf{e}_i = \epsilon_{ijk}\phi_{,j}v_k\,\mathbf{e}_i + \epsilon_{ijk}\phi\,v_{k,j}\,\mathbf{e}_i = (\operatorname{grad}\phi)\times\mathbf{v} + \phi\operatorname{curl}\mathbf{v}.$$
3.19 Show that $\operatorname{div}(\mathbf{u}\otimes\mathbf{v}) = (\operatorname{div}\mathbf{v})\mathbf{u} + (\operatorname{grad}\mathbf{u})\mathbf{v}$.

$\mathbf{u}\otimes\mathbf{v}$ is the tensor $u_iv_j\,\mathbf{e}_i\otimes\mathbf{e}_j$. Its gradient is the third-order tensor,
$$\operatorname{grad}(\mathbf{u}\otimes\mathbf{v}) = (u_iv_j)_{,k}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k,$$
and by divergence we mean the contraction of the last two basis vectors:
$$\operatorname{div}(\mathbf{u}\otimes\mathbf{v}) = (u_iv_j)_{,k}\,(\mathbf{e}_j\cdot\mathbf{e}_k)\,\mathbf{e}_i = (u_iv_j)_{,k}\,\delta_{jk}\,\mathbf{e}_i = (u_iv_j)_{,j}\,\mathbf{e}_i = u_{i,j}v_j\,\mathbf{e}_i + u_iv_{j,j}\,\mathbf{e}_i = (\operatorname{grad}\mathbf{u})\mathbf{v} + (\operatorname{div}\mathbf{v})\mathbf{u}.$$
3.20 For a scalar field $\phi$ and a tensor field $\mathbf{T}$, show that $\operatorname{grad}(\phi\mathbf{T}) = \phi\operatorname{grad}\mathbf{T} + \mathbf{T}\otimes\operatorname{grad}\phi$. Also show that $\operatorname{div}(\phi\mathbf{T}) = \phi\operatorname{div}\mathbf{T} + \mathbf{T}\operatorname{grad}\phi$.
$$\operatorname{grad}(\phi\mathbf{T}) = (\phi T_{ij})_{,k}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k = (\phi_{,k}T_{ij} + \phi T_{ij,k})\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k = \mathbf{T}\otimes\operatorname{grad}\phi + \phi\operatorname{grad}\mathbf{T}.$$
Furthermore, we can contract the last two bases and obtain,
$$\operatorname{div}(\phi\mathbf{T}) = (\phi_{,k}T_{ij} + \phi T_{ij,k})\,(\mathbf{e}_j\cdot\mathbf{e}_k)\,\mathbf{e}_i = (\phi_{,k}T_{ij} + \phi T_{ij,k})\,\delta_{jk}\,\mathbf{e}_i = T_{ij}\phi_{,j}\,\mathbf{e}_i + \phi\,T_{ij,j}\,\mathbf{e}_i = \mathbf{T}\operatorname{grad}\phi + \phi\operatorname{div}\mathbf{T}.$$
3.21 For two arbitrary tensors $\mathbf{S}$ and $\mathbf{T}$, show that $\operatorname{grad}(\mathbf{S}\mathbf{T}) = (\operatorname{grad}\mathbf{S}^{\mathrm T})^{\mathrm T}\mathbf{T} + \mathbf{S}\operatorname{grad}\mathbf{T}$.
$$\operatorname{grad}(\mathbf{S}\mathbf{T}) = (S_{ib}T_{bj})_{,\alpha}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_\alpha = (S_{ib,\alpha}T_{bj} + S_{ib}T_{bj,\alpha})\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_\alpha = (\operatorname{grad}\mathbf{S}^{\mathrm T})^{\mathrm T}\mathbf{T} + \mathbf{S}\operatorname{grad}\mathbf{T}.$$
3.22 Show that $\operatorname{curl}(\operatorname{grad}\mathbf{v})^{\mathrm T} = \operatorname{grad}(\operatorname{curl}\mathbf{v})$.

From a previous derivation, we can see that $\operatorname{curl}\mathbf{T} = \epsilon_{ijk}T_{\alpha k,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha$. Clearly,
$$\operatorname{curl}\mathbf{T}^{\mathrm T} = \epsilon_{ijk}T_{k\alpha,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha,$$
so that $\operatorname{curl}(\operatorname{grad}\mathbf{v})^{\mathrm T} = \epsilon_{ijk}v_{k,\alpha j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha$. But $\operatorname{curl}\mathbf{v} = \epsilon_{ijk}v_{k,j}\,\mathbf{e}_i$. The gradient of this is,
$$\operatorname{grad}(\operatorname{curl}\mathbf{v}) = (\epsilon_{ijk}v_{k,j})_{,\alpha}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha = \epsilon_{ijk}v_{k,j\alpha}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha = \operatorname{curl}(\operatorname{grad}\mathbf{v})^{\mathrm T}.$$
3.23 For a second-order tensor $\mathbf{T} = T_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j$, write the explicit expression for $\operatorname{curl}\mathbf{T}$ in Cartesian coordinates.
$$\operatorname{curl}\mathbf{T} = \epsilon_{ijk}T_{\alpha k,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha.$$
Written out term by term, for $i = 1,2,3$ with $\alpha = 1,2,3$:
$$i=1:\quad \left(\frac{\partial T_{13}}{\partial x_2}-\frac{\partial T_{12}}{\partial x_3}\right)\mathbf{e}_1\otimes\mathbf{e}_1 + \left(\frac{\partial T_{23}}{\partial x_2}-\frac{\partial T_{22}}{\partial x_3}\right)\mathbf{e}_1\otimes\mathbf{e}_2 + \left(\frac{\partial T_{33}}{\partial x_2}-\frac{\partial T_{32}}{\partial x_3}\right)\mathbf{e}_1\otimes\mathbf{e}_3$$
$$i=2:\quad \left(\frac{\partial T_{11}}{\partial x_3}-\frac{\partial T_{13}}{\partial x_1}\right)\mathbf{e}_2\otimes\mathbf{e}_1 + \left(\frac{\partial T_{21}}{\partial x_3}-\frac{\partial T_{23}}{\partial x_1}\right)\mathbf{e}_2\otimes\mathbf{e}_2 + \left(\frac{\partial T_{31}}{\partial x_3}-\frac{\partial T_{33}}{\partial x_1}\right)\mathbf{e}_2\otimes\mathbf{e}_3$$
$$i=3:\quad \left(\frac{\partial T_{12}}{\partial x_1}-\frac{\partial T_{11}}{\partial x_2}\right)\mathbf{e}_3\otimes\mathbf{e}_1 + \left(\frac{\partial T_{22}}{\partial x_1}-\frac{\partial T_{21}}{\partial x_2}\right)\mathbf{e}_3\otimes\mathbf{e}_2 + \left(\frac{\partial T_{32}}{\partial x_1}-\frac{\partial T_{31}}{\partial x_2}\right)\mathbf{e}_3\otimes\mathbf{e}_3$$
3.24 For two arbitrary tensors $\mathbf{S}$ and $\mathbf{T}$, show that $\operatorname{div}(\mathbf{S}\mathbf{T}) = (\operatorname{grad}\mathbf{S}):\mathbf{T} + \mathbf{S}\operatorname{div}\mathbf{T}$.
$$\operatorname{grad}(\mathbf{S}\mathbf{T}) = (S_{ib}T_{bj})_{,\alpha}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_\alpha = (S_{ib,\alpha}T_{bj} + S_{ib}T_{bj,\alpha})\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_\alpha$$
$$\operatorname{div}(\mathbf{S}\mathbf{T}) = (S_{ib,\alpha}T_{bj} + S_{ib}T_{bj,\alpha})\,(\mathbf{e}_j\cdot\mathbf{e}_\alpha)\,\mathbf{e}_i = (S_{ib,j}T_{bj} + S_{ib}T_{bj,j})\,\mathbf{e}_i = (\operatorname{grad}\mathbf{S}):\mathbf{T} + \mathbf{S}\operatorname{div}\mathbf{T}.$$
3.25 For two arbitrary vectors $\mathbf{u}$ and $\mathbf{v}$, show that $\operatorname{grad}(\mathbf{u}\times\mathbf{v}) = (\mathbf{u}\times)\operatorname{grad}\mathbf{v} - (\mathbf{v}\times)\operatorname{grad}\mathbf{u}$.
$$\operatorname{grad}(\mathbf{u}\times\mathbf{v}) = (\epsilon_{ijk}u_jv_k)_{,l}\,\mathbf{e}_i\otimes\mathbf{e}_l = (\epsilon_{ijk}u_{j,l}v_k + \epsilon_{ijk}u_jv_{k,l})\,\mathbf{e}_i\otimes\mathbf{e}_l = -(\mathbf{v}\times)\operatorname{grad}\mathbf{u} + (\mathbf{u}\times)\operatorname{grad}\mathbf{v}.$$
3.26 For a vector field $\mathbf{u}$, show that $\operatorname{grad}(\mathbf{u}\times)$ is a third-order tensor. Hence, or otherwise, show that $\operatorname{div}(\mathbf{u}\times) = -\operatorname{curl}\mathbf{u}$.

The second-order tensor $(\mathbf{u}\times)$ is defined as $\epsilon_{ijk}u_j\,\mathbf{e}_i\otimes\mathbf{e}_k$. Taking the derivative with an independent base, we have
$$\operatorname{grad}(\mathbf{u}\times) = \epsilon_{ijk}u_{j,l}\,\mathbf{e}_i\otimes\mathbf{e}_k\otimes\mathbf{e}_l.$$
This gives a third-order tensor, as we have seen. Contracting on the last two bases,
$$\operatorname{div}(\mathbf{u}\times) = \epsilon_{ijk}u_{j,l}\,(\mathbf{e}_k\cdot\mathbf{e}_l)\,\mathbf{e}_i = \epsilon_{ijk}u_{j,l}\,\delta_{kl}\,\mathbf{e}_i = \epsilon_{ijk}u_{j,k}\,\mathbf{e}_i = -\operatorname{curl}\mathbf{u},$$
since $\operatorname{curl}\mathbf{u} = \epsilon_{ijk}u_{k,j}\,\mathbf{e}_i = -\epsilon_{ijk}u_{j,k}\,\mathbf{e}_i$.
3.27 Show that $\operatorname{div}(\phi\mathbf{I}) = \operatorname{grad}\phi$.

Note that $\phi\mathbf{I} = \phi\,\delta_{\alpha\beta}\,\mathbf{e}_\alpha\otimes\mathbf{e}_\beta$. Also note that
$$\operatorname{grad}(\phi\mathbf{I}) = (\phi\delta_{\alpha\beta})_{,i}\,\mathbf{e}_\alpha\otimes\mathbf{e}_\beta\otimes\mathbf{e}_i.$$
The divergence of this third-order tensor is the contraction of the last two bases:
$$\operatorname{div}(\phi\mathbf{I}) = (\phi\delta_{\alpha\beta})_{,i}\,(\mathbf{e}_\beta\cdot\mathbf{e}_i)\,\mathbf{e}_\alpha = \phi_{,i}\,\delta_{\alpha\beta}\,\delta_{\beta i}\,\mathbf{e}_\alpha = \phi_{,i}\,\delta_{\alpha i}\,\mathbf{e}_\alpha = \phi_{,\alpha}\,\mathbf{e}_\alpha = \operatorname{grad}\phi.$$
3.28 Show that $\operatorname{curl}(\phi\mathbf{I}) = (\operatorname{grad}\phi)\times$.

Note that $\phi\mathbf{I} = \phi\,\delta_{\alpha\beta}\,\mathbf{e}_\alpha\otimes\mathbf{e}_\beta$, and that $\operatorname{curl}\mathbf{T} = \epsilon_{ijk}T_{\alpha k,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha$, so that,
$$\operatorname{curl}(\phi\mathbf{I}) = \epsilon_{ijk}(\phi\delta_{\alpha k})_{,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha = \epsilon_{ijk}\phi_{,j}\,\delta_{\alpha k}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha = \epsilon_{ijk}\phi_{,j}\,\mathbf{e}_i\otimes\mathbf{e}_k = (\operatorname{grad}\phi)\times.$$
3.29 Show that the dyad $\mathbf{u}\otimes\mathbf{v}$ is NOT, in general, symmetric: $\mathbf{u}\otimes\mathbf{v} = \mathbf{v}\otimes\mathbf{u} - (\mathbf{u}\times\mathbf{v})\times$.
$$\mathbf{u}\times\mathbf{v} = \epsilon_{ijk}u_jv_k\,\mathbf{e}_i$$
$$\big((\mathbf{u}\times\mathbf{v})\times\big) = \epsilon_{\alpha i\beta}\,\epsilon_{ijk}\,u_jv_k\,\mathbf{e}_\alpha\otimes\mathbf{e}_\beta = -(\delta_{\alpha j}\delta_{\beta k} - \delta_{\alpha k}\delta_{\beta j})\,u_jv_k\,\mathbf{e}_\alpha\otimes\mathbf{e}_\beta = (-u_\alpha v_\beta + u_\beta v_\alpha)\,\mathbf{e}_\alpha\otimes\mathbf{e}_\beta = \mathbf{v}\otimes\mathbf{u} - \mathbf{u}\otimes\mathbf{v}.$$
3.30 Show that $\operatorname{div}(\mathbf{u}\times\mathbf{v}) = \mathbf{v}\cdot\operatorname{curl}\mathbf{u} - \mathbf{u}\cdot\operatorname{curl}\mathbf{v}$.
$$\operatorname{div}(\mathbf{u}\times\mathbf{v}) = (\epsilon_{ijk}u_jv_k)_{,i}.$$
Noting that the tensor $\epsilon_{ijk}$ behaves as a constant under differentiation, we can write,
$$\operatorname{div}(\mathbf{u}\times\mathbf{v}) = \epsilon_{ijk}u_{j,i}v_k + \epsilon_{ijk}u_jv_{k,i} = \mathbf{v}\cdot\operatorname{curl}\mathbf{u} - \mathbf{u}\cdot\operatorname{curl}\mathbf{v}.$$
3.31 Given a scalar point function $\phi$ and a vector field $\mathbf{v}$, show that $\operatorname{curl}(\phi\mathbf{v}) = \phi\operatorname{curl}\mathbf{v} + (\operatorname{grad}\phi)\times\mathbf{v}$.
$$\operatorname{curl}(\phi\mathbf{v}) = \epsilon_{ijk}(\phi v_k)_{,j}\,\mathbf{e}_i = \epsilon_{ijk}(\phi_{,j}v_k + \phi v_{k,j})\,\mathbf{e}_i = \epsilon_{ijk}\phi_{,j}v_k\,\mathbf{e}_i + \epsilon_{ijk}\phi\,v_{k,j}\,\mathbf{e}_i = (\operatorname{grad}\phi)\times\mathbf{v} + \phi\operatorname{curl}\mathbf{v}.$$
3.32 Show that $\operatorname{curl}(\mathbf{v}\times) = (\operatorname{div}\mathbf{v})\mathbf{I} - \operatorname{grad}\mathbf{v}$.
$$(\mathbf{v}\times) = \epsilon_{\alpha\beta j}v_\beta\,\mathbf{e}_\alpha\otimes\mathbf{e}_j,\qquad \operatorname{curl}\mathbf{T} = \epsilon_{ijk}T_{\alpha k,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha,$$
so that
$$\operatorname{curl}(\mathbf{v}\times) = \epsilon_{ijk}\,\epsilon_{\alpha\beta k}\,v_{\beta,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha = (\delta_{i\alpha}\delta_{j\beta} - \delta_{i\beta}\delta_{j\alpha})\,v_{\beta,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha = v_{\beta,\beta}\,\delta_{i\alpha}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha - v_{i,\alpha}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha = (\operatorname{div}\mathbf{v})\mathbf{I} - \operatorname{grad}\mathbf{v}.$$
3.33 Show that $\operatorname{curl}(\operatorname{grad}\mathbf{v}) = \mathbf{O}$.

For any tensor $\mathbf{T} = T_{\alpha\beta}\,\mathbf{e}_\alpha\otimes\mathbf{e}_\beta$,
$$\operatorname{curl}\mathbf{T} = \epsilon_{ijk}T_{\alpha k,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha.$$
Let $\mathbf{T} = \operatorname{grad}\mathbf{v}$. Clearly, in this case, $T_{\alpha\beta} = v_{\alpha,\beta}$ so that $T_{\alpha k,j} = v_{\alpha,kj}$. It therefore follows that,
$$\operatorname{curl}(\operatorname{grad}\mathbf{v}) = \epsilon_{ijk}v_{\alpha,kj}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha = \mathbf{O}.$$
The contraction of $v_{\alpha,kj}$, symmetric in $k$ and $j$, with the antisymmetric $\epsilon_{ijk}$ led to this conclusion. Note that this presupposes that the order of differentiation in the vector field is immaterial, which is true only if the vector field is twice continuously differentiable, a proposition we have assumed in the above.
3.34 Show that $\operatorname{curl}(\operatorname{grad}\phi) = \mathbf{o}$.

For any vector field $\mathbf{v} = v_\alpha\mathbf{e}_\alpha$,
$$\operatorname{curl}\mathbf{v} = \epsilon_{ijk}v_{k,j}\,\mathbf{e}_i.$$
Let $\mathbf{v} = \operatorname{grad}\phi$. Clearly, in this case, $v_k = \phi_{,k}$ so that $v_{k,j} = \phi_{,kj}$. It therefore follows that,
$$\operatorname{curl}(\operatorname{grad}\phi) = \epsilon_{ijk}\phi_{,kj}\,\mathbf{e}_i = \mathbf{o}.$$
The contraction of the symmetric $\phi_{,kj}$ with the antisymmetric $\epsilon_{ijk}$ led to this conclusion. Note that this presupposes that the order of differentiation in the scalar field is immaterial, which is true only if the scalar field is twice continuously differentiable, a proposition we have assumed in the above.
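SymPy reaches the same conclusion mechanically: for any smooth sample field, the mixed partials in $\operatorname{curl}(\operatorname{grad}\phi)$ cancel pairwise. A sketch with a field of our own choosing:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, gradient

C = CoordSys3D('C')
phi = C.x**2 * sp.sin(C.y) * sp.exp(C.z)   # any twice-differentiable sample field

w = curl(gradient(phi))
print(w)   # expect the zero vector
```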
3.35 Show that $\operatorname{div}(\operatorname{grad}\phi\times\operatorname{grad}\theta) = 0$.
$$\operatorname{grad}\phi\times\operatorname{grad}\theta = \epsilon_{ijk}\phi_{,j}\theta_{,k}\,\mathbf{e}_i.$$
The gradient of this vector is the tensor,
$$\operatorname{grad}(\operatorname{grad}\phi\times\operatorname{grad}\theta) = (\epsilon_{ijk}\phi_{,j}\theta_{,k})_{,l}\,\mathbf{e}_i\otimes\mathbf{e}_l = \epsilon_{ijk}\phi_{,jl}\theta_{,k}\,\mathbf{e}_i\otimes\mathbf{e}_l + \epsilon_{ijk}\phi_{,j}\theta_{,kl}\,\mathbf{e}_i\otimes\mathbf{e}_l.$$
The trace of the above result is the divergence we are seeking:
$$\operatorname{div}(\operatorname{grad}\phi\times\operatorname{grad}\theta) = \epsilon_{ijk}\phi_{,ji}\theta_{,k} + \epsilon_{ijk}\phi_{,j}\theta_{,ki} = 0,$$
each term vanishing on account of the contraction of a symmetric tensor with an antisymmetric one.
3.36 Show that $\operatorname{curl}\operatorname{curl}\mathbf{v} = \operatorname{grad}(\operatorname{div}\mathbf{v}) - \operatorname{grad}^2\mathbf{v}$.

Let $\mathbf{w} = \operatorname{curl}\mathbf{v} \equiv \epsilon_{ijk}v_{k,j}\,\mathbf{e}_i$. But $\operatorname{curl}\mathbf{w} \equiv \epsilon_{\alpha\beta\gamma}w_{\gamma,\beta}\,\mathbf{e}_\alpha$. Upon inspection, we find that $w_\gamma = \delta_{\gamma i}\,\epsilon_{ijk}v_{k,j}$, so that
$$\operatorname{curl}\mathbf{w} \equiv \epsilon_{\alpha\beta\gamma}\big(\delta_{\gamma i}\,\epsilon_{ijk}v_{k,j}\big)_{,\beta}\,\mathbf{e}_\alpha = \delta_{\gamma i}\,\epsilon_{\alpha\beta\gamma}\,\epsilon_{ijk}\,v_{k,j\beta}\,\mathbf{e}_\alpha.$$
Now, it can be shown that $\delta_{\gamma i}\,\epsilon_{\alpha\beta\gamma}\,\epsilon_{ijk} = \delta_{\alpha j}\delta_{\beta k} - \delta_{\alpha k}\delta_{\beta j}$, so that,
$$\operatorname{curl}\mathbf{w} = (\delta_{\alpha j}\delta_{\beta k} - \delta_{\alpha k}\delta_{\beta j})\,v_{k,j\beta}\,\mathbf{e}_\alpha = v_{\beta,\alpha\beta}\,\mathbf{e}_\alpha - v_{\alpha,\beta\beta}\,\mathbf{e}_\alpha = \operatorname{grad}(\operatorname{div}\mathbf{v}) - \operatorname{grad}^2\mathbf{v}.$$
Also recall that the Laplacian ($\operatorname{grad}^2$) of a scalar field $\phi$ is $\operatorname{grad}^2\phi = g_{ij}\,\phi_{,ij}$. In Cartesian coordinates this becomes,
$$\operatorname{grad}^2\phi = \delta_{ij}\,\phi_{,ij} = \phi_{,ii},$$
as the unit (metric) tensor degenerates to the Kronecker delta in this special case. For a vector field, $\operatorname{grad}^2\mathbf{v} = v_{\alpha,\beta\beta}\,\mathbf{e}_\alpha$.
Also note that while grad is a vector operator, the Laplacian ($\operatorname{grad}^2$) is a scalar operator.
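The double-curl identity can also be verified symbolically. A SymPy sketch with a sample field of our own, building the component-wise Cartesian vector Laplacian by hand:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

C = CoordSys3D('C')
x, y, z = C.x, C.y, C.z
v = x*y*z*C.i + sp.sin(x)*z*C.j + y**2*C.k        # sample field (our choice)

def vector_laplacian(w):
    # component-wise Laplacian in Cartesian coordinates
    comps = [w.dot(e) for e in (C.i, C.j, C.k)]
    lap = [sp.diff(c, x, 2) + sp.diff(c, y, 2) + sp.diff(c, z, 2) for c in comps]
    return lap[0]*C.i + lap[1]*C.j + lap[2]*C.k

lhs = curl(curl(v))
rhs = gradient(divergence(v)) - vector_laplacian(v)
diff = lhs - rhs
print([sp.simplify(diff.dot(e)) for e in (C.i, C.j, C.k)])   # [0, 0, 0]
```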
3.37 For a second-order tensor $\mathbf{T}$, define $\operatorname{curl}\mathbf{T} \equiv \epsilon_{ijk}T_{\alpha k,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha$. Show that for any constant vector $\mathbf{a}$, $(\operatorname{curl}\mathbf{T})\,\mathbf{a} = \operatorname{curl}(\mathbf{T}^{\mathrm T}\mathbf{a})$.

Express the vector $\mathbf{a}$ in the invariant form $\mathbf{a} = a_\beta\mathbf{e}_\beta$. It follows that
$$(\operatorname{curl}\mathbf{T})\,\mathbf{a} = \epsilon_{ijk}T_{\alpha k,j}\,(\mathbf{e}_i\otimes\mathbf{e}_\alpha)\,a_\beta\mathbf{e}_\beta = \epsilon_{ijk}T_{\alpha k,j}\,a_\beta\,\mathbf{e}_i\,\delta_{\alpha\beta} = \epsilon_{ijk}(T_{\alpha k})_{,j}\,a_\alpha\,\mathbf{e}_i = \epsilon_{ijk}(T_{\alpha k}a_\alpha)_{,j}\,\mathbf{e}_i,$$
the last equality resulting from the fact that $\mathbf{a}$ is a constant vector. Since $T_{\alpha k}a_\alpha$ is the $k$-component of $\mathbf{T}^{\mathrm T}\mathbf{a}$, clearly,
$$(\operatorname{curl}\mathbf{T})\,\mathbf{a} = \operatorname{curl}(\mathbf{T}^{\mathrm T}\mathbf{a}).$$
3.38 For any two vectors $\mathbf{u}$ and $\mathbf{v}$, show that $\operatorname{curl}(\mathbf{u}\otimes\mathbf{v}) = \big[(\operatorname{grad}\mathbf{u})(\mathbf{v}\times)\big]^{\mathrm T} + (\operatorname{curl}\mathbf{v})\otimes\mathbf{u}$, where $(\mathbf{v}\times)$ is the skew tensor $\epsilon_{ijk}v_j\,\mathbf{e}_i\otimes\mathbf{e}_k$.

Recall that the curl of a tensor $\mathbf{T}$ is defined by $\operatorname{curl}\mathbf{T} \equiv \epsilon_{ijk}T_{\alpha k,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha$. Clearly, therefore,
$$\operatorname{curl}(\mathbf{u}\otimes\mathbf{v}) = \epsilon_{ijk}(u_\alpha v_k)_{,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha = \epsilon_{ijk}(u_{\alpha,j}v_k + u_\alpha v_{k,j})\,\mathbf{e}_i\otimes\mathbf{e}_\alpha$$
$$= \big(\epsilon_{ijk}v_k\,\mathbf{e}_i\otimes\mathbf{e}_j\big)\big(u_{\alpha,\beta}\,\mathbf{e}_\beta\otimes\mathbf{e}_\alpha\big) + \big(\epsilon_{ijk}v_{k,j}\,\mathbf{e}_i\big)\otimes\big(u_\alpha\mathbf{e}_\alpha\big)$$
$$= -(\mathbf{v}\times)(\operatorname{grad}\mathbf{u})^{\mathrm T} + (\operatorname{curl}\mathbf{v})\otimes\mathbf{u} = \big[(\operatorname{grad}\mathbf{u})(\mathbf{v}\times)\big]^{\mathrm T} + (\operatorname{curl}\mathbf{v})\otimes\mathbf{u},$$
upon noting that the vector cross $(\mathbf{v}\times)$ is a skew tensor.
3.39 For a second-order tensor field $\mathbf{T}$, show that $\operatorname{div}(\operatorname{curl}\mathbf{T}) = \operatorname{curl}(\operatorname{div}\mathbf{T}^{\mathrm T})$.

Define the second-order tensor $\mathbf{S}$ by
$$\mathbf{S} = \operatorname{curl}\mathbf{T} \equiv \epsilon_{ijk}T_{\alpha k,j}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha = S_{i\alpha}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha.$$
The gradient of $\mathbf{S}$ is $S_{i\alpha,\beta}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha\otimes\mathbf{e}_\beta = \epsilon_{ijk}T_{\alpha k,j\beta}\,\mathbf{e}_i\otimes\mathbf{e}_\alpha\otimes\mathbf{e}_\beta$. Clearly,
$$\operatorname{div}(\operatorname{curl}\mathbf{T}) = \epsilon_{ijk}T_{\alpha k,j\beta}\,(\mathbf{e}_\alpha\cdot\mathbf{e}_\beta)\,\mathbf{e}_i = \epsilon_{ijk}T_{\alpha k,j\alpha}\,\mathbf{e}_i = \operatorname{curl}(\operatorname{div}\mathbf{T}^{\mathrm T}),$$
since $T_{\alpha k,\alpha}$ is the $k$-component of $\operatorname{div}\mathbf{T}^{\mathrm T}$.
3.40 Show that ((curl ๐ฎ) ร) = grad ๐ฎ โ gradT๐ฎ
((curl ๐ฎ) ร) = ๐๐๐๐๐ข๐,๐ ๐๐ ร
= ๐๐ผ๐๐ฝ๐๐๐๐๐ข๐ ,๐ ๐๐ผโ๐๐ฝ
= (๐ฟ๐ฝ๐๐ฟ๐ผ๐ โ ๐ฟ๐ฝ๐๐ฟ๐ผ๐)๐ข๐,๐ ๐๐ผโ๐๐ฝ
= ๐ข๐ ,๐ ๐๐โ๐๐โ๐ข๐,๐ ๐๐โ๐๐
= grad ๐ฎ โ gradT๐ฎ
3.41 Show that curl (๐ฎ ร ๐ฏ) = div(๐ฎโ ๐ฏ โ ๐ฏโ ๐ฎ)
The vector ๐ฐ โก ๐ฎ ร ๐ฏ = ๐ค๐๐๐ = ๐๐๐ผ๐ฝ๐ข๐ผ๐ฃ๐ฝ๐๐ and curl ๐ฐ = ๐๐๐๐๐ค๐ ,๐ ๐๐. Therefore,
curl (๐ฎ ร ๐ฏ) = ๐๐๐๐๐ค๐,๐ ๐๐
= ๐๐๐๐๐๐๐ผ๐ฝ(๐ข๐ผ๐ฃ๐ฝ),๐ ๐๐
= (๐ฟ๐๐ผ๐ฟ๐๐ฝ โ ๐ฟ๐๐ผ๐ฟ๐๐ฝ)(๐ข๐ผ๐ฃ๐ฝ),๐ ๐๐
= (๐ฟ๐๐ผ๐ฟ๐๐ฝ โ ๐ฟ๐๐ผ๐ฟ๐๐ฝ)(๐ข๐ผ,๐ ๐ฃ
๐ฝ + ๐ข๐ผ๐ฃ๐ฝ ,๐ )๐๐
= [๐ข๐,๐ ๐ฃ๐ + ๐ข๐๐ฃ๐ ,๐โ (๐ข๐ ,๐ ๐ฃ๐ + ๐ข๐๐ฃ๐ ,๐ )]๐๐
= [(๐ข๐๐ฃ๐),๐โ (๐ข๐๐ฃ๐),๐ ]๐๐
= div(๐ฎโ ๐ฏ โ ๐ฏโ ๐ฎ)
since div(๐ฎโ ๐ฏ) = (๐ข๐๐ฃ๐),๐ผ (๐๐โ๐๐)๐๐ผ = (๐ข๐๐ฃ๐),๐ ๐๐.
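This identity, too, can be confirmed symbolically. The sketch below (Python with sympy; the fields 𝐮 and 𝐯 are arbitrary illustrative choices) checks curl(𝐮 × 𝐯) = div(𝐮⊗𝐯 − 𝐯⊗𝐮) component by component in Cartesian coordinates:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
# arbitrary test fields (hypothetical choices, not from the text)
u = [y*z, x**2, sp.cos(y)]
v = [x*y, z, x + y]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def curl(w):
    return [sp.diff(w[2], y) - sp.diff(w[1], z),
            sp.diff(w[0], z) - sp.diff(w[2], x),
            sp.diff(w[1], x) - sp.diff(w[0], y)]

lhs = curl(cross(u, v))
# (div(u⊗v − v⊗u))_i = d_j (u_i v_j − v_i u_j)
rhs = [sum(sp.diff(u[i]*v[j] - v[i]*u[j], X[j]) for j in range(3))
       for i in range(3)]
assert all(sp.simplify(a - b) == 0 for a, b in zip(lhs, rhs))
```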
3.43 Given a scalar point function ๐ and a second-order tensor field ๐, show that
curl (๐๐) = ๐ curl ๐ + ((grad๐) ร)๐T where [(grad๐) ร] is the skew
tensor ๐๐๐๐๐,๐ ๐๐โ๐๐
curl (๐๐) โก ๐๐๐๐(๐๐๐ผ๐),๐ ๐๐โ๐๐ผ
= ๐๐๐๐(๐,๐ ๐๐ผ๐ + ๐๐๐ผ๐ ,๐ ) ๐๐โ๐๐ผ
= ๐๐๐๐๐,๐ ๐๐ผ๐ ๐๐โ๐๐ผ + ๐๐๐๐๐๐๐ผ๐,๐ ๐๐โ๐๐ผ
= (๐๐๐๐๐,๐ ๐๐โ๐๐) (๐๐ผ๐ฝ๐๐ฝโ๐๐ผ) + ๐๐๐๐๐๐๐ผ๐ ,๐ ๐๐โ๐๐ผ
= ๐ curl ๐ + ((grad๐) ร)๐T
3.44 Given a tensor field ๐, obtain the vector ๐ฐ โก ๐T๐ฏ and show that its
divergence is ๐: (grad๐ฏ) + ๐ฏ โ div ๐
The gradient of ๐ฐ is the tensor, (๐๐๐๐ฃ๐),๐ ๐๐โ๐๐. Therefore, divergence of ๐ (the trace
of the gradient) is the scalar sum, ๐๐๐๐ฃ๐ ,๐ ๐ฟ๐๐ + ๐๐๐ ,๐ ๐ฃ๐๐ฟ๐๐. Expanding, we obtain,
div (๐T๐ฏ) = ๐๐๐๐ฃ๐ ,๐ ๐ฟ๐๐ + ๐๐๐,๐ ๐ฃ๐๐ฟ๐๐
= ๐๐๐ ,๐ ๐ฃ๐ + ๐๐๐๐ฃ๐ ,๐
= (div ๐) โ ๐ฏ + tr(๐Tgrad ๐ฏ)
= (div ๐) โ ๐ฏ + ๐: (grad ๐ฏ)
Recall that scalar product of two vectors is commutative so that
div (๐T๐ฏ) = ๐: (grad ๐ฏ) + ๐ฏ โ div ๐
3.53 Show that if ๐ is a scalar field in the Euclidean space spanned by orthogonal
coordinates ๐ฅ๐, with unit basis vectors, ๐๐, then (a) for the position vector ๐ซ,
div grad(๐ซ๐) = 2 grad ๐ + ๐ซ div grad ๐ .(b) Will this expression be valid in
the spherical coordinate system?
(๐) In such a coordinate system, the radius vector will be given by,
๐ซ = ๐ฅ๐๐๐
where ๐๐ is the unit vector along coordinate ๐ฅ๐ .
grad(๐ซ๐) = (๐ฅ๐๐),๐ ๐๐โ๐๐
grad(grad(๐ซ๐)) = ((๐ฅ๐๐),๐ ),๐ ๐๐โ๐๐โ๐๐
div(grad(๐ซ๐)) = tr (grad(grad(๐ซ๐))) = ((๐ฅ๐๐),๐ ),๐ ๐๐๐ฟ๐๐
= (๐ฟ๐๐๐ + ๐ฅ๐๐,๐ ),๐ ๐๐
= (๐ฟ๐๐๐,๐+ ๐ฅ๐ ,๐ ๐,๐+ ๐ฅ๐๐,๐๐ )๐๐
div grad(๐ซ๐) = 2 grad ๐ + ๐ซ div grad ๐
(๐) The position vector in spherical coordinates are non-linear in the coordinate variables. This
form of the expression will not be correct. It works only for Cartesian systems.
3.54 In Cartesian coordinates let 𝑥 denote the magnitude of the position vector 𝐫 = x_i 𝐞i. Show that (a) x,i = x_i/x, (b) x,ij = δij/x − x_i x_j/x³, (c) x,ii = 2/x, (d) if 𝜙 = 1/x, then 𝜙,ij = −δij/x³ + 3x_i x_j/x⁵, 𝜙,ii = 0, and div(𝐫/x) = 2/x.
(๐) ๐ฅ = โ๐ฅ๐๐ฅ๐
๐ฅ,๐ =๐โ๐ฅ๐๐ฅ๐
๐๐ฅ๐=๐โ๐ฅ๐๐ฅ๐
๐(๐ฅ๐๐ฅ๐)ร๐(๐ฅ๐๐ฅ๐)
๐๐ฅ๐=
1
2โ๐ฅ๐๐ฅ๐[๐ฅ๐๐ฟ๐๐ + ๐ฅ๐๐ฟ๐๐] =
๐ฅ๐
๐ฅ.
(๐) ๐ฅ,๐๐ =
๐
๐๐ฅ๐(๐โ๐ฅ๐๐ฅ๐
๐๐ฅ๐) =
๐
๐๐ฅ๐(๐ฅ๐๐ฅ) =
๐ฅ๐๐ฅ๐๐๐ฅ๐
โ ๐ฅ๐๐๐ฅ๐๐ฅ๐
(๐ฅ)2=๐ฅ๐ฟ๐๐ โ
๐ฅ๐๐ฅ๐๐ฅ
(๐ฅ)2=1
๐ฅ๐ฟ๐๐ โ
๐ฅ๐๐ฅ๐(๐ฅ)3
(๐) ๐ฅ,๐๐ =1
๐ฅ๐ฟ๐๐ โ
๐ฅ๐๐ฅ๐(๐ฅ)3
=3
๐ฅโ(๐ฅ)2
(๐ฅ)3=2
๐ฅ.
(๐ ) ๐ =1
๐ฅ so that
๐,๐ =๐1๐ฅ
๐๐ฅ๐=๐1๐ฅ๐๐ฅร๐๐ฅ
๐๐ฅ๐= โ
1
๐ฅ21
๐ฅ๐ฅ๐ = โ
๐ฅ๐
๐ฅ3
Consequently,
๐,๐๐ =๐
๐๐ฅ๐(๐,๐ ) = โ
๐
๐๐ฅ๐(๐ฅ๐๐ฅ3) =
๐ฅ3 (๐๐๐ฅ๐
(โ๐ฅ2)) + ๐ฅ๐๐๐๐ฅ๐
(๐ฅ3)
๐ฅ6
=
๐ฅ3(โ๐ฟ๐๐) + ๐ฅ๐ (๐(๐ฅ3)๐๐ฅ
๐๐ฅ๐๐ฅ๐)
๐ฅ6=โ๐ฅ3๐ฟ๐๐ + ๐ฅ๐ (3๐ฅ
2 ๐ฅ๐๐ฅ )
๐ฅ6=โ๐ฟ๐๐
๐ฅ3+3๐ฅ๐๐ฅ๐
๐ฅ5
๐,๐๐ =โ๐ฟ๐๐๐ฅ3
+3๐ฅ๐๐ฅ๐๐ฅ5
=โ3
๐ฅ3+3๐ฅ2
๐ฅ5= 0.
๐๐๐ฃ (๐
๐ฅ) = (
๐ฅ๐
๐ฅ) ,๐ =
1
๐ฅ๐ฅ๐ ,๐+ (
1
๐ฅ),๐=3
๐ฅ+ ๐ฅ๐ (
๐
๐๐ฅ(1
๐ฅ) ๐๐ฅ
๐๐ฅ๐)
=3
๐ฅ+ ๐ฅ๐ [โ (
1
๐ฅ2) ๐ฅ๐
๐ฅ] =
3
๐ฅโ๐ฅ๐๐ฅ๐
๐ฅ3=3
๐ฅโ1
๐ฅ=2
๐ฅ
3.55 Show that curl (𝐮 × 𝐯) = (grad 𝐮)𝐯 − (div 𝐮)𝐯 − (grad 𝐯)𝐮 + (div 𝐯)𝐮
๐ฎ ร ๐ฏ = ๐๐๐๐๐ข๐๐ฃ๐๐๐
curl ๐ฐ = ๐๐ผ๐ฝ๐๐ค๐ ,๐ฝ ๐๐ผ
Clearly,
curl ๐ฎ ร ๐ฏ = ๐๐ผ๐ฝ๐(๐๐๐๐๐ข๐๐ฃ๐),๐ฝ ๐๐ผ
= ๐๐ผ๐ฝ๐๐๐๐๐๐ข๐,๐ฝ๐ฃ๐๐๐ผ + ๐๐ผ๐ฝ๐๐๐๐๐๐ข๐๐ฃ๐,๐ฝ ๐๐ผ
= (๐ฟ๐ผ๐๐ฟ๐ฝ๐ โ ๐ฟ๐ผ๐๐ฟ๐ฝ๐)๐ข๐,๐ฝ๐ฃ๐๐๐ผ+ (๐ฟ๐ผ๐๐ฟ๐ฝ๐ โ ๐ฟ๐ผ๐๐ฟ๐ฝ๐)๐ข๐๐ฃ๐,๐ฝ ๐๐ผ
= ๐ข๐ ,๐ ๐ฃ๐๐๐ โ ๐ข๐ ,๐ ๐ฃ๐๐๐ + ๐ข๐๐ฃ๐,๐ ๐๐ โ ๐ข๐๐ฃ๐,๐ ๐๐
= (grad๐ฎ)๐ฏ โ (div๐ฎ)๐ฏ โ (grad ๐ฏ)๐ฎ + (div ๐ฏ)๐ฎ
3.56 For a scalar function 𝜙 and a vector 𝐯, show that the divergence of the vector 𝐯𝜙 is equal to 𝐯 ⋅ grad 𝜙 + 𝜙 div 𝐯
grad (๐ฏ๐) = (๐ฃ๐๐),๐ ๐๐โ๐๐ = (๐๐ฃ๐,๐+ ๐ฃ๐๐,๐ )๐๐โ๐๐
Taking the trace of this equation,
div ๐ฏ๐ = tr(grad (๐ฏ๐)) = (๐๐ฃ๐,๐+ ๐ฃ๐๐,๐ )๐๐ โ ๐๐
= (๐๐ฃ๐ ,๐+ ๐ฃ๐๐,๐ )๐ฟ๐๐
= ๐๐ฃ๐ ,๐+ ๐ฃ๐๐,๐
= ๐ฏ โ grad ๐+๐ div ๐ฏ
Hence the result.
3.57 When ๐ is symmetric, show that tr(curl ๐) vanishes.
curl ๐ = ๐๐๐๐๐๐ฝ๐,๐ ๐๐โ๐๐ฝ
tr(curl ๐) = ๐๐๐๐๐๐ฝ๐,๐ ๐๐ โ ๐๐ฝ
= ๐๐๐๐๐๐ฝ๐,๐ ๐ฟ๐๐ฝ = ๐๐๐๐๐๐๐,๐
which obviously vanishes on account of the symmetry of 𝑻 and the antisymmetry of 𝜖 in the indices 𝑖 and 𝑗.
3.58 For a general tensor field 𝑻 show that,
curl(curl 𝑻) = [grad²(tr 𝑻) − div(div 𝑻)]𝟏 + grad(div 𝑻) + (grad(div 𝑻))ᵀ − grad(grad(tr 𝑻)) − grad²𝑻ᵀ
curl ๐ = ๐๐๐ ๐ก๐๐ฝ๐ก,๐ ๐๐ผโ๐๐ฝ
= ๐๐ผ๐ฝ๐๐ผโ๐๐ฝ
curl ๐ = ๐๐๐๐๐๐ผ๐,๐ ๐๐โ๐๐ผ
so that
curl ๐ = curl(curl ๐) = ๐๐๐๐๐๐๐ ๐ก๐๐๐ก,๐ ๐ ๐๐โ๐๐ผ
= |
๐ฟ๐๐ผ ๐ฟ๐๐ ๐ฟ๐๐ก๐ฟ๐๐ผ ๐ฟ๐๐ ๐ฟ๐๐ก๐ฟ๐๐ผ ๐ฟ๐๐ ๐ฟ๐๐ก
| ๐๐๐ก,๐ ๐ ๐๐โ๐๐ผ
= [๐ฟ๐๐ผ(๐ฟ๐๐ ๐ฟ๐๐ก โ ๐ฟ๐๐ก๐ฟ๐๐ ) + ๐ฟ๐๐ (๐ฟ๐๐ก๐ฟ๐๐ผ โ ๐ฟ๐๐ผ๐ฟ๐๐ก)
+๐ฟ๐๐ก(๐ฟ๐๐ผ๐ฟ๐๐ โ ๐ฟ๐๐ ๐ฟ๐๐ผ)] ๐๐๐ก,๐ ๐ ๐๐โ๐๐ผ
= [๐ฟ๐๐ ๐๐ก๐ก ,๐ ๐โ ๐๐ ๐ ,๐ ๐ ](๐๐ผโ๐๐ผ) + [๐๐ผ๐,๐ ๐โ ๐ฟ๐๐ผ๐๐ก๐ก,๐ ๐ ](๐๐ โ๐๐ผ)
+ [๐ฟ๐๐ผ๐๐ ๐ก ,๐ ๐โ ๐ฟ๐๐ ๐๐ผ๐ก,๐ ๐ ](๐๐กโ๐๐ผ)
= [grad2(tr ๐) โ div(div ๐ป)]๐ + (grad(div ๐))Tโ grad(grad (tr ๐)) + (grad(div ๐))
โ grad2๐T
3.59 For any Euclidean coordinate system, show that div (𝐮 × 𝐯) = 𝐯 ⋅ curl 𝐮 − 𝐮 ⋅ curl 𝐯
๐ฎ ร ๐ฏ = ๐๐๐๐๐ข๐๐ฃ๐๐๐
grad (๐ฎ ร ๐ฏ) = (๐๐๐๐๐ข๐๐ฃ๐),๐ ๐๐โ๐๐
div (๐ฎ ร ๐ฏ) = tr(grad (๐ฎ ร ๐ฏ)) = (๐๐๐๐๐ข๐๐ฃ๐),๐ ๐๐ โ ๐๐
= (๐๐๐๐๐ข๐ ,๐ ๐ฃ๐ + ๐๐๐๐๐ข๐๐ฃ๐,๐ )๐ฟ๐๐
= ๐๐๐๐๐ข๐ ,๐ ๐ฃ๐ + ๐๐๐๐๐ข๐๐ฃ๐,๐
= ๐ฏ โ curl ๐ฎ โ ๐ฎ โ curl ๐ฏ
3.60 In Cartesian coordinates, if the volume 𝑉 is enclosed by the surface 𝑆, the position vector 𝐫 = x_i 𝐞i and 𝐧 is the external unit normal to each surface element, show that (1/6)∫_S grad(𝐫 ⋅ 𝐫) ⋅ 𝐧 d𝑠 equals the volume contained in 𝑉.
𝐫 ⋅ 𝐫 = x_i x_j 𝐞i ⋅ 𝐞j = x_i x_j δij = x_j x_j
By the Divergence Theorem,
∫_S grad(𝐫 ⋅ 𝐫) ⋅ 𝐧 d𝑠 = ∫_V div[grad(𝐫 ⋅ 𝐫)] d𝑣
= ∫_V ∂i[∂i(x_j x_j)] d𝑣
= ∫_V ∂i(x_j,i x_j + x_j x_j,i) d𝑣
= ∫_V ∂i(2 x_j δij) d𝑣 = ∫_V 2 x_i,i d𝑣
= ∫_V 2 δii d𝑣 = 6 ∫_V d𝑣 = 6𝑉
so that (1/6)∫_S grad(𝐫 ⋅ 𝐫) ⋅ 𝐧 d𝑠 = 𝑉, as required.
Fields in Curvilinear Coordinates
Given a vector point function 𝐮(x¹, x², x³) and its covariant components u_α(x¹, x², x³), α = 1, 2, 3, with 𝐠^α as the reciprocal basis vectors, let
𝐮 = u_α 𝐠^α, then d𝐮 = ∂(u_α 𝐠^α)/∂x^i dx^i = (∂u_α/∂x^i 𝐠^α + ∂𝐠^α/∂x^i u_α) dx^i
Clearly,
∂𝐮/∂x^i = ∂u_α/∂x^i 𝐠^α + ∂𝐠^α/∂x^i u_α
And the projection of this quantity on the 𝐠_j direction is,
∂𝐮/∂x^i ⋅ 𝐠_j = (∂u_α/∂x^i 𝐠^α + ∂𝐠^α/∂x^i u_α) ⋅ 𝐠_j
= ∂u_α/∂x^i δ^α_j + (∂𝐠^α/∂x^i ⋅ 𝐠_j) u_α
Now 𝐠^α ⋅ 𝐠_j = δ^α_j, so that
∂𝐠^α/∂x^i ⋅ 𝐠_j + 𝐠^α ⋅ ∂𝐠_j/∂x^i = 0,
that is,
∂𝐠^α/∂x^i ⋅ 𝐠_j = −𝐠^α ⋅ ∂𝐠_j/∂x^i ≡ −Γ^α_ji
This important quantity, necessary to quantify the derivative of a tensor in general coordinates,
is called the Christoffel Symbol of the second kind.
Using this, we can now write,
∂𝐮/∂x^i ⋅ 𝐠_j = ∂u_j/∂x^i − Γ^α_ji u_α
The quantity on the RHS is the component, along the 𝐠_j direction, of the derivative of the vector 𝐮 in terms of its covariant components. It is the covariant derivative of 𝐮. Using contravariant components, we could write,
d𝐮 = (∂u^j/∂x^i 𝐠_j + ∂𝐠_j/∂x^i u^j) dx^i = (∂u^j/∂x^i 𝐠_j + Γ^α_ji 𝐠_α u^j) dx^i
so that,
∂𝐮/∂x^i = ∂u^α/∂x^i 𝐠_α + ∂𝐠_α/∂x^i u^α
The components of this in the direction of 𝐠^j can be obtained by taking a dot product as before:
∂𝐮/∂x^i ⋅ 𝐠^j = (∂u^α/∂x^i 𝐠_α + ∂𝐠_α/∂x^i u^α) ⋅ 𝐠^j
= ∂u^α/∂x^i δ^j_α + (∂𝐠_α/∂x^i ⋅ 𝐠^j) u^α
= ∂u^j/∂x^i + Γ^j_αi u^α
The two results above are represented symbolically as,
u_j,i = ∂u_j/∂x^i − Γ^α_ji u_α
for covariant components, and
u^j_,i = ∂u^j/∂x^i + Γ^j_αi u^α
for contravariant vector components. These are the components of the covariant derivative of 𝐮 in terms of its covariant and contravariant components respectively.
It is now useful to establish that our definition of the Christoffel symbols here conforms to the definition found in books that use transformation rules to define tensor quantities.
We observe that the derivative of the covariant basis, 𝐠_i = ∂𝐫/∂x^i, satisfies
∂𝐠_i/∂x^j = ∂²𝐫/∂x^j∂x^i = ∂²𝐫/∂x^i∂x^j = ∂𝐠_j/∂x^i
Taking the dot product with 𝐠_k,
∂𝐠_i/∂x^j ⋅ 𝐠_k = ½(∂𝐠_i/∂x^j ⋅ 𝐠_k + ∂𝐠_j/∂x^i ⋅ 𝐠_k)
= ½(∂[𝐠_i ⋅ 𝐠_k]/∂x^j + ∂[𝐠_j ⋅ 𝐠_k]/∂x^i − 𝐠_i ⋅ ∂𝐠_k/∂x^j − 𝐠_j ⋅ ∂𝐠_k/∂x^i)
= ½(∂[𝐠_i ⋅ 𝐠_k]/∂x^j + ∂[𝐠_j ⋅ 𝐠_k]/∂x^i − 𝐠_i ⋅ ∂𝐠_j/∂x^k − 𝐠_j ⋅ ∂𝐠_i/∂x^k)
= ½(∂[𝐠_i ⋅ 𝐠_k]/∂x^j + ∂[𝐠_j ⋅ 𝐠_k]/∂x^i − ∂[𝐠_i ⋅ 𝐠_j]/∂x^k)
= ½(∂g_ik/∂x^j + ∂g_jk/∂x^i − ∂g_ij/∂x^k)
which is the quantity defined as the Christoffel symbol of the first kind in the textbooks. It is therefore possible for us to write,
[ij, k] ≡ ∂𝐠_i/∂x^j ⋅ 𝐠_k = ∂𝐠_j/∂x^i ⋅ 𝐠_k = ½(∂g_ik/∂x^j + ∂g_jk/∂x^i − ∂g_ij/∂x^k)
It should be emphasized that the Christoffel symbols, even though they play a critical role in several tensor relationships, are themselves NOT tensor quantities. (Prove this.) However, notice their symmetry in 𝑖 and 𝑗. The extension of this definition to the Christoffel symbols of the second kind is immediate:
Contracting the above equation with the conjugate metric tensor, we have,
g^kα [ij, α] ≡ g^kα ∂𝐠_i/∂x^j ⋅ 𝐠_α = ∂𝐠_i/∂x^j ⋅ 𝐠^k = Γ^k_ij = −∂𝐠^k/∂x^j ⋅ 𝐠_i
which connects the common definition of the Christoffel symbol of the second kind with the one defined in the derivation above. The relationship,
g^kα [ij, α] = Γ^k_ij
apart from defining the relationship between the Christoffel symbols of the first kind and the second kind, also highlights, once more, the index-raising property of the conjugate metric tensor.
We contract the above equation with g_kβ and obtain,
g_kβ g^kα [ij, α] = δ^α_β [ij, α] = [ij, β] = g_kβ Γ^k_ij
so that,
g_kα Γ^α_ij = [ij, k]
showing that the metric tensor can be used to lower the contravariant index of the Christoffel symbol of the second kind to obtain the Christoffel symbol of the first kind.
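These formulas are well suited to Computer Algebra. The sketch below (Python with sympy) computes Christoffel symbols of the first and second kinds directly from a metric; the spherical-polar metric diag(1, r², r² sin²θ) is an assumed illustrative choice:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
q = (r, th, sp.Symbol('phi'))
# spherical-polar metric (an assumed illustrative choice)
g = sp.diag(1, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

def first_kind(i, j, k):
    # [ij, k] = (g_ik,j + g_jk,i - g_ij,k)/2
    return (sp.diff(g[i, k], q[j]) + sp.diff(g[j, k], q[i])
            - sp.diff(g[i, j], q[k]))/2

def second_kind(k, i, j):
    # Gamma^k_ij = g^{k a} [ij, a]  (index raising by the conjugate metric)
    return sp.simplify(sum(ginv[k, a]*first_kind(i, j, a) for a in range(3)))

assert second_kind(0, 1, 1) == -r                 # Gamma^r_(theta theta) = -r
assert second_kind(1, 0, 1) == 1/r                # Gamma^theta_(r theta) = 1/r
assert second_kind(0, 2, 2) == -r*sp.sin(th)**2   # Gamma^r_(phi phi)
```

Lowering the upper index of `second_kind` with `g` recovers `first_kind`, mirroring the index-lowering relation above.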
We are now in a position to express the derivatives of higher order tensor fields in terms of the
Christoffel symbols.
Higher Order Tensors
For a second-order tensor ๐, we can express the components in dyadic form along the product
basis as follows:
๐ = ๐๐๐๐ ๐โ๐ ๐
= ๐๐๐๐ ๐โจ๐ ๐
= ๐.๐๐๐ ๐โ๐ ๐
= ๐๐. ๐๐ ๐โจ๐ ๐
This is perfectly analogous to our expanding vectors in terms of basis and reciprocal bases.
Derivatives of the tensor may therefore be expressible in any of these product bases. As an
example, take the product covariant bases.
We have:
∂𝑻/∂x^k = ∂T^ij/∂x^k 𝐠_i⊗𝐠_j + T^ij ∂𝐠_i/∂x^k⊗𝐠_j + T^ij 𝐠_i⊗∂𝐠_j/∂x^k
Recall that ∂𝐠_j/∂x^k ⋅ 𝐠^i = Γ^i_jk. It follows therefore that,
∂𝐠_j/∂x^k ⋅ 𝐠^i − Γ^α_jk δ^i_α = ∂𝐠_j/∂x^k ⋅ 𝐠^i − Γ^α_jk 𝐠_α ⋅ 𝐠^i = (∂𝐠_j/∂x^k − Γ^α_jk 𝐠_α) ⋅ 𝐠^i = 0.
Clearly, ∂𝐠_j/∂x^k = Γ^α_jk 𝐠_α
(obviously, since 𝐠^i is a basis vector it cannot vanish). Hence,
∂𝑻/∂x^k = ∂T^ij/∂x^k 𝐠_i⊗𝐠_j + T^ij (Γ^α_ik 𝐠_α)⊗𝐠_j + T^ij 𝐠_i⊗(Γ^α_jk 𝐠_α)
= ∂T^ij/∂x^k 𝐠_i⊗𝐠_j + T^αj (Γ^i_αk 𝐠_i)⊗𝐠_j + T^iα 𝐠_i⊗(Γ^j_αk 𝐠_j)
= (∂T^ij/∂x^k + T^αj Γ^i_αk + T^iα Γ^j_αk) 𝐠_i⊗𝐠_j = T^ij_,k 𝐠_i⊗𝐠_j
where
T^ij_,k = ∂T^ij/∂x^k + T^αj Γ^i_αk + T^iα Γ^j_αk
are the components of the covariant derivative of the tensor 𝑻 in terms of contravariant components on the product covariant bases as shown.
In the same way, by taking the tensor expression in the dyadic form of its contravariant product bases, we can write,
∂𝑻/∂x^k = ∂T_ij/∂x^k 𝐠^i⊗𝐠^j + T_ij ∂𝐠^i/∂x^k⊗𝐠^j + T_ij 𝐠^i⊗∂𝐠^j/∂x^k
Again, notice from the previous derivation that Γ^i_jk = −∂𝐠^i/∂x^k ⋅ 𝐠_j, so that ∂𝐠^i/∂x^k = −Γ^i_αk 𝐠^α. (The literature has two representations, the braced symbol and Γ^i_jk, for the Christoffel symbols.) Therefore,
∂𝑻/∂x^k = ∂T_ij/∂x^k 𝐠^i⊗𝐠^j − T_ij Γ^i_αk 𝐠^α⊗𝐠^j − T_ij 𝐠^i⊗Γ^j_αk 𝐠^α
= (∂T_ij/∂x^k − T_αj Γ^α_ik − T_iα Γ^α_jk) 𝐠^i⊗𝐠^j = T_ij,k 𝐠^i⊗𝐠^j
so that
T_ij,k = ∂T_ij/∂x^k − T_αj Γ^α_ik − T_iα Γ^α_jk
Two other expressions can be found for the covariant derivative components in terms of the
mixed tensor components using the mixed product bases defined above. It is a good exercise to
derive these.
The formulas for covariant differentiation of higher-order tensors follow the same logic as the above definitions. Each covariant index produces an additional term of the covariant form above, with a dummy index supplanting the appropriate covariant index. In the same way, each contravariant index produces an additional term of the contravariant form above, with a dummy index supplanting the appropriate contravariant index.
The covariant derivative of the mixed tensor A^{i1 i2 … im}_{j1 j2 … jn} is the most general case for the covariant derivative of an absolute tensor:
A^{i1 … im}_{j1 … jn, k} = ∂A^{i1 … im}_{j1 … jn}/∂x^k
− Γ^α_{j1 k} A^{i1 … im}_{α j2 … jn} − Γ^α_{j2 k} A^{i1 … im}_{j1 α … jn} − ⋯ − Γ^α_{jn k} A^{i1 … im}_{j1 j2 … α}
+ Γ^{i1}_{β k} A^{β i2 … im}_{j1 … jn} + Γ^{i2}_{β k} A^{i1 β … im}_{j1 … jn} + ⋯ + Γ^{im}_{β k} A^{i1 … β}_{j1 … jn}
Physical Components in Orthogonal Systems
The components of a vector or tensor in a Cartesian system are projections of the vector on
directions that have no dimensions and a value of unity.
These components therefore have the same units as the vectors themselves.
It is natural therefore to expect that the components of a tensor have the same dimensions.
In general, this is not so. In curvilinear coordinates, components of tensors do not necessarily have a direct physical meaning. This comes from the fact that the base vectors are not guaranteed to have unit magnitudes (‖𝐠ᵢ‖ ≠ 1 in general).
They may not be dimensionless. For example, in orthogonal spherical polar coordinates, the base vectors are 𝐠₁, 𝐠₂ and 𝐠₃. These can be expressed in terms of dimensionless unit vectors as 𝑟𝐞_θ, 𝑟 sin𝜃 𝐞_φ, and 𝐞_r, since the magnitudes of the basis vectors are 𝑟, 𝑟 sin𝜃, and 1, that is, (√g₁₁, √g₂₂, √g₃₃) respectively.
As an example, consider a force with the contravariant components F¹, F² and F³:
𝐅 = F¹𝐠₁ + F²𝐠₂ + F³𝐠₃ = 𝑟F¹𝐞_θ + 𝑟 sin𝜃 F²𝐞_φ + F³𝐞_r
which may also be expressed in terms of physical components,
𝐅 = F_θ𝐞_θ + F_φ𝐞_φ + F_r𝐞_r.
While these physical components {F_θ, F_φ, F_r} have the dimensions of force, for the expansion in terms of the unit vectors to be consistent, {𝑟F¹, 𝑟 sin𝜃 F², F³} must each be in the units of force. Hence F¹, F² and F³ may not themselves be in force units. The consistency requirement implies,
F_θ = 𝑟F¹, F_φ = 𝑟 sin𝜃 F², and F_r = F³
For the same reasons, if we had used covariant components, we would find the relationships,
F_θ = F₁/𝑟, F_φ = F₂/(𝑟 sin𝜃), and F_r = F₃
since the magnitudes of the reciprocal base vectors are 1/𝑟, 1/(𝑟 sin𝜃), and 1. While the physical components have the dimensions of force, F₁ = 𝑟F_θ and F₂ = 𝑟 sin𝜃 F_φ have the dimensions of moment, while F¹ = F_θ/𝑟 and F² = F_φ/(𝑟 sin𝜃) are in dimensions of force per unit length. Only the third components in both cases are given in the dimensions of force.
For a second-order tensor in the same spherical system (h₁ = 𝑟, h₂ = 𝑟 sin𝜃, h₃ = 1), the physical components are related to the contravariant, covariant and mixed components as follows:

Physical Component | Contravariant | Covariant | Mixed
T(θθ) | T^11 h₁h₁ = T^11 𝑟² | T_11/(h₁h₁) = T_11/𝑟² | T^1_1 h₁/h₁ = T^1_1
T(θφ) | T^12 h₁h₂ = T^12 𝑟² sin𝜃 | T_12/(h₁h₂) = T_12/(𝑟² sin𝜃) | T^2_1 h₂/h₁ = T^2_1 sin𝜃
T(φφ) | T^22 h₂h₂ = T^22 𝑟² sin²𝜃 | T_22/(h₂h₂) = T_22/(𝑟² sin²𝜃) | T^2_2 h₂/h₂ = T^2_2
T(φr) | T^23 h₂h₃ = T^23 𝑟 sin𝜃 | T_23/(h₂h₃) = T_23/(𝑟 sin𝜃) | T^3_2 h₃/h₂ = T^3_2/(𝑟 sin𝜃)
T(rr) | T^33 h₃h₃ = T^33 | T_33/(h₃h₃) = T_33 | T^3_3 h₃/h₃ = T^3_3
T(rθ) | T^31 h₃h₁ = T^31 𝑟 | T_31/(h₃h₁) = T_31/𝑟 | T^1_3 h₁/h₃ = 𝑟T^1_3
The transformation equations from Cartesian to the oblate spheroidal coordinates 𝜉, 𝜂 and 𝜙 are: 𝑥 = 𝑓𝜉𝜂 sin𝜙, 𝑦 = 𝑓√((𝜉² − 1)(1 − 𝜂²)), and 𝑧 = 𝑓𝜉𝜂 cos𝜙, where 𝑓 is a constant representing half the distance between the foci of a family of confocal ellipses. Find the components of the metric tensor in this system. The metric tensor components are:
๐๐๐ = (๐๐ฅ
๐๐)2
+ (๐๐ฆ
๐๐)2
+ (๐๐ง
๐๐)2
= (๐๐ sin๐)2 + ๐2๐2 (1 โ ๐2
๐2 โ 1) + (๐๐ cos๐)2 = ๐2 (
๐2 โ ๐2
๐2 โ 1)
๐๐๐ = (๐๐ฅ
๐๐)2
+ (๐๐ฆ
๐๐)2
+ (๐๐ง
๐๐)2
= ๐2 (๐2 โ ๐2
1 โ ๐2)
๐๐๐ = (๐๐ฅ
๐๐)2
+ (๐๐ฆ
๐๐)2
+ (๐๐ง
๐๐)2
= (๐๐๐)2
๐๐๐ = (๐๐ฅ
๐๐) (๐๐ฅ
๐๐) + (
๐๐ฆ
๐๐) (๐๐ฆ
๐๐) + (
๐๐ง
๐๐) (๐๐ง
๐๐)
= (๐๐ sin๐)(๐๐ sin๐) โ (๐๐โ๐2 โ 1
1 โ ๐2)(๐๐โ
1 โ ๐2
๐2 โ 1) + (๐๐ cos๐)(๐๐ cos๐)
= 0 = ๐๐๐ = ๐๐๐
Find an expression for the divergence of a vector in orthogonal curvilinear coordinates.
The gradient of a vector 𝐅 = F^i 𝐠_i is grad 𝐅 = F^i_,j 𝐠_i⊗𝐠^j. The divergence is the contraction of the gradient. While we may use this to evaluate the divergence directly, it is often easier to use the computation formula in equation Ex 15:
div 𝐅 = F^i_,j 𝐠_i ⋅ 𝐠^j = F^i_,i = (1/√g) ∂(√g F^i)/∂x^i
= 1/(h₁h₂h₃) [∂(h₁h₂h₃F¹)/∂x¹ + ∂(h₁h₂h₃F²)/∂x² + ∂(h₁h₂h₃F³)/∂x³]
Recall that the physical components (components having the same units as the tensor in question) of a contravariant tensor are not equal to the tensor components unless the coordinate system is Cartesian. The physical component F(i) = F^i h_i (no sum on i). In terms of the physical components, therefore, the divergence becomes,
div 𝐅 = 1/(h₁h₂h₃) [∂(h₂h₃F(1))/∂x¹ + ∂(h₁h₃F(2))/∂x² + ∂(h₁h₂F(3))/∂x³]
Find an expression for the Laplacian operator in orthogonal coordinates.
For the contravariant component of a vector, F^i,
F^i_,i = (1/√g) ∂(√g F^i)/∂x^i.
Now the contravariant component of the gradient of a scalar 𝜙 is F^i = g^ij 𝜙,j. Using this in place of the vector F^i, we can write,
g^ij 𝜙,ji = (1/√g) ∂(√g g^ij 𝜙,j)/∂x^i
Given a scalar 𝜙, the Laplacian ∇²𝜙 is defined as g^ij 𝜙,ij, so that,
∇²𝜙 = g^ij 𝜙,ij = (1/√g) ∂/∂x^i (√g g^ij ∂𝜙/∂x^j)
When the coordinates are orthogonal, g_ij = g^ij = 0 whenever i ≠ j, and g^ii = 1/h_i² (no sum). Expanding the computation formula, therefore, we can write,
∇²𝜙 = 1/(h₁h₂h₃) [∂/∂x¹(h₁h₂h₃/h₁² ∂𝜙/∂x¹) + ∂/∂x²(h₁h₂h₃/h₂² ∂𝜙/∂x²) + ∂/∂x³(h₁h₂h₃/h₃² ∂𝜙/∂x³)]
= 1/(h₁h₂h₃) [∂/∂x¹(h₂h₃/h₁ ∂𝜙/∂x¹) + ∂/∂x²(h₁h₃/h₂ ∂𝜙/∂x²) + ∂/∂x³(h₁h₂/h₃ ∂𝜙/∂x³)]
Show that the oblate spheroidal coordinate system is orthogonal. Find an expression for the Laplacian of a scalar function in this system.
The example above shows that g_ξη = g_ξφ = g_ηφ = 0. This is the required proof of orthogonality. Using the computation formula in example 11, we may write for the oblate spheroidal coordinates,
∇²Φ = √((𝜉² − 1)(1 − 𝜂²))/(𝑓³𝜉𝜂(𝜉² − 𝜂²)) [∂/∂𝜉(𝑓𝜉𝜂√((𝜉² − 1)/(1 − 𝜂²)) ∂Φ/∂𝜉) + ∂/∂𝜂(𝑓𝜉𝜂√((1 − 𝜂²)/(𝜉² − 1)) ∂Φ/∂𝜂) + ∂/∂𝜙(𝑓(𝜉² − 𝜂²)/(𝜉𝜂√((𝜉² − 1)(1 − 𝜂²))) ∂Φ/∂𝜙)]
3.61 Given tensors ๐,๐ and ๐ we define the tensor square product with operator, โ , in the
equation, (๐โ ๐)๐ = ๐๐๐T, show that the square product of two tensors ๐ and ๐ has
the component form, ๐โ ๐ = ๐ด๐๐๐ต๐๐๐ ๐โ๐ ๐โ๐ ๐โ๐ ๐
Let 𝑨 = A^ik 𝐠_i⊗𝐠_k, 𝑩 = B^jl 𝐠_j⊗𝐠_l, and 𝑪 = C_αβ 𝐠^α⊗𝐠^β. If we assume that 𝑨⊠𝑩 = A^ik B^jl 𝐠_i⊗𝐠_j⊗𝐠_k⊗𝐠_l, then the product,
(𝑨⊠𝑩)𝑪 = A^ik B^jl (𝐠_i⊗𝐠_j⊗𝐠_k⊗𝐠_l)(C_αβ 𝐠^α⊗𝐠^β)
= A^ik B^jl C_αβ δ^α_k δ^β_l 𝐠_i⊗𝐠_j
= A^ik B^jl C_kl 𝐠_i⊗𝐠_j
To complete the proof, we only need to show that the above result equals 𝑨𝑪𝑩ᵀ. To do so, we change the basis order in 𝑩ᵀ so that,
𝑨𝑪𝑩ᵀ = (A^ik 𝐠_i⊗𝐠_k)(C_αβ 𝐠^α⊗𝐠^β)(B^jl 𝐠_l⊗𝐠_j)
= A^ik C_αβ B^jl δ^α_k δ^β_l 𝐠_i⊗𝐠_j = A^ik C_kl B^jl 𝐠_i⊗𝐠_j
= A^ik B^jl C_kl 𝐠_i⊗𝐠_j
= (𝑨⊠𝑩)𝑪
We can therefore conclude that 𝑨⊠𝑩 = A^ik B^jl 𝐠_i⊗𝐠_j⊗𝐠_k⊗𝐠_l.
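In a Cartesian frame the component form lends itself to a quick numerical check. The sketch below uses numpy's einsum on random matrices (an arbitrary illustrative choice) to confirm that the fourth-order array A_ik B_jl acts on 𝑪 exactly as 𝑨𝑪𝑩ᵀ:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# square product as a 4th-order array: (A box B)_ijkl = A_ik B_jl
sq = np.einsum('ik,jl->ijkl', A, B)
lhs = np.einsum('ijkl,kl->ij', sq, C)   # (A box B) C
rhs = A @ C @ B.T                       # A C B^T
assert np.allclose(lhs, rhs)
```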
References
Abramowitz, M & Stegun, I, Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables, Dover Publications, Inc., New York, 1970
Bertram, A, Elasticity and Plasticity of Large Deformations: An Introduction, Springer, 2008
Bower, AF, Applied Mechanics of Solids, CRC Press, NY, 2010
Brannon, RM, Functional and Structured Tensor Analysis for Engineers, UNM BOOK DRAFT, 2006, pp 177-184
Dill, EH, Continuum Mechanics: Elasticity, Plasticity, Viscoelasticity, CRC Press, 2007
Gurtin, ME, Fried, E & Anand, L, The Mechanics and Thermodynamics of Continua, Cambridge University Press, 2010
Heinbockel, JH, Introduction to Tensor Calculus and Continuum Mechanics, Trafford, 2003
Holzapfel, GA, Nonlinear Solid Mechanics, Wiley, NY, 2007
Humphrey, JD, Cardiovascular Solid Mechanics: Cells, Tissues and Organs, Springer-Verlag, NY, 2002
Itskov, M, Tensor Algebra and Tensor Analysis for Engineers with Applications to Continuum Mechanics, Springer, 2010
Jog, CS, Continuum Mechanics: Foundations and Applications of Mechanics, Cambridge-IISc Series, 2015
Kay, DC, Tensor Calculus, Schaum Outline Series, McGraw-Hill, 1988
Mase, GE, Theory and Problems of Continuum Mechanics, Schaum Outline Series, McGraw-Hill, 1970
McConnell, AJ, Applications of Tensor Analysis, Dover Publications, NY, 1951
Ogden, RW, Nonlinear Elastic Deformations, Dover Publications, Inc., NY, 1997
Romano, A, Lancellotta, R & Marasco, A, Continuum Mechanics using Mathematica, Birkhäuser, 2006
Sokolnikoff, IS, Tensor Analysis with Applications to Geometry and the Mechanics of Continua, John Wiley and Sons, Inc., New York, NY, 1964
Spiegel, MR, Theory and Problems of Vector Analysis, Schaum Outline Series, McGraw-Hill, 1974
Taber, LA, Nonlinear Theory of Elasticity, World Scientific, 2008
Wegner, JL & Haddow, JB, Elements of Continuum Mechanics and Thermodynamics, Cambridge University Press, 2009