Eötvös Loránd University
Faculty of Science
András Csirik
Differential Forms
and Applications
Bachelor's Thesis in Mathematics
Supervisor:
Dr. Sándor Kovács
Department of Numerical Analysis
Budapest, 2019
Acknowledgement
I would like to express my gratitude to my supervisor, Dr. Sándor Kovács, for his
contributions to my thesis. His guidance and expert advice have been invaluable
throughout all stages of this work.
Contents

1 Introduction

2 Differential forms
   2.1 The elements of multilinear algebra
   2.2 Differential forms in R^n
       2.2.1 Derivation
       2.2.2 The induced form
       2.2.3 Closed and exact forms
       2.2.4 Integration
   2.3 The Poincaré-Stokes theorem

3 Applications
   3.1 Maxwell's equations
       3.1.1 The classical form of Maxwell's equations
       3.1.2 Minkowski spacetime
       3.1.3 The Hodge-star operator
       3.1.4 Maxwell's equations in terms of differential forms
   3.2 Brouwer's Fixed Point Theorem
Chapter 1
Introduction
The theory of differential forms is a relatively young field of mathematics. It was
first introduced at the beginning of the 20th century in the pioneering work of
H. Poincaré, E. Goursat and E. Cartan. Differential forms have contributed to the
evolution of many fields of mathematics, such as geometry and topology. Moreover,
they provide an essential tool in modern physics, for example in classical mechanics,
electrodynamics and general relativity.

One of the main advantages of differential forms is that they do not require
coordinates. Originally, the laws of classical physics were described using vector
calculus, where one first has to choose a coordinate system for the calculations.
But nature does not come "equipped" with a coordinate system: it is merely a
human construction that makes the computations less difficult. The laws of nature,
however, are general truths, independent of any chosen coordinate system. We may
therefore say that describing the laws of physics in the language of differential
forms captures the real and essential properties of nature.

Another advantage of this formalism over tensors is that general tensor fields
"do not behave well" under mappings: if Φ is a map from a space X to a space Y,
a tensor field given on one of the spaces does not, in general, induce a natural
field on the other. With differential forms, however, this works quite naturally:
a form given on Y induces, via Φ, a form on X (the pullback). So if we have a
differential form in one space, we automatically obtain one in other spaces through
the maps between them.
From a theoretical point of view, differential forms can be seen as an elegant
generalization of vector calculus. This more general, abstract theory subsumes
much of what an undergraduate student encounters in their studies. Consider the
main theorems of multivariable calculus: the theorems of Green, Stokes and Gauss.
All of them can be derived as special cases of the more general Poincaré-Stokes
theorem.

The purpose of this thesis is to build up the theoretical background of differential
forms and to present some elegant applications of this formalism. As a physical
application we rewrite Maxwell's equations in terms of differential forms, and as
a mathematical application we present a proof of Brouwer's fixed point theorem.
Chapter 2
Differential forms
2.1 The elements of multilinear algebra
Definition 2.1.1. Let k ∈ N and let X_1, ..., X_k, Y be vector spaces over the field
K. We say that the map

    f : X_1 × ... × X_k → Y

is k-linear (in notation: f ∈ L^k(X_1, ..., X_k; Y)) if for all i ∈ {1, ..., k} and
for all a_j ∈ X_j (j ∈ {1, ..., k}, j ≠ i) the map

    φ_i : X_i → Y,   φ_i(x) := f(a_1, ..., a_{i−1}, x, a_{i+1}, ..., a_k)

is linear. In the special case

    X := X_1 = ... = X_k,   Y := K,

f is called a k-form and we use the notation f ∈ L^k(X^k, K). By convention
L^0(X^0, K) := K.
Examples.
1. If k ∈ N and X_1, ..., X_k, Y are vector spaces, then

    f_0 : X_1 × ... × X_k → Y,   f_0(a_1, ..., a_k) := 0 ∈ Y

is obviously a k-linear map.
2. If a ∈ R^n, then

    f_a : R^n → R,   f_a(x) := ⟨a, x⟩ := ∑_{i=1}^n a_i x_i

is a 1-form. If e_1, ..., e_n is the canonical basis of R^n, then for all i ∈ {1, ..., n}

    f_{e_i}(x) = ⟨e_i, x⟩ = x_i   (x ∈ R^n)

is the 1-form which returns the i-th coordinate of a vector. So for an arbitrary
vector a = (a_1, ..., a_n) ∈ R^n

    f_a(x) = ⟨a, x⟩ = ∑_{i=1}^n a_i x_i = ∑_{i=1}^n a_i f_{e_i}(x)   (x ∈ R^n),

which means

    f_a = ∑_{i=1}^n a_i f_{e_i}.

So the 1-forms f_{e_1}, ..., f_{e_n} form a basis of the dual space (R^n)′. The
following notations are also commonly used for the dual basis:

    f_{e_i} =: e^i =: dx_i.

In this thesis we will mainly use dx_i.
3. If k, n ∈ N, N := {1, ..., n} and i = (i_1, ..., i_k) ∈ N^k, then

    ∆^{n,k}_i : (R^n)^k → R,   ∆^{n,k}_i(x^1, ..., x^k) := det( x^j_{i_r} )_{r,j=1}^k   (x^1, ..., x^k ∈ R^n),

i.e. the determinant of the k×k matrix whose (r, j) entry is the i_r-th coordinate
of x^j, is a k-form.

These ∆^{n,k}_i forms can be constructed in more general spaces as well. In the
Euclidean space R^n there is a scalar product, and the i-th coordinate of an element
x ∈ R^n can be expressed as ⟨x, e_i⟩, so the above formula can be written as

    ∆^{n,k}_i(x^1, ..., x^k) := det( ⟨x^j, e_{i_r}⟩ )_{r,j=1}^k   (x^1, ..., x^k ∈ R^n).

And it can be generalized even further. In a so-called pseudo-Euclidean space V
(which we will cover in Chapter 3) there is no scalar product, only a non-degenerate
symmetric bilinear map g. In these spaces the definition is

    ∆^{n,k}_i(x^1, ..., x^k) := det( g(x^j, e_{i_r}) )_{r,j=1}^k   (x^1, ..., x^k ∈ V).

Due to the properties of the determinant, if i is not an injective multiindex, i.e.

    ∃ r, s ∈ {1, ..., k}, r ≠ s : i_r = i_s,

then

    ∆^{n,k}_i = f_0 (∈ L^k((R^n)^k, R)).

If the multiindex j is a permutation of i, then

    ∆^{n,k}_j = ±∆^{n,k}_i.

Thus we only need to focus on the strictly increasing multiindices:

    N^k_* := N = {1, ..., n}   (k = 1),
    N^k_* := { i ∈ N^k : i_1 < ... < i_k }   (k > 1).
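The forms ∆^{n,k}_i are just k×k minors, so they are easy to experiment with
numerically. Below is a minimal Python sketch (the function name `delta` and the
NumPy-based setup are my own illustration, not part of the thesis) that evaluates
∆^{n,k}_i on k vectors in R^n by selecting the rows i_1, ..., i_k and taking a
determinant.

```python
import numpy as np

def delta(i, vectors):
    """Evaluate the k-form Delta^{n,k}_i on k vectors in R^n.

    i       : tuple of k indices (1-based, as in the thesis)
    vectors : list of k one-dimensional arrays, each of length n
    """
    rows = [idx - 1 for idx in i]             # convert to 0-based indexing
    M = np.column_stack(vectors)[rows, :]     # the k x k minor: column j is x^j restricted to rows i
    return np.linalg.det(M)

# Example: in R^3, Delta^{3,2}_(1,2) applied to (e_1, e_2) gives 1.
e1, e2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
print(delta((1, 2), [e1, e2]))   # 1.0
print(delta((1, 2), [e2, e1]))   # -1.0 (the form is alternating)
```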
Theorem 2.1.1. Let e_1, ..., e_n be a basis in X. Then any form f ∈ L^k(X^k, R)
is uniquely determined by its values f(e_{i_1}, ..., e_{i_k}) (i ∈ {1, ..., n}^k) on the
k-tuples of basis vectors.

Proof. Let x^1, ..., x^k be arbitrary elements of X. Since e_1, ..., e_n is a basis of X,
these elements can be expressed as follows:

    x^i = ∑_{j=1}^n a^i_j e_j   (a^i_j ∈ R).

Using the k-linearity of f we get

    f(x^1, ..., x^k) = f( ∑_{j_1=1}^n a^1_{j_1} e_{j_1}, ..., ∑_{j_k=1}^n a^k_{j_k} e_{j_k} ) =
    = ∑_{j_1=1}^n ... ∑_{j_k=1}^n a^1_{j_1} ··· a^k_{j_k} f(e_{j_1}, ..., e_{j_k}).

On the other hand, if we are given the numbers f(e_{i_1}, ..., e_{i_k}), the above
expression determines a k-form. □
Definition 2.1.2. Let 2 ≤ k ∈ N, let X be a vector space over the field K and
let f ∈ L^k(X^k, K). f is called alternating if it changes sign whenever two of its
arguments are interchanged:

    f(a_1, ..., a_i, ..., a_j, ..., a_k) = −f(a_1, ..., a_j, ..., a_i, ..., a_k)

for all i, j ∈ {1, ..., k}, i ≠ j, and a_1, ..., a_k ∈ X.

The set of all alternating k-forms,

    A^k(X) := K   (k = 0),
    A^k(X) := X′   (k = 1),
    A^k(X) := { f ∈ L^k(X^k, K) : f is alternating }   (k > 1),

forms a subspace of L^k(X^k, K).

Examples.

1. The previously defined ∆^{n,k}_i forms are alternating: ∆^{n,k}_i ∈ A^k(R^n).

2. If φ, ψ ∈ A^1(X), then their wedge product φ ∧ ψ ∈ A^2(X) is

    (φ ∧ ψ)(x_1, x_2) := φ(x_1)ψ(x_2) − φ(x_2)ψ(x_1)   (x_1, x_2 ∈ X).
3. If x^1, ..., x^n ∈ R^n, then

    f(x^1, ..., x^n) := det(x^1, ..., x^n)

is an alternating n-form: f ∈ A^n(R^n).

Definition 2.1.3. The wedge product of k 1-forms is defined as follows. Let
φ_1, ..., φ_k ∈ A^1(X) and x_1, ..., x_k ∈ X. Then

    φ_1 ∧ ... ∧ φ_k ∈ A^k(X),   (φ_1 ∧ ... ∧ φ_k)(x_1, ..., x_k) := det( φ_i(x_j) )_{i,j=1}^k.

This definition coincides with the one given before for two 1-forms. Moreover the
following equation holds:

    ∆^{n,k}_i = dx_{i_1} ∧ ... ∧ dx_{i_k}.
Theorem 2.1.2. Let e_1, ..., e_n be a basis in X. Then the set {∆^{n,k}_i : i ∈ N^k_*}
is a basis of A^k(X).

Proof.

Step 1. First we show that these elements are linearly independent. Suppose that
some linear combination of these forms is zero; we show that all the coefficients
must be zero:

    ∑_{i∈N^k_*} a_i ∆^{n,k}_i = f_0.

For an arbitrary multiindex l ∈ N^k_* the following equation holds:

    0 = f_0(e_{l_1}, ..., e_{l_k}) = ∑_{i∈N^k_*} a_i ∆^{n,k}_i(e_{l_1}, ..., e_{l_k}) = a_l.

The last equality is true because

    ∆^{n,k}_i(e_{l_1}, ..., e_{l_k}) = 1   (i = l),   0   (i ≠ l).

(If i = l we get the determinant of the identity matrix, and if i ≠ l there is a
full zero row in the determinant.)
Step 2. Now we prove that the ∆^{n,k}_i forms span the space, meaning that any
element f ∈ A^k(X) can be expressed as their linear combination

    f = ∑_{i∈N^k_*} a_i ∆^{n,k}_i

with some coefficients a_i. For this, set

    g := ∑_{i∈N^k_*} f(e_{i_1}, ..., e_{i_k}) ∆^{n,k}_i.

This way g ∈ A^k(X) and

    g(e_{i_1}, ..., e_{i_k}) = f(e_{i_1}, ..., e_{i_k})

for all i ∈ N^k_*, which means f = g. So choosing the coefficients
a_i := f(e_{i_1}, ..., e_{i_k}) gives the desired form for f. □

We will almost always use the basis of strictly increasing multiindices, except in
the case n = 3, k = 2, where we will use the basis (∆^{3,2}_{(2,3)}, ∆^{3,2}_{(3,1)}, ∆^{3,2}_{(1,2)}).
Definition 2.1.4. Let f ∈ A^k(X) and g ∈ A^l(X). The wedge product or
exterior product of f and g is the alternating form f ∧ g ∈ A^{k+l}(X) defined as
follows:

    (f ∧ g)(x_1, ..., x_{k+l}) := (1/(k! l!)) ∑_{σ∈S_{k+l}} sgn(σ) f(x_{σ(1)}, ..., x_{σ(k)}) g(x_{σ(k+1)}, ..., x_{σ(k+l)}).

This definition is made in a way that it coincides with the previous ones. Moreover
the following equation holds:

    ∆^{n,k}_j ∧ ∆^{n,l}_i = ∆^{n,k+l}_{(j,i)}   (j ∈ N^k_*, i ∈ N^l_*).
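The permutation formula above can be implemented directly. The following Python
sketch (my own illustration; the helper `wedge` and the representation of forms as
callables on tuples of vectors are assumptions, not the thesis' notation) computes
f ∧ g by summing over S_{k+l} with itertools.permutations, and checks the
antisymmetry dx_1 ∧ dx_2 = −(dx_2 ∧ dx_1) on basis vectors.

```python
import itertools, math
import numpy as np

def sgn(perm):
    """Sign of a permutation given as a tuple of 0-based indices (cycle count)."""
    s, seen = 1, set()
    for start in range(len(perm)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:          # walk the cycle containing `start`
            seen.add(j); j = perm[j]; length += 1
        s *= -1 if length % 2 == 0 else 1
    return s

def wedge(f, k, g, l):
    """Exterior product of a k-form f and an l-form g (both callables)."""
    def fg(*xs):
        total = 0.0
        for p in itertools.permutations(range(k + l)):
            total += sgn(p) * f(*[xs[i] for i in p[:k]]) * g(*[xs[i] for i in p[k:]])
        return total / (math.factorial(k) * math.factorial(l))
    return fg

dx1 = lambda v: v[0]    # the 1-forms dx_i just return a coordinate
dx2 = lambda v: v[1]
w = wedge(dx1, 1, dx2, 1)
e1, e2 = np.array([1.0, 0]), np.array([0, 1.0])
print(w(e1, e2), w(e2, e1))   # 1.0 -1.0
```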
The next theorem summarizes the properties of the exterior product.
Theorem 2.1.3. Suppose that f_1, f_2, f ∈ A^k(X), g_1, g_2, g ∈ A^l(X), h ∈ A^m(X)
and a ∈ K. Then

1. (f_1 + f_2) ∧ g = f_1 ∧ g + f_2 ∧ g,

2. f ∧ (g_1 + g_2) = f ∧ g_1 + f ∧ g_2,

3. (af) ∧ g = f ∧ (ag) = a(f ∧ g),

4. (f ∧ g) ∧ h = f ∧ (g ∧ h),

5. f ∧ g = (−1)^{kl} g ∧ f.
Proof.

1. Let x_1, ..., x_{k+l} ∈ X be arbitrary elements. Then

    ((f_1 + f_2) ∧ g)(x_1, ..., x_{k+l}) =
    = (1/(k! l!)) ∑_{σ∈S_{k+l}} sgn(σ) (f_1 + f_2)(x_{σ(1)}, ..., x_{σ(k)}) g(x_{σ(k+1)}, ..., x_{σ(k+l)}) =
    = (1/(k! l!)) ∑_{σ∈S_{k+l}} sgn(σ) f_1(x_{σ(1)}, ..., x_{σ(k)}) g(x_{σ(k+1)}, ..., x_{σ(k+l)}) +
    + (1/(k! l!)) ∑_{σ∈S_{k+l}} sgn(σ) f_2(x_{σ(1)}, ..., x_{σ(k)}) g(x_{σ(k+1)}, ..., x_{σ(k+l)}) =
    = (f_1 ∧ g)(x_1, ..., x_{k+l}) + (f_2 ∧ g)(x_1, ..., x_{k+l}).
2. For x_1, ..., x_{k+l} ∈ X we have

    (f ∧ (g_1 + g_2))(x_1, ..., x_{k+l}) =
    = (1/(k! l!)) ∑_{σ∈S_{k+l}} sgn(σ) f(x_{σ(1)}, ..., x_{σ(k)}) (g_1 + g_2)(x_{σ(k+1)}, ..., x_{σ(k+l)}) =
    = (1/(k! l!)) ∑_{σ∈S_{k+l}} sgn(σ) f(x_{σ(1)}, ..., x_{σ(k)}) g_1(x_{σ(k+1)}, ..., x_{σ(k+l)}) +
    + (1/(k! l!)) ∑_{σ∈S_{k+l}} sgn(σ) f(x_{σ(1)}, ..., x_{σ(k)}) g_2(x_{σ(k+1)}, ..., x_{σ(k+l)}) =
    = (f ∧ g_1)(x_1, ..., x_{k+l}) + (f ∧ g_2)(x_1, ..., x_{k+l}).
3. If x_1, ..., x_{k+l} ∈ X, then

    ((af) ∧ g)(x_1, ..., x_{k+l}) =
    = (1/(k! l!)) ∑_{σ∈S_{k+l}} sgn(σ) (af)(x_{σ(1)}, ..., x_{σ(k)}) g(x_{σ(k+1)}, ..., x_{σ(k+l)}) =
    = a · (1/(k! l!)) ∑_{σ∈S_{k+l}} sgn(σ) f(x_{σ(1)}, ..., x_{σ(k)}) g(x_{σ(k+1)}, ..., x_{σ(k+l)}) =
    = a(f ∧ g)(x_1, ..., x_{k+l}) =
    = (1/(k! l!)) ∑_{σ∈S_{k+l}} sgn(σ) f(x_{σ(1)}, ..., x_{σ(k)}) (ag)(x_{σ(k+1)}, ..., x_{σ(k+l)}) =
    = (f ∧ (ag))(x_1, ..., x_{k+l}).
4. Again take arbitrary elements x_1, ..., x_{k+l+m} ∈ X. From the definition we get

    ((f ∧ g) ∧ h)(x_1, ..., x_{k+l+m}) =
    = (1/((k+l)! m!)) ∑_{σ∈S_{k+l+m}} sgn(σ) (f ∧ g)(x_{σ(1)}, ..., x_{σ(k+l)}) h(x_{σ(k+l+1)}, ..., x_{σ(k+l+m)}).

Now we decompose the permutation group S_{k+l+m} into cosets with respect to the
subgroup S_{k+l} (embedded as the permutations fixing the last m indices); this
means grouping the permutations of S_{k+l+m} based on how they act on the last m
elements. Each class contains (k+l)! elements. Let C denote the set of these
classes, let R ∈ C be an arbitrary class and σ_R a fixed element of R. Every
element σ ∈ R can be decomposed as σ = σ_R ∘ π with some π ∈ S_{k+l}. Then the
above sum can be written as

    (1/((k+l)! m!)) ∑_{R∈C} sgn(σ_R) ( ∑_{σ∈R} sgn(π) (f ∧ g)(x_{σ(1)}, ..., x_{σ(k+l)}) ) ·
    · h(x_{σ_R(k+l+1)}, ..., x_{σ_R(k+l+m)}).

Now note that all terms in the big parenthesis are equal, because each of them is
the alternating form f ∧ g evaluated on a permutation π of the fixed ordering given
by σ_R, multiplied by sgn(π). Since every class in C has (k+l)! elements, we can
write

    (1/m!) ∑_{R∈C} sgn(σ_R) (f ∧ g)(x_{σ_R(1)}, ..., x_{σ_R(k+l)}) h(x_{σ_R(k+l+1)}, ..., x_{σ_R(k+l+m)}).

Expanding the wedge product of f and g we obtain

    (1/(k! l! m!)) ∑_{R∈C} sgn(σ_R) ∑_{τ∈S_{k+l}} sgn(τ) f(x_{σ_R(τ(1))}, ..., x_{σ_R(τ(k))}) ·
    · g(x_{σ_R(τ(k+1))}, ..., x_{σ_R(τ(k+l))}) h(x_{σ_R(k+l+1)}, ..., x_{σ_R(k+l+m)}).

But again, every permutation σ ∈ S_{k+l+m} belonging to the class R can be
decomposed as σ = σ_R ∘ τ, and since τ acts as the identity on the last m indices,
σ = σ_R holds on the indices used in the argument of h. Thus we finally obtain

    ((f ∧ g) ∧ h)(x_1, ..., x_{k+l+m}) = (1/(k! l! m!)) ∑_{σ∈S_{k+l+m}} sgn(σ) f(x_{σ(1)}, ..., x_{σ(k)}) ·
    · g(x_{σ(k+1)}, ..., x_{σ(k+l)}) h(x_{σ(k+l+1)}, ..., x_{σ(k+l+m)}).

Because this result does not depend on the order in which we associate the
operations, the statement is proven.
5. Let us write f and g as

    f =: ∑_{i∈N^k_*} a_i dx_{i_1} ∧ ... ∧ dx_{i_k}   (a_i ∈ R),
    g =: ∑_{j∈N^l_*} b_j dx_{j_1} ∧ ... ∧ dx_{j_l}   (b_j ∈ R).

Then

    f ∧ g = ( ∑_{i∈N^k_*} a_i dx_{i_1} ∧ ... ∧ dx_{i_k} ) ∧ ( ∑_{j∈N^l_*} b_j dx_{j_1} ∧ ... ∧ dx_{j_l} ) =
    = ∑_{i∈N^k_*} ∑_{j∈N^l_*} a_i b_j dx_{i_1} ∧ ... ∧ dx_{i_k} ∧ dx_{j_1} ∧ ... ∧ dx_{j_l}.

Now we want to interchange the dx_i factors with the dx_j ones, using the fact that

    dx_i ∧ dx_j = −dx_j ∧ dx_i.

Each of the l forms dx_{j_1}, ..., dx_{j_l} has to "move through" all k forms
dx_{i_1}, ..., dx_{i_k}, so the expression changes sign kl times:

    f ∧ g = ∑_{i∈N^k_*} ∑_{j∈N^l_*} a_i b_j (−1)^k dx_{j_1} ∧ (dx_{i_1} ∧ ... ∧ dx_{i_k}) ∧ (dx_{j_2} ∧ ... ∧ dx_{j_l}) =
    = ∑_{i∈N^k_*} ∑_{j∈N^l_*} a_i b_j (−1)^{kl} (dx_{j_1} ∧ ... ∧ dx_{j_l}) ∧ (dx_{i_1} ∧ ... ∧ dx_{i_k}) =
    = (−1)^{kl} ∑_{j∈N^l_*} ∑_{i∈N^k_*} b_j a_i (dx_{j_1} ∧ ... ∧ dx_{j_l}) ∧ (dx_{i_1} ∧ ... ∧ dx_{i_k}) =
    = (−1)^{kl} ( ∑_{j∈N^l_*} b_j dx_{j_1} ∧ ... ∧ dx_{j_l} ) ∧ ( ∑_{i∈N^k_*} a_i dx_{i_1} ∧ ... ∧ dx_{i_k} ) =
    = (−1)^{kl} g ∧ f. □
Remarks.

1. The dimension of A^k(X) is

    dim(A^k(X)) = (n choose k)   (k ≤ n),   0   (k > n).

2. The symbol A(X) denotes the direct sum of the vector spaces A^k(X):

    A(X) := A^0(X) ⊕ ... ⊕ A^n(X) ⊕ ... := ⊕_{m∈N} A^m(X).

It is clear that if dim(X) = n, then

    dim(A(X)) = 1 + (n choose 1) + ... + (n choose n) + 0 + ... = 2^n.

If f = ∑_{r=0}^∞ φ_r ∈ A(X) and g = ∑_{s=0}^∞ ψ_s ∈ A(X) with φ_r ∈ A^r(X) and
ψ_s ∈ A^s(X), then let

    f ∧ g := ∑_{l=0}^∞ ∑_{r+s=l} φ_r ∧ ψ_s ∈ A(X).

The vector space A(X) equipped with the exterior product constitutes the
exterior algebra (A(X), ∧).
2.2 Differential forms in R^n
Definition 2.2.1. Let n ∈ N, k ∈ N_0 and let ∅ ≠ V ⊂ R^n be an open set. Then a map

    ω : V → A^k(R^n)

is called a differential form of degree k (over R^n).

Since {∆^{n,k}_i : i ∈ N^k_*} is a basis of A^k(R^n), it is clear that for every
x ∈ V there exist unique numbers

    ω_i(x) ∈ R   (i ∈ N^k_*)

such that

    ω(x) = ∑_{i∈N^k_*} ω_i(x) ∆^{n,k}_i.

The maps ω_i : V → R are called the coordinate functions of the differential
form ω. The above can be written shortly as

    ω = ∑_{i∈N^k_*} ω_i ∆^{n,k}_i.

This is the so-called canonical form of ω.

Definition 2.2.2. Let r ∈ N. The space Λ^r_k(V) denotes the differential forms of
degree k whose coordinate functions are r-times continuously differentiable:

    Λ^r_k(V) := { ω : V → A^k(R^n) : ω ∈ C^r }.

The property ω ∈ C^r is equivalent to ω_i ∈ C^r for all i ∈ N^k_*.
The following operations are defined between differential forms:

Definition 2.2.3. Let f : V → R, ω, w ∈ Λ^r_k(V) and σ ∈ Λ^r_l(V). Then:

1. (fω)(x) := f(x)ω(x),

2. (ω + w)(x) := ω(x) + w(x),

3. (ω ∧ σ)(x) := ω(x) ∧ σ(x).
Remarks.

1. In the special case k = 0 we have A^0(R^n) = R, hence a differential 0-form is
a function ω : V → R of n variables (a scalar field).

2. If k > n, then A^k(R^n) = {f_0}.

3. It is clear that if α : V → R, ω, w ∈ Λ^r_k(V) and σ ∈ Λ^r_l(V), then

    αω = ∑_{i∈N^k_*} (αω_i) ∆^{n,k}_i ∈ Λ^r_k(V),   ω + w = ∑_{i∈N^k_*} (ω_i + w_i) ∆^{n,k}_i ∈ Λ^r_k(V)

and

    ω ∧ σ = ∑_{i∈N^k_*} ∑_{j∈N^l_*} (ω_i σ_j) ∆^{n,k}_i ∧ ∆^{n,l}_j ∈ Λ^r_{k+l}(V).

4. Let us introduce the following simplifying notation:

    ω(x; x_1, ..., x_k) := ω(x)(x_1, ..., x_k) = ∑_{i∈N^k_*} ω_i(x) ∆^{n,k}_i(x_1, ..., x_k)
    (x ∈ V, x_1, ..., x_k ∈ R^n).

5. If e_1, ..., e_n are the canonical basis vectors of R^n, then for all i ∈ N^k_*
the coordinate functions of a differential k-form can be computed as

    ω(x; e_{i_1}, ..., e_{i_k}) = ∑_{j∈N^k_*} ω_j(x) ∆^{n,k}_j(e_{i_1}, ..., e_{i_k}) = ω_i(x)   (x ∈ V).
2.2.1 Derivation
Definition 2.2.4. The exterior derivative of a differential form is the map

    d : Λ^r_k(V) → Λ^{r−1}_{k+1}(V),

    dω := d(ω) := ∑_{j∈N} ∂_j ω ∆^{n,1}_j   (k = 0),
    dω := ∑_{j∈N} ∑_{i∈N^k_*} ∂_j ω_i ∆^{n,k+1}_{(j,i)}   (k > 0),

where (j, i) := (j, i_1, ..., i_k) ∈ N × N^k_*.
Remarks.

1. If k = 0 and ω ∈ C^1(V, R), then for all x ∈ V and ξ ∈ R^n we have

    dω(x)(ξ) = ∑_{j=1}^n ∂_j ω(x) ∆^{n,1}_j(ξ) = ⟨grad ω(x), ξ⟩.

With the commonly used notation

    ∂_j ω = ∂ω/∂x_j   and   ∆^{n,1}_j = dx_j

we get

    dω = ∑_{j=1}^n (∂ω/∂x_j) dx_j,

which is the so-called differential of the scalar field ω at the point x.

2. If k ∈ N, then using the equality ∆^{n,k+1}_{(j,i)} = ∆^{n,1}_j ∧ ∆^{n,k}_i we have

    dω = ∑_{j∈N} ∑_{i∈N^k_*} ∂_j ω_i ∆^{n,1}_j ∧ ∆^{n,k}_i = ∑_{i∈N^k_*} ∑_{j∈N} ∂_j ω_i ∆^{n,1}_j ∧ ∆^{n,k}_i =
    = ∑_{i∈N^k_*} dω_i ∧ ∆^{n,k}_i.
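To make the definition concrete, here is a minimal numeric sketch (my own
illustration, not from the thesis): a k-form on V ⊂ R^n is stored as a dictionary
mapping increasing multiindices to coordinate functions, and d is computed per the
definition, with ∂_j approximated by central differences and (j, i) sorted into
increasing order with the appropriate sign.

```python
import numpy as np

H = 1e-6  # step size for central differences

def partial(f, j, x):
    """Central-difference approximation of the j-th partial derivative (0-based j)."""
    e = np.zeros_like(x, dtype=float); e[j] = H
    return (f(x + e) - f(x - e)) / (2 * H)

def ext_deriv(omega, n):
    """d of a k-form stored as {increasing multiindex (0-based): coefficient function}."""
    terms = {}   # multiindex -> list of (sign, coeff, j) contributions
    for idx, coeff in omega.items():
        for j in range(n):
            if j in idx:
                continue                       # a repeated index gives the zero form
            new = tuple(sorted((j,) + idx))
            sign = (-1) ** new.index(j)        # sign from sorting dx_j into place
            terms.setdefault(new, []).append((sign, coeff, j))
    def make_coeff(contribs):
        return lambda x: sum(s * partial(c, j, x) for s, c, j in contribs)
    return {idx: make_coeff(c) for idx, c in terms.items()}

# Example in R^2: omega = x_2 dx_1 has d(omega) = dx_2 ∧ dx_1 = -(dx_1 ∧ dx_2).
omega = {(0,): lambda x: x[1]}
domega = ext_deriv(omega, 2)
print(domega[(0, 1)](np.array([0.3, 0.7])))    # ≈ -1.0
```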
Theorem 2.2.1. The exterior derivative has the following properties:

1. The operator d is linear, meaning that if ω, w ∈ Λ^r_k(V) and α ∈ R, then

    d(ω + w) = dω + dw   and   d(αω) = α dω.

2. If ω ∈ Λ^r_k(V) and σ ∈ Λ^r_l(V), then

    d(ω ∧ σ) = dω ∧ σ + (−1)^k ω ∧ dσ.

3. If ω ∈ Λ^2_k(V), then

    d(dω) = 0 (∈ Λ^0_{k+2}(V)).
Proof.

1. (a) First we prove additivity.

• If k = 0, then

    d(ω + w) = ∑_{j∈N} ∂_j(ω + w) ∆^{n,1}_j = ∑_{j∈N} (∂_j ω + ∂_j w) ∆^{n,1}_j =
    = ∑_{j∈N} ∂_j ω ∆^{n,1}_j + ∑_{j∈N} ∂_j w ∆^{n,1}_j = dω + dw.

• If k > 0, then

    d(ω + w) = ∑_{j∈N} ∑_{i∈N^k_*} ∂_j(ω_i + w_i) ∆^{n,k+1}_{(j,i)} =
    = ∑_{j∈N} ∑_{i∈N^k_*} (∂_j ω_i + ∂_j w_i) ∆^{n,k+1}_{(j,i)} =
    = ∑_{j∈N} ∑_{i∈N^k_*} ∂_j ω_i ∆^{n,k+1}_{(j,i)} + ∑_{j∈N} ∑_{i∈N^k_*} ∂_j w_i ∆^{n,k+1}_{(j,i)} =
    = dω + dw.

(b) Homogeneity goes analogously.

• If k = 0, then

    d(αω) = ∑_{j∈N} ∂_j(αω) ∆^{n,1}_j = α ∑_{j∈N} ∂_j ω ∆^{n,1}_j = α dω.

• If k > 0, then

    d(αω) = ∑_{j∈N} ∑_{i∈N^k_*} ∂_j(αω_i) ∆^{n,k+1}_{(j,i)} = α ∑_{j∈N} ∑_{i∈N^k_*} ∂_j ω_i ∆^{n,k+1}_{(j,i)} = α dω.
2. Using the linearity of d:

    d(ω ∧ σ) = d( ∑_{i∈N^k_*} ∑_{j∈N^l_*} (ω_i σ_j) ∆^{n,k}_i ∧ ∆^{n,l}_j ) =
    = ∑_{i∈N^k_*} ∑_{j∈N^l_*} d(ω_i σ_j) ∧ ∆^{n,k}_i ∧ ∆^{n,l}_j =
    = ∑_{i∈N^k_*} ∑_{j∈N^l_*} (σ_j dω_i + ω_i dσ_j) ∧ ∆^{n,k}_i ∧ ∆^{n,l}_j =
    = ∑_{i∈N^k_*} ∑_{j∈N^l_*} (σ_j dω_i) ∧ ∆^{n,k}_i ∧ ∆^{n,l}_j + ∑_{i∈N^k_*} ∑_{j∈N^l_*} (ω_i dσ_j) ∧ ∆^{n,k}_i ∧ ∆^{n,l}_j =
    = ( ∑_{i∈N^k_*} dω_i ∧ ∆^{n,k}_i ) ∧ ( ∑_{j∈N^l_*} σ_j ∆^{n,l}_j ) +
    + ∑_{i∈N^k_*} ∑_{j∈N^l_*} ω_i (dσ_j ∧ ∆^{n,k}_i) ∧ ∆^{n,l}_j =
    = (dω ∧ σ) + ∑_{i∈N^k_*} ∑_{j∈N^l_*} ω_i ((−1)^k ∆^{n,k}_i ∧ dσ_j) ∧ ∆^{n,l}_j =
    = (dω ∧ σ) + (−1)^k ( ∑_{i∈N^k_*} ω_i ∆^{n,k}_i ) ∧ ( ∑_{j∈N^l_*} dσ_j ∧ ∆^{n,l}_j ) =
    = dω ∧ σ + (−1)^k ω ∧ dσ.
3. • If k = 0, then by the linearity of d and Young's theorem (on the symmetry of
second partial derivatives),

    d(dω) = d( ∑_{j∈N} ∂_j ω ∆^{n,1}_j ) = ∑_{j∈N} d(∂_j ω) ∧ ∆^{n,1}_j =
    = ∑_{j∈N} ( ∑_{i∈N} ∂_i(∂_j ω) ∆^{n,1}_i ) ∧ ∆^{n,1}_j = ∑_{i,j∈N} ∂_{ij} ω ∆^{n,1}_i ∧ ∆^{n,1}_j =
    = ∑_{i<j} (∂_{ij} ω − ∂_{ji} ω) ∆^{n,1}_i ∧ ∆^{n,1}_j = 0 ∈ Λ^0_2(V).

• If k > 0, then using the linearity, the rule for differentiating an exterior
product and the k = 0 case, we have

    d(dω) = d( ∑_{i∈N^k_*} dω_i ∧ ∆^{n,k}_i ) = ∑_{i∈N^k_*} d(dω_i ∧ ∆^{n,k}_i) =
    = ∑_{i∈N^k_*} [ d(dω_i) ∧ ∆^{n,k}_i − dω_i ∧ (d∆^{n,k}_i) ] = 0 ∈ Λ^0_{k+2}(V).

The last equality holds, since ω_i ∈ Λ^2_0(V) for all i ∈ N^k_*, so by the k = 0 case

    d(dω_i) = 0 ∈ Λ^0_2(V),

and

    d∆^{n,k}_i = d(1·∆^{n,k}_i) = ∑_{j∈N} (∂_j 1) ∆^{n,k+1}_{(j,i)} = ∑_{j∈N} 0·∆^{n,k+1}_{(j,i)} = 0 ∈ Λ^1_{k+1}(V)   (i ∈ N^k_*). □
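The k = 0 case of property 3 is easy to verify symbolically. The following sympy
snippet (my own illustrative check, with an arbitrarily chosen scalar field)
computes the coefficients ∂_i∂_j ω − ∂_j∂_i ω of d(dω) and confirms that they all
vanish, which is exactly Young's theorem.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
omega = sp.sin(x1 * x2) + sp.exp(x2 * x3**2)   # an arbitrary smooth 0-form on R^3

coords = [x1, x2, x3]
for a in range(3):
    for b in range(a + 1, 3):
        # Coefficient of dx_a ∧ dx_b in d(d omega):
        coeff = sp.diff(omega, coords[a], coords[b]) - sp.diff(omega, coords[b], coords[a])
        print(f'dx{a+1} ∧ dx{b+1}:', sp.simplify(coeff))   # prints 0 three times
```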
Definition 2.2.5. Let f_i : V → R (i ∈ N^k_*) and f := (f_i, i ∈ N^k_*) : V → R^{(n choose k)}.
The differential form generated by f is

    ω_f := ∑_{i∈N^k_*} f_i ∆^{n,k}_i,

meaning that

    ω_f(x; x_1, ..., x_k) := ∑_{i∈N^k_*} f_i(x) ∆^{n,k}_i(x_1, ..., x_k)   (x ∈ V, x_1, ..., x_k ∈ R^n).
Special cases.

1. If k = 0, then f : V → R, so ω_f = f, thus

    dω_f = ∑_{j∈N} ∂_j f ∆^{n,1}_j = ω_{grad(f)}.

2. If n = 2, k = 1, then f : V → R^2, so ω_f = f_1 ∆^{2,1}_1 + f_2 ∆^{2,1}_2, thus

    dω_f = ∑_{j=1}^2 ∑_{i=1}^2 ∂_j f_i ∆^{2,2}_{(j,i)} = ∑_{j=1}^2 ( ∂_j f_1 ∆^{2,2}_{(j,1)} + ∂_j f_2 ∆^{2,2}_{(j,2)} ) =
    = ∂_1 f_1 ∆^{2,2}_{(1,1)} + ∂_1 f_2 ∆^{2,2}_{(1,2)} + ∂_2 f_1 ∆^{2,2}_{(2,1)} + ∂_2 f_2 ∆^{2,2}_{(2,2)} =
    = 0 + ∂_1 f_2 ∆^{2,2}_{(1,2)} − ∂_2 f_1 ∆^{2,2}_{(1,2)} + 0 = ω_{∂_1 f_2 − ∂_2 f_1}.

3. If n = 3, k = 1, then f : V → R^3, so ω_f = f_1 ∆^{3,1}_1 + f_2 ∆^{3,1}_2 + f_3 ∆^{3,1}_3, thus

    dω_f = ∑_{j=1}^3 ∑_{i=1}^3 ∂_j f_i ∆^{3,2}_{(j,i)} = ∑_{j=1}^3 ( ∂_j f_1 ∆^{3,2}_{(j,1)} + ∂_j f_2 ∆^{3,2}_{(j,2)} + ∂_j f_3 ∆^{3,2}_{(j,3)} ) =
    = ∂_1 f_1 ∆^{3,2}_{(1,1)} + ∂_1 f_2 ∆^{3,2}_{(1,2)} + ∂_1 f_3 ∆^{3,2}_{(1,3)} +
    + ∂_2 f_1 ∆^{3,2}_{(2,1)} + ∂_2 f_2 ∆^{3,2}_{(2,2)} + ∂_2 f_3 ∆^{3,2}_{(2,3)} +
    + ∂_3 f_1 ∆^{3,2}_{(3,1)} + ∂_3 f_2 ∆^{3,2}_{(3,2)} + ∂_3 f_3 ∆^{3,2}_{(3,3)} =
    = (∂_2 f_3 − ∂_3 f_2) ∆^{3,2}_{(2,3)} + (∂_3 f_1 − ∂_1 f_3) ∆^{3,2}_{(3,1)} + (∂_1 f_2 − ∂_2 f_1) ∆^{3,2}_{(1,2)} =
    = ω_{rot(f)}.

4. If n = 3, k = 2, then f : V → R^3, so

    ω_f = f_1 ∆^{3,2}_{(2,3)} + f_2 ∆^{3,2}_{(3,1)} + f_3 ∆^{3,2}_{(1,2)},

thus

    dω_f = ∑_{j=1}^3 ( ∂_j f_1 ∆^{3,3}_{(j,2,3)} + ∂_j f_2 ∆^{3,3}_{(j,3,1)} + ∂_j f_3 ∆^{3,3}_{(j,1,2)} ) =
    = (∂_1 f_1 ∆^{3,3}_{(1,2,3)} + 0 + 0) + (0 + ∂_2 f_2 ∆^{3,3}_{(2,3,1)} + 0) + (0 + 0 + ∂_3 f_3 ∆^{3,3}_{(3,1,2)}) =
    = (∂_1 f_1 + ∂_2 f_2 + ∂_3 f_3) ∆^{3,3}_{(1,2,3)} = ω_{div(f)}.
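Case 3 admits a quick symbolic confirmation (again an illustrative snippet, not part
of the thesis; the vector field is an arbitrary choice): with sympy we compute the
three coefficients of dω_f in the basis (∆^{3,2}_{(2,3)}, ∆^{3,2}_{(3,1)}, ∆^{3,2}_{(1,2)})
and compare them with the components of rot(f).

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f1, f2, f3 = x2**2, x3**2, x1**2     # an arbitrary C^1 vector field on R^3

d_omega = (sp.diff(f3, x2) - sp.diff(f2, x3),   # coefficient of Delta^{3,2}_(2,3)
           sp.diff(f1, x3) - sp.diff(f3, x1),   # coefficient of Delta^{3,2}_(3,1)
           sp.diff(f2, x1) - sp.diff(f1, x2))   # coefficient of Delta^{3,2}_(1,2)
print(d_omega)   # (-2*x3, -2*x1, -2*x2), which is exactly rot(f)
```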
2.2.2 The induced form
One of the most important features of differential forms is the way they behave
under differentiable maps.

Definition 2.2.6. Let r ∈ N_0, n, m ∈ N, k ∈ {0, 1, ..., n}, let U ⊂ R^m, V ⊂ R^n be
open sets, ω ∈ Λ^r_k(V) and Φ ∈ C^{r+1}(U, V). We define the induced form or
pullback of ω as the form

    ω ∗ Φ ∈ Λ^r_k(U).

If k = 0, then

    ω ∗ Φ := ω ∘ Φ.

If k > 0, then

    (ω ∗ Φ)(x; x_1, ..., x_k) := ω(Φ(x); Φ′(x)x_1, ..., Φ′(x)x_k)   (x ∈ U, x_1, ..., x_k ∈ R^m).

This definition makes sense since Φ(x) ∈ V and Φ′(x) ∈ R^{n×m}, thus Φ′(x)x_i ∈ R^n.
Theorem 2.2.2. Let U ⊂ R^m, V ⊂ R^n be open sets, Φ ∈ C^{r+1}(U, V),
k ∈ {0, 1, ..., n}, ω, σ ∈ Λ^r_k(V) and g ∈ Λ^r_0(V). Then:

1. (ω + σ) ∗ Φ = ω ∗ Φ + σ ∗ Φ,

2. (gω) ∗ Φ = (g ∘ Φ) · (ω ∗ Φ),

3. if ω_1, ..., ω_k ∈ Λ^r_1(V), then (ω_1 ∧ ... ∧ ω_k) ∗ Φ = (ω_1 ∗ Φ) ∧ ... ∧ (ω_k ∗ Φ).

Proof.

1. If x ∈ U and x_1, ..., x_k ∈ R^m, then

    ((ω + σ) ∗ Φ)(x; x_1, ..., x_k) = (ω + σ)(Φ(x); Φ′(x)x_1, ..., Φ′(x)x_k) =
    = ω(Φ(x); Φ′(x)x_1, ..., Φ′(x)x_k) + σ(Φ(x); Φ′(x)x_1, ..., Φ′(x)x_k) =
    = (ω ∗ Φ)(x; x_1, ..., x_k) + (σ ∗ Φ)(x; x_1, ..., x_k).

2. For x ∈ U and x_1, ..., x_k ∈ R^m we have

    ((gω) ∗ Φ)(x; x_1, ..., x_k) = (gω)(Φ(x); Φ′(x)x_1, ..., Φ′(x)x_k) =
    = g(Φ(x)) ω(Φ(x); Φ′(x)x_1, ..., Φ′(x)x_k) = (g ∘ Φ)(x) · (ω ∗ Φ)(x; x_1, ..., x_k).

3. Let x ∈ U and x_1, ..., x_k ∈ R^m. Then

    ((ω_1 ∧ ... ∧ ω_k) ∗ Φ)(x; x_1, ..., x_k) = (ω_1 ∧ ... ∧ ω_k)(Φ(x); Φ′(x)x_1, ..., Φ′(x)x_k) =
    = det( ω_i(Φ(x); Φ′(x)x_j) ) = det( (ω_i ∗ Φ)(x; x_j) ) =
    = ((ω_1 ∗ Φ) ∧ ... ∧ (ω_k ∗ Φ))(x; x_1, ..., x_k). □
Now we can interpret the meaning of the induced form. Let (x_1, ..., x_n) be
coordinates in R^n and (y_1, ..., y_m) coordinates in R^m. Let Φ : R^n → R^m be a
differentiable map which "substitutes" the coordinates:

    y_1 = Φ_1(x_1, ..., x_n), ..., y_m = Φ_m(x_1, ..., x_n).

Let ω = ∑_{i∈N^k_*} ω_i dy_{i_1} ∧ ... ∧ dy_{i_k} be a k-form in R^m. Then using the
above properties of the induced form we obtain

    ω ∗ Φ = ( ∑_{i∈N^k_*} ω_i dy_{i_1} ∧ ... ∧ dy_{i_k} ) ∗ Φ = ∑_{i∈N^k_*} (ω_i dy_{i_1} ∧ ... ∧ dy_{i_k}) ∗ Φ =
    = ∑_{i∈N^k_*} (ω_i ∘ Φ) · (dy_{i_1} ∧ ... ∧ dy_{i_k}) ∗ Φ = ∑_{i∈N^k_*} (ω_i ∘ Φ) · (dy_{i_1} ∗ Φ) ∧ ... ∧ (dy_{i_k} ∗ Φ).

Since for all x and all v ∈ R^n

    (dy_j ∗ Φ)(x; v) = dy_j(Φ(x); Φ′(x)v) = dy_j(Φ′(x)v) = (Φ′(x)v)_j =
    = ∂_1 Φ_j(x) v_1 + ... + ∂_n Φ_j(x) v_n = ⟨Φ′_j(x), v⟩ = dΦ_j(x)(v),

we have

    ω ∗ Φ = ∑_{i∈N^k_*} (ω_i ∘ Φ) dΦ_{i_1} ∧ ... ∧ dΦ_{i_k}.

Thus the pullback of ω is the same as substituting the variables y_i and their
differentials dy_i by the functions Φ_i and their differentials dΦ_i.
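As a concrete illustration of this substitution rule (my own sketch, using sympy;
the polar-coordinate map is an arbitrary example): pulling back ω = dy_1 ∧ dy_2
along Φ(r, t) = (r cos t, r sin t) should give r · dr ∧ dt, i.e. the Jacobian
determinant appears as the coefficient.

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)
Phi = sp.Matrix([r * sp.cos(t), r * sp.sin(t)])   # Phi : (r, t) -> (y1, y2)

# Pullback of omega = dy1 ∧ dy2: its single coefficient is det(Phi'),
# because (dy1 * Phi) ∧ (dy2 * Phi) = dPhi_1 ∧ dPhi_2 = det(Jacobian) dr ∧ dt.
J = Phi.jacobian([r, t])
print(sp.simplify(J.det()))   # r, so omega * Phi = r · dr ∧ dt
```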
Theorem 2.2.3. Let Φ ∈ C^{r+1}(R^m, R^n), ω, σ ∈ Λ^r_k(R^n) and Ψ ∈ C^{r+1}(R^p, R^m).
Then

1. (ω ∧ σ) ∗ Φ = (ω ∗ Φ) ∧ (σ ∗ Φ),

2. ω ∗ (Φ ∘ Ψ) = (ω ∗ Φ) ∗ Ψ,

3. dω ∗ Φ = d(ω ∗ Φ).
Proof. Let (y_1, ..., y_n) = (Φ_1(x_1, ..., x_m), ..., Φ_n(x_1, ..., x_m)) ∈ R^n for
(x_1, ..., x_m) ∈ R^m, and write

    ω = ∑_{i∈N^k_*} ω_i dy_{i_1} ∧ ... ∧ dy_{i_k},   σ = ∑_{j∈N^k_*} σ_j dy_{j_1} ∧ ... ∧ dy_{j_k}.

1. Then

    (ω ∧ σ) ∗ Φ = ( ∑_{i∈N^k_*} ∑_{j∈N^k_*} ω_i σ_j (dy_{i_1} ∧ ... ∧ dy_{i_k}) ∧ (dy_{j_1} ∧ ... ∧ dy_{j_k}) ) ∗ Φ =
    = ∑_{i∈N^k_*} ∑_{j∈N^k_*} (ω_i ∘ Φ) · (σ_j ∘ Φ) · (dΦ_{i_1} ∧ ... ∧ dΦ_{i_k}) ∧ (dΦ_{j_1} ∧ ... ∧ dΦ_{j_k}) =
    = ( ∑_{i∈N^k_*} (ω_i ∘ Φ) · dΦ_{i_1} ∧ ... ∧ dΦ_{i_k} ) ∧ ( ∑_{j∈N^k_*} (σ_j ∘ Φ) · dΦ_{j_1} ∧ ... ∧ dΦ_{j_k} ) =
    = (ω ∗ Φ) ∧ (σ ∗ Φ).

2. Let x, x_1, ..., x_k ∈ R^p. Then

    (ω ∗ (Φ ∘ Ψ))(x; x_1, ..., x_k) = ω(Φ ∘ Ψ(x); (Φ ∘ Ψ)′(x)x_1, ..., (Φ ∘ Ψ)′(x)x_k) =
    = ω(Φ(Ψ(x)); Φ′(Ψ(x))Ψ′(x)x_1, ..., Φ′(Ψ(x))Ψ′(x)x_k) =
    = (ω ∗ Φ)(Ψ(x); Ψ′(x)x_1, ..., Ψ′(x)x_k) =
    = ((ω ∗ Φ) ∗ Ψ)(x; x_1, ..., x_k).

3. Step 1. First we prove the statement for 0-forms. In this case ω : R^n → R is
a differentiable function, and by the chain rule

    dω ∗ Φ = ( ∑_{i=1}^n (∂ω/∂y_i) dy_i ) ∗ Φ = ∑_{i=1}^n ∑_{j=1}^m ((∂ω/∂y_i) ∘ Φ) · (∂Φ_i/∂x_j) dx_j =
    = ∑_{j=1}^m (∂(ω ∘ Φ)/∂x_j) dx_j = d(ω ∘ Φ) = d(ω ∗ Φ).

Step 2. Now consider the case k > 0. Using the product rule and the fact that
d(dΦ_{i_1} ∧ ... ∧ dΦ_{i_k}) = 0, we get

    d(ω ∗ Φ) = d( ∑_{i∈N^k_*} (ω_i ∘ Φ) · (dy_{i_1} ∗ Φ) ∧ ... ∧ (dy_{i_k} ∗ Φ) ) =
    = ∑_{i∈N^k_*} d( (ω_i ∘ Φ) · (dΦ_{i_1} ∧ ... ∧ dΦ_{i_k}) ) =
    = ∑_{i∈N^k_*} d(ω_i ∘ Φ) ∧ (dΦ_{i_1} ∧ ... ∧ dΦ_{i_k}) =
    = ∑_{i∈N^k_*} (dω_i ∗ Φ) ∧ (dy_{i_1} ∗ Φ) ∧ ... ∧ (dy_{i_k} ∗ Φ) =
    = ( ∑_{i∈N^k_*} dω_i ∧ dy_{i_1} ∧ ... ∧ dy_{i_k} ) ∗ Φ = dω ∗ Φ. □
2.2.3 Closed and exact forms
From the theory of the Riemann integral it is a well-known fact that every
continuous function on an interval I ⊂ R has a primitive function. In the language
of differential forms this means that if ω ∈ Λ^0_1(I) has the form ω(x) = f(x)dx
with some continuous function f, then there exists a function F such that dF = ω.

Now we ask the analogous question of whether every differential form has a
"primitive form". We would like to give a necessary and sufficient condition for
the existence of such a form.

Definition 2.2.7. The differential form ω ∈ Λ^r_k(V) is said to be

1. closed if dω = 0,

2. exact if there is an η ∈ Λ^{r+1}_{k−1}(V) such that dη = ω.

Since d(dω) = 0 for all ω ∈ Λ^2_k(V), it is clear that every exact form is closed.
The following "lemma" gives a condition on the domain under which the converse
also holds.
Theorem 2.2.4 (Poincaré lemma). Let k ∈ N, 2 ≤ r ∈ N ∪ {∞} and let V be a
star-shaped domain, i.e. one containing a point a ∈ V (a so-called star-point)
such that for all x ∈ V

    [a, x] := { a + t(x − a) ∈ R^n : t ∈ [0, 1] } ⊂ V.

Then ω ∈ Λ^r_k(V) is closed if and only if it is exact.

Proof. It only remains to show that if ω is closed, then it is exact. We may
suppose that the star-point of V is 0 (if a ≠ 0 is a star-point and 0 is not, we
can apply the transformation y := x − a), and write the closed form ω as

    ω = ∑_{i∈N^k_*} ω_i ∆^{n,k}_i.

We will define an operator P : Λ^r_k(V) → Λ^{r+1}_{k−1}(V) for which dP(ω) = ω;
this proves that ω is exact. For x ∈ V let

    P(ω)(x) := ∑_{i∈N^k_*} ∑_{l=1}^k (−1)^{l−1} ( ∫_0^1 t^{k−1} ω_i(tx) dt ) x_{i_l} dx_{i_1} ∧ ... ∧ dx_{i_{l−1}} ∧ dx_{i_{l+1}} ∧ ... ∧ dx_{i_k}.

Differentiating the coordinate functions (the first sum below comes from
differentiating the factors x_{i_l}, each of the k terms reproducing ∆^{n,k}_i; the
second from differentiating under the integral sign) we get

    dP(ω)(x) = k ∑_{i∈N^k_*} ( ∫_0^1 t^{k−1} ω_i(tx) dt ) ∆^{n,k}_i +
    + ∑_{j=1}^n ∑_{i∈N^k_*} ∑_{l=1}^k (−1)^{l−1} ( ∫_0^1 t^k ∂_j ω_i(tx) dt ) x_{i_l} dx_j ∧ dx_{i_1} ∧ ... ∧ dx_{i_{l−1}} ∧ dx_{i_{l+1}} ∧ ... ∧ dx_{i_k}.
Now let us calculate P(dω)(x). Since

    dω = ∑_{j=1}^n ∑_{i∈N^k_*} ∂_j ω_i ∆^{n,k+1}_{(j,i)},

we have

    P(dω)(x) = ∑_{j=1}^n ∑_{i∈N^k_*} ( ∫_0^1 t^k ∂_j ω_i(tx) dt ) x_j ∆^{n,k}_i −
    − ∑_{j=1}^n ∑_{i∈N^k_*} ∑_{l=1}^k (−1)^{l−1} ( ∫_0^1 t^k ∂_j ω_i(tx) dt ) x_{i_l} dx_j ∧ dx_{i_1} ∧ ... ∧ dx_{i_{l−1}} ∧ dx_{i_{l+1}} ∧ ... ∧ dx_{i_k}.

If we add the two formulas, a lot of terms cancel each other out, and we get

    (P(dω) + dP(ω))(x) = k ∑_{i∈N^k_*} ( ∫_0^1 t^{k−1} ω_i(tx) dt ) ∆^{n,k}_i +
    + ∑_{i∈N^k_*} ∑_{j=1}^n ( ∫_0^1 t^k ∂_j ω_i(tx) dt ) x_j ∆^{n,k}_i =
    = ∑_{i∈N^k_*} ( ∫_0^1 (d/dt)[ t^k ω_i(tx) ] dt ) ∆^{n,k}_i =
    = ∑_{i∈N^k_*} [ 1·ω_i(x) − 0·ω_i(0) ] ∆^{n,k}_i = ω(x).

So we have

    P(dω) + dP(ω) = ω.

Using our assumption that ω is closed, this simplifies to dP(ω) = ω, which means
that P(ω) is the form we were looking for. □
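For k = 1 the operator P reduces to P(ω)(x) = ∑_i (∫_0^1 ω_i(tx) dt) x_i, a scalar
potential. The snippet below (an illustrative sketch, not from the thesis; the
closed 1-form chosen is arbitrary) evaluates this integral with a midpoint rule and
checks that the partial derivatives of P(ω) reproduce the coefficients of ω.

```python
import numpy as np

def potential(omega_coeffs, x, steps=2001):
    """P(omega)(x) = sum_i (integral_0^1 omega_i(t x) dt) * x_i for a 1-form."""
    t = (np.arange(steps) + 0.5) / steps           # midpoint rule on [0, 1]
    total = 0.0
    for i, w in enumerate(omega_coeffs):
        total += np.mean([w(ti * x) for ti in t]) * x[i]
    return total

# A closed 1-form on R^2: omega = 2 x1 x2 dx1 + x1^2 dx2  (it is d(x1^2 x2)).
omega = [lambda x: 2 * x[0] * x[1], lambda x: x[0] ** 2]
x0 = np.array([0.8, -0.4])
h, e1, e2 = 1e-5, np.array([1.0, 0]), np.array([0, 1.0])
dP1 = (potential(omega, x0 + h * e1) - potential(omega, x0 - h * e1)) / (2 * h)
dP2 = (potential(omega, x0 + h * e2) - potential(omega, x0 - h * e2)) / (2 * h)
print(dP1, omega[0](x0))   # both ≈ -0.64
print(dP2, omega[1](x0))   # both ≈ 0.64
```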
2.2.4 Integration
Let us introduce the notation

    I^k := [0, 1]^k   (k ∈ N)

for the closed unit cube in R^k. By convention

    I := I^1   and   I^0 := {0}.
Definition 2.2.8. Let k ∈ N_0, 2 ≤ n ∈ N, r ∈ N ∪ {∞} and let ∅ ≠ V ⊂ R^n be an
open set. Consider a map

    Φ : I^k → V.

Φ is called a k-cube if it is continuous. If Φ is also r-times continuously
differentiable, then it is called an (r-times) smooth k-cube.

Definition 2.2.9. Let k, n ∈ N, let ∅ ≠ V ⊂ R^n be an open set and let Φ be a
k-cube. If j ∈ {1, ..., k} and s ∈ {0, 1}, then Φ_{js} denotes the following
(k−1)-cube:

    Φ_{js} : I^{k−1} → V,
    Φ_{js}(x_1, ..., x_{k−1}) := Φ(s)   (k = 1),
    Φ_{js}(x_1, ..., x_{k−1}) := Φ(x_1, ..., x_{j−1}, s, x_j, ..., x_{k−1})   (k > 1).

The set

    ∂Φ := { Φ_{js} : j ∈ {1, ..., k}, s ∈ {0, 1} }

is called the boundary of the k-cube Φ.

Since dim(A^k(R^k)) = (k choose k) = 1, A^k(R^k) has only one basis element:
∆^{k,k}_{(1,...,k)}. So if k ∈ {1, ..., n} and Φ ∈ C^{r+1}(I^k, V), then
ω ∗ Φ ∈ Λ^r_k(I^k), which means the induced form can be written as

    ω ∗ Φ = Ω ∆^{k,k}_{(1,...,k)}

with some function Ω ∈ C^{r+1}(I^k, R).
Definition 2.2.10. Let k ∈ {0, ..., n}, ω ∈ Λ^r_k(V) and Φ ∈ C^{r+1}(I^k, V). We
define the integral of the k-form ω over the k-cube Φ as follows.

If k = 0, then

    ∫_Φ ω := ω(Φ(0)).

If k > 0, then

    ∫_Φ ω := ∫_{I^k} Ω.
Special cases.

• The case k = 1. In this case ω and Φ are the following:

    ω = ω_f = ∑_{i=1}^n f_i ∆^{n,1}_i,   Φ ∈ C^{r+1}([0, 1], V),

where f = (f_1, ..., f_n) ∈ C^r(V, R^n). Let us compute the induced form
ω_f ∗ Φ ∈ Λ^r_1([0, 1]). For x ∈ [0, 1] and y ∈ R,

    (ω_f ∗ Φ)(x)(y) = ∑_{i=1}^n f_i(Φ(x)) ∆^{n,1}_i(Φ′(x)y) = ∑_{i=1}^n f_i(Φ(x)) (Φ′(x)y)_i =
    = ∑_{i=1}^n f_i(Φ(x)) Φ′_i(x) y = ∑_{i=1}^n f_i(Φ(x)) Φ′_i(x) ∆^{1,1}_1(y) =
    = ⟨f ∘ Φ, Φ′⟩(x) ∆^{1,1}_1(y) = ( ⟨f ∘ Φ, Φ′⟩ ∆^{1,1}_1 )(x)(y).

So in this case Ω = ⟨f ∘ Φ, Φ′⟩, thus

    ∫_Φ ω_f = ∫_{[0,1]} ⟨f ∘ Φ, Φ′⟩ = ∫_Φ f   (line integral).

• The case k = n. In this case ω and Φ are the following:

    ω = ω_f = f ∆^{n,n}_{(1,...,n)},   Φ ∈ C^{r+1}(I^n, V),

where f ∈ C^r(V, R). Let x ∈ I^n, x_1, ..., x_n ∈ R^n and suppose that Φ is
regular, i.e. det(Φ′(x)) > 0 for all x. Then

    (ω_f ∗ Φ)(x)(x_1, ..., x_n) = f(Φ(x)) ∆^{n,n}_{(1,...,n)}(Φ′(x)x_1, ..., Φ′(x)x_n) =
    = f(Φ(x)) det(Φ′(x)x_1, ..., Φ′(x)x_n) = f(Φ(x)) det(Φ′(x)) det(x_1, ..., x_n) =
    = ((f ∘ Φ) det(Φ′))(x) ∆^{n,n}_{(1,...,n)}(x_1, ..., x_n).

So in this case Ω = (f ∘ Φ) det(Φ′), thus

    ∫_Φ ω_f = ∫_{I^n} (f ∘ Φ) det(Φ′) = ∫_{I^n} (f ∘ Φ) |det(Φ′)| = ∫_{R_Φ} f   (multiple integral),

where R_Φ denotes the range of Φ and the last step is the change of variables
theorem.

• The case n = 3, k = 2. In this case ω and Φ are the following:

    ω = ω_f = f_1 ∆^{3,2}_{(2,3)} + f_2 ∆^{3,2}_{(3,1)} + f_3 ∆^{3,2}_{(1,2)},   Φ ∈ C^{r+1}(I^2, V),

where f = (f_1, f_2, f_3) ∈ C^r(V, R^3). Here we take a different approach to
compute Ω, using the formula for the coordinate functions of a differential form.
Since ω_f ∗ Φ = Ω ∆^{2,2}_{(1,2)}, we have for x ∈ I^2:

    Ω(x) = (ω_f ∗ Φ)(x; e_1, e_2) = ω_f(Φ(x); Φ′(x)e_1, Φ′(x)e_2) = ω_f(Φ(x); ∂_1Φ(x), ∂_2Φ(x)) =
    = f_1(Φ(x)) ∆^{3,2}_{(2,3)}(∂_1Φ(x), ∂_2Φ(x)) + f_2(Φ(x)) ∆^{3,2}_{(3,1)}(∂_1Φ(x), ∂_2Φ(x)) +
    + f_3(Φ(x)) ∆^{3,2}_{(1,2)}(∂_1Φ(x), ∂_2Φ(x)) =
    = f_1(Φ(x))(∂_1Φ_2 ∂_2Φ_3 − ∂_2Φ_2 ∂_1Φ_3) + f_2(Φ(x))(∂_1Φ_3 ∂_2Φ_1 − ∂_2Φ_3 ∂_1Φ_1) +
    + f_3(Φ(x))(∂_1Φ_1 ∂_2Φ_2 − ∂_2Φ_1 ∂_1Φ_2) =
    = (f ∘ Φ)_1(x)(∂_1Φ × ∂_2Φ)_1(x) + (f ∘ Φ)_2(x)(∂_1Φ × ∂_2Φ)_2(x) +
    + (f ∘ Φ)_3(x)(∂_1Φ × ∂_2Φ)_3(x) = ⟨f ∘ Φ, ∂_1Φ × ∂_2Φ⟩(x)   (x ∈ I^2).

Thus

    ∫_Φ ω_f = ∫_{I^2} ⟨f ∘ Φ, ∂_1Φ × ∂_2Φ⟩ = ∫_Φ f   (surface integral).
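The k = 1 case translates directly into code. The sketch below (my own
illustration; the curve and the field are arbitrary choices) integrates
Ω = ⟨f ∘ Φ, Φ′⟩ over [0, 1] for the 1-form ω_f with f(x, y) = (−y, x) along the
quarter circle Φ(t) = (cos(πt/2), sin(πt/2)); the exact value of the line integral
is π/2.

```python
import numpy as np

def integrate_1form(f, Phi, dPhi, steps=2000):
    """Integral of omega_f over the 1-cube Phi: the [0,1]-integral of <f(Phi), Phi'>."""
    t = (np.arange(steps) + 0.5) / steps          # midpoint rule on [0, 1]
    vals = [np.dot(f(Phi(ti)), dPhi(ti)) for ti in t]
    return np.mean(vals)

f    = lambda p: np.array([-p[1], p[0]])
Phi  = lambda t: np.array([np.cos(np.pi * t / 2), np.sin(np.pi * t / 2)])
dPhi = lambda t: (np.pi / 2) * np.array([-np.sin(np.pi * t / 2), np.cos(np.pi * t / 2)])

print(integrate_1form(f, Phi, dPhi))   # ≈ 1.5707963 = pi/2
```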
Definition 2.2.11. Let k ∈ {0, ..., n}, ω ∈ Λ^r_k(V) and Φ ∈ C^{r+1}(I^{k+1}, V). We
define the integral of the k-form ω over the boundary of the (k+1)-cube Φ
as

    ∫_{∂Φ} ω := ∑_{j=1}^{k+1} ∑_{s=0}^1 (−1)^{j+s} ∫_{Φ_{js}} ω.
2.3 The Poincaré-Stokes theorem
Theorem 2.3.1. Let ∅ ≠ V ⊂ R^n be an open set and ω ∈ Λ^r_k(V) (k = 0, ..., n−1).
Let Φ ∈ C^{r+1}(I^{k+1}, V) be a (k+1)-cube. Then

    ∫_Φ dω = ∫_{∂Φ} ω.
Proof.

Step 1. First we prove the case k = 0. In this case ω : V → R and Φ : [0, 1] → V.
Using the Newton-Leibniz rule for line integrals we have

    ∫_{∂Φ} ω = ∫_{Φ_{11}} ω − ∫_{Φ_{10}} ω = ω(Φ(1)) − ω(Φ(0)) = ∫_Φ grad ω = ∫_Φ dω.
Step 2. Now we prove the case k > 0 when Φ is the identity map:

    V = I^{k+1},   Φ : I^{k+1} → I^{k+1},   Φ(x) = x   (x ∈ I^{k+1}).

Let us denote by i* the multiindex i* := (1, ..., i−1, i+1, ..., k+1). This way
ω ∈ Λ^r_k(I^{k+1}) can be expressed as

    ω = ∑_{i=1}^{k+1} ω_i ∆^{k+1,k}_{i*}.

The derivative of ω is then

    dω = ∑_{i=1}^{k+1} ∑_{j=1}^{k+1} ∂_j ω_i ∆^{k+1,k+1}_{(j,i*)}.

The only non-zero terms are those with j = i, in which case the index i needs to
be interchanged with i−1 others to reach the identity permutation. So we have

    dω = ( ∑_{i=1}^{k+1} (−1)^{i−1} ∂_i ω_i ) ∆^{k+1,k+1}_{(1,...,k+1)}.

Hence, by the definition of the integral,

    ∫_Φ dω = ∑_{i=1}^{k+1} (−1)^{i−1} ∫_{I^{k+1}} ∂_i ω_i.
Let us compute the other side of the equation. The boundary of Φ by definition is

    ∂Φ = { Φ_{js} : j ∈ {1, ..., k+1}, s ∈ {0, 1} },

where

    Φ_{js} : I^k → I^{k+1},   Φ_{js}(x_1, ..., x_k) = Φ(x_1, ..., x_{j−1}, s, x_j, ..., x_k)
    (x_1, ..., x_k ∈ [0, 1]).

Since ω ∗ Φ_{js} ∈ Λ^r_k(I^k), it can be written as

    ω ∗ Φ_{js} = Ω_{js} ∆^{k,k}_{(1,...,k)}   (Ω_{js} : I^k → R).

Let x = (x_1, ..., x_k) ∈ I^k and let e_1, ..., e_k be the canonical basis of R^k.
Using the formula for the coordinate functions we have

    Ω_{js}(x) = (ω ∗ Φ_{js})(x; e_1, ..., e_k) = ω(Φ_{js}(x); Φ′_{js}(x)e_1, ..., Φ′_{js}(x)e_k) =
    = ∑_{i=1}^{k+1} ω_i(Φ_{js}(x)) ∆^{k+1,k}_{i*}(∂_1Φ_{js}(x), ..., ∂_kΦ_{js}(x)).

Here the partial derivatives are

    ∂_iΦ_{js}(x_1, ..., x_k) = (0, ..., 1, ..., 0)   (i ∈ {1, ..., k}),

where the 1 is in the i-th position if i < j and in the (i+1)-th position if
i ≥ j. This means that the argument of ∆^{k+1,k}_{i*} is a matrix with k+1 rows and
k columns which is almost the "identity" matrix, except that the j-th row is
filled with 0's. It follows that the only non-zero term in the summation is the
one with i = j, for which

    ∆^{k+1,k}_{j*}(∂_1Φ_{js}(x), ..., ∂_kΦ_{js}(x)) = 1,

the determinant of the identity matrix. It follows that

    Ω_{js}(x) = ω_j(Φ_{js}(x)) = ω_j(Φ(x_1, ..., x_{j−1}, s, x_j, ..., x_k)),

and since Φ is the identity map we get

    Ω_{js}(x) = ω_j(x_1, ..., x_{j−1}, s, x_j, ..., x_k),

so

    (ω ∗ Φ_{js})(x) = ω_j(x_1, ..., x_{j−1}, s, x_j, ..., x_k) ∆^{k,k}_{(1,...,k)}.
Now we can compute the integral over the boundary:

    ∫_{∂Φ} ω = ∑_{j=1}^{k+1} ∑_{s=0}^1 (−1)^{j+s} ∫_{Φ_{js}} ω = ∑_{j=1}^{k+1} (−1)^{j+1} ( ∫_{Φ_{j1}} ω − ∫_{Φ_{j0}} ω ) =
    = ∑_{j=1}^{k+1} (−1)^{j−1} ∫_{I^k} ( ω_j(x_1, ..., x_{j−1}, 1, x_j, ..., x_k) −
    − ω_j(x_1, ..., x_{j−1}, 0, x_j, ..., x_k) ) =
    = ∑_{j=1}^{k+1} (−1)^{j−1} ∫_{I^k} ( ∫_0^1 ∂_j ω_j(x_1, ..., x_{j−1}, t, x_j, ..., x_k) dt ) =
    = ∑_{j=1}^{k+1} (−1)^{j−1} ∫_{I^{k+1}} ∂_j ω_j,

which is the same result we got earlier.
Step 3. Now we prove the theorem for an arbitrary (k+1)-cube Φ : I^{k+1} → V,
using the fact that the exterior derivative commutes with the pullback of forms:

    ∫_Φ dω = ∫_{I^{k+1}} dω ∗ Φ = ∫_{I^{k+1}} d(ω ∗ Φ).

Since d(ω ∗ Φ) ∈ Λ^{r−1}_{k+1}(I^{k+1}), we can use the result of Step 2:

    ∫_{I^{k+1}} d(ω ∗ Φ) = ∫_{∂I^{k+1}} ω ∗ Φ = ∑_{j=1}^{k+1} ∑_{s=0}^1 (−1)^{j+s} ∫_{(I^{k+1})_{js}} ω ∗ Φ =
    = ∑_{j=1}^{k+1} ∑_{s=0}^1 (−1)^{j+s} ∫_{Φ_{js}} ω = ∫_{∂Φ} ω. □
Special cases.

• The case k = 0. In this case

    ω ∈ Λ^r_0(V) = C^r(V, R),   Φ ∈ C^{r+1}([0, 1], V).

We have already shown that if f ∈ C^r(V, R) and ω := ω_f, then dω = ω_{grad(f)}.
Thus

    ∫_Φ ω_{grad(f)} = ∫_Φ dω = ∫_{∂Φ} ω = ∑_{s=0}^1 (−1)^{1+s} ∫_{Φ_{1s}} ω = ∑_{s=0}^1 (−1)^{1+s} ω(Φ_{1s}(0)) =
    = ∑_{s=0}^1 (−1)^{1+s} ω(Φ(s)) = ω(Φ(1)) − ω(Φ(0)).

This is the same as

    ∫_Φ grad(f) = f(Φ(1)) − f(Φ(0))   (Newton-Leibniz formula for line integrals).

• The case n = 2, k = 1. In this case

    ω ∈ Λ^r_1(V),   Φ ∈ C^{r+1}(I^2, V).

We have already shown that if

    f = (f_1, f_2) ∈ C^r(V, R^2),   ω := ω_f = f_1 ∆^{2,1}_1 + f_2 ∆^{2,1}_2,

then dω = ω_{∂_1 f_2 − ∂_2 f_1}. Thus, if Φ is regular,

    ∫_{R_Φ} (∂_1 f_2 − ∂_2 f_1) = ∫_Φ ω_{∂_1 f_2 − ∂_2 f_1} = ∫_Φ dω = ∫_{∂Φ} ω = ∫_{∂Φ} f.

This is Green's theorem.

• The case n = 3, k = 1. In this case

    ω ∈ Λ^r_1(V),   Φ ∈ C^{r+1}(I^2, V).

We have already shown that if

    f = (f_1, f_2, f_3) ∈ C^r(V, R^3),   ω := ω_f = f_1 ∆^{3,1}_1 + f_2 ∆^{3,1}_2 + f_3 ∆^{3,1}_3,

then dω = ω_{rot(f)}. Thus, if Φ is regular,

    ∫_Φ rot(f) = ∫_Φ ω_{rot(f)} = ∫_Φ dω = ∫_{∂Φ} ω = ∫_{∂Φ} f.

This is Stokes' theorem.

• The case n = 3, k = 2. In this case

    ω ∈ Λ^r_2(V),   Φ ∈ C^{r+1}(I^3, V).

We have already shown that if

    f = (f_1, f_2, f_3) ∈ C^r(V, R^3),   ω := ω_f = f_1 ∆^{3,2}_{(2,3)} + f_2 ∆^{3,2}_{(3,1)} + f_3 ∆^{3,2}_{(1,2)},

then dω = ω_{div(f)}. Thus, if Φ is regular,

    ∫_{R_Φ} div(f) = ∫_Φ ω_{div(f)} = ∫_Φ dω = ∫_{∂Φ} ω = ∫_{∂Φ} f.

This is the Gauss (divergence) theorem.
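As a sanity check (an illustrative numeric experiment, not part of the thesis), the
n = 2, k = 1 case can be verified directly on the identity 2-cube: with
f(x, y) = (−y, x) the theorem asserts that ∫_{I^2}(∂_1 f_2 − ∂_2 f_1) equals the
signed boundary line integral; both sides give 2 for the unit square.

```python
import numpy as np

f = lambda p: np.array([-p[1], p[0]])
curl = lambda p: 2.0                     # ∂1 f2 − ∂2 f1 = 1 − (−1) = 2 for this f

m = 400
t = (np.arange(m) + 0.5) / m

# Left side: integral of the coefficient of d(omega_f) over the unit square I^2.
lhs = np.mean([curl(np.array([a, b])) for a in t for b in t])

# Right side: sum over the four faces Phi_js with signs (−1)^{j+s}.
# Face Phi_js fixes coordinate j at value s; the pullback keeps only the component
# of f belonging to the free coordinate.
rhs = 0.0
for j, s, sign in [(1, 0, -1), (1, 1, 1), (2, 0, 1), (2, 1, -1)]:
    for u in t:
        p = np.array([s, u]) if j == 1 else np.array([u, s])
        comp = 1 if j == 1 else 0
        rhs += sign * f(p)[comp] / m

print(lhs, rhs)   # both ≈ 2.0
```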
Chapter 3
Applications
3.1 Maxwell's equations
3.1.1 The classical form of Maxwell's equations
In his work published in 1865, Maxwell worked out a unified theory which connected
the seemingly different phenomena of electricity and magnetism. His four equations
describe the impact of an electromagnetic field on a distribution of electric
charges in space, as well as the interaction between the electric field and the
magnetic field.

Physical quantities can be modeled by different mathematical objects. In this
section we first introduce the model most commonly used in classical
electrodynamics; then we present another approach: a model using differential
forms.

Definition 3.1.1. Let Ω ⊂ R^3 be a domain. The electric field and the magnetic
field are described by time dependent differentiable vector fields E and B defined
on the domain Ω:

    E : Ω × R → R^3,   B : Ω × R → R^3.

The electric charge density ρ is described by a time dependent scalar field and
the current density j by a time dependent vector field on the domain Ω:

    ρ : Ω × R → R,   j : Ω × R → R^3.

The electric and magnetic fields act on the electric charges and currents; in
turn, electric charges and currents generate the electric and magnetic fields.
Maxwell's equations describe how the interaction between these quantities works.
Having chosen units in which µ_0 = ε_0 = c = 1, the equations take the following
form:

    div E = ρ,   rot E = −∂B/∂t,
    div B = 0,   rot B = j + ∂E/∂t.
3.1.2 Minkowski spacetime
One of the most important principles in physics is that every law describing
nature must be expressed by equations which are independent of the location of
the observer. Physical phenomena do not depend on the coordinate system in which
we describe them, so the equations must be invariant under changes of coordinate
systems.

This principle led to the birth of the theory of special relativity and the
concept of spacetime, since Maxwell's equations are not invariant under the
classical (Galilean) changes of coordinates. However, if we do not distinguish
time from the spatial coordinates, we can overcome this trouble. To read more
about the topic, see [8].

Now we construct the mathematical background for the Minkowski spacetime.

Definition 3.1.2. Let V be a finite dimensional vector space. V is called a
pseudo-Euclidean space if on V there is a symmetric, bilinear, non-degenerate map

    g : V × V → R,

i.e. for all u, v, w ∈ V and λ ∈ R the following properties are satisfied:

1. g(λu + v, w) = λg(u, w) + g(v, w),

2. g(u, λv + w) = λg(u, v) + g(u, w)   (bilinearity),

3. g(u, v) = g(v, u)   (symmetry),

4. g(u, v) = 0 for all u ∈ V ⇐⇒ v = 0   (non-degeneracy).

For a given basis e_1, ..., e_n in V let us define the matrix

    M(g) := (g(e_i, e_j))_{i,j=1}^n := (g_{ij})_{i,j=1}^n.
We will denote this pseudo-Euclidean space by (V, g).

A pseudo-Euclidean space is "less" than a Euclidean space, since the map g is not
positive definite; if it were, it would be a scalar product. In the physics
literature, however, g is often (somewhat loosely) called a scalar product, since
the algebraic properties are quite similar.
Definition 3.1.3. Let (V, g) be a pseudo-Euclidean space and e_1, ..., e_n a basis
in V. We say that e_1, ..., e_n is an orthonormal basis if

    g(e_i, e_j) = ±1   (i = j),   0   (i ≠ j).
Theorem 3.1.1 (Sylvester). Let (V, g) be a pseudo-Euclidean space. Then there
exists a basis e_1, ..., e_n in V such that the matrix M(g) is diagonal and only
contains +1's and −1's, i.e.

    M(g) = diag(1, ..., 1, −1, ..., −1).

The numbers p of +1's and q of −1's are independent of the particular basis. The
pair (p, q) is called the signature of g.
We will not prove this theorem here; for a proof see for example [9].

Since the matrix M(g) is diagonal and has no 0 entries, it is invertible. We will
denote the entries of the inverse matrix by g^{ij}.
Definition 3.1.4. The pseudo-Euclidean space (R^4, g) is called Minkowski
spacetime if g has signature (3, 1).

A point in Minkowski spacetime is given by four coordinates:

    x = (x_1, x_2, x_3, ct) ∈ R^4,

where the first three coordinates are the usual "spatial" coordinates and the
fourth one is the time coordinate, c being the speed of light in vacuum. In
theoretical works one frequently chooses a unit system in which c = 1 (we will do
so as well). Another commonly used notation is x_0 := t, so in computations the
index 0 always refers to the time coordinate.
3.1.3 The Hodge-star operator
In this section we introduce the Hodge-star operator, which is an isomorphism
between the spaces A^k(V) and A^{n−k}(V). It will be an essential tool for
modeling the electric and magnetic fields with differential forms: it will turn
out that these quantities strongly depend on each other, and it is better to model
them as one object.

First we extend the map g of a pseudo-Euclidean space (V, g) to the spaces A^k(V).
Adding this new structure makes them pseudo-Euclidean spaces as well. The
extension is defined relative to an orthonormal basis.

Definition 3.1.5. Let (V, g) be a pseudo-Euclidean space and e_1, ..., e_n an
orthonormal basis in it. For f_1, f_2 ∈ A^k(V) let

    g_k : A^k(V) × A^k(V) → R,

    g_k(f_1, f_2) := ∑_{i∈N^k_*} g^{i_1 i_1} ··· g^{i_k i_k} f_1(e_{i_1}, ..., e_{i_k}) f_2(e_{i_1}, ..., e_{i_k}).
Theorem 3.1.2. The map g_k defined above is non-degenerate, symmetric and
bilinear. Moreover, if e_1, ..., e_n is an orthonormal basis in V (with respect to
g), then {∆^{n,k}_i : i ∈ N^k_*} is an orthonormal basis in A^k(V).

Proof. The bilinearity and symmetry of g_k are clear, so we focus on the
non-degeneracy. Let h ∈ A^k(V) and suppose that

    g_k(f, h) = ∑_{i∈N^k_*} g^{i_1 i_1} ··· g^{i_k i_k} f(e_{i_1}, ..., e_{i_k}) h(e_{i_1}, ..., e_{i_k}) = 0

for all f ∈ A^k(V). We need to show that

    h = 0 ∈ A^k(V).

Since a k-form is uniquely determined by its values on the k-tuples of basis
vectors with strictly increasing multiindices, let us choose f as follows:

    f(e_{i_1}, ..., e_{i_k}) := g^{i_1 i_1} ··· g^{i_k i_k} h(e_{i_1}, ..., e_{i_k}).

This way

    g_k(f, h) = ∑_{i∈N^k_*} ( g^{i_1 i_1} ··· g^{i_k i_k} )^2 h(e_{i_1}, ..., e_{i_k})^2 = 0.

This is only possible if all the terms of the summation are 0. Since the numbers
g^{i_j i_j} are not 0, it follows that every h(e_{i_1}, ..., e_{i_k}) must be 0,
which means that h = 0 ∈ A^k(V); thus g_k is non-degenerate.

To prove the orthonormality, fix two multiindices i, j ∈ N^k_*:

    g_k(∆^{n,k}_i, ∆^{n,k}_j) = ∑_{l∈N^k_*} g^{l_1 l_1} ··· g^{l_k l_k} ∆^{n,k}_i(e_{l_1}, ..., e_{l_k}) ∆^{n,k}_j(e_{l_1}, ..., e_{l_k}).

This expression is non-zero if and only if i = j (and then only the term l = i = j
survives), in which case it is ±1; thus the basis {∆^{n,k}_i : i ∈ N^k_*} is indeed
orthonormal. □
In addition to the map g, we also fix an orientation on V. For this, consider the
set B(V) of all ordered bases B = (v_1, ..., v_n). For two ordered bases
B = (v_1, ..., v_n) and B′ = (v′_1, ..., v′_n) there exists a linear transformation
A(B, B′) = (a_{ij})_{i,j=1}^n which transforms one into the other:

    v_i = ∑_{j=1}^n a_{ij} v′_j.

We can define an equivalence relation ∼ on the set B(V) by requiring

    B ∼ B′ ⇐⇒ det(A(B, B′)) > 0.

This way we get two equivalence classes.

Definition 3.1.6. An orientation of V is a choice of one of the two equivalence
classes in the set B(V).
Definition 3.1.7. Let (V, g) be an oriented pseudo-Euclidean space. Let
e_1, ..., e_n be a basis in V such that the matrix M(g) has the diagonal form of
Sylvester's theorem and (e_1, ..., e_n) is positively oriented. We define the
volume form dV ∈ A^n(V) as follows:

    dV(v_1, ..., v_n) := det( g(v_i, e_j) )_{i,j=1}^n = ∆^{n,n}_{(1,...,n)}(v_1, ..., v_n)   (v_1, ..., v_n ∈ V).
Theorem 3.1.3. Let (V, g) be a pseudo-Euclidean space and f : V → R a linear
function. Then there exists a unique β ∈ V such that f(α) = g(α, β) for all α ∈ V.

Proof.

Step 1 (uniqueness). Suppose that f(α) = g(α, β) = g(α, γ) for all α ∈ V. Then

    g(α, β) − g(α, γ) = 0.

Using the bilinearity of g we get

    g(α, β − γ) = 0   (α ∈ V).

Since g is non-degenerate, it follows that

    β − γ = 0   =⇒   β = γ.

Step 2 (existence). Let e_1, ..., e_n be an orthonormal basis in V, expand α in
terms of the basis elements:

    α = ∑_{i=1}^n α_i e_i,

and let β be

    β := ∑_{j=1}^n g(e_j, e_j) f(e_j) e_j.

Using the bilinearity of g we get

    g(α, β) = g( ∑_{i=1}^n α_i e_i, ∑_{j=1}^n g(e_j, e_j) f(e_j) e_j ) = ∑_{i=1}^n ∑_{j=1}^n α_i g(e_j, e_j) f(e_j) g(e_i, e_j).

Because the basis is orthonormal, this simplifies to

    g(α, β) = ∑_{i=1}^n α_i g(e_i, e_i)^2 f(e_i).

Since g(e_i, e_i)^2 = 1, using the linearity of f we obtain

    g(α, β) = ∑_{i=1}^n α_i f(e_i) = f( ∑_{i=1}^n α_i e_i ) = f(α). □
Now let us apply this result to the vector space A^{n−k}(V). Let λ ∈ A^k(V) be a
fixed k-form. Then for any θ ∈ A^{n−k}(V) we have λ ∧ θ ∈ A^n(V). Since A^n(V) is
one dimensional with basis element the volume form, there exists a unique a ∈ R
such that λ ∧ θ = a · dV. Using this we can define the function

    f_λ : A^{n−k}(V) → R,   f_λ(θ) := a.

With this definition f_λ is a linear function, so we can apply the previous
theorem (to the pseudo-Euclidean space (A^{n−k}(V), g_{n−k})): there exists a
unique element φ ∈ A^{n−k}(V) such that

    f_λ(θ) = g_{n−k}(θ, φ)   (θ ∈ A^{n−k}(V)).

We finally define the Hodge dual ⋆λ of λ to be

    ⋆λ := (−1)^q φ ∈ A^{n−k}(V).

Definition 3.1.8. The Hodge-star operator is a map between k-forms and
(n−k)-forms,

    ⋆ : A^k(V) → A^{n−k}(V),

defined by the following equation:

    λ ∧ θ = (−1)^q g_{n−k}(θ, ⋆λ) dV   (θ ∈ A^{n−k}(V)).
This quite abstract definition is not sufficient to actually compute the Hodge
dual of a given form. For this we use a different approach: since the Hodge-star
operator is linear, it suffices to compute the duals of the basis elements and
then extend the result linearly.

So we wish to compute ⋆∆^{n,k}_i (i ∈ N^k_*). Let j ∈ N^{n−k}_*. Then from the
definition of ⋆:

    ∆^{n,k}_i ∧ ∆^{n,n−k}_j = (−1)^q g_{n−k}(∆^{n,n−k}_j, ⋆∆^{n,k}_i) ∆^{n,n}_{(1,...,n)}.

Note that the left side of the equation differs from 0 only if j is the
complementary index of i. Since the basis elements ∆^{n,n−k}_j (j ∈ N^{n−k}_*) are
orthonormal, the right side tells us that ⋆∆^{n,k}_i has the form

    ⋆∆^{n,k}_i = c ∆^{n,n−k}_j,

where j is the complementary index and c ∈ R. Substituting this back into the
original equation yields

    sgn(τ) ∆^{n,n}_{(1,...,n)} = ∆^{n,k}_i ∧ ∆^{n,n−k}_j = (−1)^q g_{n−k}(∆^{n,n−k}_j, c∆^{n,n−k}_j) ∆^{n,n}_{(1,...,n)} =
    = (−1)^q c g^{j_1 j_1} ··· g^{j_{n−k} j_{n−k}} ∆^{n,n}_{(1,...,n)},

where τ is the permutation which takes the sequence (i_1, ..., i_k, j_1, ..., j_{n−k})
to (1, ..., n). Using the fact that g^{j_r j_r} = 1/g_{j_r j_r} we obtain

    c = (−1)^q sgn(τ) g_{j_1 j_1} ··· g_{j_{n−k} j_{n−k}}.

Hence

    ⋆∆^{n,k}_i = (−1)^q sgn(τ) g_{j_1 j_1} ··· g_{j_{n−k} j_{n−k}} ∆^{n,n−k}_j.
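This basis formula is mechanical enough to implement. Below is a small Python
sketch (my own illustration, not part of the thesis) for the Minkowski metric with
M(g) = diag(1, 1, 1, −1), so q = 1, using 0-based indices with 0 as the time
coordinate: given an increasing multiindex i, it finds the complementary index j,
the sign sgn(τ), and the product g_{j_1 j_1} ··· g_{j_{n−k} j_{n−k}}.

```python
import itertools

G = {0: -1, 1: 1, 2: 1, 3: 1}   # diagonal metric entries g_ii; index 0 is time
Q = 1                           # number of -1's in the signature

def perm_sign(seq):
    """Sign of the permutation taking seq to sorted(seq), by counting inversions."""
    inv = sum(1 for a, b in itertools.combinations(seq, 2) if a > b)
    return -1 if inv % 2 else 1

def hodge_basis(i, indices=(0, 1, 2, 3)):
    """Hodge dual of Delta^{4,k}_i: returns (coefficient, complementary multiindex)."""
    j = tuple(sorted(set(indices) - set(i)))   # complementary increasing index
    tau = perm_sign(i + j)                     # sign of (i, j) -> (0, 1, 2, 3)
    coeff = (-1) ** Q * tau
    for jr in j:
        coeff *= G[jr]
    return coeff, j

print(hodge_basis((0, 1)))   # (-1, (2, 3)): star(dx0 ∧ dx1) = -dx2 ∧ dx3, i.e. star(dx1 ∧ dx0) = dx2 ∧ dx3
print(hodge_basis((2, 3)))   # (1, (0, 1)):  star(dx2 ∧ dx3) = dx0 ∧ dx1 = -dx1 ∧ dx0
```

These two outputs reproduce the first and fourth of the basis duals computed by
hand in the next subsection.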
3.1.4 Maxwell's equations in terms of differential forms
First we consider the homogeneous Maxwell equations:

    div B = 0,   rot E + ∂B/∂t = 0.

We have shown that the divergence of a vector field can be thought of as the
exterior derivative of a differential 2-form, and the rotation of a vector field
as the exterior derivative of a 1-form. Therefore it seems logical to represent
the magnetic field as a 2-form and the electric field as a 1-form:

    B := B_1 dx_2 ∧ dx_3 + B_2 dx_3 ∧ dx_1 + B_3 dx_1 ∧ dx_2,
    E := E_1 dx_1 + E_2 dx_2 + E_3 dx_3.

Now these forms are inhabitants of spacetime, and we can combine them into one
differential form.
Definition 3.1.9. The electromagnetic field F is a differential 2-form on
spacetime defined by the equation

    F ∈ Λ^1_2(R^4),   F := B + E ∧ dx_0,

where dx_0 corresponds to the time coordinate of spacetime.

The full form of F is

    F = E_1 dx_1 ∧ dx_0 + E_2 dx_2 ∧ dx_0 + E_3 dx_3 ∧ dx_0 +
    + B_1 dx_2 ∧ dx_3 + B_2 dx_3 ∧ dx_1 + B_3 dx_1 ∧ dx_2.
Now let us take the exterior derivative of F:

    dF = ∂_2E_1 dx_2 ∧ dx_1 ∧ dx_0 + ∂_3E_1 dx_3 ∧ dx_1 ∧ dx_0 + ∂_1E_2 dx_1 ∧ dx_2 ∧ dx_0 +
    + ∂_3E_2 dx_3 ∧ dx_2 ∧ dx_0 + ∂_1E_3 dx_1 ∧ dx_3 ∧ dx_0 + ∂_2E_3 dx_2 ∧ dx_3 ∧ dx_0 +
    + ∂_1B_1 dx_1 ∧ dx_2 ∧ dx_3 + ∂_0B_1 dx_0 ∧ dx_2 ∧ dx_3 + ∂_2B_2 dx_2 ∧ dx_3 ∧ dx_1 +
    + ∂_0B_2 dx_0 ∧ dx_3 ∧ dx_1 + ∂_3B_3 dx_3 ∧ dx_1 ∧ dx_2 + ∂_0B_3 dx_0 ∧ dx_1 ∧ dx_2.

Collecting the terms and using the antisymmetry of the wedge product we get

    dF = (∂_1B_1 + ∂_2B_2 + ∂_3B_3) dx_1 ∧ dx_2 ∧ dx_3 +
    + (∂_2E_3 − ∂_3E_2 + ∂_0B_1) dx_0 ∧ dx_2 ∧ dx_3 +
    + (∂_3E_1 − ∂_1E_3 + ∂_0B_2) dx_0 ∧ dx_3 ∧ dx_1 +
    + (∂_1E_2 − ∂_2E_1 + ∂_0B_3) dx_0 ∧ dx_1 ∧ dx_2.

Note that dF = 0 is the same as

    ∂_1B_1 + ∂_2B_2 + ∂_3B_3 = 0

and

    ∂_2E_3 − ∂_3E_2 + ∂_0B_1 = 0,
    ∂_3E_1 − ∂_1E_3 + ∂_0B_2 = 0,
    ∂_1E_2 − ∂_2E_1 + ∂_0B_3 = 0.

The first equality corresponds to div B = 0 and the next three to

    rot E + ∂B/∂t = 0.

Thus the homogeneous equations can be written in the compact form

    dF = 0,

which means that the electromagnetic field 2-form is closed.
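The bookkeeping above can be double-checked symbolically. The following sympy
sketch (my own illustrative check; the component functions are generic
placeholders) computes dF by a generic exterior-derivative routine on the six
components of F and prints the four coefficients, which reproduce div B and the
components of rot E + ∂B/∂t up to the reordering of the basis 3-forms.

```python
import sympy as sp

xs = sp.symbols('x0 x1 x2 x3')
E = {i: sp.Function(f'E{i}')(*xs) for i in (1, 2, 3)}
B = {i: sp.Function(f'B{i}')(*xs) for i in (1, 2, 3)}

# F = B + E ∧ dx0, stored over increasing index pairs (0-based, 0 = time):
# E_i dx_i ∧ dx_0 = -E_i dx_0 ∧ dx_i and B_2 dx_3 ∧ dx_1 = -B_2 dx_1 ∧ dx_3.
F = {(0, 1): -E[1], (0, 2): -E[2], (0, 3): -E[3],
     (2, 3): B[1], (1, 3): -B[2], (1, 2): B[3]}

dF = {}
for (a, b), coeff in F.items():
    for j in range(4):
        if j in (a, b):
            continue
        new = tuple(sorted((j, a, b)))
        sign = (-1) ** new.index(j)        # sort dx_j into increasing position
        dF[new] = dF.get(new, 0) + sign * sp.diff(coeff, xs[j])

for idx in sorted(dF):
    print(idx, sp.simplify(dF[idx]))
# (1,2,3): div B;  (0,2,3): ∂2E3 − ∂3E2 + ∂0B1;
# (0,1,3): −(∂3E1 − ∂1E3 + ∂0B2);  (0,1,2): ∂1E2 − ∂2E1 + ∂0B3
```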
For the inhomogeneous equations, let us compute the Hodge duals of the basis
2-forms appearing in F:

    ⋆(dx_1 ∧ dx_0) = −⋆(dx_0 ∧ dx_1) = −((−1)^q sgn(τ) g_{22}g_{33} dx_2 ∧ dx_3) =
    = −((−1)^1 · 1 · 1 · 1) dx_2 ∧ dx_3 = dx_2 ∧ dx_3,

    ⋆(dx_2 ∧ dx_0) = −⋆(dx_0 ∧ dx_2) = −((−1)^q sgn(τ) g_{11}g_{33} dx_1 ∧ dx_3) =
    = −((−1)^1 · (−1) · 1 · 1) dx_1 ∧ dx_3 = −dx_1 ∧ dx_3 = dx_3 ∧ dx_1,

    ⋆(dx_3 ∧ dx_0) = −⋆(dx_0 ∧ dx_3) = −((−1)^q sgn(τ) g_{11}g_{22} dx_1 ∧ dx_2) =
    = −((−1)^1 · 1 · 1 · 1) dx_1 ∧ dx_2 = dx_1 ∧ dx_2,

    ⋆(dx_2 ∧ dx_3) = (−1)^q sgn(τ) g_{00}g_{11} dx_0 ∧ dx_1 =
    = (−1)^1 · 1 · (−1) · 1 · dx_0 ∧ dx_1 = dx_0 ∧ dx_1 = −dx_1 ∧ dx_0,

    ⋆(dx_3 ∧ dx_1) = −⋆(dx_1 ∧ dx_3) = −((−1)^q sgn(τ) g_{00}g_{22} dx_0 ∧ dx_2) =
    = −((−1)^1 · (−1) · (−1) · 1) dx_0 ∧ dx_2 = dx_0 ∧ dx_2 = −dx_2 ∧ dx_0,

    ⋆(dx_1 ∧ dx_2) = (−1)^q sgn(τ) g_{00}g_{33} dx_0 ∧ dx_3 =
    = (−1)^1 · 1 · (−1) · 1 · dx_0 ∧ dx_3 = dx_0 ∧ dx_3 = −dx_3 ∧ dx_0.

So the Hodge dual of F is

    ⋆F = −B_1 dx_1 ∧ dx_0 − B_2 dx_2 ∧ dx_0 − B_3 dx_3 ∧ dx_0 +
    + E_1 dx_2 ∧ dx_3 + E_2 dx_3 ∧ dx_1 + E_3 dx_1 ∧ dx_2.
We can also unify the electric charge density and the current density into a
single 1-form:

    J := j_1 dx_1 + j_2 dx_2 + j_3 dx_3 − ρ dx_0.
Let us compute the exterior derivative of ⋆F:

    d⋆F = −∂_2B_1 dx_2 ∧ dx_1 ∧ dx_0 − ∂_3B_1 dx_3 ∧ dx_1 ∧ dx_0 − ∂_1B_2 dx_1 ∧ dx_2 ∧ dx_0 −
    − ∂_3B_2 dx_3 ∧ dx_2 ∧ dx_0 − ∂_1B_3 dx_1 ∧ dx_3 ∧ dx_0 − ∂_2B_3 dx_2 ∧ dx_3 ∧ dx_0 +
    + ∂_0E_1 dx_0 ∧ dx_2 ∧ dx_3 + ∂_1E_1 dx_1 ∧ dx_2 ∧ dx_3 + ∂_0E_2 dx_0 ∧ dx_3 ∧ dx_1 +
    + ∂_2E_2 dx_2 ∧ dx_3 ∧ dx_1 + ∂_0E_3 dx_0 ∧ dx_1 ∧ dx_2 + ∂_3E_3 dx_3 ∧ dx_1 ∧ dx_2.

Collecting the terms we obtain

    d⋆F = (∂_1E_1 + ∂_2E_2 + ∂_3E_3) dx_1 ∧ dx_2 ∧ dx_3 +
    + (∂_3B_2 − ∂_2B_3 + ∂_0E_1) dx_0 ∧ dx_2 ∧ dx_3 +
    + (∂_3B_1 − ∂_1B_3 − ∂_0E_2) dx_0 ∧ dx_1 ∧ dx_3 +
    + (∂_2B_1 − ∂_1B_2 + ∂_0E_3) dx_0 ∧ dx_1 ∧ dx_2.

Now let us take the dual of d⋆F. Since ⋆(dx_1∧dx_2∧dx_3) = −dx_0,
⋆(dx_0∧dx_2∧dx_3) = −dx_1, ⋆(dx_0∧dx_1∧dx_3) = dx_2 and ⋆(dx_0∧dx_1∧dx_2) = −dx_3
(all computed from the basis formula above), we get

    ⋆d⋆F = −(∂_1E_1 + ∂_2E_2 + ∂_3E_3) dx_0 + (∂_2B_3 − ∂_3B_2 − ∂_0E_1) dx_1 +
    + (∂_3B_1 − ∂_1B_3 − ∂_0E_2) dx_2 + (∂_1B_2 − ∂_2B_1 − ∂_0E_3) dx_3.

Now ⋆d⋆F = J corresponds to

    ∂_1E_1 + ∂_2E_2 + ∂_3E_3 = ρ

and

    ∂_2B_3 − ∂_3B_2 − ∂_0E_1 = j_1,
    ∂_3B_1 − ∂_1B_3 − ∂_0E_2 = j_2,
    ∂_1B_2 − ∂_2B_1 − ∂_0E_3 = j_3.

The first equation is the same as div E = ρ and the next three are the same as
rot B − ∂E/∂t = j. Thus the inhomogeneous equations take the form

    ⋆d⋆F = J.
3.2 Brouwer's Fixed Point Theorem
Theorem 3.2.1. In the Euclidean space (R^n, ⟨·, ·⟩) every continuous map

    f : B_1(0) → B_1(0)

from the closed unit ball to itself has a fixed point.
Proof.

Step 1. First we prove the theorem for functions f ∈ C^1. Suppose indirectly that

    f : B_1(0) → B_1(0)   (f ∈ C^1)

is a function with no fixed point:

    f(x) ≠ x   (x ∈ B_1(0)).

We construct the function which assigns to each point x ∈ B_1(0) the intersection
of the ray from f(x) through x with the sphere ∂B_1(0). The line through x and
f(x) can be parameterized as

    L = { x + t(x − f(x)) : t ∈ R }.

This line intersects ∂B_1(0) where for some λ ∈ R

    ‖x + λ(x − f(x))‖² = 1.

Let us compute the squared norm:

    ‖x + λ(x − f(x))‖² = ⟨x + λ(x − f(x)), x + λ(x − f(x))⟩ =
    = ⟨x, x⟩ + 2λ⟨x, x − f(x)⟩ + λ²⟨x − f(x), x − f(x)⟩ =
    = ‖x‖² + 2λ⟨x, x − f(x)⟩ + λ²‖x − f(x)‖².

So we need to solve the following quadratic equation for λ:

    ‖x − f(x)‖² λ² + 2⟨x, x − f(x)⟩ λ + ‖x‖² − 1 = 0.

The solutions are

    λ_±(x) = ( ⟨x, f(x) − x⟩ ± √( ⟨x, f(x) − x⟩² + (1 − ‖x‖²)‖f(x) − x‖² ) ) / ‖f(x) − x‖².

Choosing the root λ_+ ≥ 0 (which corresponds to moving from x away from f(x)) we
can construct our function:

    F = (F_1, ..., F_n) : B_1(0) → ∂B_1(0),

    F(x) := x + ( ⟨x, f(x) − x⟩ + √( ⟨x, f(x) − x⟩² + (1 − ‖x‖²)‖f(x) − x‖² ) ) / ‖f(x) − x‖² · (x − f(x)).

From the formula we can see that F is a C^1 function. Moreover F acts on the
boundary of the unit ball as the identity map:

    F(x) = x   (x ∈ ∂B_1(0)).
Since the image of every x is on the boundary of the unit ball, the following
relation holds for all x ∈ B_1(0):

    ‖F(x)‖² = ∑_{i=1}^n F_i(x)² = 1.

Differentiating this identity with respect to x_j yields

    2 ∑_{i=1}^n F_i(x) ∂_j F_i(x) = 0,

so for each index j

    ∑_{i=1}^n F_i(x) ∂_j F_i(x) = 0.

This shows that the system of linear equations

    ∑_{i=1}^n α_i ∂_j F_i(x) = 0   (j = 1, ..., n)

has a non-trivial solution, namely (α_1, ..., α_n) = (F_1(x), ..., F_n(x)) ≠ (0, ..., 0).
Hence the determinant of the corresponding matrix vanishes:

    det( ∂_j F_i(x) ) = 0.

Now we define a differential form ω ∈ Λ^r_{n−1} and, using the observation made
above, conclude that its derivative vanishes. If

    ω := F_1 · dF_2 ∧ ... ∧ dF_n,

then

    dω = dF_1 ∧ dF_2 ∧ ... ∧ dF_n = det( ∂_j F_i(x) ) dx_1 ∧ ... ∧ dx_n = 0.

Now we use Stokes' theorem together with the fact that F acts as the identity on
the boundary of the unit ball, and arrive at a contradiction:

    0 = ∫_{B_1(0)} dω = ∫_{∂B_1(0)} ω = ∫_{∂B_1(0)} x_1 dx_2 ∧ ... ∧ dx_n = ∫_{B_1(0)} dx_1 ∧ ... ∧ dx_n =
    = µ(B_1(0)).

This would mean that the volume of the unit ball is zero, which is a
contradiction.
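A quick numeric illustration of the retraction constructed in Step 1 (my own
sketch; the sample map f is an arbitrary C^1 self-map of the ball, and since it
happens to fix the origin, F is evaluated only at points where f(x) ≠ x, which is
all the illustration needs): the function below computes F(x) using the root λ_+
and checks that ‖F(x)‖ = 1 and that boundary points are fixed.

```python
import numpy as np

def retract(f, x):
    """F(x): intersection of the ray from f(x) through x with the unit sphere.

    Only defined at points where f(x) != x.
    """
    d = x - f(x)                                   # direction x - f(x)
    a = np.dot(d, d)                               # ||f(x) - x||^2
    disc = np.dot(x, -d) ** 2 + (1 - np.dot(x, x)) * a
    lam = (np.dot(x, -d) + np.sqrt(disc)) / a      # the root lambda_+ >= 0
    return x + lam * d

f = lambda x: 0.5 * np.array([x[1] ** 2, -x[0]])   # a C^1 map of the closed ball into itself
x_in = np.array([0.2, -0.3])
x_bd = np.array([0.6, 0.8])                        # a point on the unit sphere
print(np.linalg.norm(retract(f, x_in)))            # 1.0
print(retract(f, x_bd), x_bd)                      # boundary points are fixed: F(x) = x
```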
Step 2. Now we consider the general case, where f is merely continuous. This case
is reduced to the previous one by using the Stone-Weierstrass approximation
theorem, which states (see [3]) that if Ω ⊂ R^n is compact and f : Ω → R is
continuous, then for all ε > 0 there exists a polynomial p : Ω → R such that
|f(x) − p(x)| < ε for all x ∈ Ω. In our case B_1(0) is compact, so applying this
theorem to the coordinate functions f_1, ..., f_n of f we conclude that for each
ε > 0 there exists a polynomial map

    p : B_1(0) → R^n   (p ∈ C^1)

such that

    ‖f(x) − p(x)‖ < ε   (x ∈ B_1(0)).

Now consider the normalized function

    p̃(x) := p(x) / (1 + ε).

Because of

    ‖p(x)‖ − ‖f(x)‖ ≤ ‖p(x) − f(x)‖ ≤ ε   and   ‖f(x)‖ ≤ 1,

we have ‖p(x)‖ ≤ 1 + ε, thus ‖p̃(x)‖ ≤ 1. So p̃ is a map from the unit ball to
itself:

    p̃ : B_1(0) → B_1(0).

Moreover, p̃ can be estimated against f:

    ‖f(x) − p̃(x)‖ ≤ ‖f(x) − p(x)‖ + ‖p(x) − p̃(x)‖ ≤ ε + ‖p(x)‖ |1 − 1/(1+ε)| ≤
    ≤ ε + (1 + ε) · ε/(1 + ε) = 2ε.

To sum up, we have proved that for each ε > 0 there exists a map
p̃ : B_1(0) → B_1(0) (p̃ ∈ C^1) such that for all x ∈ B_1(0) the following estimate
holds:

    ‖f(x) − p̃(x)‖ ≤ 2ε.

Now suppose that the continuous map f : B_1(0) → B_1(0) has no fixed point. Since
x ↦ ‖f(x) − x‖ is continuous and positive on the compact set B_1(0), this means
that

    0 < µ := inf_{x∈B_1(0)} ‖f(x) − x‖.

For ε := µ/4 choose a smooth map p̃ with the properties discussed above. Since
p̃ ∈ C^1, we can apply the first part of the proof to p̃ and conclude that it has a
fixed point x_0 ∈ B_1(0). But then

    ‖f(x_0) − x_0‖ = ‖f(x_0) − p̃(x_0)‖ ≤ µ/2 < µ,

which contradicts the definition of µ. So our initial assumption (that f has no
fixed point) is false. □
Bibliography
[1] Agricola, I. and Friedrich, T.: Global Analysis: Differential Forms in
Analysis, Geometry and Physics, Graduate Studies in Mathematics, vol. 52, 2002.

[2] Do Carmo, Manfredo P.: Differential Forms and Applications, Universitext,
Springer, 1994.

[3] Kovács, S.: Funkcionálanalízis feladatokban [Functional analysis in problems],
university lecture notes, Budapest, 2013. ISBN 978-963-284-445-9.
(http://numanal.inf.elte.hu/~alex/hu/anyag/PROGINF/FunkAnal/FunkAnalKS.pdf)

[4] Kovács, S.: Alkalmazott analízis gyakorlat [Applied analysis exercises],
university lecture notes, Budapest, 2018. ISBN 978-963-489-032-4.
(http://numanal.inf.elte.hu/~alex/AlkAnalGyak/AlkAnalGyakKS.pdf)

[5] Simon, P.: Válogatott fejezetek a matematikából [Selected chapters from
mathematics], university lecture notes, ELTE Eötvös Kiadó, Budapest, 2019.
(http://numanal.inf.elte.hu/~simon/ujfolyt.mod.latex.pdf)

[6] Dray, Tevian: The Hodge Dual Operator, university lecture note, 1999.
(http://people.oregonstate.edu/~drayt/Courses/MTH434/2009/dual.pdf)

[7] Owelle, Solomon Akaraka: Maxwell's Equations in Terms of Differential Forms,
postgraduate thesis, 2010.
(https://bbs.pku.edu.cn/attach/13/c8/13c819b28e8fb43c/maxwell_hodge.pdf)

[8] Naber, Gregory L.: The Geometry of Minkowski Spacetime: An Introduction to the
Mathematics of the Special Theory of Relativity, Applied Mathematical Sciences,
Springer, 2012.

[9] Shafarevich, Igor R. and Remizov, Alexey O.: Linear Algebra and Geometry,
Springer, 2012.