A Gentle Introduction to the Kaczmarz Algorithm
Eric Weber
Iowa State University
Mathematics Colloquium, DePaul University, October 27, 2017
Systems of Linear Equations
Q: How do we solve a system of equations? (We assume consistency here).
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = y_1
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = y_2
\vdots
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = y_m
Solutions
A: 1) Gaussian elimination and back-substitution on the augmented matrix:

\left[\begin{array}{cccc|c}
a_{11} & a_{12} & \cdots & a_{1n} & y_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & y_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} & y_m
\end{array}\right]

2) Matrix inversion (m = n; \det A \neq 0):

A\vec{x} = \vec{y} \;\Rightarrow\; \vec{x} = A^{-1}\vec{y}.

3) Moore-Penrose inversion (m \geq n; \operatorname{nullity}(A) = 0):

\vec{x} = A^{\dagger}\vec{y} = (A^T A)^{-1} A^T \vec{y}
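To make 2) and 3) concrete, here is a minimal NumPy sketch (the matrices and right-hand sides are invented for the example):

```python
import numpy as np

# 2) Square, invertible system: solve A x = y directly.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
y = np.array([3.0, 5.0])
x = np.linalg.solve(A, y)        # preferred over forming A^{-1} explicitly

# 3) Tall system (m >= n, full column rank): the Moore-Penrose solution
#    x = (A^T A)^{-1} A^T y, computed stably via least squares.
A_tall = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])
y_tall = np.array([1.0, 2.0, 3.0])
x_mp, *_ = np.linalg.lstsq(A_tall, y_tall, rcond=None)
# Equivalent, but less stable: np.linalg.pinv(A_tall) @ y_tall
```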
(Non)-Solutions
4) Least squares (no solution; Moore-Penrose or gradient descent):

Find x_1, \dots, x_n that minimizes

\sum_{j=1}^{m} \left| (a_{j1}x_1 + \cdots + a_{jn}x_n) - y_j \right|^2

5) Compressed sensing (many solutions; m < n):

Find the solution x_1, \dots, x_n that minimizes either

\sum_{k=1}^{n} |x_k|^0 \quad \text{or} \quad \sum_{k=1}^{n} |x_k|^1

Genome mapping; MRI machines.

6) Kaczmarz algorithm

(coming soon! these last two form a 1-2 punch in data science)
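For 5), one standard route (not spelled out in the talk) recasts the ℓ^1 minimization as a linear program by splitting x = u - v with u, v ≥ 0. A minimal SciPy sketch, with an invented sensing matrix and a 2-sparse ground truth:

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: minimize ||x||_1 subject to A x = y.
rng = np.random.default_rng(0)
m, n = 10, 30
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 17]] = [1.5, -2.0]      # sparse ground truth (illustrative)
y = A @ x_true

# Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u + v).
c = np.ones(2 * n)                 # objective: sum(u) + sum(v)
A_eq = np.hstack([A, -A])          # constraint A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]      # should match x_true up to tolerance
```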
Compressed Sensing, An Example
Known:
N,
{f(t_0), f(t_1), \dots, f(t_N)} (samples)
f(x) = a_0 + a_1 x + \cdots + a_N x^N

Unknown:
f, i.e. a_0, a_1, \dots, a_N
(N + 1 unknown variables in N + 1 dimensions)

Can we recover/reconstruct f(x)? Yes:

\begin{pmatrix}
1 & t_0 & t_0^2 & \cdots & t_0^N \\
1 & t_1 & t_1^2 & \cdots & t_1^N \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & t_N & t_N^2 & \cdots & t_N^N
\end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{pmatrix}
=
\begin{pmatrix} f(t_0) \\ f(t_1) \\ \vdots \\ f(t_N) \end{pmatrix}

We need N + 1 samples or "measurements" to uniquely determine the vector f in N + 1 dimensions. Here, any N + 1 distinct samples will work.
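A quick numerical sketch of this polynomial recovery (the degree and sample points are chosen arbitrarily for the example):

```python
import numpy as np

# Recover the coefficients of a degree-N polynomial from N+1 samples
# by solving the Vandermonde system above.
N = 4
a_true = np.array([1.0, -2.0, 0.0, 3.0, 0.5])    # a_0, ..., a_N
t = np.linspace(0.1, 1.0, N + 1)                 # any N+1 distinct points
samples = np.polyval(a_true[::-1], t)            # f(t_j); polyval wants a_N first

V = np.vander(t, N + 1, increasing=True)         # rows (1, t_j, ..., t_j^N)
a_hat = np.linalg.solve(V, samples)              # recovers a_true
```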
Compressed Sensing, An Example (cont’d)
Known:
N,
{f(t_0), f(t_1), \dots, f(t_{2N}), f(t_{2N+1})} (samples)
f(x) = a_0 x^{n_0} + a_1 x^{n_1} + \cdots + a_N x^{n_N}

Unknown:
f, i.e. a_0, a_1, \dots, a_N AND n_0, n_1, \dots, n_N
(2N + 2 unknown variables in infinite dimensions!)

Can we recover/reconstruct f(x)? Yes (if we are careful!):
we choose t_0 > 0, t_0 \neq 1, and then t_j = t_0^{j+1}.

Proof.
Variation of Prony's algorithm (1795)!
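The slides leave the reconstruction itself to Prony's algorithm; a Prony-style sketch under the stated sampling is below. The exponents and amplitudes are invented for the example, and integer exponents are assumed so they can be read off by rounding. With t_j = t_0^{j+1}, the samples s_j = f(t_j) = \sum_k c_k z_k^j are a sum of N+1 geometric progressions, where z_k = t_0^{n_k} and c_k = a_k z_k.

```python
import numpy as np

t0 = 0.5
n_true = np.array([0, 2, 5])                  # exponents (illustrative)
a_true = np.array([1.0, -3.0, 2.0])           # amplitudes (illustrative)
N = len(n_true) - 1

j = np.arange(2 * N + 2)
s = (a_true * t0 ** np.outer(j + 1, n_true)).sum(axis=1)    # samples f(t_j)

# Step 1: the characteristic polynomial p(z) = prod_k (z - z_k) satisfies
# the recurrence sum_i p_i s_{j+i} = -s_{j+N+1} for j = 0..N.
H = np.array([s[j0:j0 + N + 1] for j0 in range(N + 1)])     # Hankel matrix
p = np.linalg.solve(H, -s[N + 1:2 * N + 2])                 # (p_0, ..., p_N)
z = np.roots(np.concatenate(([1.0], p[::-1])))              # roots z_k

# Step 2: exponents from z_k = t0^{n_k}, then amplitudes from a
# Vandermonde solve of s_j = sum_k c_k z_k^j.
n_hat = np.round(np.log(z.real) / np.log(t0)).astype(int)
V = np.vander(z, N + 1, increasing=True).T                  # V[j, k] = z_k^j
c = np.linalg.solve(V, s[:N + 1].astype(complex))
a_hat = (c / z).real                                        # a_k = c_k / z_k
```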
Reconstruction
Back to systems of equations: we write \vec{a}_j = (a_{j1}, a_{j2}, \dots, a_{jn}), \vec{x} = (x_1, x_2, \dots, x_n), and

\vec{x} \cdot \vec{a}_1 = y_1
\vdots
\vec{x} \cdot \vec{a}_m = y_m

So, the data we have are the dot products of \vec{x} with the \vec{a}_j's (linear measurements), and we want to recover \vec{x} from that data.
Reconstruction (cont’d)
If {\vec{a}_1, \dots, \vec{a}_n} is an orthonormal basis (think \vec{i}, \vec{j}, \vec{k}), then

\vec{x} = (\vec{x} \cdot \vec{a}_1)\vec{a}_1 + \cdots + (\vec{x} \cdot \vec{a}_n)\vec{a}_n = y_1 \vec{a}_1 + \cdots + y_n \vec{a}_n.

(Note the form of the terms: projections!)

If {\vec{a}_1, \dots, \vec{a}_m} is a basis (or even just a spanning set, i.e. a frame), then there exists {\vec{b}_1, \dots, \vec{b}_m} such that

\vec{x} = (\vec{x} \cdot \vec{a}_1)\vec{b}_1 + \cdots + (\vec{x} \cdot \vec{a}_m)\vec{b}_m = y_1 \vec{b}_1 + \cdots + y_m \vec{b}_m.

Calculating the \vec{b}_j's is easy on paper and hard on a computer!
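In finite dimensions the dual vectors b_j can at least be sketched numerically: the columns of the pseudoinverse of the measurement matrix give a canonical dual. The frame below is invented for the example:

```python
import numpy as np

# Reconstruction from linear measurements y_j = x . a_j via a dual frame.
A = np.array([[1.0, 0.0],        # rows are the frame vectors a_j
              [0.0, 1.0],
              [1.0, 1.0]])       # a (redundant) frame for R^2
x = np.array([2.0, -1.0])
y = A @ x                        # measurements x . a_j

B = np.linalg.pinv(A)            # columns are canonical dual vectors b_j
x_rec = B @ y                    # x = y_1 b_1 + ... + y_m b_m
```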
Kaczmarz Algorithm
Given {\vec{a}_n}_{n=0}^{\infty} \subset H (unit vectors) and the data \vec{x} \cdot \vec{a}_n, can we recover \vec{x}? Note: yes if ONB/frame.

\vec{x}_0 = (\vec{x} \cdot \vec{a}_0)\,\vec{a}_0
\vec{x}_n = \vec{x}_{n-1} + ((\vec{x} - \vec{x}_{n-1}) \cdot \vec{a}_n)\,\vec{a}_n

If \lim_{n \to \infty} \|\vec{x} - \vec{x}_n\| = 0 for all \vec{x}, then the sequence {\vec{a}_n}_{n=0}^{\infty} is said to be effective.

Theorem (Kaczmarz, 1937)
Suppose {\vec{a}_0, \dots, \vec{a}_{m-1}} is a spanning set of unit vectors for H, and define the sequence by periodizing:
\vec{a}_m = \vec{a}_0, \vec{a}_{m+1} = \vec{a}_1, \dots
Then {\vec{a}_n}_{n=0}^{\infty} is effective.
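A minimal sketch of the iteration for a finite consistent system, cycling through the rows as in the theorem. The system is invented for the example; the update divides by ‖a_j‖² so that non-unit rows are also handled, which reduces to the slide's update for unit vectors:

```python
import numpy as np

def kaczmarz(A, y, sweeps=100):
    """Cyclic Kaczmarz for a consistent system A x = y."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for j in range(m):               # periodize: a_m = a_0, ...
            a = A[j]
            # x_n = x_{n-1} + ((y_j - x_{n-1} . a_j) / ||a_j||^2) a_j
            x = x + ((y[j] - a @ x) / (a @ a)) * a
    return x

A = np.array([[1.0, 2.0],
              [3.0, 1.0],
              [1.0, -1.0]])
x_true = np.array([1.0, -2.0])
y = A @ x_true
x_hat = kaczmarz(A, y)                   # approaches x_true geometrically
```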
Proof of the Kaczmarz Theorem
We want to show that \|\vec{x} - \vec{x}_n\| \to 0. Calculate:

\vec{x} - \vec{x}_n = (\vec{x} - \vec{x}_{n-1}) - ((\vec{x} - \vec{x}_{n-1}) \cdot \vec{a}_n)\,\vec{a}_n

Note that the second term is a projection. Thus,

\vec{x} - \vec{x}_n = (I - P_n)(\vec{x} - \vec{x}_{n-1})
           = (I - P_n)(I - P_{n-1}) \cdots (I - P_1)(I - P_0)\,\vec{x}

Since projections only decrease the norm, we have \|\vec{x} - \vec{x}_n\| \leq \|\vec{x}\|. Note that:

\vec{x} - \vec{x}_{2m} = (I - P_{m-1}) \cdots (I - P_1)(I - P_0)\,(I - P_{m-1}) \cdots (I - P_1)(I - P_0)\,\vec{x}

so if \alpha = \|(I - P_{m-1}) \cdots (I - P_1)(I - P_0)\| is the norm of one full sweep, we have \|\vec{x} - \vec{x}_m\| \leq \alpha \|\vec{x}\| and \|\vec{x} - \vec{x}_{2m}\| \leq \alpha^2 \|\vec{x}\|.
Proof of the Kaczmarz Theorem (cont’d)
Again, since we have projections, \alpha \leq 1. If \alpha < 1 we are done. So, can \alpha = 1?

What if it were? We use the Rado-Horn Theorem:

\alpha \leq \|(I - P_{m-1}) \cdots (I - P_1)(I - P_0)\| \leq 1

and if the norm is 1, then there exists an eigenvector \vec{y} with eigenvalue 1. Since each factor I - P_j is a contraction, equality forces each projection to annihilate \vec{y} in turn:

\vec{y} = (I - P_{m-1}) \cdots (I - P_1)(I - P_0)\,\vec{y}
  = (I - P_{m-1}) \cdots (I - P_1)\,\vec{y}
  \vdots
  = (I - P_{m-1})\,\vec{y}.

Thus \vec{y} \cdot \vec{a}_j = 0 for all j, and since the \vec{a}_j span H, we must have \vec{y} = 0!
So, Why Kaczmarz?
Advantages:
1. Fast! Low complexity, geometric convergence.
2. Easy to program.
3. Simple assumptions on the vectors.

Disadvantages:
1. Non-exact solution: at best an approximate solution (not a problem numerically).
2. Slower/more complex IF we know a priori an alternative reconstruction (dual basis/frame).
The Kaczmarz Algorithm and Compressed Sensing
Suppose we have an underdetermined system A\vec{x} = \vec{y} (the matrix A is short and fat). We want to find a solution \vec{x} with the fewest non-zero entries.

Let us denote the columns of A by \vec{A}_1, \dots, \vec{A}_n.

Goal: find a subset \vec{A}_{k_1}, \dots, \vec{A}_{k_q} of the columns of A with the fewest elements such that \vec{y} is in the span of those column vectors.

This is a combinatorial search: extremely slow (NP-hard)!

The main results in compressed sensing say roughly the following: if A has a certain property (hard to get, but possible), then the sparsest solution (minimizing \sum_{k=1}^{n} |x_k|^0) is the one that minimizes \sum_{k=1}^{n} |x_k|^1.
The Kaczmarz Algorithm and Compressed Sensing (cont’d)
We assume a priori that there exist q columns of A whose span contains \vec{y}; we just don't know which ones.

Alternative approach (a sketch appears after the steps):

Step 1: Choose \vec{A}_{k_1}, \dots, \vec{A}_{k_{2q}} using an estimate of which columns are the correct choice;

Step 2: Apply Kaczmarz to the system

(\vec{A}_{k_1}\ \vec{A}_{k_2}\ \cdots\ \vec{A}_{k_{2q}})\,\vec{z} = \vec{y}

to obtain an estimate \vec{z} of \vec{x};

Step 3: Use \vec{z} to get a better estimate of which columns to choose, and repeat Step 2.
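The slides leave the column-selection rule open. One plausible realization (an assumption of this sketch, not the speaker's prescription) scores columns by the current coefficients plus their correlation with the residual, keeps the best 2q, and runs Kaczmarz on that subsystem:

```python
import numpy as np

def kaczmarz(A, y, sweeps=50):
    # Cyclic Kaczmarz for A z = y (rows need not be unit vectors).
    z = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for j in range(A.shape[0]):
            a = A[j]
            z = z + ((y[j] - a @ z) / (a @ a)) * a
    return z

def sparse_kaczmarz(A, y, q, outer_iters=10):
    """Alternate a support estimate with Kaczmarz on the chosen columns."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(outer_iters):
        # Steps 1 and 3: a heuristic score for which columns look right.
        score = np.abs(x) + np.abs(A.T @ (y - A @ x))
        support = np.argsort(score)[-2 * q:]
        # Step 2: Kaczmarz on the selected subsystem.
        z = kaczmarz(A[:, support], y)
        x = np.zeros(n)
        x[support] = z
    return x
```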
Kaczmarz Algorithm in Infinite Dimensions
Theorem (Kwapien & Mycielski, 2001)
If {\varphi_n}_{n=1}^{\infty} \subset H is a stationary sequence with dense span, then it is an effective sequence if and only if its spectral measure is either Lebesgue measure or purely singular.
Fourier Series
Theorem (Herr & W., 2015)
If µ is a singular Borel probability measure on (-1/2, 1/2), then the sequence {e^{2\pi i n x}}_{n=0}^{\infty} is effective in L^2(µ). As a consequence, any element f \in L^2(µ) possesses a Fourier series

f(x) = \sum_{n=0}^{\infty} c_n e^{2\pi i n x},

where the sum converges in the L^2(µ) norm. The Fourier coefficients c_n are given by

c_n = \int_{-1/2}^{1/2} f(x)\,\overline{g_n(x)}\,dµ(x),

where {g_n}_{n=0}^{\infty} is the auxiliary sequence of {e^{2\pi i n x}}_{n=0}^{\infty} in L^2(µ).
Inversion Lemma
Lemma (Herr & W., 2015)
There exists a sequence {α_n}_{n=0}^{\infty} such that

g_n(x) = \sum_{j=0}^{n} α_{n-j} e^{2\pi i j x}.

The sequence is given by

\frac{1}{µ_+(z)} = \sum_{n=0}^{\infty} α_n z^n

where

µ_+(z) = \int_{-1/2}^{1/2} \frac{1}{1 - e^{-2\pi i t} z}\,dµ(t).
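For a measure with finitely many atoms (a simple example of a singular measure), the α_n can be computed numerically: expanding the integrand gives µ_+(z) = \sum_n \hat{µ}(n) z^n, so {α_n} is the power-series reciprocal of the moment sequence \hat{µ}(n). A sketch, with invented atoms and weights:

```python
import numpy as np

# Atomic probability measure mu = sum_i w_i * delta_{t_i}.
t = np.array([-0.3, 0.1, 0.4])       # atom locations in (-1/2, 1/2)
w = np.array([0.5, 0.3, 0.2])        # weights, summing to 1

def moment(n):
    # mu_hat(n) = integral of e^{-2 pi i n t} d mu(t)
    return np.sum(w * np.exp(-2j * np.pi * n * t))

# alpha solves sum_{k=0}^n mu_hat(k) alpha_{n-k} = (1 if n == 0 else 0).
M = 50
m = np.array([moment(n) for n in range(M)])
alpha = np.zeros(M, dtype=complex)
alpha[0] = 1.0 / m[0]                # = 1 for a probability measure
for n in range(1, M):
    alpha[n] = -np.sum(m[1:n + 1] * alpha[n - 1::-1]) / m[0]
```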
The Paley-Wiener Space
We can reconstruct an unknown vector \vec{x} from its "samples" (polynomials) or "linear measurements" (A\vec{x} = \vec{y}) quite well in finite dimensions. However, the sampling algorithm usually occurs in infinite dimensions, e.g. the vector space of continuous functions on [0, 1].

Consider the vector space of entire functions f : C \to C that satisfy these two conditions:
1. \int_{-\infty}^{\infty} |f(t)|^2\,dt is finite;
2. |f(z)| \leq C e^{\pi |z|} for some constant C.

This is the Paley-Wiener space.
The Shannon Sampling Theorem
Theorem (Shannon-Whittaker-Kotelnikov (∼1945, ∼1915, ∼1935))
If f is in the Paley-Wiener space, then

f(x) = \sum_{n \in Z} f(n)\,\frac{\sin(\pi(x - n))}{\pi(x - n)}.

In other words, known:
1. f is in the Paley-Wiener space;
2. f(n) for all n \in Z;
then we can recover f from this data.

This is the foundation of digital communications: cellphones, digital audio (compact discs, MP3, etc.), CAT scans, MRIs, ... (Nyquist rate; analog-to-digital conversion)
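A quick numerical illustration of the sampling formula, truncating the series to finitely many integer samples (the test function is an arbitrary member of the Paley-Wiener space):

```python
import numpy as np

# Reconstruct a band-limited function from integer samples via the
# truncated Shannon series f(x) ~ sum_n f(n) sinc(x - n).
f = lambda x: np.sinc(x - 0.5) + 0.3 * np.sinc(x + 2.0)

n = np.arange(-200, 201)             # truncated sample set
samples = f(n)

x = np.linspace(-5, 5, 1001)
# np.sinc(t) = sin(pi t) / (pi t), exactly the kernel in the theorem.
f_rec = (samples[None, :] * np.sinc(x[:, None] - n[None, :])).sum(axis=1)

err = np.max(np.abs(f_rec - f(x)))   # small, shrinking as the cutoff grows
```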
Paley-Wiener Theorem
Theorem (Paley-Wiener Theorem (1933))
If f is in the Paley-Wiener space, then there exists a function g \in L^2(-1/2, 1/2) such that

f(z) = \int_{-1/2}^{1/2} g(t) e^{-2\pi i t z}\,dt.

Equivalently, if f satisfies the following conditions:
1. f is entire;
2. \sum_{n \in Z} |f(n)|^2 is finite;
3. f(x) = \sum_{n \in Z} f(n)\,\frac{\sin(\pi(x - n))}{\pi(x - n)};
then f is the Fourier transform of some g.
Singular Measures
Fix a singular measure µ on (-1/2, 1/2). Question: when can an entire function F be written as

F(z) = \int_{-1/2}^{1/2} f(t) e^{-2\pi i t z}\,dµ(t)

for some f \in L^2(µ)?

Answer: the Kaczmarz algorithm!

Using the Kaczmarz algorithm, we are able to provide complete characterizations using i) a sampling theory idea and ii) an interpolation idea.
The Paley-Wiener Theorem for µ
Theorem (W., 2017)
Let µ be a singular Borel probability measure on (-1/2, 1/2). Let {α_n}_{n=0}^{\infty} be the sequence of scalars induced by µ via the Inversion Lemma. The entire function F has the form

F(z) = \int_{-1/2}^{1/2} f(t) e^{-2\pi i t z}\,dµ(t)

for some f \in L^2(µ) if and only if F satisfies

1. \sum_{n=0}^{\infty} \Big| \sum_{j=0}^{n} α_{n-j} F(j) \Big|^2 < \infty;

2. for all z \in C,

F(z) = \sum_{n=0}^{\infty} \Big( \sum_{j=0}^{n} α_{n-j} F(j) \Big) \Big( \sum_{l=0}^{n} α_{n-l}\,\hat{µ}(z - l) \Big).
Proof
Necessity: apply the Fourier transform to

f = \sum_{n=0}^{\infty} \langle f, g_n \rangle\,g_n.

Sufficiency: define f \in L^2(µ) by

f = \sum_{n=0}^{\infty} \Big( \sum_{j=0}^{n} α_{n-j} F(j) \Big) g_n,

then apply the Fourier transform to obtain

\hat{f}(z) = F(z).
The Paley-Wiener Theorem for µ
Theorem (W., 2017)
Suppose µ is a singular Borel probability measure on (-1/2, 1/2), and let b be the inner function associated to µ by the Herglotz Representation. The entire function F is the Fourier transform \hat{f} for some f \in L^2(µ) if and only if

(i) |F(z)| \leq ε(|z|) e^{\pi |z|} with ε(r) = o(1);

(ii) the following inclusions hold:

G_+(z) := \frac{\sum_{n=0}^{\infty} F(n) z^n}{µ_+(z)} \in H(b), \qquad G_-(z) := \frac{\sum_{n=0}^{\infty} F(-n) z^n}{µ_+(z)} \in H(b);

(iii) the L^2(µ)-boundaries of G_+ and G_- satisfy the relationship

G_+^* = G_-^*.
A No-Go Result
Denote: PW(µ) = {\hat{f} : f \in L^2(µ)}, and E_τ the collection of all entire functions of exponential type at most τ.

Theorem
Suppose PW(µ) = E_τ \cap L^2(w) for some τ \in (0, π] and some weight or measure w on R with \|f\|_µ ≃ \|\hat{f}\|_w. Then there exists a Riesz basis of the form

{ω_n e^{2\pi i λ_n x}}_{n \in Z} \subset L^2(µ)   (1)

for some sequence {λ_n} \subset R and ω_n > 0.
Herglotz Representation Theorem and the space H(b)
Theorem
There is a 1-to-1 correspondence between the nonconstant inner functions b in H^2 and the nonnegative singular Borel measures µ on T ≡ [0, 1), given by

Re\left( \frac{1 + b(z)}{1 - b(z)} \right) = \int_T \frac{1 - |z|^2}{|ξ - z|^2}\,dµ(ξ).

We will say that b corresponds to µ, and that µ corresponds to b. The construction of the de Branges-Rovnyak space H(b) is based on Toeplitz operators, but here suffice it to say that for b an inner function, we have

H(b) = H^2 ⊖ bH^2.
Normalized Cauchy Transform
Given a measure µ on (-1/2, 1/2), the normalized Cauchy transform is the operator V_µ from L^2(µ) to the set of functions on C \setminus T given by

V_µ f(z) = \frac{\displaystyle\int_{-1/2}^{1/2} \frac{f(x)}{1 - z e^{-2\pi i x}}\,dµ(x)}{\displaystyle\int_{-1/2}^{1/2} \frac{1}{1 - z e^{-2\pi i x}}\,dµ(x)}.

Clark showed that if µ is a singular Borel probability measure and b is its corresponding inner function, then V_µ maps L^2(µ) unitarily onto H(b).
Re-Expression of the Normalized Cauchy Transform
Theorem (H. & W., 2015)
Let µ be a singular Borel probability measure, and {g_n}_{n=0}^{\infty} the auxiliary sequence of {e_n}_{n=0}^{\infty} in L^2(µ). Then for f \in L^2(µ),

V_µ f(z) = \sum_{n=0}^{\infty} \langle f, g_n \rangle_µ z^n.

Thus, every function F \in H(b) is of the form F(z) = \sum_{n=0}^{\infty} \langle f, g_n \rangle_µ z^n. Since f = \sum_{n=0}^{\infty} \langle f, g_n \rangle_µ e^{2\pi i n x} and F_r(x) := \sum_{n=0}^{\infty} \langle f, g_n \rangle_µ r^n e^{2\pi i n x}, Abel summability shows that \lim_{r \to 1^-} \|F_r - f\|_µ = 0, and so f is an L^2(µ) boundary function of F.
The Paley-Wiener Theorem for µ, redux
For an entire function F of exponential type, we use h_F to denote the Phragmen-Lindelof indicator function.

Theorem (W., 2017)
Suppose µ is a singular Borel probability measure with support in [α, β] ⊂ [-1/2, 1/2] where β - α < 1. Let b be the inner function associated to µ via the Herglotz Representation. The entire function F is the Fourier transform \hat{f} for some f \in L^2(µ) if and only if

(i) F is of exponential type;

(ii) the indicator function of F satisfies h_F(π/2) \leq 2πβ and h_F(-π/2) \leq -2πα;

(iii) the following inclusion holds:

G_F(z) := \frac{\sum_{n=0}^{\infty} F(n) z^n}{µ_+(z)} \in H(b),

i.e. the function G_F is in the kernel of the Toeplitz operator T_{\overline{b}}.
An Interpolation Problem
Lemma
Suppose µ is a singular Borel probability measure on T, b is the inner function on D associated to µ via the Herglotz representation, and suppose {a_n}_{n=0}^{\infty} ⊂ C. The following conditions are equivalent:

(i) there exists a function f \in L^2(µ) with the property that

a_n = \int_T f(x) e^{-2\pi i n x}\,dµ(x);   (2)

(ii) the following inclusion holds:

G_a(z) := \frac{\sum_{n=0}^{\infty} a_n z^n}{µ_+(z)} \in H(b).
Outline of Proof
Version using h_F:
1. Condition (iii) says that {F(n)} can be interpolated by some f, so F(n) = \hat{f}(n) for n \in N_0;
2. Conditions (i) and (ii) say that F(z) = \hat{f}(z) for all z, by Carlson's theorem.

The version using the growth condition is similar, but Carlson's theorem does not apply. We use a generalization of Carlson's theorem from Boas' book.
The End
Thank you!
Selected Works Cited
The Paley-Wiener Theorem
An entire function F can be written in the form

F(z) = \int_{-1/2}^{1/2} f(t) e^{-2\pi i t z}\,dt

if and only if F satisfies:
1. F is of exponential type π;
2. F restricted to R is in L^2(R).
(Together: F \in E_π \cap L^2(R).)

Alternative characterization: F satisfies
1. \sum_{n \in Z} |F(n)|^2 < \infty;
2. F(z) = \sum_{n \in Z} F(n)\,\mathrm{sinc}(z - n).
A Shannon Sampling Formula
Theorem (H. & W., 2015)
Let µ be a singular Borel probability measure on (-1/2, 1/2). Let {α_n}_{n=0}^{\infty} be the sequence of scalars induced by µ via the Inversion Lemma. Suppose F : R \to C is of the form

F(y) = \int_{-1/2}^{1/2} f(x) e^{-2\pi i y x}\,dµ(x)

for some f \in L^2(µ). Then

F(y) = \sum_{n=0}^{\infty} \Big( \sum_{j=0}^{n} α_{n-j} F(j) \Big)\,\hat{µ}(y - n),

where the series converges uniformly in y.
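Combining the moment and α computations from the Inversion Lemma, the formula can be sanity-checked numerically for an atomic measure (atomic measures are singular, so the theorem applies). Everything named below is an assumption of the experiment, and the partial sums may converge slowly:

```python
import numpy as np

# mu = sum_i w_i delta_{t_i}; f in L^2(mu) is just its values on the atoms.
t = np.array([-0.3, 0.1, 0.4])
w = np.array([0.5, 0.3, 0.2])
f = np.array([1.0, -2.0, 0.5])

mu_hat = lambda y: np.sum(w * np.exp(-2j * np.pi * np.multiply.outer(y, t)), axis=-1)
F = lambda y: np.sum(w * f * np.exp(-2j * np.pi * np.multiply.outer(y, t)), axis=-1)

# alpha_n: power-series reciprocal of the moments mu_hat(0), mu_hat(1), ...
M = 400
m = mu_hat(np.arange(M))
alpha = np.zeros(M, dtype=complex)
alpha[0] = 1.0 / m[0]
for n in range(1, M):
    alpha[n] = -np.sum(m[1:n + 1] * alpha[n - 1::-1]) / m[0]

# Partial sums of F(y) = sum_n (sum_j alpha_{n-j} F(j)) mu_hat(y - n).
Fj = F(np.arange(M))
coeff = np.array([np.sum(alpha[:k + 1][::-1] * Fj[:k + 1]) for k in range(M)])
y0 = 0.37                                 # an arbitrary test point
S = np.cumsum(coeff * mu_hat(y0 - np.arange(M)))
err = np.abs(S - F(y0))                   # should decrease as terms are added
```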
An Alternative Sampling Formula
Since the {g_n} form a Parseval frame, we obtain the following variation.

Theorem (H. & W., 2015)
Let µ be a singular Borel probability measure on (-1/2, 1/2). Let {α_n}_{n=0}^{\infty} be the sequence of scalars induced by µ via the Inversion Lemma. Suppose F : R \to C is of the form

F(y) = \int_{-1/2}^{1/2} f(x) e^{-2\pi i y x}\,dµ(x)

for some f \in L^2(µ). Then

F(y) = \sum_{n=0}^{\infty} \Big( \sum_{j=0}^{n} α_{n-j} F(j) \Big) \Big( \sum_{l=0}^{n} α_{n-l}\,\hat{µ}(y - l) \Big),

where the series converges uniformly in y.
Fourier Series without Frames
The Paley-Wiener Theorem via a Sampling Criteria
The Paley-Wiener Theorem via an Interpolation Criteria