1 © Cognitive Radio Technologies, 2007
Traditional Approaches to Modeling and Analysis
2 © Cognitive Radio Technologies, 2007
Outline
Concepts:
– Dynamical Systems Model
– Fixed Points
– Optimality
– Convergence
– Stability
Models:
– Contraction Mappings
– Markov Chains
– Standard Interference Function
3 © Cognitive Radio Technologies, 2007
Basic Model
Dynamical system
– A system whose change in state is a function of the current state and time
Autonomous system
– Not a function of time
– OK for synchronous timing
Characteristic function: d : A × T → A, with component decision rules d_j : A → A_j, j ∈ N
Evolution function: a(t) = g(a₀, t)
– First step in the analysis of a dynamical system
– Describes the state as a function of time and the initial state
– For simplicity we write g, while noting the relevant timing model
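For an autonomous, discrete-time system, the evolution function can be obtained by repeatedly applying the decision rule. A minimal sketch (the decision rule d below is a hypothetical illustration, not one from the slides):

```python
def evolve(d, a0, steps):
    """Evolution function of an autonomous discrete-time system:
    return the trajectory [a0, d(a0), d(d(a0)), ...]."""
    trajectory = [a0]
    for _ in range(steps):
        trajectory.append(d(trajectory[-1]))
    return trajectory

# Hypothetical decision rule with fixed point a* = 0.5
d = lambda a: 0.5 * a + 0.25

traj = evolve(d, 0.0, 50)  # the trajectory approaches the fixed point 0.5
```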
4 © Cognitive Radio Technologies, 2007
Connection to Cognitive Radio Model
The evolution function g is generated by the decision rule d and the timing model t
The assumption of a known decision rule obviates the need to solve for the evolution function
Reflects the innermost loop of the OODA loop
Useful for deterministic procedural radios (generally discrete time for our purposes)
5 © Cognitive Radio Technologies, 2007
Example ([Yates_95]): power control applications
Defines a discrete-time evolution function in terms of each radio's observed SINR γ_j, each radio's target SINR γ̂_j, and the current transmit power
Applications:
– Fixed assignment: each mobile is assigned to a particular base station
– Minimum power assignment: each mobile is assigned to the base station in the network where its SINR is maximized
– Macro diversity: all base stations in the network combine the signals of the mobiles
– Limited diversity: a subset of the base stations combine the signals of the mobiles
– Multiple connection reception: the target SINR must be maintained at a number of base stations
p_j^(m+1) = (γ̂_j / γ_j^(m)) p_j^(m),   γ_j^(m) = g_jj p_j^(m) / (K + Σ_{k∈N\{j}} g_kj p_k^(m))

where γ̂_j is radio j's target SINR, g_kj is the link gain from transmitter k to receiver j, and K is the receiver noise power.
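A rough sketch of this iteration for a fixed-assignment network; the gain matrix G, the targets, and the noise power K are invented values for illustration, not numbers from the slides:

```python
import numpy as np

# Hypothetical 3-radio network: G[j, k] is the link gain from transmitter k
# to receiver j, gamma_hat holds the target SINRs, K is the receiver noise.
G = np.array([[1.0, 0.1, 0.1],
              [0.2, 1.0, 0.1],
              [0.1, 0.2, 1.0]])
gamma_hat = np.array([2.0, 2.0, 2.0])
K = 0.1

def sinr(p):
    signal = np.diag(G) * p
    interference = G @ p - signal  # sum over k != j of g_kj * p_k
    return signal / (K + interference)

p = np.ones(3)
for _ in range(200):
    p = (gamma_hat / sinr(p)) * p  # p_j <- (target / achieved SINR) * p_j
```

When the targets are feasible, the iteration converges and every radio meets its target SINR exactly.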
6 © Cognitive Radio Technologies, 2007
Applicable analysis models & techniques
Markov models
– Absorbing and ergodic chains
Standard Interference Function
– Can be applied beyond power control
Contraction mappings
Lyapunov stability
7 © Cognitive Radio Technologies, 2007
Differences between assumptions of dynamical system and CRN model
Goals of secondary importance
– Technically not needed
Not appropriate for ontological radios
– May not be a closed-form expression for the decision rule, and thus no evolution function
– Really only know that the radio will "intelligently" work towards its goal
Unwieldy for random procedural radios
– Possible to model as a Markov chain, but requires empirical work or very detailed analysis to discover the transition probabilities
8 © Cognitive Radio Technologies, 2007
Steady-states
Recall the model <N, A, {d_i}, T>, which we characterize with the evolution function d
A steady state is a point where a* = d(a*) for all t ≥ t*
Obvious solution: solve for the fixed points of d
For non-cooperative radios, if a* is a fixed point under synchronous timing, then it is also a fixed point under the other three timing models
Works well for convex action spaces
– A fixed point is not always guaranteed to exist; hence the value of fixed point theorems
Not so well for finite spaces
– Generally requires an exhaustive search
9 © Cognitive Radio Technologies, 2007
Fixed Point Definition
Given a mapping f : X → X, a point x* ∈ X is said to be a fixed point of f if f(x*) = x*
In 2-D, the fixed points of f can be found by evaluating where y = f(x) and y = x intersect
How much information do we need to have to know that a function has a fixed point/Nash equilibrium?
10 © Cognitive Radio Technologies, 2007
Visualizing Fixed Point Existence
Consider a continuous f : X → X
If X is compact and convex, a fixed point must exist
11 © Cognitive Radio Technologies, 2007
Convex Sets
Definition: Convex Set
Let S ⊆ ℝⁿ. S is said to be convex if for all x, y ∈ S, the point w = λx + (1−λ)y is in S for all λ ∈ [0,1].
Equivalent expression: a set S is convex if, for every pair of points x, y drawn from S, the line segment joining x and y is also in S.
(Figure: two convex sets and one non-convex set)
12 © Cognitive Radio Technologies, 2007
Compact Sets
Definition: Compact Set
A bounded set S is compact if there is no point x ∉ S that is the limit of a sequence formed entirely from elements of S.
Equivalent: closed and bounded
Compact sets: any closed finite interval, e.g. [0,1]; the closed disk (note: mathematically a disk is just a 2-ball); the closed n-ball (a filled sphere)
Non-compact sets: (0,1]; [0,∞)
13 © Cognitive Radio Technologies, 2007
Continuous Function
Definition: Continuous Function
A function f : X → Y is continuous if for all x₀ ∈ X the following three conditions hold:
– f(x₀) ∈ Y
– lim_{x→x₀} f(x) exists
– lim_{x→x₀} f(x) = f(x₀)
Note: being differentiable at x₀ implies continuity at x₀, but continuity does not imply differentiability
(Figure: a continuous but not differentiable function)
14 © Cognitive Radio Technologies, 2007
Visualizing Fixed Point Existence
Consider a continuous f : X → X
If X is not compact but convex, or X is compact but not convex, a fixed point need not exist
15 © Cognitive Radio Technologies, 2007
Brouwer’s Fixed Point Theorem
Let f : X → X be a continuous function from a non-empty compact convex set X ⊂ ℝⁿ. Then there is some x* ∈ X such that f(x*) = x*. (Note: originally stated as f : B → B, where B = {x ∈ ℝⁿ : ‖x‖ ≤ 1} is the unit n-ball.)
16 © Cognitive Radio Technologies, 2007
Visualizing Fixed Point Existence
Consider f : X → X as an upper semi-continuous, convex-valued correspondence
If X is compact and convex, a fixed point must exist
17 © Cognitive Radio Technologies, 2007
Kakutani’s Fixed Point Theorem
Let f : X → X be an upper semi-continuous, convex-valued correspondence from a non-empty compact convex set X ⊂ ℝⁿ. Then there is some x* ∈ X such that x* ∈ f(x*).
18 © Cognitive Radio Technologies, 2007
Example steady-state solution
Consider the Standard Interference Function iteration

p_j^(m+1) = (γ̂_j / γ_j^(m)) p_j^(m),   γ_j^(m) = g_jj p_j^(m) / (K + Σ_{k∈N\{j}} g_kj p_k^(m))

Setting p_j^(m+1) = p_j^(m) = p_j* gives the fixed point condition

p_j* = (γ̂_j / g_jj) (K + Σ_{k∈N\{j}} g_kj p_k*)

which can be collected into the matrix equation

[ g_11/γ̂_1    −g_21     ⋯    −g_n1   ]
[  −g_12    g_22/γ̂_2    ⋯    −g_n2   ]   p* = K·1
[    ⋮          ⋮        ⋱      ⋮     ]
[  −g_1n      −g_2n     ⋯   g_nn/γ̂_n ]

where 1 is a ones vector.
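The steady-state system above can also be solved directly with one linear solve. Continuing the same hypothetical notation, with invented illustrative values for G, the targets, and K (not numbers from the slides):

```python
import numpy as np

# Hypothetical values: G[j, k] = link gain from transmitter k to receiver j.
G = np.array([[1.0, 0.1, 0.1],
              [0.2, 1.0, 0.1],
              [0.1, 0.2, 1.0]])
gamma_hat = np.array([2.0, 2.0, 2.0])
K = 0.1

# Row j of M encodes (g_jj / gamma_hat_j) p_j - sum_{k != j} g_kj p_k = K.
M = -G.copy()
np.fill_diagonal(M, np.diag(G) / gamma_hat)
p_star = np.linalg.solve(M, K * np.ones(3))

# At p_star every radio achieves exactly its target SINR.
achieved = np.diag(G) * p_star / (K + G @ p_star - np.diag(G) * p_star)
```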
19 © Cognitive Radio Technologies, 2007
Optimality
In general we assume the existence of some design objective function J : A → ℝ
The desirability of a network state a is the value J(a)
In general, maximizers of J are unrelated to fixed points of d
Figure from Fig 2.6 in I. Akbar, “Statistical Analysis of Wireless Systems Using Markov Models,” PhD Dissertation, Virginia Tech, January 2007
20 © Cognitive Radio Technologies, 2007
Identification of Optimality
If J is differentiable, then an optimal point must either lie on a boundary or be at a point where the gradient is the zero vector:

∇J(a) = (∂J(a)/∂a_1) â_1 + (∂J(a)/∂a_2) â_2 + ⋯ + (∂J(a)/∂a_n) â_n = 0
21 © Cognitive Radio Technologies, 2007
Convergent Sequence
A sequence {p_n} in a Euclidean space X converges to a point p ∈ X if for every ε > 0 there is an integer N such that n ≥ N implies d_X(p_n, p) < ε
This can be equivalently written as p_n → p or lim_{n→∞} p_n = p
22 © Cognitive Radio Technologies, 2007
Example Convergent Sequence
Consider p_n = 1/n
Given ε > 0, choose any integer N > 1/ε; then p = 0
Establish convergence by applying the definition
– Necessitates knowledge of the limit p
23 © Cognitive Radio Technologies, 2007
Convergent Sequence Properties
24 © Cognitive Radio Technologies, 2007
Cauchy Sequence
A sequence {p_n} in a metric space X is a Cauchy sequence if for every ε > 0 there is an integer N such that m, n ≥ N implies d_X(p_n, p_m) < ε
25 © Cognitive Radio Technologies, 2007
Example Cauchy Sequence
Consider p_n = 1/n
Given ε > 0, choose any integer N > 2/ε; then m, n ≥ N implies d(p_n, p_m) ≤ 1/n + 1/m < ε
Establish convergence by applying the definition
– No need to know the limit p
In ℝᵏ, every Cauchy sequence converges, and every convergent sequence is Cauchy
26 © Cognitive Radio Technologies, 2007
Monotonic Sequences
A sequence {s_n} is monotonically increasing if n < m implies s_n < s_m.
A sequence {s_n} is monotonically decreasing if n < m implies s_n > s_m.
(Note: some authors include the equals condition, defining sequences that are respectively monotonically nondecreasing or monotonically nonincreasing.) A sequence which is either monotonically increasing or monotonically decreasing is said to be monotonic.
27 © Cognitive Radio Technologies, 2007
Convergent Monotonic Sequences
Suppose {s_n} is a monotonic sequence in X. Then {s_n} converges if X is bounded.
Note that {s_n} also converges if X is compact.
28 © Cognitive Radio Technologies, 2007
Showing convergence with nonlinear programming
(Surface plot shamelessly lifted from Matlab's logo)
Left unanswered: where does J come from?
29 © Cognitive Radio Technologies, 2007
Stability
(Phase portraits in the x–y plane)
– Stable, but not attractive
– Attractive, but not stable
30 © Cognitive Radio Technologies, 2007
Lyapunov’s Direct Method
Left unanswered: where does L come from?
31 © Cognitive Radio Technologies, 2007
Comments on analysis
We just covered some very general techniques for showing that a system has a fixed point (steady-state), converges, and is stable.
Could apply these to every problem independently, but can sometimes be painful (and nonobvious – where does Lyapunov function come from, convergence assumes we already know a fixed point)
My preferred approach is to analyze general models and then show that particular problems satisfy conditions of one of the general models.
32 © Cognitive Radio Technologies, 2007
Analysis models appropriate for dynamical systems
Contraction Mappings
– Identifiable unique steady state
– Everywhere convergent, with a bound on the convergence rate
– Lyapunov stable (ε = δ); Lyapunov function = distance to the fixed point
– General Convergence Theorem (Bertsekas) provides convergence for asynchronous timing if the mapping is a contraction under synchronous timing
Standard Interference Function
– Forms a pseudo-contraction mapping
– Can be applied beyond power control
Markov Chains (Ergodic and Absorbing)
– Also useful in game analysis
33 © Cognitive Radio Technologies, 2007
Contraction Mappings
Every contraction is a pseudo-contraction
Every pseudo-contraction has a fixed point
Every pseudo-contraction converges at a geometric rate:

d(a(t), a*) ≤ α^t d(a(0), a*),   α ∈ [0, 1)

Every pseudo-contraction is globally asymptotically stable
– The Lyapunov function is the distance to the fixed point
(Figure: a pseudo-contraction which is not a contraction)
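A quick numerical check of the geometric bound, using an invented one-dimensional contraction (α = 0.5 and a* = 2.0 are arbitrary choices):

```python
# Hypothetical contraction on the real line with modulus alpha and fixed
# point a_star; verify d(a(t), a*) <= alpha**t * d(a(0), a*) along the path.
alpha, a_star = 0.5, 2.0
d = lambda a: a_star + alpha * (a - a_star)

a0 = 10.0
a = a0
for t in range(1, 21):
    a = d(a)
    assert abs(a - a_star) <= alpha ** t * abs(a0 - a_star) + 1e-12
```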
34 © Cognitive Radio Technologies, 2007
General Convergence Theorem
A synchronous contraction mapping also converges asynchronously
35 © Cognitive Radio Technologies, 2007
Standard Interference Function
Conditions: suppose d : A → A satisfies
– Positivity: d(a) > 0
– Monotonicity: if a¹ ≥ a², then d(a¹) ≥ d(a²)
– Scalability: for all α > 1, αd(a) > d(αa)
Then d is a pseudo-contraction mapping [Berggren] under synchronous timing
– Implies synchronous and asynchronous convergence
– Implies stability

R. Yates, "A Framework for Uplink Power Control in Cellular Radio Systems," IEEE JSAC, vol. 13, no. 7, Sep. 1995, pp. 1341–1347.
F. Berggren, "Power Control, Transmission Rate Control and Scheduling in Cellular Radio Systems," PhD Dissertation, Royal Institute of Technology, Stockholm, Sweden, May 2001.
36 © Cognitive Radio Technologies, 2007
Yates’ power control applications
Target SINR algorithms:

p_j^(k+1) = (γ̂_j / γ_j^(k)) p_j^(k),   γ_j^(k) = g_jj p_j^(k) / (K + Σ_{i∈N\{j}} g_ij p_i^(k))

Applications:
– Fixed assignment: each mobile is assigned to a particular base station
– Minimum power assignment: each mobile is assigned to the base station in the network where its SINR is maximized
– Macro diversity: all base stations in the network combine the signals of the mobiles
– Limited diversity: a subset of the base stations combine the signals of the mobiles
– Multiple connection reception: the target SINR must be maintained at a number of base stations
37 © Cognitive Radio Technologies, 2007
Example steady-state solution
Consider the Standard Interference Function iteration

p_j^(m+1) = (γ̂_j / γ_j^(m)) p_j^(m),   γ_j^(m) = g_jj p_j^(m) / (K + Σ_{k∈N\{j}} g_kj p_k^(m))

Setting p_j^(m+1) = p_j^(m) = p_j* gives the fixed point condition

p_j* = (γ̂_j / g_jj) (K + Σ_{k∈N\{j}} g_kj p_k*)

which can be collected into the matrix equation

[ g_11/γ̂_1    −g_21     ⋯    −g_n1   ]
[  −g_12    g_22/γ̂_2    ⋯    −g_n2   ]   p* = K·1
[    ⋮          ⋮        ⋱      ⋮     ]
[  −g_1n      −g_2n     ⋯   g_nn/γ̂_n ]

where 1 is a ones vector.
38 © Cognitive Radio Technologies, 2007
Markov Chains
Describes adaptations as probabilistic transitions between network states
– d is nondeterministic
Sources of randomness:
– Nondeterministic timing
– Noise
Frequently depicted as a weighted digraph or as a transition matrix
39 © Cognitive Radio Technologies, 2007
General Insights ([Stewart_94])
Probability of occupying a state after two iterations:
– Form P·P. The entry p_mn in the mth row and nth column of P·P represents the probability that the system is in state a_n two iterations after being in state a_m.
Consider P^k:
– The entry p_mn in the mth row and nth column of P^k represents the probability that the system is in state a_n k iterations after being in state a_m.
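The same insight in code, using an invented two-state transition matrix:

```python
import numpy as np

# Hypothetical 2-state transition matrix; each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

P2 = P @ P                          # entry [m, n]: Pr{state n two iterations after m}
Pk = np.linalg.matrix_power(P, 50)  # entry [m, n]: Pr{state n, 50 iterations after m}
```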
40 © Cognitive Radio Technologies, 2007
Steady-states of Markov chains
It may be inaccurate to consider a Markov chain to have a fixed point
– Actually OK for absorbing Markov chains
Stationary distribution
– A probability distribution π* such that π*ᵀP = π*ᵀ is said to be a stationary distribution for the Markov chain defined by P
Limiting distribution
– Given an initial distribution π₀ and transition matrix P, the limiting distribution is the distribution that results from evaluating lim_{k→∞} π₀ᵀP^k
41 © Cognitive Radio Technologies, 2007
Ergodic Markov Chain
[Stewart_94] states that a Markov chain is ergodic if it is (a) irreducible, (b) positive recurrent, and (c) aperiodic
An easier rule to identify:
– For some k, P^k has only nonzero entries
(Convergence, steady state) If ergodic, then the chain has a unique limiting stationary distribution
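For an ergodic chain, the stationary distribution can be computed by solving π*ᵀP = π*ᵀ together with the normalization Σ_i π_i = 1. A sketch with an invented chain (all entries of P are nonzero, so the easier rule above certifies ergodicity with k = 1):

```python
import numpy as np

# Hypothetical ergodic chain: P has only nonzero entries.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Stack pi^T (P - I) = 0 with sum(pi) = 1 and solve by least squares.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# pi is also the limiting distribution: every row of P^k approaches pi.
```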
42 © Cognitive Radio Technologies, 2007
Shortcomings in traditional techniques
Fixed point theorems provide little insight into convergence or stability
Lyapunov functions are hard to identify
Contraction mappings are rarely encountered
Does not address nondeterministic algorithms
– e.g., genetic algorithms
Analyzes one algorithm at a time, yielding little insight into related algorithms
Not very useful for finite action spaces
No help if all you have is the cognitive radios' goals and actions
43 © Cognitive Radio Technologies, 2007
Absorbing Markov Chains
Absorbing state
– Given a Markov chain with transition matrix P, a state a_m is said to be an absorbing state if p_mm = 1
Absorbing Markov chain
– A Markov chain is said to be an absorbing Markov chain if it has at least one absorbing state and, from every state in the chain, there exists a sequence of state transitions with nonzero probability that leads to an absorbing state. The non-absorbing states are called transient states.
(Figure: a six-state chain a0–a5)
44 © Cognitive Radio Technologies, 2007
Absorbing Markov Chain Insights ([Kemeny_60] )
Canonical form: reorder the states so that the transient states come first:

P′ = [ Q  R ]
     [ 0  I ]

Fundamental matrix: N = (I − Q)⁻¹
– The entry n_km gives the expected number of times that the system will pass through state a_m given that the system starts in state a_k
(Convergence rate) The expected number of iterations before the system ends in an absorbing state, starting in state a_m, is t_m, where t = N·1 and 1 is a ones vector
(Final distribution) The probability of ending up in absorbing state a_m given that the system started in a_k is b_km, where B = NR
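A sketch of these quantities on a small invented chain (states 0 and 1 transient, state 2 absorbing; the Q and R blocks are illustrative, not from the slides):

```python
import numpy as np

# Canonical-form blocks of a hypothetical absorbing chain; each row of the
# full matrix [Q | R] sums to 1.
Q = np.array([[0.5, 0.2],
              [0.1, 0.6]])
R = np.array([[0.3],
              [0.3]])

N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix: expected visit counts
t = N @ np.ones(2)                # expected iterations until absorption
B = N @ R                         # probability of ending in each absorbing state
```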
45 © Cognitive Radio Technologies, 2007
Two-Channel DFS
(Figure: transition diagram over the states (f1,f1), (f1,f2), (f2,f1), (f2,f2); the states (f1,f2) and (f2,f1) are absorbing, each with a self-transition of probability 1)

P =
          (f1,f1)  (f1,f2)  (f2,f1)  (f2,f2)
(f1,f1)     0.25     0.25     0.25     0.25
(f1,f2)     0        1        0        0
(f2,f1)     0        0        1        0
(f2,f2)     0.25     0.25     0.25     0.25

N =
          (f1,f1)  (f2,f2)
(f1,f1)     1.5      0.5
(f2,f2)     0.5      1.5

B =
          (f1,f2)  (f2,f1)
(f1,f1)     0.5      0.5
(f2,f2)     0.5      0.5
Goal:

u_j(a) = {  1,   f_j ≠ f_{−j}
           −1,   f_j = f_{−j}

Decision rule:

d_j(a) = { f_j,           u_j(a) = 1
           f ∈ F\{f_j},   u_j(a) = −1

Timing: a random timer set to go off with probability p = 0.5 at each iteration
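The slide's N and B matrices follow directly from the canonical-form blocks of its transition matrix; a quick check:

```python
import numpy as np

# Canonical-form blocks from the slide's transition matrix: transient states
# (f1,f1), (f2,f2); absorbing states (f1,f2), (f2,f1).
Q = np.array([[0.25, 0.25],
              [0.25, 0.25]])  # transitions among (f1,f1), (f2,f2)
R = np.array([[0.25, 0.25],
              [0.25, 0.25]])  # transitions into (f1,f2), (f2,f1)

N = np.linalg.inv(np.eye(2) - Q)  # [[1.5, 0.5], [0.5, 1.5]], as on the slide
B = N @ R                         # [[0.5, 0.5], [0.5, 0.5]], as on the slide
t = N @ np.ones(2)                # expected iterations to absorption
```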
46 © Cognitive Radio Technologies, 2007
Analysis Models
47 © Cognitive Radio Technologies, 2007
Model Steady States
48 © Cognitive Radio Technologies, 2007
Model Convergence
49 © Cognitive Radio Technologies, 2007
Model Stability
50 © Cognitive Radio Technologies, 2007
Shortcomings in “traditional” techniques
Fixed point theorems provide little insight into convergence or stability
Lyapunov functions are hard to identify
Contraction mappings are rarely encountered
Does not address nondeterministic algorithms
– e.g., genetic algorithms
Not very useful for finite action spaces
No help if all you have is the cognitive radios' goals and actions
51 © Cognitive Radio Technologies, 2007
Comments
No unified method for analyzing cognitive radio interactions
– A random collection of methods for different problems
Perhaps a bit of a stretch to call these techniques "traditional" with respect to cognitive radios
Is not suitable for analyzing radios with