Bayesian Statistics (Transcript)
-
Introduction to Bayesian Statistics (and some computational methods)
Theo Kypraios
http://www.maths.nott.ac.uk/tk
MSc in Applied Bioinformatics @ Cranfield University.
Statistics and Probability Research Group
-
My Background
Bayesian Statistics;
Computational methods, such as Markov Chain and Sequential Monte Carlo (MCMC & SMC);
Large and complex (real) data analysis, mainly Infectious Disease Modelling, Neuroimaging, Time Series ...;
Recent interest in Bioinformatics (Gene Expression Data).
-
Outline of the Talk
1. Why is (statistical) modelling useful?
2. The Frequentist/Classical Approach to Inference.
3. The Bayesian Paradigm to Inference.
Theory
Examples
4. More Advanced Concepts (e.g. Model Choice/Comparison)
5. Conclusions
-
Use of Statistics
Examples include:
Sample Size Determination
Comparison between two (or more) groups
t-tests, Z-tests;
Analysis of variance (ANOVA);
tests for proportions etc;
Classification/Clustering;
. . .
-
Statistical Modelling
-
Why Statistical Modelling?
[Scatter plot: response (y) against explanatory (x)]
Suppose that we are interested in investigating the association between x and y.
Isn't it just enough to calculate the correlation (ρ) between x and y?
-
Why Statistical Modelling?
Perhaps, for this dataset, it is enough...
... ρ = 0.83 indicates a strong correlation between x and y.
-
Why Statistical Modelling?
What about this dataset? The correlation coefficient turns out to be 0.5.
[Scatter plot: response (y) against explanatory (x)]
-
Why Statistical Modelling?
What about this dataset? The correlation coefficient turns out to be -0.78.
[Scatter plot: response (y) against explanatory (x)]
-
Statistical Modelling
One of the best ways to describe some data is by fitting a (statistical) model.
Examples:
1. y = α + βx + error
2. y = α + βx + γx² + error
3. y = αx / (1 + βx) + error
The model parameters (α, β, γ) tell us much more about the relationship between x and y than just the correlation coefficient...
What about a more general model?
4. y = f(θ, x) + error
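As a concrete (hypothetical) illustration of model 1, the R sketch below simulates data from y = α + βx + error with assumed values α = 1 and β = 2:

# A minimal sketch of model 1, y = alpha + beta*x + error,
# with assumed parameter values alpha = 1, beta = 2.
set.seed(1)
x <- rnorm(100)                  # explanatory variable
y <- 1 + 2 * x + rnorm(100)      # response with Gaussian error
plot(x, y, xlab = "explanatory (x)", ylab = "response (y)")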
-
Aims of Statistical Modelling
In statistical modelling we are interested in estimating the unknown parameters from data → Statistical Inference.
Parameter estimation needs to be done in a formal way. In other words, we ask ourselves the question: what are the "best" values, say, for α and β, such that the proposed model best describes the observed data?
And, what do we mean by "best"?
Should we only look for a single estimate for (α, β)?
No!
-
Least Squares Estimation
[Plot: data points with a fitted curve]
Find the values of α and β which minimize the squared difference (distance) between what the model predicts and the data (a.k.a. Least Squares Estimation (LSE)).
What about other pairs (α, β) (perhaps very different from each other) which describe the observed data equally well → uncertainty in parameter estimation...
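For model 1, least squares estimates are one line in R via the built-in lm(); this is a sketch on simulated data (the true values α = 1, β = 2 are assumptions for illustration):

# Least squares estimation for y = alpha + beta*x + error using lm(),
# which minimises the sum of squared differences.
set.seed(1)
x <- rnorm(100)
y <- 1 + 2 * x + rnorm(100)
fit <- lm(y ~ x)
coef(fit)      # least squares estimates of alpha and beta
confint(fit)   # interval estimates, reflecting the uncertainty above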
-
Classical (or Frequentist) Inference
-
Statistical Approach: The Likelihood Function
The likelihood function plays a fundamental role in statistical inference.
In non-technical terms, the likelihood function is a function that, when evaluated at a particular point, say (α₀, β₀), gives the probability of observing the (observed) data given that the parameters (α, β) take the values α₀ and β₀.
-
The Likelihood Function: A Toy Example
Let us think of a very simple example.
Consider a Binomial experiment:
n trials (e.g. toss a coin n times);
model the number of successes x (e.g. got heads x times).
Suppose we are interested in estimating the probability of success (denoted by θ) for one particular experiment.
Data: Out of the 100 times we repeated the experiment, we observed 80 successes.
What about L(0.1), L(0.7), L(0.99)?
-
The Likelihood Function: A Toy Example
What L(0.7) really means is: how likely is it to observe heads 80 times out of 100 coin tosses, if the true (but unknown) probability of success is 0.7?
In other words,
L(0.7) = P(X = 80 | θ = 0.7) = (100 choose 80) × 0.7^80 × (1 − 0.7)^(100−80) ≈ 0.0076
But if we can evaluate L(0.7), then we can evaluate L(θ) for all possible values of θ.
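As a quick check, dbinom() evaluates this Binomial probability in R, and evaluating it over a grid of θ values traces out the whole likelihood curve (shown on the next slide); a minimal sketch:

# The Binomial likelihood L(theta) = P(X = 80 | theta) for n = 100 trials.
dbinom(80, size = 100, prob = 0.7)         # L(0.7)
theta <- seq(0, 1, by = 0.01)
L <- dbinom(80, size = 100, prob = theta)  # L(theta) over a grid
plot(theta, L, type = "l", xlab = "theta", ylab = "L(theta)")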
-
The Likelihood Function: A Toy Example
[Plot: the likelihood function L(θ) against θ on [0, 1], peaking around θ = 0.8]
-
Classical (Frequentist) Inference
Frequentist inference tells us that:
we should look for parameter values that maximise the likelihood function → maximum likelihood estimator (MLE);
associate parameter uncertainty with the calculation of standard errors (SE) ...
... which in turn enable us to construct confidence intervals for the parameters, e.g. a 95% CI:
θ̂ ± 1.96 √var(θ̂), or θ̂ ± 1.96 SE(θ̂)
For this example, this turns out to be
(0.8 − 1.96 × 0.04, 0.8 + 1.96 × 0.04) = (0.72, 0.88)
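In R, the MLE and the 95% confidence interval above can be reproduced directly:

# MLE and 95% confidence interval for 80 successes out of 100 trials.
x <- 80; n <- 100
mle <- x / n                       # maximum likelihood estimate: 0.8
se  <- sqrt(mle * (1 - mle) / n)   # standard error: 0.04
mle + c(-1.96, 1.96) * se          # (0.72, 0.88), as on the slide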
-
Interval Estimation
Having obtained the variance-covariance matrix, we can then construct confidence intervals for the parameters based on sampling theory.
Such an approach is based on the notion that:
1. if the experiment was to be repeated many times,
2. and a maximum likelihood estimate as well as a confidence interval were derived each time,
3. then, on average, the interval estimates would contain the true parameter 100(1 − α)% of the time.
-
Interval Estimation
What's wrong with that?
Nothing, but ...
... it is approximate, counter-intuitive (the data are assumed to be random, the parameter is assumed to be fixed) and mathematically intractable for complex scenarios.
-
Some (Other) Issues with this Approach
For instance, recall the previous experiment: we cannot ask (or even answer!) questions such as
1. what is the chance that the unknown parameter (i.e. the probability of success) is greater than 0.6? i.e. compute the quantity P(θ > 0.6 | data) ...
2. or something like P(0.3 < θ < 0.9 | data);
Sometimes we are interested in (not necessarily linear) functions of parameters, e.g.
θ₁ + θ₂, or the odds ratio [θ₁/(1 − θ₁)] / [θ₂/(1 − θ₂)].
Whilst in some cases the frequentist approach offers a solution which is not exact but approximate, there are others for which it cannot, or it is very hard to do so.
-
Bayesian Inference
-
Bayesian Inference
When drawing inference within a Bayesian framework,
the data are treated as a fixed quantity and
the parameters are treated as random variables.
That allows us to assign probabilities to parameters (and models), making the inferential framework
far more intuitive, and
more straightforward (at least in principle!)
-
Bayesian Inference (2)
Denote by θ the parameters and by y the observed data. Bayes' theorem allows us to write:
π(θ|y) = π(y|θ)π(θ) / π(y) = π(y|θ)π(θ) / ∫ π(y|θ)π(θ) dθ
where:
π(θ|y) denotes the posterior distribution of the parameters given the data;
π(y|θ) = L(θ) is the likelihood function;
π(θ) is the prior distribution of θ, which expresses our beliefs about the parameters before we see the data;
π(y) is often called the marginal likelihood and plays the role of the normalising constant of the density of the posterior distribution.
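To make the formula concrete, here is a sketch that evaluates posterior ∝ likelihood × prior on a grid for the coin example (80 successes out of 100) and normalises numerically; the flat Beta(1, 1) prior is an assumption for illustration:

# Grid approximation of the posterior: likelihood times prior, normalised.
theta <- seq(0.001, 0.999, by = 0.001)
prior <- dbeta(theta, 1, 1)          # assumed flat prior pi(theta)
lik   <- dbinom(80, 100, theta)      # likelihood pi(y | theta)
post  <- lik * prior
post  <- post / sum(post * 0.001)    # normalise: plays the role of pi(y)
plot(theta, post, type = "l", xlab = "theta", ylab = "posterior density")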
-
Bayesian Inference (2)
In a nutshell, the Bayesian paradigm provides us with a distribution for what we have learned about the parameter from the data.
In contrast to the frequentist approach, with which we get a point estimate (MLE) and a standard error (SE), in the Bayesian world we get a whole distribution (i.e. we get much more for our money!)
-
The Posterior Distribution
-
Bayesian Inference (3)
We can write the posterior distribution as follows:
π(θ|y) = π(y|θ)π(θ) / ∫ π(y|θ)π(θ) dθ
The density of the posterior distribution is proportional to the likelihood times the prior density;
The posterior distribution tells us everything we need to know about the parameter;
Statements such as P(θ > k) or P(θ/(1 + θ) > k), where k is a constant, make sense since θ is a random variable; in addition, they are very useful in modelling.
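Such statements are one line of R once the posterior is available; e.g. for the coin example, a flat prior gives the conjugate Beta(81, 21) posterior (an assumed setup for illustration):

# Posterior probability statements under a Beta(81, 21) posterior.
1 - pbeta(0.6, 81, 21)                   # P(theta > 0.6 | data)
pbeta(0.9, 81, 21) - pbeta(0.3, 81, 21)  # P(0.3 < theta < 0.9 | data)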
-
Why is Having the Distribution Very Useful?
[Plot: an example posterior density on (0, 2)]
-
Why is Having the Distribution Very Useful?
[Plot: an example posterior density on (0, 8)]
-
Why is Having the Distribution Very Useful?
[Plot: an example posterior density on (0.2, 1.0)]
-
The Prior
Recall that:
π(θ|y) = π(y|θ)π(θ) / π(y) = π(y|θ)π(θ) / ∫ π(y|θ)π(θ) dθ
-
Bayesian Inference: The Prior
One of the biggest criticisms of the Bayesian paradigm is the use of the prior distribution.
"Couldn't I choose a very informative prior and come up with a favourable result?"
Yes, but this is bad science!
"I know nothing about the parameter; what prior should I choose?"
Choose an uninformative (or vague) prior → more details shortly.
If there is a lot of data available, then the posterior distribution will not be influenced so much by the prior, and vice versa.
-
Some Examples on the Effect of the Prior
83/100 successes: interested in the probability of success
[Plot: posterior, likelihood, and prior densities for θ on (0, 1)]
-
Some Examples on the Effect of the Prior
83/100 successes: interested in the probability of success
[Plot: posterior, likelihood, and prior densities for θ on (0, 1)]
-
Some Examples on the Effect of the Prior
83/100 successes: interested in the probability of success
[Plot: posterior, likelihood, and prior densities for θ on (0, 1)]
-
Some Examples on the Effect of the Prior
8/10 successes: interested in the probability of success
[Plot: posterior, likelihood, and prior densities for θ on (0, 1)]
-
Some Examples on the Effect of the Prior
83/100 successes: interested in the probability of success
[Plot: posterior, likelihood, and prior densities for θ on (0, 1)]
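Plots like these can be reproduced with a few lines of R: with a Beta(a, b) prior and x successes out of n, the posterior is Beta(a + x, b + n − x); the particular prior Beta(5, 5) below is an assumption for illustration:

# Overlay posterior, (normalised) likelihood and prior for 83/100 successes.
theta <- seq(0.001, 0.999, by = 0.001)
a <- 5; b <- 5; x <- 83; n <- 100
plot(theta, dbeta(theta, a + x, b + n - x), type = "l",
     xlab = "theta", ylab = "density")                 # posterior
lines(theta, dbeta(theta, 1 + x, 1 + n - x), lty = 2)  # likelihood (normalised)
lines(theta, dbeta(theta, a, b), lty = 3)              # prior
legend("topleft", legend = c("posterior", "lik", "prior"), lty = 1:3)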
-
The Prior Distribution
Take-home message: Be rather careful with the choice of prior, which of course is (or can be) subjective!
-
(Back to) The Posterior Distribution
-
Taking a Closer Look at the Formulas
We can write the posterior distribution as follows:
π(θ|y) = π(y|θ)π(θ) / ∫ π(y|θ)π(θ) dθ = π(y|θ)π(θ) / π(y) ∝ π(y|θ)π(θ)   (1)
where π(y) is often called the marginal likelihood and plays the role of the normalising constant of the density of the posterior distribution, i.e. it makes the area under the curve π(y|θ)π(θ) integrate to one, giving a proper probability density function.
-
How to Deal with the Normalising Constant?
If we were only interested in finding the value of θ for which π(θ|y) is maximised (the MAP estimate), then there would be no need to compute the normalising constant π(y).
Nevertheless, suppose that we want to get a summary statistic from our posterior and compute, for instance, a posterior expectation, e.g.
E[θ|y] = ∫ θ π(θ|y) dθ
or the posterior variance etc. That, of course, requires knowledge of the full expression of π(θ|y), i.e. not just up to a normalising constant.
How do we compute this integral then? Numerical integration techniques? Can we guess? Or ...
-
Do we Really Need to Compute the Normalising Constant?
Instead of deriving the full expression of the probability density function of θ|y explicitly, we could draw samples from π(θ|y).
If we have samples from π(θ|y), then we can approximate the posterior expectation as follows:
E[θ|y] ≈ (1/M) Σ_{j=1}^{M} θⱼ, where θⱼ ~ π(θ|y)
Therefore, the only thing we need to come up with is a method which will allow us to draw samples from π(θ|y) without the need to evaluate the normalising constant.
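A sketch of this Monte Carlo approximation, using the Beta(81, 21) posterior from the coin example as a stand-in for whatever sampler we end up with:

# Monte Carlo approximation of the posterior mean E[theta | y].
M <- 10^5
draws <- rbeta(M, 81, 21)   # theta_j ~ pi(theta | y), j = 1, ..., M
mean(draws)                 # approx E[theta | y]; the exact value is 81/102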
-
Derive the Posterior Distribution via Sampling-Based Inference
-
Sampling-Based Inference
If we can draw samples from the posterior distribution π(θ|y), then we can do everything we want/need:
Estimate the density (kernel density estimation, histogram);
Estimate moments (e.g. means, variances), probabilities, etc.;
Derive the distribution of (not necessarily linear) functions of the parameters g(θ) in a very straightforward manner;
Visualise the relationship of two or more model parameters.
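All of these summaries are short one-liners given posterior samples; the Beta(81, 21) draws below stand in for samples from any posterior:

# Summaries from posterior samples.
draws <- rbeta(10^5, 81, 21)
plot(density(draws))             # kernel density estimate of the posterior
mean(draws); var(draws)          # posterior moments
mean(draws > 0.6)                # P(theta > 0.6 | data)
odds <- draws / (1 - draws)      # a non-linear function g(theta)
quantile(odds, c(0.025, 0.975))  # 95% credible interval for the odds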
-
A Toy Example on Sampling-Based Inference
Suppose that the random variable X comes from a Gamma distribution with the following probability density function (pdf):
f_X(x | α, β) = (β^α / Γ(α)) x^(α−1) exp{−βx},  α, β > 0
For any given α and β, the expectation, i.e. the mean of X, is derived by doing this integral:
E[X] = ∫ x f_X(x) dx
and we also know that the probability
P(X < 0.5) = ∫₀^0.5 f_X(x) dx
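For the Gamma(5, 3) example used on the following slides, both quantities are available exactly in R:

# Exact mean and probability for a Gamma(alpha = 5, beta = 3) distribution.
5 / 3                # E[X] = alpha / beta
pgamma(0.5, 5, 3)    # P(X < 0.5)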
-
A Toy Example on Sampling-Based Inference
1. Suppose that, somehow, we have a way of simulating realizations from the Gamma distribution ...
2. ... and draw N samples, where N is a large number, e.g. 100,000.
3. We can then plot the histogram of these N values by doing something like this in R:

hist(rgamma(10^5, 5, 3), prob = TRUE,
     main = "Samples from Gamma(5,3)",
     xlab = expression(x), col = 2)
-
A Toy Example on Sampling-Based Inference
We get something like this:
[Histogram: "Samples from Gamma(5,3)", density against x on (0, 7)]
-
A Toy Example on Sampling-Based Inference
We get something like this, and draw f_X(x) on top:
[Histogram of the samples with the Gamma(5,3) density curve overlaid]
-
A Toy Example on Sampling-Based Inference
That means that we can:
approximate (or estimate) the mean E[X] by the sample mean, i.e.
E[X] ≈ (1/N) Σ_{i=1}^{N} xᵢ
and the probability P(X < 0.5) by the proportion of the values in the sample which are less than 0.5, i.e.
P(X < 0.5) ≈ (1/N) Σ_{i=1}^{N} 1(xᵢ < 0.5)
Of course, these approximations get better and better as N → ∞.
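In R, the two Monte Carlo estimates above are:

# Monte Carlo estimates of E[X] and P(X < 0.5) from Gamma(5, 3) samples.
set.seed(1)
x <- rgamma(10^5, 5, 3)
mean(x)        # sample mean, approx E[X] = 5/3
mean(x < 0.5)  # proportion below 0.5, approx pgamma(0.5, 5, 3)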
-
An Example of a Bivariate Distribution
[Scatter plot: samples from a bivariate distribution]