10 fundamental theorems for econometrics

63
10 Fundamental Theorems for Econometrics Thomas S. Robinson (https://ts-robinson.com) 2020-09-30 โ€“ v0.1

Upload: others

Post on 19-Jan-2022

3 views

Category:

Documents


0 download

TRANSCRIPT

10 Fundamental Theorems for Econometrics

Thomas S. Robinson (https://ts-robinson.com)

2020-09-30 โ€“ v0.1

2

Preface

A list of 10 econometric theorems was circulated on Twitter citing what Jeffrey Wooldridge claims you justneed to apply repeatedly in order to do econometrics. As a political scientist with applied statistics training,this list caught my attention because it contains many of the theorems I see used in (methods) papers, butwhich I typically glaze over for lack of understanding. The complete list (slightly paraphrased) is:

1. Law of Iterated Expectations, Law of Total Variance2. Linearity of Expectations, Variance of a Sum3. Jensenโ€™s Inequality, Chebyshevโ€™s Inequality4. Linear Projection and its Properties5. Weak Law of Large Numbers, Central Limit Theorem6. Slutskyโ€™s Theorem, Continuous Convergence Theorem, Asymptotic Equivalence Lemma7. Big Op, Little op, and the algebra of them8. Delta Method9. Frisch-Waugh Partialling Out

10. For PD matrices A and B, A-B is PSD if and only if ๐ตโˆ’1 โˆ’ ๐ดโˆ’1 is PSD.

As an exercise in improving my own knowledge of these fundamentals, I decided to work through each theoremโ€“ using various lecture notes found online, and excellent textbooks like Aronow & Millerโ€™s (2019) Foundationsof Agnostic Statistics, Angrist and Pischkeโ€™s (2008) Mostly Harmless Econometrics, and Wassermanโ€™s (2004)All of Statistics.

3

I found for a list of important theorems there were few consistent sources that contained explanations andproofs of each item. Often, textbooks had excellent descriptive intuitions but would hold back on offeringfull, annotated proofs. Or full proofs were offered without explaining the wider significance of the theorems.Some of the concepts, moreover, had different definitions dependent on the field or source of the proof (likeSlutskyโ€™s Theorems)!

This resource is an attempt to collate my writing on these theorems โ€“ the intuitions, proofs, and examples โ€“into a single document. I have taken some liberties in doing so โ€“ for instance combining Wooldridgeโ€™s firsttwo points into a single chapter on โ€˜Expectation Theoremsโ€™, and often omit continuous proofs where discreteproofs are similar and easier to follow. That said, I have tried to be reasonably exhaustive in my proofs sothat they are accessible to those (like me) without a formal statistics background.

The inspiration for this project was Jeffrey Wooldridgeโ€™s list, an academic whose work I admire greatly. Thisdocument, however, is in no way endorsed by or associated with him. Most of the applied examples (andinvisible corrections to my maths) stem from discussions with Andy Eggers and Musashi Harukawa. Therewill inevitably still be some errors, omissions, and confusing passages. I would be more than grateful toreceive any feedback at [email protected] or via the GitHub repo for this project.

Prerequisites

I worked through these proofs learning the bits of maths I needed as I went along. For those who want toconsult Google a little less than I had to, the following should ease you into the more formal aspects of thisdocument:

โ€ข A simple working knowledge of probability theory

โ€ข The basics of expectation notation, but you donโ€™t need to know any expectation rules (I cover theimportant ones in Chapter 1).

โ€ข A basic understanding of linear algebra (i.e. how you multiply matrices, what transposition is, and whatthe identity matrix looks like). More complicated aspects like eigenvalues and Gaussian eliminationmake fleeting appearances, particularly in Chapter 9, but these are not crucial.

โ€ข Where relevant, I provide coded examples in R. Iโ€™ve kept my use of packages to a minimum so thecode should be reasonably easy to read/port to other programming languages.

Version notes

v0.1

This is the first complete draft, and some sections are likely to be changed in future versions. For instance,in Chapter 9 I would like to provide a more comprehensive overview of quadratic form in linear algebra, howwe derive gradients, and hence the shape of PD matrices. Again, any suggestions on ways to improve/addto this resource are very much welcome!

10 Fundamental Theorems for Econometrics by Thomas Samuel Robinson is licensed under CC BY-NC-SA4.0

4

Chapter 1

Expectation Theorems

This chapter sets out some of the basic theorems that can be derived from the definition of expectations,as highlighted by Wooldridge. I have combined his first two points into a single overview of expectationmaths. The theorems themselves are not as immediately relevant to applied research as some of the latertheorems on Wooldridgeโ€™s list. However, they often form the fundamental basis upon which future proofsare conducted.

1.1 Law of Iterated Expectations

The Law of Iterated Expectations (LIE) states that:

๐”ผ[๐‘‹] = ๐”ผ[๐”ผ[๐‘‹|๐‘Œ ]] (1.1)

In plain English, the expected value of ๐‘‹ is equal to the expectation over the conditional expectation of ๐‘‹given ๐‘Œ . More simply, the mean of X is equal to a weighted mean of conditional means.

Aronow & Miller (2019) note that LIE is โ€˜one of the most important theoremsโ€™, because being able to expressunconditional expectation functions in terms of conditional expectations allow you to hold some parametersfixed, making calculations more tractable.

1.1.1 Proof of LIE

First, we can express the expectation over conditional expectations as a weighted sum over all possible valuesof Y, and similarly express the conditional expectations using summation too:

๐”ผ[๐”ผ[๐‘‹|๐‘Œ ]] = โˆ‘๐‘ฆ

๐”ผ[๐‘‹|๐‘Œ = ๐‘ฆ]๐‘ƒ (๐‘Œ = ๐‘ฆ) (1.2)

= โˆ‘๐‘ฆ

โˆ‘๐‘ฅ

๐‘ฅ๐‘ƒ(๐‘‹ = ๐‘ฅ|๐‘Œ = ๐‘ฆ)๐‘ƒ(๐‘Œ = ๐‘ฆ) (1.3)

= โˆ‘๐‘ฆ

โˆ‘๐‘ฅ

๐‘ฅ๐‘ƒ(๐‘Œ = ๐‘ฆ|๐‘‹ = ๐‘ฅ)๐‘ƒ(๐‘‹ = ๐‘ฅ)., (1.4)

5

Note that the final line follows due to Bayesโ€™ Rule.1 And so:

... = โˆ‘๐‘ฆ

โˆ‘๐‘ฅ

๐‘ฅ๐‘ƒ(๐‘‹ = ๐‘ฅ)๐‘ƒ(๐‘Œ = ๐‘ฆ|๐‘‹ = ๐‘ฅ) (1.5)

= โˆ‘๐‘ฅ

๐‘ฅ๐‘ƒ(๐‘‹ = ๐‘ฅ) โˆ‘๐‘ฆ

๐‘ƒ(๐‘Œ = ๐‘ฆ|๐‘‹ = ๐‘ฅ) (1.6)

= โˆ‘๐‘ฅ

๐‘ฅ๐‘ƒ(๐‘‹ = ๐‘ฅ) (1.7)

= ๐”ผ[๐‘‹] โ–ก (1.8)

The last steps of the proof are reasonably simple. Equation 1.5 is a trivial rearrangement of terms. Thesecond line follows since ๐‘ฆ does not appear in ๐‘ฅ๐‘ƒ(๐‘‹ = ๐‘ฅ) and so we can move the summation over ๐‘Œ to withinthe summation over ๐‘‹. The final line follows from the fact that the sum of the conditional probabilities๐‘ƒ(๐‘Œ = ๐‘ฆ|๐‘‹ = ๐‘ฅ) = 1 (by simple probability theory).

1.2 Law of Total Variance

The Law of Total Variance (LTV) states the following:

๐‘ฃ๐‘Ž๐‘Ÿ[๐‘Œ ] = ๐”ผ[๐‘ฃ๐‘Ž๐‘Ÿ[๐‘Œ |๐‘‹]] + ๐‘ฃ๐‘Ž๐‘Ÿ(๐”ผ[๐‘Œ |๐‘‹]) (1.9)

1.2.1 Proof of LTV

LTV can be proved almost immediately using LIE and the definition of variance:

๐‘ฃ๐‘Ž๐‘Ÿ(๐‘Œ ) = ๐”ผ[๐‘Œ 2] โˆ’ ๐”ผ[๐‘Œ ]2 (1.10)= ๐”ผ[๐”ผ[๐‘Œ 2|๐‘‹]] โˆ’ ๐”ผ[๐”ผ[๐‘Œ |๐‘‹]]2 (1.11)= ๐”ผ[๐‘ฃ๐‘Ž๐‘Ÿ[๐‘Œ |๐‘‹] + ๐”ผ[๐‘Œ ]2]] โˆ’ ๐”ผ[๐”ผ[๐‘Œ |๐‘‹]]2 (1.12)= ๐”ผ[๐‘ฃ๐‘Ž๐‘Ÿ[๐‘Œ |๐‘‹]] + (๐”ผ[๐”ผ[๐‘Œ ]2] โˆ’ ๐”ผ[๐”ผ[๐‘Œ |๐‘‹]]2) (1.13)= ๐”ผ[๐‘ฃ๐‘Ž๐‘Ÿ[๐‘Œ |๐‘‹]] + ๐‘ฃ๐‘Ž๐‘Ÿ(๐”ผ[๐‘Œ |๐‘‹]) โ–ก (1.14)

The second line applies LIE to both ๐‘Œ 2 and ๐‘Œ separately. Then we apply the definition of variance to๐”ผ[๐‘Œ 2|๐‘‹], and subsequently decompose this term (since ๐”ผ[๐ด + ๐ต] = ๐”ผ[๐ด] + ๐”ผ[๐ต].

1.3 Linearity of Expectations

The Linearity of Expectations (LOE) simply states that:

๐”ผ[๐‘Ž๐‘‹ + ๐‘๐‘Œ ] = ๐‘Ž๐”ผ[๐‘‹] + ๐‘๐”ผ[๐‘Œ ], (1.15)

where ๐‘Ž and ๐‘ are real numbers, and ๐‘‹ and ๐‘Œ are random variables.1Bayesโ€™ Rule states ๐‘ƒ(๐ด|๐ต) = ๐‘ƒ(๐ต|๐ด)๐‘ƒ(๐ด)

๐‘ƒ(๐ต) . Therefore:

๐‘ƒ(๐‘‹ = ๐‘ฅ|๐‘Œ = ๐‘ฆ) ร— ๐‘ƒ(๐‘Œ = ๐‘ฆ) = ๐‘ƒ(๐‘Œ = ๐‘ฆ|๐‘‹ = ๐‘ฅ)๐‘ƒ(๐‘‹ = ๐‘ฅ)๐‘ƒ(๐‘Œ = ๐‘ฆ)๐‘ƒ(๐‘Œ = ๐‘ฆ)

= ๐‘ƒ(๐‘Œ = ๐‘ฆ|๐‘‹ = ๐‘ฅ)๐‘ƒ(๐‘‹ = ๐‘ฅ).

6

1.3.1 Proof of LOE

๐”ผ[๐‘Ž๐‘‹ + ๐‘๐‘Œ ] = โˆ‘๐‘ฅ

โˆ‘๐‘ฆ

(๐‘Ž๐‘ฅ + ๐‘๐‘ฆ)๐‘ƒ(๐‘‹ = ๐‘ฅ, ๐‘Œ = ๐‘ฆ) (1.16)

= โˆ‘๐‘ฅ

โˆ‘๐‘ฆ

๐‘Ž๐‘ฅ๐‘ƒ(๐‘‹ = ๐‘ฅ, ๐‘Œ = ๐‘ฆ) + โˆ‘๐‘ฅ

โˆ‘๐‘ฆ

๐‘๐‘ฆ๐‘ƒ(๐‘‹ = ๐‘ฅ, ๐‘Œ = ๐‘ฆ) (1.17)

= ๐‘Ž โˆ‘๐‘ฅ

๐‘ฅ โˆ‘๐‘ฆ

๐‘ƒ(๐‘‹ = ๐‘ฅ, ๐‘Œ = ๐‘ฆ) + ๐‘ โˆ‘๐‘ฆ

๐‘ฆ โˆ‘๐‘ฅ

๐‘ƒ(๐‘‹ = ๐‘ฅ, ๐‘Œ = ๐‘ฆ) (1.18)

(1.19)

The first line simply expands the expectation into summation form i.e. the expectation is the sum of ๐‘Ž๐‘‹ +๐‘๐‘Œfor each (discrete) value of ๐‘‹ and ๐‘Œ weighted by their joint probability. We then expand out these terms.Since summations are commutative, we can rearrange the order of the summations for each of the two partsin the final line, and shift the real numbers and random variables outside the various operators.

Now note that โˆ‘๐‘– ๐‘ƒ(๐ผ = ๐‘–, ๐ฝ = ๐‘—) โ‰ก ๐‘ƒ(๐ฝ = ๐‘—) by probability theory. Therefore:

... = ๐‘Ž โˆ‘๐‘ฅ

๐‘ฅ๐‘ƒ(๐‘‹ = ๐‘ฅ) + ๐‘ โˆ‘๐‘ฆ

๐‘ฆ๐‘ƒ(๐‘Œ = ๐‘ฆ) (1.20)

The two terms within summations are just the weighted averages of ๐‘‹ and ๐‘Œ respectively, i.e. the expectationsof ๐‘‹ and ๐‘Œ , so:

... = ๐‘Ž๐”ผ[๐‘‹] + ๐‘๐”ผ[๐‘Œ ] โ–ก (1.21)(1.22)

1.4 Variance of a Sum

There are two versions of the Variance of a Sum (VOS) law:

โ€ข ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹ + ๐‘Œ ) = ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹) + ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘Œ ), when X and Y are independentโ€ข ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹ + ๐‘Œ ) = ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹) + ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘Œ ) + 2๐ถ๐‘œ๐‘ฃ(๐‘‹, ๐‘Œ ), when X and Y are correlated

1.4.1 Proof of VoS: ๐‘‹, ๐‘Œ are independent

๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹ + ๐‘Œ ) = ๐”ผ[(๐‘‹ + ๐‘Œ )2] โˆ’ (๐”ผ[๐‘‹ + ๐‘Œ ])2 (1.23)= ๐”ผ[(๐‘‹2 + 2๐‘‹๐‘Œ + ๐‘Œ 2)] โˆ’ (๐”ผ[๐‘‹] + ๐”ผ[๐‘Œ ])2 (1.24)

The first line of the proof is simply the definition of variance. In the second line, we expand the equation inthe first term and using LOE decompose the second term. We can expand this equation further, continuingto use LOE and noting that :

... = ๐”ผ[๐‘‹2] + ๐”ผ[2๐‘‹๐‘Œ ] + ๐”ผ[๐‘Œ 2] โˆ’ (๐”ผ[๐‘‹]2 + 2๐”ผ[๐‘‹]๐”ผ[๐‘Œ ] + ๐”ผ[๐‘Œ ]2) (1.25)= ๐”ผ[๐‘‹2] + ๐”ผ[๐‘Œ 2] โˆ’ ๐”ผ[๐‘‹]2 โˆ’ ๐”ผ[๐‘Œ ]2 (1.26)= ๐‘‰ ๐‘Ž๐‘Ÿ[๐‘‹] + ๐‘‰ ๐‘Ž๐‘Ÿ[๐‘Œ ] โ–ก (1.27)

since ๐”ผ[๐ด]๐”ผ[๐ต] = ๐”ผ[๐ด๐ต] when ๐ด and ๐ต are independent.

7

1.4.2 Proof of VoS: ๐‘‹, ๐‘Œ are dependent

As before, we can expand out the variance of a sum into its expected values:

๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹ + ๐‘Œ ) = ๐”ผ[๐‘‹2] + ๐”ผ[2๐‘‹๐‘Œ ] + ๐”ผ[๐‘Œ 2] โˆ’ (๐”ผ[๐‘‹]2 + 2๐”ผ[๐‘‹]๐”ผ[๐‘Œ ] + ๐”ผ[๐‘Œ ]2). (1.28)

Since ๐‘‹ and ๐‘Œ are assumed to be dependent, the non-squared terms do not necessarily cancel each otherout anymore. Instead, we can rearrange as follows:

๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹ + ๐‘Œ ) = ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹) + ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘Œ ) + ๐”ผ[2๐‘‹๐‘Œ ] โˆ’ 2๐”ผ[๐‘‹]๐”ผ[๐‘Œ ] (1.29)= ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹) + ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘Œ ) + 2(๐”ผ[๐‘‹๐‘Œ ] โˆ’ ๐”ผ[๐‘‹]๐”ผ[๐‘Œ ]), (1.30)

and note that ๐”ผ[๐ด๐ต] โˆ’ ๐”ผ[๐ด]๐”ผ[๐ต] = ๐ถ๐‘œ๐‘ฃ(๐ด, ๐ต):

... = ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹) + ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘Œ ) + 2๐ถ๐‘œ๐‘ฃ(๐ด, ๐ต) โ–ก (1.31)

Two further points are worth noting. First, the independent version of the proof is just a special case of thedependent version of the proof. When ๐‘‹ and ๐‘Œ are independent, the covariance between the two randomvariables is zero, and therefore the the variance of the sum is just equal to the sum of the variances.

Second, nothing in the above proofs rely on there being just two random variables. In fact,๐‘ฃ๐‘Ž๐‘Ÿ(โˆ‘๐‘›

๐‘– ๐‘‹๐‘–) = โˆ‘๐‘›๐‘– ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹๐‘–) when all variables are independent from each other, and equal to

โˆ‘๐‘›๐‘– ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹๐‘–) + 2 โˆ‘๐‘›

1โ‰ค๐‘–<๐‘—โ‰ค๐‘› ๐ถ๐‘œ๐‘ฃ(๐‘‹๐‘–, ๐‘‹๐‘—). This can be proved by induction using the above proofs, butintuitively: we can replace, for example, ๐‘Œ with ๐‘Œ = (๐‘Œ1 + ๐‘Œ2) and iteratively apply the above proof firstto ๐‘‹ + ๐‘Œ and then subsequently expand ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘Œ ) as ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘Œ1 + ๐‘Œ2).

8

Chapter 2

Inequalities involving expectations

This chapter discusses and proves two inequalities that Wooldridge highlights - Jensenโ€™s and Chebyshevโ€™s.Both involve expectations (and the theorems derived in the previous chapter).

2.1 Jensenโ€™s Inequality

Jensenโ€™s Inequality is a statement about the relative size of the expectation of a function compared with thefunction over that expectation (with respect to some random variable). To understand the mechanics, I firstdefine convex functions and then walkthrough the logic behind the inequality itself.

2.1.1 Convex functions

A function ๐‘“ is convex (in two dimensions) if all points on a straight line connecting any two points on thegraph of ๐‘“ is above or on that graph. More formally, ๐‘“ is convex if for โˆ€๐‘ฅ1, ๐‘ฅ2 โˆˆ โ„, and โˆ€๐‘ก โˆˆ [0, 1]:

๐‘“(๐‘ก๐‘ฅ1, (1 โˆ’ ๐‘ก)๐‘ฅ2) โ‰ค ๐‘ก๐‘“(๐‘ฅ1) + (1 โˆ’ ๐‘ก)๐‘“(๐‘ฅ2).

Here, ๐‘ก is a weighting parameter that allows us to range over the full interval between points ๐‘ฅ1 and ๐‘ฅ2.

Note also that concave functions are defined as the opposite of convex functions i.e. a function โ„Ž is concaveif and only if โˆ’โ„Ž is convex.

2.1.2 The Inequality

Jensenโ€™s Inequality (JI) states that, for a convex function ๐‘” and random variable ๐‘‹:

๐”ผ[๐‘”(๐‘‹)] โ‰ฅ ๐‘”(๐ธ[๐‘‹])

This inequality is exceptionally general โ€“ it holds for any convex function. Moreover, given that concavefunctions are defined as negative convex functions, it is easy to see that JI also implies that if โ„Ž is a concavefunction, โ„Ž(๐”ผ[๐‘‹]) โ‰ฅ ๐”ผ[โ„Ž(๐‘‹)].1

Interestingly, note the similarity between this inequality and the definition of variance in terms of expecta-tions:

1Since โˆ’โ„Ž(๐‘ฅ) is convex, ๐”ผ[โˆ’โ„Ž(๐‘‹)] โ‰ฅ โˆ’โ„Ž(๐”ผ[๐‘‹]) by JI. Hence, โ„Ž(๐”ผ[๐‘‹]) โˆ’ ๐”ผ[โ„Ž(๐‘‹)] โ‰ฅ 0 and so โ„Ž(๐”ผ[๐‘‹]) โ‰ฅ ๐”ผ[โ„Ž(๐‘‹)].

9

๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹) = ๐”ผ[๐‘‹2] โˆ’ (๐”ผ[๐‘‹])2,

and since ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹) is always positive:

๐”ผ[๐‘‹2] โˆ’ (๐”ผ[๐‘‹])2 โ‰ฅ 0๐”ผ[๐‘‹2] โ‰ฅ (๐”ผ[๐‘‹])2).

We can therefore define ๐‘”(๐‘‹) = ๐‘‹2 (a convex function), and see that variance itself is an instance of Jensenโ€™sInequality.

2.1.3 Proof

Assume ๐‘”(๐‘‹) is a convex function, and ๐ฟ(๐‘‹) = ๐‘Ž + ๐‘๐‘‹ is a linear function tangential to ๐‘”(๐‘‹) at point ๐”ผ[๐‘‹].Hence, since ๐‘” is convex and ๐ฟ is tangential to ๐‘”, we know by definition that:

๐‘”(๐‘ฅ) โ‰ฅ ๐ฟ(๐‘ฅ), โˆ€๐‘ฅ โˆˆ ๐‘‹. (2.1)

So, therefore:

๐”ผ[๐‘”(๐‘‹)] โ‰ฅ ๐”ผ[๐ฟ(๐‘‹)] (2.2)โ‰ฅ ๐”ผ[๐ด + ๐‘๐‘‹] (2.3)โ‰ฅ ๐‘Ž + ๐‘๐”ผ[๐‘‹] (2.4)โ‰ฅ ๐ฟ(๐”ผ[๐‘‹]) (2.5)โ‰ฅ ๐‘”(๐”ผ[๐‘‹]) โ–ก (2.6)

The majority of this proof is straightforward. If one function is always greater than or equal to anotherfunction, then the unconditional expectation of the first function must be at least as big as that of thesecond. The interior lines of the proof follow from the definition of ๐ฟ, the linearity of expectations, andanother application of the definition of ๐ฟ respectively.

The final line then follows because, by the definition of the straight line ๐ฟ, we know that ๐ฟ[๐”ผ[๐‘‹]] is tangentialwith ๐‘” at ๐”ผ[๐”ผ[๐‘‹]] = ๐”ผ[๐‘‹] = ๐‘”(๐”ผ[๐‘‹]).2

2.1.4 Application

In Chapter 2 of Agnostic Statistics (2019), the authors note (almost in passing) that the standard error of themean is not unbiased, i.e. that ๐”ผ[๏ฟฝ๏ฟฝ] โ‰  ๐œŽ, even though it is consistent i.e. that ๏ฟฝ๏ฟฝ

๐‘โˆ’โ†’ ๐œŽ. The bias of the meanโ€™s

standard error is somewhat interesting (if not surprising), given how frequently we deploy the standarderror (and, in a more general sense, highlights how important asymptotics are not just for the estimation ofparameters, but also those parametersโ€™ uncertainty). The proof of why ๏ฟฝ๏ฟฝ is biased also, conveniently for thischapter, uses Jensenโ€™s Inequality.

The standard error of the mean is denoted as

๐œŽ = โˆš๐‘‰ (๏ฟฝ๏ฟฝ)2Based on lecture notes by Larry Wasserman.

10

,

where ๐‘‰ (๏ฟฝ๏ฟฝ) = ๐‘‰ (๐‘‹)๐‘› .

Our best estimate of this quantity ๏ฟฝ๏ฟฝ = โˆš ๐‘‰ (๏ฟฝ๏ฟฝ) is simply the square root of the sample variance estimator.We know that the variance estimator itself is unbiased and a consistent estimator of the sampling variance(see Agnostic Statistics Theorem 2.1.9).

The bias in the estimate of the sample meanโ€™s standard error originates from the square root functionNote that the square root is a strictly concave function. This means we can make two claims about theestimator. First, as with any concave function we can use the inverse version of Jensenโ€™s Inequality, i.e. that๐”ผ[๐‘”(๐‘‹)] โ‰ค ๐‘”(๐”ผ[๐‘‹]). Second, since the square root is a strictly concave function, we can use the weaker โ€œlessthan or equal toโ€ operator with the strict โ€œless thanโ€ inequality. Hence, the proof is reasonably easy:

๐”ผ [๏ฟฝ๏ฟฝ] = ๐”ผ [โˆš ๐‘‰ (๏ฟฝ๏ฟฝ)] < โˆš๐”ผ[ ๐‘‰ (๏ฟฝ๏ฟฝ)] (by Jensenโ€™s Inequality)

< โˆš๐‘‰ (๏ฟฝ๏ฟฝ) (since the sampling variance is unbiased)< ๐œŽ. โ–ก

The first line follows by first defining the conditional expectation of the sample meanโ€™s standard error, andthen applying the noted variant of Jensenโ€™s inequality. Then, since we know that the standard error estimatorof the variance is unbiased, we can replace the expectation with the true sampling variance, and note finallythat the square root of the true sampling variance is, by definition, the true standard error of the samplemean. Hence, we see that our estimator of the sampling meanโ€™s standard error is strictly less than the truevalue and therefore is biased.

2.2 Chebyshevโ€™s Inequality

The other inequality Wooldridge highlights is the Chebyshev Inequality. This inequality states that for aset of probability distributions, no more than a specific proportion of that distribution is more than a setdistance from the mean.

More formally, if ๐œ‡ = ๐”ผ[๐‘‹] and ๐œŽ2 = ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹), then:

๐‘ƒ(|๐‘| โ‰ฅ ๐‘˜) โ‰ค 1๐‘˜2 , (2.7)

where ๐‘ = (๐‘‹ โˆ’ ๐œ‡)/๐œŽ (Wasserman, 2004, p.64) and ๐‘˜ indicates the number of standard deviations.

2.2.1 Proof

First, let us define the variance (๐œŽ2) as:

๐œŽ2 = ๐”ผ[(๐‘‹ โˆ’ ๐œ‡)2]. (2.8)

By expectation theory, we know that we can express any unconditional expectation as the weighted sum ofits conditional components i.e. ๐”ผ[๐ด] = โˆ‘๐‘– ๐”ผ[๐ด|๐‘๐‘–]๐‘ƒ (๐‘๐‘–), where โˆ‘๐‘– ๐‘ƒ(๐‘๐‘–) = 1. Hence:

... = ๐”ผ[(๐‘‹ โˆ’ ๐œ‡)2|๐‘˜๐œŽ โ‰ค |๐‘‹ โˆ’ ๐œ‡|]๐‘ƒ (๐‘˜๐œŽ โ‰ค |๐‘‹ โˆ’ ๐œ‡|) + ๐”ผ[(๐‘‹ โˆ’ ๐œ‡)2|๐‘˜๐œŽ > |๐‘‹ โˆ’ ๐œ‡|]๐‘ƒ (๐‘˜๐œŽ > |๐‘‹ โˆ’ ๐œ‡|) (2.9)

11

Since any probability is bounded between 0 and 1, and variance must be greater than or equal to zero, thesecond term must be non-negative. If we remove this term, therefore, the right-hand side is necessarily eitherthe same size or smaller. Therefore we can alter the equality to the following inequality:

๐œŽ2 โ‰ฅ ๐”ผ[(๐‘‹ โˆ’ ๐œ‡)2|๐‘˜๐œŽ โ‰ค ๐‘‹ โˆ’ ๐œ‡]๐‘ƒ(๐‘˜๐œŽ โ‰ค |๐‘‹ โˆ’ ๐œ‡|) (2.10)

This then simplifies:

๐œŽ2 โ‰ฅ (๐‘˜๐œŽ)2๐‘ƒ(๐‘˜๐œŽ โ‰ค |๐‘‹ โˆ’ ๐œ‡|)โ‰ฅ ๐‘˜2๐œŽ2๐‘ƒ(๐‘˜๐œŽ โ‰ค |๐‘‹ โˆ’ ๐œ‡|)

1๐‘˜2 โ‰ฅ ๐‘ƒ(|๐‘| โ‰ฅ ๐‘˜) โ–ก

Conditional on ๐‘˜๐œŽ โ‰ค |๐‘‹ โˆ’ ๐œ‡|, (๐‘˜๐œŽ)2 โ‰ค (๐‘‹ โˆ’ ๐œ‡)2, and therefore ๐”ผ[(๐‘˜๐œŽ)2] โ‰ค ๐”ผ[(๐‘‹ โˆ’ ๐œ‡)2]. Then, the last stepsimply rearranges the terms within the probability function.3

2.2.2 Applications

Wasserman (2004) notes that this inequality is useful when we want to know the probable bounds of anunknown quantity, and where direct computation would be difficult. It can also be used to prove the WeakLaw of Large Numbers (point 5 in Wooldridgeโ€™s list!), which I demonstrate here.

It is worth noting, however, that the inequality is really powerful โ€“ it guarantees that a certain amount of aprobability distribution is within a certain region โ€“ irrespective of the shape of that distribution (so long aswe can estimate the mean and variance)!

For some well-defined distributions, this theorem is weaker than what we know by dint of their form. Forexample, we know that for a normal distribution, approximately 95 percent of values lie within 2 standarddeviations of the mean. Chebyshevโ€™s Inequality only guarantees that 75 percent of values lie within twostandard deviations of the mean (since ๐‘ƒ(|๐‘| โ‰ฅ ๐‘˜) โ‰ค 1

22 ). Crucially, however, even if we didnโ€™t knowwhether a given distribution was normal, so long as it is a well-behaved probability distribution (i.e. theunrestricted integral sums to 1) we can guarantee that 75 percent will lie within two standard deviations ofthe mean.

3๐‘˜๐œŽ โ‰ค |๐‘‹ โˆ’ ๐œ‡| โ‰ก ๐‘˜ โ‰ค |๐‘‹ โˆ’ ๐œ‡|/๐œŽ โ‰ก |๐‘| โ‰ฅ ๐‘˜, since ๐œŽ is strictly non-negative.

12

Chapter 3

Linear Projection

This chapter provides a basic introduction to projection using both linear algebra and geometric demon-strations. I discuss the derivation of the orthogonal projection, its general properties as an โ€œoperatorโ€, andexplore its relationship with ordinary least squares (OLS) regression. I defer a discussion of linear projectionsโ€™applications until the penultimate chapter on the Frisch-Waugh Theorem, where projection matrices featureheavily in the proof.

3.1 Projection

Formally, a projection ๐‘ƒ is a linear function on a vector space, such that when it is applied to itself you getthe same result i.e. ๐‘ƒ 2 = ๐‘ƒ .1

This definition is slightly intractable, but the intuition is reasonably simple. Consider a vector ๐‘ฃ in two-dimensions. ๐‘ฃ is a finite straight line pointing in a given direction. Suppose there is some point ๐‘ฅ not onthis straight line but in the same two-dimensional space. The projection of ๐‘ฅ, i.e. ๐‘ƒ๐‘ฅ, is a function thatreturns the point โ€œclosestโ€ to ๐‘ฅ along the vector line ๐‘ฃ. Call this point ๐‘ฅ. In most contexts, closest refersto Euclidean distance, i.e. โˆšโˆ‘๐‘–(๐‘ฅ๐‘– โˆ’ ๐‘ฅ๐‘–)2, where ๐‘– ranges over the dimensions of the vector space (in thiscase two dimensions).2 Figure 3.1 depicts this logic visually. The green dashed line shows the orthogonalprojection, and red dashed lines indicate other potential (non-orthgonal) projections that are further awayin Euclidean space from ๐‘ฅ than ๐‘ฅ.

In short, projection is a way of simplifying some n-dimensional space โ€“ compressing information onto a(hyper-) plane. This is useful especially in social science settings where the complexity of the phenomena westudy mean exact prediction is impossible. Instead, we often want to construct models that compress busyand variable data into simpler, parsimonious explanations. Projection is the statistical method of achievingthis โ€“ it takes the full space and simplifies it with respect to a certain number of dimensions.

While the above is (reasonably) intuitive it is worth spelling out the maths behind projection, not leastbecause it helps demonstrate the connection between linear projection and linear regression.

To begin, we can take some point in n-dimensional space, ๐‘ฅ, and the vector line ๐‘ฃ along which we want toproject ๐‘ฅ. The goal is the following:

1Since ๐‘ƒ is (in the finite case) a square matrix, a projection matrix is an idempotent matrix โ€“ I discuss this property inmore detail later on in this note.

2Euclidean distance has convenient properties, including that the closest distance between a point and a vector line isorthogonal to the vector line itself.

13

Figure 3.1: Orthogonal projection of a point onto a vector line.

๐‘Ž๐‘Ÿ๐‘” ๐‘š๐‘–๐‘›๐‘โˆšโˆ‘๐‘–

( ๐‘ฅ๐‘– โˆ’ ๐‘ฅ)2 = ๐‘Ž๐‘Ÿ๐‘” ๐‘š๐‘–๐‘›๐‘ โˆ‘๐‘–

( ๐‘ฅ๐‘– โˆ’ ๐‘ฅ)2

= ๐‘Ž๐‘Ÿ๐‘” ๐‘š๐‘–๐‘›๐‘ โˆ‘๐‘–

(๐‘๐‘ฃ๐‘– โˆ’ ๐‘ฅ)2

This rearrangement follows since the square root is a monotonic transformation, such that the optimalchoice of ๐‘ is the same across both ๐‘Ž๐‘Ÿ๐‘” ๐‘š๐‘–๐‘›โ€™s. Since any potential ๐‘ฅ along the line drawn by ๐‘ฃ is some scalarmultiplication of that line (๐‘๐‘ฃ), we can express the function to be minimised with respect to ๐‘, and thendifferentiate:

๐‘‘๐‘‘๐‘ โˆ‘

๐‘–(๐‘๐‘ฃ๐‘– โˆ’ ๐‘ฅ)2 = โˆ‘

๐‘–2๐‘ฃ๐‘–(๐‘๐‘ฃ๐‘– โˆ’ ๐‘ฅ)

= 2(โˆ‘๐‘–

๐‘๐‘ฃ2๐‘– โˆ’ โˆ‘

๐‘–๐‘ฃ๐‘–๐‘ฅ)

= 2(๐‘๐‘ฃโ€ฒ๐‘ฃ โˆ’ ๐‘ฃโ€ฒ๐‘ฅ) โ‡’ 0

Here we differentiate the equation and rearrange terms. The final step simply converts the summationnotation into matrix multiplication. Solving:

2(๐‘๐‘ฃโ€ฒ๐‘ฃ โˆ’ ๐‘ฃโ€ฒ๐‘ฅ) = 0๐‘๐‘ฃโ€ฒ๐‘ฃ โˆ’ ๐‘ฃโ€ฒ๐‘ฅ = 0

๐‘๐‘ฃโ€ฒ๐‘ฃ = ๐‘ฃโ€ฒ๐‘ฅ๐‘ = (๐‘ฃโ€ฒ๐‘ฃ)โˆ’1๐‘ฃโ€ฒ๐‘ฅ.

From here, note that ๐‘ฅ, the projection of ๐‘ฅ onto the vector line, is ๐‘ฃ๐‘ = ๐‘ฃ(๐‘ฃโ€ฒ๐‘ฃ)โˆ’1๐‘ฃโ€ฒ๐‘ฅ. Hence, we can definethe projection matrix of ๐‘ฅ onto ๐‘ฃ as:

14

๐‘ƒ๐‘ฃ = ๐‘ฃ(๐‘ฃโ€ฒ๐‘ฃ)โˆ’1๐‘ฃโ€ฒ.

In plain English, for any point in some space, the orthogonal projection of that point onto some subspace,is the point on a vector line that minimises the Euclidian distance between itself and the original point. Avisual demonstration of this point is shown and discussed in Figure ?? below.

Note also that this projection matrix has a clear analogue to the linear algebraic expression of linear regression.The vector of coefficients in a linear regression ๐›ฝ can be expressed as (๐‘‹โ€ฒ๐‘‹)โˆ’1๐‘‹โ€ฒ๐‘ฆ. And we know thatmultiplying this vector by the matrix of predictors ๐‘‹ results in the vector of predicted values ๐‘ฆ. Now wehave ๐‘ฆ = ๐‘‹(๐‘‹โ€ฒ๐‘‹)โˆ’1๐‘‹โ€ฒ๐‘Œ โ‰ก ๐‘ƒ๐‘‹๐‘ฆ. Clearly, therefore, linear projection and linear regression are closely relatedโ€“ and I return to this point below.

3.2 Properties of the projection matrix

The projection matrix ๐‘ƒ has several interesting properties. First, and most simply, the projection matrix issquare. Since ๐‘ฃ is of some arbitrary dimensions ๐‘› ร— ๐‘˜, its transpose is of dimensions ๐‘˜ ร— ๐‘›. By linear algebra,the shape of the full matrix is therefore ๐‘› ร— ๐‘›, i.e. square.

Projection matrices are also symmetric, i.e. ๐‘ƒ = ๐‘ƒ โ€ฒ. To prove symmetry, note that transposing both sidesof the projection matrix definition:

๐‘ƒ โ€ฒ = (๐‘ฃ(๐‘ฃโ€ฒ๐‘ฃ)โˆ’1๐‘ฃโ€ฒ)โ€ฒ (3.1)= ๐‘ฃ(๐‘ฃโ€ฒ๐‘ฃ)โˆ’1๐‘ฃโ€ฒ (3.2)= ๐‘ƒ, (3.3)

since (๐ด๐ต)โ€ฒ = ๐ตโ€ฒ๐ดโ€ฒ and (๐ดโˆ’1)โ€ฒ = (๐ดโ€ฒ)โˆ’1.Projection matrices are idempotent:

๐‘ƒ๐‘ƒ = ๐‘ฃ(๐‘ฃโ€ฒ๐‘ฃ)โˆ’1๐‘ฃโ€ฒ๐‘ฃ(๐‘ฃโ€ฒ๐‘ฃ)โˆ’1๐‘ฃโ€ฒ (3.4)= ๐‘ฃ(๐‘ฃโ€ฒ๐‘ฃ)โˆ’1๐‘ฃโ€ฒ (3.5)= ๐‘ƒ, (3.6)

since (๐ด)โˆ’1๐ด = ๐ผ and ๐ต๐ผ = ๐ต.

Since, projection matrices are idempotent, this entails that projecting a point already on the vector line willjust return that same point. This is fairly intuitive: the closest point on the vector line to a point alreadyon the vector line is just that same point.

Finally, we can see that the projection of any point is orthogonal to the respected projected point on vectorline. Two vectors are orthogonal if ๐‘Ž๐‘ = 0. Starting with the expression in Equation 3.1 (i.e. minimising theEuclidean distance with respect to ๐‘):

2(๐‘๐‘ฃโ€ฒ๐‘ฃ โˆ’ ๐‘ฃโ€ฒ๐‘ฅ) = 0๐‘ฃโ€ฒ๐‘๐‘ฃ โˆ’ ๐‘ฃโ€ฒ๐‘ฅ = 0๐‘ฃโ€ฒ(๐‘๐‘ฃ โˆ’ ๐‘ฅ) = 0๐‘ฃโ€ฒ( ๐‘ฅ โˆ’ ๐‘ฅ) = 0,

hence the line connecting the original point ๐‘ฅ is orthogonal to the vector line.

15

The projection matrix is very useful in other fundamental theorems in econometrics, like Frisch Waugh LovellTheorem discussed in Chapter 8.

3.3 Linear regression

Given a vector of interest, how do we capture as much information from it as possible using set of predictors?Projection matrices essentially simplify the dimensionality of some space, by casting points onto a lower-dimensional plane. Think of it like capturing the shadow of an object on the ground. There is far moredetail in the actual object itself but we roughly know its position, shape, and scale from the shadow thatโ€™scast on the 2d plane of the ground.

Note also this is actually quite similar to how we think about regression. Loosely, when we regress ๐‘Œ on๐‘‹, we are trying to characterise how the components (or predictors) within ๐‘‹ characterise or relate to๐‘Œ . Of course, regression is also imperfect (after all, the optimisation goal is to minimise the errors of ourpredictions). So, regression also seems to capture some lower dimensional approximation of an outcome.

In fact, linear projection and linear regression are very closely related. In this final section, I outline howthese two statistical concepts relate to each other, both algebraically and geometrically,

Suppose we have a vector of outcomes ๐‘ฆ, and some n-dimensional matrix ๐‘‹ of predictors. We write thelinear regression model as:

๐‘ฆ = ๐‘‹๐›ฝ + ๐œ–, (3.7)

where ๐›ฝ is a vector of coefficients, and ๐œ– is the difference between the prediction and the observed value in๐‘ฆ. The goal of linear regression is to minimise the sum of the squared residuals:

๐‘Ž๐‘Ÿ๐‘” ๐‘š๐‘–๐‘› ๐œ–2 = ๐‘Ž๐‘Ÿ๐‘” ๐‘š๐‘–๐‘›(๐‘ฆ โˆ’ ๐‘‹๐›ฝ)โ€ฒ(๐‘ฆ โˆ’ ๐‘‹๐›ฝ)

Differentiating with respect to ๏ฟฝand solving:

๐‘‘๐‘‘๐›ฝ (๐‘ฆ โˆ’ ๐‘‹๐›ฝ)โ€ฒ(๐‘ฆ โˆ’ ๐‘‹๐›ฝ) = โˆ’2๐‘‹(๐‘ฆ โˆ’ ๐‘‹๐›ฝ)

= 2๐‘‹โ€ฒ๐‘‹๐›ฝ โˆ’ 2๐‘‹โ€ฒ๐‘ฆ โ‡’ 0๐‘‹โ€ฒ๐‘‹ ๐›ฝ = ๐‘‹โ€ฒ๐‘ฆ

(๐‘‹โ€ฒ๐‘‹)โˆ’1๐‘‹โ€ฒ๐‘‹ ๐›ฝ = (๐‘‹โ€ฒ๐‘‹)โˆ’1๐‘‹โ€ฒ๐‘ฆ๐›ฝ = (๐‘‹โ€ฒ๐‘‹)โˆ’1๐‘‹โ€ฒ๐‘ฆ.

To get our prediction of ๐‘ฆ, i.e. ๐‘ฆ, we simply multiply our beta coefficient by the matrix X:

๐‘ฆ = ๐‘‹(๐‘‹โ€ฒ๐‘‹)โˆ’1๐‘‹โ€ฒ๐‘ฆ.

Note how the OLS derivation of ๐‘ฆ is very similar to ๐‘ƒ = ๐‘‹(๐‘‹โ€ฒ๐‘‹)โˆ’1๐‘‹, the orthogonal prediction matrix.The two differ only in that that ๐‘ฆ includes the original outcome vector ๐‘ฆ in its expression. But, note that๐‘ƒ๐‘ฆ = ๐‘‹(๐‘‹โ€ฒ๐‘‹)โˆ’1๐‘‹โ€ฒ๐‘ฆ = ๐‘ฆ! Hence the predicted values from a linear regression simply are an orthogonalprojection of ๐‘ฆ onto the space defined by ๐‘‹.

16

3.3.1 Geometric interpretation

It should be clear now that linear projection and linear regression are connected โ€“ but it is probably lessclear why this holds. To understand whatโ€™s going on, letโ€™s depict the problem geometrically.3

To appreciate whatโ€™s going on, we first need to invert how we typically think about observations, variablesand datapoints. Consider a bivariate regression problem with three observations. Our data will include threevariables: a constant (c, a vector of 1โ€™s), a predictor (X), and an outcome variable (Y). As a matrix, thismight look something like the following:

Y X c2 3 13 1 12 1 1

Typically we would represent the relationship geometrically by treating the variables as dimensions, suchthat every datapoint is an observation (and we would typically ignore the constant column since all its valuesare the same).

An alternative way to represent this data is to treat each observation (i.e. row) as a dimension and thenrepresent each variable as a vector. What does that actually mean? Well consider the column ๐‘Œ = (2, 3, 2).This vector essentially gives us the coordinates for a point in three-dimensional space: ๐‘‘1 = 2, ๐‘‘2 = 3, ๐‘‘3 = 2.Drawing a straight line from the origin (0,0,0) to this point gives us a vector line for the outcome. Whilevisually this might seem strange, from the perspective of our data itโ€™s not unusual to refer to each variable asa column vector, and thatโ€™s precisely because it is a quantity with a magnitude and direction (as determinedby its position in ๐‘› dimensions).

Our predictors are the vectors ๐‘‹ and ๐‘ (note the vector ๐‘ is now slightly more interesting because it is adiagonal line through the three-dimensional space). We can extend either vector line by multiplying it bya constant e.g. 2๐‘‹ = (6, 2, 2). With a single vector, we can only move forwards or backwards along a line.But if we combine two vectors together, we can actually reach lots of points in space. Imagine placing thevector ๐‘‹ at the end of the ๐‘. The total path now reaches a new point that is not intersected by either ๐‘‹ or๐‘. In fact, if we multiply ๐‘‹ and ๐‘ by some scalars (numbers), we can snake our way across a whole array ofdifferent points in three-dimensional space. Figure 3.2 demonstrates some of these combinations in the twodimensional space created by ๐‘‹ and ๐‘.

The comprehensive set of all possible points covered by linear combinations of ๐‘‹ and ๐‘ is called the span orcolumn space. In fact, with the specific set up of this example (3 observations, two predictors), the span ofour predictors is a flat plane. Imagine taking a flat bit of paper and aligning one corner with the origin, andthen angling surface so that the end points of the vectors ๐‘‹ and ๐‘ are both resting on the cardโ€™s surface.Keeping that alignment, any point on the surface of the card is reachable by some combination of ๐‘‹ and๐‘. Algebraically we can refer to this surface as ๐‘๐‘œ๐‘™(๐‘‹, ๐‘), and it generalises beyond two predictors (althoughthis is much harder to visualise).

Crucially, in our reduced example of three-dimensional space, there are points in space not reachable bycombining these two vectors (any point above or below the piece of card). We know, for instance that thevector line ๐‘ฆ lies off this plane. The goal therefore is to find a vector that is on the column space of (๐‘‹, ๐‘)that gets closest to our off-plane vector ๐‘ฆ as possible. Figure 3.3 depicts this set up visually โ€“ each dimensionis an observation, each column in the matrix is represented a vector, and the column space of (๐‘‹, ๐‘) is theshaded grey plane. The vector ๐‘ฆ lies off this plane.

From our discussion in Section 3.1, we know that the โ€œbestโ€ vector is the orthogonal projection from thecolumn space to the vector ๐‘ฆ. This is the shortest possible distance between the flat plane and the observed

3This final section borrows heavily from Ben Lambertโ€™s explanation of projection and a demonstration using R by AndyEggers.

17

Figure 3.2: Potential combinations of two vectors.

๐‘‹ + ๐‘๐›ผ๐‘‹ + ๐›พ๐‘

๐›ผ๐‘‹

๐‘‹๐‘

outcome, and is just ๐‘ฆ. Moreover, since ๐‘ฆ lies on the column space, we know we only need to combine somescaled amount of ๐‘‹ and ๐‘ to define the vector ๐‘ฆ, i.e., ๐›ฝ1๐‘‹ + ๐›ฝ0๐‘. Figure 3.4 shows this geometrically. Andin fact, the scalar coefficients ๐›ฝ1, ๐›ฝ0 in this case are just the regression coefficients derived from OLS. Why?Because we know that the orthogonal projection of ๐‘ฆ onto the column space minimises the error between ourprediction ๐‘ฆ and the observed outcome vector ๐‘ฆ. This is the same as the minimisation problem that OLSsolves, as outlined at the beginning of this section!

Consider any other vector on the column space, and the distance between itself and and ๐‘ฆ. Each non-orthogonal vector would be longer, and hence have a larger predictive error, than ๐‘ฆ. For example, Figure3.5 plots two alternative vectors on ๐‘๐‘œ๐‘™(๐‘‹, ๐‘) alongside ๐‘ฆ. Clearly, ๐œ– < ๐œ–โ€ฒ < ๐œ–โ€ณ, and this is true of any othervector on the column space too.

Hence, linear projection and linear regression can be seen (both algebraically and geometrically) to be solvingthe same problem โ€“ minimising the (squared) distance between an observed vector ๐‘ฆ and prediction vector

๐‘ฆ. This demonstration generalises to many dimensions (observations), though of course it becomes muchharder to intuit the geometry of highly-dimensional data. And similarly, with more observations we couldalso extend the number of predictors too such that ๐‘‹ is not a single column vector but a matrix of predictorvariables (i.e. multivariate regression). Again, visualising what the column space of this matrix would looklike geometrically becomes harder.

To summarise, this section has demonstrated two features. First, that linear regression simply is an orthogo-nal projection. We saw this algebraically by noting that the derivation of OLS coefficients, and subsequentlythe predicted values from a linear regression, is identical to ๐‘ƒ๐‘ฆ (where ๐‘ƒ is a projection matrix). Second,and geometrically, we intuited why this is the case: namely that projecting onto a lower-dimensional columnspace involves finding the linear combination of predictors that minimises the Euclidean distance to ๐‘ฆ, i.e. ๐‘ฆ.The scalars we use to do so are simply the regression coefficients we would generate using OLS regression.

18

Figure 3.3: Schematic of orthogonal projection as a geometric problem

19

Figure 3.4: Relation of orthogonal projection to linear regression.

20

Figure 3.5: Alternative vectors on the column space are further away from y.

21

22

Chapter 4

Weak Law of Large Numbers and CentralLimit Theorem

This chapter focuses on two fundamental theorems that form the basis of our inferences from samples topopulations. The Weak Law of Large Numbers (WLLN) provides the basis for generalisation from a samplemean to the population mean. The Central Limit Theorem (CLT) provides the basis for quantifying ouruncertainty over this parameter. In both cases, I discuss the theorem itself and provide an annotated proof.Finally, I discuss how the two theorems complement each other.

4.1 Weak Law of Large Numbers

4.1.1 Theorem in Plain English

Suppose we have a random variable ๐‘‹. From ๐‘‹, we can generate a sequence of random variables๐‘‹1, ๐‘‹2, ..., ๐‘‹๐‘› that are independent and identically distributed (i.i.d.) draws of ๐‘‹. Assuming ๐‘› is finite, wecan perform calculations on this sequence of random numbers. For example, we can calculate the mean ofthe sequence ๏ฟฝ๏ฟฝ๐‘› = 1

๐‘› โˆ‘๐‘›๐‘–=1 ๐‘‹๐‘–. This value is the sample mean โ€“ from a much wider population, we have

drawn a finite sequence of observations, and calculated the average across them. How do we know that thissample parameter is meaningful with respect to the population, and therefore that we can make inferencesfrom it?

WLLN states that the mean of a sequence of i.i.d. random variables converges in probability to the expectedvalue of the random variable as the length of that sequence tends to infinity. By โ€˜converging in probabilityโ€™,we mean that the probability that the difference between the mean of the sample and the expected value ofthe random variable tends to zero.

In short, WLLN guarantees that with a large enough sample size the sample mean should approximatelymatch the true population parameter. Clearly, this is powerful theorem for any statistical exercise: givenwe are (always) constrained by a finite sample, WLLN ensures that we can infer from the data somethingmeaningful about the population. For example, from a large enough sample of voters we can estimate theaverage support for a candidate or party.

More formally, we can state WLLN as follows:

๏ฟฝ๏ฟฝ๐‘›๐‘โˆ’โ†’ ๐”ผ[๐‘‹], (4.1)

where๐‘โˆ’โ†’ denotes โ€˜converging in probabilityโ€™.

23

4.1.2 Proof

To prove WLLN, we use Chebyshevโ€™s Inequality (CI). More specifically we first have to prove Chebyshevโ€™sInequality of the Sample Mean (CISM), and then use CISM to prove WLLN. The following steps are basedon the proof provided in Aronow and Miller (2019).

Proof of Chebyshevโ€™s Inequality of the Sample Mean. Chebyshevโ€™s Inequality for the Sample Mean (CISM)states that:

๐‘ƒ(|๏ฟฝ๏ฟฝ๐‘› โˆ’ ๐”ผ[๐‘‹]| โ‰ฅ ๐‘˜) โ‰ค ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹)๐‘˜2๐‘› , (4.2)

where ๏ฟฝ๏ฟฝ๐‘› is the sample mean of a sequensce of ๐‘› independent draws from a random variable ๐‘‹. RecallCI states that ๐‘ƒ (|(๐‘‹ โˆ’ ๐œ‡)/๐œŽ| โ‰ฅ ๐‘˜) โ‰ค 1

๐‘˜2 . To help prove CISM, we can rearrange the left hand side of theinequality by multiplying both sides of the inequality within the probability function by ๐œŽ, such that:

๐‘ƒ(|(๐‘‹ โˆ’ ๐œ‡)| โ‰ฅ ๐‘˜๐œŽ) โ‰ค 1๐‘˜2 . (4.3)

Then, finally, let us define ๐‘˜โ€ฒ = ๐‘˜๐œŽ . Hence:

๐‘ƒ(|(๏ฟฝ๏ฟฝ โˆ’ ๐”ผ[๐‘‹])| โ‰ฅ ๐‘˜) = ๐‘ƒ(|(๏ฟฝ๏ฟฝ โˆ’ ๐”ผ[๐‘‹])| โ‰ฅ ๐‘˜โ€ฒ๐œŽ) (4.4)

โ‰ค 1๐‘˜โ€ฒ2 (4.5)

โ‰ค ๐œŽ2

๐‘˜2 (4.6)

โ‰ค ๐‘ฃ๐‘Ž๐‘Ÿ(๏ฟฝ๏ฟฝ)๐‘˜2 (4.7)

โ‰ค ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹)๐‘˜2๐‘› โ–ก (4.8)

This proof is reasonably straightfoward. Using our definition of ๐‘˜โ€ฒ allows us to us rearrange the probabilitywithin CISM to match the form of the Chebyshev Inequality stated above, which then allows us to infer thebounds of the probability. We then replace ๐‘˜โ€ฒ with ๐‘˜

๐œŽ , expand and simplify. The move made between thepenultimate and final line relies on the fact that variance of the sample mean is equal to the variance in therandom variable divided by the sample size (n).1

Applying CISM to WLLN proof. Given that all probabilities are non-negative and CISM, we can now write:

0 โ‰ค ๐‘ƒ(|๏ฟฝ๏ฟฝ๐‘› โˆ’ ๐”ผ[๐‘‹]| โ‰ฅ ๐‘˜) โ‰ค ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹)๐‘˜2๐‘› . (4.9)

Note that for the first and third term of this multiple inequality, as ๐‘› approaches infinity both terms approach0. In the case of the constant zero, this is trivial. In the final term, note that ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹) denotes the inherentvariance of the random variable, and therefore is constant as ๐‘› increases. Therefore, as the denominatorincreases, the term converges to zero.

Since the middle term is sandwiched in between these two limits, by definition we know that this term mustalso converge to zero.2 Therefore:

1See Aronow and Miller 2019, p.98.2To see why this is the case, given the limits of the first and third terms, Equation ?? is of the form 0 โ‰ค ๐ด โ‰ค 0 as ๐‘› โ†’ โˆž.

The only value of ๐ด that satisfies this inequality is 0.

24

lim๐‘›โ†’โˆž๐‘ƒ(|๏ฟฝ๏ฟฝ๐‘› โˆ’ ๐”ผ[๐‘‹]| โ‰ฅ ๐‘˜) = 0 โ–ก (4.10)

Hence, WLLN is proved: for any value of ๐‘˜, the probability that the difference between the sample meanand the expected value is greater or equal to ๐‘˜ converges on zero. Since ๐‘˜โ€™s value is arbitrary, it can be setto something infinitesimally small, such that the sample mean and expected value converge in value.

4.2 Central Limit Theorem

WLLN applies to the value of the statistic itself (the mean value). Given a single, n-length sequence drawnfrom a random variable, we know that the mean of this sequence will converge on the expected value of therandom variable. But often, we want to think about what happens when we (hypothetically) calculate themean across multiple sequences i.e. expectations under repeat sampling.

The Central Limit Theorem (CLT) is closely related to the WLLN. Like WLLN, it relies on asymptoticproperties of random variables as the sample size increases. CLT, however, lets us make informative claimsabout the distribution of the sample mean around the true population parameter.

4.2.1 Theorem in Plain English

CLT states that as the sample size increases, the distribution of sample means converges to a normaldistribution. That is, so long as the underlying distribution has a finite variance (bye bye Cauchy!), thenirrespective of the underlying distribution of ๐‘‹ the distribution of sample means will be a normal distribution!

In fact, there are multiple types of CLT that apply in a variety of different contexts โ€“ cases includingBernoulli random variables (de Moivre - Laplace), where random variables are independent but do not needto be identically distributed (Lyapunov), and where random variables are vectors in โ„๐‘˜ space (multivariateCLT).

In what follows, I will discuss a weaker, more basic case of CLT where we assume random variables arescalar, independent, and identically distributed (i.e. drawn from the same unknown distribution function).In particular, this section proves that the standardized difference between the sample mean and populationmean for i.i.d. random variables converges in distribution to the standard normal distribution ๐‘(0, 1). Thisvariant of the CLT is called the Lindeberg-Levy CLT, and can be stated as:

๏ฟฝ๏ฟฝ๐‘› โˆ’ ๐œ‡๐œŽโˆš๐‘›

๐‘‘โˆ’โ†’ ๐‘(0, 1), (4.11)

where๐‘‘โˆ’โ†’ denotes โ€˜converging in distributionโ€™.

In general, the CLT is useful because proving that the sample mean is normally distributed allows us toquantify the uncertainty around our parameter estimate. Normal distributions have convenient propertiesthat allow us to calculate the area under any portion of the curve, given just the same mean and standarddeviation. We already know by WLLN that the sample mean will (with a sufficiently large sample) approx-imate the population mean, so we know that the distribution is also centred around the true populationmean. By CLT, the dispersion around that point is therefore normal, and to quantify the probable boundsof the point estimate (under the assumption of repeat sampling) requires only an estimate of the variance.

4.2.2 Primer: Characteristic Functions

CLT is harder (and lengthier) to prove than other proofs weโ€™ve encountered so far โ€“ it relies on showing thatthe sample mean converges in distribution to a known mathematical form that uniquely and fully describes

25

the normal distribution. To do so, we use the idea of a characteristic functions, which simply denotes afunction that completely defines a probability function.

For example, and we will use this later on, we know that the characteristic function of the normal distributionis ๐‘’๐‘–๐‘ก๐œ‡โˆ’ ๐œŽ2๐‘ก2

2 . A standard normal distriibution (where ๐œ‡ = 0, ๐œŽ2 = 1) therefore simplifies to ๐‘’โˆ’ ๐‘ก22 .

More generally, we know that for any scalar random variable ๐‘‹, the characteristic function of ๐‘‹ is definedas:

๐œ™๐‘‹(๐‘ก) = ๐”ผ[๐‘’๐‘–๐‘ก๐‘‹], (4.12)

where ๐‘ก โˆˆ โ„ and ๐‘– is the imaginary unit. Proving why this is the case is beyond the purview of this section,so unfortunately I will just take it at face value.

We can expand ๐‘’๐‘–๐‘ก๐‘‹ as an infinite sum, using a Taylor Series, since ๐‘’๐‘ฅ = 1 + ๐‘ฅ + ๐‘ฅ22! + ๐‘ฅ3

3! + .... Hence:

๐œ™๐‘‹(๐‘ก) = ๐”ผ[1 + ๐‘–๐‘ก๐‘‹ + (๐‘–๐‘ก๐‘‹)2

2! + (๐‘–๐‘ก๐‘‹)3

3! + ...], (4.13)

Note that ๐‘–2 = โˆ’1, and since the latter terms tend to zero faster than the second order term we cansummarise them as ๐‘œ(๐‘ก2) (they are no larger than of order ๐‘ก2). Therefore we can rewrite this expression as:

๐œ™๐‘‹(๐‘ก) = ๐”ผ[1 + ๐‘–๐‘ก๐‘‹ โˆ’ ๐‘ก2

2 ๐‘‹2 + ๐‘œ(๐‘ก2)]. (4.14)

In the case of continuous random variables, the expected value can be expressed as the integral across allspace of the expression multiplied by the probability density, such that:

๐œ™๐‘‹(๐‘ก) = โˆซโˆž

โˆ’โˆž[1 + ๐‘–๐‘ก๐‘‹ โˆ’ ๐‘ก2

2 ๐‘‹2 + ๐‘œ(๐‘ก2)]๐‘“๐‘‹๐‘‘๐‘‹, (4.15)

and this can be simplified to:

๐œ™๐‘‹(๐‘ก) = 1 + ๐‘–๐‘ก๐”ผ[๐‘‹] โˆ’ ๐‘ก2

2 ๐”ผ[๐‘‹2] + ๐‘œ(๐‘ก2)], (4.16)

since 1ร—๐‘“๐‘‹ = ๐‘“๐‘‹, the total area under a probability density necessarily sums to 1; โˆซ ๐‘‹๐‘“๐‘‹๐‘‘๐‘‹ is the definitionof the expected value of X, and so by similar logic โˆซ ๐‘‹2๐‘“๐‘‹๐‘‘๐‘‹ = ๐”ผ[๐‘‹2].In Ben Lambertโ€™s video introducing the CLT proof, he notes that if we assume X has mean 0 and variance1, the characteristic function of that distribution has some nice properties, namely that it simplifies to:

๐œ™๐‘‹(๐‘ก) = 1 โˆ’ ๐‘ก2

2 + ๐‘œ(๐‘ก2)], (4.17)

since ๐”ผ[๐‘‹] = 0 cancelling the second term, and ๐”ผ[๐‘‹2] โ‰ก ๐”ผ[(๐‘‹ โˆ’ 0)2] = ๐”ผ[(๐‘‹ โˆ’ ๐œ‡)2] = ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹) = 1.

One final piece of characteristic function math that will help finalise the CLT proof is to note that if wedefine some random variable ๐‘„๐‘› = โˆ‘๐‘›

๐‘–=1 ๐‘…๐‘–, where all ๐‘…๐‘– are i.i.d., then the characteristic function of ๐‘„๐‘›can be expressed as ๐œ™๐‘„๐‘›

(๐‘ก) = [๐œ™๐‘…(๐‘ก)]๐‘›. Again, I will not prove this property here.

26

4.2.3 Proof of CLT

This proof is based in part on Ben Lambertโ€™s excellent YouTube series, as well as Lemons et al. (2002).

Given the above discussion of a characteristic function, let us assume a sequence of independent and identi-cally distributed (i.i.d.) random variables ๐‘‹1, ๐‘‹2, ..., ๐‘‹๐‘›, each with mean ๐œ‡ and finite3 variance ๐œŽ2. The sumof these random variables has mean ๐‘›๐œ‡ (since each random variable has the same mean) and the varianceequivalent to ๐‘›๐œŽ2 (because the random variables are i.i.d. we know that ๐‘ฃ๐‘Ž๐‘Ÿ(๐ด, ๐ต) = ๐‘ฃ๐‘Ž๐‘Ÿ(๐ด)๐‘ฃ๐‘Ž๐‘Ÿ(๐ต)).Now letโ€™s consider the standardized difference between the actual sum of the random variables and the mean.Standardization simply means dividing a parameter estimate by its standard deviation. In particular, wecan consider the following standardized random variable:

๐‘๐‘› = โˆ‘๐‘›๐‘–=1(๐‘‹๐‘– โˆ’ ๐œ‡)

๐œŽโˆš๐‘› , (4.18)

where ๐‘๐‘›, in words, is the standardised difference between the sum of i.i.d. random variables and theexpected value of the sequence. Note that we use the known variance in the denominator.

We can simplify this further:

๐‘๐‘› =๐‘›

โˆ‘๐‘–=1

1โˆš๐‘›๐‘Œ๐‘–, (4.19)

where we define a new random variable ๐‘Œ๐‘– = ๐‘‹๐‘–โˆ’๐œ‡๐œŽ .

๐‘Œ๐‘– has some convenient properties. First, since each random variable ๐‘‹๐‘– in our sample has mean ๐œ‡, we knowthat ๐”ผ[๐‘Œ๐‘–] = 0 since ๐”ผ[๐‘‹๐‘–] = ๐œ‡ and therefore ๐œ‡ โˆ’ ๐œ‡ = 0. Note that this holds irrespective of the distributionand value of ๐”ผ[๐‘‹๐‘–].The variance of ๐‘Œ๐‘– is also recoverable. First note three basic features of variance: if ๐‘Ž is a constant, and ๐‘‹and ๐‘Œ are random variables, ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘Ž) = 0; ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘Ž๐‘‹) = ๐‘Ž2๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹); and from the variance of a sum ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹ โˆ’๐‘Œ ) =๐‘ฃ๐‘Ž๐‘Ÿ(๐ด) โˆ’ ๐‘ฃ๐‘Ž๐‘Ÿ(๐ต). Therefore:

๐‘ฃ๐‘Ž๐‘Ÿ( 1๐œŽ (๐‘‹๐‘– โˆ’ ๐œ‡) = 1

๐œŽ2 ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹๐‘– โˆ’ ๐œ‡) (4.20)

๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹๐‘– โˆ’ ๐œ‡) = ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹๐‘–) โˆ’ ๐‘ฃ๐‘Ž๐‘Ÿ(๐œ‡) (4.21)= ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹๐‘–). (4.22)

Hence:

๐‘ฃ๐‘Ž๐‘Ÿ(๐‘Œ๐‘–) = ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹๐‘–)๐œŽ2 = 1, (4.23)

since ๐‘ฃ๐‘Ž๐‘Ÿ(๐‘‹๐‘–) = ๐œŽ2.

At this stage, the proof is tantalisingly close. While we have not yet fully characterised the distribution of๐‘๐‘› or even ๐‘Œ๐‘–, the fact that ๐‘Œ๐‘– has unit variance and a mean of zero means suggests we are on the righttrack to proving that this does asymptotically tend in distribution to the standard normal. In fact, recallfrom the primer on characteristic functions, that Lambert notes for any random variable with unit varianceand mean of 0, ๐œ™๐‘‹(๐‘ก) = 1 โˆ’ ๐‘ก2

2 + ๐‘œ(๐‘ก2). Hence, we can now say that:

3Hence why degenerate distributions like the Cauchy are not covered by CLT.

27

๐œ™๐‘Œ๐‘–(๐‘ก) = 1 โˆ’ ๐‘ก2

2 + ๐‘œ(๐‘ก2). (4.24)

Now let us return to ๐‘๐‘› = โˆ‘๐‘›๐‘–=1

1โˆš๐‘› ๐‘Œ๐‘– and using the final bit of characteristic function math in the primer,we can express the characteristic function of ๐‘๐‘› as:

๐œ™๐‘๐‘›(๐‘ก) = [๐œ™๐‘Œ ( ๐‘กโˆš๐‘›)]๐‘›, (4.25)

since ๐‘Œ๐‘– is divided by the square root of the sample size. Given our previously stated expression of thecharacteristic function of ๐‘Œ๐‘–:

๐œ™๐‘๐‘›(๐‘ก) = [1 โˆ’ ๐‘ก2

2๐‘› + ๐‘œ(๐‘ก2)]]๐‘›. (4.26)

We can now consider what happens as ๐‘› โ†’ โˆž. By definition, we know that ๐‘œ(๐‘ก2) converges to zero fasterthan the other terms, so we can safely ignore it. As a result, and noting that ๐‘’๐‘ฅ = lim(1 + ๐‘ฅ

๐‘› )๐‘›:

lim๐‘›โ†’โˆž

๐œ™๐‘๐‘›(๐‘ก) = ๐‘’โˆ’ ๐‘ก2

2 . (4.27)

This expression shows that as ๐‘› tends to infinity, the characteristic function of ๐‘๐‘› is the standard normaldistribution (as noted in the characteristic function primer). Therefore:

lim๐‘›โ†’โˆž

๐‘๐‘› = ๐‘(0, 1) (4.28)

lim๐‘›โ†’โˆž

๏ฟฝ๏ฟฝ๐‘› โˆ’ ๐œ‡๐œŽโˆš๐‘› = ๐‘(0, 1). โ–ก (4.29)

The last line here simply follows from the definition of ๐‘๐‘›.

4.2.4 Generalising CLT

From here, it is possible to intuit the more general CLT that the distribution of sampling means is normallydistributed around the true mean ๐œ‡ with variance ๐œŽ2

๐‘› . Note this is only a pseudo-proof, because as Lambertnotes, multiplying through by ๐‘› is complicated by the limit operator with respect to ๐‘›. However, it is usefulto see how these two CLT are closely related.

First, we can rearrange the limit expression using known features of the normal distribution:

lim๐‘›โ†’โˆž

๐‘๐‘›๐‘‘โˆ’โ†’ ๐‘(0, 1) (4.30)

lim๐‘›โ†’โˆž

โˆ‘๐‘›๐‘–=1(๐‘‹๐‘–) โˆ’ ๐‘›๐œ‡โˆš

๐‘›๐œŽ2๐‘‘โˆ’โ†’ ๐‘(0, 1) (4.31)

lim๐‘›โ†’โˆž

๐‘›โˆ‘๐‘–=1

(๐‘‹๐‘–) โˆ’ ๐‘›๐œ‡ ๐‘‘โˆ’โ†’ ๐‘(0, ๐‘›๐œŽ2) (4.32)

lim๐‘›โ†’โˆž

๐‘›โˆ‘๐‘–=1

(๐‘‹๐‘–)๐‘‘โˆ’โ†’ ๐‘(๐‘›๐œ‡, ๐‘›๐œŽ2), (4.33)

28

since ๐‘Ž๐‘(๐‘, ๐‘) = ๐‘(๐‘Ž๐‘, ๐‘Ž2๐‘), and ๐‘(๐‘‘, ๐‘’) + ๐‘“ = ๐‘(๐‘‘ + ๐‘“, ๐‘’).At this penultimate step, we know that the sum of i.i.d. random variables is a normal distribution. To seethat the sample mean is also normally distributed, we simply divide through by ๐‘›:

lim๐‘›โ†’โˆž

๏ฟฝ๏ฟฝ = 1๐‘›

๐‘›โˆ‘๐‘–=1

(๐‘‹๐‘–)๐‘‘โˆ’โ†’ ๐‘(๐œ‡, ๐œŽ2

๐‘› ). (4.34)

4.2.5 Limitation of CLT (and the importance of WLLN)

Before ending, it is worth noting that CLT is a claim with respect to repeat sampling from a population(holding ๐‘› constant each time). It is not, therefore, a claim that holds with respect to any particular sampledraw. We may actually estimate a mean value that, while probable, lies away from the true populationparameter (by definition, since the sample means are normally distributed, there is some dispersion). Con-structing uncertainty estimates using CLT on this estimate alone does not guarantee that we are in factcapturing either the true variance or the true parameter.

That being said, with sufficiently high-N, we know that WLLN guarantees (assuming i.i.d. observations)that our estimate converges on the population mean. WLLNโ€™s asymptotics rely only on sufficiently largesample sizes for a single sample. Hence, both WLLN and CLT are crucial for valid inference from sampleddata. WLLN leads us to expect that our parameter estimate will in fact be centred approximately near thetrue parameter. Here, CLT can only say that across multiple samples from the population the distributionof sample means is centred on the true parameter. With WLLN in action, however, CLT allows us to makeinferential claims about the uncertainty of this converged parameter.

29

30

Chapter 5

Slutskyโ€™s Theorem

5.1 Theorem in plain English

Slutskyโ€™s Theorem allows us to make claims about the convergence of random variables. It states that arandom variable converging to some distribution ๐‘‹, when multiplied by a variable converging in probabilityon some constant ๐‘Ž, converges in distribution to ๐‘Ž ร— ๐‘‹. Similarly, if you add the two random variables, theyconverge in distribution to ๐‘Ž plus ๐‘‹. More formally, the theorem states that if ๐‘‹๐‘›

๐‘‘โˆ’โ†’ ๐‘‹ and ๐ด๐‘›๐‘โˆ’โ†’, where

๐‘Ž is a constant, then:

1. ๐‘‹๐‘› + ๐ด๐‘›๐‘‘โˆ’โ†’ ๐‘‹ + ๐‘Ž

2. ๐ด๐‘›๐‘‹๐‘›๐‘‘โˆ’โ†’ ๐‘Ž๐‘‹

Note that if ๐ด๐‘› or ๐ต๐‘› do not converge in probability to constants, and instead converge towards somedistribution, then Slutskyโ€™s Theorem does not hold. More trivially, if all variables converge in probabilityto constants, then ๐ด๐‘›๐‘‹๐‘› + ๐ต๐‘›

๐‘โˆ’โ†’ ๐‘Ž๐‘‹ + ๐ต.

5.2 Coded demonstration

This theorem is reasonably intuitive. Suppose that the random variable ๐‘‹๐‘› converges in distribution to astandard normal distribution ๐‘(0, 1). For part 1) of the Theorem, note that when we multiply a standardnormal by a constant we โ€œstretchโ€ the distribution (assuming |๐‘Ž| > 1, else we โ€œcompressโ€ it). Recall fromthe discussion of the standard normal in Chapter 5 that ๐‘Ž๐‘(0, 1) = ๐‘(0, ๐‘Ž2). As ๐‘› approaches infinity,therefore, by definition ๐ด๐‘›

๐‘โˆ’โ†’ ๐‘Ž, and so the degree to which the standard normal is stretched will converge

to that constant too. To demonstrate this feature visually, consider the following simulation:library(ggplot2)set.seed(89)N <- c(10,20,500)

results <- data.frame(n = as.factor(levels(N)),X_n = as.numeric(),A_n = as.numeric(),ax = as.numeric())

for (n in N) {X_n <- rnorm(n)

31

A_n <- 2 + exp(-n)aX <- A_n * X_n

results <- rbind(results, cbind(n, X_n, A_n, aX))

}

ggplot(results, aes(x = aX)) +facet_wrap(n~., ncol = 3, labeller = "label_both") +geom_density() +labs(y = "p(aX)")

n: 10 n: 20 n: 500

โˆ’6 โˆ’3 0 3 6 โˆ’6 โˆ’3 0 3 6 โˆ’6 โˆ’3 0 3 6

0.00

0.05

0.10

0.15

0.20

aX

p(aX

)

Here we have defined two random variables: X_n is a standard normal, and A_n converges in value to 2.Varying the value of n, I take ๐‘› draws from a standard normal distribution and calculate the value theconverging constant ๐ด๐‘›. I then generate the product of these two variables. The figure plots the resultingdistribution aX. We can see that as n increases, the distribution becomes increasingly normal, remainscentred around 0 and the variance approaches 4 (since 95% of the curve is approximately bounded between0 ยฑ 2 ร— โˆš๐‘ฃ๐‘Ž๐‘Ÿ(๐‘Ž๐‘‹) = 0 ยฑ 2 ร— 2 = 0 ยฑ 4).

Similarly, if we add the constant $a$ to a standard distribution, the effect is to shift the distribution in its entirety (since a constant has no variance, it does not "stretch" the distribution). As $A_n$ converges in probability, therefore, the shift converges on the constant $a$. Again, we can demonstrate this result in R:

library(ggplot2)
set.seed(89)
N <- c(10, 20, 500)

results <- data.frame(n = as.factor(levels(N)),
                      X_n = as.numeric(),
                      A_n = as.numeric(),
                      a_plus_X = as.numeric())

for (n in N) {
  X_n <- rnorm(n)
  A_n <- 2 + exp(-n)
  a_plus_X <- A_n + X_n

  results <- rbind(results, cbind(n, X_n, A_n, a_plus_X))
}

ggplot(results, aes(x = a_plus_X)) +
  facet_wrap(n ~ ., ncol = 3, labeller = "label_both") +
  geom_density() +
  geom_vline(xintercept = 2, linetype = "dashed") +
  labs(y = "p(a+X)", x = "a+X")

(Figure: estimated densities of a+X for n = 10, 20, and 500, with a dashed vertical line at a = 2; x-axis a+X, y-axis p(a+X).)

As n becomes larger, the resulting distribution becomes approximately normal, with a variance of 1 and a mean value centred around $0 + a = 2$.

Slutsky's Theorem is so useful precisely because it allows us to combine multiple random variables with known asymptotics and retain this knowledge, i.e. we know what the resultant distribution will converge to assuming $n \to \infty$.


5.3 Proof of Slutsky's Theorem

Despite the intuitive appeal of Slutsky's Theorem, the proof is less straightforward. It relies on the continuous mapping theorem (CMT), which in turn rests on several other theorems such as the Portmanteau Theorem. To avoid the rabbit hole of proving all necessary antecedent theorems, I simply introduce and state the CMT here, and then show how it can be used to prove Slutsky's Theorem.

5.3.1 CMT

The continuous mapping theorem states that if there is some random variable such that $X_n \xrightarrow{d} X$, then $g(X_n) \xrightarrow{d} g(X)$, so long as $g$ is a continuous function. In approximate terms (which are adequate for our purpose), a continuous function is one in which, for a given domain, the function can be represented as a single unbroken curve (or hyperplane in many dimensions). For example, consider the graph of $f(x) = x^{-1}$. For the domain $D_+ : \mathbb{R} > 0$, this function is continuous. But for the domain $D_\infty : \mathbb{R}$, the function is discontinuous because it is undefined when $x = 0$.

In short, CMT states that a continuous function preserves the asymptotic limits of a random variable. More broadly (and again, I do not prove this here), CMT entails that $g(P_n, Q_n, ..., Z_n) \xrightarrow{d} g(P, Q, ..., Z)$ if all $P_n, Q_n, ...$ etc. converge in distribution to $P, Q, ...$ respectively.
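As a rough illustration of the CMT (a sketch, not part of the formal argument): let $X_n$ be the standardised mean of $n$ Uniform(0, 1) draws, which converges in distribution to $N(0, 1)$ by CLT, and apply the continuous function $g(x) = e^x$. Its distribution should approach that of $e^Z$ for $Z \sim N(0, 1)$, i.e. a standard lognormal.

set.seed(89)
n <- 1000

# X_n: standardised mean of n Uniform(0,1) draws (approx. N(0,1) by CLT)
X_n <- replicate(5000, sqrt(n) * (mean(runif(n)) - 0.5) / sqrt(1/12))

# Compare quantiles of g(X_n) = exp(X_n) with those of a standard lognormal
round(rbind(g_X_n = quantile(exp(X_n), c(0.1, 0.5, 0.9)),
            lognormal = qlnorm(c(0.1, 0.5, 0.9))), 2)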

5.3.2 Proof using CMT

How does this help prove Slutsky's Theorem? We know by the definitions in Slutsky's Theorem that $X_n \xrightarrow{d} X$ and, by a similar logic, we know that $A_n \xrightarrow{d} a$ (since $A_n \xrightarrow{p} a$, and converging in probability entails converging in distribution). So we can note that the joint vector $(X_n, A_n) \xrightarrow{d} (X, a)$. By CMT, therefore, $g(X_n, A_n) \xrightarrow{d} g(X, a)$. Hence, any continuous function $g$ will preserve the limits of the respective distributions.

Given this result, it is sufficient to note that both addition and multiplication are continuous functions. Again, I do not show this here, but the continuity of addition and multiplication (both scalar and vector) can be proved mathematically (for example, see one such proof here). For an intuitive explanation, think about the diagonal line $y = X$ – any multiplication of that line is still a single, uninterrupted line ($y = aX$), assuming $a$ is a constant. Similarly, adding a constant to the function of a line also yields an uninterrupted line (e.g. $y = X + a$).

Hence, CMT guarantees both parts 1 and 2 of the Theorem. $\square$

5.4 Applications

Slutsky's Theorem is a workhorse theorem that allows researchers to make claims about the limiting distributions of multiple random variables. Rather than appearing directly in applied settings, it typically underpins the modelling strategies used in applied research. For example, Aronow and Samii (2016) consider the problem of weighting multiple regression when the data sample is unrepresentative of the population of interest. In their proofs, they apply Slutsky's Theorem at two different points: first to prove that their weighted regression estimates converge in probability on the weighted expectation of individual treatment effects, and subsequently that the same coefficient converges in probability to the true average treatment effect in the population.


5.4.1 Proving the consistency of sample variance, and the normality of the t-statistic

In the remainder of this chapter, I consider applications of both the Continuous Mapping Theorem and Slutsky's Theorem in fundamental statistical proofs. I first show how CMT can be used to prove the consistency of the sample variance estimator, and subsequently how, in combination with Slutsky's Theorem, this helps prove the normality of a t-statistic. These examples are developed from David Hunter's notes on asymptotic theory that accompany his Penn State course in large-sample theory.

5.4.1.1 Consistency of the sample variance estimator

First, let us define the sample variance ($s_n^2$) of a sequence of i.i.d. random variables drawn from a distribution $X$ with $\mathbb{E}[X] = \mu$ and $var(X) = \sigma^2$ as:

$$s_n^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X}_n)^2.$$

We can show that the sample variance formula above is a consistent estimator of the true variance $\sigma^2$. That is, as the sequence of i.i.d. random variables $X_1, X_2, ...$ increases in length, the sample variance estimator of that sequence converges in probability to the true variance value $\sigma^2$.

We can prove this by redefining $s_n^2$ as follows:

$$s_n^2 = \frac{n}{n-1}\left[\frac{1}{n}\sum_{i=1}^{n}(X_i - \mu)^2 - (\bar{X}_n - \mu)^2\right],$$

which clearly simplifies to the conventional definition of $s_n^2$ as first introduced.

From here we can note, using WLLN, that $(\bar{X}_n - \mu) \xrightarrow{p} 0$, and hence that $(\bar{X}_n - \mu)^2 \xrightarrow{p} 0$. Note that this term converges in probability to a constant. Moreover, $\frac{1}{n}\sum_{i=1}^{n}(X_i - \mu)^2 \xrightarrow{p} \mathbb{E}[(X_i - \mu)^2] = var(X) = \sigma^2$, by definition.

Now let us define an arbitrary continuous function $g(A_n, B_n)$. We know by CMT that if $A_n \xrightarrow{p} A$ and $B_n \xrightarrow{p} B$, then $g(A_n, B_n) \xrightarrow{p} g(A, B)$. And hence, using the implications above, we know for any continuous function $g$ that $g\left(\frac{1}{n}\sum_{i=1}^{n}(X_i - \mu)^2, (\bar{X}_n - \mu)^2\right) \xrightarrow{p} g(\sigma^2, 0)$.

Since subtraction is a continuous function, we therefore know that:

$$\left[\frac{1}{n}\sum_{i=1}^{n}(X_i - \mu)^2 - (\bar{X}_n - \mu)^2\right] \xrightarrow{p} [\sigma^2 - 0] = \sigma^2.$$

Separately, we can intuitively see that $\frac{n}{n-1} \xrightarrow{p} 1$. Hence, by applying CMT again to this converging term multiplied by the converging limit of the above (since multiplication is a continuous function), we can see that:

$$s_n^2 \xrightarrow{p} 1 \times \sigma^2 = \sigma^2. \; \square$$
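A short simulation sketch of this consistency result (the true variance $\sigma^2 = 4$ is an arbitrary illustrative choice): the sample variance of a single growing i.i.d. sample should approach 4.

set.seed(89)
sigma2 <- 4

# var() uses the n-1 denominator, i.e. the s_n^2 defined above
sapply(c(10, 100, 1000, 100000),
       function(n) var(rnorm(n, mean = 0, sd = sqrt(sigma2))))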


5.4.1.2 Normality of the t-statistic

Let's define a t-statistic as:

$$t_n = \frac{\sqrt{n}(\bar{X}_n - \mu)}{\sqrt{\hat{\sigma}^2}}.$$

By the Central Limit Theorem (CLT, Chapter 4), we know for a random variable $X$ with mean $\mu$ and variance $\sigma^2$ that

$$\sqrt{n}(\bar{X}_n - \mu) \xrightarrow{d} N(0, \sigma^2).$$

We also know from the proof above that if $\hat{\sigma}^2 = s_n^2$ then $\hat{\sigma}^2 \xrightarrow{p} \sigma^2$ – a constant. Given this, we can also note that $\frac{1}{\hat{\sigma}^2} \xrightarrow{p} \frac{1}{\sigma^2}$.

Hence, by Slutsky's Theorem:

$$\sqrt{n}(\bar{X}_n - \mu) \times \frac{1}{\sqrt{\hat{\sigma}^2}} \xrightarrow{d} N(0, \sigma^2) \times \frac{1}{\sqrt{\sigma^2}} \quad (5.1)$$
$$= \sigma N(0, 1) \times \frac{1}{\sigma} \quad (5.2)$$
$$= N(0, 1). \; \square \quad (5.3)$$

One noteworthy aspect of this proof is that, since Slutsky's Theorem rests on the CMT, its application requires that the function of the variables $g$ (in this case multiplication) is continuous and defined for the specified domain. Note that $\frac{1}{0}$ is undefined, and therefore the above proof only holds when we assume $\sigma^2 > 0$. This is why, in many statistics textbooks and discussions of model asymptotics, authors note that they must assume a positive, non-zero variance.
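As a quick simulation sketch of this result (the Exponential(1) population is an illustrative choice, deliberately non-normal): t-statistics computed from large samples should be approximately standard normal.

set.seed(89)
n <- 1000
mu <- 1   # true mean of an Exponential(1) population

# t_n = sqrt(n) * (xbar - mu) / s, across 5000 independent samples
t_stats <- replicate(5000, {
  x <- rexp(n, rate = 1)
  sqrt(n) * (mean(x) - mu) / sd(x)
})

# Compare with standard normal quantiles
round(rbind(simulated = quantile(t_stats, c(0.025, 0.5, 0.975)),
            normal = qnorm(c(0.025, 0.5, 0.975))), 2)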


Chapter 6

Big Op and little op

6.1 Stochastic order notation

"Big Op" (big oh-pee), or in algebraic terms $O_p$, is a shorthand means of characterising the convergence in probability of a set of random variables. It directly builds on the same sort of convergence ideas that were discussed in Chapters 4 and 5.

Big Op means that some given random variable is stochastically bounded. If we have some random variable $X_n$ and some constant $a_n$ (where $n$ indexes both sets), then

$$X_n = O_p(a_n)$$

is the same as saying that

$$P\left(\left|\frac{X_n}{a_n}\right| > M\right) < \epsilon, \; \forall n > N.$$

$M$ and $N$ here are just finite numbers, and $\epsilon$ is some arbitrary (small) number. In plain English, $O_p$ means that for a large enough $n$ there is some number ($M$) such that the probability that the random variable $\frac{X_n}{a_n}$ is larger (in absolute value) than that number is essentially zero. It is "bounded in probability" (van der Vaart, 1998, Section 2.2).

"Little op" (little oh-pee), or $o_p$, refers to convergence in probability towards zero. $X_n = o_p(1)$ is the same as saying

$$\lim_{n \to \infty} P(|X_n| \geq \epsilon) = 0, \; \forall \epsilon > 0.$$

By definition of the notation, if $X_n = o_p(a_n)$ then

$$\frac{X_n}{a_n} = o_p(1).$$

In turn, we can therefore express $X_n = o_p(a_n)$ as

$$\lim_{n \to \infty} P\left(\left|\frac{X_n}{a_n}\right| \geq \epsilon\right) = 0, \; \forall \epsilon > 0.$$

In other words, $X_n = o_p(a_n)$ if and only if $\frac{X_n}{a_n} \xrightarrow{p} 0$.


6.1.1 Relationship of big-O and little-o

$O_p$ and $o_p$ may seem quite similar, and that's because they are! Another way to express $X_n = O_p(a_n)$ is

$$\forall \epsilon \; \exists N_\epsilon, \delta_\epsilon \; s.t. \; \forall n > N_\epsilon, \; P\left(\left|\frac{X_n}{a_n}\right| \geq \delta_\epsilon\right) \leq \epsilon.$$

This restatement makes it clear that the values of $\delta$ and $N$ are to be found with respect to $\epsilon$. That is, we only have to find one value of $N$ and $\delta$ for each $\epsilon$, and these can differ across $\epsilon$'s. Using the same notation, $X_n = o_p(a_n)$ can be expressed as

$$\forall \epsilon, \delta \; \exists N_{\epsilon,\delta} \; s.t. \; \forall n > N_{\epsilon,\delta}, \; P\left(\left|\frac{X_n}{a_n}\right| \geq \delta\right) \leq \epsilon.$$

$o_p$ is therefore a more general statement, ranging over all values of $\epsilon$ and $\delta$, and hence any combination of those two values. In other words, for any given pair of values for $\epsilon$ and $\delta$ there must be some $N$ that satisfies the above inequality (assuming $X_n = o_p(a_n)$).

Note also, therefore, that $o_p(a_n)$ entails $O_p(a_n)$, but that the inverse is not true. If for all $\epsilon$ and $\delta$ there is some $N_{\epsilon,\delta}$ that satisfies the inequality, then it must be the case that for all $\epsilon$ there exists some $\delta$ such that the inequality also holds. But just because for some $\delta_\epsilon$ the inequality holds, this does not mean that it will hold for all $\delta$.

6.2 Notational shorthand and "arithmetic" properties

Expressions like $X_n = o_p\left(\frac{1}{\sqrt{n}}\right)$ do not contain literal identities. Big and little o are merely shorthand ways of expressing how some random variable converges (either to a bound or to zero). Suppose, for instance, that we know $X_n = o_p\left(\frac{1}{n}\right)$. We also therefore know that $X_n = o_p\left(\frac{1}{n^{0.5}}\right)$. Analogously, think about a car accelerating at a rate of at least 10 $ms^{-2}$ – that car is also accelerating at a rate of at least 5 $ms^{-2}$. But it is not the case that $o_p\left(\frac{1}{n}\right) = o_p\left(\frac{1}{\sqrt{n}}\right)$. For instance, a car accelerating at least as fast as 5 $ms^{-2}$ is not necessarily accelerating at least as fast as 10 $ms^{-2}$.

Hence, when we use stochastic order notation, we should be careful to think of it as implying something, rather than making the claim that some random variable or expression involving random variables equals some stochastic order.

That being said, we can note some simple implications of combining $O_p$ and/or $o_p$ terms, including:

• $o_p(1) + o_p(1) = o_p(1)$ – this is straightforward: two terms that both converge to zero at the same rate collectively converge to zero at that rate. Note this is actually just an application of the Continuous Mapping Theorem, since if $X_n = o_p(1)$ and $Y_n = o_p(1)$ then $X_n \xrightarrow{p} 0$ and $Y_n \xrightarrow{p} 0$; the addition of these two terms is a continuous mapping, and therefore $X_n + Y_n \xrightarrow{p} 0$, $\therefore X_n + Y_n = o_p(1)$.

• $O_p(1) + o_p(1) = O_p(1)$ – a term that is bounded in probability ($O_p(1)$) plus a term converging in probability to zero is bounded in probability.

• $O_p(1)o_p(1) = o_p(1)$ – a term bounded in probability multiplied by a term that converges (in the same order) to zero itself converges to zero.

• $o_p(R) = R \times o_p(1)$ – again this is easy to see, since if $X_n = o_p(R)$, then $X_n / R = o_p(1)$, and so $X_n = R \, o_p(1)$.

Further rules, and intuitive explanations for their validity, can be found in Section 2.2 of van der Vaart (1998). The last rule above, however, is worth dwelling on briefly since it makes clear why we use different rate terms ($R$) in the little-o operator. Consider two rates $R^{(1)} = \frac{1}{\sqrt{n}}$ and $R^{(2)} = \frac{1}{\sqrt[3]{n}}$, and some random variable $Y_n \xrightarrow{p} 0$, that is $Y_n = o_p(1)$. Given the final rule (and remembering the equals signs should not be read literally), if $X^{(1)}_n = o_p(R^{(1)})$, then

$$X^{(1)}_n = \frac{1}{\sqrt{n}} \times Y_n,$$

and if $X^{(2)}_n = o_p(R^{(2)})$, then

$$X^{(2)}_n = \frac{1}{\sqrt[3]{n}} \times Y_n.$$

For each value of $Y_n$, as $n$ approaches infinity, $X^{(1)}_n$ is smaller than $X^{(2)}_n$. In other words, $X^{(2)}_n$ will converge in probability towards zero more slowly. This implication of the notation, again, makes clear why the choice of rate term $R$ matters.

6.3 Why is this useful?[1]

A simple (trivial) example of this notation is to consider a sequence of random variables $X_n$ with known $\mathbb{E}[X_n] = X$. We can therefore decompose $X_n = X + o_p(1)$, since we know by the Weak Law of Large Numbers that $X_n \xrightarrow{p} X$. This is useful because, without having to introduce explicit limits into our equations, we know that with a sufficiently large $n$ the second term of our decomposition converges to zero, and therefore we can (in a hand-wavey fashion) ignore it.

Let's consider a more meaningful example. Suppose now that $X_n \sim N(0, n)$. Using known features of normal distributions, we can rearrange this to

$$\frac{X_n}{\sqrt{n}} \sim N(0, 1).$$

There exists some $M$ such that the probability that a value from $N(0, 1)$ exceeds $M$ is less than $\epsilon > 0$, and therefore

$$X_n = O_p(\sqrt{n}).$$

$X_n$ is also little-op of $n$ since

$$\frac{X_n}{n} \sim N\left(0, \frac{n}{n^2}\right) \sim N\left(0, \frac{1}{n}\right).$$

And so we just need to prove the right-hand side above is $o_p(1)$. To do so, note that:

$$P\left(\left|N\left(0, \tfrac{1}{n}\right)\right| > \epsilon\right) = P\left(\tfrac{1}{\sqrt{n}}|N(0, 1)| > \epsilon\right) = P\left(|N(0, 1)| > \sqrt{n}\epsilon\right) \xrightarrow{p} 0.$$

The last step follows since $\sqrt{n} \to \infty$, and so the probability that the standard normal exceeds $\sqrt{n}\epsilon$ decreases to zero. Hence

$$\lim_{n \to \infty} P\left(\left|N\left(0, \tfrac{1}{n}\right)\right| \geq \epsilon\right) = 0,$$

for all $\epsilon > 0$, and therefore

$$X_n = o_p(n).$$

[1] The first two examples in this section are adapted from Ashesh Rambachan's Asymptotics Review lecture slides, from Harvard Math Camp – Econometrics 2018.

The big-O, little-o notation captures the complexity of an expression or, equivalently, the rate at which it converges. One way to read $X_n = o_p(a_n)$ is that, for any multiple $j$ of $a_n$, $X_n$ converges in probability to zero at the rate determined by $a_n$. So, for example, $o_p(a_n^2)$ converges faster than $o_p(a_n)$, since for some random variable $X_n$, $\frac{X_n}{a_n^2} < \frac{X_n}{a_n}$ when $a_n > 1$.

When we want to work out the asymptotic limits of a more complicated equation, where multiple terms are affected by the number of observations, if we have a term that converges in probability to zero at a faster rate than the others then we can safely ignore that term.

6.4 Worked Example: Consistency of mean estimators

An estimator is "consistent" if it converges in probability to the true parameter as the number of observations increases. More formally, a parameter estimate $\hat{\theta}$ is consistent if

$$P(|\hat{\theta} - \theta| \geq \epsilon) \to 0 \text{ as } n \to \infty,$$

where $\theta$ is the true parameter.

One question we can ask is how fast our consistent parameter estimate converges on the true parameter value. This is an "applied" methods problem to the extent that, as researchers seeking to make an inference about the true parameter, and confronted with potentially many ways of estimating it, we want to choose an efficient estimator, i.e. one that gets to the truth quickest!

Let's suppose we want to estimate the population mean of $X$, $\theta$. Suppose further that we have two potential estimators: the sample mean $\frac{1}{N}\sum_{i=1}^{N} X_i$, and the median $X_{(N+1)/2}$, where $N = 2n + 1$ (we'll assume an odd number of observations for ease of calculation) and $X$ is an ordered sequence from smallest to largest.

We know by the Central Limit Theorem that the sample mean

$$\bar{X}_N \sim \mathcal{N}\left(\theta, \frac{\sigma^2}{N}\right),$$

and note that I use $\mathcal{N}$ to denote the normal distribution function, to avoid confusion with the total number of observations $N$.

Withholding the proof, the large-sample distribution of the median estimator can be expressed approximately[2] as

$$\text{Med}(X_1, X_2, ..., X_N) \sim \mathcal{N}\left(\theta, \frac{\pi\sigma^2}{2N}\right).$$

[2] See this Wolfram MathWorld post for more information about the exact CLT distribution of sample medians.


How do these estimators perform in practice? Let's first check this via Monte Carlo, by simulating draws of a standard normal distribution with various sizes of N and plotting the resulting distribution of the two estimators:

library(tidyverse)
library(ccaPP) # This pkg includes a fast algorithm for the median

# Compute sample mean and median 1000 times, using N draws from std. normal
rep_sample <- function(N) {
  sample_means <- c()
  sample_medians <- c()
  for (s in 1:1000) {
    sample <- rnorm(N)
    sample_means[s] <- mean(sample)
    sample_medians[s] <- fastMedian(sample)
  }
  return(data.frame(N = N, Mean = sample_means, Median = sample_medians))
}

set.seed(89)
Ns <- c(5, seq(50, 250, by = 50)) # A series of sample sizes

# Apply function and collect results, then pivot dataset to make plotting easier
sim_results <- do.call("rbind", lapply(Ns, FUN = function(x) rep_sample(x))) %>%
  pivot_longer(-N, names_to = "Estimator", values_to = "estimate")

ggplot(sim_results, aes(x = estimate, color = Estimator, fill = Estimator)) +
  facet_wrap(~N, ncol = 2, scales = "free_y", labeller = "label_both") +
  geom_density(alpha = 0.5) +
  labs(x = "Value", y = "Density") +
  theme(legend.position = "bottom")

Figure 6.1: Simulated distribution of sample mean and median estimators for different sized samples.

Here we can see that for both the mean and median sample estimators, the distribution of parameter estimates is normally distributed around the true mean ($\theta = 0$). The variance of the sample mean distribution, however, shrinks faster than that of the sample median estimator. In other words, the sample mean is more "efficient" (in fact it is the most efficient estimator). Efficiency here captures what we noted mathematically above – that the rate of convergence on the true parameter (i.e. the rate at which the estimation error converges on zero) is faster for the sample mean than for the median.

Note that both estimators are therefore unbiased (they are centred on $\theta$), normally distributed, and consistent (the sampling distributions shrink towards the true parameter as N increases), but that the variances shrink at slightly different rates.

We can quantify this using stochastic order notation and the behaviour of these estimators in large samples. First, we can define the estimation errors of the mean and median respectively as

$$\psi_{\text{Mean}} = \hat{\theta} - \theta = \mathcal{N}\left(\theta, \frac{\sigma^2}{N}\right) - \mathcal{N}(\theta, 0) = \mathcal{N}\left(0, \frac{\sigma^2}{N}\right).$$

Similarly,

$$\psi_{\text{Med.}} = \mathcal{N}\left(\theta, \frac{\pi\sigma^2}{2N}\right) - \mathcal{N}(\theta, 0) = \mathcal{N}\left(0, \frac{\pi\sigma^2}{2N}\right).$$

With both the mean and median expressions, we can see that the error of the estimators is centred around zero (i.e. they are unbiased), and that the dispersion of the error around zero decreases as $N$ increases. Given earlier discussions in this chapter, we can rearrange both to find out their rate of convergence.

For the sample mean:

$$\psi_{\text{Mean}} = \frac{1}{\sqrt{N}}\mathcal{N}(0, \sigma^2)$$

$$\frac{\psi_{\text{Mean}}}{N^{-0.5}} = \mathcal{N}(0, \sigma^2).$$

We know that for a normal distribution there will be some $M_\epsilon, N_\epsilon$ such that $P(|\mathcal{N}(0, \sigma^2)| \geq M_\epsilon) < \epsilon$, and hence:

$$\psi_{\text{Mean}} = O_p\left(\frac{1}{\sqrt{N}}\right).$$

Similarly, for the sample median:

$$\psi_{\text{Med.}} = \mathcal{N}\left(0, \frac{\pi\sigma^2}{2N}\right) = \left(\frac{\pi}{2N}\right)^{0.5}\mathcal{N}(0, \sigma^2)$$

$$\psi_{\text{Med.}} \Big/ \left(\frac{\pi}{2N}\right)^{0.5} = \mathcal{N}(0, \sigma^2)$$

$$\psi_{\text{Med.}} = O_p\left(\left[\frac{\pi}{2N}\right]^{0.5}\right) = O_p\left(\frac{\sqrt{\pi}}{\sqrt{2N}}\right).$$

Now we can see that the big-Op rate of the sample median's estimation error is "slower" (read: larger) than that of the sample mean, meaning that the sample mean converges on the true parameter with fewer observations than the sample median.

Another easy way to see the intuition behind this point is to note that, at intermediary steps in the above rearrangements:

$$\psi_{\text{Mean}} = \frac{1}{\sqrt{N}}\mathcal{N}(0, \sigma^2)$$

$$\psi_{\text{Med.}} = \frac{\sqrt{\pi}}{\sqrt{2N}}\mathcal{N}(0, \sigma^2),$$

and so, for any sized sample, the estimating error of the median is larger than that of the mean. To visualise this, we can plot the estimation error rate as a function of $N$ using the rates derived above:

N <- seq(0.01, 100, by = 0.01)
mean_convergence <- 1/sqrt(N)
median_convergence <- sqrt(pi)/sqrt(2*N)

plot_df <- data.frame(N, Mean = mean_convergence, Median = median_convergence) %>%
  pivot_longer(-N, names_to = "Estimator", values_to = "Rate")

ggplot(plot_df, aes(x = N, y = Rate, color = Estimator)) +
  geom_line() +
  ylim(0, 1) +
  theme(legend.position = "bottom")


(Figure: estimation error rate as a function of N for the sample mean and median estimators.)

Note that the median rate line is always above the mean line for all $N$ (though not by much) – it therefore has a slower convergence.


Chapter 7

Delta Method

7.1 Delta Method in Plain English

The Delta Method (DM) states that we can approximate the asymptotic behaviour of functions of a random variable, if the random variable is itself asymptotically normal. In practice, this theorem tells us that even if we do not know the expected value and variance of the function $g(X)$ we can still approximate it reasonably. Note that by the Central Limit Theorem we know that several important random variables and estimators are asymptotically normal, including the sample mean. We can therefore approximate the mean and variance of some transformation of the sample mean using the sample mean's own asymptotic distribution.

More specifically, suppose that we have some sequence of random variables $X_n$, such that as $n \to \infty$

$$X_n \sim N\left(\mu, \frac{\sigma^2}{n}\right).$$

We can rearrange this statement to capture that the difference between the random variable and some constant $\mu$ converges to a normal distribution around zero, with a variance determined by the number of observations:[1]

$$(X_n - \mu) \sim N\left(0, \frac{\sigma^2}{n}\right).$$

Further rearrangement yields

$$(X_n - \mu) \sim \frac{\sigma}{\sqrt{n}}N(0, 1)$$

$$\frac{\sqrt{n}(X_n - \mu)}{\sigma} \sim N(0, 1),$$

by first moving the finite variance and $n$ terms outside of the normal distribution, and then dividing through. Given this, if $g$ is some smooth function (i.e. there are no discontinuous jumps in values) then the Delta Method states that:

$$\frac{\sqrt{n}(g(X_n) - g(\mu))}{|g'(\mu)|\sigma} \approx N(0, 1),$$

where $g'$ is the first derivative of $g$. Rearranging again, we can see that

$$g(X_n) \approx N\left(g(\mu), \frac{g'(\mu)^2\sigma^2}{n}\right).$$

Note that the statement above is an approximation because $g(X_n) = g(\mu) + g'(\mu)(X_n - \mu) + g''(\mu)\frac{(X_n - \mu)^2}{2!} + ...$, i.e. an infinite sum. The Delta Method avoids the infinite regress by ignoring higher order terms (Liu, 2012). I return to this point below in the proof.

[1] There are clear parallels here to how we expressed estimator consistency.
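To see the approximation in action, here is a small simulation sketch (the population values $\mu = 5$, $\sigma = 2$, the sample size, and the function $g(x) = \log(x)$ are illustrative choices, not from the text): the delta-method standard deviation of $g(\bar{X}_n)$ should be close to the standard deviation observed across repeated samples.

set.seed(89)
mu <- 5; sigma <- 2; n <- 200

# Delta method: sd of g(xbar) is approx. |g'(mu)| * sigma / sqrt(n); here g'(mu) = 1/mu
delta_sd <- (1 / mu) * sigma / sqrt(n)

# Simulation: distribution of log(xbar) across 5000 repeated samples
sim_g <- replicate(5000, log(mean(rnorm(n, mu, sigma))))

c(delta = delta_sd, simulated = sd(sim_g))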

DM also generalizes to multidimensional functions, where instead of converging on the standard normal the random variable must converge in distribution to a multivariate normal, and the derivatives of $g$ are replaced with the gradient of $g$ (a vector of all partial derivatives):

$$\nabla g = \begin{bmatrix} \frac{dg}{dx_1} \\ \frac{dg}{dx_2} \\ \vdots \\ \frac{dg}{dx_n} \end{bmatrix}$$

For the sake of simplicity I do not prove this result here, and instead focus on the univariate case.

7.2 Proof

Before offering a full proof, we need to know a little bit about Taylor Series and Taylor's Theorem. I briefly outline this concept here, then show how this expansion helps to prove DM.

7.2.1 Taylor's Series and Theorem

Suppose we have some continuous function $g$ that is infinitely differentiable. By that, we mean some function that is continuous over a domain, and for which there is always some further derivative of the function. Consider the case $g(x) = e^{2x}$:

$$g'(x) = 2e^{2x}$$
$$g''(x) = 4e^{2x}$$
$$g'''(x) = 8e^{2x}$$
$$g''''(x) = 16e^{2x}$$
$$...$$

For any integer $k$, the $k$th derivative of $g(x)$ is defined. An interesting non-infinitely differentiable function would be $g(x) = |x|$ where $-\infty < x < \infty$. Here note that when $x > 0$, the first-order derivative is 1 (the function is equivalent to $x$), and similarly when $x < 0$, the first-order derivative is -1 (the function is equivalent to $-x$). When $x = 0$, however, the first derivative is undefined – the derivative jumps discontinuously.

The Taylor Series for an infinitely differentiable function at a given point $x = c$ is an expansion of that function in terms of an infinite sum:

$$g(x) = g(c) + g'(c)(x - c) + \frac{g''(c)}{2!}(x - c)^2 + \frac{g'''(c)}{3!}(x - c)^3 + ...$$

Taylor Series are useful because they allow us to approximate a function at a lower polynomial order, using Taylor's Theorem. This Theorem loosely states that, for a given point $x = c$, we can approximate a continuous and $k$-times differentiable function to the $j$th order using the Taylor Series up to the $j$th derivative. In other words, if we have some continuous, differentiable function $g(x)$, its first-order approximation (i.e. its linear approximation) at point $c$ is defined as


๐‘”(๐‘) + ๐‘”โ€ฒ(๐‘)(๐‘ฅ โˆ’ ๐‘).

To make this more concrete, consider the function ๐‘”(๐‘ฅ) = ๐‘’๐‘ฅ. The Taylor Series expansion of ๐‘” at point๐‘ฅ = 0 is

๐‘”(๐‘ฅ) = ๐‘”(0) + ๐‘”โ€ฒ(0)(๐‘ฅ โˆ’ 0) + ๐‘”โ€ณ(0)2! (๐‘ฅ โˆ’ 0)2 + ๐‘”โ€ด(0)

3! (๐‘ฅ โˆ’ 0)3 + ...

So up to the first order, Taylors Theorem states that

๐‘”(๐‘ฅ) โ‰ˆ ๐‘”(0) + ๐‘”โ€ฒ(0)(๐‘ฅ โˆ’ 0) = 1 + ๐‘ฅ,which is the line tangent to ๐‘’๐‘ฅ at ๐‘ฅ = 0. If we consider up to the second order (the quadratic approximation)our fit would be better, and even more so if we included the third, fourth, fifth orders and so on, up untilthe โˆžth order โ€“ at which point the Taylor Approximation is the function precisely.
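A quick numerical sketch of how the approximation improves with order (evaluated at $x = 0.5$, an arbitrary point):

x <- 0.5

# Taylor approximations of exp(x) around 0, against the exact value
c(first_order = 1 + x,
  second_order = 1 + x + x^2 / 2,
  exact = exp(x))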

7.2.2 Proof of Delta Method

Given Taylor's Theorem, we know that, so long as $g$ is continuous and differentiable up to the $k$th derivative, where $k \geq 2$, then at the point $\mu$:

$$g(X_n) \approx g(\mu) + g'(\mu)(X_n - \mu).$$

Subtracting $g(\mu)$ we have:

$$(g(X_n) - g(\mu)) \approx g'(\mu)(X_n - \mu).$$

We know by CLT and our assumptions regarding $X_n$ that $(X_n - \mu) \xrightarrow{d} N(0, \frac{\sigma^2}{n})$. Therefore we can rewrite the above as

$$(g(X_n) - g(\mu)) \approx g'(\mu)N\left(0, \frac{\sigma^2}{n}\right).$$

Hence, by the properties of normal distributions (multiplying by a constant, adding a constant):

$$g(X_n) \approx N\left(g(\mu), \frac{g'(\mu)^2\sigma^2}{n}\right). \; \square$$

7.3 Applied example

Bowler et al. (2006) use the DM to provide confidence intervals for predicted probabilities generated from a logistic regression. Their study involves surveying politicians' attitudes toward electoral rule changes. They estimate a logistic model of support for change on various features of the politicians, including whether they won under existing electoral rules or not. To understand how winning under existing rules affects attitudes, they then generate the predicted probabilities for losers and winners separately.

Generating predicted probabilities from a logistic regression involves a non-linear transformation of an asymptotically normal parameter (the logistic coefficient), and therefore we must take account of this transformation when estimating the variance of the predicted probability.

To generate the predicted probability we use the equation

$$\hat{p} = \frac{e^{\hat{\alpha} + \hat{\beta}_1 X_1 + ... + \hat{\beta}_n X_n}}{1 + e^{\hat{\alpha} + \hat{\beta}_1 X_1 + ... + \hat{\beta}_n X_n}},$$

where $\hat{p}$ is the predicted probability. Estimating the variance around the predicted probability is therefore quite difficult – it involves multiple estimators and non-linear transformations. But we do know that, assuming i.i.d. observations and a correct functional form, the estimating error of the logistic equation is asymptotically multivariate normal around the origin. And so the authors can use DM to calculate 95 percent confidence intervals. In general, the delta method is a useful way of estimating standard errors and confidence intervals when using (but not limited to) logistic regression and other models involving non-linear transformations of model parameters.
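As a sketch of how this works in practice (using simulated data rather than the Bowler et al. study, and a single covariate), the gradient of the predicted probability with respect to the coefficient vector is $\hat{p}(1 - \hat{p})x$, which can be combined with the estimated covariance matrix of the coefficients:

set.seed(89)

# Simulated data standing in for the real study
x1 <- rnorm(500)
y <- rbinom(500, 1, plogis(-0.5 + 0.8 * x1))
fit <- glm(y ~ x1, family = binomial)

# Predicted probability at an arbitrary covariate profile (x1 = 1)
x_row <- c(1, 1)                              # intercept and x1
p_hat <- plogis(sum(x_row * coef(fit)))

# Delta method: gradient of p w.r.t. beta is p(1 - p) * x
grad <- p_hat * (1 - p_hat) * x_row
se_p <- drop(sqrt(t(grad) %*% vcov(fit) %*% grad))

c(estimate = p_hat, lower = p_hat - 1.96 * se_p, upper = p_hat + 1.96 * se_p)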

7.4 Alternative strategies

The appeal of the delta method is that it gives an analytic approximation of a function's distribution, using the asymptotic properties of some underlying (model) parameter. But there are alternative methods for approximating these distributions (and thus standard errors) that do not rely on deriving the order conditions of that function.

One obvious alternative is the bootstrap. For a given transformation of a random variable, calculate the output of the function $B$ times, using samples of the same size as the original sample but drawn with replacement, and take either the standard deviation or the $a$ and $1 - a$ percentiles of the resultant parameter distribution. This method does not require the user to calculate the derivative of a function. It is a non-parametric alternative that simply approximates the distribution itself, rather than approximating the parameters of a parametric distribution.

The bootstrap is computationally more intensive (requiring $B$ separate samples and calculations etc.) but, on the other hand, is less technical to calculate. Moreover, the Delta Method's approximation is limited analytically by the number of terms considered in the Taylor Series expansion. While the first-order Taylor approximation may be reasonable, it may be imprecise. To improve the precision one has to find the second, third, fourth etc. order terms (which may be analytically difficult). With bootstrapping, however, you can improve precision simply by taking more samples (increasing $B$) (King et al., 2000).
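A minimal bootstrap sketch for the same kind of quantity (a non-linear function of a sample mean; the simulated sample, $B = 2000$, and $g(x) = \log(x)$ are illustrative choices):

set.seed(89)
x <- rnorm(200, mean = 5, sd = 2)   # an observed sample
B <- 2000

# Recompute g(xbar) = log(xbar) on B resamples drawn with replacement
boot_g <- replicate(B, log(mean(sample(x, size = length(x), replace = TRUE))))

# Bootstrap standard error and 95% percentile interval
c(se = sd(boot_g), quantile(boot_g, c(0.025, 0.975)))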

Given the ease with which we can acquire and deploy computational resources now, perhaps the delta method is no longer as useful in applied research. But the proof and asymptotic implications remain statistically interesting and worth knowing.


Chapter 8

Frisch-Waugh-Lovell Theorem

8.1 Theorem in plain English

The Frisch-Waugh-Lovell Theorem (FWL; after the initial proof by Frisch and Waugh (1933), and later generalisation by Lovell (1963)) states that:

Any predictor's regression coefficient in a multivariate model is equivalent to the regression coefficient estimated from a bivariate model in which the residualised outcome is regressed on the residualised component of the predictor; where the residuals are taken from models regressing the outcome and the predictor on all other predictors in the multivariate regression (separately).

More formally, assume we have a multivariate regression model with $k$ predictors:

$$y = \beta_1 x_1 + ... + \beta_k x_k + \epsilon. \quad (8.1)$$

FWL states that every $\beta_j$ in Equation 8.1 is equal to $\beta_j^*$, and the residual $\epsilon = \epsilon^*$, in:

$$\epsilon_y = \beta_j^* \epsilon_{x_j} + \epsilon^*, \quad (8.2)$$

where:

$$\epsilon_y = y - \sum_{k \neq j} \beta_k^y x_k, \qquad \epsilon_{x_j} = x_j - \sum_{k \neq j} \beta_k^{x_j} x_k, \quad (8.3)$$

and where $\beta_k^y$ and $\beta_k^{x_j}$ are the regression coefficients from two separate regression models of the outcome (omitting $x_j$) and of $x_j$, respectively.

In other words, FWL states that each predictor's coefficient in a multivariate regression explains that variance of $y$ not explained by either the other $k-1$ predictors' relationship with the outcome or their relationship with that predictor, i.e. the independent effect of $x_j$.


8.2 Proof

8.2.1 Primer: Projection matrices[1]

We need two important types of projection matrices to understand the linear algebra proof of FWL. First, the prediction matrix that was introduced in Chapter 4:

$$P = X(X'X)^{-1}X'. \quad (8.4)$$

Recall that this matrix, when applied to an outcome vector ($y$), produces a set of predicted values ($\hat{y}$). Reverse engineering this, note that $\hat{y} = X\hat{\beta} = X(X'X)^{-1}X'y = Py$.

Since $Py$ produces the predicted values from a regression on $X$, we can define its complement, the residual maker:

$$M = I - X(X'X)^{-1}X', \quad (8.5)$$

since $My = y - X(X'X)^{-1}X'y \equiv y - Py \equiv y - X\hat{\beta} \equiv \hat{\epsilon}$, the residuals from regressing $y$ on $X$.

Given these definitions, note that $M$ and $P$ are complementary:

$$y = \hat{y} + \hat{\epsilon} \equiv Py + My$$
$$Iy = Py + My$$
$$Iy = (P + M)y$$
$$I = P + M. \quad (8.6)$$

With these projection matrices, we can express the FWL claim (which we need to prove) as:

$$y = X_1\hat{\beta}_1 + X_2\hat{\beta}_2 + \hat{\epsilon}$$
$$M_1 y = M_1 X_2\hat{\beta}_2 + \hat{\epsilon}. \quad (8.7)$$
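A quick numerical sketch of these two matrices (using small simulated data, purely for illustration): $P + M$ returns the identity, and $My$ reproduces the OLS residuals.

set.seed(89)
X <- cbind(1, rnorm(100), rnorm(100))    # design matrix with an intercept
y <- rnorm(100)

P <- X %*% solve(t(X) %*% X) %*% t(X)    # prediction matrix
M <- diag(100) - P                       # residual maker

all.equal(P + M, diag(100))
all.equal(drop(M %*% y), unname(residuals(lm(y ~ X - 1))))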

8.2.2 FWL Proof[2]

Let us assume, as in Equation 8.7, that:

$$Y = X_1\hat{\beta}_1 + X_2\hat{\beta}_2 + \hat{\epsilon}. \quad (8.8)$$

First, we can multiply both sides by the residual maker of $X_1$:

$$M_1 Y = M_1 X_1\hat{\beta}_1 + M_1 X_2\hat{\beta}_2 + M_1\hat{\epsilon}, \quad (8.9)$$

which first simplifies to:

$$M_1 Y = M_1 X_2\hat{\beta}_2 + M_1\hat{\epsilon}, \quad (8.10)$$

because $M_1 X_1\hat{\beta}_1 \equiv (M_1 X_1)\hat{\beta}_1 \equiv 0\hat{\beta}_1 = 0$. In plain English, by definition, all the variance in $X_1$ is explained by $X_1$, and therefore a regression of $X_1$ on itself leaves no part unexplained, so $M_1 X_1$ is zero.[3]

Second, we can simplify this equation further because, by the properties of OLS regression, $X_1$ and $\hat{\epsilon}$ are orthogonal. Therefore the residual of the residuals are the residuals! Hence:

$$M_1 Y = M_1 X_2\hat{\beta}_2 + \hat{\epsilon}. \; \square$$

[1] Citation: Based on lecture notes from the University of Oslo's "Econometrics – Modelling and Systems Estimation" course (author attribution unclear), and Davidson et al. (2004).

[2] Citation: Adapted from York University, Canada's wiki for statistical consulting.

[3] Algebraically, $M_1 X_1 = (I - X_1(X_1'X_1)^{-1}X_1')X_1 = X_1 - X_1(X_1'X_1)^{-1}X_1'X_1 = X_1 - X_1 I = X_1 - X_1 = 0$.

8.2.3 Interesting features/extensions

A couple of interesting features come out of the linear algebra proof:

• FWL also holds for bivariate regression when you first residualise $Y$ and $X$ on an $n \times 1$ vector of 1's (i.e. the constant) – which is like demeaning the outcome and predictor before regressing the two.

• $X_1$ and $X_2$ are technically sets of mutually exclusive predictors, i.e. $X_1$ is an $n \times k$ matrix $\{X_1, ..., X_k\}$, and $X_2$ is an $n \times m$ matrix $\{X_{k+1}, ..., X_{k+m}\}$, where $\beta_1$ is a corresponding vector of regression coefficients $\beta_1 = \{\gamma_1, ..., \gamma_k\}$, and likewise $\beta_2 = \{\delta_1, ..., \delta_m\}$, such that:

$$Y = X_1\beta_1 + X_2\beta_2 = X_1\gamma_1 + ... + X_k\gamma_k + X_{k+1}\delta_1 + ... + X_{k+m}\delta_m.$$

Hence the FWL theorem is exceptionally general, applying not only to arbitrarily long coefficient vectors, but also enabling you to back out estimates from any partitioning of the full regression model.

8.3 Coded example

set.seed(89)

## Generate random data
df <- data.frame(y = rnorm(1000, 2, 1.5),
                 x1 = rnorm(1000, 1, 0.3),
                 x2 = rnorm(1000, 1, 4))

## Partial regressions

# Residual of y regressed on x1
y_res <- lm(y ~ x1, df)$residuals

# Residual of x2 regressed on x1
x_res <- lm(x2 ~ x1, df)$residuals

resids <- data.frame(y_res, x_res)

## Compare the beta values for x2

# Multivariate regression:
summary(lm(y ~ x1 + x2, df))

##
## Call:
## lm(formula = y ~ x1 + x2, data = df)
##
## Residuals:
##    Min     1Q Median     3Q    Max
## -4.451 -1.001 -0.039  1.072  5.320
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  2.33629    0.16427  14.222   <2e-16 ***
## x1          -0.31093    0.15933  -1.952   0.0513 .
## x2           0.02023    0.01270   1.593   0.1116
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.535 on 997 degrees of freedom
## Multiple R-squared:  0.006252, Adjusted R-squared:  0.004258
## F-statistic: 3.136 on 2 and 997 DF,  p-value: 0.04388

# Partial regression
summary(lm(y_res ~ x_res, resids))

##
## Call:
## lm(formula = y_res ~ x_res, data = resids)
##
## Residuals:
##    Min     1Q Median     3Q    Max
## -4.451 -1.001 -0.039  1.072  5.320
##
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)
## (Intercept) 5.228e-17  4.850e-02   0.000    1.000
## x_res       2.023e-02  1.270e-02   1.593    0.111
##
## Residual standard error: 1.534 on 998 degrees of freedom
## Multiple R-squared:  0.002538, Adjusted R-squared:  0.001538
## F-statistic: 2.539 on 1 and 998 DF,  p-value: 0.1114

Note: This isn't an exact demonstration because there is a degrees-of-freedom discrepancy. The (correct) multivariate regression degrees of freedom is calculated as $N - 3$ since there are three variables. In the partial regression the degrees of freedom is $N - 2$. This latter calculation does not take into account the additional loss of freedom as a result of partialling out $X_1$.

8.4 Application: Sensitivity analysis

Cinelli and Hazlett (2020) develop a series of tools for researchers to conduct sensitivity analyses on regression models, using an extension of the omitted variable bias framework. To do so, they use FWL to motivate this bias. Suppose that the full regression model is specified as:


$$Y = \hat{\tau}D + X\hat{\beta} + \hat{\gamma}Z + \hat{\epsilon}_{\text{full}}, \quad (8.11)$$

where $\hat{\tau}, \hat{\beta}, \hat{\gamma}$ are estimated regression coefficients, $D$ is the treatment variable, $X$ are observed covariates, and $Z$ are unobserved covariates. Since $Z$ is unobserved, researchers instead estimate:

$$Y = \hat{\tau}_{\text{Obs.}}D + X\hat{\beta}_{\text{Obs.}} + \hat{\epsilon}_{\text{Obs.}}. \quad (8.12)$$

By FWL, we know that $\hat{\tau}_{\text{Obs.}}$ is equivalent to the coefficient from regressing the residualised outcome (with respect to $X$) on the residualised component of $D$ (again with respect to $X$). Call these two residuals $Y_r$ and $D_r$.

And recall that the regression model for the final stage of the partial regressions is bivariate ($Y_r \sim D_r$). Conveniently, a bivariate regression coefficient can be expressed in terms of the covariance between the left-hand and right-hand side variables:[4]

$$\hat{\tau}_{\text{Obs.}} = \frac{cov(D_r, Y_r)}{var(D_r)}. \quad (8.13)$$

Note that, given the full regression model in Equation 8.11, the partial outcome $Y_r$ is actually composed of the elements $\hat{\tau}D_r + \hat{\gamma}Z_r$, and so:

$$\hat{\tau}_{\text{Obs.}} = \frac{cov(D_r, \hat{\tau}D_r + \hat{\gamma}Z_r)}{var(D_r)}. \quad (8.14)$$

Next, we can expand the covariance using the expectation rule that $cov(A, B + C) = cov(A, B) + cov(A, C)$, and since $\hat{\tau}, \hat{\gamma}$ are scalar, we can move them outside the covariance functions:

$$\hat{\tau}_{\text{Obs.}} = \frac{\hat{\tau}cov(D_r, D_r) + \hat{\gamma}cov(D_r, Z_r)}{var(D_r)}. \quad (8.15)$$

Since $cov(A, A) = var(A)$, it follows that:

$$\hat{\tau}_{\text{Obs.}} = \frac{\hat{\tau}var(D_r) + \hat{\gamma}cov(D_r, Z_r)}{var(D_r)} \equiv \hat{\tau} + \hat{\gamma}\frac{cov(D_r, Z_r)}{var(D_r)} \equiv \hat{\tau} + \hat{\gamma}\hat{\delta}. \quad (8.16)$$

Frisch-Waugh is so useful because it simplifies a multivariate equation into a bivariate one. While computationally this makes zero difference (unlike in the days of hand computation), here it allows us to use a convenient expression of the bivariate coefficient to show and quantify the bias when you run a regression in the presence of an unobserved confounder. Moreover, note that in Equation 8.14 we implicitly use FWL again, since we know that the non-stochastic aspects of Y not explained by X are the residualised components of the full Equation 8.11.

[4] If $y = \hat{\alpha} + \hat{\beta}x + \epsilon$, then by least squares $\hat{\beta} = \frac{cov(x, y)}{var(x)}$ and $\hat{\alpha} = \bar{y} - \hat{\beta}\bar{x}$.
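A small simulation sketch of this decomposition (all data-generating values below are arbitrary illustrative choices): fitting the observed (short) regression, the full regression, and the auxiliary regression of the residualised $Z$ on the residualised $D$ recovers the identity in Equation 8.16.

set.seed(89)
n <- 10000
x <- rnorm(n)
z <- 0.5 * x + rnorm(n)              # unobserved confounder
d <- 0.3 * x + 0.6 * z + rnorm(n)    # treatment
y <- 1 * d + 0.5 * x + 2 * z + rnorm(n)

tau_obs <- coef(lm(y ~ d + x))["d"]  # observed (short) regression
full <- lm(y ~ d + x + z)
tau <- coef(full)["d"]; gamma <- coef(full)["z"]

# delta: coefficient from regressing residualised z on residualised d (both w.r.t. x)
delta <- coef(lm(resid(lm(z ~ x)) ~ resid(lm(d ~ x))))[2]

c(tau_obs = unname(tau_obs), tau_plus_gamma_delta = unname(tau + gamma * delta))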

8.4.1 Regressing the partialled-out X on the full Y

In Mostly Harmless Econometrics (MHE; Angrist and Pischke (2009)), the authors note that you also get an identical coefficient to the full regression if you regress the non-residualised $Y$ on the residualised predictor. We can use the OVB framework above to explain this case.

Let's take the full regression model as:


$$Y = \beta_1 X_1 + \beta_2 X_2 + \epsilon. \quad (8.17)$$

MHE states that:

$$Y = \beta_1 M_2 X_1 + \epsilon. \quad (8.18)$$

Note that this is just FWL, except we have not also residualised $Y$. Our aim is to check whether there is any bias in the estimated coefficient from this second equation. As before, since this is a bivariate regression we can express the coefficient as:

$$\hat{\beta}_1 = \frac{cov(M_2 X_1, Y)}{var(M_2 X_1)} = \frac{cov(M_2 X_1, \beta_1 X_1 + \beta_2 X_2)}{var(M_2 X_1)} = \beta_1\frac{cov(M_2 X_1, X_1)}{var(M_2 X_1)} + \beta_2\frac{cov(M_2 X_1, X_2)}{var(M_2 X_1)} = \beta_1 + \beta_2 \times 0 = \beta_1. \quad (8.19)$$

This follows from two features. First, $cov(M_2 X_1, X_1) = var(M_2 X_1)$. Second, it is clear that $cov(M_2 X_1, X_2) = 0$ because $M_2 X_1$ is $X_1$ stripped of any variance associated with $X_2$ and so, by definition, they do not covary. Therefore, we can recover the unbiased regression coefficient using an adapted version of FWL where we do not residualise $Y$ – as stated in MHE.
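We can verify this variant numerically by reusing the objects from the coded example above (df and x_res): regressing the non-residualised y on the residualised x2 reproduces the multivariate coefficient on x2.

c(multivariate = unname(coef(lm(y ~ x1 + x2, df))["x2"]),
  y_on_residualised_x2 = unname(coef(lm(df$y ~ x_res))[2]))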


Chapter 9

Positive Definite Matrices

9.1 Terminology

An $n \times n$ symmetric matrix $M$ is positive definite (PD) if and only if $x'Mx > 0$, for all non-zero $x \in \mathbb{R}^n$. For example, take the $3 \times 3$ identity matrix, and a column vector of non-zero real numbers $[a, b, c]'$:

$$\begin{bmatrix} a & b & c \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} a & b & c \end{bmatrix}\begin{bmatrix} a \\ b \\ c \end{bmatrix} = a^2 + b^2 + c^2.$$

Since by definition $a^2$, $b^2$, and $c^2$ are all greater than zero (even if $a$, $b$, or $c$ are negative), their sum is also positive.

A matrix is positive semi-definite (PSD) if and only if $x'Mx \geq 0$ for all non-zero $x \in \mathbb{R}^n$. Note that PSD differs from PD in that the quadratic form is no longer strictly positive.

One known feature of matrices (that will be useful later in this chapter) is that if a matrix is symmetric and idempotent then it will be positive semi-definite. Take some non-zero vector $x$, and a symmetric, idempotent matrix $A$. By idempotency we know that $x'Ax = x'AAx$. By symmetry we know that $A' = A$, and therefore:

$$x'Ax = x'AAx = x'A'Ax = (Ax)'Ax \geq 0,$$

and hence $A$ is PSD.[1]

9.1.1 Positivity

Both PD and PSD are concerned with positivity. For scalar values like -2, 5, 89, positivity simply refers to their sign – and we can tell immediately whether the numbers are positive or not. Some functions are also (strictly) positive. Think about $f(x) = x^2 + 1$. For all $x \in \mathbb{R}$, $f(x) \geq 1 > 0$. PD and PSD extend this notion of positivity to matrices, which is useful when we consider multidimensional optimisation problems or the combination of matrices.

While for abstract matrices like the identity matrix it is easy to verify PD and PSD properties, for more complicated matrices we often require other, more complicated methods. For example, we know that a symmetric matrix is PSD if and only if all its eigenvalues are non-negative. An eigenvalue $\lambda$ is a scalar such that, for a matrix $A$ and non-zero $n \times 1$ vector $v$, $A \cdot v = \lambda \cdot v$. While I do not explore this further in this chapter, there are methods available for recovering these values from the preceding equation. Further discussion of the full properties of PD and PSD matrices can be found here as well as in print (e.g. Horn and Johnson, 2013, Chapter 7).

[1] This short proof is taken from this discussion.
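For concrete matrices, a quick way to check these properties in R is via the eigenvalues (a sketch; the second matrix is a residual maker of the kind used in Chapter 8, which is symmetric and idempotent):

# PD check: the identity matrix has all eigenvalues equal to 1 (> 0)
eigen(diag(3))$values

# PSD check: a symmetric, idempotent residual maker has eigenvalues of 0 and 1 (>= 0)
set.seed(89)
X <- cbind(1, rnorm(10))
M <- diag(10) - X %*% solve(t(X) %*% X) %*% t(X)
round(eigen(M)$values, 10)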

9.2 $A - B$ is PSD iff $B^{-1} - A^{-1}$ is PSD

Wooldridge's list of 10 theorems does not actually include a general claim about the importance of P(S)D matrices. Instead, he lists a very specific feature of two PD matrices. In plain English, this theorem states that, assuming $A$ and $B$ are both positive definite, $A - B$ is positive semi-definite if and only if the inverse of $B$ minus the inverse of $A$ is positive semi-definite.

Before we prove this theorem, it's worth noting a few points that are immediately intuitive from its statement. Note that the theorem moves from PD matrices to PSD matrices. This is because we are subtracting one matrix from another. While we know $A$ and $B$ are both PD, if they are equal then $x'(A - B)x$ will equal zero. For example, if $A = B = I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$, then $A - B = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$. Hence, $x'(A - B)x = 0$ and therefore $A - B$ is PSD, but not PD.

Also note that this theorem only applies to a certain class of matrices, namely those we know to be PD. This hints at the sort of applied relevance this theorem may have. For instance, we know that variance is a strictly positive quantity.

The actual applied relevance of this theorem is not particularly obvious, at least from the claim alone. In his post, Wooldridge notes that he repeatedly uses this fact 'to show the asymptotic efficiency of various estimators.' In his Introductory Econometrics textbook (2013), for instance, Wooldridge makes use of the properties of PSD matrices in proving that the Gauss-Markov (GM) assumptions ensure that OLS is the best linear unbiased estimator (BLUE). And, more generally, PD and PSD matrices are very helpful in optimisation problems (of relevance to machine learning too). Neither appears to be a direct application of this specific, bidirectional theorem. In the remainder of this chapter, therefore, I prove the theorem itself for completeness. I then broaden the discussion to explore how PSD properties are used in Wooldridge's BLUE proof, as well as discuss the more general role of PD matrices in optimisation problems.

9.2.1 Proof

The proof of Wooldridge's actual claim is straightforward. In fact, given the symmetry of the proof, we only need to prove one direction (i.e. if $A - B$ is PSD, then $B^{-1} - A^{-1}$ is PSD).

Let's assume, therefore, that $A - B$ is PSD. Hence:

$$x'(A - B)x \geq 0$$
$$x'Ax - x'Bx \geq 0$$
$$x'Ax \geq x'Bx$$
$$Ax \geq Bx$$
$$A \geq B.$$

Next, we can invert our two matrices while maintaining the inequality:

$$A^{-1}AB^{-1} \geq A^{-1}BB^{-1}$$
$$IB^{-1} \geq A^{-1}I$$
$$B^{-1} \geq A^{-1}.$$

Finally, we can just remultiply both sides of the inequality by our arbitrary non-zero vector:

$$x'B^{-1} \geq x'A^{-1}$$
$$x'B^{-1}x \geq x'A^{-1}x$$
$$x'B^{-1}x - x'A^{-1}x \geq 0$$
$$x'(B^{-1} - A^{-1})x \geq 0.$$

Proving the opposite direction (if $B^{-1} - A^{-1}$ is PSD then $A - B$ is PSD) simply involves replacing $A$ with $B^{-1}$ and $B$ with $A^{-1}$, and vice versa, throughout the proof, since $(A^{-1})^{-1} = A$. $\square$
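A quick numerical sketch of the claim (the two matrices below are arbitrary positive definite examples with $A - B = I$): if $A - B$ has non-negative eigenvalues, so should $B^{-1} - A^{-1}$.

A <- matrix(c(3, 0.5, 0.5, 2), nrow = 2)   # positive definite
B <- matrix(c(2, 0.5, 0.5, 1), nrow = 2)   # positive definite

# Both differences should have non-negative eigenvalues (i.e. be PSD)
eigen(A - B)$values
eigen(solve(B) - solve(A))$values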

9.3 Applications

9.3.1 OLS as the best linear unbiased estimator (BLUE)

First, let's introduce the four Gauss-Markov assumptions. I only state these briefly, in the interest of space, spending a little more time explaining the rank of a matrix. Collectively, these assumptions guarantee that the linear regression estimates $\hat{\beta}$ are BLUE (the best linear unbiased estimator of $\beta$).

1. The true model is linear such that $y = X\beta + u$, where $y$ is an $n \times 1$ vector, $X$ is an $n \times (k+1)$ matrix, and $u$ is an unobserved $n \times 1$ vector.

2. The rank of $X$ is $(k+1)$ (full rank), i.e. there are no linear dependencies among the variables in $X$. To understand what the rank of a matrix denotes, consider the following $3 \times 3$ matrix:

$$M_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 0 \end{bmatrix}$$

Note that the third row of $M_1$ is just two times the first row. They are therefore entirely linearly dependent, and so not separable. The number of independent rows (the rank of the matrix) is therefore 2. One way to think about this geometrically, as in Chapter 3, is to plot each row as a vector. The third vector would completely overlap the first, and so in terms of direction we would not be able to discern between them. In terms of the span of these two rows, moreover, there is no point that one can get to using a combination of both that one could not get to by scaling either one of them.

A slightly more complicated rank-deficient (i.e. not full rank) matrix would be:

$$M_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 1 & 0 \end{bmatrix}$$

Here note that the third row is not a scalar multiple of either other row. But it is a linear combination of the other two. If rows 1, 2, and 3 are represented by $a$, $b$, and $c$ respectively, then $c = 2a + b$. Again, geometrically, there is no point that the third row vector can take us to which cannot be achieved using only the first two rows.

An example of a matrix with full rank, i.e. no linear dependencies, would be:

$$M_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1 \end{bmatrix}$$

It is easy to verify that $M_1$ and $M_2$ are rank-deficient, whereas $M_3$ is of full rank, in R:

M1 <- matrix(c(1,0,2, 0,1,0, 0,0,0), ncol = 3)
M1_rank <- qr(M1)$rank

M2 <- matrix(c(1,0,2, 0,1,1, 0,0,0), ncol = 3)
M2_rank <- qr(M2)$rank

M3 <- matrix(c(1,0,2, 0,1,0, 0,0,1), ncol = 3)
M3_rank <- qr(M3)$rank

print(paste("M1 rank:", M1_rank))

## [1] "M1 rank: 2"

print(paste("M2 rank:", M2_rank))

## [1] "M2 rank: 2"

print(paste("M3 rank:", M3_rank))

## [1] "M3 rank: 3"

3. $\mathbb{E}[u|X] = 0$, i.e. the model has zero conditional mean or, in other words, our average error is zero.

4. $Var(u_i|X) = \sigma^2$ and $Cov(u_i, u_j|X) = 0$ for all $i \neq j$, or equivalently $Var(u|X) = \sigma^2 I_n$. This matrix has diagonal elements all equal to $\sigma^2$ and all off-diagonal elements equal to zero.

BLUE states that the regression coefficient vector $\hat{\beta}$ is the best, or lowest variance, estimator of the true $\beta$. Wooldridge (2013) has a nice proof of this claim (p. 812). Here I unpack his proof in slightly more detail, noting specifically how PD matrices are used.

To begin our proof of BLUE, let us denote any other linear estimator as $\tilde{\beta} = A'y$, where $A$ is some $n \times (k+1)$ matrix consisting of functions of $X$.

We know by definition that $y = X\beta + u$ and therefore that:

$$\tilde{\beta} = A'(X\beta + u) = A'X\beta + A'u.$$

The conditional expectation of $\tilde{\beta}$ can be expressed as:

$$\mathbb{E}(\tilde{\beta}|X) = A'X\beta + \mathbb{E}(A'u|X),$$

and since $A$ is a function of $X$, we can move it outside the expectation:

$$\mathbb{E}(\tilde{\beta}|X) = A'X\beta + A'\mathbb{E}(u|X).$$

By GM assumption no. 3, we know that $\mathbb{E}(u|X) = 0$, therefore:

$$\mathbb{E}(\tilde{\beta}|X) = A'X\beta.$$

Since we are only comparing $\hat{\beta}$ against other unbiased estimators, we know the conditional mean of any other estimator must equal the true parameter, and therefore that

$$A'X\beta = \beta.$$

The only way that this is true is if $A'X = I$. Hence, we can rewrite our estimator as

$$\tilde{\beta} = \beta + A'u.$$

The variance of our estimator $\tilde{\beta}$ then becomes

$$Var(\tilde{\beta}|X) = (\beta - [\beta + A'u])(\beta - [\beta + A'u])' = (A'u)(A'u)' = A'uu'A = A'[Var(u|X)]A = \sigma^2 A'A,$$

since by GM assumption no. 4 the variance of the errors is a constant scalar $\sigma^2$.

Hence:

$$Var(\tilde{\beta}|X) - Var(\hat{\beta}|X) = \sigma^2 A'A - \sigma^2(X'X)^{-1} = \sigma^2[A'A - (X'X)^{-1}].$$

We know that $A'X = I$, and so we can manipulate this expression further:

$$Var(\tilde{\beta}|X) - Var(\hat{\beta}|X) = \sigma^2[A'A - (X'X)^{-1}] = \sigma^2[A'A - A'X(X'X)^{-1}X'A] = \sigma^2 A'[A - X(X'X)^{-1}X'A] = \sigma^2 A'[I - X(X'X)^{-1}X']A = \sigma^2 A'MA.$$

Note that we encountered $M$ in the previous chapter. It is the residual maker, and has the known property of being both symmetric and idempotent. Recall from the first section of this chapter that any symmetric, idempotent matrix is positive semi-definite, and so we know that

$$Var(\tilde{\beta}|X) - Var(\hat{\beta}|X) \geq 0,$$

and thus that the regression estimator $\hat{\beta}$ is more efficient (hence better) than any other unbiased, linear estimator of $\beta$. $\square$

Note that $\hat{\beta}$ and $\tilde{\beta}$ are both $(k+1) \times 1$ vectors. As Wooldridge notes at the end of the proof, for any $(k+1) \times 1$ vector $c$, we can calculate the scalar $c'\beta$. Think of $c$ as the row vector of the $i$th observation from $X$. Then $c'\hat{\beta} = c_0\hat{\beta}_0 + c_1\hat{\beta}_1 + ... + c_k\hat{\beta}_k = \hat{y}_i$. Both $c'\hat{\beta}$ and $c'\tilde{\beta}$ are unbiased estimators of $c'\beta$. Note as an extension of the proof above that

$$Var(c'\tilde{\beta}|X) - Var(c'\hat{\beta}|X) = c'[Var(\tilde{\beta}|X) - Var(\hat{\beta}|X)]c.$$

We know that $Var(\tilde{\beta}|X) - Var(\hat{\beta}|X)$ is PSD, and hence by definition that:

$$c'[Var(\tilde{\beta}|X) - Var(\hat{\beta}|X)]c \geq 0,$$

and hence, for any observation in $X$ (call it $x_i$), and more broadly any linear combination of $\hat{\beta}$, if the GM assumptions hold, the estimate $\hat{y}_i = x_i\hat{\beta}$ has the smallest variance of any possible linear, unbiased estimator.

9.3.2 Optimisation problems

Optimisation problems, in essence, are about tweaking some parameter(s) until an objective function is as good as it can be. The objective function summarises some aspect of the model given a potential solution. For example, in OLS, our objective function is defined as $\sum_i(y_i - \hat{y}_i)^2$ – the sum of squared errors. Typically, "as good as it can be" stands for "is minimised" or "is maximised." For example, with OLS we seek to minimise the sum of the squared error terms. In a slight extension of this idea, many machine learning models aim to minimise the prediction error on a "hold-out" sample of observations, i.e. observations not used to select the model parameters. The objective loss function may be the sum of squares, or it could be the mean squared error, or some more convoluted criterion.

By โ€œtweakingโ€ we mean that the parameter values of the model are adjusted in the hope of generatingan even smaller (bigger) value from our objective function. For example, in least absolute shrinkage andselection (LASSO) regression, the goal is to minimise both the squared prediction error (as in OLS) as wellas the total size of the coefficient vector. More formally, we can write this objective function as:

(๐‘ฆ โˆ’ ๐‘‹๐›ฝ)2 + ๐œ†||๐›ฝ||1,

where ๐œ† is some scalar, and ||๐›ฝ||1 is the sum of the absolute size of the coefficients i.e. โˆ‘๐‘— |๐›ฝ๐‘—|.There are two ways to potentially alter the value of the LASSO loss function: we can change the valueswithin the vector ๐›ฝ or adjust the value of ๐œ†. In fact, iterating through values of ๐œ†, we can solve the squarederror part of the loss function, and then choose from our many different values of ๐œ† which results in thesmallest (read: minimised) objective function.

With infinitely many values of $\lambda$ we could perfectly identify the optimal model, but we are typically constrained to considering only a subset of possible values. If we are too coarse about which values of $\lambda$ to consider, we may miss out on substantial improvements in the objective.

This problem is not just present in LASSO regression. Any non-parametric model (particularly those common in machine learning) will face similar optimisation problems. Fortunately, there are clever ways to reduce the computational intensity of these searches. Rather than iterating through a range of values (an "exhaustive grid search"), we can instead use our current loss value to adjust our next choice of value for $\lambda$ (or whatever other parameter we are optimising over). This sequential method helps us narrow in on the optimal parameter values without necessarily having to consider many parameter combinations far from the minimum.
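
As a sketch of the exhaustive grid search just described (assuming scikit-learn is available; its Lasso estimator calls the penalty weight `alpha`), the snippet below fits the model once per candidate $\lambda$ on a coarse grid and keeps the value with the lowest hold-out mean squared error. The data-generating process, train/hold-out split, and grid are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Simulated data with a sparse true coefficient vector, split into train and hold-out sets
n, k = 200, 10
X = rng.normal(size=(n, k))
beta_true = np.array([3.0, -2.0, 1.5] + [0.0] * (k - 3))
y = X @ beta_true + rng.normal(size=n)
X_train, X_val, y_train, y_val = X[:150], X[150:], y[:150], y[150:]

# Exhaustive grid search: one fit per candidate lambda, scored on the hold-out sample
lambdas = np.logspace(-2, 1, 20)
val_mse = [
    np.mean((y_val - Lasso(alpha=lam).fit(X_train, y_train).predict(X_val)) ** 2)
    for lam in lambdas
]

best_lambda = lambdas[int(np.argmin(val_mse))]
print(f"Best lambda on this grid: {best_lambda:.4f}")
```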

Of course, the natural question is: how do we know how to adjust the scalar $\lambda$, given our existing value? Should it be increased or decreased? One very useful algorithm is gradient descent (GD), which I will focus on in the remainder of this section. Briefly, the basics of GD are:

1. Take a (random) starting solution to your model.
2. Calculate the gradient (i.e. the $k$-length vector of derivatives) of the loss at that point.
3. If the gradient is positive (negative), decrease (increase) your parameter by the gradient value.
4. Repeat steps 2-3 until you converge on a stable solution.

Consider a quadratic curve in two dimensions, as in Figure 9.1. If the gradient at a given point is positive, then we know we are on the right-hand slope. To move closer to the minimum point of the curve we want to go left, so we move in the negative direction. If the gradient is negative, we are on the left-hand slope and want to move in the positive direction. After every shift I can recalculate the gradient and keep adjusting.
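
Here is a minimal Python sketch of those steps on a simple convex quadratic. The particular curve, starting point, and step size are my own illustrative choices (the text's step 3 implicitly uses a step size of 1; a smaller value is used here for stability), and Figure 9.1's exact function is not specified.

```python
def loss(x):
    return (x - 2.0) ** 2        # an assumed convex quadratic with its minimum at x = 2

def gradient(x):
    return 2.0 * (x - 2.0)       # derivative of the loss

x = 10.0                         # step 1: an (arbitrary) starting solution
eta = 0.1                        # step size scaling each movement (an assumption of this sketch)
for _ in range(100):             # step 4: repeat until convergence
    g = gradient(x)              # step 2: gradient at the current point
    x = x - eta * g              # step 3: positive gradient -> move left, negative -> move right
    if abs(g) < 1e-8:
        break

print(x)                         # approaches the minimiser x = 2; steps shrink as |gradient| shrinks
```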


Crucially, these movements are dictated by the absolute size of the gradient. Hence, as I approach the minimum point of the curve, the gradient, and therefore each movement, gets smaller. In Figure 9.1, we see that each iteration not only moves towards the global minimum, but also that the steps shrink as we approach it.

Figure 9.1: Gradient descent procedure in two dimensions.

The quadratic surfaces generated by PD matrices are like the parabola above. Geometrically, they are bowl-shaped and are guaranteed to have a global minimum.² Consider rolling a ball on the inside surface of this bowl. It would run up and down the edges (losing height each time) before eventually coming to rest at the bottom of the bowl, i.e. converging on the global minimum. Our algorithm is therefore bound to find the global minimum, and this is obviously a very useful property from an optimisation perspective.

If a matrix is not PD, on the other hand, we lose this guarantee. A quadratic surface generated by a matrix that is PSD but not PD is a degenerate bowl: it is completely flat along some directions, so there is no single global minimum for the algorithm to settle on. Worse, if the matrix has both positive and negative eigenvalues (i.e. it is indefinite), the surface contains "saddle points": points where the slope is zero in all directions, but which are neither a (local) minimum nor a maximum in all dimensions. Geometrically, such surfaces look like hyperbolic paraboloids (shaped like a Pringles crisp). While there is a point on the surface that is flat in all dimensions, it is a minimum in one dimension but a maximum in another.

These surfaces prove more difficult to optimise because we are not guaranteed to converge on that stationary point. At a point just away from the saddle point, the individual elements of the gradient vector pull in different directions: following the negative gradient moves us towards the stationary point along some axes but away from it along others. Again, imagine dropping a ball onto the surface of a hyperbolic paraboloid. The ball is likely to pass the saddle point and then run off one of the sides: gravity pulls it down towards a minimum in one dimension, but away from a maximum in another. Matrices that are not PD therefore prove trickier to optimise, and can even mean we never converge on a minimum loss value. Basic algorithms like gradient descent are therefore less likely to be effective optimisers.
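
To see the contrast, here is a small sketch (the surfaces, starting points, and step size are invented for illustration) applying plain gradient descent to a PD "bowl", $f(x, y) = x^2 + y^2$, and to the saddle surface $f(x, y) = x^2 - y^2$, whose Hessian is indefinite.

```python
import numpy as np

def gradient_descent(grad, start, eta=0.1, steps=100):
    """Plain gradient descent: repeatedly step against the gradient."""
    point = np.array(start, dtype=float)
    for _ in range(steps):
        point = point - eta * grad(point)
    return point

# Bowl: f(x, y) = x^2 + y^2 (PD Hessian) -- converges to the unique minimum at (0, 0)
bowl_grad = lambda p: np.array([2.0 * p[0], 2.0 * p[1]])

# Saddle: f(x, y) = x^2 - y^2 (indefinite Hessian) -- the x-coordinate shrinks towards the
# saddle point while the y-coordinate is pushed away from it and grows without bound
saddle_grad = lambda p: np.array([2.0 * p[0], -2.0 * p[1]])

print(gradient_descent(bowl_grad, [3.0, -4.0]))     # approximately [0, 0]
print(gradient_descent(saddle_grad, [3.0, 0.01]))   # x near 0, y has exploded
```

Unless we start exactly on the saddle's stable axis, the iterates run off along the direction of negative curvature, just like the ball rolling off the edge of the crisp.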

² See these UPenn lecture notes for more details.


9.3.3 Recap

In this final section, we have covered two applications of positive (semi-)definiteness: the proof of OLS as BLUE, and the ease of optimisation when a matrix is PD. There is clearly far more that can be discussed with respect to P(S)D matrices, and this chapter links or cites various resources that can be used to go further.


Bibliography

Angrist, J. D. and Pischke, J.-S. (2008). Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press.

Angrist, J. D. and Pischke, J.-S. (2009). Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press, Princeton.

Aronow, P. and Miller, B. (2019). Foundations of Agnostic Statistics. Cambridge University Press.

Aronow, P. M. and Samii, C. (2016). Does regression produce representative estimates of causal effects? American Journal of Political Science, 60(1):250–267.

Bowler, S., Donovan, T., and Karp, J. A. (2006). Why politicians like electoral institutions: Self-interest, values, or ideology? The Journal of Politics, 68(2):434–446.

Cinelli, C. and Hazlett, C. (2020). Making sense of sensitivity: Extending omitted variable bias. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 82(1):39–67.

Davidson, R. and MacKinnon, J. G. (2004). Econometric Theory and Methods, volume 5. Oxford University Press, New York.

Frisch, R. and Waugh, F. V. (1933). Partial time regressions as compared with individual trends. Econometrica: Journal of the Econometric Society, pages 387–401.

Horn, R. A. and Johnson, C. R. (2013). Matrix Analysis. Cambridge University Press, New York, NY, 2nd edition.

King, G., Tomz, M., and Wittenberg, J. (2000). Making the most of statistical analyses: Improving interpretation and presentation. American Journal of Political Science, 44:341–355.

Lemons, D., Langevin, P., and Gythiel, A. (2002). An Introduction to Stochastic Processes in Physics. Johns Hopkins University Press.

Liu, X. (2012). Appendix A: The Delta Method, pages 405–406. John Wiley & Sons, Ltd.

Lovell, M. C. (1963). Seasonal adjustment of economic time series and multiple regression analysis. Journal of the American Statistical Association, 58(304):993–1010.

van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press.

Wasserman, L. (2004). All of Statistics: A Concise Course in Statistical Inference. Springer Texts in Statistics. Springer, New York, NY.

Wooldridge, J. M. (2013). Introductory Econometrics: A Modern Approach. Mason, OH, 5th edition.
