Things to do in Lecture 1
• Outline basic concepts of causality
• Go through the ideas & principles underlying ordinary least squares (OLS) estimation
• Derive formally how to estimate the values of the coefficients we are interested in using OLS
• Run an OLS regression
• Regression analysis is primarily concerned with quantifying the relationship between variables
• It does more than measures of association like the correlation coefficient, since it implies that one variable depends on another
• Hence there is a causal relationship between the variable whose behaviour we would like to explain - the dependent variable (which we label Y) - and the explanatory (or independent) variable that we think can explain this behaviour (which we label X)
DERIVING OLS REGRESSION COEFFICIENTS
- Given data on both the dependent and explanatory variable, we can establish that causal relationship using regression analysis
- How? Fit a straight line through the data that best summarises the relationship
- Since the equation of a straight line is given by
$$Y = b_0 + b_1 X$$
we can obtain estimates of the values of the intercept $b_0$ (the constant) and the slope $b_1$ that together give the "line of best fit"
How?
[Figure: scatter diagram of observations on Y against X, with candidate straight lines drawn through the points]
Which line (and hence which slope & intercept) to choose?
- The one that minimises the sum of squared residuals
Suppose we plot a set of observations on the Y variable and the X variable of interest - a scatter diagram
Using the principle of Ordinary Least Squares (OLS)
- Try to fit a (straight) line through the data based on “minimising the sum of squared residuals”
What is a residual?
If we fit a line through some data, then this gives a predicted value for the dependent variable based on the value of the X variable and the values of the constant and the slope
[Figure: a fitted line $\hat{y} = b_0 + b_1 X$ through the scatter; for observation i at $X_i$, both the actual value $y_i$ and the predicted value $\hat{y}_i$ on the line are marked]
Given any straight line, we can always read off the actual and predicted values of the variable of interest for every individual i in the data set:
$$\hat{y}_i = \hat{b}_0 + \hat{b}_1 X_i$$
For example, given the equation of a straight line
$$Y = b_0 + b_1 X$$
we know that when X = 1, the predicted value of Y is
$$\hat{y} = \hat{b}_0 + \hat{b}_1 (1)$$
and when X = 2, the predicted value of Y is
$$\hat{y} = \hat{b}_0 + \hat{b}_1 (2)$$
We can then compare this predicted value with the actual value of the dependent variable, and the difference between the actual and predicted value gives the residual
$$\hat{u}_i = y_i - \hat{y}_i$$
which, since $\hat{y}_i = \hat{b}_0 + \hat{b}_1 X_i$, gives
$$\hat{u}_i = y_i - \hat{y}_i = y_i - \hat{b}_0 - \hat{b}_1 X_i$$
[Figure: the residual $\hat{u}_i$ shown as the vertical distance between the actual value $y_i$ and the predicted value $\hat{y}_i$ on the fitted line with intercept $b_0$ and slope $b_1$]
The difference between the actual and predicted value is the residual
$$\hat{u}_i = y_i - \hat{y}_i = y_i - \hat{b}_0 - \hat{b}_1 X_i$$
(where the i subscript refers to the ith individual or firm or time period in the data set)
Things to know about residuals
So intuitively the line of best fit should be the one that delivers the smallest residual values for each observation in the data set.
1. The larger the residual, the worse the prediction
$$\hat{u}_i = y_i - \hat{y}_i = y_i - \hat{b}_0 - \hat{b}_1 X_i$$
2. Since the difference between the actual and predicted value gives the residual
$$\hat{u}_i = y_i - \hat{y}_i$$
a positive residual, $\hat{u}_i > 0$, means the model under-predicts (the predicted value is smaller than the actual), and similarly a negative residual, $\hat{u}_i < 0$, means the model over-predicts (the predicted value is larger than the actual).
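To make the sign convention concrete, here is a minimal Python sketch (not from the lecture; the data and candidate coefficients are made up purely for illustration) that computes predicted values and residuals and reports whether the model under- or over-predicts each observation:

```python
import numpy as np

# Made-up data and candidate coefficients, purely for illustration
y = np.array([3.0, 5.0, 4.0, 8.0])   # actual values y_i
X = np.array([1.0, 2.0, 3.0, 4.0])   # explanatory variable X_i
b0, b1 = 1.0, 1.5                    # a candidate intercept and slope

y_hat = b0 + b1 * X                  # predicted values: y_hat_i = b0 + b1 * X_i
u_hat = y - y_hat                    # residuals: u_hat_i = y_i - y_hat_i

for yi, yhi, ui in zip(y, y_hat, u_hat):
    verdict = "under-predicts" if ui > 0 else ("over-predicts" if ui < 0 else "exact")
    print(f"actual={yi:4.1f} predicted={yhi:4.1f} residual={ui:+5.2f} model {verdict}")
```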
Given this, suppose we tried to minimise the sum of all the residuals in the data,
$$\sum_{i=1}^{N} \hat{u}_i$$
in an attempt to get the line of best fit. Whilst this might seem intuitive, it will not work, because any positive residual can be offset by a negative residual in the summation, and so the sum could be close to zero even if the overall fit of the regression were poor.
We can avoid this problem if we use instead the principle of Ordinary Least Squares (OLS): rather than minimise the sum of residuals, minimise the sum of squared residuals
$$\sum_{i=1}^{N} \hat{u}_i^2$$
- Squaring ensures that values are always positive and so can never cancel each other out
- It also gives more "weight" to larger residuals, so it is harder to get away with a poor fit (the larger any one residual in absolute value, the larger is the residual sum of squares (RSS))
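A quick illustration of why squaring matters, as a minimal Python sketch with made-up numbers: a flat line through the mean of y has residuals that sum to exactly zero, yet its residual sum of squares shows the fit is poor:

```python
import numpy as np

y = np.array([3.0, 5.0, 4.0, 8.0])   # made-up data
X = np.array([1.0, 2.0, 3.0, 4.0])

# A deliberately poor fit: a flat line through the mean of y (slope = 0)
b0, b1 = y.mean(), 0.0
u_hat = y - (b0 + b1 * X)

print(u_hat.sum())         # 0.0  -- positive and negative residuals cancel
print((u_hat ** 2).sum())  # 14.0 -- the RSS exposes the poor fit
```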
Things to do in Lecture 2
• Derive formally how to estimate the values of the coefficients we are interested in using OLS
• Run an OLS regression
• Interpret the regression output
• Measure how well the model fits the data
The Idea Behind Ordinary Least Squares (OLS)
Consider the following simple example: N = 2 and we want to fit a straight line $y = b_0 + b_1 X$ through the following data points using the principle of OLS (minimise the sum of squared residuals):
$$(Y_1 = 3, X_1 = 1) \qquad (Y_2 = 5, X_2 = 2)$$
It follows that we can write the estimated residual for the 1st observation as
$$\hat{u}_1 = y_1 - \hat{y}_1 = 3 - \hat{b}_0 - \hat{b}_1 (1)$$
using $\hat{y}_1 = \hat{b}_0 + \hat{b}_1 X_1 = \hat{b}_0 + \hat{b}_1 (1)$ and $y_1 = 3$, and similarly for the 2nd observation
$$\hat{u}_2 = y_2 - \hat{y}_2 = 5 - \hat{b}_0 - \hat{b}_1 (2)$$
OLS: minimise the sum of squared residuals
$$S = \hat{u}_1^2 + \hat{u}_2^2 = \left(3 - \hat{b}_0 - \hat{b}_1 (1)\right)^2 + \left(5 - \hat{b}_0 - \hat{b}_1 (2)\right)^2$$
Expanding the terms in brackets
$$S = \left(9 - 6\hat{b}_0 - 6\hat{b}_1 + \hat{b}_0^2 + 2\hat{b}_0\hat{b}_1 + \hat{b}_1^2\right) + \left(25 - 10\hat{b}_0 - 20\hat{b}_1 + \hat{b}_0^2 + 4\hat{b}_0\hat{b}_1 + 4\hat{b}_1^2\right)$$
Adding together like terms
$$S = 34 - 16\hat{b}_0 - 26\hat{b}_1 + 2\hat{b}_0^2 + 6\hat{b}_0\hat{b}_1 + 5\hat{b}_1^2 \qquad (A)$$
Now we need to find the values of $\hat{b}_0$ and $\hat{b}_1$ which minimise this sum. Using the rules of calculus, we know the first order conditions for minimisation are:
$$\frac{dS}{d\hat{b}_0} = 0 \qquad \frac{dS}{d\hat{b}_1} = 0$$
Applying these to (A) gives
$$4\hat{b}_0 + 6\hat{b}_1 - 16 = 0 \qquad 6\hat{b}_0 + 10\hat{b}_1 - 26 = 0$$
This gives 2 simultaneous equations
$$2\hat{b}_0 + 3\hat{b}_1 = 8$$
$$3\hat{b}_0 + 5\hat{b}_1 = 13$$
which we can solve for the unknown values of $\hat{b}_0$ and $\hat{b}_1$ using the rules for simultaneous equations:
$$\hat{b}_0 = 1 \qquad \hat{b}_1 = 2$$
So the estimated regression line becomes
$$\hat{Y} = 1 + 2X$$
ie the intercept (constant) with the y axis is at 1 and the slope of the straight line is 2
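As a check on the arithmetic, here is a small Python sketch (an illustration, not part of the lecture) that solves the two simultaneous equations and confirms the fitted line passes through both data points:

```python
import numpy as np

# The two simultaneous equations from the worked example:
#   2*b0 + 3*b1 = 8
#   3*b0 + 5*b1 = 13
A = np.array([[2.0, 3.0],
              [3.0, 5.0]])
rhs = np.array([8.0, 13.0])

b0_hat, b1_hat = np.linalg.solve(A, rhs)
print(b0_hat, b1_hat)     # 1.0 2.0 -> fitted line Y_hat = 1 + 2X

# With N = 2 the OLS line passes exactly through both data points
assert np.allclose(b0_hat + b1_hat * np.array([1.0, 2.0]), [3.0, 5.0])
```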
The basic idea underlying OLS is to choose a "line of best fit": choose a straight line that passes through the data and minimises the sum of squared residuals.
We now need to do this more generally, so that we can apply the technique to any possible combination of (X, Y) data pairs and any number of observations.
If we wish to fit a (straight) line through N (rather than 2) observations, then the OLS principle is still the same, ie choose $\hat{b}_0$ and $\hat{b}_1$ to minimise
$$S = \hat{u}_1^2 + \hat{u}_2^2 + \dots + \hat{u}_N^2 = \sum_{i=1}^{N} \hat{u}_i^2 = \sum_{i=1}^{N} \left(y_i - \hat{y}_i\right)^2$$
(where now the summation runs from 1 to N rather than 1 to 2)
Substituting in $\hat{y}_i = \hat{b}_0 + \hat{b}_1 X_i$ gives
$$S = \left(Y_1 - \hat{b}_0 - \hat{b}_1 X_1\right)^2 + \dots + \left(Y_N - \hat{b}_0 - \hat{b}_1 X_N\right)^2$$
Expanding each term in brackets and adding together like terms across the N observations gives
$$S = \sum Y_i^2 - 2\hat{b}_0 \sum Y_i - 2\hat{b}_1 \sum X_i Y_i + N\hat{b}_0^2 + 2\hat{b}_0\hat{b}_1 \sum X_i + \hat{b}_1^2 \sum X_i^2$$
This is just a generalised version of (A) above.
Again, find the values of $\hat{b}_0$ and $\hat{b}_1$ which minimise this sum, using the same simple calculus rules:
$$\frac{dS}{d\hat{b}_0} = 0 \qquad \text{and} \qquad \frac{dS}{d\hat{b}_1} = 0$$
Now these two (1st order) minimisation conditions give
$$\frac{dS}{d\hat{b}_0} = 2N\hat{b}_0 - 2\sum Y_i + 2\hat{b}_1 \sum X_i = 0 \qquad (1)$$
$$\frac{dS}{d\hat{b}_1} = 2\hat{b}_1 \sum X_i^2 - 2\sum X_i Y_i + 2\hat{b}_0 \sum X_i = 0 \qquad (2)$$
and again we have 2 simultaneous equations (called the normal equations) which we can again solve for $\hat{b}_0$ and $\hat{b}_1$.
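To see the normal equations at work on actual numbers, here is a minimal Python sketch (the data are made up for illustration) that rearranges (1) and (2) into matrix form and solves them for $\hat{b}_0$ and $\hat{b}_1$:

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])   # made-up data for illustration
Y = np.array([3.0, 5.0, 4.0, 8.0])
N = len(Y)

# Normal equations (1) and (2) rearranged into matrix form:
#   [ N       sum(X)   ] [b0]   [ sum(Y)   ]
#   [ sum(X)  sum(X^2) ] [b1] = [ sum(X*Y) ]
A = np.array([[N,       X.sum()],
              [X.sum(), (X ** 2).sum()]])
rhs = np.array([Y.sum(), (X * Y).sum()])

b0_hat, b1_hat = np.linalg.solve(A, rhs)
print(b0_hat, b1_hat)     # 1.5 1.4 for this particular data set
```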
Using the fact that the sample means of Y and X are
$$\bar{Y} = \frac{1}{N}\sum_{i=1}^{N} Y_i \qquad \bar{X} = \frac{1}{N}\sum_{i=1}^{N} X_i$$
we can re-write (1) as
$$2N\hat{b}_0 - 2N\bar{Y} + 2\hat{b}_1 N\bar{X} = 0$$
and so obtain the formula to calculate the OLS estimate of the intercept
$$\hat{b}_0 = \bar{Y} - \hat{b}_1 \bar{X} \qquad (3)$$
(*** learn this ***)
Substituting $\hat{b}_0 = \bar{Y} - \hat{b}_1 \bar{X}$ into (2) gives
$$2\hat{b}_1 \sum X_i^2 - 2\sum X_i Y_i + 2\left(\bar{Y} - \hat{b}_1 \bar{X}\right)\sum X_i = 0$$
Using $\sum X_i = N\bar{X}$ and simplifying
$$2\hat{b}_1 \sum X_i^2 - 2\sum X_i Y_i + 2\left(\bar{Y} - \hat{b}_1 \bar{X}\right)N\bar{X} = 0$$
Collecting terms
$$\hat{b}_1 \left( \sum X_i^2 - N\bar{X}^2 \right) = \sum X_i Y_i - N\bar{X}\bar{Y}$$
Dividing both sides by N and solving for $\hat{b}_1$
$$\hat{b}_1 = \frac{\frac{1}{N}\sum X_i Y_i - \bar{X}\bar{Y}}{\frac{1}{N}\sum X_i^2 - \bar{X}^2}$$
which gives the formula to calculate the OLS estimate of the slope
$$\hat{b}_1 = \frac{\frac{1}{N}\sum \left(X_i - \bar{X}\right)\left(Y_i - \bar{Y}\right)}{\frac{1}{N}\sum \left(X_i - \bar{X}\right)^2} = \frac{\operatorname{Cov}(X, Y)}{\operatorname{Var}(X)}$$
(**** learn this ****)
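Here is a minimal Python sketch (made-up data, for illustration only) implementing the two "learn this" formulas directly and cross-checking them against numpy's built-in least-squares fit:

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])   # made-up data for illustration
Y = np.array([3.0, 5.0, 4.0, 8.0])

# Slope: b1 = Cov(X, Y) / Var(X)
b1_hat = ((X - X.mean()) * (Y - Y.mean())).mean() / ((X - X.mean()) ** 2).mean()

# Intercept: b0 = Ybar - b1 * Xbar
b0_hat = Y.mean() - b1_hat * X.mean()
print(b0_hat, b1_hat)                # 1.5 1.4

# Cross-check against numpy's least-squares polynomial fit (degree 1)
slope, intercept = np.polyfit(X, Y, 1)
print(intercept, slope)              # should match the formulas above
```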
So
$$\hat{b}_0 = \bar{Y} - \hat{b}_1 \bar{X} \qquad \text{and} \qquad \hat{b}_1 = \frac{\operatorname{Cov}(X, Y)}{\operatorname{Var}(X)}$$
are how the computer determines the size of the intercept and the slope respectively in an OLS regression.
The OLS slope formula also has a nice, clear intuitive interpretation of the influence of the variable X on the size of the slope, since it shows that:
i) the greater the covariance between X and Y, the larger the (absolute value of) the slope
ii) the smaller the variance of X, the larger the (absolute value of) the slope
It is equally important to be able to interpret the effect of an estimated regression coefficient. Given that OLS essentially passes a straight line through the data, then given
$$\hat{y} = \hat{b}_0 + \hat{b}_1 X$$
we have
$$\frac{d\hat{y}}{dX} = \hat{b}_1$$
So the OLS estimate of the slope gives an estimate of the unit change in the dependent variable Y following a unit change in the level of the explanatory variable X. (So you need to be aware of the units of measurement of your variables in order to be able to interpret what the OLS coefficient is telling you: for example, if Y were an hourly wage in pounds and X years of schooling, a slope of 2 would mean an estimated £2 higher hourly wage for each extra year of schooling.)
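One way to actually run an OLS regression (one of the stated lecture objectives) is with Python's statsmodels package; this is just one possible tool, not one named by the lecture, and the data below are made up for illustration:

```python
import numpy as np
import statsmodels.api as sm

# Made-up data: think of X as, say, years of schooling and y as an hourly wage
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 4.0, 8.0])

X_design = sm.add_constant(X)        # add the column of 1s for the intercept b0
results = sm.OLS(y, X_design).fit()  # estimate b0 and b1 by least squares

print(results.params)                # [b0_hat, b1_hat] = [1.5, 1.4] here
print(results.summary())             # full regression output
```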