ACTSC 432 Review: Part 1



1 Introduction to credibility theory

Consider an insurer with a block of automobile insurance business. Under its current ratings system, automobile insurance policies are classified according to the following criteria:

1. number of drivers

2. number of vehicles

3. gender of each driver

4. kilometers driven per year (approximately)

5. whether the vehicle(s) are driven to work

The insurer assumes that policies with identical answers to the above five questions belong to the same rating class (i.e. represent similar risks based on its ratings system). Among the policies in a specific rating class, two policies have been charged the so-called manual premium (i.e. the premium specified in the insurance manual for a policy with similar characteristics) for the last three years, which for simplicity we assume to be $1500 per year.

You also have in your historical database the amount paid by the insurer for the three years of coverage for these two policies:

Amount paid by the insurer

         Policy 1   Policy 2
Year 1        0        500
Year 2      200       4000
Year 3        0       2500

The chief actuary has asked your recommendation for the premium to be charged to those two policies for Year 4. What is your recommendation?
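Before any credibility machinery, a purely descriptive first step is to compare each policy's observed average annual loss with the $1500 manual premium. A minimal Python sketch using only the numbers in the table above:

```python
# Amounts paid by the insurer over Years 1-3, taken from the table above.
policy_1 = [0, 200, 0]
policy_2 = [500, 4000, 2500]
manual_premium = 1500

mean_1 = sum(policy_1) / len(policy_1)  # average annual loss for Policy 1
mean_2 = sum(policy_2) / len(policy_2)  # average annual loss for Policy 2

print(mean_1)  # about 66.67, far below the manual premium
print(mean_2)  # about 2333.33, far above the manual premium
```

The tension is clear: charging each policy its own sample mean ignores how noisy three years of experience are, while charging both policies the manual premium ignores their individual experience entirely. How much weight to give each source of information is precisely the question credibility theory answers.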


2 Review - conditional probability and conditional expectation

2.1 Conditional probability & distributions

We consider two random variables (rv's) X and Y with joint probability mass function (pmf) or joint density function f_{X,Y}(x, y). It is known that

• the marginal pmf or density function of X is

$$f_X(x) = \int_{\text{all } y} f_{X,Y}(x,y)\, dy,$$

or

$$f_X(x) = \sum_{\text{all } y} f_{X,Y}(x,y).$$

• the marginal pmf or density function of Y is

$$f_Y(y) = \int_{\text{all } x} f_{X,Y}(x,y)\, dx,$$

or

$$f_Y(y) = \sum_{\text{all } x} f_{X,Y}(x,y).$$

• the conditional pmf or density function of X |Y = y is

$$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}.$$

Remark 1 If the rv's X and Y are independent,

$$f_{X,Y}(x,y) = f_X(x)\, f_Y(y).$$

As an immediate consequence, we have

$$f_{X|Y}(x|y) = \frac{f_X(x)\, f_Y(y)}{f_Y(y)} = f_X(x).$$

Using the conditional distribution of X |Y, the marginal pmf or density function of X can be expressed as

$$f_X(x) = \int_{\text{all } y} f_{X|Y}(x|y)\, f_Y(y)\, dy,$$

or

$$f_X(x) = \sum_{\text{all } y} f_{X|Y}(x|y)\, f_Y(y). \qquad (1)$$

Often f_{X|Y}(x|y) is a known parametric probability distribution with a given set of parameters. Under the representation (1), the marginal distribution of X is called a mixed distribution (or mixture).

Example 2 (n-point mixture) Assume that Y is a discrete rv with support {y_i}_{i=1}^n and f_Y(y_i) = p_i (with p_1 + ... + p_n = 1). Then,

$$f_X(x) = \sum_{i=1}^{n} f_{X|Y}(x|y_i)\, f_Y(y_i) = \sum_{i=1}^{n} p_i\, f_{X|Y}(x|y_i).$$


Example 3 (negative binomial) Suppose that

• X |Θ = θ is Poisson distributed with mean θ

• Θ is gamma distributed with density function

$$f_\Theta(\theta) = \frac{\beta(\beta\theta)^{\alpha-1} e^{-\beta\theta}}{\Gamma(\alpha)}, \qquad \theta > 0,$$

where α, β > 0 with

$$\Gamma(\alpha) = \int_0^\infty \theta^{\alpha-1} e^{-\theta}\, d\theta.$$

Aside: Using integration by parts,

$$\Gamma(\alpha+1) = \alpha\,\Gamma(\alpha), \qquad \text{for } \alpha > 0.$$

For α a positive integer,

$$\Gamma(\alpha+1) = \alpha\,\Gamma(\alpha) = \alpha(\alpha-1)\,\Gamma(\alpha-1) = \alpha(\alpha-1)\cdots(1)\,\Gamma(1) = \alpha!$$

Find the marginal distribution of X.

$$\begin{aligned}
f_X(x) &= \int_0^\infty \frac{e^{-\theta}\theta^x}{x!}\,\frac{\beta(\beta\theta)^{\alpha-1}e^{-\beta\theta}}{\Gamma(\alpha)}\,d\theta \\
&= \frac{\beta^\alpha}{\Gamma(\alpha)}\,\frac{1}{x!}\int_0^\infty \theta^{x+\alpha-1}e^{-(\beta+1)\theta}\,d\theta \\
&= \frac{\beta^\alpha}{\Gamma(\alpha)}\,\frac{1}{x!}\,\frac{\Gamma(x+\alpha)}{(\beta+1)^{x+\alpha}}\underbrace{\int_0^\infty \frac{(\beta+1)^{x+\alpha}\,\theta^{x+\alpha-1}e^{-(\beta+1)\theta}}{\Gamma(x+\alpha)}\,d\theta}_{=1} \\
&= \frac{\Gamma(x+\alpha)}{\Gamma(\alpha)}\,\frac{1}{x!}\,\frac{\beta^\alpha}{(\beta+1)^{x+\alpha}} \\
&= \frac{\Gamma(x+\alpha)}{\Gamma(\alpha)}\,\frac{1}{x!}\left(\frac{\beta}{\beta+1}\right)^{\alpha}\left(\frac{1}{\beta+1}\right)^{x} \\
&= \binom{x+\alpha-1}{x}\left(\frac{\beta}{\beta+1}\right)^{\alpha}\left(\frac{1}{\beta+1}\right)^{x}.
\end{aligned}$$

(The integrand in the third line is the density of a gamma distribution with shape x + α and rate β + 1, so that integral equals 1.)

With this pmf, X is known to be a negative binomial rv.
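As a numerical sanity check on the derivation (the parameter values α = 3 and β = 2 below are arbitrary), the sketch evaluates the final pmf both through the gamma-function form and through the binomial-coefficient form, and confirms the probabilities sum to 1:

```python
import math

alpha, beta = 3, 2.0  # illustrative parameters; alpha is an integer so math.comb applies

def pmf_gamma_form(x):
    # Gamma(x+alpha) / (Gamma(alpha) * x!) * (beta/(beta+1))^alpha * (1/(beta+1))^x
    return (math.gamma(x + alpha) / (math.gamma(alpha) * math.factorial(x))
            * (beta / (beta + 1)) ** alpha * (1 / (beta + 1)) ** x)

def pmf_comb_form(x):
    # C(x+alpha-1, x) * (beta/(beta+1))^alpha * (1/(beta+1))^x
    return (math.comb(x + alpha - 1, x)
            * (beta / (beta + 1)) ** alpha * (1 / (beta + 1)) ** x)

# The two forms agree term by term, and the pmf sums to (essentially) 1.
assert all(abs(pmf_gamma_form(x) - pmf_comb_form(x)) < 1e-12 for x in range(100))
total = sum(pmf_gamma_form(x) for x in range(100))
print(total)  # approximately 1.0
```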

2.2 Conditional expectation

In this section, we assume that X |Y = y is a continuous rv with a valid density function f_{X|Y}(·|y) (if X |Y is discrete, replace all the integral signs by summation signs). The conditional mean of X |Y is given by

$$E[X|Y=y] = \int_{\text{all } x} x\, f_{X|Y}(x|y)\, dx.$$


Note that E[X|Y] is a random variable (in Y). The mean of E[X|Y] is

$$\begin{aligned}
E[E[X|Y]] &= \int_{\text{all } y} E[X|Y=y]\, f_Y(y)\, dy \\
&= \int_{\text{all } y}\int_{\text{all } x} x\, f_{X|Y}(x|y)\, f_Y(y)\, dx\, dy \\
&= \int_{\text{all } y}\int_{\text{all } x} x\, f_{X,Y}(x,y)\, dx\, dy \\
&= \int_{\text{all } x} x\left(\int_{\text{all } y} f_{X,Y}(x,y)\, dy\right) dx \\
&= \int_{\text{all } x} x\, f_X(x)\, dx \\
&= E[X].
\end{aligned}$$

Thus,

$$E[X] = E[E[X|Y]].$$
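The double-expectation identity can be verified exactly on a small discrete example; the distribution below is invented purely for illustration:

```python
# Y takes values 1 and 2; the conditional pmfs of X given each y are listed explicitly.
p_Y = {1: 0.4, 2: 0.6}
p_X_given_Y = {
    1: {0: 0.5, 1: 0.5},          # E[X | Y=1] = 0.5
    2: {0: 0.2, 1: 0.3, 2: 0.5},  # E[X | Y=2] = 1.3
}

# Inner expectations E[X | Y=y].
cond_mean = {y: sum(x * p for x, p in pmf.items()) for y, pmf in p_X_given_Y.items()}

# Outer expectation: E[E[X|Y]] = sum over y of E[X | Y=y] f_Y(y).
iterated = sum(cond_mean[y] * p_Y[y] for y in p_Y)

# Direct E[X] from the joint pmf f_{X,Y}(x,y) = f_{X|Y}(x|y) f_Y(y).
direct = sum(x * p * p_Y[y] for y, pmf in p_X_given_Y.items() for x, p in pmf.items())

print(iterated, direct)  # both equal 0.98
```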

Example 4 (negative binomial) Suppose that

• X |Θ = θ is Poisson distributed with mean θ

• Θ is gamma distributed with density function

$$f_\Theta(\theta) = \frac{\beta(\beta\theta)^{\alpha-1} e^{-\beta\theta}}{\Gamma(\alpha)}, \qquad \theta > 0.$$

Then,

$$E[X] = E[E[X|\Theta]] = E[\Theta] = \frac{\alpha}{\beta}.$$

Along the same lines, for a function h(X,Y) that satisfies some mild integrability conditions,

$$E[h(X,Y)] = E[E[h(X,Y)|Y]].$$

For instance, we choose h(X,Y) = (X − E[X|Y])². Then,

$$E[h(X,Y)|Y] = E\big[(X - E[X|Y])^2\,\big|\,Y\big] = Var(X|Y) = E\big[X^2\,\big|\,Y\big] - (E[X|Y])^2.$$

It follows that

$$\begin{aligned}
E[Var(X|Y)] &= E\Big[E\big[X^2\,\big|\,Y\big] - (E[X|Y])^2\Big] \\
&= E\Big[E\big[X^2\,\big|\,Y\big]\Big] - E\big[(E[X|Y])^2\big] \\
&= E\big[X^2\big] - E\big[(E[X|Y])^2\big]. \qquad (2)
\end{aligned}$$


Also, by definition, the variance of the rv E[X|Y] is given by

$$\begin{aligned}
Var(E[X|Y]) &= E\big[(E[X|Y])^2\big] - \big(E[E[X|Y]]\big)^2 \\
&= E\big[(E[X|Y])^2\big] - (E[X])^2. \qquad (3)
\end{aligned}$$

Adding up (2) and (3), one finds that

$$E[Var(X|Y)] + Var(E[X|Y]) = E\big[X^2\big] - (E[X])^2 = Var(X).$$

Example 5 (negative binomial) Suppose that

• X |Θ = θ is Poisson distributed with mean θ

• Θ is gamma distributed with density function

$$f_\Theta(\theta) = \frac{\beta(\beta\theta)^{\alpha-1} e^{-\beta\theta}}{\Gamma(\alpha)}, \qquad \theta > 0.$$

Then,

$$Var(X) = E[Var(X|\Theta)] + Var(E[X|\Theta]) = E[\Theta] + Var(\Theta) = \frac{\alpha}{\beta} + \frac{\alpha}{\beta^2} = \frac{\alpha(\beta+1)}{\beta^2}.$$
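As a check on this formula (with arbitrary illustrative parameters α = 2.5 and β = 1.5), one can compute the mean and variance of the negative binomial pmf from Example 3 by direct summation and compare them with α/β and α(β + 1)/β²:

```python
import math

alpha, beta = 2.5, 1.5  # arbitrary illustrative parameters

def pmf(x):
    # Negative binomial pmf from Example 3, evaluated on the log scale with
    # lgamma so that large x does not overflow math.gamma.
    log_p = (math.lgamma(x + alpha) - math.lgamma(alpha) - math.lgamma(x + 1)
             + alpha * math.log(beta / (beta + 1)) - x * math.log(beta + 1))
    return math.exp(log_p)

xs = range(500)  # truncation point; the tail mass beyond it is negligible here
mean = sum(x * pmf(x) for x in xs)
var = sum(x * x * pmf(x) for x in xs) - mean ** 2

print(mean, alpha / beta)                   # both approximately 1.6667
print(var, alpha * (beta + 1) / beta ** 2)  # both approximately 2.7778
```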

Example 6 (compound Poisson) Suppose that

$$X = \begin{cases} \sum_{i=1}^{N} Y_i, & N > 0, \\ 0, & N = 0, \end{cases}$$

where N is Poisson distributed with mean λ and Y_1, Y_2, ... is a sequence of iid rv's with mean μ and variance σ². All these rv's are independent. We say that X is a compound Poisson rv. We have

$$E[X] = E[E[X|N]] = E[N\mu] = \lambda\mu,$$

and

$$\begin{aligned}
Var(X) &= E[Var(X|N)] + Var(E[X|N]) = E\big[N\sigma^2\big] + Var(\mu N) \\
&= \sigma^2 E[N] + \mu^2 Var(N) = \lambda\big(\sigma^2 + \mu^2\big).
\end{aligned}$$
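A short seeded simulation is consistent with both formulas; the choices λ = 3 and exponential severities with mean μ = 1 (so σ² = μ² = 1) are arbitrary:

```python
import math
import random
import statistics

random.seed(0)
lam, mu = 3.0, 1.0  # Poisson frequency mean; exponential severity mean

def poisson_draw(rate):
    # Knuth's product-of-uniforms method (the stdlib random module has no Poisson sampler).
    threshold = math.exp(-rate)
    k, p = 0, random.random()
    while p > threshold:
        k += 1
        p *= random.random()
    return k

def compound_draw():
    # One realization of X = Y_1 + ... + Y_N (with X = 0 when N = 0).
    n = poisson_draw(lam)
    return sum(random.expovariate(1 / mu) for _ in range(n))

samples = [compound_draw() for _ in range(100_000)]
print(statistics.fmean(samples))      # close to lam * mu = 3
print(statistics.pvariance(samples))  # close to lam * (sigma^2 + mu^2) = 6
```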

Example 7 (normal-normal) Suppose that X |Θ = θ ∼ Normal(θ, v) and Θ ∼ Normal(μ, a). Find the marginal distribution of X.


Solution: We identify the distribution of X via its moment generating function. Indeed,

$$\begin{aligned}
M_X(t) &= E\big[e^{tX}\big] = E\big[E\big[e^{tX}\,\big|\,\Theta\big]\big] = E\Big[e^{t\Theta + \frac{t^2}{2}v}\Big] \\
&= e^{\frac{t^2}{2}v}\, E\big[e^{t\Theta}\big] = e^{\frac{t^2}{2}v}\, e^{t\mu + \frac{t^2}{2}a} = e^{t\mu + \frac{t^2}{2}(a+v)}.
\end{aligned}$$

By the uniqueness of the moment generating function, one concludes that X ∼ Normal(μ, a + v).
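A seeded simulation with arbitrary values μ = 1, a = 4, v = 2 is consistent with this conclusion:

```python
import random
import statistics

random.seed(1)
mu, a, v = 1.0, 4.0, 2.0  # arbitrary prior mean/variance and conditional variance

# Draw Theta ~ Normal(mu, a), then X | Theta ~ Normal(Theta, v).
xs = []
for _ in range(100_000):
    theta = random.gauss(mu, a ** 0.5)       # random.gauss takes the standard deviation
    xs.append(random.gauss(theta, v ** 0.5))

print(statistics.fmean(xs))      # close to mu = 1
print(statistics.pvariance(xs))  # close to a + v = 6
```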

2.3 Concept of unbiased estimation

Suppose that one has selected a model, i.e. a density function or pmf f(x; θ) which depends on a parameter θ. Suppose that X_1, ..., X_n are n iid rv's with density function or pmf f(x; θ), and define the random vector X = (X_1, ..., X_n). Our objective is to find an estimator of θ using the random sample X.

We say that an estimator $\hat{\theta} = \hat{\theta}(X)$ is an unbiased estimator of θ if

$$E\big[\hat{\theta}\big] = \theta. \qquad (4)$$

Example 8 Let X1, ..., Xn be iid rv’s with the exponential distribution

$$f(x;\beta) = \beta e^{-\beta x}, \qquad x > 0.$$

Clearly, the sample mean X̄ = (X_1 + ... + X_n)/n is an unbiased estimator of the mean 1/β:

$$E\big[\bar{X}\big] = E\left[\frac{X_1 + \cdots + X_n}{n}\right] = \frac{1}{n}\cdot n\cdot\frac{1}{\beta} = \frac{1}{\beta}.$$

While (4) seems intuitively to be a reasonable property for an estimator $\hat{\theta}$ to have, there are some drawbacks.

• Unbiasedness is not preserved under parameter transformation: even though (4) holds for X̄ as an estimator of 1/β, 1/X̄ is not an unbiased estimator of β, i.e.

$$E\left[\frac{1}{\bar{X}}\right] \neq \beta,$$

for instance.

• In some situations, (4) is satisfied but results in a silly estimator.

Example 9 Let X be Poisson distributed with mean λ. Using the Poisson probability generating function E[z^X] = e^{λ(z−1)} at z = −1,

$$E\big[(-1)^X\big] = e^{\lambda(-1-1)} = e^{-2\lambda},$$


which implies that (−1)^X is an unbiased estimator of e^{−2λ}. However, the estimator (−1)^X is such that

$$(-1)^x = \begin{cases} 1, & x \text{ even}, \\ -1, & x \text{ odd}, \end{cases}$$

which is nowhere close to e^{−2λ}. A better estimator would be e^{−2X} (even if biased).
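Both the identity E[(−1)^X] = e^{−2λ} and the estimator's silliness are easy to see numerically (λ = 1.5 is an arbitrary choice):

```python
import math

lam = 1.5  # arbitrary Poisson mean

# E[(-1)^X] from the (truncated) Poisson series: sum over x of (-1)^x e^{-lam} lam^x / x!.
expectation = sum((-1) ** x * math.exp(-lam) * lam ** x / math.factorial(x)
                  for x in range(60))

print(expectation)         # approximately 0.0498
print(math.exp(-2 * lam))  # e^{-3}, also approximately 0.0498

# Yet every single realization (-1)^x of the estimator is either +1 or -1,
# nowhere near the target value, which lies strictly between 0 and 1.
```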

Despite some shortcomings, unbiasedness is generally a reasonable property for an estimator to have. Thus, in credibility theory, we will construct nonparametric unbiased estimators of some quantities of interest. For instance, suppose that X_1, ..., X_n are all iid rv's with mean μ and variance σ². Define

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i.$$

In this context, it can be shown that X̄ is an unbiased estimator of the mean μ and that

$$\frac{\sum_{i=1}^{n}\big(X_i - \bar{X}\big)^2}{n-1}$$

is an unbiased estimator of the variance σ².
