
  • Learning and Technology Growth Regimes

    Andrew Foerster (FRB Kansas City)    Christian Matthes (FRB Richmond)

    January 2018

    The views expressed are solely those of the authors and do not necessarily reflect the views
    of the Federal Reserve Banks of Kansas City and Richmond or the Federal Reserve System.

  • Introduction

    • Markov-Switching is Used in a Variety of Economic Environments
      • Monetary or Fiscal Policy
      • Variances of Shocks
      • Persistence of Exogenous Processes

    • Typically, Models Assume Full Information about State Variables: the Regime is Perfectly Observed


  • Do We Always Know The Regime? Empirical Evidence for Technology Growth Regimes

    • Utilization-adjusted TFP (z_t)
    • Investment-specific technology (u_t), measured as the PCE deflator divided by the nonresidential fixed investment deflator
    • Estimate the regime-switching process

      z_t = \mu_z(s_t^{\mu_z}) + z_{t-1} + \sigma_z(s_t^{\sigma_z})\,\varepsilon_{z,t}
      u_t = \mu_u(s_t^{\mu_u}) + u_{t-1} + \sigma_u(s_t^{\sigma_u})\,\varepsilon_{u,t}

    • Two Regimes Each (a simulation sketch of this process follows below)
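    A minimal simulation sketch (our code, not the authors'): it draws independent two-state Markov chains for the drift and volatility regimes and builds the implied path of z_t; the transition probabilities and (mu, sigma) values are illustrative numbers in the spirit of the TFP estimates reported below.

    import numpy as np

    # Sketch: simulate z_t = mu_z(s_t^mu) + z_{t-1} + sigma_z(s_t^sigma) * eps_t
    # with independent two-state chains for the drift and volatility regimes.
    rng = np.random.default_rng(0)

    def draw_chain(P, T, s0=0):
        """Markov chain with P[i, j] = Pr(s_t = j | s_{t-1} = i)."""
        s = np.empty(T, dtype=int)
        s[0] = s0
        for t in range(1, T):
            s[t] = rng.choice(P.shape[0], p=P[s[t - 1]])
        return s

    def simulate_ms_level(mu, sigma, P_mu, P_sigma, T=280):
        """Level z_t implied by regime-dependent drift mu and volatility sigma."""
        s_mu, s_sig = draw_chain(P_mu, T), draw_chain(P_sigma, T)
        z = np.zeros(T)
        for t in range(1, T):
            z[t] = mu[s_mu[t]] + z[t - 1] + sigma[s_sig[t]] * rng.standard_normal()
        return z

    # Illustrative values close to the TFP estimates below (persistent regimes)
    mu_z    = np.array([0.69, 1.89])
    sigma_z = np.array([2.56, 3.89])
    P_drift = np.array([[0.986, 0.014], [0.017, 0.983]])
    P_vol   = np.array([[0.980, 0.020], [0.020, 0.980]])
    z_path  = simulate_ms_level(mu_z, sigma_z, P_drift, P_vol)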

  • Data

    [Figure: log TFP and log IST, 1950-2010 (index, 1947:Q1 = 100).]

  • Table: Evidence for TFP Regimes from Fernald

    Period       Mean   Std Dev
    1947-1973    2.00   3.68
    1974-1995    0.60   3.44
    1996-2004    1.99   2.66
    2005-2017    0.39   2.35

  • Estimated (Filtered) Regimes

    [Figure: filtered regime probabilities (0 to 1), roughly 1960-2010. Panels: Low TFP Growth, Low TFP Volatility, Low IST Growth, Low IST Volatility.]

  • Table: Parameter Estimates (standard errors in parentheses)

                  mu_j(L)   mu_j(H)   sig_j(L)  sig_j(H)  P^{mu_j}_LL  P^{mu_j}_HH  P^{sig_j}_LL  P^{sig_j}_HH
    TFP (j = z)   0.6872    1.8928    2.5591    3.8949    0.9855       0.9833       0.9803        0.9796
                 (0.6823)  (0.6476)  (0.2805)  (0.3744)  (0.0335)     (0.0295)     (0.0215)      (0.0232)
    IST (j = u)   0.4128    2.0612    1.1782    3.9969    0.9948       0.9839       0.9627        0.9026
                 (0.1112)  (0.1694)  (0.0745)  (0.4091)  (0.0049)     (0.0162)     (0.0197)      (0.0463)

  • This Paper

    • Provide a General Methodology for Perturbing MS Models where Agents Infer the Regime by Bayesian Updating
      • Agents are Fully Rational
      • Extension of Work on Full Information MS Models (Foerster, Rubio, Waggoner & Zha)

    • Key Issue: How do we Define the Steady State? What Point do we Approximate Around?

    • Joint Approximations to Both the Learning Process and the Decision Rules
      • Second- or Higher-Order Approximations, Often Not Considered in the Learning Literature

    • Use an RBC Model as a Laboratory


  • The Model

      \tilde{E}_0 \sum_{t=0}^{\infty} \beta^t \left[ \log c_t + \xi \log(1 - l_t) \right]

      c_t + x_t = y_t
      y_t = \exp(z_t)\, k_{t-1}^{\alpha}\, l_t^{1-\alpha}
      k_t = (1 - \delta)\, k_{t-1} + \exp(u_t)\, x_t
      z_t = \mu_z(s_t^{\mu_z}) + z_{t-1} + \sigma_z(s_t^{\sigma_z})\,\varepsilon_{z,t}
      u_t = \mu_u(s_t^{\mu_u}) + u_{t-1} + \sigma_u(s_t^{\sigma_u})\,\varepsilon_{u,t}
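    For reference, the first-order conditions this planner problem implies (our derivation; the slides do not reproduce them) are the intratemporal labor condition and the Euler equation for capital, with exp(u_t) acting as the relative price of investment and expectations taken under the agents' beliefs:

      \frac{\xi c_t}{1 - l_t} = (1 - \alpha)\,\frac{y_t}{l_t}

      \frac{1}{c_t \exp(u_t)} = \beta\, \tilde{E}_t \left[ \frac{1}{c_{t+1}} \left( \alpha \frac{y_{t+1}}{k_t} + \frac{1 - \delta}{\exp(u_{t+1})} \right) \right]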


  • Bayesian Learning

      \tilde{y}_t = \tilde{\lambda}_{s_t}(x_{t-1}, \varepsilon_t) = [\,u_t \;\; z_t\,]'
      \varepsilon_t = \lambda_{s_t}(\tilde{y}_t, x_{t-1})

      \psi_{i,t} =
        \frac{ \overbrace{J_{s_t=i}(y_t, x_{t-1})\, \phi_\varepsilon\big(\lambda_{s_t=i}(y_t, x_{t-1})\big)}^{\text{likelihood}}\;
               \overbrace{\sum_{s=1}^{n_s} p_{s,i}\, \psi_{s,t-1}}^{\text{prior}} }
             { \sum_{j=1}^{n_s} J_{s_t=j}(y_t, x_{t-1})\, \phi_\varepsilon\big(\lambda_{s_t=j}(y_t, x_{t-1})\big) \sum_{s=1}^{n_s} p_{s,j}\, \psi_{s,t-1} }
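    A minimal sketch (our code, not the authors') of this updating step for the scalar TFP growth case, where the Jacobian and the mapping lambda reduce to evaluating the normal density of the observed growth rate under each regime; all parameter names and numbers are ours.

    import numpy as np
    from scipy.stats import norm

    def update_beliefs(psi_prev, dz, mu, sigma, P):
        """One Bayesian update of regime beliefs.
        psi_prev: beliefs at t-1; P[i, j] = Pr(s_t = j | s_{t-1} = i)."""
        prior = P.T @ psi_prev                      # sum_s p_{s,i} * psi_{s,t-1}
        like = norm.pdf(dz, loc=mu, scale=sigma)    # likelihood of dz in each regime
        post = like * prior
        return post / post.sum()                    # normalization (the denominator above)

    # Example: beliefs after observing one quarter of 2 percent TFP growth
    psi = update_beliefs(np.array([0.5, 0.5]), dz=2.0,
                         mu=np.array([0.69, 1.89]), sigma=np.array([2.56, 3.89]),
                         P=np.array([[0.986, 0.014], [0.017, 0.983]]))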


  • Bayesian Learning

    • To Keep Probabilities between 0 and 1, Define

      \eta_{i,t} = \log\!\left( \frac{\psi_{i,t}}{1 - \psi_{i,t}} \right)

    • Conditional Expectations Weight Each Regime Transition by the Current Beliefs:

      \tilde{E}_t f(\dots) = \sum_{s=1}^{n_s} \sum_{s'=1}^{n_s} \frac{p_{s,s'}}{1 + \exp(-\eta_{s,t})} \int \tilde{f}(\dots)\, \phi_\varepsilon(\varepsilon')\, d\varepsilon'
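    A numerical sketch (ours) of this expectation operator: beliefs enter through the logistic transform of eta, and the integral over the shock is approximated with Gauss-Hermite quadrature; the function f and all numbers are illustrative.

    import numpy as np

    def belief_weighted_expectation(f, eta, P, n_nodes=15):
        """E_t f: weight each (s, s') pair by p[s, s'] * 1/(1+exp(-eta_s)) and
        integrate f(s', eps) over a standard normal shock."""
        nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
        weights = weights / np.sqrt(2.0 * np.pi)    # normalize to N(0, 1) weights
        psi = 1.0 / (1.0 + np.exp(-eta))            # current beliefs
        total = 0.0
        for s_next in range(P.shape[1]):
            integral = sum(w * f(s_next, e) for e, w in zip(nodes, weights))
            for s in range(P.shape[0]):
                total += P[s, s_next] * psi[s] * integral
        return total

    # Example: expected next-period TFP growth under 70/30 beliefs
    mu  = np.array([0.69, 1.89])
    eta = np.log(np.array([0.7, 0.3]) / np.array([0.3, 0.7]))
    P   = np.array([[0.986, 0.014], [0.017, 0.983]])
    Ef  = belief_weighted_expectation(lambda s, e: mu[s] + 2.56 * e, eta, P)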

  • Issues When Applying Perturbation to MS Models

    • What Point Should we Approximate Around?
    • What Markov-Switching Parameters Should be Perturbed?
    • Best Understood in an Example: Let's Assume we are Only Interested in Approximating a (Stationary) TFP Process

  • Equilibrium Conditions

    • For the TFP-only example, the equilibrium conditions are

      \begin{pmatrix}
        \mu(s_t) + \sigma(s_t)\,\varepsilon_t - z_t \\[6pt]
        \log \dfrac{ \frac{1}{\sigma(1)}\,\phi_\varepsilon\!\left(\frac{z_t - \mu(1)}{\sigma(1)}\right)
                     \left( \frac{p_{1,1}}{1+\exp(-\eta_{1,t-1})} + \frac{p_{2,1}}{1+\exp(-\eta_{2,t-1})} \right) }
                   { \frac{1}{\sigma(2)}\,\phi_\varepsilon\!\left(\frac{z_t - \mu(2)}{\sigma(2)}\right)
                     \left( \frac{p_{1,2}}{1+\exp(-\eta_{1,t-1})} + \frac{p_{2,2}}{1+\exp(-\eta_{2,t-1})} \right) } - \eta_{1,t} \\[6pt]
        \log \dfrac{ \frac{1}{\sigma(2)}\,\phi_\varepsilon\!\left(\frac{z_t - \mu(2)}{\sigma(2)}\right)
                     \left( \frac{p_{1,2}}{1+\exp(-\eta_{1,t-1})} + \frac{p_{2,2}}{1+\exp(-\eta_{2,t-1})} \right) }
                   { \frac{1}{\sigma(1)}\,\phi_\varepsilon\!\left(\frac{z_t - \mu(1)}{\sigma(1)}\right)
                     \left( \frac{p_{1,1}}{1+\exp(-\eta_{1,t-1})} + \frac{p_{2,1}}{1+\exp(-\eta_{2,t-1})} \right) } - \eta_{2,t}
      \end{pmatrix} = 0
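    To make the system concrete, a sketch (our code) that evaluates the three residuals for given values of z_t, eta_{1,t}, eta_{2,t}, the realized regime and shock, and the lagged beliefs; the normal density and all names are ours. Along an equilibrium path the residuals are zero.

    import numpy as np
    from scipy.stats import norm

    def equilibrium_residuals(z_t, eta_t, eta_lag, s_t, eps_t, mu, sigma, P):
        """eta_t, eta_lag: log-odds beliefs at t and t-1 (length 2);
        mu, sigma: regime-dependent parameters; s_t in {0, 1} is the realized regime."""
        psi_lag = 1.0 / (1.0 + np.exp(-eta_lag))      # beliefs implied by eta_{t-1}
        prior = P.T @ psi_lag                         # p_{1,i} psi_1 + p_{2,i} psi_2
        dens = norm.pdf(z_t, loc=mu, scale=sigma)     # (1/sigma_i) phi((z_t - mu_i)/sigma_i)
        post = dens * prior
        r1 = mu[s_t] + sigma[s_t] * eps_t - z_t       # law of motion under the realized regime
        r2 = np.log(post[0] / post[1]) - eta_t[0]     # belief updating in log-odds, regime 1
        r3 = np.log(post[1] / post[0]) - eta_t[1]     # belief updating in log-odds, regime 2
        return np.array([r1, r2, r3])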

  • Steady State

    • The Steady State of TFP is Independent of σ_z
      • (In the RBC Model We Rescale all Variables to Make Everything Stationary)
      • The First Equation Would Also Appear in a Full Information Version of the Model

    • So Under Full Information, Perturbing σ_z is not Necessary and Leads to a Loss of Information: the Partition Principle of Foerster, Rubio, Waggoner & Zha

    • What About Learning?

  • Steady State

    • Steady State with Naive Perturbation

      \begin{pmatrix}
        \bar{\mu} - z_{ss} \\[6pt]
        \log \dfrac{ \frac{1}{\sigma}\,\phi_\varepsilon\!\left(\frac{z_{ss}-\mu}{\sigma}\right)
                     \left( \frac{p_{1,1}}{1+\exp(-\eta_{1,ss})} + \frac{p_{2,1}}{1+\exp(-\eta_{2,ss})} \right) }
                   { \frac{1}{\sigma}\,\phi_\varepsilon\!\left(\frac{z_{ss}-\mu}{\sigma}\right)
                     \left( \frac{p_{1,2}}{1+\exp(-\eta_{1,ss})} + \frac{p_{2,2}}{1+\exp(-\eta_{2,ss})} \right) } - \eta_{1,ss} \\[6pt]
        \log \dfrac{ \frac{1}{\sigma}\,\phi_\varepsilon\!\left(\frac{z_{ss}-\mu}{\sigma}\right)
                     \left( \frac{p_{1,2}}{1+\exp(-\eta_{1,ss})} + \frac{p_{2,2}}{1+\exp(-\eta_{2,ss})} \right) }
                   { \frac{1}{\sigma}\,\phi_\varepsilon\!\left(\frac{z_{ss}-\mu}{\sigma}\right)
                     \left( \frac{p_{1,1}}{1+\exp(-\eta_{1,ss})} + \frac{p_{2,1}}{1+\exp(-\eta_{2,ss})} \right) } - \eta_{2,ss}
      \end{pmatrix} = 0

    • η_{2,ss} = η_{1,ss} if the Probability of Staying in a Regime is the Same Across Regimes
    • In General, η_{j,ss} = f(P) for j = 1, 2 (a numerical sketch follows below)
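    Since the common density (1/σ) φ_ε((z_ss − µ)/σ) cancels from the numerator and denominator, the steady-state beliefs reduce to the ergodic distribution of P expressed in log-odds. A small sketch (our code) of this fixed point:

    import numpy as np

    def naive_steady_state_eta(P, n_iter=1000):
        """Iterate psi = P' psi to the ergodic regime distribution, then
        return eta_{j,ss} = log(psi_j / (1 - psi_j))."""
        psi = np.full(P.shape[0], 1.0 / P.shape[0])   # start from uniform beliefs
        for _ in range(n_iter):
            psi = P.T @ psi
        return np.log(psi / (1.0 - psi))

    # With equal staying probabilities, eta_{1,ss} = eta_{2,ss} = 0, as noted above
    print(naive_steady_state_eta(np.array([[0.95, 0.05], [0.05, 0.95]])))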


  • Steady State

    • Steady State with the Partition Principle

      \begin{pmatrix}
        \bar{\mu} - z_{ss} \\[6pt]
        \log \dfrac{ \frac{1}{\sigma(1)}\,\phi_\varepsilon\!\left(\frac{z_{ss}-\mu}{\sigma(1)}\right)
                     \left( \frac{p_{1,1}}{1+\exp(-\eta_{1,ss})} + \frac{p_{2,1}}{1+\exp(-\eta_{2,ss})} \right) }
                   { \frac{1}{\sigma(2)}\,\phi_\varepsilon\!\left(\frac{z_{ss}-\mu}{\sigma(2)}\right)
                     \left( \frac{p_{1,2}}{1+\exp(-\eta_{1,ss})} + \frac{p_{2,2}}{1+\exp(-\eta_{2,ss})} \right) } - \eta_{1,ss} \\[6pt]
        \log \dfrac{ \frac{1}{\sigma(2)}\,\phi_\varepsilon\!\left(\frac{z_{ss}-\mu}{\sigma(2)}\right)
                     \left( \frac{p_{1,2}}{1+\exp(-\eta_{1,ss})} + \frac{p_{2,2}}{1+\exp(-\eta_{2,ss})} \right) }
                   { \frac{1}{\sigma(1)}\,\phi_\varepsilon\!\left(\frac{z_{ss}-\mu}{\sigma(1)}\right)
                     \left( \frac{p_{1,1}}{1+\exp(-\eta_{1,ss})} + \frac{p_{2,1}}{1+\exp(-\eta_{2,ss})} \right) } - \eta_{2,ss}
      \end{pmatrix} = 0

    • η_{2,ss} ≠ η_{1,ss}, but only because of σ_z and P: the steady-state model probabilities do not account for differences in the means of the regimes
    • Note that we can generally solve for the steady state of those variables that appear in the full information version of the model independently of the model probabilities


  • Partition Principle Refinement

      \begin{pmatrix}
        \bar{\mu} - z_{ss} \\[6pt]
        \log \dfrac{ \frac{1}{\sigma(1)}\,\phi_\varepsilon\!\left(\frac{z_{ss}-\mu(1)}{\sigma(1)}\right)
                     \left( \frac{p_{1,1}}{1+\exp(-\eta_{1,ss})} + \frac{p_{2,1}}{1+\exp(-\eta_{2,ss})} \right) }
                   { \frac{1}{\sigma(2)}\,\phi_\varepsilon\!\left(\frac{z_{ss}-\mu(2)}{\sigma(2)}\right)
                     \left( \frac{p_{1,2}}{1+\exp(-\eta_{1,ss})} + \frac{p_{2,2}}{1+\exp(-\eta_{2,ss})} \right) } - \eta_{1,ss} \\[6pt]
        \log \dfrac{ \frac{1}{\sigma(2)}\,\phi_\varepsilon\!\left(\frac{z_{ss}-\mu(2)}{\sigma(2)}\right)
                     \left( \frac{p_{1,2}}{1+\exp(-\eta_{1,ss})} + \frac{p_{2,2}}{1+\exp(-\eta_{2,ss})} \right) }
                   { \frac{1}{\sigma(1)}\,\phi_\varepsilon\!\left(\frac{z_{ss}-\mu(1)}{\sigma(1)}\right)
                     \left( \frac{p_{1,1}}{1+\exp(-\eta_{1,ss})} + \frac{p_{2,1}}{1+\exp(-\eta_{2,ss})} \right) } - \eta_{2,ss}
      \end{pmatrix} = 0

    • We Find Model Probabilities That are Consistent With The Full Information Steady State Under the Partition Principle, but Take Into Account All Differences Between the Parameters of the Regimes (a numerical sketch follows below)
    • Then we Apply the Methods from Foerster, Rubio, Waggoner & Zha
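    One way to compute these refined steady-state beliefs numerically (our sketch, not the authors' code) is to iterate the belief-updating map at the full-information steady state z_ss, keeping each regime's own mu(i) and sigma(i) in the densities, until the log-odds settle at a fixed point; the value of z_ss used below is a hypothetical choice for illustration.

    import numpy as np
    from scipy.stats import norm

    def refined_eta_ss(z_ss, mu, sigma, P, n_iter=5000):
        """Fixed point of the updating equations at z_ss with regime-dependent mu, sigma."""
        eta = np.zeros(2)
        dens = norm.pdf(z_ss, loc=mu, scale=sigma)    # (1/sigma_i) phi((z_ss - mu_i)/sigma_i)
        for _ in range(n_iter):
            psi = 1.0 / (1.0 + np.exp(-eta))          # current beliefs
            post = dens * (P.T @ psi)                 # likelihood times prior, per regime
            eta = np.log(post / post[::-1])           # eta_i = log(post_i / post_j), two regimes
        return eta

    # Illustration with the TFP drift estimates and z_ss set to their simple average
    eta_ss = refined_eta_ss(z_ss=1.29,
                            mu=np.array([0.69, 1.89]), sigma=np.array([2.56, 3.89]),
                            P=np.array([[0.986, 0.014], [0.017, 0.983]]))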

  • Back To The RBC Model

    Table: Accuracy Check of the Simple RBC Model

    Method                         MSE of Beliefs   Euler Eqn Error
    Partition Principle
      First Order                  0.3989           -3.0298
      Second Order                 0.3753           -2.4253
      Third Order                  0.3770           -2.2715
    Refinement
      First Order                  0.3200           -4.3101
      Second Order                 0.0511           -3.4995
      Third Order                  0.0519           -3.6722
    Policy Function Iteration      0                -4.3042

    • Why is First Order Doing so Well in Terms of Euler Equation Errors? Expectations are Computed Using Different Model Probabilities for Each Order.

  • [Figure: simulated paths under Learning vs. Full Info, horizon 0-40. Panels: IST Growth, beliefs about the low IST regimes, Capital Growth, Output Growth, Consumption Growth, Investment Growth, Labor (% Dev from SS).]

  • [Figure: distributions of simulated variables under Learning vs. Full Info. Panels: Output (Growth), Consumption (Growth), Investment (Growth), Labor (Level), Output (Level), Capital (Level), Consumption (Level), Investment (Level).]

  • Table: Economic Effects of Learning

                                      Full Info            Learning             Pct Difference
                                      Mean      Std Dev    Mean      Std Dev    Mean     Std Dev
    Growth Variables
      Output (400·Δ log y)            2.2630    4.3014     2.2630    3.5405     0        -17.69
      Consumption (400·Δ log c)       2.2630    2.8258     2.2630    3.3344     0         17.99
      Investment (400·Δ log x)        2.2630   13.6136     2.2630    6.2904     0        -53.79
    Detrended Variables
      Output (ỹ)                      0.8689    0.0204     0.8789    0.0203     1.15      -0.49
      Consumption (c̃)                 0.6703    0.0249     0.6746    0.0231     0.64      -7.23
      Investment (x̃)                  0.1986    0.0131     0.2037    0.0094     2.57     -28.24
      Labor (l)                       0.3305    0.0047     0.3316    0.0036     0.33     -23.40
      Capital (k̃)                     6.1430    0.5151     6.3156    0.5138     2.81      -0.25

  • Conclusions

    • A Framework for the Nonlinear Approximation of Limited Information Rational Expectations Models
    • The Example Provides a Lower Bound for the Importance of Learning: No Feedback Effects (Think About an Endogenous Variable Multiplying a Regime-Dependent Coefficient)
    • Opens up Avenues to Think About Multiple Equilibria in Learning Models
    • Could be Extended to Allow for Dispersed Beliefs Across Agents
