Kryzanowski Critique 1
BRATTLE GROUP’S REPORT, “REVIEW OF REGULATORY COST OF CAPITAL METHODOLOGIES”
DATED SEPTEMBER 2010 PREPARED FOR CANADIAN TRANSPORTATION AGENCY (CTA)
CRITIQUE BY
Dr. Lawrence Kryzanowski
Submitted on behalf of:
The Coalition of Rail Shippers Group ("CRS")
March 31, 2011
CRITIQUE OF BRATTLE GROUP’S REPORT, “REVIEW OF REGULATORY COST OF CAPITAL METHODOLOGIES”, DATED SEPTEMBER 2010
TABLE OF CONTENTS
1. INTRODUCTION
   1.1 Qualifications
2. EVALUATION OF THE VARIOUS POTENTIAL METHODOLOGIES TO DETERMINE COST OF EQUITY (COE)
   2.1 The Use of Estimates from Different COE Estimation Methods
   2.2 Criteria for Assessing Alternative Methods for Estimating the Cost of Equity
   2.3 Efficiency of Capital Markets
   2.4 Equity Risk Premium Method
      2.4.1 Use of a “CAPM” Type of ERP
         2.4.1.1 Market Equity Risk Premium (“MRP”)
         2.4.1.2 Beta Risk Measure of Relative Risk
            2.4.1.2.1 Use of Value Line Betas
            2.4.1.2.2 Non-standard or So-called Adjusted Betas
            2.4.1.2.3 ECAPM-adjusted Betas
            2.4.1.2.4 The Roll and CAPM Critiques
            2.4.1.2.5 International CAPM
         2.4.1.3 Total Risk Measure of Relative Risk
         2.4.1.4 Risk-free Forecast
   2.5 Discounted Cash Flow (DCF) Model
      2.5.1 Biases in the Forecasts of Analysts
      2.5.2 Validity of Using a Complex Weighted-average Implied Rate of Return for an Annually Updated Cost of Equity
   2.6 Investment Manager and Economist Estimates of Available Prospective Equity Market Returns
   2.7 Comparable Earnings Method
   2.8 ATWACC-like Approaches
   2.9 Price-to-Book-Value (P-to-BV) Ratios
      2.9.1 Interpretation of Price to Book Value Ratios and Market-based Evidence on P-to-BV Ratios for Regulated Entities
3. OTHER ISSUES
CRITIQUE OF BRATTLE GROUP’S REPORT, “REVIEW OF REGULATORY COST
OF CAPITAL METHODOLOGIES”, DATED SEPTEMBER 2010
1. INTRODUCTION
1.1 Qualifications
The Canadian Transportation Agency (“the Agency”) is in the process of reviewing its
methodology for computing the cost of capital (“COC”) that it applies to Canadian National
Railway and Canadian Pacific Railway. The Agency sets the cost of capital for the
transportation of western grain each crop year, which begins each August, as well as for the
development of interswitching costs and rates and other regulatory purposes.
Cost of capital is defined by the Agency as the total return on net investment that is required by
shareholders and debt holders so that debt costs can be paid and equity investors can be provided
with an adequate return on investment consistent with the risks assumed for the period under
consideration.
The Agency has engaged an international consulting firm (“The Brattle Group”) to advise it on
the methodology it uses to set the COC rates for CN and CPR and to suggest possible changes in
the methodology. The Coalition of Rail Shippers (CRS) is an affiliation of shipping industry
associations, formed in 2005, whose members account for over 80% of the
revenues of CN and CPR. The CRS provides input to government on matters affecting Canadian
rail freight transportation. The CRS has engaged Dr. Lawrence Kryzanowski, of Concordia
University, Montreal, an internationally renowned financial expert, to assist it in this matter.
This report is the work of Dr. Lawrence Kryzanowski who is currently a Full Professor of
Finance and Senior Concordia University Research Chair in Finance (previously Ned Goodman
Chair in Investment Finance) at Concordia University. He earned his Ph.D. in Finance at the
University of British Columbia.
Dr. Kryzanowski has experience in preparing evidence as an expert witness in utility rate of
return applications; court proceedings dealing with price distortion due to financial
misrepresentation; trader violations and insider trading; and confidential final offer arbitration
proceedings and hearings into disputes pertaining to railway rates and services in the movement
of various products by rail. More broadly, Dr. Kryzanowski often provides technical expertise
and advice on financial policy. Among his public-sector consulting clients are the Superintendent
of Financial Institutions, the Federal Department of Finance, Canada Investment and Savings,
Canada Mortgage and Housing Corporation, and Canada Deposit Insurance Corporation. A brief
curriculum vitae is provided in a separate Appendix.
The purpose of this critique is to provide an examination of issues discussed in the Brattle Group
Report (henceforth, the BG Report), entitled “Review of Regulatory Cost of Capital
Methodologies”, dated September 2010, which was prepared by the Brattle Group at the request
of the Agency to assist the Agency in its review of the cost of capital methodology which it
employs in the conduct of its various regulatory activities under the Canada Transportation Act
and related legislation. The discussion should not be taken as complete for any specific
issue, and the omission of an issue should not be interpreted as the author of this Critique
agreeing with its treatment in the BG Report.
Before proceeding to an examination of specific issues discussed in the BG Report, it should be
noted that the BG Report:
• essentially ignores the relevant published literature that deals with the various issues from
a Canadian perspective;
• contains quoted literature, most of which is dated; and
• is unbalanced in its discussion of the various issues.
This critique attempts to deal with each of these overarching deficiencies.
2. EVALUATION OF THE VARIOUS POTENTIAL METHODOLOGIES TO
DETERMINE COST OF EQUITY (COE)
2.1 The Use of Estimates from Different COE Estimation Methods
Many methods are used by experts to generate an estimate of the expected Cost of Equity or
COE for a regulated entity or to benchmark that estimate. The usefulness of these methods
varies significantly with their accuracy. Adding Cost of Equity (“COE”) estimates derived
from inferior estimation methods to those from superior estimation methods, as advocated in the
BG Report (page 5), only increases estimation error and potential bias. It does little to produce
a more accurate (fair) estimate of the cost of equity.
Generally, the Equity Risk Premium (“ERP”) method is more precise and therefore superior to
other methods of ROE estimation discussed in the BG Report, such as the DCF method
(especially when bias/overconfidence/etc. is not removed). Thus, some of the other estimation
methods (such as the DCF method at the market level and the survey expectations of market
professionals) are used to provide information for testing (benchmarking) the COE estimates
obtained from the superior ERP method. The fairness of past decisions by the Agency can also
be assessed by using standard metrics for measuring risk-adjusted portfolio return performance,
which is an area that is well developed in finance theory and practice.
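For illustration only, the following sketch computes one such standard metric, the ex post Sharpe ratio (mean excess return per unit of total risk). The return series and risk-free rate below are hypothetical, not the Agency's data or any party's evidence.

```python
# Hypothetical illustration of a standard risk-adjusted performance metric
# (the ex post Sharpe ratio); the return series below is invented.

def sharpe_ratio(returns, risk_free_rate):
    """Mean excess return per unit of total risk (sample standard deviation)."""
    n = len(returns)
    excess = [r - risk_free_rate for r in returns]
    mean_excess = sum(excess) / n
    variance = sum((e - mean_excess) ** 2 for e in excess) / (n - 1)
    return mean_excess / variance ** 0.5

# Invented annual realized returns for an allowed-return portfolio.
realized = [0.09, 0.11, 0.07, 0.12, 0.08]
print(round(sharpe_ratio(realized, 0.04), 3))  # → 2.604
```

A higher ratio indicates more realized return per unit of risk borne; comparing such metrics across decisions and periods is one way past allowed returns could be benchmarked.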
The Agency currently allows for the choice of relying on one or more of the following valuation
models in its determination of the cost of common equity for a regulated entity; namely, the
Equity Risk Premium (ERP#1) model that it associates with the Capital Asset Pricing Model
(CAPM), the family of Equity Risk Premium (ERP#2) models that are associated with
benchmarks for less and more risky assets that differ from government T-bills or bonds and the
market proxy, respectively; and (iii) the family of Discounted Cash Flow (DCF) models that can
have one or multiple stages. Currently, the Agency can (and does) exercise its judgment when
determining the model(s) that it will rely on when determining the annual cost of equity for the
entities it regulates. The BG Report does not make a convincing case why the Agency should
not retain this flexibility.
2.2 Criteria for Assessing Alternative Methods for Estimating the Cost of Equity
The BG Report (page 14) specifies three primary criteria when assessing the merits of the
alternative methods that have been used to estimate the cost of equity in regulatory proceedings.
Specifically:
“a. Reasonable
1. Be consistent with the objective being pursued – namely to provide regulated
railroads with a fair and reasonable return;
2. Be transparent by relying as much as possible on a formula/structured
methodology and by minimizing the use of judgmental factors;
b. Reliable
3. Be based upon auditable information;
4. Produce consistent results for like conditions;
5. Be robust, and reasonably sensitive, to a broad range of economic/financial
conditions;
c. Pragmatic
6. Be based upon readily available information or information that can be
obtained with minimal costs;
7. Be simple to implement for both the regulator and regulated parties; and
8. Recognize the regulatory context and legislative requirements in which the
Agency is exercising its responsibilities (timeframe for issuing decisions, nature
of regulated parties, context in which the cost of capital is being applied).”
While these criteria have merit, they can lead to the adoption of COE estimation methods that
have little or no scientific merit. The four Daubert criteria that have been adopted by federal
and many state courts in the U.S. can be used as a guide for evaluating the admissibility
(scientific merit) of the various estimation methods. They are: (1) whether the methods are
centered upon a testable hypothesis; (2) the known or potential rate of error associated with
the method; (3) whether the method has been subject to peer review and publication; and (4)
whether the method is generally accepted in the relevant scientific community, particularly in
terms of the non-judicial uses to which the scientific techniques are put.1
2.3 Efficiency of Capital Markets
There are three efficiency notions that have importance in the determination of the cost of equity.
The first is informational efficiency, which the BG Report discusses on page 3. The BG Report
discusses the three definitions of information in Fama’s 1970 paper, and not the revised
definitions in his 1991 paper.2 Numerous papers relate increases in informational efficiency
to the cost of equity. The BG Report is silent on the impact of improvements in the
informational efficiency (including information disclosure) of markets (such as the TSX) over
time on share liquidity and the cost of equity.
A second notion is allocational efficiency, which measures whether capital markets provide
comparable risk-adjusted returns across investments, so that higher returns are earned on
investments with greater risk than on investments with lower risk. The allocational efficiency of
capital markets is generally tested ex post by examining whether or not there is an upward
sloping relationship between realized returns and risk. Such tests make the assumption that ex
post measures of mean returns and risk based on realized returns are good proxies of ex ante
measures of expected returns and risk. Stated differently, the assumption is that realized returns
are an unbiased estimate of expected returns (i.e. investors, on average, realize their
expectations). Such an assumption is crucial in tests and applications of valuation models, such
as the CAPM and DCF models, where such tests/implementations are based on the validity of
joint hypotheses; namely, that the model is correct, and that markets are allocationally efficient.
If markets are assumed only to be consistent with allocational efficiency, then any rejection of
the tests of the valuation model can be interpreted as a model failure. If the model is assumed to
be correct, any rejection of the tests can be interpreted as being consistent with allocational
1 For a more extensive discussion of this U.S. Supreme Court decision, see, for example: Stephen Mahle, The Impact of Daubert v. Merrell Dow Pharmaceuticals, Inc., on Expert Testimony: With Applications to Securities Litigation, April 1999. Available at: http://www.daubertexpert.com/basics_daubert-v-merrell-dow.html.
2 Eugene F. Fama, 1970. Efficient capital markets: A review of theory and empirical work, The Journal of Finance 25 (May), 383-417. Eugene F. Fama, 1991. Efficient capital markets: II, The Journal of Finance 46 (December), 1575-1617.
inefficiency. The BG Report (pages 113-116) ignores the joint hypotheses being tested when it
evaluates old (essentially unconditional) tests of the CAPM. The BG Report implicitly assumes
that the markets were allocationally efficient during each of the periods examined so that any
deviation in the estimated intercept from the test of the unconditional CAPM must be due to
model failure.
An examination of the evidence on the allocational efficiency of markets
strongly suggests that equity investors realized a higher return than they expected and
government bond investors realized a lower return than they expected over the long periods that
experts commonly used to implement the “CAPM” version of the ERP method to estimate the
cost of equity. This implies that an estimate of the ERP using historical data provides an inflated
estimate of the magnitude of the going-forward ERP. Furthermore, the early findings from these
tests of the unconditional CAPM, which assume that betas and risk premia do not change, are
not used by cost of equity experts to make forward-looking cost of equity recommendations
using the ERP Method. Most experts use a conditional, CAPM-like implementation when they
estimate the COE.
Four of the many studies that lead to the conclusion that the market was not allocationally
efficient over long periods of time are now briefly reviewed. The main conclusion of Fama and
French (Working Paper, 2001; Journal of Finance, 2002) is that the stock returns and realized
equity risk premia of the last half-century are substantially higher than what was expected by investors ex
ante.3 Thus, Fama and French believe that their models are correct, and that their empirical
evidence indicates that markets are allocationally inefficient based on realized returns over the
last half century. Claus and Thomas in the Journal of Finance find that ex ante MRP estimates
are close to three percent rather than the eight percent MRP that has been reported based on the
realized return data from Ibbotson Associates.4
3 Eugene F. Fama and Kenneth R. French, 2001, The equity premium, Working Paper no. 522, The Center for Research in Security Prices, Graduate School of Business, University of Chicago, April. Eugene F. Fama and Kenneth R. French, 2002. The Equity Premium, Journal of Finance 57(2), pages 637-659.
4 J. Claus and J. Thomas, 2001. Equity premia as low as three percent? Evidence from analysts’ earnings forecasts for domestic and international stock markets, The Journal of Finance, pages 1629–1666.
The 2002 paper by Arnott and Bernstein in the Financial Analysts Journal finds that the realized
MRP over the last 75 years in the U.S. is overstated due to various accidents. Specifically,
equity and bond investors obtained returns higher and lower than what they expected,
respectively, due to a series of favourable accidents for equity holders and one major
unfavourable accident (a prolonged period of unanticipated inflation) for bondholders.5
Similarly, Madsen and Dzhumashev (2009) argue that “high historical excess returns to equity
were the result of a severe ex post bias in the period from 1915 to ca 1960 because inflation
surprises during this period drove a wedge between ex ante and ex post returns to bonds”.6 After
adjusting the ex post equity premium by the ex post bias, they obtain an arithmetic mean MRP of
3.3-4.4% over the past 132 years for the OECD countries.
Thus, the logical conclusion is that the empirical findings from the tests of the Capital Asset
Pricing Model (“CAPM”) are to a large extent due to allocational inefficiency, and using the
higher long Canada yield as the risk-free proxy should more than account for any problem with
the accuracy of the model used. The evidence therefore does not support improper
implementations of the “CAPM” version of the ERP Method that purport to adjust for the
evidence on the differences between the ex ante and ex post risk-return relationships found in
U.S. and Canadian financial asset markets. The proper adjustment is not to adjust the betas
upward by adding a “fudge” factor via the E-CAPM, as the BG Report suggests (see discussion
below), but to adjust the MRP estimates from realized returns downward.
This confusion in the BG Report most likely emanates from a depiction of the zero-beta version
of the CAPM (so-called Black version) where in Figure A.2 (page 106) the zero-beta portfolio is
assumed to be the minimum variance portfolio on the efficient frontier. In contrast, it is well-
known that the zero-beta portfolio plots on the inefficient portion of the efficient frontier, and
that the return on the zero-beta portfolio exceeds the risk-free rate if a risk-free rate existed.
5 Robert D. Arnott and Peter L. Bernstein, 2002, What risk premium is “normal”?, Financial Analysts Journal 58:2 (March/April), pages 64-85.
6 Jakob B. Madsen and Ratbek Dzhumashev, 2009, The equity premium puzzle and the ex post bias, Applied Financial Economics 19:2, pages 157-174.
Stambaugh (1982), amongst others, finds strong support for the zero-beta form of the CAPM
used by most experts, and he finds evidence against the standard form of the CAPM.7
A third notion is operational efficiency, which measures whether market participants can execute
transactions and receive services at a price that fairly reflects the actual costs required in their
provision. An operationally efficient market allows investors to make transactions that facilitate
the allocational and informational efficiencies of markets. Since investors are interested in the
relationship between net returns (i.e., after removing the loss due to frictions such as taxes and
trade costs) and risk, decreases in the prices of these services over time decrease the gross returns
(and ERP) that investors require to bear the same amount of risk, all else held equal. The BG
Report is silent on the impact of the decline in, for example, trade costs (including reduction in
bid-ask spreads and commissions) over time on the cost of capital.
As noted by Jones (2001), trade costs drive a wedge between gross equity returns and net equity
returns. His analysis shows that the average cost to buy or sell stocks has dropped from over 1%
of value as late as 1975 (i.e., before the deregulation of brokerage fees) to under 0.18% today.
He concludes that, while trade costs account for a small part of the observed equity premium, the
gross equity premium is perhaps 1% lower today than it was early in the 1900s.8
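The wedge Jones describes can be sketched arithmetically. The target net return below, and the simplifying assumption of one round-trip trade per year (so that annualized cost equals the per-trade cost), are hypothetical illustrations, not figures from Jones or the BG Report.

```python
# Hypothetical sketch: if investors price assets off net-of-cost returns, a
# fall in trading costs lowers the gross return (and hence the gross equity
# premium) required to deliver the same net return. Assumes, purely for
# illustration, one round-trip trade per year.

def required_gross_return(net_return, annual_trade_cost):
    """Gross return needed to deliver `net_return` after trading frictions."""
    return net_return + annual_trade_cost

net_target = 0.06        # assumed required net return (hypothetical)
cost_1975 = 0.010        # ~1% of value per trade (pre-deregulation)
cost_today = 0.0018      # ~0.18% of value per trade (Jones, 2001)

drop = (required_gross_return(net_target, cost_1975)
        - required_gross_return(net_target, cost_today))
print(round(drop, 4))  # → 0.0082
```

Under these stylized assumptions, the required gross return falls by the full decline in trade costs, all else held equal, which is the direction of Jones's roughly one-percentage-point conclusion.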
2.4 Equity Risk Premium Method
The BG Report (Table 3, page 103) clearly shows that the ERP models are the models of choice
for the regulators considered in the study. The DCF models are a distant second, and the
Comparable Earnings model ranks last, with only one regulator partially using it.
The ERP models estimate the cost of equity capital for regulated entities with respect to other
publicly traded (marked-to-market) investment opportunities that are available to investors. It is
7 Robert F. Stambaugh, 1982, On the exclusion of assets from tests of the two-parameter model: A sensitivity analysis, Journal of Financial Economics 10:3 (November), pages 237-268.
8 Charles M. Jones, 2001, A century of stock market liquidity and trading costs, working paper presented at an asset pricing workshop, Summer Institute, National Bureau of Economic Research, July 19-20.
an attempt to find the risk-adjusted “opportunity cost” for investing in the shares of the regulated
entities. This cost is based on the gross rate of return required by equity investors (i.e. the rate of
return required by equity investors before trade costs and taxes). Thus, an adjustment is needed
if these frictions have the expected differential effects on the net returns earned by equity and
bond investors.
2.4.1 Use of a “CAPM” Type of ERP
The implementation of the traditional ERP model to obtain the cost of equity for a regulated
entity or its average-risk counterpart uses three inputs: input #1, a forward-looking risk premium
for the S&P/TSX Composite (the diversified market proxy); input #2, a forward-looking
forecast of the investment riskiness of the regulated entity or its average-risk counterpart relative
to the market portfolio as proxied by the S&P/TSX Composite, or relative to other Canadian
industries or a typical firm in a representative market proxy; and input #3, the normalized yield
forecast for 30-year Canada bonds or another low-risk or risk-free proxy. The three input estimates
are then combined as follows: [(Input #1) x (Input #2)] + (Input #3) = recommended cost of equity or
COE for the regulated entity or its average-risk counterpart. Canadian regulators then commonly
add a further adjustment to cover fees involved with potential equity offerings or issues by the
regulated entity or its average-risk counterpart and to ensure its financial flexibility and integrity.
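The combination just described can be expressed as a short sketch. Every input value below is hypothetical and chosen only to show the mechanics; none is an estimate endorsed by this Critique or attributed to any expert.

```python
# Sketch of the ERP ("CAPM") combination described above:
# COE = (MRP x beta) + risk-free yield, plus any flotation/flexibility
# adjustment. All numeric inputs are invented for illustration.

def erp_cost_of_equity(mrp, beta, risk_free, flotation_adj=0.0):
    """[(Input #1) x (Input #2)] + (Input #3), plus a common regulatory adder."""
    return mrp * beta + risk_free + flotation_adj

mrp = 0.05          # Input #1: forward-looking market risk premium (hypothetical)
beta = 0.65         # Input #2: relative-risk forecast for the entity (hypothetical)
long_canada = 0.04  # Input #3: normalized 30-year Canada bond yield (hypothetical)

coe = erp_cost_of_equity(mrp, beta, long_canada, flotation_adj=0.005)
print(round(coe, 4))  # → 0.0775
```

Note that the result is linear in each input, so an upward bias in the beta or MRP estimate passes through one-for-one (scaled by the other factor) into the allowed return.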
2.4.1.1 Market Equity Risk Premium (“MRP”)
Because the forward-looking or ex ante or expected risk premium is difficult to observe, cost of
equity studies typically place a heavy weight on measurement of historical or ex post or realized
risk premiums. This approach tends to use long periods of time based on the belief that the
difference between averages of realized and expected risk premiums should diminish as the
measurement period gets longer. This, in turn, is based on the validity of two contentious
assumptions. The first contentious assumption that is not addressed in the BG Report is that the
underlying return distribution is normal and remains unchanged over this longer measurement
period. This is commonly referred to as returns being IID normal, or independently and
identically and normally distributed, in that they have the same normal distribution at each point
in time and returns are independent over time. The second contentious assumption, which as
discussed earlier is dismissed in the BG Report, is that investors in different asset classes, such
as equities and government bonds, will not earn returns that differ from their expectations over
such longer periods of time. In other words, markets are assumed to be allocationally efficient
over long periods of time. In fact, returns are not IID: stock returns are mean reverting and bond
returns are mean averting, so that the relative risk of equity compared to bonds decreases with
longer investor holding periods, and what investors earn on stocks and bonds can differ from
what they expected even when performance is measured over long periods of time. Thus, both
of these issues need to be accounted for when arriving at a recommended cost of equity estimate.
The BG Report (page 23) states that it has “become common in many regulatory settings to
implement a long-term version of the model using a long-term government bond yield as the
risk-free rate and a MRP relative to the long-term bond yields” when implementing the “CAPM”
version of the ERP model. One of the major problems with this approach is that the experts
who advocate it argue that the best forward-looking predictor of market returns or the market
ERP is an arithmetic mean (an equally weighted average of past realizations), not a geometric
(unequally weighted) mean. However, a long-term bond yield is a geometric type of average yield
if the bond is held to maturity.
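The arithmetic/geometric distinction can be illustrated with an invented return series; a buy-and-hold bond yield compounds, which is why it behaves like a geometric average and is not directly comparable to an arithmetic mean of equity returns.

```python
# Illustration of the arithmetic vs. geometric mean distinction raised
# above, using an invented series of annual returns.

def arithmetic_mean(returns):
    return sum(returns) / len(returns)

def geometric_mean(returns):
    product = 1.0
    for r in returns:
        product *= 1.0 + r
    return product ** (1.0 / len(returns)) - 1.0

returns = [0.20, -0.10, 0.15, 0.05]
am = arithmetic_mean(returns)
gm = geometric_mean(returns)
# The arithmetic mean always weakly exceeds the geometric mean, with the
# gap growing in the volatility of the series.
print(round(am, 4), round(gm, 4))  # → 0.075 0.0686
```

Mixing an arithmetic-mean equity forecast with a geometric-type bond yield therefore embeds an internal inconsistency of roughly half the return variance.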
The BG Report (page 23) attempts to provide a justification for such a procedure by providing a
reference to a passage in the Morningstar publication, “Ibbotson SBBI 2010 Valuation
Yearbook,” which refers to the use of bond yields in calculating equity risk premiums. This is not
how Morningstar calculates its two equity risk premium series, which it provides on a monthly
updated basis in its ‘Stocks, Bonds, Bills, Inflation; Ibbotson Associates’ module for its EnCorr
software used by investment professionals. Specifically, the U.S. equity risk premiums versus
bonds and versus T-bills use total returns for bonds and T-bills, not bond yields.
2.4.1.2 Beta Risk Measure of Relative Risk
If the market only rewards investors for bearing non-diversifiable risk (the most commonly
accepted view), the relative non-diversifiable risk or beta of the average-risk utility relative to the
market proxy needs to be estimated because investments in the securities of individual firms
(such as stocks in specific utilities) are not by themselves well-diversified portfolios. Thus,
applications of the ERP Method in regulatory hearings use standard or non-standard betas, where
the non-standard betas are standard betas adjusted to reflect their supposedly mean-reverting
behavior towards the beta of the market, the beta of a relevant sample, or a “normal” beta for the
regulated entity.
2.4.1.2.1 Use of Value Line Betas
Some experts obtain their beta estimates from the Value Line Investment Survey (BG Report, page
41), which uses arbitrarily chosen weights of 37.1% and 63.5% for the market beta of one and
the standard beta, respectively, to obtain its non-standard beta. Value Line also derives its standard beta from
a 5-year regression between weekly percentage changes in the New York Stock Exchange
Composite Average and the weekly percentage changes in the price of the stock with no
adjustment for dividends. As such, the Value Line beta is a measure of the sensitivity of price
changes for an entity to price level changes of the market proxy, and is not a measure of the
sensitivity of the total returns for a utility to the changes in the total returns of a market proxy.
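In sketch form, the weighted-average adjustment described above looks as follows. The weights are those the Critique attributes to Value Line; the raw beta is invented for illustration.

```python
# Sketch of the weighted-average ("Blume-type") beta adjustment described
# above. The weights are as attributed to Value Line in the text; the raw
# beta below is hypothetical.

def adjusted_beta(raw_beta, w_market=0.371, w_raw=0.635):
    """Non-standard beta: weighted combination of the market beta (1.0)
    and the standard (raw) regression beta."""
    return w_market * 1.0 + w_raw * raw_beta

raw = 0.60  # hypothetical standard beta for a regulated entity
print(round(adjusted_beta(raw), 3))  # → 0.752
```

For any raw beta below one, such as those typically reported for regulated entities, this adjustment mechanically pulls the estimate upward toward one, which is the behavior the Critique disputes in the next subsection.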
2.4.1.2.2 Non-standard or So-called Adjusted Betas
If the beta estimate is for the average beta of a sample of regulated entities (i.e. for a generic or
“average” regulated entity), then no adjustment is needed since the average beta estimate will be
the same as the mean beta of the sample. If the beta estimate is for a specific regulated entity and
there is evidence that its beta is mean reverting to the sample of regulated entities, then any
adjustment for the mean-reverting behavior of the regulated entity is towards the mean beta for a
sample of such regulated utilities and not one. A cursory examination of the various beta plots in
the BG Report (e.g., Figure 4, page 37; Figure 5, pages 38-39) does not instill confidence that
the various plotted series are mean reverting, nor does it indicate the speed of any such mean
reversion, if it exists. Furthermore, a cursory examination of the rolling-window betas for both CN and CP
suggests that there is little evidence that their betas, which are generally below one, exhibit mean
reversion to the market beta of one.
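A rolling-window beta of the kind plotted in the BG Report can be sketched as follows; the return series below are invented, not CN or CP data, and the window is deliberately short for illustration.

```python
# Sketch of a rolling-window beta estimate: beta = cov(stock, market) /
# var(market), recomputed over a moving window. Both return series are
# invented for illustration.

def beta(stock, market):
    n = len(stock)
    ms, mm = sum(stock) / n, sum(market) / n
    cov = sum((s - ms) * (m - mm) for s, m in zip(stock, market)) / (n - 1)
    var = sum((m - mm) ** 2 for m in market) / (n - 1)
    return cov / var

def rolling_betas(stock, market, window):
    return [beta(stock[i:i + window], market[i:i + window])
            for i in range(len(stock) - window + 1)]

market = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04, 0.00, 0.02]
stock  = [0.01, -0.005, 0.02, 0.005, -0.01, 0.025, 0.0, 0.012]
print([round(b, 2) for b in rolling_betas(stock, market, 5)])
```

If the rolling estimates hover well below one with no upward drift, as the series above do by construction, a plot of them provides no support for the adjust-toward-one assumption.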
Two of the rationales that are often given for using the adjust-beta-to-one method when
calculating the cost of equity are not valid. If interest rate risk is a return determinant, then the
correct approach is to use a two-factor asset pricing model where the beta of each factor is
estimated, and the risk premium of each factor is measured over the long Canada yield.
Furthermore, over the long run, we would expect the average return on long Canadas to be equal
to the yield on long Canadas (the proxy for the risk-free rate in rate of return settings). In turn,
this implies that the ERP for the long Canada factor would be approximately zero over long
periods of time if realizations approximate expectations.
The second equally fallacious rationale is based on empirical studies (the 1971 and 1975 non-
regulated-entity-specific studies for the U.S.) that provide evidence that betas for all the firms in
the U.S. market, on average, revert to the market average of one over time.9 Whether this results
still applies or whether it applies to specific stratified samples of firms is debatable and untested.
Furthermore, there appears to be no similar tests for the Canadian market.
The third rationale for adjusting betas is the need to incorporate statistical estimation errors in the
beta forecasts. However, the Vasicek-shrinkage method, and not the Blume-type of adjustment
used by Bloomberg and Value Line, should be used to incorporate statistical estimation errors
into the beta estimate.10 The Vasicek-shrinkage method is a weighted average of the mean beta
of the sample and the standard beta of the specific regulated entity (or sample thereof), where the
weights are based on their relative estimation errors. However, for the so-called average-risk
regulated entity, the “member” of the sample and the sample itself are one and the same. Thus,
there is no need to make any adjustment.
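A hedged sketch of the Vasicek shrinkage just described follows; all numbers are invented. The key contrasts with the Blume-type adjustment are that the firm beta is shrunk toward the sample mean beta (not toward 1.0), and that an estimate with no sampling error is left unadjusted.

```python
# Sketch of Vasicek-type shrinkage toward the *sample* mean beta, with
# weights based on relative estimation errors. All inputs are hypothetical.

def vasicek_beta(firm_beta, firm_se, sample_mean_beta, sample_beta_sd):
    """Weighted average of the sample mean beta and the firm's standard
    beta; noisier firm estimates (larger firm_se) are shrunk more heavily."""
    w_prior = firm_se ** 2 / (firm_se ** 2 + sample_beta_sd ** 2)
    return w_prior * sample_mean_beta + (1.0 - w_prior) * firm_beta

# Hypothetical railway: raw beta 0.55 with standard error 0.20, from a
# sample of regulated entities with mean beta 0.65 and cross-sectional
# standard deviation 0.10.
print(round(vasicek_beta(0.55, 0.20, 0.65, 0.10), 3))  # → 0.63

# The "average-risk" entity (the sample mean itself, no estimation error)
# is left unchanged -- no adjustment is needed.
print(vasicek_beta(0.65, 0.0, 0.65, 0.10))  # → 0.65
```

The second call illustrates the point in the text: when the entity and the sample coincide, the shrinkage weight on the prior contributes nothing new and the estimate is unchanged.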
The published literature on the use of non-standard or adjusted betas for regulated entities, for
example, deals with the benchmark to which the betas should be adjusted towards, and whether
commercially available non-standard betas have predictive accuracy.
9 For example, see: M.E. Blume, 1975. Betas and their regression tendencies, Journal of Finance 30 (June), pages 785-796. Dr. Blume obtained his results by running a regression of the beta estimates obtained over the period 1955-1961 against the beta estimates obtained over the period 1948-1954 for common shares traded on the NYSE.
10 The BG Report (pages 40-41) seems to imply incorrectly that the average beta used in the Vasicek-type beta adjustment is the market beta of one.
Harrington (1983) shows that the betas that are supplied by commercial vendors that use various
types of adjustments have little predictive accuracy.11 Her conclusion is based on a comparison
of the actual beta forecasts supplied by a number of commercial investment vendors (such as
Value Line) with their corresponding benchmark estimates for four forecast horizons. Chambers
and Wood (1985) state on page 23 of their paper that “the measurement of risk to equity holders
is an important concept in public utility rate of return regulation” and that the purpose behind
adjusting the standard beta estimate is “to reflect the tendency of betas to drift towards some
“normal” level”.12 Based on their Figure 1, they report that the standard beta estimates for the
electric utility industry in the United States fall into “two distinct periods divided at
approximately 1950” with mean betas usually greater than 1.0 prior to and during World War II
and mean betas tending to range between 0.5 and 0.8 subsequent to World War II. Based on
various statistical tests, they conclude that:
“The empirical results reject the hypothesis that electric utility betas have the same beta
distributions as other firms” (page 29);
“… an adjustment procedure based on the behavior of a market sample is inappropriate
for application in the case of electric utility betas” (page 29); and finally,
“The following example will be used for the concluding discussion. Consider an electric
utility with an unadjusted beta of 0.5. The risk-free rate is 10%, the risk premium on the
market is 10%, and the adjusted beta is 0.7. As discussed above, the beta is adjusted from
0.5 to 0.7 based upon the expectation that betas tend toward 1.0 through time. The
primary conclusion of this study is that the beta should not be adjusted upwards, since
electric utilities do not tend towards 1.0 but rather towards an industry average well
below 1.0.
In the above example, the adjustment procedure inappropriately adds 0.2 to the
unadjusted beta of 0.5.”
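The quoted Chambers and Wood example can be worked through numerically, using only the figures given in the quotation (rf = 10%, MRP = 10%, unadjusted beta 0.5, adjusted beta 0.7):

```python
# The Chambers and Wood (1985) example worked through: the inappropriate
# 0.2 beta increment translates directly into the allowed return.

def capm_coe(risk_free, beta, mrp):
    """Cost of equity under the standard CAPM/ERP combination."""
    return risk_free + beta * mrp

rf, mrp = 0.10, 0.10
coe_unadjusted = capm_coe(rf, 0.5, mrp)  # 0.15
coe_adjusted = capm_coe(rf, 0.7, mrp)    # 0.17
# The adjust-toward-1.0 procedure inflates the allowed return by the full
# 0.2 beta increment times the MRP.
print(round(coe_adjusted - coe_unadjusted, 4))  # → 0.02, i.e. 2 percentage points
```

With these inputs, the inappropriate adjustment adds two full percentage points to the allowed cost of equity, which illustrates why the choice of beta-adjustment benchmark is economically material.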
11 D.R. Harrington, 1983, Whose beta is best?, Financial Analysts Journal (July-August), pages 67-73.
12 Donald R. Chambers and Robert A. Wood, 1985, The use of adjusted betas in public utility regulation, Review of Business and Economic Research 20:2 (Spring), pages 23-33.
Kryzanowski Critique 14
Kryzanowski and Jalilvand (1986) test the relative accuracy of six beta predictors for a sample of
fifty U.S. utilities from 1969-1979.13 They find that the best predictors differ only in that they
use different weighted combinations of the average beta of their sample of utilities, and that, not
unexpectedly, the worst predictor is to use a beta of one or the so-called “long-term tendency of
betas towards 1.00”. Gombola and Kahl (1990) advocate no adjustment for an average-risk
utility. After examining the time-series processes of U.S. utility betas, they conclude the
following on pages 91-92 for the required adjustment for a single utility:14
“The results of this study, however, indicate that 1.0 is too high an underlying mean for most
utilities. Instead, they should be adjusted toward a value that is less than one.”
Based on an examination of dynamic betas estimated using the Kalman filter approach, He and
Kryzanowski (2008) find that the trend beta (i.e., the stable part of the beta) has been 0.5 or less
since the late 1990s, that dynamic betas significantly increase the explanatory power of the
market model (particularly for the utilities sector), and that time-variation (temporary deviations)
in the betas is the most important source of variation in the market model for the Canadian
utilities sector.15 They also find that the U.S. market does not make a statistically significant
contribution to explaining the portion of the return of Canadian utilities that is not explained by
the Canadian market.
Thus, the Agency needs to revisit its 2004 decision that beta should be adjusted for a mean-
reverting tendency (BG Report, page 72).16
2.4.1.2.3 ECAPM-adjusted Betas
13 L. Kryzanowski and A. Jalilvand, 1986, Statistical tests of the accuracy of alternative forecasts: Some results for U.S. utility betas, The Financial Review, pages 319-335.
14 Michael J. Gombola and Douglas R. Kahl, 1990, Time-series processes of utility betas: Implications for forecasting systematic risk, Financial Management, pages 84-93. Available at: http://www.oeb.gov.on.ca/documents/cases/EB-2006-0088/gambola-kahlf_131006.pdf.
15 Zhongzhi He and Lawrence Kryzanowski, Dynamic betas for Canadian sector portfolios, International Review of Financial Analysis 17: 5 (December 2008), pages 1110-1122.
16 2004 Decision, page 5.
The ECAPM is promoted by some experts engaged by regulated entities as a “fix” for the finding
that some tests of the unconditional Sharpe-Lintner-Mossin version of the CAPM estimate a
relationship that is flatter than predicted (in the BG Report, for example, see pages 43-45). As
discussed above, this interpretation is based on the fallacious
conclusion that the underlying problem is a model failure and not a market efficiency failure
caused by a systematic divergence between realized and expected returns. Based on the
unsupported assumption that these old findings are totally accounted for by model failure, betas
less than one are adjusted upwards and betas greater than one are adjusted downwards. Since the
betas of regulated entities are generally below one (at least in a Canadian context), the ECAPM
adjustment of betas leads to higher betas and, in turn, higher recommended COEs.
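A minimal sketch of the mechanics described above. The “alpha” parameter (the assumed flattening of the empirical SML) and all other inputs are hypothetical placeholders, not values advocated by any party:

```python
# Sketch of the ECAPM adjustment described above, with hypothetical inputs.

def capm(rf, beta, mrp):
    return rf + beta * mrp

def ecapm(rf, beta, mrp, alpha):
    # Raising the intercept and flattening the slope is equivalent to
    # pushing betas below one toward one.
    return rf + alpha + beta * (mrp - alpha)

rf, mrp, beta, alpha = 0.04, 0.06, 0.6, 0.02   # hypothetical inputs

plain = capm(rf, beta, mrp)             # 7.6%
adjusted = ecapm(rf, beta, mrp, alpha)  # 8.4% -- higher because beta < 1
```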
Canadian regulatory commissions, boards or régies have reached similar conclusions on the
validity of using the Empirical CAPM or ECAPM. For example, in its decision for Hydro
Quebec Distribution, the Régie de l’énergie found insufficient support for the use of the ECAPM
by Dr. Morin. It also reaffirmed its earlier decision against the use of adjusted betas, and
indicated that it did not support estimates obtained using the comparable earnings method or the
DCF for individual firms.17 Unlike the BG Report, which concludes that “even implementations
that utilize a long-run risk-free interest rate require a further, albeit smaller, adjustment to match
the empirical SML” (fn. 77, page 43), the Alberta Energy and Utilities Board (EUB) in its 2004
Generic Cost of Capital Decision concluded that:18
“The Board notes Calgary/CAPP’s argument that applying CAPM using long-term
interest rates (long-Canada bond yields) in determining the risk-free rate, as was done by
all experts in this Proceeding, already corrects for the alleged under-estimation that
ECAPM was designed to address. Calgary/CAPP argued that the under estimation would
only be present if the CAPM were applied using short-term interest rates, which none of
the experts did in this Proceeding.”
17 Régie de l’énergie du Québec, Décision, Demande relative à la détermination du coût du service du Distributeur et à la modification des tarifs d’électricité, phase I, D-2003-93, R-3492-2002, 21 mai 2003, pages 71-73.
18 EUB Decision 2004-052 (July 2, 2004), page 22.
In its 2009 Generic Cost of Capital Decision, the Alberta Utilities Commission or AUC (page
19) did not include the ECAPM in the list of factors that it examined in determining the ROE for
2009.19 The AUC concluded that (page 69):
“The Commission is persuaded by the empirical analysis of Drs. Kryzanowski and
Roberts that there is insufficient evidence to support the use of adjusted betas for
Canadian utilities if the purpose of the adjustment is to adjust the beta towards one and
therefore, beta should not be adjusted towards one. Therefore, the Commission rejects
Mr. Coyne’s beta results as unreasonably high, because he adjusted his beta estimates on
the assumption that they would revert to 1.00. In other words, his analysis assumes that,
in time, utilities would be as risky as the market as a whole.”
2.4.1.2.4 The Roll and CAPM Critiques
The BG Report (pages 110-111) discusses an old critique by Roll (1977) dealing with the
CAPM20 but ignores a recent paper by Levy and Roll that provides support for the CAPM. In
their study, Levy and Roll show that a typical market proxy can be made efficient using
parameters that are very close to the sample parameters (i.e., well within estimation error
bounds). As a result, empirically measured return parameters (such as beta) and the market
portfolio weights are perfectly consistent with the CAPM using a typical proxy. They conclude
that (pages 2486-7):21
“From a practical perspective, because sample betas are quite close to betas that
have been adjusted to render the market proxy mean/variance efficient, improved
estimates of expected returns can be obtained from sample betas alone.”
19 Exhibit 976205_1632942, Written evidence of Michael J. Vilbert for Nova Gas Transmission Limited, pages 32-34.
20 Richard A. Roll, 1977, A critique of the asset pricing theory’s tests, Journal of Financial Economics 4, pages 129-176.
21 Moshe Levy and Richard A. Roll, 2010, The market portfolio may be mean/variance efficient after all, The Review of Financial Studies 23: 6, pages 2464-2491.
The BG Report also ignores other research. Using a Bayesian model comparison, which trades off
goodness of fit against model complexity, Ammann and Verhofen (2007) find that the best to
worst performing models are the conditional CAPM, conditional three-factor model, the
unconditional CAPM, and the unconditional three-factor model.22
Levy (2010) examines the theoretical and empirical criticisms of the CAPM and concludes (page
68):23
“It is interesting to discover that the M-V investment rule and the CAPM are robust under wide possible frameworks, even when the empirical distributions of rates of return are not Normal, let alone when Normality prevails. These are very encouraging results: We therefore conclude that one can safely continue to use the CAPM in academic research and in practice, as it cannot be refuted, and is even strongly supported experimentally with ex ante parameters. The difficult issue is, of course, how to estimate the ex ante parameters; however, this cannot be counted as a disadvantage of the CAPM, because virtually all theoretical models encounter this issue.”
Of course, ex ante parameters are available for both the risk-free and market proxies from survey
data, and the market beta is by definition equal to one.
Levy (2008) reports that the Fama and French or FF (2004) static (unconditional) CAPM results
using ex post data as a proxy for ex ante expectations are not robust.24 He shows that the FF
results depend on the sub-period selected for such testing. For example, Fama and French (2004)
report a negative relationship between average return and beta for the period 1963–2003 based
on ten portfolios ordered by their book to market ratios. Using the same data, Levy (2008)
reports that these two variables exhibit a strong positive relationship for the sub-period 1927-
1962, and an even stronger positive relationship (with 84% explanatory power) for the whole
period of 1927–2007. The longest period is often used by COE experts to obtain a COE estimate
based on the ERP method.
22 M. Ammann and M. Verhofen, 2007. Testing conditional asset pricing models using a Markov chain Monte Carlo approach, European Financial Management 14, pages 391–418.
23 Haim Levy, 2010. The CAPM is alive and well: A review and synthesis, European Financial Management 16: 1, pages 43–71.
24 M. Levy, 2008. CAPM risk-return tests and the length of the sampling period, Working Paper, Hebrew University. Drawn from the discussion in Levy (2010). See previous footnote.
2.4.1.2.5 International CAPM
The BG Report (pages 45-46) poses a number of questions related to the use of an
international CAPM to obtain an estimate of the cost of equity for a regulated Canadian entity.
Kryzanowski and He (2007) conclude that their general result does not apply to Canadian
utilities since the foreign beta is not statistically significant.25 Their general result is that the
Canadian cost of equity should be estimated in an integrated market rather than a segmented
market, and higher importance should be given to estimating the dynamics of betas for the
Canadian sectors.
He and Kryzanowski (2007) compare their MRP estimates to those of Booth (2001) as follows:26
“Our annualized estimates of equity risk premia are 4.32% and 5.88% for Canada and the
U.S., respectively. Our estimates are slightly higher than those estimated by Booth
(2001), who reports a 3.29% equity premium for Canada, and a 5.61% equity premium
for the U.S. over the period of 1957–2000. Two reasons contribute to the different
estimates: (1) We use a longer sample period from 1956 to 2005, in which the Canadian
and U.S. markets realized lower average returns than those in Booth’s sample; (2) Booth
determines the equity premium with respect to the long-term bond yields (8.04% for
Canada and 7.32% for the U.S.), which are significantly higher than the short-term
Treasury bill yields (6.24% for Canada and 5.16% for the U.S.) used in our study.”
Interestingly, the U.S. Federal Energy Regulatory Commission in January of 2009 refused to
include TransCanada in the proxy group it used to evaluate U.S. equity returns. The reason given
by FERC was that Canadian pipelines are subject to “a significantly different regulatory structure
that renders [them] less comparable to domestic pipelines regulated by the Commission.”27
25 Zhongzhi He and Lawrence Kryzanowski, 2007, Cost of equity for Canadian and U.S. sectors, North American Journal of Economics and Finance 18, pages 215–229.
26 Booth, L., 2001. Equity risk premiums in the US and Canada, Canadian Investment Review 14(3), pages 34–43.
27 AUC Generic 2009 Proceeding, Exhibit 0292.04.CAPP-85, Revised Evidence of Dr. Safir, lines 21-25, page 14.
2.4.1.3 Total Risk Measure of Relative Risk
To test the reasonableness of the beta estimate in an extreme setting, some experts invoke the
highly unlikely assumption that investors are compensated for total risk (standard deviation of
return), including the part that they can diversify away by holding portfolios that contain two or
more financial assets. In other words, they examine what the relative risk of the regulated
entity is if investors require additional compensation for bearing nearly all of the risk that they
can diversify away.
In such examinations, one should not compare the standard deviations of a single entity to that of
a market portfolio because it is well known that total risk declines as the number of entities in the
portfolio increases.28 Thus, fair assessments compare the cross-sectional mean and median
standard deviations for the sample of regulated entities to their corresponding means and
medians for firms that are in the market proxy.
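The diversification point can be illustrated with the textbook formula for an equal-weight portfolio: its variance is the average variance divided by n plus (1 − 1/n) times the average pairwise covariance, so total risk falls toward the average covariance as n grows. The volatility and correlation figures below are illustrative, not estimates from the cited studies:

```python
# Sketch of why a single entity's standard deviation should not be compared to
# a portfolio's: equal-weight portfolio variance is
#   sigma_p^2 = sigma2 / n + (1 - 1/n) * cov,
# which declines toward the average covariance as n grows. Inputs are illustrative.
import math

def equal_weight_portfolio_std(n, sigma2, cov):
    return math.sqrt(sigma2 / n + (1 - 1 / n) * cov)

sigma2 = 0.30 ** 2          # 30% stand-alone volatility (hypothetical)
cov = 0.30 ** 2 * 0.3       # implied by a 0.3 pairwise correlation (hypothetical)

for n in (1, 10, 100):
    print(n, round(equal_weight_portfolio_std(n, sigma2, cov), 4))
# Portfolio risk falls from 30% toward sqrt(cov), about 16.4%, as n grows.
```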
2.4.1.4 Risk-free Forecast
Of all the methods described by the BG Report for obtaining a risk-free forecast (pages 19-20),
the most prevalent in Canada is to first obtain a forecast for 10-year Canadas from Consensus
Economics or another source, and then add an estimate of the maturity premium as proxied by
the average spread of 30-year Canadas over 10-year Canadas. The BG Report provides no
empirical evidence, or references to such evidence, that these survey forecasts of 10-year
Canadas are more reliable than the forecasts of equity market returns.
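The prevalent construction just described can be sketched in two steps; all figures below are hypothetical placeholders, not Consensus Economics data:

```python
# Sketch of the prevalent Canadian risk-free construction described above:
# a consensus 10-year forecast plus a maturity-premium proxy taken from the
# average historical 30-year-over-10-year spread. All inputs are hypothetical.

def long_canada_forecast(forecast_10y, spreads_30_minus_10):
    maturity_premium = sum(spreads_30_minus_10) / len(spreads_30_minus_10)
    return forecast_10y + maturity_premium

forecast_10y = 0.035                     # hypothetical consensus 10-year forecast
spreads = [0.004, 0.005, 0.006, 0.005]   # hypothetical 30-year minus 10-year spreads

rf_30y = long_canada_forecast(forecast_10y, spreads)   # 3.5% + 0.5% = 4.0%
```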
While the BG Report (fn. 20, page 19) implies that the forecasts for the Canadian 3-month and
10-year government bond rates are based on a survey of more than 250 economists monthly,
28 For recent studies that examine diversification effects of bonds and Canadian equities, see: Wassim Dbouk and Lawrence Kryzanowski, 2009. Diversification benefits for bond portfolios, European Journal of Finance 15: 5&6 (July-September), pages 533-553; and Lawrence Kryzanowski and Shishir Singh, 2010. Should minimum portfolio sizes be prescribed for achieving sufficiently well-diversified equity portfolios?, Frontiers in Finance and Economics 7: 2 (October), pages 1-37. Taken together, these papers show that the benefits are achieved more quickly for equities than bonds, and that considerable diversifiable risk still remains for portfolios that contain many bonds.
such is not the case. For example, the consensus forecasts in the February 14, 2011 issue of
Consensus Forecasts are based on 16 responses.
2.5 Discounted Cash Flow (DCF) Model
The DCF model is discussed on pages 49-54 of the BG Report. While the BG Report (page 49)
states that the DCF model takes “its point of departure from the Security Market Line”, this is
not possible since the DCF model predates the Security Market Line.
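The simplest member of the DCF family discussed below, the constant-growth Gordon model, backs the implied cost of equity out of the observed price as r = D1/P0 + g. The inputs below are hypothetical:

```python
# Sketch of the constant-growth Gordon DDM, rearranged to back out the implied
# cost of equity: r = D1 / P0 + g. All inputs are hypothetical.

def gordon_implied_coe(next_dividend, price, growth):
    return next_dividend / price + growth

def gordon_price(next_dividend, coe, growth):
    return next_dividend / (coe - growth)

r = gordon_implied_coe(next_dividend=2.0, price=40.0, growth=0.04)  # 9.0%

# Consistency check: plugging r back in recovers the observed price.
assert abs(gordon_price(2.0, r, 0.04) - 40.0) < 1e-9
```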
The DCF model has been used (especially in the more distant past) in Canada and is used
extensively in the United States for regulated entities. This continued use of the DCF model in
the United States is paradoxical given the importance of the standalone principle and the findings
reported by Drs. Graham and Harvey (2001, 2002) based on a large sample of U.S. corporations
that the:29
“Capital Asset Pricing Model (CAPM) was by far the most popular method of
estimating the cost of equity capital: 73.5% of respondents always or almost
always used it. The second and third most popular methods were average stock
returns and a multi-factor CAPM, respectively. Few firms used a dividend
discount model to back out the cost of equity.”
Since it is reasonable to assume that these corporations have managements whose competence is
comparable to that of a regulated entity, the implication of these findings is that few of the
managements of the regulated entities would estimate their cost of equity using the DCF model if
they were not regulated.
The BG Report discusses only a few of the DCF models used to obtain the implied cost of
equity; namely, the constant-growth Dividend Discount Model or DDM of Gordon (page 49) and
the multi-stage DDMs (pages 49-50).
29 John Graham and Campbell Harvey, 2002, How do CFOs make capital budgeting and capital structure decisions?, Journal of Applied Corporate Finance 15:1 (Spring), page 12. This article was a practitioner version of the following paper that won the Jensen prize for the best JFE paper in corporate finance in 2001: John Graham and Campbell Harvey, 2001, The theory and practice of corporate finance: Evidence from the field, Journal of Financial Economics 60.
Other implied cost of equity capital models (e.g., Gode
and Mohanram, 2003; and Easton, 2004) are fundamentally based on the residual income
valuation (RIV) model in Ohlson (1995) and Ohlson and Juettner-Nauroth (2005) where one-
year-ahead forecasted earnings and dividends per share and both the two-year ahead expected
growth rate and long-term growth rate in earnings per share (eps) determine the market value of
equity.30 Other implied cost of equity estimators (e.g., Claus and Thomas, 2001; Gebhardt et al.,
2001; Easton et al., 2002) are based on the RIV model which assumes a clean surplus relation
(i.e., future book values of equity can be imputed from current book values of equity, and
expected earnings and dividends).31
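A stylized sketch of how an implied cost of equity is backed out of a residual income valuation, in the spirit of the Gebhardt et al. class of models: price equals book value plus the present value of forecast residual incomes, with the final residual income treated as a flat perpetuity. The horizon, forecasts, and terminal-value treatment below are simplifying assumptions for illustration, not any particular published specification:

```python
# Stylized residual-income sketch: price = book value + PV of residual incomes,
# with the final residual income as a flat perpetuity (a simplification).
# Clean surplus: book grows by retained earnings. All inputs are hypothetical.

def riv_price(r, book0, forecast_eps, payout):
    book, pv = book0, 0.0
    for t, eps in enumerate(forecast_eps, start=1):
        resid = eps - r * book                 # residual income for year t
        pv += resid / (1 + r) ** t
        book += eps * (1 - payout)             # clean-surplus book value update
    pv += resid / (r * (1 + r) ** len(forecast_eps))  # terminal perpetuity
    return book0 + pv

def implied_coe(price, book0, forecast_eps, payout, lo=0.01, hi=0.50):
    for _ in range(100):                       # bisection on r
        mid = (lo + hi) / 2
        if riv_price(mid, book0, forecast_eps, payout) > price:
            lo = mid                           # model price too high: raise r
        else:
            hi = mid
    return (lo + hi) / 2

r = implied_coe(price=30.0, book0=20.0, forecast_eps=[2.2, 2.5, 2.8], payout=0.5)
```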
Kryzanowski and Rahman (2009) summarize the literature on the performance of these models
as follows:
“However, as asserted by Easton and Sommers (2007), ‘‘there is very little evidence
regarding the empirical validity of these models.” The conclusion from recent studies that
examine the validity of firm-specific estimates of expected rates of return that are derived
from these reverse-engineering exercises (e.g., Botosan and Plumlee, 2005; Guay et al.,
2005; and Easton and Monahan, 2005) is that these estimates are poor. The key problem
identified by Easton and Sommers (2007) is that the difference between the expected
market rate of return and analysts’ expectations may be correlated with an omitted
variable and hence, represent a systematic bias. This potential upward bias may arise
from optimistic earnings forecasts of analysts.
30 P. Easton, 2004. PE ratios, PEG ratios, and estimating the implied expected rate of return on equity capital, The Accounting Review, pages 73–95. D. Gode and P. Mohanram, 2003. Inferring the cost of capital using the Ohlson–Juettner model, Review of Accounting Studies 8, pages 339–431. J. Ohlson, 1995. Earnings, book values, and dividends in security valuation, Contemporary Accounting Research 11, pages 661–687. J. Ohlson and B. Juettner-Nauroth, 2005. Expected EPS and EPS growth as determinants of value, Review of Accounting Studies 10, pages 349–365.
31 J. Claus and J. Thomas, 2001. Equity premia as low as a three percent? Evidence from analysts’ earnings forecasts for domestic and international stock markets, The Journal of Finance, pages 1629–1666. P. Easton, G. Taylor, P. Shroff and T. Sougiannis, 2002. Using forecasts of earnings to simultaneously estimate growth and the rate of return on equity investment, Journal of Accounting Research 40, pages 657–676. W. Gebhardt, C. Lee and B. Swaminathan, 2001. Toward an implied cost of capital, Journal of Accounting Research 39, pages 135–176.
Other potential disadvantages of the DCF model make it unreliable for estimating the cost of
equity or the risk premium on equity, particularly for individual companies. These include
circularity when the sample consists of regulated entities, two-way causality where the earnings
forecasts of analysts Granger-cause allowed ROEs and allowed ROEs Granger-cause the
earnings forecasts of analysts, and the evidence that free lunches are reaped from equity
investments in regulated entities in that the realized returns on lower risk utilities exceed the
realized returns on the higher risk market portfolio. Therefore, a proper application of the DCF
model would need to address each of these implementation difficulties, including the corrections
made by investors for forecast and overconfidence biases of analysts.
Summarizing, the reliability of using the DCF model, especially on a prospective basis, for
individual regulated entities using analyst estimates of some of the required inputs is
questionable, and is not improved by using (correlated) inputs for samples of regulated entities
that are covered by essentially the same group of analysts. Thus, the DCF Method should be
more reliable when it is applied to the market proxy using prospective inputs that do not contain
the forecast and overconfidence biases of bottom-up analysts.
There is also an inconsistency in terms of using the prospective implied cost of equity obtained
from a traditional DCF model and that obtained from a supply-side model. The implied cost of
equity from both types of models is of the geometric mean type (i.e., an unequally weighted
average). While experts do not convert the implied cost of equity to its arithmetic mean
counterpart when the implied cost of equity is obtained using the traditional DCF model, they do
when they obtain the implied cost of equity from a supply-side model. If the conversion to an
arithmetic mean was done for the traditional DCF model, the estimates would generally be too
high to be taken seriously. The conversion of an estimate from the supply-side model to an
arithmetic mean increases the estimate considerably. For example, the BG Report (pages 32-33)
states that the “estimate is currently 3.08% in geometric terms and 5.18% on an arithmetic
basis”, both presumably for the U.S.
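The standard lognormal approximation behind such conversions is arithmetic ≈ geometric + variance/2; under that approximation, the quoted supply-side figures imply an assumed return volatility of roughly 20.5%. The sketch below applies the approximation; the volatility is inferred, not stated in the BG Report:

```python
# Sketch of the lognormal approximation behind geometric-to-arithmetic
# conversion: arithmetic ~ geometric + sigma^2 / 2.
import math

def arithmetic_from_geometric(g, sigma):
    return g + sigma ** 2 / 2

def implied_sigma(g, a):
    return math.sqrt(2 * (a - g))

# The quoted figures (3.08% geometric, 5.18% arithmetic) imply an assumed
# return volatility of roughly 20.5% under this approximation.
sigma = implied_sigma(0.0308, 0.0518)
```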
2.5.1 Biases in the Forecasts of Analysts
The DCF Model can be implemented using retrospective estimates or prospective estimates of
dividends and dividend growth rates. When applied in a prospective manner, the DCF Method
often uses the forecasts of financial analysts. The explicit assumption for such prospective
applications of the DCF Model is that the estimates do not exhibit known biases. The BG
Report (page 51) attempts to provide evidence that the effects of analyst forecast biases are less
likely to be an issue for regulated entities. Examinations of whether the forecasts of analysts
outperform time-series earnings forecasts do not address the issue of whether the forecasts of
analysts are optimistic, and what their upward impact is on the implied cost of equity using a
DCF model.
The literature documents that the earnings estimates of analysts exhibit substantial optimism and
overconfidence biases, that revisions in analysts’ forecasts cause variability in stock prices, and
that the use of these upwardly biased estimates without removing the bias leads to an upwardly
biased estimate of the required ERP and cost of equity. To avoid an extensive review of that
literature, the brief summary reported in Kryzanowski and Rahman (2009) is as follows:
“Indeed, Richardson et al. (2004) find evidence that analysts’ forecasts tend to be more
optimistic the further away they are from the earnings announcement date. Using I/B/E/S
earnings forecasts which tend to be optimistic, Easton and Sommers (2007) show that the
implied cost of equity estimate is, on average, 3.5% higher than the estimate based on
current accounting data. When estimates are based on the S&P 500, the optimism bias is
significantly reduced. Easton and Sommers (2007) conclude that since analysts’ forecasts
are pervasively optimistic, then implied cost of equity estimators will similarly be
pervasively upwardly biased. In addition, Deng et al. (2006) derive an estimation
procedure which infers earnings forecast bias from equivalent price expressions that
utilize different forecast horizons. They find that investors, on average, adjust one-year
earnings forecasts downwards by approximately 10%.”
Another pertinent article is by James Montier (2005).32 This article has been used in the third and
final year curriculum for those working towards a Chartered Financial Analyst (CFA)
designation. The Montier article classifies analysts as “truly inept seers” who “are terribly good
at telling us what has just happened but of little use in telling us what is going to happen in the
future” (page 3). He attributes this to the following (page 4): “Experts do know more than lay
people, but sadly this extra knowledge seems to trigger even higher levels of overconfidence.”
Some experts incorrectly argue that it is not necessary to adjust analysts’ growth estimates
downward to address bias because whether growth rates are higher or lower than what is actually
achieved is irrelevant to what is measured (i.e. investor expectations and the influence of those
expectations on required returns). However, this assumes considerable irrationality among
investors in that they would believe forecasts that they know are wrong (i.e. have an optimistic
bias). Accordingly, such irrationality would invalidate a basic assumption of using the DCF
Model to estimate the cost of equity; namely, that prices are fair. Thus, if regulators set allowed
COEs based on DCF Models using expected growth rates that are consistently higher (or lower)
than what is subsequently achieved, then the allowed cost of equity will consistently be higher
(or lower) than the fair cost of equity.
2.5.2 Validity of Using a Complex Weighted-average Implied Rate of Return for an
Annually Updated Cost of Equity
The BG Report is silent on the validity of using a multi-period, complex weighted-average
implied rate of return from a DCF model as an estimate for the cost of equity when it is updated
annually. The BG Report (page 6) states that the DCF model has the relative advantage of
capturing the dynamic components of pricing. How this is possible is unclear when the implied
rate of return from the DCF model is the average annual compound rate of return that one
expects from the investment if held from now until infinity. In other words, the r estimated using
the DCF model does not give the expected cost of equity capital for the next period but gives the
constant compound rate of return over the life of the investment.
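The point can be illustrated numerically: when expected one-period returns vary over time, the single rate r that equates price to the present value of all future cash flows is a complex weighted average of the period rates, not the expected cost of equity for the next period. All cash flows and rates below are hypothetical:

```python
# Sketch: with time-varying one-period expected returns, the single implied DCF
# rate r is a weighted average over the whole life of the investment, not the
# next-period cost of equity. All cash flows and rates are hypothetical.

def pv_at_varying_rates(cashflows, period_rates):
    pv, disc = 0.0, 1.0
    for cf, rate in zip(cashflows, period_rates):
        disc *= 1 + rate
        pv += cf / disc
    return pv

def single_rate_irr(price, cashflows, lo=0.0, hi=1.0):
    for _ in range(100):                     # bisection on the flat rate
        mid = (lo + hi) / 2
        if sum(cf / (1 + mid) ** t for t, cf in enumerate(cashflows, 1)) > price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

cashflows = [5.0, 5.0, 105.0]                # hypothetical three-year payoff
period_rates = [0.04, 0.08, 0.12]            # time-varying expected returns

price = pv_at_varying_rates(cashflows, period_rates)
r = single_rate_irr(price, cashflows)        # one flat rate, strictly between
                                             # the lowest and highest period rates
```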
32 James Montier, 2005, The folly of forecasting: Ignore all economists, strategists, & analysts, Global Equity Strategy, DrKW Macro research, August 24, 2005. Available at: http://www.arthurdevany.com/webstuff/montier-follyofforecasting.pdf.
2.6 Investment Manager and Economist Estimates of Available Prospective Equity
Market Returns
The BG Report provides an asymmetric treatment of the merits of survey evidence. While it
appears to support the use of consensus estimates for the risk-free rate obtained from surveys of
investment professionals conducted by Consensus Economics (page 19), it dismisses the value of
the estimates of investment professionals and economists of the prospective equity market returns
available to investors buying equities at market prices in a variety of ways. First, the BG Report
states (page 30): “We are not aware of any regularly occurring surveys of the market risk
premium in Canada.” This is not correct as both Mercer and Towers Watson conduct annual
surveys of Canadian and global institutional investment professionals and economists that
provide forward-looking estimates of the total return on various series, such as the S&P/TSX
Composite and various bond indexes. For example, the Mercer 20th Annual 2011 Fearless
Forecast polled 56 Canadian and global institutional investment managers in mid-December
2010.33 Like previous surveys, the Towers Watson 29th Annual Survey, Economics Expectations,
provides short- (2010), medium- (2011-2014) and long-term (2015-2024) expectations on
macroeconomic indicators (e.g., real GDP growth for Canada & the U.S.; CPI inflation rate for
Canada & the U.S.) and financial indicators (e.g., 5-, 10- & 30-year Canada bond yields;
S&P/TSX Composite Index returns) based on the “projections of the country’s leading business
economists, strategists and portfolio managers from more than 50 organizations, such as
chartered banks, investment management firms and other corporations” obtained in November
2009.34 The 2011 edition is currently in press.
Second, the BG Report (pages 33-34) compares survey methods to other methods on three
criteria. For the reasonable criterion, the BG Report (page 33) concludes:
“However, a MRP based on a historic average or on a well-defined conditional or supply-
side model will be transparent to stakeholders and with enough specificity regarding the
33 Summary findings available at: http://www.mercer.it/press-releases/1405305.
34 Available at: http://www.towerswatson.com/assets/pdf/1469/TW_14622.pdf.
implementation, the implementation will be formulaic and minimize the use of
judgmental factors. In contrast, reliance on survey based MRP estimates is not
transparent in that stakeholders cannot replicate the results and generally do not have
access to the underlying data. In addition, we are not aware of any periodic and publicly
available survey of the Canadian MRP.”
The other approaches are not more formulaic and do not require less judgment to implement than
survey-based MRP estimates. For example, using the historic average, a stakeholder would need
to decide whether to use the geometric or arithmetic average and over what time period
the estimates should be calculated, and more importantly would have to have access
to costly data sets. Furthermore, how does a stakeholder audit the validity of realized bond or
stock return series from 1900 or 1926 onwards? Similar arguments follow for implementation of
the DCF model and its variants such as the supply-side model. The inputs into these alternative
approaches are not any more auditable than those involved in survey methods. The responses
from survey methods are summarized using the simplest formulas; namely, those for estimating
the mean, median, and so forth. The question to be answered is: Why use indirect methods to
infer prospective MRP expectations when one can obtain them directly using survey methods?
The discussion of the Jacquier et al. (2003) paper in the BG Report (fn. 45, page 27) is
incomplete. The BG Report (page 35) discusses the importance of accounting for structural
shifts but fails to mention that the weighting-scheme demonstration is valid only if one makes the
heroic (and empirically unsupported) assumption that returns are IID for the period of 1926-2009
(i.e., no structural shifts occurred). Furthermore, Jacquier, Kane and Marcus (2003) show that,
while a weighted-average of the arithmetic and geometric average returns provides an unbiased
estimate of expected long-term returns, the best estimate of cumulative returns is even lower.
They conclude that:35
“Strong cases are made in recent studies that the estimate of the market risk premium
should be revised downward. Our result compounds this argument by stating that even
35 E. Jacquier, A. Kane and A.J. Marcus, 2003. Geometric or Arithmetic Mean: A Reconsideration, Financial Analysts Journal 59, Nov/Dec, pages 46-53.
these lower estimates of mean return should be adjusted further downward when
predicting long-term cumulative returns.”
Thus, while the Agency updates the COE annually, long-term equity investors reap cumulative
multi-period returns.
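The Jacquier-Kane-Marcus result can be sketched with their horizon-weighted estimator: for a forecast horizon H estimated from T years of data, the unbiased compound-growth estimate weights the arithmetic mean A and geometric mean G as (1 − H/T)·A + (H/T)·G, sliding from A toward G as the horizon lengthens. The averages and sample length below are hypothetical placeholders:

```python
# Sketch of the Jacquier-Kane-Marcus horizon-weighted estimator discussed
# above: (1 - H/T) * arithmetic + (H/T) * geometric. Inputs are hypothetical.

def jkm_weighted_mean(arith, geo, horizon, sample_years):
    w = horizon / sample_years
    return (1 - w) * arith + w * geo

A, G, T = 0.075, 0.055, 84            # hypothetical means over a T-year sample

for H in (1, 10, 40):
    print(H, round(jkm_weighted_mean(A, G, H, T), 4))
# Longer horizons pull the estimate from the arithmetic mean toward the
# geometric mean.
```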
The BG Report (page 32) provides a quote from Mehra and Prescott (2003), who are the authors
who first identified the equity premium puzzle. Mehra and Prescott acknowledge that they
reported arithmetic averages in their original article, since the best available evidence at that
point in time indicated that (multi-year) stock returns were uncorrelated over time.36 They now
acknowledge that the arithmetic average can lead to misleading estimates when returns are
serially correlated, and that the geometric average may be the more appropriate statistic to use.
Drs. Mehra and Prescott (p. 57) note that stock returns have been found to be mean reverting.
With regard to reliability, the BG Report (page 34) makes a similarly questionable inference
based on the assertion that “a survey based MRP will usually not be auditable” unlike the case
for the other MRP estimation methods. Given the current debate on whether the CPI series
provided by Statistics Canada underestimates or overestimates the rate of inflation, it is difficult
to argue that the input to any MRP method is auditable. There is no a priori reason to believe that
Mercer or Towers Watson or Statistics Canada knowingly add any bias to the findings that they
report.
With regard to being pragmatic, the BG Report (page 34) asserts:
“If the conditional or supply-side MRP model is well-specified, both models as well as a
historic average MRP are pragmatic in the sense that they are based on readily available
information and simple to implement. This, however, is a weakness for the survey-based
MRP, where the underlying information is either unavailable or costly to obtain (through
surveys).”
36 Rajnish Mehra and Edward C. Prescott, The Equity Premium in Retrospect, forthcoming: G.M. Constantinides, M. Harris and R. Stulz, Handbook of the Economics of Finance (Amsterdam: North Holland). Draft of their paper, February 2003.
As stated above, obtaining the information to calculate a survey-based consensus MRP is no
more expensive than that required to implement a conditional or supply-side MRP model
estimate. The results of the surveys by Mercer and Towers Watson can be obtained either free or
at a very low cost.
Interestingly, the BG Report (pages 21 and 30) concentrates on MRP estimates provided by
professors. The BG Report ignores a wealth of other prospective estimates (in addition to the
two discussed above) that are easily obtained. Some examples follow.
In April 2010, Fernandez and del Campo conducted a survey to elicit the MRP that companies,
analysts and professors use to calculate the return to equity in different countries. They find that
in 2010:37
“The average MRP [market equity risk premium] used by analysts in the USA and Canada
(5.1%) [median 5.0%] was similar to the one used by their colleagues in Europe (5.0%)
[median 5.0%], and UK (5.2%) [median 4.5%]. But the average MRP used by companies in
the USA and Canada (5.3%) [median 5.0%] was smaller than the one used by companies in
Europe (5.7%) [median 5.5%], and UK (5.6%) [median 5.5%].”
For almost every country, professors used a higher MRP than analysts (e.g., means of 5.1% for
analysts and 6.0% for professors in the U.S.).38 This most likely reflects professors measuring
the MRP over short-term rather than long-term government debt.
The dispersion of the MRP used by professors was also higher than that of the analysts and
companies. In addition, the average MRP used by each group in 2010 was lower than the one
used in 2009. The three most popular references cited to justify MRP estimates, in decreasing
order of importance, were internal estimates, Damodaran and historic data for analysts; and
internal estimates, Damodaran and Morningstar/Ibbotson for companies.
37 Medians are added to the quote and are placed in brackets.
38 The mean for professors in Canada was 5.9%. The average MRP for Canadian professors declined by 50 bps from 2009 to 2010.
Drs. Graham and Harvey have elicited forecasts for expected S&P500 returns from surveys of
U.S. chief financial officers, beginning in the third quarter of 2000 and ending in the third
quarter of 2010. Drs. Graham and Harvey then calculate the MRP as the expected S&P500
return minus the 10-year U.S. Treasury bond yield. Some highlights from a recent survey are:39
• The expected annual return for the S&P500 is the lowest observed to date.
• The current premium (3.00%) is substantially below the peak premium of 4.74%
observed in the February 2009 survey, and has returned to levels observed in late 2006
and early 2007.
In the December 2010 edition of Economic Insights, Mr. Peter Buchanan uses three conventional
approaches to estimate the forward-looking performance of the Canadian equity market.40 He
concludes:
“Using three conventional approaches, our analysis projects an average annual total
return (dividends plus capital gains) of from 8% to just under 9% for the TSX Composite
in the decade ahead. That’s compared to a longer-term average of 9.4% (Chart 1) but
works out to respectable 6-7% per year in real terms, allowing for expected subdued
inflation.”
Mr. Douglas Porter and Mr. Robert Kavcic, in a December 2010 issue of Focus, provide MRP
and equity market return forecasts for the Canadian market.41 Using two different approaches,
they arrive at an expected nominal equity market return of 6.5%-to-7.2% and a MRP of 3.5%-to-
4.2%.
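As a quick consistency check on these figures (a back-of-the-envelope sketch; `implied_risk_free` is an illustrative helper, not from the BMO report):

```python
def implied_risk_free(expected_return, mrp):
    """Back out the risk-free (long-bond) yield implied by an expected
    market return and an MRP measured over that yield."""
    return expected_return - mrp

# Both ends of the BMO range imply the same ~3.0% long-bond anchor.
low_end = implied_risk_free(0.065, 0.035)
high_end = implied_risk_free(0.072, 0.042)
```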
39 Drs. John R. Graham and Campbell R. Harvey, 2010. The equity risk premium in 2010, Working Paper, Fuqua School of Business, Duke University. Available at: http://faculty.fuqua.duke.edu/~charvey/Research/Working_Papers/W104_The_equity_risk.pdf. More details are available at: http://www.cfosurvey.org.
40 Mr. Peter Buchanan is a Senior Economist at CIBC World Markets.
41 Mr. Douglas Porter and Mr. Robert Kavcic, What to expect when you’re investing, Focus, December 10, 2010. Available at: http://www.bmonesbittburns.com/economics/focus/20101210/feature.pdf. The authors are, respectively, Managing Director and Deputy Chief Economist, and Economist at BMO Capital Markets.
2.7 Comparable Earnings Method
Hardly any COE experts use the Comparable Earnings Method to estimate a cost of equity
(see BG Report, pages 13 & 58-62). The primary reason is that this method is of dubious
scientific merit (using, for example, the Daubert criteria) and thus unsuitable for use in
determining a fair COE for a regulated entity. There is neither any theoretical underpinning nor
any empirical support for the Comparable Earnings Method for estimating a regulated fair cost of
equity for a regulated entity. As an ad hoc approach to estimating a regulated cost of equity,
there are no agreed-upon rules for deciding on how the Comparable Earnings method should be
implemented. Neither is the method mentioned in surveys of practitioners as being used to
estimate the cost of equity for capital budgeting purposes.
The BG Report (pages 59-60) provides only some of the numerous disadvantages associated
with the use of this method. One such problem is that in regulatory settings the use of this
method results in the selection of control samples of non-regulated entities that are low risk ex
post but have risk-adjusted realized returns over the test period that outperform the general
market. Since the firms in the control samples earn an abnormal risk-adjusted return or “free
lunch” over the test period, their returns exceed the minimum requirements for the comparable
return standard. One of the advantages given in the BG Report (page 59) is illusory unless
these accounting rates of return are adjusted to reflect the returns available to investors in
capital markets.
The Alberta Utilities Commission (AUC) summarized this argument well in its 2009 GCOC
Decision as follows:
“280. In Decision 2004-052, the Board rejected the comparable earnings test results as a
measure of return on a comparable investment.
The CE [comparable earnings] test measures actual earnings on actual
book value of comparable companies, which in the Board's view does not
measure the return “it would receive if it were investing the same amount
in other securities possessing an attractiveness, stability and certainty
equal to that of the company's enterprise” (emphasis added) (unless the
securities were currently trading at book value).242
281. The Commission agrees with the Board that the comparable earnings test examines
accounting earnings on book value for companies, but not returns actually available to, or
required by investors in the market. In the Commission’s view, because the comparable
earnings test does not deal with returns available to investors in capital markets, it is not
consistent with the comparable investment standard and is not a test upon which any
weight should be placed. Consequently, the Commission will not consider the
comparable earnings evidence.”
Fama and French (1999) use a variant of this approach to estimate the cost of equity. They
estimate the discount rate (i.e., internal rate of return or IRR) that equates the aggregate current
market value (or book value) to the future cash flows generated by the sample(s) of firms being
studied, where the IRRs on cost represent the actual costs of capital borne by these firms to use
investors’ funds.42 Kryzanowski and Mohsni (2010) apply the approach to publicly traded
nonfinancial Canadian firms over the period 1961–2003. They find that the cost-of-equity
estimates range from 9.09% (Utilities) to 11.05% (ALL) to 12.39% (Consumer Staples), and the
11.05% overall cost of capital embodies an equity risk premium of 2.85% when benchmarked to
Canada long-bond returns, and of 2.96% when benchmarked to Canada long-bond average
yields. When comparing their results to those from the CAPM, Kryzanowski and Mohsni
conclude (page 266):43
“The FF approaches using IRRs and ASRs avoid the reliance on the use of industry betas
and market risk premiums, both of which are subject to estimation errors (Fama &
French, 1997; He & Kryzanowski, 2007). While the estimation errors are unknown for
the FF IRR approach, they appear to be at least as large for the G- and A-means of the
ASRs as those reported by others using a CAPM approach. Furthermore, the FF approach
can generate counter-intuitive estimates of sector-specific costs of capital or equity, as we
report herein for the IT GICS sector.”
42 E.F. Fama and K.R. French, 1999. The corporate cost of capital and the return on corporate investment, Journal of Finance 54(6), pages 1939–1967.
43 Lawrence Kryzanowski and Sana Mohsni, 2010. Capital returns, costs and EVA for Canadian firms, North American Journal of Economics and Finance 21, pages 256–273. Also, see: G. Athanassakos, 1997. Estimating the cost of equity and equity risk-premia of Canadian firms, Multinational Finance Journal 1(3), pages 229–254.
2.8 ATWACC-like Approaches
The BG Report (pages 62-67) presents a disguised After-tax Weighted Average Cost of Capital
(ATWACC) approach to estimating the cost of equity. This methodology is fraught with
unknown estimation error and rests on assumptions, relationships and models that have very
weak empirical support and have not been verified in the real world. While the trade-off
theory has reasonable support, it is seriously challenged
by a number of competing theories. Thus, while the trade-off theory can offer useful qualitative
guidance, it does not follow that capital structure differences can be captured by a set of untested
formulas. If the capital components are marked to market, then ATWACC (and the cost of
equity) become more volatile (subject to any bubbles or allocational inefficiencies in the market)
and ATWACC becomes positively related to the price-to-book ratio for equity.44 Furthermore,
there is considerable disagreement about whether the ATWACC curve even has a flat bottom
and, if it does, where that flat bottom begins and ends.
The BG Report (page 65) implicitly acknowledges this problem when it states:
“The notion that the WACC is constant across a broad middle range of capital structures
is based upon the Modigliani-Miller theorem that choice of financing does not affect the
firm’s value.107
[Footnote not reproduced.] To the extent that it holds, this reasoning
suggests that one could compute the WACC for each of the sample companies and then
average to produce an estimate of the WACC associated with the underlying asset risk.”
44 The BG Report (pages 25, 28 & 31) acknowledges the occurrences of market bubbles.
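The computation contemplated in the quoted passage can be sketched as follows (all numbers are hypothetical and the `atwacc` helper is illustrative, not BG's implementation):

```python
def atwacc(equity_value, debt_value, cost_equity, cost_debt, tax_rate):
    """After-tax WACC at the given capital-structure weights."""
    total = equity_value + debt_value
    return ((equity_value / total) * cost_equity
            + (debt_value / total) * cost_debt * (1.0 - tax_rate))

# Hypothetical sample companies: (E, D, r_e, r_d, tau).
sample = [
    (60.0, 40.0, 0.100, 0.050, 0.30),
    (70.0, 30.0, 0.095, 0.048, 0.30),
    (55.0, 45.0, 0.105, 0.052, 0.30),
]
average_atwacc = sum(atwacc(*c) for c in sample) / len(sample)
```

Note that the averaging step is valid only under the Modigliani-Miller flat-WACC premise that this critique disputes; if the WACC curve slopes over the sampled leverage range, the average mixes points from different parts of the curve.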
Using the Hamada equation to lever and de-lever betas is fraught with problems. The Hamada
equation combines the first two propositions of the Modigliani-Miller capital structure (and
WACC) theorems with the unconditional Capital Asset Pricing Model. As such, the use of the
Hamada equation assumes that: (i) there are no individual taxes, (ii) personal leverage is a
perfect substitute for corporate leverage (i.e., individuals and corporations borrow under the
same terms including rates), (iii) the firm is near its target leverage ratio so that no more (or no
less) debt subsidy is capitalized already into the observed stock price, (iv) all firms that are in a
risk-class have the same capitalization rate for an all-common equity firm, (v) corporate debt is
risk free (even if it is rated in the A or lower category), and (vi) financial and business risk are
independent given risk-free corporate debt. In turn, the assumptions used in the Hamada equation
imply that the levered value of the firm increases indefinitely with leverage if the unlevered
values of the firm remain constant along the WACC curve.
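The lever/de-lever arithmetic under discussion reduces to the following (a minimal sketch; it mechanically embeds assumptions (i)-(vi) above, including risk-free corporate debt):

```python
def unlever_beta(beta_levered, debt, equity, tax_rate):
    """Hamada de-levering: beta_U = beta_L / (1 + (1 - tau) * D/E)."""
    return beta_levered / (1.0 + (1.0 - tax_rate) * debt / equity)

def relever_beta(beta_unlevered, debt, equity, tax_rate):
    """Hamada re-levering: beta_L = beta_U * (1 + (1 - tau) * D/E)."""
    return beta_unlevered * (1.0 + (1.0 - tax_rate) * debt / equity)
```

Because the only inputs are the tax rate and the D/E ratio, any error in the premise that the firm sits at or near its target capital structure flows directly into the re-levered beta and hence into the resulting COE estimate.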
Empirical evidence suggests that firms do not operate at their optimal capital structures to allow
for what is alternately called cushion, slack, unused debt capacity or financial flexibility. In other
words, firms choose a target capital structure that is somewhere on the downward sloping portion
of the WACC curve depending upon the risk tolerance of their managements. Not adjusting for
the difference between a utility’s position on the downward sloping portion of the WACC curve
and its optimal capital structure, which can be sizeable, invalidates the implementation of the
Hamada equation (and thus, the ATWACC Method), which assumes that the firm is operating at
approximately its optimal capital structure.
2.9 Price-to-Book-Value (P-to-BV) Ratios
2.9.1 Interpretation of Price to Book Value Ratios and Market-based Evidence on P-to-
BV Ratios for Regulated Entities
The BG Report (pages 43, 45 and 117-123) only acknowledges the role of the price-to-book
value (P-to-BV) ratio in terms of asset pricing.45 In addition to its role in the Fama and French
three-factor asset pricing model, the P-to-BV ratio is frequently used in practice by: (i) market
45 Also referred to as the market-to-book or its inverse the book-to-market ratio in the BG Report.
participants to value and search for undervalued equities, (ii) investors, style portfolio managers
and service providers (such as Morningstar) when classifying stocks as being value or growth
stocks, (iii) financial analysts when valuing firms or assessing overall market valuation, and (iv)
underwriters when pricing issues.
One popular formulation of the DCF model, the residual income valuation or RIV model,
anchors a firm’s share price to its book or net asset value per share (NAVPS). In the RIV model,
the value of a company’s share is the sum of the NAVPS at the time of valuation and the present
value of the stream of future residual income, where residual income is the amount by which the
rate of return earned on new assets is expected to exceed the required or fair cost of
equity. Thus, if a firm earns its fair return on
new assets, then price is equal to book value and the price-to-book ratio is equal to one. The
notion that each regulated entity should maintain a market value above book value is somewhat
contradictory as it suggests that each regulated entity should plan to earn a return on new
investment above the required rate of return. Thus, an examination of P-to-BV ratios provides a
test of the reasonableness of any cost of equity estimates.
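The RIV relationship described above can be sketched as follows (a simplified illustration assuming clean-surplus accounting and full earnings retention; the function and variable names are hypothetical):

```python
def riv_price(navps, roe_forecasts, cost_of_equity):
    """Residual income valuation: price = NAVPS plus the present value of
    future residual income, where residual income in year t is
    (ROE_t - k) * BV_{t-1} per share."""
    price = navps
    bv = navps
    for t, roe in enumerate(roe_forecasts, start=1):
        residual = (roe - cost_of_equity) * bv
        price += residual / (1.0 + cost_of_equity) ** t
        bv *= 1.0 + roe  # clean surplus, all earnings retained
    return price
```

When every forecast ROE equals the cost of equity k, residual income is zero and price equals book value (P-to-BV of one); ROEs above k push P-to-BV above one, which is the test of reasonableness described in the text.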
This also implies that the Agency should use book values when determining the relative weights
of long-term debt and common equity in the railway companies’ capital structures for the
purpose of their regulated activities. This is based on the assumption that the Agency continues
to determine the cost rate of long-term debt based on the historic cost of debt in the railway
companies’ financial statements for the most recently completed fiscal year. If the Agency
decides to reflect the projected cost of new debt, any forecasting errors could be captured by
setting up a deferral account for that purpose. However, a switch to using the projected cost of
new debt as the cost of all debt when yields are expected to rise will most likely provide a
“windfall” gain to the regulated entities.
3. OTHER ISSUES
The BG Report is silent on the following questions included in the Agency’s “Cost of Capital
Methodology Review – Consultation Document”:
• Assessment of a grain risk adjustment – Should the Agency continue to make an annual
assessment of whether the cost of common equity should be adjusted to reflect the risk of
carrying grain and if so, on what basis should it be established? It is unlikely that the
stand-alone COE for carrying grain is equivalent to the COE for the regulated entity.
• Cost rate of long-term debt – Should the Agency determine the cost rate of long-term
debt by using the historic cost of debt in the railway companies’ financial statements for
the most recent completed fiscal year? If not, how should the Agency determine the cost
of long-term debt? The BG Report (pages 64 and 74) does use the market cost of debt in
its WACC discussion without discussing the many problems in moving from using the
historic cost of debt to using the market cost of debt, especially when the current cost of
debt is expected to rise.