Testing the Effectiveness of “Managing for Results”: Evidence from a Natural Experiment
Weijie Wang 1
Ryan Yeung2
Abstract
An important part of performance management is the idea of “managing for results” (MFR). The
core of MFR is decentralizing authority to managers in exchange for greater accountability.
While managing for results makes much theoretical sense, there is a lack of rigorous research on
the effectiveness of MFR. In this study, we use a quasi-experimental design to examine the
impact of a particular MFR reform in New York City, the Empowerment Zone (EZ), which
focused on providing city public school principals greater autonomy to improve school
outcomes. Our differences-in-differences estimates show that the EZ had a significant and
positive effect on school performance as measured by proficiency rates in standardized English-
Language Arts and mathematics exams and Regents diploma graduation rates, though the results
were not immediately felt. One possible mechanism behind this effect is increased turnover of
non-tenured teachers.
1 Assistant Professor, Department of Public Administration, SUNY Brockport. Email: [email protected]
2 Assistant Professor, Department of Urban Policy & Planning, Hunter College. Email: [email protected]
2
Moynihan (2008) argues that we live in an era of governance by performance
management. Generally, performance management involves setting goals, collecting data on
performance, and using those data to inform management decisions and improve performance. This helps managers
reduce costs and increase efficiency (Moynihan 2006; Moynihan and Pandey 2010; Poister,
Pasha, and Edwards 2013). Under the influence of the New Public Management movement,
performance management and its components like performance measurement and managing for
results (Moynihan, 2008), have become increasingly popular in public administration. Managing
for results (MFR) is an important component of performance management. The core idea of
managing for results, as Moynihan (2006, 78) discussed, is to use “performance information to
increase performance by holding managers accountable for clearly specified goals and providing
them with adequate authority to achieve these goals.” This is arguably the strongest performance
management system because it motivates managers, empowers managers to improve current
processes and holds them accountable through either reward or punishment (Moynihan 2006).
“Letting managers manage” has been a key tenet of the New Public Management
Movement (Kettl 1997), which draws a sharp contrast with traditional public administration.
Historically, inputs (Heinrich 2002) and constraints (Wilson 1991) have been emphasized over
results, and accountability has been maintained through a series of financial and personnel
controls. According to Kettl (1997), public managers’ hands are tied by existing rules,
procedures, and structures, which make it hard for them to respond to environmental
contingencies and make the best decisions. For example, they must follow strict rules to make
sure that money is spent strictly for the purpose for which it is allocated, and typically have little
discretion to reallocate money to improve program implementation (Moynihan 2006). Therefore,
by providing public managers the flexibility and freedom from unnecessary red tape, public
managers will be able to achieve goals more efficiently. This idea that “Management matters”
has been supported by a number of studies in public management (Favero, Meier, and O’Toole
2016; Meier and O’Toole 2002; Moynihan and Pandey 2005).
Despite the great potential of MFR for improving organizational performance, empirical
research has been limited and a rigorous examination of its impacts has yet to be conducted. We
examine MFR through the lens of the “Empowerment Zone,” an education reform undertaken by
the New York City Department of Education (NYCDOE). Principals in the Empowerment Zone
(EZ) were given more managerial authority and autonomy over curriculum, finance and
personnel in exchange for greater accountability. To preview our results, we find that the EZ did
have a significant and positive effect on school performance, though the results were not
immediately felt.
Our study makes several contributions to the current literature. It provides further
empirical support to the theoretical principles of MFR and doctrines of New Public Management
(Moynihan 2006). This study is based on a real-world MFR reform and uses objective measures
of both performance management and organizational performance, thus avoiding many of the
measurement problems that hurt the validity of some previous studies. In addition, by using a
quasi-experimental design and a differences-in-differences strategy, this research better
overcomes problems like unobserved heterogeneity and thus strengthens the validity of the
findings. Methodologically, our study significantly improves on previous research by
identifying the impacts of MFR through a better research design.
Performance Management: Does It Work?
Our study is set squarely in the literature and theory of performance management. A
fundamental question with performance management is whether it is associated with improved
organizational performance (Gerrish 2016). Performance management systems are widely used
largely because of their promise to improve the performance of public organizations (Poister,
Pasha, and Edwards 2013). Yet, if these systems do not work or even cause other problems like
goal displacement (Kelman and Friedman 2009), then it makes little sense to expend resources
implementing these reforms.
Recent years have seen more and more studies that examine the effects of performance
management on organizational performance. Though some studies show a positive association
between the two, the overall results are mixed and largely inconclusive. Sun and Van Ryzin
(2014) studied how perceived performance management practices measured by collaborative
goal setting, performance measurement, and performance information use were associated with
school performance. They found that these performance management practices were positively
related to the percentage of students who met proficiency standards in mathematics and English-
Language Arts tests (ELA). In contrast, Hvidman and Andersen (2014) found that performance
management practices measured by items such as managing by objectives and performance
feedback had no effect on student performance in Danish public schools. Using the same
measurement of performance management and study setting, Nielsen (2014) found the main
effect of performance management on school performance was zero or negative, though the
moderating effect of managerial authority was statistically significant. Gerrish’s (2016) meta-
analysis showed that performance management had a small but positive effect on organizational
performance, but his analysis of the Child Support Performance and Incentive Act (CSPIA) of
1998 showed that the performance incentives installed by CSPIA exerted little impact on child
support performance (Gerrish 2017).
Performance management is a holistic concept, and includes different components or
schemes such as strategic planning, performance measurement, and the use of performance
information in decision-making (Kroll 2015; Nielsen 2014). These components may have
different effects on organizational performance, so bundling them together may hide some
impacts (Nielsen 2014). Recent studies have tested how these different components of
performance management are associated with organizational performance. Boyne and Chen
(2007) found that setting quantified targets for measurable objectives was positively related to
the performance of schools. Similarly, Poister et al. (2013) found that strategic planning that
included goal setting was positively related to the performance of small and medium-sized transit
systems in the United States.
Performance measurement is another component that has received considerable attention.
Poister et al. (2013) found that self-reported measures of performance measurement were
positively associated with organizational performance, and Yang and Hsieh (2007) reached the
same conclusion based on a sample of government agencies in Taiwan. Kroll (2015), in contrast,
found that perceptual measures of performance measurement were not related to perceived
organizational performance in a statistically significant way. He also found that the link between
performance information use and perceived organizational performance was not statistically
significant, though strategic stances moderated the link.
Accountability based on performance measures is another important component of
performance management, which has received special attention in the educational context. For
example, Hanushek and Raymond (2005) concluded that a strong accountability system that
attached consequences to school performance had a positive effect on student performance as
measured by rapid gains in national standardized test scores. Dee and Jacob (2011) reached a
similar conclusion, finding that school accountability systems produced statistically significant
increases in the average mathematics performance of fourth and eighth graders. In contrast,
Patrick and French (2011) found that school accountability systems implemented after the No
Child Left Behind Act did not improve student learning outcomes.
As discussed above, managing for results is often included as part of a performance
management system. MFR has become widely used in state and local governments (Moynihan
2006), but empirical research on MFR, especially its impacts, has been limited. Moynihan (2005)
studied how and why state governments adopted and implemented MFR and argued that MFR
reforms are often implemented for symbolic reasons. In another paper (Moynihan 2006), he
evaluated the status of the implementation of MFR in state governments, and found that the
major cause of disappointment in MFR was partial implementation: public managers were not
given the managerial authority to achieve the performance expectations. Nevertheless, a rigorous
evaluation of the effect of MFR is missing in the literature. Nielsen’s (2014) study comes the
closest to achieving this goal by investigating the moderating effect of managerial flexibility. He
found that flexibility in managerial decision-making such as decentralized pay negotiation and
flexibility in hiring and firing employees moderated the relationship between perceived
performance management practices and organizational performance. However, his study was not
based on a specific MFR reform but on a series of perceived performance management practices
including goal setting, performance feedback, and company contracts.
To summarize, the current research has made significant progress in terms of examining
the association between performance management systems and organizational performance.
However, these theoretically sound principles and doctrines have only found mixed empirical
support. One limitation with the current literature is that performance management is often
measured on a perceptual basis using surveys, which are vulnerable to recall error and
subjectivity. Using perceptual measures has its advantages, but studying the impacts of specific,
real-world performance management reforms may give us more confidence about the validity of
the findings. A related limitation is the fact that the current literature is typically correlational in
nature with regressions as the major research method; more rigorous analysis based on
experimental or quasi-experimental designs that can better control the influence of omitted
variables is needed. A final limitation is the lack of studies that examine the impacts of managing
for results as an important component of performance management. This study attempts to fill
these gaps using a quasi-experiment from public education.
Background
Decentralizing decision-making in public education has a long history in the United
States (Steinberg 2013). Despite some peaks and valleys, school-based management that gave
schools more autonomy was a popular reform in the 1980s (Steinberg 2013; Van Langen and
Dekkers 2001). Several large urban school districts, like Chicago, Boston, and Houston,
experimented with giving principals more autonomy in return for accountability. For example,
Boston’s autonomous schools experiment began in the mid-1990s, and now approximately one
third of their public schools operate under one of several “autonomy” structures (French, Miles,
and Nathan 2014). Chicago Public Schools, the nation’s fourth largest public school district,
began the Autonomous Management and Performance Schools (AMPS) program in the 2005-06
school year. Schools that had met some performance thresholds were granted more autonomy in
key areas like budgeting and curriculum.
After Mayor Michael Bloomberg took control of the New York City education system in
2002, a series of reforms were implemented under Schools Chancellor Joel Klein. One of these
reforms was to give principals more autonomy, which was piloted as the “Autonomy Zone” and
subsequently implemented citywide as the “Empowerment Zone.” The experiment began in
2004 with 29 participating schools, though the zone was open to all schools,
regardless of their previous performance. The zone expanded to about 330 schools in its first
year beyond the pilot program in the 2006-2007 Academic Year (AY) (Kelleher 2014).
The theory behind the “Empowerment Zone” was that to improve school performance,
principals had to have more autonomy and authority in decision-making in areas regarding
finance, instruction, and personnel management, but also had to be held accountable for the
performance of their schools. Principals signed contracts with the NYCDOE, which gave them
more authority over the operation of their schools. For example, they enjoyed more funding
discretion vis-à-vis their non-Empowerment Zone peers. On average, each EZ school was
provided with additional discretionary funding of $150,000 in the 2006-2007 AY; restrictions on
select funds in existing budgets were eased; and principals had more flexibility in procurement.
Principals were also exempt from various requirements, like reporting requirements (financial
reports, safety plans, etc.) or attendance at NYCDOE meetings. Empowerment principals were
granted authority over key instructional decisions such as curriculum, assessment, professional
development, and new teacher mentoring (New York City Department of Education 2006). Last
but not least, NYCDOE negotiated with the United Federation of Teachers (UFT), the labor
union that represents public school teachers in the city, to grant principals more authority in
personnel management (Childress et al. 2011). For example, principals became responsible for
hiring teaching staff.
Authority and autonomy came with accountability. The contracts also specified
performance targets. If principals failed to achieve these targets in a specified time frame, they
would face consequences (New York City Department of Education 2006). Schools were
assessed annually and at the end of the contract term. Principals needed to comply with
applicable laws, regulations, and policies, demonstrate fiscal integrity, provide a safe school
environment, and most notably, meet student achievement goals. Schools that consistently met or
exceeded student performance targets would be recognized and receive additional funding and
early extension of performance contracts. Yet, principals could also be removed if schools made
little progress towards meeting student performance goals over two years, and if new principals
still could not improve student performance in the following two years, the schools could be
closed.
Methods
This study relies on education production function methods such as in Hanushek (1979),
Hanushek (1986), Bifulco, Duncombe, and Yinger (2005), Jepsen and Rivkin (2009), and Yeung
(2009). Education production function methods recognize the large role that forces external to a
school’s control, such as socioeconomic status and disability, play in a student’s or school’s
level of performance. By controlling for these variables statistically, researchers can produce
estimates of treatment and policy effects independent of the effects of these variables.
We use school-level production functions as in Schwartz and Stiefel (2001) and Schwartz
and Zabel (2005) to estimate the effect of performance management (in the form of the
Empowerment Zone) on school performance. A naïve education production function, adapted to
suit our purposes, can be found in equation (1):
y_s = α + β EMPOWERMENT_s + γ STUDENT_s + δ TEACHER_s + θ SCHOOL_s + ε_s , (1)
where a measure of school outcomes y, like the percentage of students who met state standards
on the mathematics exam for school s, is a function of a constant term α; an indicator for whether
or not the school was in the Empowerment Zone in the 2006-2007 school year, EMPOWERMENT; a set of
school-level student demographic characteristics like the percentage of students who are limited
English proficient, STUDENT; a set of school-level teacher characteristics like the percentage of
teachers with a master’s degree or higher, TEACHER; a set of school characteristics like school
size, SCHOOL; and a random error term ε_s.
Ordinary least squares estimation of equation (1) is likely to yield biased estimates of β,
the effect of the Empowerment Zone on school performance, as schools were not randomly
assigned to the Empowerment Zone. Instead, this was a self-selected group, with characteristics
different from the comparison group of schools that did not enter the zone. For example, it is
possible Empowerment schools had more engaged principals and it is these more engaged
principals that were responsible for improved performance and nothing to do with the
Empowerment Zone. If we are unable to observe and control for these principals, then our
estimates of β would be biased.
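The self-selection concern can be illustrated with a small simulation (the numbers and the "principal engagement" variable are hypothetical, not the paper's data): when an unobserved trait drives both program entry and outcomes, a naive cross-sectional regression attributes the trait's effect to the program.

```python
import numpy as np

# Illustrative simulation, not the paper's data: an unobserved trait
# ("principal engagement") drives both selection into treatment and the
# outcome, so naive OLS is biased even though the true effect is zero.
rng = np.random.default_rng(0)
n = 2000
engagement = rng.normal(size=n)                        # unobserved trait
treated = (engagement + rng.normal(size=n) > 0).astype(float)
y = 2.0 * engagement + rng.normal(size=n)              # true treatment effect: 0

# Cross-sectional OLS of y on a constant and the treatment dummy.
X = np.column_stack([np.ones(n), treated])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"naive OLS 'treatment effect': {beta[1]:.2f}")  # well above the true 0
```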
Instead of equation (1), we estimate a differences-in-differences education production
function formulation to address this endogeneity problem. This model is presented in equation
(2):
y_sgt = α + β EMPOWERMENT_g + γ STUDENT_sgt + δ TEACHER_sgt + θ SCHOOL_sg + ζ_t +
η(ζ_t × EMPOWERMENT_g) + ε_sgt , (2)
In equation (2), y is still a measure of school performance for school s in year t. However,
schools now belong to group g, either the treatment group of Empowerment schools or the
comparison group of schools not in the Empowerment Zone. In addition, ζ_t is a set of year
dummy variables, i.e. year fixed effects [2006-07 AY, 2007-08 AY, 2008-09 AY], and ζ_t ×
EMPOWERMENT_g is a vector of interactions between each year dummy and the dummy
variable indicating if the school was in the treatment group of Empowerment schools in the
2006-07 academic year.
η is a set of coefficients representing the differences-in-differences estimates of the effect of
the Empowerment Zone. Differences-in-differences estimators are statistically equivalent to a classic
pre-test, post-test two group quasi-experimental design. The change in outcomes of the
comparison group serves as a counterfactual for what would have happened to the Empowerment
schools had they not entered the Empowerment Zone. As it is the change in outcomes over time
that we are interested in, the two groups need not be similar in levels before the treatment
period; the identifying assumption is that their outcomes would have followed parallel trends
absent the treatment.
We interact the dummy variable indicating the treatment group with each of the year
dummies to allow the effect of the Empowerment Zone to vary by each year in the zone. It
may be that the effect of performance management is only felt after several years in the zone,
giving schools time to change their administrative structures to improve performance.
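The year-by-year differences-in-differences logic can be sketched on simulated proficiency rates; the baseline levels and the lagged-effect pattern below are assumptions for illustration, not the paper's estimates.

```python
import numpy as np

# Simulated school-level proficiency rates with a common +1/yr trend; the
# lagged treatment effect (zero in the first post year, positive later) is
# an assumption for illustration, not the paper's estimate.
rng = np.random.default_rng(1)
years = [2005, 2006, 2007, 2008]          # AY 2005-06 is the baseline
true_effect = {2005: 0.0, 2006: 0.0, 2007: 3.0, 2008: 5.0}

def simulate(n_schools, base, treated):
    """Draw each school's proficiency rate for every year."""
    return {t: base + (t - 2005)
               + (true_effect[t] if treated else 0.0)
               + rng.normal(0, 5, n_schools)
            for t in years}

treat = simulate(330, 51.5, treated=True)   # Empowerment schools
ctrl = simulate(900, 54.9, treated=False)   # comparison schools

# DiD estimate per post year: treated change from baseline minus
# comparison change from baseline.
for t in (2006, 2007, 2008):
    did = ((treat[t].mean() - treat[2005].mean())
           - (ctrl[t].mean() - ctrl[2005].mean()))
    print(f"AY {t}-{(t + 1) % 100:02d}: DiD estimate = {did:+.2f}")
```

With only group-level treatment and no covariates, these simple mean differences coincide with the year-by-treatment interaction coefficients η in equation (2).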
We estimate six specifications for our main regression models. The dependent variables
for these regressions are the percentage of students meeting standards in ELA in grades 3-8, the
percentage of students meeting standards in mathematics in grades 3-8, a standardized variable
measuring the total performance of the school (combining mathematics and ELA proficiency) for
schools with grades 3-8, the percentage of students in the cohort receiving Regents high school
diplomas, total graduation rates, and the turnover rate of teachers with fewer than five years of
experience.
We estimate all regressions with Huber-White robust standard errors to mitigate the
effects of heteroscedasticity. Standard errors are also clustered by school as errors for
observations for the same school are likely to be correlated.
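The clustered variance calculation follows the usual "sandwich" formula; a minimal sketch on made-up panel data (the variable names and data-generating process are illustrative assumptions):

```python
import numpy as np

# Cluster-robust (by school) OLS standard errors via the sandwich formula,
# on simulated panel data; numbers are illustrative, not the paper's data.
rng = np.random.default_rng(2)
n_schools, n_years = 50, 4
school = np.repeat(np.arange(n_schools), n_years)
x = rng.normal(size=n_schools * n_years)
# A school-level shock induces correlation among a school's observations.
y = (1.0 + 0.5 * x
     + np.repeat(rng.normal(size=n_schools), n_years)
     + rng.normal(size=n_schools * n_years))

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta

# Sandwich: bread = (X'X)^-1; meat sums X_g' e_g e_g' X_g over clusters,
# so errors may be arbitrarily correlated within each school.
bread = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for g in range(n_schools):
    score_g = X[school == g].T @ resid[school == g]
    meat += np.outer(score_g, score_g)
V = bread @ meat @ bread
print("clustered SEs:", np.sqrt(np.diag(V)).round(3))
```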
Data and Variables
This study uses publicly available data published by the New York City Department of
Education and New York State Education Department (NYSED). Our sample includes all
regular elementary, middle and high schools that reported their test and graduation results.
Schools with missing data were dropped. Special education schools and charter schools were
excluded from this study. In the end, our sample included 962 elementary and middle schools
and 282 high schools. This study covered four school years from 2005-06 AY to 2008-09 AY.
Dependent Variables
We have a set of six dependent variables in our regression models. Table 1 presents
summary statistics for Empowerment schools and the comparison group of all other schools in
the 2005-06 academic year (the baseline), before the creation of the Empowerment Zone. This
allows us to compare how similar or different the schools were before the program.
Percentage of students meeting and exceeding proficiency in English-Language Arts and
mathematics
The first two dependent variables are the rate of students in a school meeting or
exceeding proficiency standards on ELA and mathematics exams. The tests are statewide tests
based on New York State Learning Standards. Students in New York State public schools grades
3-8 took the tests and were categorized into four proficiency levels based on their test
performance. Students were classified as Level I (Below Standards), Level II (Meets Basic
Standards), Level III (Meets Proficiency Standards), and Level IV (Exceeds Proficiency
Standards). The cutoff points were set by the NYSED and were consistent over the study period.
Children meeting or exceeding proficiency were determined to demonstrate a thorough
understanding of subject matter and content expected at the grade level.
Table 1 reports that more students met or exceeded proficiency in a school in
mathematics than in ELA. The comparison group of schools had slightly higher performance
than the Empowerment schools in 2005-06. Empowerment schools had an average of 51.54
percent of students achieving proficiency in ELA and 57.05 percent in mathematics versus 54.94
percent for all other schools in ELA and 63.07 percent in mathematics. The data show that the
elementary and middle Empowerment Schools were not performing better before they entered
the Empowerment Zone.
Table 1
Summary Statistics at Baseline

                                                          Empowerment Schools       All Other Schools
                                                          Mean      SD      N       Mean      SD      N
Dependent Variables
  Percentage of students meeting proficiency in ELA        51.54   19.79   160      54.94   19.77   782
  Percentage of students meeting proficiency in
    mathematics                                            57.05   21.76   160      63.07   20.81   783
  Overall performance score                                -0.52    0.984  160      -0.29    0.964  782
  Regents diploma graduation rate                          33.08   29.47    73      36.60   24.32   158
  Overall graduation rate                                  60.79   24.02    73      54.70   22.22   158
  Turnover rate for teachers with fewer than five
    years of experience                                    23.02   15.33   230      21.11   15.43   881
Independent Variables
  Overall teacher turnover rate                            22.00   11.35   231      18.84    9.97   894
  Percentage of teachers with fewer than three years
    of teaching experience                                 27.61   14.21   231      17.77   12.16   894
  Percentage of teachers with a master’s degree plus
    30 credit hours or doctorate                           27.61   14.22   231      34.68   14.94   894
  Percentage of English language learners                  11.55   15.74   267      12.03   11.26   919
  Percentage of special education students                 11.05    6.06   274      13.08    5.79   928
  Percentage of White students                              9.05   14.84   274      15.02   21.53   928
  Percentage of Asian students                              7.66   13.08   274      12.37   16.95   928
  Percentage of Black students                             37.68   24.83   274      32.91   29.02   928
  Percentage of Hispanic students                          43.35   24.15   274      38.47   25.77   928
  Percentage of male students                              48.05   10.42   274      50.58    5.47   928
  Percentage of students eligible for free or
    reduced-price lunch                                    67.52   21.59   272      67.90   23.76   924
  Student-teacher ratio                                    14.83    2.72   225      14.31    2.63   874
  Enrollment                                              570.00  560     274     790.00  633      928
Overall performance score
We added ELA and mathematics proficiency rates together and standardized the resulting
score to mean 0 and standard deviation 1 to improve interpretation. We call this combined
measure the overall performance of the school. Table 1 indicates that on average, while both
groups had slightly below average performance, the non-Empowerment schools had higher
overall performance than the Empowerment schools before the creation of the zone.
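The construction of the overall performance score can be sketched as follows; the proficiency rates below are invented for illustration, not the paper's data.

```python
import numpy as np

# Overall performance score: sum each school's ELA and mathematics
# proficiency rates, then standardize to mean 0 and standard deviation 1.
# These rates are made-up illustrations, not the paper's data.
ela = np.array([51.5, 60.0, 72.3, 45.1, 58.8])
math = np.array([57.1, 66.2, 80.0, 50.4, 64.0])

total = ela + math
overall = (total - total.mean()) / total.std()
print(overall.round(2))   # mean 0, SD 1 by construction
```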
Regents diploma/Overall graduation rate
During the period of this study, children in New York State could graduate from high
schools with various forms of diplomas. The Regents diploma was the standard state-issued
diploma and was also the diploma that the majority of students got when they graduated. To
graduate with a Regents diploma, students had to score 65 or higher on five different Regents
exams (New York City Department of Education 2016). Students who met certain criteria, such
as disability, could graduate with a local diploma, which allowed them to graduate with lower
exam scores. They had to score 55 or higher on five different Regents examinations. We think it
is meaningful to use the percentage of students who earned a Regents diploma, in addition to the
total graduation rate, which also includes local diplomas, as a measure of high school
performance, as the two may be based on different populations. While the comparison group
had higher Regents diploma graduation rates than the Empowerment schools at baseline, the
Empowerment schools had the higher overall graduation rate.
Turnover rate for teachers with fewer than five years of experience
During the study period, teachers had to teach for three years before they could earn
tenure in the New York City public school system. The probationary period could be extended for
one more year if teachers were not recommended for tenure by their principals. Critics often
complained that teacher tenure was almost guaranteed in New York City public schools and was
hardly based on real evidence of accomplishment (Loeb, Miller, and Wyckoff 2015). The
Empowerment Zone designation gave principals greater powers over personnel. One
mechanism principals may have used to improve performance was therefore making tenure
decisions based on teaching effectiveness and terminating poorer-performing teachers. The
turnover rate of beginning teachers might then increase, a sign that teacher tenure had become
more competitive and less predictable. This would potentially motivate teachers to increase
their effectiveness and help build an effective teaching team. Accordingly, we see an increase in the
turnover rate of teachers with fewer than five years of experience as an intermediate outcome of
the Empowerment Zone experiment. We thus examine the effect of the Empowerment Zone on
the turnover rate of this group of teachers. The data on this measure were obtained from the
School Report Card database published by the NYSED. It is calculated as the number of teachers
with fewer than five years of experience who were not teaching in the following school year
divided by the number of teachers with fewer than five years of experience in the specified
school year, expressed as a percentage.
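The turnover measure just described is a simple ratio; as a sketch (the counts below are hypothetical, not NYSED data):

```python
def beginning_teacher_turnover(n_left: int, n_beginning: int) -> float:
    """Turnover rate (%) for teachers with fewer than five years of
    experience: those not teaching in the following school year divided by
    the count of such teachers in the specified school year."""
    if n_beginning == 0:
        # Assumption: define the rate as 0 when a school has no such teachers.
        return 0.0
    return 100.0 * n_left / n_beginning

# e.g., 6 of 24 beginning teachers do not return the next year -> 25.0
rate = beginning_teacher_turnover(6, 24)
```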
Treatment Variable
We requested from the NYCDOE the list of schools that entered the “Empowerment Zone” in
the 2006-07 AY. The list contained 331 schools that were in the Empowerment Zone during
that year. These schools became the experimental group in our study, and all other schools that
were not in the Zone that year served as the control group in our main regressions. We then
investigated whether there were any differences in school performance between the two groups
after the experiment started in the 2006-07 AY. The 2006-07 cohort was not the only cohort of
Empowerment schools: additional schools entered the Zone in the 2007-08 and 2008-09 AYs,
eventually enlarging the Zone to more than 500 schools. In addition, some schools left the
Empowerment Zone after the 2006-07 AY. These changes in the experimental and control
groups may contaminate the estimated effect of the “Empowerment Zone.” In our robustness
checks, we rerun all regressions with alternative experimental and control groups to reflect these
differences and verify that the results are consistent.
Control Variables
In addition to the Empowerment Zone variables and year fixed effects, we control for a large
set of variables that may affect school performance, including teacher, student, and school
characteristics. Teacher characteristics come from data published by NYSED, and school and
student characteristics come from data published by NYCDOE.
Teacher characteristics
One teacher characteristic we control for is the overall teacher turnover rate. Using data from
New York City (the setting of this study), Ronfeldt, Loeb, and Wyckoff (2013) examined the
effect of year-to-year differences in teacher turnover at the grade level on achievement. They
found that a one standard deviation increase in teacher turnover decreased math achievement by
approximately two percent of a standard deviation.
We also control for the percentage of teachers with fewer than three years of teaching
experience. There is a fairly developed literature on the effect of teacher experience on student
outcomes. Research by Boyd et al. (2008), Rivkin, Hanushek, and Kain (2005), and others
suggests that teacher experience has its greatest impact on achievement in the first few years of
teaching. Therefore, schools with high shares of inexperienced teachers may also have lower
performance.
The final teacher variable we control for is the percentage of teachers with a master’s
degree plus 30 credit hours or a doctorate. New York City and some other school districts pay
teachers more for earning a master’s degree, and teachers with another 30 credits taken after
their bachelor’s degree but not counted toward their master’s earn an additional increase
(United Federation of Teachers 2016). Most studies do not find that a master’s degree has
any effect on achievement (Clotfelter, Ladd, and Vigdor 2007; Croninger et al. 2007). However,
Goldhaber and Brewer (1997, 2000) find that a subject-specific master’s degree has a positive
impact on achievement.
Our data suggest that the comparison group of schools had better-credentialed teachers.
Empowerment schools had more inexperienced teachers and fewer teachers with a master’s
degree plus 30 credit hours or a doctorate. They also had higher teacher turnover than non-
Empowerment schools.
Student characteristics
We also control for a list of student characteristics that researchers have long found to be
related to achievement: the percentages of English language learner, special education, Asian,
Black, Hispanic, male, and free or reduced-price lunch students in the school.
The statistics in Table 1 show that the Empowerment schools had greater shares of Black and
Hispanic students than the comparison group of schools. Researchers have found significant
and, in some cases, large test score gaps between English as a second language learners and
native speakers (Rumberger and Willms 1992), between Black and White students (Stiefel,
Schwartz, and Gould Ellen 2007; Todd and Wolpin 2007), between Hispanic and White students
(Rumberger and Willms 1992; Todd and Wolpin 2007; Stiefel, Schwartz, and Gould Ellen 2007;
Loveless 2015), between disabled and non-disabled students (Wei, Lenz, and Blackorby 2012),
and between poor and wealthy students (Coleman 1988; Sirin 2005).
Many researchers have also documented a gender gap in academic achievement, with
girls performing better in ELA and boys performing better in mathematics (Fryer and Levitt
2010; Loveless 2015; Ready et al. 2005). We also expect schools with large shares of Asian
students to have better performance. Kao (1995), Lee and Zhou (2015), Sun (1998), and others
reported that Asian children outscore White children, particularly in mathematics, which may
owe to high parental expectations and high social capital among Asian parents.
School characteristics
Our regressions also control for the log of enrollment and the pupil-teacher ratio. The
literature on school size has generally been quite supportive of smaller schools (Stiefel,
Schwartz, and Wiswall 2015; Lee and Loeb 2000). New York City, in fact, has embarked on a
recent reform based on creating smaller high schools, and Stiefel, Schwartz, and Wiswall (2015)
argued that this reform had positive effects on both graduation rates and exam scores. We control
for the pupil-teacher ratio as a proxy for class size. Research on class size is mixed and
controversial. The most famous study of class size reduction, in Tennessee, found large effects
on achievement (Krueger 1999). Jepsen and Rivkin (2009), however, concluded that in
California the benefits of class size reduction were offset by a reduction in teacher quality.
While Empowerment and non-Empowerment schools had roughly equal student-teacher
ratios, Empowerment schools on average were smaller. The average Empowerment school
enrolled 570 students, while non-Empowerment schools averaged 790 students. There is
considerable variation in school size in both groups: the Empowerment schools had a standard
deviation of 560 students and the other schools a standard deviation of 633.
Taken together, the summary statistics suggest that the Empowerment schools faced many
challenges compared to schools that did not join the Empowerment Zone. The elementary and
middle schools had lower performance in both ELA and mathematics, and the high schools had
lower Regents diploma graduation rates. The poorer performance of the Empowerment schools
may be related to their teachers: Empowerment schools had higher teacher turnover and less
experienced teachers. These schools also served higher shares of minority students.
Results
Main Regression Results
Table 2 presents our main regression results. These are differences-in-differences
regressions comparing how the performance of schools in the Empowerment Zone changed
compared to the performance of non-Empowerment schools in the years after joining the zone.
First, we find little evidence that Empowerment schools and non-Empowerment schools
performed differently on a wide variety of measures before the Empowerment Zone even
existed. The dummy variable indicating treatment status is not significant for any dependent
variable in Table 2, with the exception of the Regents diploma graduation rate, which is
significant only at the .10 level (schools in the Empowerment Zone had a lower Regents
diploma graduation rate). This indicates that the differences between schools in the Zone and
those not in the Zone were generally not statistically significant at baseline.
We interacted the treatment variable with each year in the Empowerment Zone, allowing the
effect of the Empowerment Zone to vary with time in the zone. Generally, we find positive and
statistically significant benefits of belonging to the zone in the third year (2008-09 AY). In that
year, Empowerment schools had 2.188 percentage points more of their students achieving grade-
level proficiency in ELA and 4.350 percentage points more reaching proficiency in mathematics
than schools not in the Empowerment Zone. The overall performance of these elementary and
middle schools is .158 standard deviations greater than that of the comparison group in the third
year in the zone. The ELA result is significant at the .10 level, and the mathematics and overall
performance results at the .01 level.
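The core comparison behind these estimates can be sketched as a simple two-by-two differences-in-differences calculation; the paper's actual models add controls, year fixed effects, and school-clustered standard errors, and the numbers below are made up for illustration:

```python
def did_estimate(treat_pre: float, treat_post: float,
                 ctrl_pre: float, ctrl_post: float) -> float:
    """Differences-in-differences: the change in the treated group's mean
    outcome minus the change in the control group's mean outcome."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical proficiency rates (percent), not taken from the study:
# Empowerment schools move 50 -> 60 while comparison schools move 48 -> 54,
# so the estimated effect of the zone is 10 - 6 = 4 percentage points.
effect = did_estimate(50.0, 60.0, 48.0, 54.0)
```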
Table 2
Differences-in-Differences Regression Results

Dependent variables: (1) ELA proficiency rate; (2) mathematics proficiency rate; (3) overall performance score; (4) Regents diploma graduation rate; (5) overall graduation rate; (6) turnover rate of teachers with fewer than five years of experience. Cells show coefficients with absolute t-statistics in parentheses.

Differences-in-differences estimators
Empowerment Zone school * 2006-07 AY: (1) 1.819* (2.16); (2) 0.392 (0.40); (3) 0.055 (1.36); (4) 3.102 (1.17); (5) -0.122 (0.06); (6) -0.640 (0.44)
Empowerment Zone school * 2007-08 AY: (1) 0.087 (0.09); (2) 1.954 (1.57); (3) 0.050 (1.00); (4) 9.124** (3.16); (5) 2.437 (1.12); (6) 0.084 (0.06)
Empowerment Zone school * 2008-09 AY: (1) 2.188† (1.94); (2) 4.350** (3.07); (3) 0.158** (2.71); (4) 11.705*** (3.90); (5) 2.082 (1.11); (6) 3.605* (2.51)
Empowerment Zone school in 2006-07: (1) -0.676 (0.56); (2) -2.052 (1.35); (3) -0.067 (1.04); (4) -6.180† (1.87); (5) 1.445 (0.66); (6) -0.105 (0.98)
2006-07 AY: (1) -1.154*** (3.95); (2) 6.62*** (20.26); (3) 0.132*** (9.77); (4) 2.429* (2.13); (5) 2.211* (2.16); (6) -0.883 (1.40)
2007-08 AY: (1) 6.393*** (18.66); (2) 14.77*** (32.75); (3) 0.508*** (29.30); (4) 6.083*** (4.43); (5) 4.730*** (3.66); (6) 0.074 (0.12)
2008-09 AY: (1) 15.319*** (34.59); (2) 20.22*** (38.05); (3) 0.858*** (38.30); (4) 11.659*** (8.43); (5) 8.403*** (6.69); (6) -2.489*** (3.74)
Overall teacher turnover rate: (1) -0.324*** (10.12); (2) -0.412*** (10.17); (3) -0.018*** (10.80); (4) 0.055 (0.54); (5) 0.093 (1.10)
Percentage of teachers with fewer than three years of teaching experience: (1) -0.105*** (3.32); (2) -0.115** (3.09); (3) -0.005*** (3.28); (4) -0.002 (0.982); (5) 0.044 (0.59); (6) 0.246*** (9.47)
Percentage of teachers with a master's degree plus 30 credit hours or doctorate: (1) -0.023 (0.98); (2) -0.029 (1.03); (3) -0.001 (0.86); (4) -0.078 (0.84); (5) -0.145† (1.75); (6) 0.068** (2.82)
Percentage of English language learners: (1) -0.272*** (5.32); (2) -0.157* (2.23); (3) -0.009*** (3.85); (4) -0.490*** (7.10); (5) -0.337*** (5.05); (6) 0.031 (1.21)
Percentage of special education students: (1) -0.777*** (11.12); (2) -0.780*** (10.06); (3) -0.038*** (11.23); (4) -1.620*** (6.54); (5) -1.253*** (6.34); (6) 0.372*** (6.68)
Percentage of Asian students: (1) 0.086*** (3.89); (2) 0.060** (2.91); (3) 0.003*** (3.47); (4) 0.137 (1.22); (5) -0.010 (0.13); (6) -0.008 (0.36)
Percentage of Black students: (1) -0.247*** (12.02); (2) -0.256*** (12.23); (3) -0.012*** (12.6); (4) -0.355*** (4.33); (5) -0.268*** (4.96); (6) 0.128*** (7.35)
Percentage of Hispanic students: (1) -0.177*** (7.08); (2) -0.172*** (6.70); (3) -0.009*** (7.43); (4) -0.312*** (3.36); (5) -0.219** (2.95); (6) 0.123*** (6.10)
Percentage of male students: (1) -0.232** (2.96); (2) -0.133 (1.44); (3) -0.009* (2.20); (4) -0.125† (1.85); (5) -0.094 (1.41); (6) -0.015 (0.45)
Percentage of students eligible for free or reduced-price lunch: (1) -0.212*** (9.29); (2) -0.086*** (3.58); (3) -0.007*** (6.79); (4) -0.007 (0.08); (5) -0.019 (0.25); (6) 0.034† (1.81)
Student-teacher ratio: (1) -0.157 (0.377); (2) -0.822*** (4.04); (3) -0.022** (2.59); (4) -0.138 (0.31); (5) 0.121 (0.24); (6) 0.888*** (6.16)
Enrollment (log): (1) -2.937*** (4.06); (2) -2.260* (2.36); (3) -0.141*** (4.06); (4) -0.307 (0.17); (5) -2.238 (1.33); (6) -2.666*** (5.41)
Constant: (1) 139.184 (23.14); (2) 139.521 (18.75); (3) 3.680 (12.34); (4) 100.120 (6.23); (5) 118.044 (8.69); (6) 2.90 (0.77)
N: (1) 3,723; (2) 3,727; (3) 3,723; (4) 915; (5) 915; (6) 4,597
R-squared: (1) 0.719; (2) 0.609; (3) 0.682; (4) 0.481; (5) 0.379; (6) 0.168

Notes: a. Absolute value of t-statistics based on Huber-White robust standard errors adjusted for clustering by school in parentheses.
b. † p<0.1; * p<0.05; ** p<0.01; *** p<0.001.
The Empowerment Zone also had positive effects on high schools. We find that, holding
all else constant, Empowerment schools had Regents diploma graduation rates 9.124 percentage
points higher than non-Empowerment schools after two years in the zone, and 11.705 percentage
points higher in their third year. These results are significant at the .01 and .001 levels,
respectively. The Empowerment Zone does not appear to have influenced the overall graduation
rate, which also includes local diplomas. As local diplomas had lower academic standards than
Regents diplomas during this period, this result suggests that the benefits of autonomy at the
high school level were felt primarily by relatively higher performing students.
The turnover rate of teachers with fewer than five years of experience is an intermediate
outcome of the Empowerment Zone: it indicates whether principals used their delegated
authority to dismiss ineffective teachers and hence put more pressure on beginning teachers to
increase their effectiveness. In the third year, when the positive effects of the zone are
manifested, Empowerment schools had turnover among teachers with fewer than five years of
experience that was 3.605 percentage points higher than in non-Empowerment schools; this
result is significant at the .05 level. Teachers in Empowerment schools were less likely to be
tenured, and principals may have improved performance by dismissing poorer performing
teachers after two years of greater autonomy.
Our study also provides a unique longitudinal perspective on the effects of MFR. These
results suggest that it takes some time for schools at all grade spans to improve after they join
the Empowerment Zone. As Table 2 shows, most of the statistically significant improvements
did not manifest until the third year, with the exceptions of the ELA proficiency rate and the
Regents diploma graduation rate. It may be that principals did not wield their new powers as
soon as they received them. Perhaps the initial changes were more incremental in nature, and it
took several years before principals had the confidence to make meaningful changes that
improved school performance, though our results do show an immediate positive effect on ELA.
Most of the year fixed effects are significant, suggesting substantial year-to-year
differences in school performance. Schools as a whole appeared to be improving over this
period, particularly in the 2008-09 year. However, the coefficients on the 2008-09 differences-
in-differences estimators suggest that the Empowerment Zone schools outperformed other
schools in the same year.
We also find that teacher turnover and experience affected school performance, but only
for elementary and middle schools. A 10 percentage point increase in the overall teacher
turnover rate was associated with a decrease in overall school performance of approximately
two-tenths of a standard deviation, a 3.24 percentage point decrease in children meeting
proficiency in ELA, and a 4.12 percentage point decrease in mathematics. On average,
elementary and middle schools with higher shares of teachers with fewer than three years of
experience had lower performance. A one percentage point increase in the share of teachers
with fewer than three years of teaching experience reduced the percentages of children meeting
proficiency in ELA and mathematics by about a tenth of a percentage point, corresponding to a
.005 standard deviation decrease in performance. Each of these results is significant at the .01
level or better. The lack of significance of these variables for high school performance measures
may owe to the variety of subject-specific teachers students encounter in high school, rather
than the single teacher responsible for most instruction in primary school. Consistent with Boyd
et al. (2008) and Hanushek and Rivkin (2004), who did not find any effect of teachers' master's
degrees on achievement, we find little evidence that the percentage of teachers with a master's
degree plus 30 credit hours or a doctorate had any significant effect on school performance.
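As a quick check of the scaling in the paragraph above, the quoted effects are simply the Table 2 coefficients multiplied by the change in the regressor:

```python
# Table 2 coefficients on the overall teacher turnover rate (per one
# percentage point of turnover) for three elementary/middle school outcomes.
coef_overall_score = -0.018  # overall performance score (SD units)
coef_ela = -0.324            # ELA proficiency rate (percentage points)
coef_math = -0.412           # mathematics proficiency rate (percentage points)

delta = 10  # a 10 percentage point increase in overall teacher turnover

effect_score = coef_overall_score * delta  # about -0.18 SD, i.e., two-tenths
effect_ela = coef_ela * delta              # about -3.24 percentage points
effect_math = coef_math * delta            # about -4.12 percentage points
```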
Coefficients on the student characteristic variables generally have the expected signs in our
regressions. Schools with higher shares of English language learners, special education students,
Black, Hispanic, and free or reduced-price lunch eligible students had lower performance. For
instance, a 10 percentage point increase in the share of special education students in a school
lowered its overall performance score by four-tenths of a standard deviation, and the
percentages of students achieving proficiency in ELA and mathematics by about eight
percentage points. The shares of English language learners and special education students were
negatively and significantly associated with both the overall graduation rate and the Regents
diploma graduation rate. Interestingly, most of these characteristics increased the turnover rate
of teachers with fewer than five years of experience, which may owe to a preference among
teachers for higher achieving, non-minority, non-low-income students (Hanushek, Kain, and
Rivkin 1999). The share of male students had a negative effect on ELA proficiency, consistent
with national patterns (Loveless 2015). In line with research on Asian achievement by Kao
(1995), Lee and Zhou (2015), Sun (1998), and others, the share of Asian students was positively
associated with school performance in the earlier grades.
The student-teacher ratio of a school had a negative impact on the mathematics
proficiency rate: an increase of one student per teacher reduced the share of children achieving
proficiency in mathematics by .822 percentage points. Enrollment generally had a negative
effect on performance. Because enrollment enters the model in logs, a one percent increase in
enrollment lowered the ELA and mathematics proficiency rates by roughly 0.03 and 0.02
percentage points, respectively.
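In a level-log specification like this one, a coefficient β on log enrollment implies that a one percent increase in enrollment shifts the outcome by β·ln(1.01), approximately β/100; a quick check using the Table 2 coefficients:

```python
import math

# Table 2 coefficients on log enrollment (level-log specification)
beta_ela = -2.937   # ELA proficiency rate
beta_math = -2.260  # mathematics proficiency rate

# Effect of a one percent increase in enrollment on each proficiency rate,
# in percentage points: beta * ln(1.01) ~= beta / 100.
effect_ela = beta_ela * math.log(1.01)    # about -0.029
effect_math = beta_math * math.log(1.01)  # about -0.022
```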
R-squared values for the performance regressions are generally high: our models explain
71.9 percent of the variation in the ELA proficiency rate and 60.9 percent of the variation in
mathematics.
Robustness Checks
We also conduct a series of robustness checks to examine the sensitivity of our results to
various assumptions. As we mention above, changes in the experimental and control groups may
contaminate our findings, so we perform robustness checks with alternative experimental and
control groups to reflect these differences. In Table 3, the Empowerment schools are compared
to a group of schools that never entered the Empowerment Zone (i.e., schools that entered the
Empowerment Zone in the 2007-08 and 2008-09 academic years are excluded from the original
control group). These schools were never “contaminated” by the effect of the Empowerment
Zone. The results are generally similar to those in Table 2, though the magnitudes of the
coefficients are slightly smaller. On all our performance measures, Empowerment schools in
their third year in the zone outperformed the comparison group of schools in the same year. For
example, Empowerment schools had on average 0.151 standard deviations higher overall
performance than the comparison group in this specification, versus 0.158 standard deviations in
Table 2. As in Table 2, we find a significant third-year effect of the Empowerment Zone on the
turnover of teachers with fewer than five years of experience in this sample, suggesting
personnel decisions may have played a role in the improved performance of schools.
Some schools left the Empowerment Zone after the 2006-07 AY. In Table 4, we exclude
these schools from the experimental group and compare the new experimental group to a
control group that excludes all schools that entered the zone after the 2006-07 AY. In other
words, we compare a group of schools that were in the Empowerment Zone throughout the
entire study period to a group of schools that were never in the zone during that period. This is
the most restrictive sample. As in Tables 2 and 3, there is generally a positive relationship
between Empowerment Zone membership and measures of school performance in the 2008-09
academic year. In this specification, the coefficient is no longer significant for the ELA
proficiency rate, while the other results are in line with those in Table 3.
Table 3
Robustness Test (Sample Excludes Schools that Entered the Empowerment Zone in the 07-08 and 08-09 Academic Years)

Dependent variables: (1) ELA proficiency rate; (2) mathematics proficiency rate; (3) overall performance score; (4) Regents diploma graduation rate; (5) overall graduation rate; (6) turnover rate of teachers with fewer than five years of experience. Cells show coefficients with absolute t-statistics in parentheses.

Empowerment Zone school * 2006-07 AY: (1) 1.640† (1.92); (2) 0.127 (0.13); (3) 0.043 (1.08); (4) 2.172 (0.81); (5) -1.117 (0.55); (6) -1.307 (0.89)
Empowerment Zone school * 2007-08 AY: (1) 0.008 (0.01); (2) 1.745 (1.38); (3) 0.042 (0.84); (4) 8.111** (2.73); (5) 1.251 (0.54); (6) -0.179 (0.13)
Empowerment Zone school * 2008-09 AY: (1) 2.102† (1.83); (2) 4.300** (2.98); (3) 0.151** (2.60); (4) 10.145*** (3.36); (5) 0.628 (0.32); (6) 3.452* (2.35)
N: (1) 3,044; (2) 3,048; (3) 3,044; (4) 830; (5) 830; (6) 3,837
R-squared: (1) 0.715; (2) 0.608; (3) 0.681; (4) 0.477; (5) 0.380; (6) 0.170

Notes: a. Absolute value of t-statistics based on Huber-White robust standard errors adjusted for clustering by school in parentheses.
b. Regressions include controls for the Empowerment Zone indicator, indicators for academic year, the percentage of teachers with fewer than three years of teaching experience, the percentage of teachers with a master's degree plus 30 credit hours or doctorate, the percentages of English language learner, special education, Asian, Black, Hispanic, male, and free or reduced-price lunch students, the student-teacher ratio, and the log of enrollment.
c. † p<0.1; * p<0.05; ** p<0.01; *** p<0.001.
Table 4
Robustness Test (Sample Excludes Schools that Entered the Empowerment Zone in the 07-08 and 08-09 Academic Years and Empowerment Schools that Left the Zone After the 06-07 Academic Year)

Dependent variables: (1) ELA proficiency rate; (2) mathematics proficiency rate; (3) overall performance score; (4) Regents diploma graduation rate; (5) overall graduation rate; (6) turnover rate of teachers with fewer than five years of experience. Cells show coefficients with absolute t-statistics in parentheses.

Empowerment Zone school * 2006-07 AY: (1) 1.561† (1.67); (2) 0.991 (0.90); (3) 0.062 (1.39); (4) 2.281 (0.75); (5) -1.798 (0.86); (6) -0.986 (0.61)
Empowerment Zone school * 2007-08 AY: (1) 0.091 (0.09); (2) 2.571† (1.93); (3) 0.064 (1.21); (4) 9.767** (2.86); (5) 1.408 (0.61); (6) 0.959 (0.63)
Empowerment Zone school * 2008-09 AY: (1) 1.197 (0.97); (2) 4.282** (2.70); (3) 0.130* (2.05); (4) 13.187*** (3.60); (5) 1.964 (0.89); (6) 3.752* (2.32)
N: (1) 2,914; (2) 2,918; (3) 2,914; (4) 726; (5) 726; (6) 3,596
R-squared: (1) 0.716; (2) 0.612; (3) 0.684; (4) 0.490; (5) 0.447; (6) 0.176

Notes: a. Absolute value of t-statistics based on Huber-White robust standard errors adjusted for clustering by school in parentheses.
b. Regressions include controls for the Empowerment Zone indicator, indicators for academic year, the percentage of teachers with fewer than three years of teaching experience, the percentage of teachers with a master's degree plus 30 credit hours or doctorate, the percentages of English language learner, special education, Asian, Black, Hispanic, male, and free or reduced-price lunch students, the student-teacher ratio, and the log of enrollment.
c. † p<0.1; * p<0.05; ** p<0.01; *** p<0.001.
Discussion
Our results show that the “Empowerment Zone” reform that gave principals more
managerial authority and autonomy in return for accountability significantly improved the
performance of public schools. In the baseline year, the difference in school performance was not
statistically significant. After two years, the Empowerment Zone produced statistically
significant and positive impacts on student performance in ELA and mathematics and increased
the high school Regents Diploma graduation rate, although the reform did not have a significant
impact on total high school graduation rates. Our results also show that the reform achieved an
intermediate goal of disrupting the much-criticized teacher tenure process that almost always
guaranteed tenure. The reform made it easier for principals to fire ineffective teachers and retain
effective ones. Our results therefore generally support the effectiveness of managing for results.
This study adds more evidence that performance management, if properly implemented,
can improve organizational performance. In the educational context, Childress et al. (2011)
argued that simply ratcheting up accountability without a concurrent effort to increase capacity
may undermine school performance. On the other hand, performance management without
strong accountability systems may not be effective. For example, Hvidman and Andersen (2014)
found that performance management had no effect on the performance of Danish public schools,
while Nielsen (2014) found that its main effect on school performance was negative or zero. The
measures of performance management used in both studies appear not to have captured strong
accountability mechanisms, such as rewards or punishments based on performance, and the
absence of such mechanisms may explain why these studies found no effect or a negative main
effect of performance management.
Combining the findings from our study and the aforementioned studies supports Moynihan’s
(2006) argument that the effective implementation of an MFR reform needs to be complete:
focusing on results should go hand in hand with delegated managerial authority; missing either
one risks the failure of the performance management reform.
From a broader perspective, our findings also support the argument that “management
matters.” Scholars have been interested in how management affects the performance of public
organizations for many years (Meier and O’Toole 2002; Moynihan and Pandey 2005; O’Toole
and Meier 1999). In an MFR reform, management exerts a positive influence, first, by clarifying
goals. Goal clarity has been found to be positively related to organizational performance (Jung
2014; Moynihan and Pandey 2005). Public managers often face ambiguous goals, which makes
it hard to demonstrate accomplishments, to estimate the type or amount of effort needed, or to
reallocate resources to their best use (Chun and Rainey 2005; Jung 2014). In an MFR reform,
goals are typically clearly defined, which gives managers a clear direction and motivates them
to work toward achieving those goals. In the “Empowerment Zone” case, student learning gains
were the key outcome, and specific measures such as standardized test scores and graduation
rates were used. Principals were thus motivated to work toward these goals.
Second, continuous monitoring of performance gives organizations timely feedback to
correct mistakes and ensure that they stay on the right track (Nielsen 2014). Using performance
information is an important part of organizational learning and has been found to be positively
associated with organizational performance (Kroll 2015; Moynihan and Landuyt 2009). In the
case of the Empowerment Zone, such feedback allowed principals to compare their schools'
current performance against their goals and to keep making necessary adjustments.
Third, decision-making is decentralized to front-line managers who have the local
knowledge and information advantage. Scholars in organization studies have argued that more
decentralized forms of organizing, or multidivisional structures, improve performance (Ouchi
2006). The New Public Management movement likewise holds that decision-making authority
should be decentralized to empower the front-line managers who have the most extensive local
knowledge, and extant research has argued that centralized decision-making is negatively related
to organizational performance (Moynihan and Pandey 2005). In this research context, the
authority and autonomy that principals had enabled them to allocate their funding to meet the
needs that were unique to their schools (e.g. offering extra-curricular activities for students;
training teachers), make decisions about instruction and curriculum that best fit the unique
characteristics of their students, and build an effective teacher team.
Another big takeaway is that people should be patient when it comes to the results of
reforms. It takes time for an MFR reform to produce impacts. The "Empowerment Zone" reform
did not significantly improve students' ELA performance in the first year; the effect only
became marginally significant in the third year. Statistically significant improvements in
Regents Diploma graduation rates appeared in the second year, and increases in the mathematics
proficiency rate and in the turnover rates of teachers with fewer than five years of experience
were not felt until the third year. That the reform did not produce immediate impacts is not
surprising. Principals need
time to diagnose problems, act on performance information to design corrective measures, and
rebuild an effective teaching team by firing incompetent teachers and retaining
effective ones. This certainly speaks to how hard it is to improve school performance, but similar
processes should be expected for reforms in other areas. Public managers may need to overcome
significant barriers to make changes happen. No reform is easy, and people should give public
managers not only the opportunity but also the time to turn things around.
In terms of implications for future research, we believe it is necessary to keep examining
the effects of different components of performance management. Performance management is a
broad concept, and bundling different components together may obscure important differences.
In addition, given the abundance of studies using perceptual measures of performance
management or its components, more studies based on specific real-world performance
management reforms, such as the “Empowerment Zone”, are needed.
This research also has some limitations. One is that, due to data constraints, we were not
able to test the effect of the Empowerment Zone at the student level. Our results show that school
performance improved as a result of the reform, but we would learn more if we could test how
the reform affected the performance of individual students. Moreover, as in other
quasi-experimental studies, a common limitation is that the treatment and comparison groups
were not randomly assigned, so we cannot completely rule out the influence of extraneous
variables that only or mainly affected one of the two groups. The results of this study should be
interpreted in light of these limitations.
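Our differences-in-differences estimates compare the change in outcomes for Empowerment Zone schools with the change for comparison schools over the same period. The two-group, two-period core of that estimator can be sketched as follows; the data are simulated, and every number (baseline proficiency rates, group trends, the 5-point effect) is a hypothetical illustration rather than a result from this study:

```python
# Minimal two-group, two-period differences-in-differences sketch on
# simulated school-level proficiency rates. All parameters are hypothetical.
import random

random.seed(42)

TRUE_EFFECT = 5.0  # hypothetical treatment effect, in percentage points


def simulate(treated, post, n=500):
    """Simulate proficiency rates for one group-period cell.

    Treated schools start slightly higher (a level difference), and both
    groups share a common time trend; only treated schools in the post
    period receive the treatment effect.
    """
    base = 60.0 + (3.0 if treated else 0.0) + (2.0 if post else 0.0)
    effect = TRUE_EFFECT if (treated and post) else 0.0
    return [base + effect + random.gauss(0, 4) for _ in range(n)]


def mean(xs):
    return sum(xs) / len(xs)


cells = {
    ("treat", "pre"): simulate(True, False),
    ("treat", "post"): simulate(True, True),
    ("ctrl", "pre"): simulate(False, False),
    ("ctrl", "post"): simulate(False, True),
}

# DiD = (post-pre change for treated) minus (post-pre change for controls).
# Subtracting the control change removes the common time trend; subtracting
# each group's own pre-period mean removes fixed level differences.
did = (mean(cells[("treat", "post")]) - mean(cells[("treat", "pre")])) - (
    mean(cells[("ctrl", "post")]) - mean(cells[("ctrl", "pre")])
)

print(f"DiD estimate: {did:.2f} (true effect: {TRUE_EFFECT})")
```

In practice the estimate comes from a regression with additional controls, which also yields standard errors and year-by-year effects; the group-mean arithmetic above conveys only the identification logic and why it requires the two groups to share a common trend.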
References
Bifulco, Robert, William Duncombe, and John Yinger. 2005. "Does whole-school reform boost
student performance? The case of New York City." Journal of Policy Analysis and
Management no. 24 (1):47-72.
Boyd, Donald, Hamilton Lankford, Susanna Loeb, Jonah Rockoff, and James Wyckoff. 2008.
"The narrowing gap in New York City teacher qualifications and its implications for
student achievement in high-poverty schools." Journal of Policy Analysis and
Management no. 27 (4):793-818.
Boyne, George A., and Alex A. Chen. 2007. "Performance Targets and Public Service
Improvement." Journal of Public Administration Research and Theory: J-PART 17 (3):
455–77.
Childress, Stacey, Monica Higgins, Ann Ishimaru, and Sola Takahashi. 2011. “Managing for
Results at the New York City Department of Education.” In Education Reform in New
York City: Ambitious Change in the Nation’s Most Complex School System, 87–108.
Cambridge, MA: Harvard Education Publishing Group.
Chun, Young Han, and Hal G. Rainey. 2005. “Goal Ambiguity and Organizational Performance
in U.S. Federal Agencies.” Journal of Public Administration Research and Theory: J-
PART 15 (4): 529–57.
Clotfelter, Charles T., Helen F. Ladd, and Jacob L. Vigdor. 2007. How and Why Do Teacher
Credentials Matter for Student Achievement? NBER Working Paper No. 12828.
Cambridge, MA: National Bureau of Economic Research.
Coleman, James S. 1988. "Social capital in the creation of human capital." American Journal of
Sociology no. 94:S95-S120.
Croninger, Robert G., Jennifer King Rice, Amy Rathbun, and Masako Nishio. 2007. "Teacher
qualifications and early learning: Effects of certification, degree, and experience on first-
grade student achievement." Economics of Education Review no. 26 (3):312-324.
Dee, Thomas S., and Brian Jacob. 2011. “The Impact of No Child Left Behind on Student
Achievement.” Journal of Policy Analysis and Management 30 (3): 418–46.
Favero, Nathan, Kenneth J. Meier, and Laurence J. O’Toole. 2016. “Goals, Trust, Participation,
and Feedback: Linking Internal Management With Performance Outcomes.” Journal of
Public Administration Research and Theory 26 (2): 327–43.
French, Dan, Karen Miles, and Linda Nathan. 2014. “The Path Forward: School Autonomy and
Its Implications for the Future of Boston’s Public Schools.”
http://www.bostonpublicschools.org/cms/lib07/MA01906464/Centricity/Domain/238/BP
S_Report_2014_6-2-14.pdf.
Fryer, Roland G., and Steven D. Levitt. 2010. "An empirical analysis of the gender gap in
mathematics." American Economic Journal: Applied Economics no. 2 (2):210-240.
Gerrish, Ed. 2016. “The Impact of Performance Management on Performance in Public
Organizations: A Meta-Analysis.” Public Administration Review 76 (1): 48–66.
Goldhaber, Dan D., and Dominic J. Brewer. 1997. "Evaluating the effect of teacher degree level
on educational performance." In Developments in School Finance, 1996, edited by
William J. Fowler. Washington, DC: U.S. Department of Education.
Goldhaber, Dan D., and Dominic J. Brewer. 2000. "Does Teacher Certification Matter? High
School Teacher Certification Status and Student Achievement." Educational Evaluation
and Policy Analysis no. 22 (2):129-145.
Hanushek, Eric A. 1979. "Conceptual and empirical issues in the estimation of educational
production functions." Journal of Human Resources no. 14 (3):351-388.
Hanushek, Eric A. 1986. "The economics of schooling: Production and efficiency in public
schools." Journal of Economic Literature no. 24 (3):1141-1177.
Hanushek, Eric A., John F. Kain, and Steven G. Rivkin. 1999. Do Higher Salaries Buy Better
Teachers? NBER Working Paper Series. Cambridge, MA: National Bureau of
Economic Research.
Hanushek, Eric A., and Margaret E. Raymond. 2005. “Does School Accountability Lead to
Improved Student Performance?” Journal of Policy Analysis and Management 24 (2):
297–327.
Hanushek, Eric A., and Steven G. Rivkin. 2004. "How to improve the supply of high-quality
teachers." Brookings Papers on Education Policy (7):7-44.
Heinrich, Carolyn J. 2002. “Outcomes-Based Performance Management in the Public Sector:
Implications for Government Accountability and Effectiveness.” Public Administration
Review 62 (6): 712–25.
Hvidman, Ulrik, and Simon Calmar Andersen. 2014. “Impact of Performance Management in
Public and Private Organizations.” Journal of Public Administration Research and
Theory 24 (1): 35–58. doi:10.1093/jopart/mut019.
Jepsen, Christopher, and Steven Rivkin. 2009. "Class size reduction and student achievement:
The potential tradeoff between teacher quality and class size." Journal of Human
Resources no. 44 (1):223-250.
Jung, Chan Su. 2014. “Extending the Theory of Goal Ambiguity to Programs: Examining the
Relationship between Goal Ambiguity and Performance.” Public Administration Review
74 (2): 205–19.
Kao, Grace. 1995. "Asian Americans as Model Minorities? A Look at Their Academic
Performance." American Journal of Education no. 103 (2):121-159.
Kelleher, Maureen. 2014. "New York City's Children First: Lessons in School Reform." Center
for American Progress.
Kelman, Steven, and John N. Friedman. 2009. “Performance Improvement and Performance
Dysfunction: An Empirical Examination of Distortionary Impacts of the Emergency
Room Wait-Time Target in the English National Health Service.” Journal of Public
Administration Research and Theory 19 (4): 917–46.
Kettl, Donald F. 1997. “The Global Revolution in Public Management: Driving Themes, Missing
Links.” Journal of Policy Analysis and Management 16 (3): 446–62.
Kroll, Alexander. 2015. “Exploring the Link Between Performance Information Use and
Organizational Performance: A Contingency Approach.” Public Performance &
Management Review 39 (1): 7–32.
Krueger, Alan B. 1999. "Experimental Estimates of Education Production Functions." The
Quarterly Journal of Economics no. 114 (2):497-532.
Lee, Jennifer, and Min Zhou. 2015. The Asian American Achievement Paradox. New York:
Russell Sage Foundation.
Lee, Valerie E., and Susanna Loeb. 2000. "School size in Chicago elementary schools: Effects
on teachers' attitudes and students' achievement." American Educational Research
Journal no. 37 (1):3-31.
Loeb, S., L. C. Miller, and J. Wyckoff. 2015. “Performance Screens for School Improvement:
The Case of Teacher Tenure Reform in New York City.” Educational Researcher 44 (4):
199–212. doi:10.3102/0013189X15584773.
Loveless, Tom. 2015. How well are American students learning? With sections on the gender
gap in reading, effects of the Common Core, and student engagement. Washington, DC:
Brookings Institution.
Meier, Kenneth J., and Laurence J. O’Toole. 2002. “Public Management and Organizational
Performance: The Effect of Managerial Quality.” Journal of Policy Analysis and
Management 21 (4): 629–43.
Moynihan, Donald P. 2005. "Why and How Do State Governments Adopt and Implement
'Managing for Results' Reforms?" Journal of Public Administration Research and Theory
15 (2): 219–43.
———. 2006. "Managing for Results in State Government: Evaluating a Decade of Reform."
Public Administration Review 66 (1): 77–89.
———. 2008. The Dynamics of Performance Management. Washington, DC: Georgetown
University Press.
Moynihan, Donald P., and Sanjay K. Pandey. 2005. “Testing How Management Matters in an
Era of Government by Performance Management.” Journal of Public Administration
Research and Theory 15 (3): 421–39.
———. 2010. “The Big Question for Performance Management: Why Do Managers Use
Performance Information?” Journal of Public Administration Research and Theory 20
(4): 849–66.
New York City Department of Education. 2006. “EMPOWERMENT SCHOOL
PERFORMANCE AGREEMENT 2006-2007.”
http://www.crpe.org/sites/default/files/NYCDOE%20Performance%20Agreement.pdf.
New York City Department of Education. 2016. Summary of NYSED Regulations as of April
2016. New York.
Nielsen, Poul A. 2014. “Performance Management, Managerial Authority, and Public Service
Performance.” Journal of Public Administration Research and Theory 24 (2): 431–58.
O’Toole, Laurence J., and Kenneth J. Meier. 1999. “Modeling the Impact of Public
Management: Implications of Structural Context.” Journal of Public Administration
Research and Theory 9 (4): 505–26.
Ouchi, William G. 2006. “Power to the Principals: Decentralization in Three Large School
Districts.” Organization Science 17 (2): 298–307.
Patrick, Barbara A., and P. Edward French. 2011. "Assessing New Public Management's Focus
on Performance Measurement in the Public Sector." Public Performance & Management
Review 35 (2): 340–69.
Poister, Theodore H., Obed Q. Pasha, and Lauren Hamilton Edwards. 2013. “Does Performance
Management Lead to Better Outcomes? Evidence from the U.S. Public Transit Industry.”
Public Administration Review 73 (4): 625–36.
Ready, Douglas D., Laura F. LoGerfo, David T. Burkam, and Valerie E. Lee. 2005.
"Explaining girls’ advantage in Kindergarten literacy learning: Do classroom behaviors
make a difference?" The Elementary School Journal no. 106 (1):21-38.
Rivkin, Steven G., Eric A. Hanushek, and John F. Kain. 2005. "Teachers, Schools, and
Academic Achievement." Econometrica no. 73 (2):417-458.
Ronfeldt, Matthew, Susanna Loeb, and James Wyckoff. 2013. "How Teacher Turnover Harms
Student Achievement." American Educational Research Journal no. 50 (1):4-36.
Rumberger, Russell W., and J. Douglas Willms. 1992. "The impact of racial and ethnic
segregation on the achievement gap in California high schools." Educational Evaluation
and Policy Analysis no. 14 (4):377-396.
Schwartz, Amy Ellen, and Leanna Stiefel. 2001. "Measuring School Efficiency: Lessons from
Economics, Implications for Practice." In Improving Educational Productivity, edited by
David H. Monk, Herbert J. Walberg and Margaret C. Wang, 115-137. Greenwich, CT:
Information Age Publishing.
Schwartz, Amy Ellen, and Jeffrey E. Zabel. 2005. "The Good, the Bad, and the Ugly: Measuring
School Efficiency Using School Production Functions." In Measuring School
Performance and Efficiency: Implications for Practice and Research, edited by Leanna
Stiefel, Amy Ellen Schwartz, Ross Rubenstein and Jeffrey E. Zabel, 37-66. Larchmont,
NY: Eye on Education.
Sirin, Selcuk R. 2005. "Socioeconomic status and academic achievement: A meta-analytic
review of research." Review of Educational Research no. 75 (3):417-453.
Steinberg, Matthew P. 2013. “Does Greater Autonomy Improve School Performance? Evidence
from a Regression Discontinuity Analysis in Chicago.” Education Finance and Policy 9
(1): 1–35.
Stiefel, Leanna, Amy Ellen Schwartz, and Ingrid Gould Ellen. 2007. "Disentangling the racial
test score gap: Probing the evidence in a large urban school district." Journal of Policy
Analysis and Management no. 26 (1):7-30.
Stiefel, Leanna, Amy Ellen Schwartz, and Matthew Wiswall. 2015. "Does small high school
reform lift urban districts? Evidence From New York City." Educational Researcher no.
44 (3):161-172.
Sun, Rusi, and Gregg G. Van Ryzin. 2014. “Are Performance Management Practices Associated
With Better Outcomes? Empirical Evidence From New York Public Schools.” The
American Review of Public Administration 44 (3): 324–38.
Sun, Yongmin. 1998. "The Academic Success of East-Asian–American Students—An
Investment Model." Social Science Research no. 27 (4):432-456.
Todd, Petra E., and Kenneth I. Wolpin. 2007. "The production of cognitive achievement in
children: Home, school, and racial test score gaps." Journal of Human Capital no. 1
(1):91-136.
United Federation of Teachers. 2016. Salary Differentials [cited November 12 2016]. Available
from http://www.uft.org/our-rights/salary-differentials.
Van Langen, Annemarie, and Hetty Dekkers. 2001. “Decentralisation and Combating
Educational Exclusion.” Comparative Education 37 (3): 367–84.
Wei, Xin, Keith B. Lenz, and Jose Blackorby. 2012. "Math growth trajectories of students with
disabilities: Disability category, gender, racial, and socioeconomic status differences
from ages 7 to 17." Remedial and Special Education.
Wilson, James Q. 1991. Bureaucracy: What Government Agencies Do and Why They Do It.
New York, NY: Basic Books.
Yang, Kaifeng, and Jun Yi Hsieh. 2007. “Managerial Effectiveness of Government Performance
Measurement: Testing a Middle-Range Model.” Public Administration Review 67 (5):
861–79.