
What is the Effect of Performance Measurement on Perceived Accountability Effectiveness

in State and Local Government Contracts

Anna Amirkhanyan

ABSTRACT

Designing and implementing performance measurement systems in public contracts is not an

easy task. Little guidance has been available on which specific measures work better in

producing certain managerial benefits. The objective of this study is to evaluate the effect of

different performance measurement practices on accountability effectiveness in government

contracts. The findings suggest that the overall scope of performance measurement has a

positive impact on the government’s ability to effectively manage contracts. More specifically,

measuring costs, client impact, service timeliness and disruptions, as well as specifying the

detailed processes for service delivery are associated with higher accountability effectiveness.

On the other hand, evaluating quality and client satisfaction and using informal monitoring techniques have a negative impact on perceived accountability effectiveness. The results of this study provide motivation for contract managers to optimize performance monitoring and

reduce transaction costs by relying on the measures that are more likely to improve contract

implementation.

INTRODUCTION

In his inaugural address President Barack Obama noted that the question Americans

should be asking today is not whether the government is too big or too small, but whether it

works. This once again renewed attention to performance in the environment of financial crisis

is especially relevant to the large number of privatized services delivered by nonprofit and for-

profit organizations. While performance monitoring and measurement have a long history in the

United States, these functions have traditionally received little attention from government managers (Van

Slyke 2003). Little is known about the effectiveness and relative efficacy of measures used by

the public managers overseeing contract implementation (Ho 2006; Yang, Hsieh, and Li 2009).

The value of performance monitoring and measurement is questionable even in localities known

for contracting out virtually all of their services (Prager 2008). Nonetheless, performance

measurement involves critical management decisions that “operationalize policies” and “provide

real-world specificity to abstract ideas and policy and are therefore of great consequence”

(Cohen and Eimicke 2008, 151).

Designing and implementing performance measurement systems in government agencies

and private contracted organizations is not an easy task both theoretically and practically

(Heinrich 2002). The complex nature of performance necessitates the use of multiple measures

to capture all aspects of organizational well-being. Importantly, these oversight systems are not

static: some scholars recommend using detailed performance metrics initially and later

simplifying them, abandoning some measures and focusing on others in order to build trust

between parties (Linder 2004). To date, little guidance is available on which specific measures

work better in producing certain managerial benefits. In this connection, Cohen and Eimicke

(2008, 155) note: “[t]he challenge for the managers is how to create a set of measures that is

comprehensive and still limited enough to focus the organization on what is most important.”

Understanding how to build the optimal performance measurement practices is especially

important since these activities are almost always associated with higher administrative costs

(Melkers and Willoughby, 2005; Van Slyke 2003, 306; Zimmerman and Stevens 2006). This

goal is particularly relevant in the context of public-private partnerships that are subject to

informational asymmetry and opportunistic behavior of private actors.

This research examines performance measurement practices with the purpose of

understanding their benefits. While the long term programmatic outcomes of contracted services

are often hard to verify, managerial outcomes can become useful proxies for organizational

performance. Of the many managerial impacts of performance measurement this study focuses

on accountability effectiveness1 (Romzek and Johnston 2002). Both “accountability” and

“effectiveness” have been widely used in a variety of contexts and defined somewhat differently.

Our approach here is narrower and specific to the context of government contracting. In this

field, the concept of accountability effectiveness has been proposed by Romzek and Johnston

(2002, 2005) to describe the capacity of a government agency “to design, implement, manage,

and achieve accountability for its social service contracts. This includes the state’s ability to

obtain timely and accurate reporting from the contractor and to use that information to evaluate

performance and correct deficiencies” (Romzek and Johnston, 2005: 437). Accountability

effectiveness is different from the overall program results; instead, it refers to the managerial

effectiveness in contract implementation. The first objective of this study is to evaluate the

influence of the scope of performance measurement on accountability effectiveness in a sample

of state and local government contracts. The second objective is to examine the effect of fifteen

distinct performance measurement practices on contract accountability effectiveness. Past

research suggests that some performance monitoring systems are developed unilaterally by the

government agency, while others result from a collaborative dialogue between the government

and the contractor (Amirkhanyan 2009). Hence, the third objective of this study is to determine

whether these collaborative measurement efforts affect accountability effectiveness. The results

of this study may help contract managers to optimize the process of performance monitoring and

reduce transaction costs by focusing on a shorter list of measures that are likely to improve the

governments’ capacity to implement contracts while maintaining high levels of accountability.

PERFORMANCE MEASUREMENT AND ACCOUNTABILITY

Performance measurement is the collection, reporting, and review of data reflecting

various aspects of organizational performance, including service quality, cost-effectiveness and

others (Blasi 2002, 531; Cohen and Eimicke 2008). As the classical upward accountability and

compliance with the formal rules in the public sector gradually gives way to the new concept of

managerial answerability to a variety of actors operating in the “hollow state,”2 the demands for

optimizing the performance measurement process increase (De Vries 2007; Holzer and Kloby

2005). The measures currently used to evaluate publicly delivered as well as privatized

programs are diverse and complex (Behn 2003; Blasi 2002; Boyne 1998; Byrnes, Freeman, and

Kauffman 1997; Callahan and Kloby 2007; Dilger, Moffett and Struyk 1997). They include

organizational inputs, processes, outputs and outcomes; reflect internal capacities and external

perceptions, and provide both quantitative and qualitative accounts of organizational activities

(Blasi 2002; Cohen and Eimicke 2008; Dalehite 2008; Dilger, Moffett and Struyk 1997; Kelly

and Swindell 2002a).

As the U.S. government continues to rely on contractors in the delivery of public services

(Hefetz and Warner 2007), measuring and monitoring the contractors’ performance is as critical

as in the public sector. While, ideally, competitive private markets replace the government

monopolies and create pressures to improve performance (Savas 2000, 2003), the government

agencies need to stay involved in contract management since they are ultimately responsible for

the outcomes (Brown and Potoski 2006; Brown, Potoski, and Van Slyke 2006; Goldsmith and

Eggers 2004; Johnston and Romzek 1999; Prager 1994; Rubin 2006). Close monitoring is

necessary because (a) market competition is limited in many areas (Johnston and Girth 2010) and

contractors in such markets can behave opportunistically; (b) market forces may fail to pressure

the contractors to achieve some publicly important outcomes, such as equal access or legal

compliance (Chan and Rosenbloom 2010), and (c) even in competitive markets the information

about the contractor and the services is often incomplete, which makes it difficult to choose the

right contractor (Arrow 1984; Eisenhardt 1989). Empirical studies of performance monitoring

confirm that contractors are in fact monitored at least as intensively and closely as the programs

delivered in-house (Marvel and Marvel 2007).

Contract management involves four important policy choices, including the make-or-buy

decision (or agenda setting), contracting formulation, implementation, and evaluation (Brown

and Potoski 2003; Yang, Hsieh, and Li 2009). The specific managerial activities pursued during

these phases include evaluation of provider markets, political feasibility assessments, initial

examination of service characteristics, pre-award conferences, ongoing contract specification,

communication, data collection, reporting, inspections, sanctioning, terminations, and renewals.

Most of these activities have the goal of improving the contractor’s performance.

Elements of performance measurement are in fact present in each of the four domains of

contract management. During the agenda setting phase, governments consider service

measurability – the ease of developing a clear set of quantifiable and reliable measures – and,

accordingly, determine the feasibility of privatization (Amirkhanyan, Kim and Lambright 2007).

The contract formulation phase involves planning for performance measurement and the

preliminary measurement of the contractors’ past performance (Yang, Hsieh, and Li 2009). The latter

includes investigating the contractor’s reputation, reviewing its past service quality, management

capacity, regulatory compliance and professional certifications. At this stage, the partners discuss

the basic approaches to service delivery, standards and expectations. The contract

implementation and evaluation phases often run in parallel, with the service delivery being

observed, inspected, recorded, and reported upon, while the partners clarify and, sometimes,

modify performance standards in a collaborative fashion (Amirkhanyan 2009).

The specific measures used within a contracting arrangement may be similar to those

used in-house such as cost-effectiveness, service scope, quality, regulatory compliance and equal

access. The measurement process, however, may be organized in different ways. First, several

parties may be responsible for contract evaluation. It may be outsourced or performed directly by

the government (Brown and Potoski 2006), and may utilize data that are self-reported, observed

by the government agency, or collected by a third-party (Cohen and Eimicke 2008, 151).

Second, different contract monitoring theories may underlie these efforts. One approach

aspires to achieve complete contract specification and involves defining all contingencies,

expectations, standards, inputs, processes, outputs, and outcomes at the onset of the contract and

tracking these data through a pre-determined monitoring and reporting procedure (Milgrom and

Roberts 1992). Due to a high degree of goal ambiguity and environmental uncertainty within

many service fields, achieving complete contract specification is regarded as impossible (Brown, Potoski and Van Slyke 2006; Tirole 1999), and its practical implementation raises concerns about micromanagement and excessive hierarchical control, which undermine the very idea of market-based alternatives. As an alternative approach, the New Public Management movement

suggests focusing on the end results in performance measurement. Performance-based

contracting focuses the government’s attention on a set of outcomes, allowing the contractors to

determine the inputs and the process, and linking the outcomes to monetary rewards (Goldsmith

and Eggers 2004; Heinrich 1999). While this mode of contracting may save costs (Straub 2009)

and generate data for resource allocation decisions (Heinrich 1999), several problems have been

noted. The performance standards are often not well-designed (Heinrich 1999), the sanctions are

rarely executed in response to inadequate performance (Cohen and Eimicke 2008; Van Slyke

2007), and the democratic process of service delivery may be undermined (Chan and

Rosenbloom 2010). The third contract management philosophy – relational or cooperation

contracting – expands its focus back to inputs and processes and a broader set of democratic

values, such as fairness and equal treatment. Rather than specifying all measures and standards

in advance, relational contracts are long-term open-ended trust-based partnerships that allow

performance criteria to evolve in response to the changing conditions; here, informal and

professional pressures replace the rigid sanctions; and the parties work collaboratively on

overcoming the obstacles (Allen 2002; Amirkhanyan 2009; Beinecke and DeFillippi 1999;

Campbell and Harris 1993; Davis 2007; DeHoog 1990; Sclar 2000; Smith 2005). Amirkhanyan

(2009) cites numerous examples of the more informal performance measurement practices: e.g.,

a contracting officer voluntarily attends the theater performances involving incarcerated youth –

a service provided by a for-profit arts company – and informally seeks the participants’ and their

parents’ feedback, which she does not have to formally record or report. While the relational model attempts to address the limitations of the previous approaches, the contractors’ close involvement

in performance evaluation raises objectivity related concerns by biasing the agency towards a

particular contractor (DeHoog 1990; Van Slyke 2003, 306).

Irrespective of the theoretical underpinnings of contractor performance measurement, the

latter remains among “the most serious management challenges” facing public managers

(Kelman 2002, 312). Similar to the publicly administered programs, contract outcomes are

difficult to quantify, and the pursued goals may be diverging and unclear (Meyers, Riccucci, and

Lurie 2001; Riccucci 2005). Cost concerns may supersede quality considerations, and the

performance measures may not reflect the initial goals (Heinrich 1999). These problems not

only increase the costs of contract management (Sclar 2000), but also raise the central issue of

government contracting that has to do with accountability (Hodge 2000). As Heinrich writes:

“[u]seful performance management systems will improve programs by assisting public managers

to identify poor performers, to follow-up with corrective actions, and to reward good performers

and replicate their approaches” (Heinrich 1999, 367). The question of the main features and

components of such useful performance measurement systems remains largely unanswered.

Despite the long history and scope of evaluation efforts, both organizational performance

and contract management literatures continue to ask whether performance measurement matters

(Ho 2006). While performance measurement is eventually expected to be translated into better

programmatic outcomes and have some political or symbolic effects, the more immediate impact

of these practices is expected to be on management (Moynihan 2005; Yang and Hsieh 2007).

Specifically, performance measurement has been viewed as “the newest method of ensuring

accountability” (Zimmerman and Stevens 2006, 315). While it is often assumed that

performance measurement leads to better accountability, few studies actually explored this

question. In the broader public sector literature, Ho (2006) and Berman and Wang (2000) suggest

that performance measurement leads to improved perceived accountability of government

agencies. This question has not been examined in the government contracting literature.

Moreover, no data exist on whether the scope and type of measures may improve accountability

effectiveness in government contracts.

The first objective of this study is to determine whether the scope of performance

measurement in government contracts can positively influence accountability effectiveness in

government contracts. Here, the term “scope” refers to the aggregate of all types of performance

measurement activities used to monitor and evaluate a contract. Using multiple measures of

contractor performance can help generate more performance data and reveal service delivery

problems. Multiple measures are likely to supply different kinds of information – qualitative and

quantitative, perceptual and more objective – drawing a more comprehensive picture of the

contractor’s performance. If one measure helps identify a case of substandard performance

associated with one specific aspect of service delivery, other measures may help verify and

clarify the extent of the problem and determine its impact on other aspects of performance or

other stakeholders involved in the implementation process. For example, increasing costs of

production may be interpreted differently depending on the associated changes in client welfare,

service quality, and compliance with the regulatory requirements. Thus, we expect the scope of

performance measurement to improve perceived accountability effectiveness.

The second objective of this study is exploratory: to examine fifteen distinct performance

measurement and monitoring techniques and to evaluate their individual effects on the overall

accountability effectiveness in government contracts. As mentioned earlier, government

agencies use a variety of performance measurement techniques focusing on costs, quality, impact

on service recipients, timeliness, compliance with the laws, fairness, reputation, and customer

satisfaction; they collect qualitative and quantitative data, and rely on formal and informal ways

of collecting and handling information. Past research found that the so-called higher-order

measures, such as efficiency, are more likely to influence management and operation of public

organizations rather than the lower-order measures, such as workload and outputs (Ammons and

Rivenbark 2008). Thus, our goal is to determine which performance measures (detailed in the

methodology section) are more likely to be associated with the more effective accountability

relationships in government contracts. On one hand, the data on contractor performance

outcomes, such as the impact of services on the clients, or service quality, should be critical for

the agency’s ability to evaluate performance, detect deficiencies and correct them. On the other

hand, quality and impact data may be hard to obtain and interpret, while using the more straightforward data on costs, compliance with industry laws, and service timeliness and disruptions may

be more feasible in order to detect and promptly correct performance problems.

The third objective of this study has to do with how these measures are developed. The

literature on performance measurement found evidence of both the government agencies and the

private contractors participating in the formation of monitoring systems. Amirkhanyan (2009)

found that a variety of collaborative activities are employed by both parties: monitoring officers

seek their contractors’ input on performance evaluation, meanwhile the contractors develop and

propose new measures and actively negotiate the existing monitoring arrangements. Several

other studies also provide evidence of multiple parties participating in the development and

implementation of performance measures (Heikkila and Isett 2007; Holzer and Kloby 2005;

Romzek and Johnston 2002, 428). Such participatory mechanisms, though complex and lengthy,

ensure that all decision-makers understand the background and the advantages of the

measurement process and perceive the monitoring systems as credible (Kravchuk and Schack

1996). Thus, collaboration between the agency and the contractor in the process of developing

performance measurement mechanisms might affect the timeliness and the accuracy of

performance data and influence the likelihood of imposing the sanctions and eventually

correcting the deficiencies. In this connection, the third objective of this study is to determine

whether collaborative practices used to develop performance measurement and monitoring

systems positively influence the governments’ ability to manage their contracts effectively.

Accountability in contract implementation is a function of many other factors, such as the

degree of trust between the agency and the contractor. As detailed in the next section, in this

analysis we will control for the effect of several organizational and environmental factors.

METHODS

Data3

Sixty-nine interviews4 were conducted with government contract managers in state and local

government agencies as well as the managers of nonprofit and for-profit organizations contracted

by these governments.5 Jurisdictions under consideration included the District of Columbia, three

adjacent counties, and one adjacent state.6 Searchable open-ended online listings of contracted

services were accessed on the procurement office web sites of each jurisdiction.7 While stratified random sampling would strengthen the design of this study, the lack of jurisdiction-level data on the proportional distribution of service fields made this strategy impractical.

Purposive sampling, which involves identification of cases that appear to represent the

population with the purpose of capturing a broad range of its characteristics, was appropriate and

practical considering the objectives of the study and the data sources.

The sampling procedure started with an extensive review of service contract listings on

the procurement web sites of the five examined jurisdictions. I sought to maximize the

representation of service fields, service measurability, award amounts, vendor ownership status

and other factors. After reviewing over two hundred contracts as well as the organizational

structures of each jurisdiction, a preliminary list of service areas was created (shown in Table 1).

Contracts for long term and medical care, construction and maintenance, management consulting

services, and IT were very prevalent in the listings, and were also prominently represented in the

sample. Respondents associated with the relatively infrequent contracts such as translation,

parking meter maintenance or animal care were also included, which helped improve the

variation of award amounts. This study avoided two-sided representation of contracts, i.e., public and private respondents were not associated with the same contract. While

this does not allow us to examine the same contractual arrangements from the two sides,

independence of observations in the sample allows us to run a single regression model without

biasing the result. A control variable indicating each respondent’s public or private status is

included in all regressions (coded as one for public respondents and zero for the contractors).

This study uses a non-probability (purposive) sample, and its limitations (e.g., the

possibility of over-representing certain service areas) may apply to this analysis. However, by

diversifying the service fields, this study makes a contribution to the contracting literature by

capturing the richness of performance measurement strategies used by state and local

governments. Specifically, it was critical to incorporate contracts falling on the wide spectrum of

measurement efforts considering the 15 identified performance measurement tactics detailed

below. This would be difficult to achieve in a sample of contracts in the same field. Also, while

the sample appears quite heterogeneous, the listings reviewed by the author were also

characterized by high levels of diversity. Thus, the sample appears to be representative; however, it is most representative of the five studied jurisdictions.8

After two pre-tests, the final sample of thirty-nine public employees and thirty private

managers was interviewed by the investigator. Program officers were interviewed in 96% of all public-sector interviews; procurement officers knowledgeable about monitoring procedures were interviewed in the remaining 4% of cases. Government officers who oversaw

multiple contracts were asked to discuss the most typical contract they monitored. Each person

participated in one interview, which took approximately one hour. For-profit vendors predominated among both public and private respondents: sixty-seven percent of public contract managers discussed a contract with a for-profit organization, with the remaining thirty-three percent discussing a contract with a nonprofit organization. Among private respondents, sixty-three percent were for-profit, while thirty-seven percent were

nonprofit. As shown in table 1, numerous service fields have been examined, including health

and psychological care (e.g., nursing home care or special therapy for incarcerated youth), as

well as management consulting (e.g., public program design and evaluation), construction and

maintenance (e.g., waste management equipment maintenance) and many others. Appendix A

presents six short vignettes describing six typical contracts included in the sample and

characterized by high, medium, and low levels of performance measurement and monitoring.

Dependent Variable

Six items, administered to both public and private respondents in the sample, were used to

measure the perceived levels of accountability effectiveness. These items are based on the

operational definition provided by Romzek and Johnston (2002, 2005). Respondents were asked

if they (a) strongly agreed, (b) somewhat agreed, (c) somewhat disagreed, and (d) strongly

disagreed with each of the following six statements:

1. The contractor accurately complies with our performance measurement requirements.9

2. We receive all the necessary information in a timely manner.

3. The information we receive is accurate.

4. We use various sanctions in cases when the contractor fails to provide timely and

accurate information on their performance.

5. Performance measures that we use help us reveal inadequate performance of our

contractor.

6. In general, we are very effective in terms of our ability to manage and implement this

contract.

While originally the sum of these six items was intended to be used as the dependent

variable in regression analysis, the obtained scale had an overall raw Cronbach alpha of only 0.6.

Exploratory factor analysis identified two underlying factors: the first one strongly correlated with items 1, 2, and 3, and the second one with items 5 and 6.10 Based on this analysis, three dependent variables were created:

a) Contractor complies by providing timely and accurate information: an ordinal 12-point scale, representing the sum of items 1 through 3 listed above.

b) Government reveals problems and effectively manages the contract: an ordinal 8-point scale, representing the sum of items 5 and 6 listed above.

c) Government uses sanctions: item 4 above.

Importantly, this research focuses on the managers’ perceived accountability effectiveness. Certainly, actual accountability effectiveness exists independently of the agencies’ and vendors’ perceptions thereof, but this study focuses on the respondents’ subjective evaluation of this phenomenon.
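For illustration, the following minimal sketch shows how this scale construction could be carried out in Python; the data and the column names q1 through q6 are hypothetical stand-ins for the six survey items, not the study’s actual variables.

```python
# Minimal sketch of the dependent-variable construction described above.
# The DataFrame and column names are hypothetical; items are assumed to be
# scored 1 (strongly disagree) through 4 (strongly agree).
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Raw Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of item sum)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Hypothetical data: one row per respondent (n = 69), one column per survey item.
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 5, size=(69, 6)),
                         columns=[f"q{i}" for i in range(1, 7)])

alpha_all6 = cronbach_alpha(responses)  # the paper reports a raw alpha of only 0.6

# Factor analysis grouped items 1-3 and items 5-6, so three DVs are built:
dv_complies  = responses[["q1", "q2", "q3"]].sum(axis=1)  # the 12-point ordinal scale (a)
dv_reveals   = responses[["q5", "q6"]].sum(axis=1)        # the 8-point ordinal scale (b)
dv_sanctions = responses["q4"]                            # single ordinal item (c)
```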

Independent Variables

The central independent variables of interest are the performance measurement practices used in

the course of contract monitoring as well as the collaborative practices used to develop these

measures. As shown in Appendix B, respondents were asked to recall if the government agency

“collected, monitored, or evaluated information” on fifteen distinct aspects of contractor

performance such as costs, quality, workload, impact on clients, customer satisfaction and

several others. Past studies have effectively used dichotomous (i.e., “yes” and “no”) response

categories to study performance measurement capacity in the contracting setting (Brown and

Potoski 2003). In this study, positive responses to each of the fifteen items, indicating that a

particular aspect of performance was indeed evaluated by the government, were coded as one.

Negative and the “don’t know/don’t recall” answers were coded as zero. In addition to the

fifteen variables measuring each of these performance measurement practices, the sum of all

fifteen items was calculated for each respondent. This variable – the number of performance

measurement techniques used – reflects the scope of performance measurement in each contract.

Six questions helped identify the collaborative practices used by the government and the

contractor in performance measurement. Questions 2 through 7, shown in Appendix B, were

used to create six dummy variables (asking for input, contractor negotiation, input incorporated,

communication affects performance, contractor seeking clarification, and government seeking

clarification), each coded as one for affirmative responses. The variable collaborative performance measurement index, the sum of these six items, was used in the regression analysis.
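As a concrete illustration of this coding scheme, the Python sketch below codes affirmative answers as one, codes negative and “don’t know/don’t recall” answers as zero, and computes the two summed indices; all column names are hypothetical placeholders for the Appendix B items.

```python
# Minimal sketch of the independent-variable coding described above.
# Raw-answer strings and column names are hypothetical placeholders.
import pandas as pd

MEASURE_ITEMS = [f"measure_{i}" for i in range(1, 16)]  # the 15 performance aspects
COLLAB_ITEMS = ["asking_for_input", "contractor_negotiation", "input_incorporated",
                "communication_affects_performance", "contractor_seeking_clarification",
                "government_seeking_clarification"]      # questions 2-7 in Appendix B

def code_items(raw: pd.DataFrame) -> pd.DataFrame:
    coded = raw.copy()
    for col in MEASURE_ITEMS + COLLAB_ITEMS:
        # "yes" -> 1; "no" and "don't know/don't recall" -> 0
        coded[col] = (coded[col].str.lower() == "yes").astype(int)
    # Scope of performance measurement: number of techniques used (0-15).
    coded["n_measures"] = coded[MEASURE_ITEMS].sum(axis=1)
    # Collaborative performance measurement index: sum of the six dummies (0-6).
    coded["collab_index"] = coded[COLLAB_ITEMS].sum(axis=1)
    return coded
```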

Analysis

Since the three dependent variables were ordinal, ordered logit was used to examine the effect of performance measurement practices and their collaborative development on the accountability

effectiveness measures. A set of other controls measuring important organizational and

environmental factors was included in each model (described in Appendix C). For each of the

three dependent variables, sixteen regression models were obtained, using the following process:

In the first model, the number of performance measurement techniques used served as the central explanatory variable. In the subsequent fifteen models, the number of performance measurement techniques used was replaced with each of the fifteen individual performance measures.11

Regressions obtained with one of the three dependent variables – government uses sanctions – had low overall statistical power, and this model was excluded from the analysis. The models for the two remaining measures of accountability effectiveness – contractor complies by providing timely and accurate information and government reveals problems and effectively manages the contract – were statistically significant and produced interesting results.12
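For illustration, one such model could be estimated as in the following Python sketch, which uses the OrderedModel class from statsmodels; the variable names and the control set are hypothetical placeholders rather than the study’s full Appendix C specification.

```python
# Minimal sketch of the ordered logit setup described above; `df` is assumed
# to hold the coded interview data (see the earlier sketches) plus controls.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

MEASURE_ITEMS = [f"measure_{i}" for i in range(1, 16)]  # as coded earlier

def fit_ordered_logit(df: pd.DataFrame, dv: str, focal: str, controls: list):
    """Regress one ordinal accountability-effectiveness scale on a single
    focal performance-measurement variable plus the control set."""
    exog = df[[focal, "collab_index"] + controls]
    return OrderedModel(df[dv], exog, distr="logit").fit(method="bfgs", disp=False)

# Sixteen models per dependent variable: the scope variable first, then each
# of the fifteen individual measures swapped in as the focal regressor.
controls = ["public_respondent", "nonprofit", "relationship_length"]  # illustrative subset
fits = {focal: fit_ordered_logit(df, "dv_complies", focal, controls)
        for focal in ["n_measures"] + MEASURE_ITEMS}
```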

FINDINGS

Table 2 shows the prevalence of each measure in the sample. Consistent with the reviewed literature, the numbers vary considerably. Measuring quality, timeliness, continuity, and vendors’ workload, as well as using informal monitoring, is quite common. Less common

are the contracts focusing on client satisfaction and the impact of services on clients, as well as

those using quantitative and qualitative performance indicators. Finally, very few respondents reported monitoring cost-effectiveness,13 reputation, or the ability to provide services without discrimination, specifying detailed procedures for service delivery, or tailoring performance measurement to the contracted organization. Notably, government respondents are more likely

to report the use of each measure than the contractors. Several arguments have been proposed to

explain these findings (Amirkhanyan 2009). First, government monitors may experience

evaluation apprehension and tend to over-report their activities. Second, the contractors may not be aware of the extent of the government agency’s evaluation, especially if performance data are collected directly by the government. Third, contractors may be skeptical of the government’s efforts. Thus,

while government employees reported on their propensity to monitor performance, contractors

may have been commenting on the governments’ propensity to monitor performance effectively.

When asked if the government agency evaluated service quality, one private respondent

answered negatively and then elaborated: “They think they do! …But I really don’t think they do

it…” This finding was reiterated in other interviews.

Tables 3 and 4 provide descriptive statistics for the dependent variables which are

measured ordinally. Regression analysis pertaining to the first dependent variable – contractor

complies by providing timely and accurate information – is presented in table 5. Sixteen ordered

logit models were obtained by regressing the dependent variable on the total number of

performance measures (along with the control variables), and consequently, upon each of the

fifteen individual performance measures and the same set of controls. The total number of

measures and eleven out of fifteen individual performance measures had no significant influence

on this first measure of accountability effectiveness (results not shown). Meanwhile, four

measures produced significant results (see table 3). First, respondents who reported that their

contracts involved monitoring service costs reported significantly higher perceived levels of

contractor compliance and information timeliness and accuracy (Model 1). Similarly,

monitoring service disruptions had a positive effect on the dependent variable (Model 2). On the

other hand, monitoring service quality as well as monitoring contractors informally had a

negative association with contractor compliance and information timeliness and accuracy

(Models 3 and 4). Importantly, the collaborative performance measurement index was insignificant

in all models. Among the control variables, “hard” services were associated with reduced

contractor compliance and information quality. Working with a contractor that has unique expertise and was selected using a competitive bidding process improved perceived compliance

and timely/accurate information, while dynamic environments and use of performance

information “self-reported” by the contractors had a negative effect on the dependent variable.

Finally, public rather than private respondents and those with more extensive contract management experience were less likely to report high levels of perceived contract accountability effectiveness. This is a key finding suggesting that government program managers

and contractors give systematically different accounts to a third party about the degree to which

they felt the contractors were being held accountable by government agencies.

Regression analysis pertaining to the second dependent variable – government reveals

problems and effectively manages the contract – is presented in tables 6a and 6b. As explained

above, the dependent variable was regressed, consecutively, on the total number of performance

measures and each of the fifteen individual performance measures (along with the control

variables). This analysis also suggests that while some measures have a positive association with

the dependent variable, others have a negative effect. The overall scope of performance

measurement – the sum of all measures – had a positive effect on the government’s ability to

reveal problems and effectively manage contracts (Model 5). Measuring the impact of services on

clients and service equitability, measuring service timeliness and disruptions, as well as

specifying the details of service provision had a positive association with the dependent variable

(models 6, 8, 9, 10, and 11). Meanwhile, measuring client satisfaction and monitoring informally

had a negative effect on the government’s ability to reveal problems and effectively manage

contracts (models 7 and 12). Similar to models 1 through 4, collaborative development of

performance measurement practices had no effect on the second measure of perceived

accountability effectiveness. Nonprofit status of contracted organizations reduced the government’s

perceived ability to reveal problems and effectively manage contracts. The contractors’ financial

dependency was associated with a lower perceived accountability effectiveness. Monitoring done

directly by government agencies through inspections, observations, and other methods appeared

to improve perceived accountability. Government’s in-house professional capacity also

significantly improved accountability effectiveness. In two regressions, relationship length had a

positive association with the dependent variable. Finally, in two models (9 and 10), public sector

respondents had a significantly lower perception of accountability effectiveness than their private

counterparts.

DISCUSSION

Performance measurement activities are complex tasks that require specialized

knowledge and significant resources. Therefore, understanding the benefits of these activities is

useful to ensure a more efficient use of government funds. This study focuses on identifying specific performance measurement approaches that affect a government agency’s ability to hold

its contractors accountable. It attempts to address a question raised in the literature: “Is

monitoring always good and the more the better?” (Yang, Hsieh, and Li 2009, 681). Our findings

suggest that the answer to this question is not simple. Some measures appear to have no

significant impact on the government’s ability to collect good data and effectively manage contracts.

Furthermore, while some measures have a positive significant effect on perceived accountability

effectiveness, others have a negative impact. Notably, the overall scope of performance

measurement does in fact improve a public agency’s ability to reveal problems and effectively

manage contracts. More diverse performance measurement activities – those involving multiple

and complementary ways of evaluating the contractors’ work – appear to improve the agency’s

ability to detect performance problems and to sustain a perception of effective contract management.

While this does not necessarily suggest that “complete” contracting is the answer, it certainly

means that by diligently investing in multi-dimensional, diverse, and complex performance

measurement systems, agencies will receive a payoff in terms of improved perceived

accountability. This, perhaps, is the most important finding of this study relevant for the

practitioners: it suggests the importance of investing more time and effort in the design of multi-

faceted performance measurement systems. Nonetheless, some performance measures can

evidently be more effective than others.

When examining the timeliness and accuracy of information provided by the contractor

as well as the contractor’s propensity to comply with the requirements, monitoring service costs

has a positive impact. Privatization in the U.S. has been promoted primarily as a cost-cutting

tool, and it is not surprising that the examination of costs plays a central role in the oversight

process. While many government agencies may lack the capacity to effectively evaluate quality

and other non-financial aspects of performance, the review of financial data is a more traditional

and straightforward contract management procedure. Qualitative data provided by the respondents in this study support this assumption: several respondents described receiving regular financial reports from the contractor as a key element of contract evaluation, one that essentially ensures that the contract continues as intended.

Notably, monitoring quality has a negative effect on the respondent’s perception of the

contractor’s compliance and timely and accurate information sharing. This may suggest a

number of things: a lack of capacity to collect data and evaluate service quality (Gianakis 2002),

poorly designed quality monitoring tools, as well as ambiguous, unsatisfactory, or contradictory

information on service quality (Frederickson and Frederickson 2006; Kravchuk and Schack

1996; Nicholson-Crotty, Theobald, and Nicholson-Crotty 2006; Radin 2006). The general

performance measurement literature suggests that broad and ambiguous objectives of public

programs often make it difficult to measure success, and they introduce political tradeoffs

between multiple measures of quality, costs and others (Amirkhanyan, Kim and Lambright 2008;

Blasi 2002; Callahan and Kloby 2007; Frederickson and Frederickson 2006; Kravchuk and

Schack 1996). Obviously, it is easier to receive timely and accurate data on costs than on

quality. In this connection, several authors note that performance measurement is often

inaccurate due to the public managers’ failure to invest in long-term and continuous evaluation

that can provide the “helicopter view” of performance (Blasi 2002; Bouckaert and Peters 2002;

Courty and Marschke 2007).

Similarly, this study finds that the use of informal monitoring techniques in the oversight

process has a negative effect on both measures of perceived accountability effectiveness:

receiving timely and accurate performance information and using it to reveal problems. These

findings are important for the discourse on trust-based relational contract management. The use

of informal monitoring may suggest a lack of formally designed oversight systems. Our findings

may also reflect respondents’ frustration with the inadequacy of informal contract monitoring

tools. DeHoog (1990) warns that in cooperative contract management models, trust and informal ties may become more important than programmatic success, limit objectivity, and lead to collusion. Our findings may also suggest a missing variable in the model. Some

unobserved and potentially problematic characteristics of the contractor may prompt the public

agency to supplement the formal monitoring tools with the informal ones. This study suggests

that these strategies do not appear to improve perceived accountability effectiveness.

Notably, the monitoring of service timeliness and disruptions is also important for both

measures of effective contract accountability. Both of these items pertain to the process of

service delivery. They are relatively easy to track and may also be easy to interpret. A failure to

fix a parking meter undermines the timeliness of several other work orders, and clearly suggests

the contractor’s failure to perform well. Many contracts in our sample involved monitoring

systems (some computerized) that would promptly inform the government agency of any service

disruptions. This type of data serves as a first signal of a problem and is therefore critical to

effectively understand the status of service delivery and to correct deficiencies. Importantly,

these findings suggest that monitoring some aspects of the service delivery process rather than the

outcomes can provide valuable information for accountability effectiveness which would not

necessarily be achieved in the typical performance-based contracts.

Regression analysis suggests that assessing the impact of service delivery on clients is

positively associated with the perception of effective contract monitoring. This measure is

designed to detect the outcomes or the end result of the contractors’ activities. This information

is what many external and internal stakeholders ultimately care about in contract

implementation. For instance, the parents’ dissatisfaction with the quality of care their children

receive at a child care center will have important implications for the contracting Head Start

Agency’s contract management practices. A nursing facility’s failure to ensure high-quality long

term care (e.g., follow practices geared towards minimizing the prevalence of bed sores or the

use of psychotropic drugs) will be the primary factors to be considered by the government

agency while managing its contractor. Thus, it is not surprising that understanding the impact of

services on the clients is helpful in effective management and implementation of these contracts.

In the light of these findings, however, it is interesting that evaluation of client satisfaction

actually has a negative effect on the dependent variable. One possible interpretation of these findings may be that while the data on client impact may be generated by the public agency and reflect some broader impacts and implications of service delivery (e.g., general trends in public

health improvement, environmental conditions, and others), client satisfaction surveys may

reflect some discontent and complaints related to service delivery. Receiving data directly from

the clients (i.e., from a separate stakeholder) may certainly complicate the overall perception of

contract management effectiveness.

Importantly, evaluating whether or not services were delivered in a fair and equitable

manner is also positively associated with the perception of government’s ability to reveal

performance problems and effectively manage contracts. The use of this measure may reflect

some attention to the core public sector values of fair distribution of resources and may suggest

higher levels of commitment to good oversight among these public agencies. Assessing fairness,

access and equitability of service delivery may be a signal of mission-orientation and better

articulation of contract priorities which in turn improves accountability.

Finally, specifying the detailed process for service delivery – providing directions on

precisely how, when and by whom services should be provided – is a performance monitoring

practice associated with better perceived accountability. In this connection, Behn (2003, 594)

writes about performance measures: “Do not be fooled. These guidelines are really requirements,

and these requirements are designed to control. The measurement of compliance with these

requirements is the mechanism of control.” This is especially true when an agency attempts to

maximize the specification of service delivery parameters. Thus, a government agency’s

propensity to “control” its contractors by developing a more complete specification of how,

when and by whom the service should be delivered improves the contractor’s compliance and the

agency’s ability to obtain clear and informative data, detect problems, and act upon them.

Process specification also undoubtedly reflects a higher level of in-house service delivery

proficiency which allows the agency to spell out the expectations and effectively monitor their

attainment.

This study finds that the way performance measures are created (i.e., whether they are

created unilaterally by a government agency and imposed upon a contractor, or are a result of

negotiations and clarifications) has no effect on accountability effectiveness. Thus, performance measures, no matter how they are created, can serve as tools that help generate useful information on performance and prompt the agency to use sanctions. While not undermining the

value of collaborative practices in the course of contracting, this study suggests that an agency

may unilaterally develop and enforce a set of measurement techniques that would help achieve

higher levels of perceived contractor accountability.

This paper does not argue that only seven measures are useful in improving

accountability or that other measures are of no value. In fact, the findings pertaining to the scope

of performance measurement suggest that the more measurement is done, the more accountability

can be achieved. Rather, this analysis shows that different measures can have a different effect

on the managerial outcome considered here. This analysis also suggests that the process of

performance measurement should not be static. It is hard to develop a perfect system at the onset

of the contracting relationship. At that point, a greater scope of performance measurement may

be advisable. This will ensure higher accountability while the contracting agency develops a

better understanding of the most optimal performance measurement “package.” The latter may

be different for different service fields and levels of government. Thus, the findings of this study

provide further motivation for contract managers to work continuously on improving their

performance measurement and monitoring practices.

Results of this analysis also suggest that, aside from the scope and the type of

performance measurement activities, the perceived level of accountability may be determined by

a range of other factors. First, government agencies involved in direct monitoring of their contractors (inspections, observations, or site visits) are able to achieve greater accountability even after controlling for service measurability. On the other hand, data that are self-reported by the contractor have the opposite effect. This is hardly surprising: the delegation of monitoring to a

vendor or a third party can produce informational silos or exacerbate principal-agent issues.

Meanwhile, direct monitoring often increases the likelihood of developing informal ties, having

first-hand knowledge of the implementation process (often, by virtue of being “on site” with the

contractor), and eventually results in more accurate and timely reporting and correction of

problems. Similarly, relationship length allows both parties to develop a common language and

to create a shared understanding of the contractor’s capabilities and the government’s propensity

to impose sanctions – these factors can facilitate a more cooperative contract evaluation process.

Potentially, this provides some evidence on the effect of stability and relationships (though, not

necessarily informal) on the effectiveness of oversight.

This research also suggests that government respondents are less likely to have a

perception of high accountability effectiveness. While the contractors, who are often inundated

with the government’s monitoring activities, may be satisfied with the extent of evaluation,

government employees may feel that their efforts are not sufficient. Public managers may be

dissatisfied with the accuracy of data and their overall ability to keep the contractors

accountable. This is consistent with the contracting literature that finds inadequate monitoring

capacity among public managers overseeing state and local contracts (Amirkhanyan 2009; Van

Slyke 2003). This interpretation is reinforced by the findings of this study: as respondents’

contract management experience goes up, their perception of accountability effectiveness goes down. Possibly, as someone’s contract management experience increases, so does their

understanding of service ambiguities and complexities, and appreciation of the limitations of

performance measurement. This undermines the public respondents’ perception of accountability

in government contracts. These results are important in the light of the earlier findings about the

vendors’ sentiments suggesting a high degree of informational asymmetry (“They think they

[measure service quality]… But I really don’t think they do it”). The contract managers’


skepticism about accountability effectiveness undermines our earlier assertion that the public

managers may have an inaccurately optimistic view of their monitoring efforts. Instead, these

findings support the results of several past studies highlighting the contract managers’ frustration

with their inadequate monitoring capacity, despite a wide scope of responsibilities and actions

(Amirkhanyan 2009; Van Slyke 2003). The governments’ skepticism about their monitoring

capacity found in this paper suggests that this is an important area for future research,

especially in the context of politically motivated contracts.

Supporting some of the basic tenets of contract management theory, this study suggests

that competitive procurement of contracts has a positive effect on the contractor’s compliance

with performance monitoring. Competitive private markets may pressure the contractors to

provide the information necessary to maintain the contract. Similarly, the government’s in-house expertise and professional capacity improve contract monitoring, while the lack thereof undermines the public agency’s ability to understand the intricacies of service delivery and

reveal performance problems. This finding suggests that maintaining some degree of service

implementation capacity is important for effective contract monitoring.

Interestingly, we find that nonprofit status of the contractors is associated with a

significant decrease in perceived accountability effectiveness. On the one hand, nonprofit

organizations are often perceived as more trustworthy than their for-profit counterparts due to

their mission-driven nature and the fact that they re-invest profits towards their organizational

missions (Hansmann, 1987). These and other factors may decrease their opportunistic behavior

and increase compliance and cooperation in the oversight process. Amirkhanyan (2009) finds

that nonprofit service contractors are more likely to collaborate with public agencies in the

oversight process, and Van Slyke (2007) suggests that nonprofits may be initially perceived as

more trustworthy. On the other hand, contracting research suggests that public managers are


aware of the performance problems, lack of management capacity, financial abuses, and political

behavior prevalent in the nonprofit sector (Arenson 1995; Grimaldi and Trescott 2008; Reaves

2001; Hansmann 1986; Johnston and Romzek 1999; Prager 1994; Rose-Ackerman 1986). Our

findings may suggest that nonprofit organizations have lower capacity to comply with the monitoring process; in addition, the services delivered by nonprofit organizations may be characterized by a higher degree of complexity and ambiguity, which also undermines accountability effectiveness.

Finally, one of the most puzzling findings of this analysis pertains to the negative effect

of the contractors’ financial dependency on the perception of accountability effectiveness. One

would expect that the contractors’ dependency on a single contract would translate into their

willingness to cooperate in the evaluation process which would eventually improve the

likelihood of providing timely and accurate information and resolving performance problems.

What our findings may suggest is that the financial importance of a contract gives some vendors

an incentive to reduce transparency and evade government oversight. When more is at stake for a

contractor, the situation may create additional tensions and conflict, and the contractors may be

reluctant to cooperate out of fear of sanctions. Contractors that depend on a specific government contract, rather than on diversified revenue sources, are also more likely to be smaller and less professionalized, which may affect the perception of accountability

effectiveness.

Another intriguing finding pertains to the negative effect of “hard” or easily measurable

services on the contractors’ compliance with monitoring, as well as on the timeliness and accuracy of the information provided to the government agency. This finding may in fact imply some

procedural distinctions associated with the “hard” (i.e., tangible or more easily measurable)

services. In the qualitative portion of the interviews, several public managers discussed direct

supervision and inspections as important monitoring techniques in the fields of construction,


maintenance, waste removal, and others. Potentially, the quality of information reported by the contractors in these fields may be inadequate, so more direct inspections are used in such

contracts. Service type does not, however, affect the government’s ability to reveal problems and

effectively monitor these contracts.

While this study contributes to our understanding of which performance measurement

strategies matter for contract implementation, several unanswered questions remain. First, this

study focuses on the effect of performance measurement on one aspect of managerial

effectiveness – contractor accountability. The latter is important, as Cohen and Eimicke (2008,

155) argue: “innovation and customer needs may very well be less important than accountability

and transparency.” Nonetheless, public agencies pursue multiple goals when measuring

organizational performance (Smith and Larimer 2004). Therefore, future studies should assess

the impact of performance measurement on diverse programmatic and stakeholder outcomes

(including creativity and customer needs). In particular, this will allow us to see which measures

are more suitable for “motivating”, “celebrating”, “learning” and “improving”, and for achieving

other organizational goals. Measures found to be less important for contractor accountability

may in fact be critical for goal establishment, budgeting, motivation, promotion, and learning

(Behn 2003; Ho 2006). Agencies pursuing performance measurement should clarify the

objectives of their efforts and refine their contract monitoring systems depending on these

objectives. Additional and related research directions include comparing the vendors' and the

purchasers' perceptions of accountability effectiveness on the same contracts with a third

assessment done by a researcher. Understanding the relationship between these perceptual

measures of accountability effectiveness and actual program outcomes is important to adequately

assess the value of performance monitoring efforts. More importantly, it remains an open question not only whether perceived accountability effectiveness is related to program outcomes, but also whether actual accountability effectiveness is related to either of those variables.

Furthermore, while this research approaches the use of performance measures as a

dichotomy, past literature suggests that such efforts may have different levels of intensity and

accuracy (Blasi 2002). Future studies should investigate not only the scope but also the intensity

and accuracy of various performance measurement approaches, and examine their effect on the

programmatic outcomes. Specifically, large-scope performance measurement may allow public

managers to effectively evaluate contractor performance, but it can also divert the contractor’s

resources and attention away from program implementation towards regulatory compliance, and

may eventually undermine service outcomes. In addition, while this manuscript tests the direct

effects of various performance measures, these practices may not only have distinct direct effects but also interact with one another. Finally, different performance

measures (e.g., service disruptions) may have a different meaning and relevance in different

fields, and therefore it is important to investigate how performance measures are applied for

certain goods and services with similar production and consumption characteristics.

APPENDIX A. Mini-case studies illustrating specific services across sectors (nonprofit and for-profit).

HIGH scope of Performance Measurement (PM), 11-15 measures (Contract A, nonprofit; Contract D, for-profit):

Contract A is delivered by a homeless shelter

monitored by a county government using monthly

expenditures, client age, gender, ethnicity, length

of stay, employment, income, savings upon

arrival and departure, and data on applications

and admissions. All fifteen measures listed in the

questionnaire, except timeliness and customer

satisfaction, are used in the monitoring process.

The contractor submits reports and participates in

monthly sessions to discuss programmatic issues

and the state of homelessness in the county.

Recently, the reporting system has been enhanced

with new information technology. Since the

contractor feels the new system fails to

incorporate descriptive information about their

work and long-term client outcomes, they

voluntarily provide this information in their

informal communication with the county.

Contract D is with a for-profit organization

performing maintenance of solid waste

management equipment. The monitoring officer

prepares a detailed daily list of tasks for the

contractor, directly observes the contractor’s work,

daily selects a sample of trucks for a closer inspection

using a special evaluation checklist developed for

each type of repair, and seeks the truck drivers’

feedback to determine their satisfaction with the

quality of repairs and to illuminate any unaddressed

problems. With the exception of the contractor’s

reputation and equitable access to services, all

performance measures used in the interview guide are

utilized by the government agency. Recently, the

government conducted a longitudinal study comparing

the current and the previous contractors’ performance

and found some improvement in the incidence of

performance problems.


MODERATE scope of PM, 6-10 measures (Contract B, nonprofit; Contract E, for-profit):

Contract B is for nursing and medical services

for the juveniles referred by a local

correctional department. Through regular

reports and formal meetings, the monitoring

officer tracks the timeliness, the costs, the

workload, as well as multiple other quantitative

indicators and qualitative accounts of the clients’

health status and the types of services provided.

The officer does not use any informal channels of

communication to monitor care. He does not

examine the contractor’s reputation, client

satisfaction with services, or whether services are

delivered equitably, and according to the laws

and regulations in the field. The officer also

argues he is unable to detect the short-term or

long-term effects of care on the client health.

Contract E provides nursing services for

incarcerated children. The monitoring officer

maintains extensive communication with the

contractor, but is frustrated that most information

originates from self-reporting or whistle-blowing.

Timeliness, disruptions in service delivery, and the

contractor’s workload are among the monitored

aspects of performance. Indirect measures of care

quality are also used. No consistent assessment of

costs or correspondence of care provision to the

regulations in the field is conducted. Similarly, client

satisfaction, equitable access to care, and the

contractor’s reputation are not assessed. The monitoring officer points to his inability to evaluate “how the

contractor does due diligence,” and cites the need to

reiterate contract requirements due to the contractor’s

staff turnover.

LOW scope of PM, 0-5 measures reported (Contract C, nonprofit; Contract F, for-profit):

Contract C is delivered by a nonprofit foundation

that facilitates quality improvement and

develops various population evaluation

frameworks for a local government health

department. The contractor is not aware of any

formal evaluation of quality or any other aspect

of their work, with the exception of timeliness of

submitted quality-improvement materials and

continuity in their work. Communication between

the agency and the contractor focuses on

eliminating the factors that hinder the contractor’s

ability to provide the expected deliverables on

time (e.g., those dealing with receiving the

necessary data from the government agency in

order to develop quality assessment and

improvement strategies and tools).

Contract F is a small for-profit therapy program

conducting substance abuse counseling. When asked

about government monitoring, the contractor notes

“[a]s hard as it is to believe, they do nothing.” The

contractor submits brief reports with a basic overview

of its work; however, there is no feedback or indication that these reports are reviewed. As part of its internal evaluation, the contractor conducts professional client assessments, the results of which are

voluntarily shared with the government agency (also,

with no feedback). Communication is generally

initiated by the contractor who raises concerns

regarding specific clients’ noncompliance. The

contractor indicated that only one of fifteen items is

used by the agency: the agency interacts with the

clients and obtains some informal feedback.

APPENDIX B

Interview Questions Utilized to Examine the Prevalence and the Process of Collaboration

1. Today I would like us to talk about performance evaluation and measurement, specifically, about any

kind of information that you might use to make sure that your contractor is complying with your

expectations and doing their job well. Some of these performance measures can be more formal and

quantitative (e.g., reporting the number of service units produced every week). Other measures can be

more informal (e.g., informally discussing service provision details). For the following questions

please choose one of the following answers: (1) yes, (2) no, (3) don’t know/don’t recall/refuse to

answer. In this contract, do you collect, monitor, or evaluate information on:

a. cost-effectiveness of contracted services?

b. quality of services provided by the contractor?

c. contractor’s workload (e.g., number of clients served, units of services provided, # hours of

work)?

d. the impact that services have on clients or service-recipients?

e. customer satisfaction?

f. contractors’ ability to provide equitable access to services without any discrimination (e.g.,

based on income, gender, or race)?

g. compliance of service provision with the law?

h. timeliness of service delivery?


i. service continuity or any disruptions in service delivery?

j. do you specify the detailed procedures for service delivery; in other words, precisely how

services should be delivered, and by whom?

k. do you use any quantitative measures of performance (for instance number of clients served,

# services provided, quantifiable impact on the clients’ status)?

l. do you use any qualitative, descriptive information on your contractor’s performance?

m. do you use any informal ways of obtaining performance information, such as through an

informal conversation with a client, contractor staff or a third party?

n. are the performance measures that you use tailored to this particular contractor (i.e., they

wouldn’t be used for another contractor)?

o. do you collect information on the reputation of your contractor formally or informally?

p. do you evaluate your contractor’s performance in any other ways that I have not listed?

2. Have you ever asked for contractor’s input on the performance measures that are used in this

contract? If yes, ask: Why was this done? Can you explain how this was done?

3. Has your contractor ever attempted to negotiate or discuss with you how their performance should be

measured or evaluated? If yes, ask: When did this happen? When this happened, how did your

agency respond?

4. Have you incorporated any performance measures that were modified in response to contractor’s

comments or proposed by your contractor?

5. Do you believe that your overall communication with the contractor influenced the type of measures

that you use? If yes, ask: In what way?

6. Has the contractor ever asked you for clarifications on the performance information that you

requested? If yes, ask: How did your agency respond?

7. Have you ever asked your contractor for clarifications on their performance information that was

gathered about them? If yes, ask: how did they respond?

APPENDIX C. Measurement of independent variables

Interview questions used to create each variable | Collaboration Determinants and Created Measures

I would like to seek your input on the effects

of contractors’ nonprofit or for-profit status on the overall contractual relationship. Is your contractor a nonprofit or a for-profit

organization?

Nonprofit Status. Variable was created to indicate

nonprofit status of the vendor in the discussed

contracting arrangement (dummy variable).

N/A (respondent status recorded by the

primary investigator at the time of the

interview).

(A). Government Respondent. In the data, records

pertaining to government monitoring officers were

assigned the value of 1, while respondents

representing the vendors were assigned the value of 0.

We are here to discuss the contract with

________. Could you describe for me, very

briefly, what kind of services does this

contractor provide?

(B). Hard Services. Responses were categorized into

“hard” (easily measurable) services: IT, construction,

maintenance, public works, planting and plant control,

food supply and quality monitoring, animal care,

janitorial, translation, and recreational (camps, dance

lessons). “Soft” (hard-to-measure) services: long-term

care, medical, nursing care, health management,

mental health, psychological consultation, arts

therapy, programs for women and children,

consulting, evaluation and training, criminal justice,

animal care, substance abuse, and homelessness

(dummy variable). Using Wilson’s classification,

“hard” services also corresponded to those provided


by coping and procedural agencies, while “soft” ones

corresponded to so-called craft agencies (no production

agencies were involved in the analysis).

Some contractors’ financial health depends

solely on the government contract. Other

organizations are more fiscally independent, and

rely on other sources of revenues. Is your

contractor (a) financially very dependent on

your funding, (b) somewhat dependent, (c)

somewhat independent, (d) financially very

independent, (e) don’t know.

(C). Vendor’s financial dependency on the contract. Variable “Contractor’s financial dependency” was

coded as one for responses (a) financially very

dependent on government funding and (b) somewhat

dependent, and zero for all other options (dummy

variable).

Do you believe that your contractor has its

own internal ways to evaluate its performance?

(C). Contractor uses internal performance

measures. Variable “internal measures” was created

and coded as one for affirmative responses to the

question (dummy variable).

Does your contractor have a unique expertise

that is difficult to find elsewhere? (Probe: Are

there any other organizations in this area that

provide similar services? Is this market very

competitive?)

(D). Contractor has unique expertise. Variable

“unique expertise” was created for affirmative

responses (dummy variable).

Did you go through the process of

competitive bidding for this contract? (D). Competitive bidding used. Variable

“competitive bidding” was coded as one for

affirmative responses (dummy variable).

Some contracts exist in the fields that

undergo rapid changes in service needs, service

technology, suppliers, or funding. Other

contracts exist in more stable, less uncertain

environments. Where would you place this

contract on this continuum between very

dynamic and very stable environments? (a)

Very dynamic, (b) Somewhat dynamic, (c)

Somewhat stable, (d) Very stable.

(D). Dynamic versus stable environment. Variable

“dynamic environment” was coded as one for

responses (a) Very dynamic and (b) Somewhat

dynamic (dummy variable).

When was this contract initiated? Have you

been working with this contractor before? If

yes, ask: In what capacity? For how long?

(E). Long-term vs. short-term relationship.

Variable “relationship length” was created reflecting

the number of years (interval-ratio variable). Eight

cases in the data had missing values and median

values were imputed in order to retain this variable

and maximize the size of the sample (a minimal coding sketch follows this appendix).

Some practitioners say that contractual

relationships often begin as more rigid (more

formal) and over time evolve into relationships

that are based on trust. Has this been the case

with this contract?

(E). Perceived goal congruence or trust. Variable “trust” was coded as one for responses

confirming that the relationship was presently

characterized by trust between the government agency

and the vendor (dummy variable).

In some cases government agencies collect and monitor all the information pertaining to the contractor’s performance directly. In other cases, governments use information collected and provided by the contractor (so-called self-reported measures). There are also agencies that use third parties to collect information and do the monitoring (for instance, the clients or 3rd-party inspectors). Which strategy do you use?

(E). Monitoring: Self-reported measures used; (E). Monitoring: Government collects performance information; (E). Monitoring: Third-party monitoring is used. Three dummy variables (“self-reporting”, “government inspection”, “other monitors”) were created based on the descriptive answer to the question.

Do you publicize the information pertaining

to the performance of the contractor? If yes, ask:

In what way?

(E). Contractor performance publicized. As an

additional measure of third-party monitoring, variable

“publicized performance” was created and affirmative

responses to the second question were coded as one

(dummy variable).

Do you have professionals among your staff

who can thoroughly understand the nature of the

service delivered by your contractor (individuals

with similar education, degrees, professional

norms, etc.)?

(F). In-house capacity to deliver the service.

Variable “in-house professional capacity” was created

and coded as one for affirmative responses.

Do you think contractors should be engaged

in the development of performance measures to

oversee their own work? If yes or no, ask: Can

you explain, why?

(G). Contractor’s participation in performance

evaluation is perceived as desirable. Variable

“contractors should be engaged” was created and

coded as one for affirmative responses to question 17.

Note for the interviewer: verify respondent’s

employment status. Do you currently serve as

______________? How long have you been

working in this position? How long have you been involved in

managing contracts?

(G). Respondent’s work and contract management

experience. Two variables “work experience” and

“contract management experience” were created to

reflect the number of years. One case had a missing

value, and the average for the whole sample was

calculated and imputed in order to retain this variable

and maximize the sample.
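To make the coding rules in this appendix concrete, the following is a minimal, hypothetical sketch in Python/pandas (not the author's actual code; all column names and example values are invented) of the dummy coding and median imputation described above:

import pandas as pd

# Hypothetical illustration of the Appendix C coding rules.
df = pd.DataFrame({
    "sector": ["nonprofit", "for-profit", "nonprofit"],
    "financial_dependency": ["very dependent", "somewhat independent", "somewhat dependent"],
    "relationship_years": [4.0, None, 12.0],
})

# Dummy variables coded 1/0 (e.g., "Nonprofit Status" and "Contractor's financial dependency")
df["nonprofit"] = (df["sector"] == "nonprofit").astype(int)
df["fin_dependent"] = df["financial_dependency"].isin(
    ["very dependent", "somewhat dependent"]).astype(int)

# Median imputation used to retain "relationship length" for cases with missing values
df["relationship_years"] = df["relationship_years"].fillna(
    df["relationship_years"].median())

REFERENCES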

Allen, P. 2002. “A Socio-legal and Economic Analysis of Contracting in the NHS Internal Market Using

a Case Study of Contracting for District Nursing.” Social Science & Medicine 54(2): 255-266.

Amirkhanyan, A.A., Kim, H.J., & Lambright, K.T. (2007). Putting the pieces together: comprehensive

framework for understanding the decision to contract out and contractor performance.

International Journal of Public Administration (IJPA), 30, 699-725.

Amirkhanyan, Anna A., Hyun J. Kim, and Kristina T. Lambright. 2008. Does the public sector outperform

the nonprofit and for-profit sectors? Evidence from a national panel study on nursing home

quality and access. Journal of Policy Analysis and Management (JPAM) 27 (2): 326-53.

Amirkhanyan, Anna A. 2009. Collaborative Performance Measurement: Examining and explaining the

prevalence of collaboration in state and local government performance monitoring. Journal of

Public Administration Research and Theory (JPART) 19 (3): 523-54.

Amirkhanyan, Anna A. 2010. Monitoring across sectors: Examining the effect of nonprofit and forprofit

contractor ownership on performance monitoring in state and local government contracts. Public

Administration Review, 70(5): 742-755.

Ammons, David N., and William C. Rivenbark. 2008. Factors Influencing the Use of Performance Data to

Improve Municipal Services. Public Administration Review 68 (2): 304-18.

Arenson, K. W. (1995). Ex-United Way leader gets seven years for embezzlement. New

York Times, June 23.
Arrow, Kenneth J. 1984. The Economics of the Agency. In Principals and Agents: The Structure of Business, edited by John W. Pratt and Richard J. Zeckhauser, 1–35. Boston: Harvard Business School Press.

Behn, Robert D. 2003. Why Measure Performance? Different Purposes Require Different Measures.

Public Administration Review 63 (5): 586-606.

Beinecke, R.H. and R. DeFillippi. 1999. “The Value of the Relationship Model of Contracting in Social

Services Reprocurements and Transitions: Lessons from Massachusetts.” Public Productivity and

Management 22(4): 490-501.


Berman, Evan M., and XiaoHu Wang. 2000. Performance Measurement in U.S. Counties: Capacity for

Reform. Public Administration Review 60 (5): 409-20.

Blasi, Gerald J. 2002. Government Contracting and Performance Measurement in Human Services.

International Journal of Public Administration 25 (4): 519-38.

Bouckaert, Geert and B. Guy Peters. 2002. Performance Measurement and Management: The Achilles' Heel

in Administrative Modernization. Public Performance & Management Review (25) 4: 359-62.

Boyne, George A. 1998. Bureaucratic Theory Meets Reality: Public Choice and Service Contracting in

U.S. Local Government. Public Administration Review 58 (6): 474-84.

Brown, Trevor L., and Matthew Potoski. 2003. Contract-Management Capacity in Municipal and County

Governments. Public Administration Review 63 (2): 153-64.

------. 2006. Contracting for Management: Assessing Management Capacity Under Alternative Service

Delivery Arrangements. Journal of Policy Analysis and Management 25 (2): 323-46.

Brown, Trevor L., Matthew Potoski, and David M. Van Slyke. 2006. Managing Public Service Contracts:

Aligning Values, Institutions, and Markets. Public Administration Review 66 (3): 323-31.

Byrnes, Patricia, Mark Freeman, and Dean Kauffman. 1997. Performance Measurement and Financial

Incentives For Community Behavioral Health Services Provision. International Journal of Public

Administration 20 (8-9): 1555-78.

Callahan, Kathe, and Kathryn Kloby. 2007. Collaboration Meets the Performance Measurement Challenge.

Public Manager 36 (2): 9-24.

Campbell, D. and D. Harris. 1993. “Flexibility in Long-term Contractual Relationships: The Role of Co-

operation.” Journal of Law and Society 20(2): 166-191.

Chan, Hon S. and David H. Rosenbloom. (2010). Four challenges to accountability in contemporary public

administration: Lessons from the United States and China. Administration & Society,

forthcoming.

Cohen, Steven and William Eimicke. 2008. The Responsible Contract Manager: Protecting the Public

Interest in an Outsourced World. Washington, DC: Georgetown University Press.

Courty, Pascal, and Gerald Marschke. 2007. Making Government Accountable: Lessons from a Federal

Job Training Program. Public Administration Review 67 (5): 904-16.

Dalehite, Esteban. 2008. Determinants of Performance Measurement: An Investigation into the Decision to

Conduct Citizen Surveys. Public Administration Review 68 (5): 891-907.

Davis, P. (2007). The effectiveness of relational contracting in a temporary public organization: intensive

collaboration between an English local authority and private contractors. Public Administration,

85, 2, 383–404.

De Vries, Michiel. 2007. Accountability in the Netherlands: Exemplary in its Complexity. Public

Administration Quarterly 31 (3/4): 480-507.

DeHoog, R. 1990. “Competition, Negotiation, or Cooperation: Three Models for Service Contracting.”

Administration & Society 22(3): 317-340.

Dilger, Robert J., Randolph R. Moffett, and Linda Struyk. 1997. Privatization of Municipal Services in

America’s Largest Cities. Public Administration Review 57 (1): 21-6.

Eisenhardt, K. M. Agency Theory: An Assessment and Review. Academy of Management Review 1989,

14, 57–74.

Frederickson, David G. and George Frederickson. 2006. Measuring the Performance of the Hollow State.

Washington, DC: Georgetown University Press.

Gianakis, Gerasimos A. 2002. The Promise of Public Sector Performance Measurement: Anodyne or

Placebo? Public Administration Quarterly 26 (1/2): 35-64.

Goldsmith, Stephen and William D. Eggers. (2004). Governing by Network: The New Shape of the Public

Sector. Washington, DC: Brookings Institution Press.

Grimaldi, J.V., & Trescott, J. (2008). Smithsonian official resigned in wake of

ethics probe: Internal report cited Latino center leader for multiple violations.

Washington Post, April 15.


Hansmann, H.B. (1986). The role of nonprofit enterprise. In S. Rose-Ackerman (Ed.), The

Economics of Nonprofit Institutions: Studies in Structure and Policy (pp. 57-86). New

York: Oxford University Press

Hansmann, H. (1987). Economic theories of nonprofit organizations. In W.W. Powell (Ed.), The nonprofit sector: A research handbook (pp. 27-42). New Haven, CT: Yale University Press.
Hefetz, Amir, and Mildred Warner. 2007. Beyond the Market versus Planning Dichotomy: Understanding

Privatization and Its Reverse in U.S. Cities. Local Government Studies 33(4): 555–72.

Heikkila, Tanya, and Kimberley R. Isett. 2007. Citizen Involvement and Performance Management in

Special-Purpose Governments. Public Administration Review 67 (2): 238-48.

Heinrich, C.J. (1999). Do Government Bureaucrats Make Effective Use of Performance Management

Information? JPART, 9,3, 363-393.

Heinrich, Carolyn J. 2002. Outcomes-Based Performance Management in the Public Sector: Implications

for Government Accountability and Effectiveness. Public Administration Review 62 (6): 712-25.

Ho, Alfred Tat-Kei. 2006. Accounting for the Value of Performance Measurement from the Perspective of

Midwestern Mayors. Journal of Public Administration Research and Theory 16 (2): 217-37.

Ho, Shih-Jen Kathy, and Yee-Ching Lilian Chan. 2002. Performance Measurement and the

Implementation of Balanced Scorecards in Municipal Governments. The Journal of Government

Financial Management 51 (4): 8-19.

Hodge, Graeme A. 2000. Privatization: An International Review of Performance. Boulder: Westview

Press.

Holzer, Marc, and Kathryn Kloby. 2005. Public Performance Measurement: An Assessment of the State-

of-the-Art and Models for Citizen Participation. International Journal of Productivity and

Performance Management 54 (7): 517-32.

Johnston, J. M. and Girth, A. M. "Public service contracting and weak competition: Assessing the costs

and impacts of strategies used to “manage" provider markets" Paper presented at the annual

meeting of the Midwest Political Science Association 67th Annual National Conference, The

Palmer House Hilton, Chicago, IL. Retrieved 2010-06-05 from

http://www.allacademic.com/meta/p361722_index.html

Johnston, Jocelyn M. and Barbara S. Romzek. (1999). Contracting and accountability in state Medicaid

reform: Rhetoric, theories, and reality. Public Administration Review 59(5): 383-399.

Kelly, Janet M., and David Swindell. 2002a. A Multiple-Indicator Approach to Municipal Service

Evaluation: Correlating Performance Measurement and Citizen Satisfaction across Jurisdictions.

Public Administration Review 62 (5): 610-21.

------. 2002b. Service Quality Variation Across Urban Space: First Steps Toward a Model of Citizen

Satisfaction. Journal of Urban Affairs 24 (3): 271-88.

Kelman, Steven. (2002). Contracting. In Lester A. Salamon, ed. The Tools of Government: A Guide to the

New Governance. New York: Oxford University Press.

Kopczynski, Mary, and Michael Lombardo. 1999. Comparative Performance Measurement: Insights and

Lessons Learned from a Consortium Effort. Public Administration Review 59 (2): 124-24.

Kravchuk, Robert S., and Ronald W. Schack. 1996. Designing Effective Performance-Measurement

Systems Under the Government Performance and Results Act of 1993. Public Administration

Review 56 (4): 348-58.

Linder, Jane C. 2004. Outsourcing for Radical Change: A Bold Approach to Enterprise Transformation.

New York: American Management Association.

Marvel, Mary K. and Howard P. Marvel. (2007). Outsourcing oversight: A comparison for in-house and

contracted services. Public Administration Review 67(3): 521-30.

Melkers, Julia, and Katherine Willoughby. 2005. Models of Performance-Measurement Use in Local

Governments: Understanding Budgeting, Communication, and Lasting Effects. Public

Administration Review 65 (2): 180-90.


Meyers, Marcia K., Norma M. Riccucci, and Irene Lurie. (2001). Achieving goal congruence in complex

environments: The case of welfare reform. Journal of Public Administration Research and Theory

11:165–201.

Milgrom, Paul and J. Roberts. 1992. Economics, Organizations, and Management. Englewood Cliffs,

NJ: Prentice Hall.

Milward, H. Brinton, and Keith G. Provan. 2000. Governing the Hollow State. Journal of Public

Administration Research and Theory 10 (2): 359-79.

Moynihan, Donald P. 2005. Why and How Do State Governments Adopt and Implement “Managing for

Results” Reforms? The Journal of Public Administration Research and Theory 15 (2) 219-43.

Nicholson-Crotty, Sean, Theobald, Nick A., and Jill Nicholson-Crotty. 2006. Disparate Measures: Public

Managers and Performance-Measurement Strategies. Public Administration Review 66 (1): 101-

13.

Prager, J. (1994). Contracting out government services: Lessons from the private sector. Public

Administration Review, 54, 176-184.
Prager, Jonas. 2008. Contract City Redux: Weston, Florida, as the Ultimate New Public Management

Model City. Public Administration Review 68 (1): 167-80.

Radin, Beryl A. 2006. “One Size fits All.” In Challenging the Performance Movement: Accountability, Complexity, and Democratic Values, 33-52. Washington, DC: Georgetown University Press.

Reaves, J. (2001). Red faces at the Red Cross. Time, November 14.
Riccucci, Norma M. (2005). Street-level bureaucrats and intrastate variation in the implementation of

Temporary Assistance for Needy Families policies. Journal of Public Administration Research

and Theory 15:89–111.

Romzek, Barbara S., and Jocelyn M. Johnston. 2002. Effective Contract Implementation and Management:

A Preliminary Model. Journal of Public Administration Research and Theory 12 (3): 423-53.

------. 2005. State Social Services Contracting: Exploring the Determinants of Effective Contract

Accountability. Public Administration Review 65 (4): 436-49.

Rose-Ackerman, S. (1986). The economics of nonprofit institutions: Studies in structure and policy. New York: Oxford University Press.
Rubin, Edward. (2006). The myth of non-bureaucratic accountability. In Michael W. Dowdle, ed. Public

Accountability: Designs, Dilemmas, and Experiences. Cambridge: Cambridge University Press.

Savas, E.S. (2000). Privatization and public-private partnerships. New York: Chatham House Publishers.

Savas, E.S. (2003). Competition and Choice in New York City Social Services. Public Administration

Review, 62, 1, 82-91.

Sclar, E.D. 2000. You Don’t Always Get What You Pay for: The Economics of Privatization. Ithaca,

NY: Cornell University Press.

Smith, Kevin B., and Christopher W. Larimer. 2004. A Mixed Relationship: Bureaucracy and School

Performance. Public Administration Review 64 (6): 728-36.

Smith, S.R. 2005. “NGOs and Contracting.” Pp. 591-614 in E. Ferlie, L.E. Lynn, Jr., and P. Christopher,

eds., The Oxford Handbook of Public Management. Oxford: Oxford University Press.

Straub, A. (2009). Cost savings from performance-based maintenance contracting, International Journal of

Strategic Property Management (2009) 13, 205–217.

Tirole, J. 1999. “Incomplete Contracts: Where Do We Stand?” Econometrica 67(4): 741-781.

Van Slyke, David M. 2003. The Mythology of Privatization in Contracting for Social Services. Public

Administration Review 63 (3): 296-315.

Van Slyke, David M. (2007). Agents or stewards: Using theory to understand the government-nonprofit

social service contracting relationship. Journal of Public Administration Research and Theory.

17(1): 157-187.

Yang, Kaifeng, and Jun Yi Hsieh. 2007. Managerial Effectiveness of Government Performance

Measurement: Testing a Middle-Range Model. Public Administration Review 67 (5): 861-79.

Yang, K, Hsieh, J.Y., and Li, T.S. (2009). Contracting Capacity and perceived Contracting Performance:

Nonlinear Effects and the Role of Time. PAR, 69, 4, 681-696.


Zimmermann, Jo An M., and Bonnie W. Stevens. 2006. The Use of Performance Measurement in South

Carolina Nonprofits. Nonprofit Management & Leadership 16 (3): 315-27.

Table 1. Sampled respondents and contract areas (% and # shown)

Service Area | Government respondents (N=39): #, % | Contractors (N=30): #, %

Consulting, evaluation and management training 9 23.1 6 20

Long-term care 7 17.9 1 3.3

Construction, maintenance, public works 6 15.4 3 10

Medical, nursing care, health management 5 12.8 2 6.7

Mental health, psychological consultation, arts therapy 4 10.3 1 3.3

Information Technology 3 7.7 2 6.7

Programs for women and children 1 2.6 3 10

Criminal Justice 1 2.6 2 6.7

Environmental (planting, plant control) 2 5.1 2 6.7

Food supply (and quality) monitoring 1 2.6 0 0

Animal care 0 0 1 3.3

Substance abuse, homelessness 0 0 3 10

Janitorial 0 0 1 3.3

Translation 0 0 1 3.3

Recreational (camps, dance lessons) 0 0 2 6.7

TOTAL 39 100 30 100



Table 2. The prevalence of performance measures used (% "yes" shown)

Performance Measurement | Public respondents | Contractors

1. Costs/cost-effectiveness 64.1 46.7

2. Quality 94.9 66.7

3. Workload 87.2 63.3

4. Impact on clients 79.5 56.7

5. Client satisfaction 84.6 53.3

6. Equitable delivery of services 41 30

7. Compliance with laws/regulations 64.1 43.3

8. Timeliness 94.9 66.7

9. Disruptions 97.4 73.3

10. Process specified in details 69.2 43.3

11. Quantitative indicators 76.9 60

12. Qualitative indicators 82.1 66.7

13. Informal monitoring 94.9 70

14. Measures tailored to organizations 41 40

15. Reputation 64.1 56.7

N=69
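The “scope” of performance measurement referenced throughout (entered as “Sum of performance measures used” in Table 6a) is the count of the fifteen items above that a respondent reports using, consistent with the 0-5, 6-10, and 11-15 bands in Appendix A. A minimal, hypothetical pandas sketch (invented column names, simulated 0/1 responses):

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Simulated yes/no (1/0) responses for the fifteen measures listed in Table 2
df = pd.DataFrame(rng.integers(0, 2, size=(5, 15)),
                  columns=[f"pm_{i}" for i in range(1, 16)])
# Scope index: number of measures used, ranging from 0 to 15
df["pm_scope"] = df.sum(axis=1)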

Table 3. Descriptives. Contractor complies by providing timely and accurate information.

Values Frequency Percent

8 3 4.35

9 8 11.59

9.5 1 1.45

10 4 5.8

11 14 20.29

12 39 56.52

Table 4. Descriptives. Government reveals problems and effectively manages the contract.

Values Frequency Percent

2 1 1.45

3 1 1.45

4 2 2.9

5 1 1.45

6 12 17.39

7 18 26.09

8 34 49.28

Page 38: What is the Effect of Performance Measurement on Perceived ......communication, data collection, reporting, inspections, sanctioning, terminations, and renewals. Most of these activities

37

Table 5. Dependent variable: Contractor complies by providing timely and accurate information

Model 1: b, OR, sig. | Model 2: b, OR, sig. | Model 3: b, OR, sig. | Model 4: b, OR, sig.

PERFORMANCE MEASUREMENT

Costs 1.69 5.40 **

Service disruptions 2.27 9.70 *

Service quality -2.24 0.11 **

PM conducted through informal means -2.36 0.09 *

CONTROLS

Nonprofit Status -0.53 0.59 0.31 1.36 0.16 1.17 -0.11 0.89

Hard services -2.31 0.10 *** -2.16 0.12 ** -2.48 0.08 *** -2.49 0.08 ***

Collaborative perf. measurement index -0.01 0.99 -0.16 0.85 -0.23 0.80 -0.08 0.92

Contractor’s financial dependency 0.00 1.00 0.18 1.20 0.43 1.53 -0.37 0.69

Contractor has unique expertise 2.50 12.14 *** 2.56 12.96 *** 2.46 11.76 *** 2.45 11.62 ***

Contract awarded competitively 3.36 28.86 *** 3.06 21.24 *** 3.53 34.08 *** 3.21 24.67 ***

Environment perceived as dynamic -2.47 0.08 *** -2.38 0.09 *** -2.48 0.08 *** -2.24 0.11 ***

Relationship length -0.02 0.98 -0.02 0.98 -0.05 0.96 -0.03 0.98

Perceived trust -0.22 0.81 0.21 1.23 -0.39 0.67 0.25 1.28

Monitoring: self-reporting -3.29 0.04 ** -3.02 0.05 ** -3.20 0.04 ** -3.07 0.05 **

Monitoring: direct gov. inspections 0.75 2.13 0.81 2.26 0.94 2.56 1.04 2.82

Monitoring: third party monitoring -1.43 0.24 * -0.72 0.49 -0.54 0.58 -0.80 0.45

Government’s in-house prof. capacity 0.08 1.08 0.07 1.07 0.41 1.50 0.29 1.34

Cntr's engagement in PM is desirable 0.69 2.00 0.26 1.30 0.62 1.86 0.85 2.35

Respondent’s contract mgmt experience -0.16 0.85 *** -0.15 0.86 *** -0.17 0.85 *** -0.14 0.87 ***

Gov./contractor respondent dummy -4.51 0.01 *** -4.62 0.01 *** -4.59 0.01 *** -3.85 0.02 ***

Likelihood Ratio (ChiSq) 65.44 *** 63.52 *** 67.41 *** 63.76 ***

Note: *** - p < 0.01; ** - p <0.05; * - p <0.1; N=69.

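In Tables 5-6b, each odds ratio is the exponentiated coefficient, OR = e^b; for example, for Costs in Model 1, e^1.69 ≈ 5.42, which matches the reported OR of 5.40 up to rounding.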


Table 6a. Dependent variable: Government reveals problems and effectively manages the contract

Model 5: b, OR, sig. | Model 6: b, OR, sig. | Model 7: b, OR, sig. | Model 8: b, OR, sig.

PERFORMANCE MEASUREMENT

Sum of performance measures used 0.23 1.26 **

Impact of services on clients 1.26 3.54 *

Client satisfaction -1.60 0.20 **

Services delivered equitably 1.25 3.50 **

CONTROLS

Nonprofit Status -1.23 0.29 ** -0.90 0.41 -1.06 0.35 * -1.28 0.28 **

Hard services 0.17 1.18 0.06 1.06 0.43 1.54 0.59 1.81

Collaborative perf. measurement index 0.06 1.06 0.02 1.02 0.18 1.19 0.14 1.15

Contractor’s financial dependency -1.44 0.24 * -0.91 0.40 -0.96 0.38 -1.57 0.21 **

Contractor has unique expertise 0.27 1.31 0.24 1.27 0.18 1.20 0.04 1.04

Contract awarded competitively -0.31 0.74 -0.06 0.94 -0.39 0.68 -0.33 0.72

Environment perceived as dynamic -0.58 0.56 -0.58 0.56 -0.11 0.90 -0.29 0.75

Relationship length 0.08 1.08 * 0.06 1.07 0.04 1.04 0.07 1.07

Perceived trust -0.44 0.64 -0.29 0.75 -0.56 0.57 -0.39 0.68

Monitoring: self-reporting -0.31 0.74 -0.86 0.42 -0.31 0.73 -0.14 0.87

Monitoring: direct gov. inspections 1.54 4.68 ** 1.79 5.99 *** 2.44 11.52 *** 2.06 7.82 ***

Monitoring: third party monitoring -0.75 0.47 -0.53 0.59 -0.16 0.85 -0.71 0.49

Government’s in-house prof. capacity 1.33 3.78 ** 1.62 5.05 *** 1.87 6.48 *** 1.68 5.37 ***

Cntr's engagement in PM is desirable -0.95 0.39 -0.80 0.45 -0.59 0.55 -0.55 0.58

Respondent’s contract mgmt experience -0.02 0.98 -0.02 0.98 0.00 1.00 -0.02 0.98

Gov./contractor respondent dummy -1.21 0.30 -0.72 0.49 -0.37 0.69 -0.89 0.41

Likelihood Ratio (ChiSq) 28.84 ** 27.30 * 28.60 ** 28.48 **

Note: *** - p < 0.01; ** - p <0.05; * - p <0.1; N=69.



Table 6b. Dependent variable: Government reveals problems and effectively manages the contract

Model 9: b, OR, sig. | Model 10: b, OR, sig. | Model 11: b, OR, sig. | Model 12: b, OR, sig.

PERFORMANCE MEASUREMENT

Service timeliness 2.70 14.88 ***

Service disruptions 3.63 37.59 ***

Detailed service specifications 1.01 2.74 *

PM conducted through informal means -2.23 0.11 **

CONTROLS

Nonprofit Status -0.96 0.38 -1.14 0.32 * -1.06 0.35 * -1.52 0.22 **

Hard services 0.31 1.36 0.36 1.43 -0.03 0.97 -0.40 0.67

Collaborative perf. measurement index 0.23 1.26 0.13 1.14 0.08 1.09 0.10 1.11

Contractor’s financial dependency -1.81 0.16 ** -1.67 0.19 ** -1.48 0.23 * -1.67 0.19 **

Contractor has unique expertise 0.37 1.45 0.38 1.46 0.22 1.24 0.12 1.13

Contract awarded competitively -0.87 0.42 -0.99 0.37 -0.30 0.74 -0.22 0.81

Environment perceived as dynamic -0.95 0.39 -1.00 0.37 * -0.33 0.72 -0.20 0.82

Relationship length 0.10 1.11 ** 0.11 1.12 ** 0.06 1.06 0.07 1.08

Perceived trust -0.67 0.51 -0.35 0.71 -0.56 0.57 -0.35 0.71

Monitoring: self-reporting -0.62 0.54 -0.28 0.76 -0.34 0.71 -0.48 0.62

Monitoring: direct gov. inspections 1.82 6.14 *** 1.98 7.22 *** 1.88 6.56 *** 2.51 12.31 ***

Monitoring: third party monitoring -0.83 0.43 -0.87 0.42 -0.60 0.55 -0.57 0.56

Government’s in-house prof. capacity 1.82 6.20 *** 1.60 4.95 ** 1.58 4.87 ** 1.88 6.53 ***

Cntr's engagement in PM is desirable -1.13 0.32 -0.62 0.54 -1.00 0.37 0.02 1.03

Respondent’s contract mgmt experience -0.03 0.97 -0.01 0.99 -0.02 0.98 -0.03 0.97

Gov./contractor respondent dummy -1.64 0.19 ** -1.57 0.21 ** -1.05 0.35 -0.50 0.61

Likelihood Ratio (ChiSq) 33.16 ** 37.61 *** 26.74 * 29.52 **

Note: *** - p < 0.01; ** - p <0.05; * - p <0.1; N=69.



FOOTNOTES

1. This study adopts Romzek and Johnston’s definition and operationalization of this term. Importantly, in some studies, Romzek and Johnston use the term contract implementation and management effectiveness. The two terms have been defined similarly and are used interchangeably.
2. This is a term coined by Milward and Provan (2000).
3. Most of the information pertaining to data and measurement appeared in earlier publications based on the same data: Amirkhanyan (2009) and Amirkhanyan (2010).
4. The instrument included questions with nominal or ordinal response categories, as well as questions requiring descriptive explanation, hence producing data appropriate for both qualitative and quantitative analyses.
5. Please contact the author for a copy of the interview guide and the data collection protocol.
6. These jurisdictions were conveniently accessible to the primary investigator and allowed in-person interviews. The unique location of the Washington, DC metropolitan area allowed collecting data from several jurisdictions in Maryland and Virginia (population ranging from 195,000 to 5,600,000; proportion of Whites ranging from 38% to 80%; median household income ranging between $43,000 and $77,000).
7. A telephone book was used in one jurisdiction in the absence of a procurement office web site.
8. The results of this research can therefore be applied more readily to jurisdictions with higher median income and those located near large metropolitan areas. The reliability of data collection was improved by using a common data collection protocol; however, follow-up research performed in other locations is needed to enhance the external validity of the findings.
9. The statement offered to the private respondents was: “We accurately comply with the government agency’s performance measurement requirements.” Other statements were also modified accordingly.
10. Hypothesis testing rejected the “no common factors” hypothesis (p<.0001) and failed to reject the “2 factors are sufficient” hypothesis (p=.82). Eigenvalues of the weighted reduced correlation matrix were 4.19 and 2.04 for the two factors, and they explained 67% and 33% of the variance.
11. Since some performance measures strongly correlated with one another, including all fifteen measures in one model raised multicollinearity concerns.
12. The design of this study has its limitations. Most importantly, it covers a limited number of neighboring jurisdictions, which limits its generalizability. Thus, our findings may be more readily applied to locations with higher population income and those in large metropolitan areas. The reliability of data collection was enhanced by using a common data collection protocol; however, replications of this research in other locations may help enhance its external validity.
13. The low prevalence of cost assessment may have been caused by a large proportion of fixed-cost contracts, especially in the human services area. Thus, in these cases, monitors reported reviewing the billing documentation rather than evaluating the cost-effectiveness of the contracted services.
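As a quick arithmetic check of the variance shares reported in footnote 10: with eigenvalues of 4.19 and 2.04, the first factor accounts for 4.19 / (4.19 + 2.04) ≈ 0.67 of the common variance and the second for 2.04 / (4.19 + 2.04) ≈ 0.33, consistent with the 67% and 33% reported.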