The Impact of Institutional Forces on Software Metrics Programs



The Impact of Institutional Forces on Software Metrics Programs

Anandasivam Gopal, Tridas Mukhopadhyay, and M.S. Krishnan

Abstract: Software metrics programs are an important part of a software organization's productivity and quality initiatives as precursors to process-based improvement programs. Like other innovative practices, the implementation of metrics programs is prone to influences from the greater institutional environment in which the organization exists. In this paper, we study the influence of both external and internal institutional forces on the assimilation of metrics programs in software organizations. We use previous case-based research in software metrics programs as well as prior work in institutional theory in proposing a model of metrics implementation. The theoretical model is tested on data collected through a survey of 214 metrics managers in defense-related and commercial software organizations. Our results show that external institutions, such as customers and competitors, and internal institutions, such as managers, directly influence the extent to which organizations change their internal work-processes around metrics programs. Additionally, the adaptation of work-processes leads to increased use of metrics programs in decision-making within the organization. Our research informs managers about the importance of management support and institutions in metrics program adaptation. In addition, managers may note that the continued use of metrics information in decision-making is contingent on adapting the organization's work-processes around the metrics program. Without these investments in metrics program adaptation, the true business value of implementing metrics and software process improvement will not be realized.

Index Terms: Product metrics, process metrics, software engineering, metrics programs, software development, institutional forces, metrics adaptation, metrics acceptance.

    1 INTRODUCTION

The management of software development is an integral part of industry today, but most software organizations face significant barriers in managing this activity. An InformationWeek survey found that 62 percent of respondents feel that the software industry has trouble producing good quality software [17]. Losses due to inefficient development practices and inadequate quality cost the US industry approximately $60 billion per year [41]. One approach that has been shown to result in improved quality and reduced costs is the use of software process improvement activities. Investments in process improvement activities lead to improved quality and reduced rework and lifecycle costs [25], [14]. One of the important determinants of success in software process improvement is the presence of metrics programs. Indeed, the availability of reliable and accurate metrics has been highlighted in prior research [35]. In this paper, we focus on this important antecedent of process improvement initiatives: software metrics programs.

A metrics program is defined as the set of ongoing organizational processes required to define, design, and implement an information system for collecting, analyzing, and disseminating measures of software processes, products, and services [3]. A primary objective of a software metric is to quantitatively determine the extent to which a software process, product, or project possesses a certain attribute [22]. Additionally, the metrics program is responsible for analyzing the metrics data from the development activities and providing stakeholders with the appropriate feedback. The metrics provided are used by the organization to focus on key process areas that need improvement.

The business value of a metrics program is considerable in firms that have implemented one successfully, such as Motorola [10] and Contel Corporation [34]. However, anecdotal evidence shows that the successful adoption and implementation of these programs is limited. Pfleeger [34] reports that two out of three metrics programs fail in the first two years. Another study reported that, though software managers believe in the value of metrics programs, fewer than 10 percent of the industry classified their attempts at implementing a metrics program as positive [10]. Prior research has also attributed the low incidence of successful metrics programs to organizational and managerial problems rather than shortcomings in the definitions of software metrics [3]. It is thus important for software managers to understand what constrains the successful assimilation of these programs into their organizations.

Past research has studied the determinants of success in establishing and maintaining metrics programs in software organizations [10], [34]. However, there is a lack of empirical evidence from the software industry as to what factors lead to successful implementation of metrics programs in organizations.


• A. Gopal is with the Robert H. Smith School of Business, University of Maryland, College Park, MD 20742. E-mail: [email protected].

• T. Mukhopadhyay is with the Tepper School of Business, Carnegie Mellon University, Schenley Park, Pittsburgh, PA 15217. E-mail: [email protected].

• M.S. Krishnan is with the Ross School of Business, University of Michigan, Ann Arbor, MI 48109. E-mail: [email protected].

Manuscript received 16 July 2004; revised 25 Mar. 2005; accepted 21 June 2005; published online 12 Aug. 2005. Recommended for acceptance by A. Anton.


In particular, it is necessary to understand that the process of successful adoption and implementation of metrics programs is not a binary yes/no decision but represents a series of organizational moves toward the optimal use of metrics programs, as discussed by Berry and Jeffery [3]. In this paper, we provide a multidimensional view of metrics program implementation in organizations. In order to provide a clear framework for the process by which metrics programs are assimilated into organizations, we use the six-stage innovation diffusion model proposed by Cooper and Zmud [8] (a minimal encoding of the stages appears after the list):

. Initiation, the first stage, consists of active or passive scanning of the organizational problem and the opportunities available in the external environment to address the problem.

. Adoption, the second stage, involves mobilizing the organizational and financial resources to reach the decision to formally adopt the process innovation.

. Adaptation, the third stage, involves the process being developed and implemented. Organizational procedures are revised to fit the innovation and personnel are trained in the new process.

. Acceptance, the fourth stage, consists of actively inducing usage of the process innovation in decision-making.

. Routinization is the fifth stage, wherein the process innovation is part of everyday routine and is seen as normal activity.

. Infusion, the final stage, involves building value-added processes to gain the maximum leverage out of the process innovation.
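For readers who think in code, the six stages can be written down as an ordered enumeration. This sketch is purely illustrative; only the stage names come from Cooper and Zmud's model, and everything else is our own framing:

```python
from enum import IntEnum

class DiffusionStage(IntEnum):
    """Cooper and Zmud's six-stage innovation diffusion model,
    ordered as an innovation moves through an organization."""
    INITIATION = 1     # scanning problems and external opportunities
    ADOPTION = 2       # mobilizing resources; formal decision to adopt
    ADAPTATION = 3     # revising procedures, training personnel
    ACCEPTANCE = 4     # inducing use of the innovation in decision-making
    ROUTINIZATION = 5  # the innovation becomes everyday, normal activity
    INFUSION = 6       # value-added processes extract maximum leverage

# The organizations studied in this paper have crossed ADOPTION,
# so the analysis focuses on the ADAPTATION -> ACCEPTANCE transition.
assert DiffusionStage.ADAPTATION < DiffusionStage.ACCEPTANCE
```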

Past research has indicated that the mere adoption of an innovation, as seen in the six-stage model above, does not lead to its routine use within the organization. Adoption is only the second stage of the model. Further resources and commitment are required on the part of the organization to gain the maximum benefits from the innovation. Prior work in metrics programs has discussed these issues. For example, Pfleeger [34] states that the mere presence of a metrics program does not add value unless software managers can be motivated to use metrics information in their decision-making. Similarly, Berry and Jeffery [3] state that the socio-technical context in which metrics programs exist needs to be studied to determine how metrics program implementations succeed. Using the six-stage model enables us to study metrics program success at a greater level of granularity. In addition, it enables us to study the effects of the socio-technical contexts in which metrics programs exist and by which they are influenced.

In our analysis, we focus on organizations that have formally adopted metrics programs and have thus crossed the second stage of the six-stage model. We are interested in analyzing the factors that influence the extent to which organizations adapt to metrics programs, i.e., the extent to which the organization has implemented work-processes around the metrics programs. Thus, one of the contributions of our work is to propose and empirically validate a measure for adaptation (Stage 3) of metrics programs within software organizations. The successful adaptation of a metrics program should lead to increased use of the metrics program in decision-making within the organization, i.e., increased acceptance of the metrics program (Stage 4). Acceptance in this context is defined as the level of use of metrics information in decision-making across different tasks in the organization [38]. Therefore, we also study the link between adaptation and acceptance of metrics programs.

Innovative practices and programs such as metrics programs exist in highly complex organizational environments that are driven by strong institutions existing in the industry. These institutions can be both external to the organization as well as internal. For example, the pressure applied on firms by potential customers and clients is a strong external motivating force. In other cases, organizations adopt innovative practices because they believe these are the right way to do business, thereby establishing strong internal institutions. Thus, institutions in general are instrumental in the adoption of innovative practices in organizations. We therefore study software metrics programs using institutional theory as a theoretical basis. Institutional theory describes the forces that drive organizations to adopt practices and policies in order to gain legitimacy in the market. Thus, firms that conform to prevalent institutional norms in an industry are seen as being more legitimate than firms that do not conform [26]. We therefore evaluate the effect of these institutional forces on metrics program implementation in organizations.

Our analysis is conducted on data collected through a survey of 214 metrics professionals in software organizations in the United States. The Web-based survey contained questions on existing metrics programs and demographic information from 145 organizations, including both defense contractors and commercial sector companies. To summarize our work in this paper, we study the implementation of metrics programs in software organizations. We extend past research in metrics programs by conducting an industry-wide survey of metrics programs focusing on institutional and organizational factors. Past research has argued that the implementation of complex innovations such as metrics programs requires a series of adaptation moves [8]. We therefore measure adaptation using a multidimensional view that includes the extent to which individual work-processes are modified or instituted to support the metrics program. Subsequently, we examine the effects of adaptation on an outcome measure that reflects the extent to which the metrics program is used and accepted in decision-making, i.e., acceptance [38]. In the next section, we review some of the prior work in software metrics programs and institutional theory.

2 BACKGROUND LITERATURE

The background literature pertinent to our analysis comes from two sources: the anecdotal literature on metrics programs and prior research on institutional theory. In this section, these two streams of research are briefly reviewed.



    2.1 Software Metrics

The bulk of prior research in software metrics and their successful implementation in organizations is anecdotal. The focus of this literature has been on examining factors that lead to successful implementation of metrics programs in organizations. Berry and Jeffery [3] call this line of research "factor-based" (p. 185), where the objective is to rank factors in terms of their association with success. Note that the definition of success varies from study to study [34]. The anecdotal literature falls into two categories. Case studies on metrics programs in a single organization are in the first category. The second category includes studies that compare metrics programs in two or more organizations. Based on these comparisons, the authors then present factors that are influential in determining success. We discuss these two streams of work briefly.

Company-specific case studies describe metrics implementations at organizations such as Motorola and Contel Corporation. For example, Daskalantonakis [10] discusses the introduction and implementation of a metrics program at Motorola in some detail. He focuses on the need for a set of common metrics, guidelines for collection and interpretation, and tools for automating the use of metrics. Likewise, Pfleeger [34] discusses the introduction of a metrics program at Contel Corporation. She identifies the importance of simple and usable metrics and a streamlined data collection and analysis process, in addition to several other factors. While some of the factors discussed are particular to Contel, other factors are generalizable to other software development companies. Related case studies on Eastman Kodak [40], Loral Labs [27], and the US Army [12] bring out the relevance of very similar sets of factors that determine metrics program success.

In contrast to the above-mentioned papers, other work in metrics programs has looked at two or more firms in analyzing success. Hall and Fenton [15] contrast two programs in two different companies, one successful while the other was not as successful. The authors show that upper management support, resource availability, communication, and adequate feedback from the stakeholders in the program increase the probability of success. Gopal et al. [14] survey more than a hundred organizations that have metrics programs in-house to identify technical and organizational factors that lead to metrics success. The importance of management support, communication, and feedback loops is underscored in their work.

Our work here complements the above-mentioned prior research in several ways. First, we recognize the validity of the factor-based approach to metrics program success and use it to drive our definition of adaptation. The important factors identified therein, such as communication and feedback, are included in our definition of adaptation. Thus, our measure of adaptation is a composite measure building on prior factor-based work. Second, we add to this literature by empirically testing one aspect of success: acceptance. This construct captures the often-expressed sentiment in the literature that eventual use of metrics information in decision-making is a strong indicator of successful implementation. Note that acceptance is only one such dimension of success in this context; further research is needed to study other dimensions of success, such as enhanced decision-making speed and improved quality of decisions. Third, much of the prior work in metrics programs has focused on factors within the organization. Our work adds to this by explicitly considering the effects of the greater institutional environment outside the firm on metrics programs [3]. Thus, the role of institutions in this context is important and we discuss these next.

    2.2 Institutional Forces

The focus in the literature has been on success factors internal to the organization. However, it is important to recognize the influence of the external environment in which software organizations exist. In particular, software organizations exist in institutionalized environments that exert strong pressures on the organizations to conform to certain industry-wide norms. Like any other industry, the maturing of the global software industry has led to institutions and institutional norms emerging in the environment. Institutional theory studies the sources of these institutional norms and the manner in which they affect decision-making.

Institutional theory describes the institutional forces that drive organizations to adopt practices and policies in order to gain legitimacy in the market. At its core, institutional theory tries to explain institutional isomorphism, i.e., the constraining process that forces one unit in a population to resemble other units that face the same environment [9]. Institutional pressure can be applied by rules, laws, public opinion, views of important constituents such as customers and suppliers, knowledge legitimated through education and universities, and social prestige. The institutional forces on firms typically arise from the presence of professional bodies, cadres of similarly trained people, and strong methods and processes [11]. Note that these forces are not imposed by any one body but are based on the general emergence of strong norms and accepted beliefs within the industry.

Strong institutions have emerged in the software industry in the last decade, and these norms have come from the growth of the worldwide software industry and a movement toward an engineering focus. Formal programs in computer science and information systems, strong academic institutions such as the ACM, the growth of software-related journals such as IEEE Software, and software organizations such as IBM, Oracle, and Motorola have added to the institutionalization of the industry. These institutions provide organizations with the incentives to adopt methods and practices that have, over time, become the norm.

With respect to metrics programs in particular, the importance given to process-based improvement initiatives in software organizations has increased pressure on organizations to adopt such programs [25], [35]. This has been facilitated by organizations such as the Software Engineering Institute (SEI) and the International Standards Organization (ISO), which provide companies with risk frameworks, process frameworks, and quality management techniques [30]. The SEI has proposed the Capability Maturity Model (CMM), a staged model of process improvement [30].



The ISO, on the other hand, has proposed several frameworks for quality systems in production, of which ISO 9001 describes a model for quality assurance in design, development, and installation services [29]. Although some differences exist between the CMM and the ISO model, both standards have gained acceptance in the software development community as the normative way to develop software.

Certification of software development organizations on the basis of the CMM is often required by clients in the US and has become accepted as part of legitimate practice [2]. European software companies and clients tend to use ISO 9001 and the newer ISO/IEC 15504 standard [31] as accepted frameworks for assessing software vendors. Software organizations are also motivated to acquire process certification by competing organizations in the software market [17]. Moreover, the evaluation and selection of software vendors by both government and commercial customers is sometimes conditional on process certification [5]. As the value of process certification and process improvement activities increases, the need for metrics programs within organizations will also increase. Thus, the value of process-based improvement and the associated process certification is a significant factor in promoting the implementation of metrics programs in organizations.

Although researchers in software metrics have not explicitly addressed the institutional environment in their analysis, they have recognized the importance of these forces in determining metrics program success. The importance of management commitment has been mentioned in several case studies on metrics programs [10], [34]. The role of management is important in setting up internal institutions within the organization, i.e., the managers establish norms within the firm. The external environment has also been discussed as influencing metrics program success [20]. In another paper, the same authors describe the relevance of external stakeholders such as customers and potential customers to metrics programs [3]. Offen and Jeffery [28] have included business and strategic goals in their M3P framework for establishing measurement programs. Business and strategic goals are set in conjunction with the external environment and partners such as suppliers, customers, and competitors. Thus, the use of institutional theory provides a good fit with the extant literature in metrics programs. We discuss our research model and hypotheses next.

    3 RESEARCH MODEL AND HYPOTHESES

    3.1 Adaptation and Acceptance

As mentioned before, adaptation is defined as the stage in the implementation process in which the innovation is developed, installed, and maintained. During adaptation, organizational procedures around the innovation (or innovative practice) are revised within the organization or new procedures are developed. New work-practices are developed to fully leverage the innovation, and organizational members are trained both in the procedures and the use of the innovation [6]. From adaptation, the organization then progresses to the next stage of change, acceptance, in which the innovation is accepted and used by the organization. Thus, acceptance represents a direct outcome of the investments in new work-processes and procedures surrounding metrics programs made by the organization.

The empirical definition of adaptation we use is similar to that used by Westphal et al. [43] in their study of TQM adoption in hospitals. The authors measured the extent to which hospitals adopted each of 20 different TQM-related activities. Each of these activities was required to some extent for the hospital to have adopted TQM, but hospitals varied in terms of how many of these activities had been adopted fully. Therefore, the extent to which a hospital was considered to have adapted to TQM was based on an average of the extent to which each individual activity was carried out. We use a similar definition of adaptation in our context. Successful adaptation of metrics programs consists of several different but related organizational activities. Adaptation is reflected by the extent to which each of these individual organizational activities is begun and conducted in a systematic manner. In order to identify the activities that together constitute adaptation, we use the factor-based literature in software metrics discussed before. Specifically, we identify five primary factors that have been addressed in all the factor-based past research in metrics programs, as highlighted below.

    3.1.1 Regularity of Metrics Collection

Pfleeger [34] mentions the discipline involved in regularly collecting the appropriate metrics as being key to the success of a metrics program. Even if the set of metrics collected is small and focused, the organization needs to establish due processes by which the metrics are collected and archived at the appropriate times. Indirectly, the regularity of metrics collection also measures metrics collection across the phases of a project. Thus, although we do not explicitly capture where in the development process metrics are collected, a systematic and regular set of metrics collected implies that, on average, metrics are collected across different phases of development. Note that this assumption might not hold for metrics needed for predictive purposes, and this is a potential limitation of our model. Therefore, a key work-process is the regularity with which metrics selected by the organization are collected.

3.1.2 Data Collection Methodology

In addition to metrics regularity, there must be a concerted effort to introduce a seamless and well-understood data collection methodology in the organization. Prior work in metrics programs has discussed the need for a properly structured data collection methodology that is less burdensome and is well integrated into the work of an average software engineer [10], [27], [34].

    3.1.3 Sophisticated Data Analysis

A key component of a metrics program's value is the ability to conduct sophisticated analysis on the metrics information collected [15]. Briand et al. [7] provide a taxonomy of analyses that are possible with software metrics, from the simplest effort estimates to sophisticated forecasting applications, and stress the role of analysis in ensuring metrics program success.



Note that the ability to use sophisticated analysis techniques does not discount the value of simplicity in metrics analysis. However, the ability to carry out more sophisticated analyses such as forecasting, statistical quality control, and defect prevention increases the breadth of decision-making that metrics programs can support. Moreover, higher-level process areas in both the CMM and the ISO/IEC 15504 standard implicitly require the ability to perform more sophisticated analyses [31].

    3.1.4 Communication Mechanisms

It is important to create processes to communicate the results of the data analysis back to the stakeholders involved in a timely manner. This could be through formal means such as phase documents and memos, or through informal means such as review meetings and project meetings. The lack of adequate communication was also cited as a factor in metrics program success by project managers in a survey of 13 companies carried out by Hall et al. [16].

    3.1.5 Automated Tools

Most unsuccessful metrics programs involve additional work imposed on engineers through intrusive and labor-intensive data collection methods [15], [27], [34]. The ideal method would be to automate the process of data collection in a manner that is entirely transparent to the user. Therefore, the level of automated support available to the program is an indication of the extent of adaptation.

Note that the five factors discussed above are broad enough to be implemented across most organizations but are specific enough to characterize adaptation within any given organization. This definition of metrics adaptation is by no means exhaustive but provides a first attempt at capturing metrics program adaptation in organizations. Therefore, we propose:

Hypothesis 1. The extent of metrics program adaptation in a software organization is reflected by the following work-processes:

1. Regularity of metrics collection,
2. Seamless and efficient data collection,
3. Use of sophisticated data analysis techniques,
4. Use of suitable communication mechanisms,
5. Presence of automated data collection tools.

Organizational work-processes that have been adapted to suit the metrics program's benefits prepare the organization for acceptance. Acceptance is an outcome of the efforts taken to induce organizational members to commit to the use of the innovation in decision-making [38].

In the case of metrics programs, adaptation is a particularly crucial precursor to acceptance. Increased usage of metrics information is contingent on the quality of the metrics collected, the level of trust and confidence the users have in the analysis, the communication process, and the manner in which the metrics are collected [10]. Clearly, inadequate processes for the collection and analysis of metrics information would be a case of Garbage In, Garbage Out. Since metrics programs are adapted in different ways across different organizations (based on organizational structure and strategy), acceptance of metrics programs need not be uniform. However, on average, higher adaptation will be associated with greater acceptance and we therefore propose:

Hypothesis 2. Greater levels of metrics program adaptation in software organizations are associated with increased levels of acceptance.

    3.2 Institutional Forces and Adaptation

Institutional forces can be differentiated into three types: mimetic, coercive, and normative. Mimetic forces influence organizations operating in uncertain environments and are defined as those forces which induce an organization to mimic other organizations that are perceived to be successful. They manifest when technologies, processes, and goals are vague and ambiguous [11]. This is true of the software industry, which has been associated with high uncertainty, inadequate processes, and serious cost and quality issues [17]. Thus, software organizations react to uncertainty by instituting metrics-based process initiatives that mimic successful firms with metrics programs. Successful metrics implementation in firms like Motorola [10] induces other firms to follow suit with their own metrics programs in order to appear legitimate.

Coercive forces are exerted on an organization by other organizations upon which it is dependent, such as government organizations, present and potential clients, and regulatory agencies. Customers could enforce metrics programs on software organizations as a means of controlling costs and ensuring quality. For example, customers and potential clients sometimes require either CMM or ISO certification as a prerequisite for business [33]. The presence of such metrics initiatives in competing organizations also drives software organizations toward implementing metrics programs [2]. Therefore, coercive forces acting on the organization induce it to make the changes required to implement metrics programs.

Normative forces arise from professionalization, defined as a move by members of an occupation to define the conditions and methods of their work to establish legitimacy for their occupation [11]. Professionalization arises from universities and academic institutions on one hand, and from professional and trade organizations on the other. The process initiatives in software engineering that started in the 1980s can be seen as steps in the professionalization of the software industry [19]. Normative forces in the software industry have manifested themselves in the form of popular software process models such as the CMM and ISO 9001. Collectively, software organizations have adopted software processes to bring legitimacy to their development process. Thus, normative forces operating on software organizations motivate them to make adaptive changes around metrics programs. Therefore, we propose:

Hypothesis 3. Higher levels of institutional forces perceived by software organizations are associated with higher levels of adaptation of work-processes associated with metrics programs.

In contrast to the external institutional forces discussed above, there are internal institutional forces acting on metrics programs as well.



The internal institutions are created by management support of, and commitment to, the metrics program. The external institutions influence software managers who, in turn, establish the appropriate environment within the organization. The internal institutions allocate resources, set broad strategic directions for the firm's activities, and establish organizational norms and culture [26]. Therefore, demonstrated management commitment is an important part of the internal institution of the organization.

In the context of software metrics, management commitment is an important facilitator of metrics implementation. Upper management commitment enables greater levels of metrics implementation within the organization in two ways. First, upper management support results in greater resources being allocated to the program [14]. Second, it increases the visibility and legitimacy of the program within the organization [15], [28]. Research in software process improvement has also identified management commitment as an important determinant of success in process improvement activities [35]. We therefore believe that the level of management commitment exhibited toward a metrics program will directly drive the level of investment made in the adaptation of work-processes around the program. Thus, we propose the following:

Hypothesis 4. Higher levels of management commitment in software organizations are associated with higher levels of metrics program adaptation.

The overall research model postulated is shown in Fig. 1, with each hypothesis located on the figure.

    4 RESEARCH METHODOLOGY

    4.1 Data Collection

Given that the focus of our paper is on the adaptation and acceptance of metrics programs, data was collected from organizations that were in the postadoption phase. It was important for us to locate companies that were in the process of implementing metrics programs. Moreover, since our research methodology was a survey, we needed to contact the appropriate manager within these organizations. In order to locate the appropriate software companies and the required survey respondent within these companies, two trade organizations active in the software metrics area were contacted.

The first organization contacted was a private company that ran tutorials and courses for metrics managers, published a journal relating to metrics information, and organized a major annual conference for metrics professionals. Thus, this company had access to metrics professionals from software organizations who attended their conferences and meetings. Lists of attendees at the annual conference held by the organization were collected for three years and collated. A final list of 296 names was identified through this list. Software companies sent one to three persons to the conference. In certain cases, different divisions within the same organization had different metrics programs and, therefore, sent different teams to the conference. Eighty people were from military organizations and the remainder were from either the private sector or defense contractors.

The second organization we contacted was a division of the US Department of Defense (DoD) that coordinated metrics-related activities for software divisions and contractors. This organization worked with different contractors for the DoD as well as with commercial software vendors in disseminating metrics information through meetings and conferences. The lists of attendees at their quarterly meetings were collected for two years and collated. This list yielded 219 names, of which 115 were from military agencies or departments.

In addition to the above two sources, we were allowed access to the list of attendees at the Software Engineering Institute's (SEI) training programs in software metrics for two years. This list included 178 names, of which 55 were in the defense sector and the remainder in the private sector. Putting together the lists of names from the two aforementioned sources with the list of attendees from the SEI provided us with an addressable sample of 594 names.

The data collection methodology used for this paper was a survey. The data was collected over the Internet through an online survey instrument hosted on the SEI's Web server. Before the survey was administered, it was pretested by five members of the in-house Software Measurement Group at the SEI, which is responsible for providing training to practitioners on metrics initiatives.


    Fig. 1. Conceptual model of metrics adaptation.


The survey was also administered to 15 practitioners attending a training session on metrics at the SEI, who provided valuable feedback. These 15 respondents were not included in the final sampling frame used. All changes and feedback were incorporated into the questionnaire before it was administered to respondents. The changes pertained to altering the wording and structure of questions, providing definitions for terms used, and introducing clarifying instructions in the survey. In addition, we randomly distributed questions pertaining to a single construct within the questionnaire to avoid respondent bias. Finally, a small set of questions was dropped from the survey due to space constraints. The final questionnaire contained approximately 80 questions spanning roughly 12 pages on paper and took about 30 minutes to complete. The questionnaire also collected demographic information on the respondent and the respondent's organization.

The 594 potential respondents were contacted by email and were sent unique IDs and passwords to protect the integrity and confidentiality of their responses. The instructions included in the email and on the Web site indicated that, if multiple persons from the same metrics program were being contacted, the senior-most member was asked to respond to the questionnaire. In cases where participants were likely from the same organization, they were all copied on the introductory email. The recipients were also asked to contact the research team in cases where multiple people in the same organization were possibly contacted. In many cases, the authors received email from respondents indicating that multiple people had been contacted and that only one designated person would be responding for the organization. Given the number of such interactions, we believe that most responses can be associated with a unique metrics program. As mentioned before, in some cases, different divisions of the same company had different metrics initiatives in place and these responses were counted separately. This was particularly true of defense organizations, where the scale of the organization was much larger and included a number of independent software organizations. Of the 594 names, 90 were removed from the sample due to these interactions with the authors.

Due to the time elapsed between the capture of the email addresses at our sources and data collection, some respondents on our email lists were no longer available. Therefore, our initial sampling frame of 504 names was reduced to 385 names due to the unavailability of the persons concerned. Of these 385 persons contacted, 228 complete responses were received, resulting in a 59 percent response rate.1 Of these 228, 14 were deleted due to incomplete data for the purposes of this study. The final sample size used in subsequent analysis is 214. The sample profile and summary statistics are shown in Tables 1 and 2, respectively. We describe the measures used next.

    4.2 Variable Definitions

The following section describes the measures used to capture the various constructs in our research model. There are four central constructs: adaptation, acceptance, institutional forces, and management commitment. Each of these constructs was measured using a distinct set of questionnaire items, which are described below. The questionnaire items were measured on 5-point Likert scales and were created on the basis of prior research in both metrics programs and institutional theory. We describe the measures in our model and their psychometric tests next.

    4.2.1 Metrics Adaptation

We model adaptation as consisting of the five reflective factors discussed in the previous section. They are described below.


TABLE 1. Sample Profile

TABLE 2. Summary Statistics

1. No incentives were offered for participation in the survey. We thank an anonymous reviewer for this point.


1. Metrics Regularity (Metrics): Most metrics programs are built on a foundation of basic software engineering metrics such as cost data, schedule information, and effort information. This construct was measured using four questionnaire items addressing the regularity of the collection of base metrics in the organization.

2. Data Collection (Collect): This construct measured how well the organization had specified responsibility for metrics data collection and was based on prior research [10], [30]. The measures for this construct reflect the need for a regular and systematic methodology for collecting project-level metrics discussed in past work [4]. The construct was measured using three questionnaire items.

3. Quality of Analysis (Analysis): This construct captured the level of sophisticated analysis conducted using metrics information and is based on Briand et al.'s [7] taxonomy of analysis options. Briefly, the taxonomy describes three broad kinds of goals that metrics enable: descriptive, evaluative, and prescriptive goals. The basic set of metrics can provide descriptions or snapshots of the current state of development. More sophisticated analysis can provide managers with the tools to evaluate the development process, thereby enabling better decision-making. Prescriptive models use metrics data and sophisticated techniques to enable prediction of performance and make prescriptive recommendations. For our purposes, we use four items in the survey that address the increasing order of sophistication in analysis. Since this is a broad-based study of metrics in the industry, the more commonly used set of analyses was included in the survey.

4. Communication (Comm): This construct measured the level to which feedback on metrics information was communicated to various stakeholders in the program. It also measured the extent to which formal and informal communication methods were used, based on prior research [24]. The variable was measured using four questionnaire items.

5. Automated Tools (Auto): The presence of automated tools has often been addressed in past work and we use three questionnaire items to measure this construct. The questions pertain to the presence of tools for data collection, management, and analysis.

Before the constructs measured above can be used in analysis, we need to establish their reliability and validity. Reliability expresses the degree to which a measure is consistent and repeatable. Thus, reliability involves establishing high test-retest correlation, indicated by a Cronbach's alpha coefficient of 0.70 or higher. Validity indicates the extent to which the questionnaire items truly measure what they claim to measure, i.e., the underlying construct. We need to establish construct and discriminant validity in our context. Construct validity tests whether the questionnaire items measuring a construct are intercorrelated well enough to jointly form the construct. Discriminant validity tests whether the items measuring one construct are sufficiently uncorrelated with other constructs, thereby establishing the ability to discriminate between constructs. We perform these tests on our measures below.

All constructs displayed good reliability, as shown, along with the questionnaire items, in Appendix A. In order to test construct validity, each construct was subjected to exploratory factor analysis and the results are shown in Appendix A. Each set of questionnaire items associated with an underlying construct loaded appropriately on one factor when subjected to factor analysis, indicating good construct validity. To assess discriminant validity, all the questionnaire items were factor analyzed together to examine the factor structure and loadings. The emerging factor structure fitted the hypothesized five-factor model and is shown in Table 3. Thus, our constructs exhibit acceptable construct and discriminant validity.
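To illustrate these psychometric checks, the sketch below computes Cronbach's alpha for one construct and then fits an exploratory factor model over its items. The item data is randomly generated, and the factor_analyzer package is our choice for illustration; the paper does not report which software was used:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 214 respondents, four 5-point Likert items for one construct.
rng = np.random.default_rng(0)
latent = rng.normal(size=214)
items = pd.DataFrame({
    f"comm_{i}": np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=214)), 1, 5)
    for i in range(1, 5)
})
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")  # should exceed 0.70

# Exploratory factor analysis: the items should load on a single factor.
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)
print(fa.loadings_)
```

For the discriminant-validity check in Table 3, the same procedure is run over all items together with varimax rotation, and each item should load on its own construct's factor.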

    4.2.2 Institutional Forces

The institutional forces were measured using five items in the questionnaire. As discussed before, mimetic and coercive forces in this industry are applied by external entities such as customers and competitors. Therefore, we include two questionnaire items that capture the coercive elements and pertain to the extent to which measurement or external certifications are required by customers. The mimetic aspect is measured using one item that captures the extent to which competitors have used measurement to their advantage. The normative aspect of institutional forces is built around professionalization, which is best captured by the extent to which metrics-trained persons are employed within the organization [11]. Although this concept is difficult to assess directly, we can infer it through the extent to which the organization adapts itself to be assessed at higher maturity levels. To capture this, we included two questionnaire items that pertained to the presence of quality initiatives such as TQM, CMM, or ISO in the organizations. The five questionnaire items used here show good reliability (Cronbach's alpha = 0.81) and load well on one factor in factor analysis.

    4.2.3 Management Commitment

To capture the level of management commitment displayed by the organization, four questionnaire items are used. We measure management commitment through demonstrated support for the metrics program and the allocation of resources to the metrics program. In terms of resources, we use two items to capture the training provided to the metrics program and the availability of required funding for the program. The four items load on one factor and show high reliability (Cronbach's alpha = 0.79).

    4.2.4 Metrics Acceptance

Acceptance is typically captured in one of three ways: attitudes toward use of an innovative process, intention to use, and frequency of use [38]. In our context, the appropriate measure of acceptance is the frequency with which metrics information is used in decision-making. We use prior work in metrics programs that describes how metrics information can be used [7], as well as the factor-based research on metrics programs [28], [41], to create our measures. This construct is measured using four questionnaire items that pertain to the frequency with which software organizations use metrics information in decision-making. The construct shows good reliability (Cronbach's alpha = 0.76) and loads well on one factor.

    4.3 Data Analysis

The data analysis for our model was carried out using structural equation modeling (SEM). SEM is useful when it is important to estimate all the paths of a hypothesized model together rather than evaluate the paths individually. Two primary analytical procedures were conducted in evaluating our research model. First, we ran a confirmatory factor analysis procedure on the five work-processes that constitute adaptation. In this analysis, we empirically test for the presence of a higher-order latent construct, Adaptation, which is composed of the five underlying factors. Second, we estimate the research model shown in Fig. 1 to address our research hypotheses. In this analysis, each of the paths shown in the model was estimated along with standard errors. In addition, goodness-of-fit indices for the overall model were also estimated. The statistical procedures are described in detail in Appendix B. We briefly describe the statistical results below.

The first objective, to perform confirmatory factor analysis of adaptation, was strongly supported by the analysis. Each of the underlying five factors loaded well on a higher-order construct and the overall goodness-of-fit indices were excellent, as shown in Table 4. Thus, our analysis shows that the definition of Adaptation as being composed of five underlying work-processes receives strong empirical support. Adaptation of metrics programs in an organization can thus be driven to a large extent by instituting work-processes such as the five factors studied. The second procedure, estimating the research model (Fig. 1), shows goodness-of-fit indices that are statistically significant, indicating a sound model.2 In addition, each of the paths estimated was statistically significant, as shown in Table 5 and Fig. 2. Thus, the statistical procedure shows that the data fit our proposed model in a statistically significant manner. We discuss the individual results next.
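As an illustration of this two-step procedure, the sketch below writes the second-order measurement model and the structural paths in lavaan-style syntax using the semopy package. The package choice and the item names are our assumptions, since the paper does not name its SEM software or publish the item-level data:

```python
import semopy  # pip install semopy

# Second-order CFA: Adaptation is a higher-order latent composed of the
# five first-order work-process factors, each measured by its items.
# Structural paths mirror Fig. 1: institutional forces (H3) and management
# commitment (H4) predict Adaptation; Adaptation predicts Acceptance (H2).
model_desc = """
# measurement model (item names are hypothetical)
Metrics  =~ met1 + met2 + met3 + met4
Collect  =~ col1 + col2 + col3
Analysis =~ ana1 + ana2 + ana3 + ana4
Comm     =~ com1 + com2 + com3 + com4
Auto     =~ aut1 + aut2 + aut3
Forces     =~ if1 + if2 + if3 + if4 + if5
Commitment =~ mc1 + mc2 + mc3 + mc4
Acceptance =~ acc1 + acc2 + acc3 + acc4

# second-order latent (H1)
Adaptation =~ Metrics + Collect + Analysis + Comm + Auto

# structural paths (H3, H4, H2)
Adaptation ~ Forces + Commitment
Acceptance ~ Adaptation
"""

# df would be a pandas DataFrame of the 214 responses, one column per item:
# model = semopy.Model(model_desc)
# model.fit(df)
# print(model.inspect())           # path estimates and standard errors
# print(semopy.calc_stats(model))  # goodness-of-fit indices
```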

    5 RESULTS AND DISCUSSION

    5.1 Hypotheses Tests

The results of the models presented above show strong support for Hypotheses 1 and 2. The confirmatory factor analysis results on metrics adaptation show that the five work-processes hypothesized to constitute adaptation receive strong empirical validation, thus supporting Hypothesis 1. Hypothesis 2 pertained to the association between adaptation and acceptance, and the results support this reasoning. As seen in Fig. 2, the parameter is positive (1.09) and statistically significant (p < 0.05). Acceptance of metrics programs is higher in organizations with work-processes that have substantially adapted to the program. Our results also support the assertions made in the anecdotal literature in software metrics about the need for appropriate procedures and methods to be defined. These work-processes have to be defined and put in place before metrics-based information is used in decision-making.

The hypotheses pertaining to Institutional Forces and Management Commitment are also strongly supported. The institutional pressures exerted on organizations are associated with greater investments in defined work-processes within the organization, thereby possibly leading to greater adaptation.


2. The actual procedures and statistics are described in detail in Appendix B.

TABLE 3. Exploratory Factor Analysis on Individual Work-Processes (N = 214, Principal Components with Varimax Rotation)


As adaptation is required for acceptance of metrics programs, the second-order effects of the institutional forces include promoting acceptance of metrics programs within the organization. The presence of demonstrated management commitment also influences the adaptation of software metrics programs, supporting past work in software metrics [34], [15].


TABLE 4. Confirmatory Factor Analysis for Metrics Adaptation (Second-Order Latent Variable) (N = 214, All Paths Significant at p < 0.05)

TABLE 5. Measurement Model for Institutional Forces on Metrics Adaptation (N = 214, All Paths Significant at p < 0.05)


    5.2 Posthoc Analysis

Our sample consists of two sectors: the government or defense sector and the commercial sector. It is conceivable that these two sectors exhibit some fundamental differences in the organization of metrics programs. In particular, there is a possibility that institutional forces or adaptation differ systematically between the defense and commercial sectors. Therefore, a difference-of-means t-test was performed on all the factor variables between the two subsamples. The only significant result was for the institutional forces. The perception of institutional forces is lower in the defense sector than in the commercial sector (t = 3.43, p < 0.001). This could be due to the fact that, in the defense sector, metrics-related activities are required by governmental agencies. The defense sector has led the commercial sector in the process movement, since metrics and measurement have long been an integral part of all defense-related software development [16]. The presence of case studies from the defense sector such as the US Army [12], NASA [41], and Goddard Space Flight Center [36] attests to this fact.
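A minimal sketch of such a difference-of-means comparison, using scipy's independent-samples t-test on randomly generated factor scores standing in for the two subsamples (group sizes and values are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical institutional-forces factor scores for the two subsamples.
rng = np.random.default_rng(1)
defense = rng.normal(loc=3.2, scale=0.7, size=80)      # defense-sector scores
commercial = rng.normal(loc=3.6, scale=0.7, size=134)  # commercial-sector scores

# Difference-of-means t-test between the two sectors.
t_stat, p_value = stats.ttest_ind(commercial, defense)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```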

It can be argued that larger organizations are more willing to make adaptive changes since the returns could be greater. Researchers have hypothesized that larger organizations have more slack resources and are therefore more likely to adopt innovations [8]. We test for this effect in our sample by including size as a control variable in our model. We use the total number of full-time employees within the business unit as our measure of size. Unfortunately, this data is only available for 162 responses in our sample. We estimated our research model with size as a control variable on metrics adaptation using these 162 responses. The log of total full-time employees was used to reduce the scale of the variable. The size variable was not significant and did not change the other results.
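The log-size control can be illustrated with a simplified stand-in: an ordinary least-squares regression of adaptation scores on log headcount. The paper's actual test embedded size in the SEM model; the data and the statsmodels choice here are our assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: adaptation factor scores and business-unit headcount.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "adaptation": rng.normal(loc=3.5, scale=0.6, size=162),
    "employees": rng.integers(50, 20000, size=162),
})

# Log-transform size to reduce its scale, then test it as a control.
df["log_size"] = np.log(df["employees"])
fit = smf.ols("adaptation ~ log_size", data=df).fit()
print(fit.summary().tables[1])  # expect a non-significant log_size coefficient
```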

    5.3 Limitations

A common problem with survey methodologies is common method variance. In other words, the questionnaire items could be tapping a latent common factor not captured in the instruments. In order to test for this issue, we use Harman's one-factor test [32]. According to this test, if there was indeed one latent common factor driving the results, a factor analysis of all the questionnaire items together should show the presence of one significant underlying factor. Accordingly, all the questionnaire items and factor scores in the model were subjected to a single factor analysis. Four factors were extracted with three eigenvalues over 1.0, and the fifth eigenvalue was 0.92. The first factor captured only 36 percent of the variance. Since a single factor did not emerge from the factor analysis and one factor did not capture the majority of the variance in the data, common method variance might not be causing significant problems in our analysis.
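A sketch of Harman's one-factor test on hypothetical item data: run an unrotated factor extraction over all items together, then inspect the eigenvalues and the variance share of the first factor. The package choice and the data are our assumptions:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical matrix of all questionnaire items (214 respondents x 25 items).
rng = np.random.default_rng(3)
items = pd.DataFrame(rng.normal(size=(214, 25)),
                     columns=[f"item_{i}" for i in range(25)])

# Unrotated factor extraction over all items together.
fa = FactorAnalyzer(rotation=None)
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()

first_factor_share = eigenvalues[0] / eigenvalues.sum()
print(f"Eigenvalues over 1.0: {(eigenvalues > 1.0).sum()}")
print(f"Variance captured by first factor: {first_factor_share:.0%}")
# Common method variance is a concern only if a single factor dominates.
```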

All of the data in our analysis is perceptual, which is a limitation of the survey research methodology. Moreover, pretesting of the survey was done with 20 persons, which might not have identified all sources of bias. Another limitation of our analysis is that we do not capture where in the development process the metrics are collected. However, our measure of Metrics Regularity addresses the frequency with which metrics are collected in the organization, which indirectly gets at the collection of project-level metrics across all phases of projects. The unit of analysis in our paper is the organization and, therefore, if the organization regularly and systematically collects metrics on all projects, it seems reasonable to expect that metrics are collected on all projects across all phases. It would be hard to collect project-level data in our case, where the unit of analysis is the organization. Therefore, we try to indirectly capture the phases of a project through the Metrics Regularity construct.

Since our objective was to study organizations with existing metrics programs, our sampling frame included only organizations with metrics programs. Additionally, there is a possible sampling bias, i.e., the organizations that responded were probably not randomly distributed in the population. Since our response rate is relatively high, we believe that the sampling frame might not affect the robustness of our results. However, sampling bias is a common issue with most survey research and our results have to be interpreted keeping this issue in mind.


Fig. 2. Results of the Structural Model (N = 214; parameter estimates with t-statistics in parentheses).


6 MANAGERIAL IMPLICATIONS AND CONCLUSION

Software managers are typically the primary users and beneficiaries of software metrics in their decision-making. Our work informs them at multiple levels. At the level of the metrics program, managers need to understand the importance of adaptation. The onus of setting up appropriate procedures for collecting metrics data, analyzing it, and disseminating this information falls on software managers. Thus, our research provides them with a first-level description of the five important factors that need to be addressed for adaptation. First-time metrics managers can benefit by addressing these five work-procedures first before venturing into other aspects of these programs; this phased approach improves their likelihood of success. Our research also cautions software managers that expecting people to use metrics information in decision-making without adequate investments in adapting work-procedures may not work. Mandating metrics use in organizations without underlying structural changes in the work-procedures will result in inadequate acceptance.

Finally, software managers are also usually cognizant of the institutional norms active in the software industry. Therefore, it is through their initiatives that the bulk of the adaptive processes take place. Our research informs them of the beneficial and, in some cases, harmful effects of institutional norms. Although we don't explicitly analyze these issues, software managers must assess the institutional norms in the industry and structure the metrics program accordingly.

Our research also has direct implications for upper management in software development organizations. First, our analysis directly informs upper management about the importance of continued support for metrics programs. It is vital not just to formally adopt metrics programs but to enable the adaptive process so that the metrics program assimilates fully into the organization. Second, management is responsible for setting up the internal institutional environment that motivates and incentivizes software managers to make the required investments. It is also important to note that incentive-based behavior often does not last once the incentives are removed. Management is also responsible for providing resources and demonstrated support for the metrics programs; our research shows that these sources of support are required for adaptation. Finally, upper management needs to understand the vital role of metrics in software process initiatives that lead to improved quality and reduced costs. The business value indicated in prior research on process initiatives [25] is largely contingent on the availability of good metrics. Thus, an investment in a productive metrics program pays off through process initiatives that address larger cost and quality issues.

Our work points out several avenues for future work. First, our definition of adaptation is a first attempt to capture the essential components of this construct; more work is required to refine it and possibly add other factors to this definition. Second, a metrics program's success is ultimately determined by the organization's returns on investment. We measure acceptance, which is one possible outcome measure of a metrics program, but more work is needed to quantify the benefits from the implementation of a metrics program in organizations. Third, several structural issues about metrics programs need study. For example, is it preferable to have a unified suite of metrics for the whole organization, or is decentralized metrics collection appropriate? These questions are best addressed in the context of the organizational strategy. More work is also required in analyzing the institutional contexts that software organizations function in. The increasing institutionalization of the software industry makes this an interesting area for research, from both theoretical and empirical viewpoints.

    APPENDIX A

    QUESTIONNAIRE

    The sample questionnaire is presented in Tables 6a and 6b.

    APPENDIX B

STATISTICAL PROCEDURES

There are primarily two statistical procedures described in this appendix. The first performs a confirmatory factor analysis to test the proposed five-factor model of Adaptation. The second evaluates the research model shown in Fig. 1. We use structural equation modeling for both procedures. Structural equation modeling is a technique that allows the estimation of all the paths in a model simultaneously instead of breaking them down into constituent paths. The analysis also provides overall goodness-of-fit indices, which represent the extent to which the hypothesized model fits the data. Thus, we obtain estimates of the paths between constructs of interest as well as indices of overall model fit. In cases where the whole model is more than just the sum of its constituent paths, structural equation modeling is useful and informative.

Structural equation modeling involves the estimation of two separate models: the measurement model and the structural model. The measurement model addresses the degree to which the observed variables, i.e., the questionnaire items, capture the underlying constructs. It also allows the estimation of measurement error, thus allowing the researcher to test the strength of the measures used. In this way, the measurement model provides evidence of construct and discriminant validity [39]. The structural model describes the relationships between the constructs, i.e., the hypothesized paths between the variables of interest [1], and is therefore used to test the research hypotheses proposed in the paper. To perform our analysis, we use the structural equation modeling package LISREL, one of the most widely used packages for this kind of analysis.3 We first describe the confirmatory factor analysis of Adaptation and subsequently describe the estimation of the research model.


    3. More information on LISREL is available at www.ssicentral.com.


B.1 Confirmatory Factor Analysis of Metrics Adaptation

Metrics adaptation was hypothesized to be a latent construct characterized by a set of underlying work-processes. To establish that the five work-processes were indeed reflective of a higher-order construct (adaptation, in this case), we conducted a confirmatory factor analysis of a second-order latent variable, Metrics Adaptation, as shown in the box labeled Hypothesis 1 in Fig. 1. Thus, the five factors (Regularity, Data Collection, Analyses, Communication, and Automated Tools) are first-order latent constructs that are hypothesized to collectively form a second-order latent construct called Adaptation. This analysis confirms the validity of the adaptation measure as well as strengthens the subsequent analysis of institutional forces on adaptation. The estimation of all the paths in the factor model is done using maximum likelihood (ML) estimation.
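The authors estimated this model in LISREL. As a rough modern equivalent, the sketch below specifies the same second-order structure in Python using the third-party semopy package and its lavaan-style syntax; the item names (reg1, dc1, and so on) are hypothetical placeholders for the actual questionnaire items.

```python
import pandas as pd
import semopy

# Second-order CFA: five first-order work-process factors loading on a
# single second-order Adaptation factor. Item names are illustrative.
MODEL_DESC = """
Regularity     =~ reg1 + reg2 + reg3
DataCollection =~ dc1 + dc2 + dc3
Analyses       =~ an1 + an2 + an3
Communication  =~ com1 + com2 + com3
AutomatedTools =~ at1 + at2 + at3
Adaptation     =~ Regularity + DataCollection + Analyses + Communication + AutomatedTools
"""

data = pd.read_csv("survey_items.csv")  # hypothetical item-response file
model = semopy.Model(MODEL_DESC)
model.fit(data)                      # maximum likelihood by default
print(semopy.calc_stats(model).T)    # chi-square, CFI, RMSEA, and so on
```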

Structural equation modeling is based on two assumptions, multivariate normality and the presence of interval-scaled, continuous data [39], and we test for both. Prior research has shown that categorized data with five or more categories can be treated as continuous for the purposes of confirmatory factor analysis [21]. In a set of Monte Carlo simulations examining the effect of categorizing continuous variables, the use of a covariance matrix with maximum likelihood estimation provided estimates that were sufficiently robust even in the case of categorized variables [18]. In addition, research has shown that, with five categories used to approximate underlying continuous variables, the bias or distortion in correlation coefficients is small [6]. It is therefore acceptable, in some conditions, to treat categorical data as continuous. Consistent with this approach, we treat our five-point Likert-scaled data as continuous.

TABLE 6a: Questionnaire

The presence of nonnormal data, i.e., nonzero third- and fourth-order moments, provides consistent but inefficient estimates of parameters [42]. Therefore, it is important to test for the presence of univariate and multivariate normality in the data. In Monte Carlo simulations, Curran et al. [9] characterize moderate nonnormality as skewness of 2 and kurtosis of 7, and their results show that maximum likelihood estimation is robust even in the presence of mild nonnormality. Univariate skewness in our sample ranges from -1.35 to 1.719 and univariate kurtosis ranges from -1.381 to 2.075, indicating low nonnormality. The estimate of relative multivariate kurtosis is 1.089 for the sample. Therefore, we can safely assume multivariate normality. The results from the ML analysis are shown in Table 4.
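These univariate and multivariate checks are straightforward to reproduce. The sketch below computes per-item skewness and excess kurtosis with SciPy, and Mardia's relative multivariate kurtosis directly from its definition; the input file is again a hypothetical stand-in for the survey data.

```python
import numpy as np
import pandas as pd
from scipy.stats import kurtosis, skew

X = pd.read_csv("survey_items.csv").to_numpy(dtype=float)

# Univariate checks: skewness and excess kurtosis for each item.
print("skewness range:", skew(X, axis=0).min(), "to", skew(X, axis=0).max())
print("kurtosis range:", kurtosis(X, axis=0).min(), "to", kurtosis(X, axis=0).max())

# Mardia's multivariate kurtosis relative to its expected value p(p + 2)
# under multivariate normality; values near 1.0 indicate normality.
n, p = X.shape
Xc = X - X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(Xc, rowvar=False, bias=True))
d2 = np.einsum("ij,jk,ik->i", Xc, S_inv, Xc)  # squared Mahalanobis distances
print("relative multivariate kurtosis:", (d2**2).mean() / (p * (p + 2)))
```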

The estimated model shows strong support for the second-order latent variable of Metrics Adaptation. The goodness-of-fit index for the model is 0.90, indicating that the overall hypothesized factor structure fits the data well. In addition, the ratio of the fit-function chi-square to the degrees of freedom is 1.7, again indicating a good model fit [39]; the acceptable range for this statistic is under 5 [39]. The high comparative fit index (0.95) and low root mean square residual (0.05) also indicate good model fit [18]. Thus, our confirmatory factor model of metrics adaptation receives strong support.

B.2 Structural Model of Institutional Forces on Metrics Adaptation

Having established the factor structure for adaptation, we next estimate the research model in Fig. 1. Since our focus is estimating the institutional effects on adaptation, we use factor scores for the individual work-processes that compose metrics adaptation instead of the individual questionnaire items. Although some information is lost in using factor scores instead of raw data, there are two advantages to doing so. First, factor scores more closely approximate normality, since they are combinations of individual questionnaire items. Second, they increase the degrees of freedom available by reducing the number of estimated parameters. Moreover, based on our confirmatory factor analysis of metrics adaptation, we believe this approach will not detract from the primary objective of the structural model. The factor scores were calculated based on the exploratory factor analysis reported in Table 3.
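A sketch of this step: the factor_analyzer package can fit an exploratory factor analysis and emit regression-based factor scores, which can then serve as the observed indicators in the structural model. The five-factor, varimax-rotated setup and the file name below are assumptions for illustration; the paper's Table 3 defines the actual solution.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("adaptation_items.csv")  # hypothetical raw items

# Exploratory five-factor solution; the rotation choice is an assumption.
fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(items)

# Regression-based factor scores, one column per work-process.
scores = pd.DataFrame(
    fa.transform(items),
    columns=["Regularity", "DataCollection", "Analyses",
             "Communication", "AutomatedTools"],
)
```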

As discussed before, we treat our data as continuous and test for normality in the sample. Univariate skewness in our sample ranges from -0.689 to 0.762, and kurtosis from -1.411 to 0.678, indicating reasonable univariate normality. The coefficient of relative multivariate kurtosis is 1.063, indicating mild nonnormality. We use ML estimation and also report the Satorra-Bentler chi-square [39], which corrects the minimum-fit chi-square for nonnormality. The Satorra-Bentler chi-square is not based on the multivariate normality assumption and, therefore, provides an indirect measure of the level of nonnormality in the data; in cases of mild nonnormality, the Satorra-Bentler chi-square will be very similar in value to the overall goodness-of-fit chi-square. The results of the measurement and structural models are shown in Table 5 and Fig. 2, respectively.

TABLE 6b: Questionnaire (cont.)

The measurement model in Table 5 shows support for the factor structure hypothesized amongst the indicator variables: the paths from the observed questionnaire items to the latent constructs are all positive and significant. The structural model fit is acceptable, with a goodness-of-fit index of 0.88. Other goodness-of-fit indices, such as the comparative fit index (0.93) and the root mean square residual (0.05), are also within acceptable ranges [15]. The ratio of the chi-square to the degrees of freedom is 1.98, well within the acceptable limit of 5 [32]. The low root mean square residual also indicates low discordance between the hypothesized model and the data. Thus, all fit indices indicate that the measurement and structural models are significant and valid. In addition, the Satorra-Bentler chi-square is slightly lower than the normal-theory-based chi-square; the difference between the minimum-fit chi-square (246.64) and the Satorra-Bentler chi-square (228.21) is small, indicating low nonnormality. Thus, the proposed overall model of institutional forces on metrics adaptation is statistically significant.
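For completeness, a sketch of how the structural portion could be specified today in semopy, again with lavaan-style syntax. The construct names and the external/internal split below mirror the hypothesized paths of Fig. 1 only loosely and are not the authors' exact LISREL specification.

```python
import pandas as pd
import semopy

# Structural sketch: institutional forces drive Adaptation, which in turn
# drives metrics Acceptance. Factor scores serve as observed indicators.
MODEL_DESC = """
Adaptation =~ Regularity + DataCollection + Analyses + Communication + AutomatedTools
Adaptation ~ ExternalForces + InternalForces
Acceptance ~ Adaptation
"""

scores = pd.read_csv("factor_scores.csv")  # hypothetical score file
model = semopy.Model(MODEL_DESC)
model.fit(scores)
print(model.inspect())                 # path estimates with test statistics
print(semopy.calc_stats(model).T)      # overall fit indices
```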

    ACKNOWLEDGMENTS

The authors are grateful to Dennis Goldenson and Ritu Agarwal for their support during the course of this research.

REFERENCES

[1] J.C. Anderson and D.W. Gerbing, "Structural Equation Modeling in Practice: A Review and Recommended Two-Step Approach," Psychological Bull., vol. 103, no. 3, pp. 411-423, 1988.
[2] G.H. Anthes and J. Vijayan, "Lessons from India Inc.," Computerworld, vol. 35, no. 14, pp. 40-42, 2001.
[3] M. Berry and R. Jeffery, "An Instrument for Assessing Software Measurement Programs," Empirical Software Eng., vol. 5, pp. 183-200, 2000.
[4] M. Berry and M.F. Vandenbroek, "A Targeted Assessment of the Software Measurement Process," Proc. Seventh Software Metrics Symp., pp. 222-235, 2001.
[5] A. Binstock, "Outside Development Partners," InformationWeek, pp. 133-140, Oct. 1999.
[6] K.A. Bollen and B.H. Barb, "Pearson's R and Coarsely Categorized Measures," Am. Sociological Rev., vol. 46, pp. 232-239, 1981.
[7] L.C. Briand, C.M. Differding, and H.D. Rombach, "Practical Guidelines for Measurement-Based Process Improvement," Software Process: Improvement and Practice, vol. 2, pp. 253-280, 1996.
[8] R.B. Cooper and R.W. Zmud, "Information Technology Implementation Research: A Technological Diffusion Approach," Management Science, vol. 36, no. 2, pp. 123-139, 1990.
[9] P.J. Curran, S.G. West, and J.F. Finch, "The Robustness of Test Statistics to Nonnormality and Specification Error in Confirmatory Factor Analysis," Psychological Methods, vol. 1, no. 1, pp. 16-29, 1996.
[10] M. Daskalantonakis, "A Practical View of Software Measurement and Implementation Experiences within Motorola," IEEE Trans. Software Eng., vol. 18, no. 11, pp. 998-1010, Nov. 1992.
[11] P.J. DiMaggio and W.W. Powell, "The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields," Am. Sociological Rev., vol. 48, pp. 147-160, 1983.
[12] S. Fenick, "Implementing Management Metrics: An Army Program," IEEE Software, Mar. 1990.
[13] N.E. Fenton and S.L. Pfleeger, Software Metrics: A Rigorous and Practical Approach. London, U.K.: Int'l Thompson Publishers, 1998.
[14] A. Gopal, T. Mukhopadhyay, M.S. Krishnan, and D.R. Goldenson, "Measurement Programs in Software Development: Determinants of Success," IEEE Trans. Software Eng., vol. 28, no. 9, pp. 863-875, 2002.
[15] T. Hall and N. Fenton, "Implementing Effective Software Metrics Programs," IEEE Software, pp. 55-65, 1997.
[16] T. Hall, N. Baddoo, and D. Wilson, "Measurement in Software Process Improvement Programmes: An Empirical Study," Proc. Int'l Workshop Software Measurement (IWSM 2000), pp. 73-82, 2000.
[17] M. Hayes, "Precious Connection," InformationWeek, pp. 34-50, Oct. 2003.
[18] L. Hu and P.M. Bentler, "Evaluating Model Fit," Structural Equation Modeling: Concepts, Issues, and Applications, 1995.
[19] W.S. Humphrey, A Discipline for Software Engineering. Addison-Wesley, 1995.
[20] R. Jeffery and M. Berry, "A Framework for Evaluation and Prediction of Metrics Programs Success," Proc. First Int'l Software Metrics Symp., 1993.
[21] D.R. Johnson and J.C. Creech, "Ordinal Measures in Multiple Indicator Models: A Simulation Study of Categorization Error," Am. Sociological Rev., vol. 48, pp. 398-407, 1983.
[22] B. Kitchenham, S.L. Pfleeger, and N. Fenton, "Toward a Framework for Software Measurement Validation," IEEE Trans. Software Eng., vol. 21, no. 12, pp. 929-944, Dec. 1995.
[23] B.A. Kitchenham, R.T. Hughes, and S. Linkman, "Modeling Software Measurement Data," IEEE Trans. Software Eng., vol. 27, no. 9, pp. 788-804, Sept. 2001.
[24] R.E. Kraut and L.A. Streeter, "Coordination in Large Scale Software Development," Comm. ACM, vol. 38, no. 7, pp. 69-81, 1995.
[25] M.S. Krishnan, C.H. Kriebel, S. Kekre, and T. Mukhopadhyay, "An Empirical Analysis of Productivity and Quality of Software Products," Management Science, vol. 46, no. 6, pp. 745-759, 2000.
[26] J.W. Meyer and B. Rowan, "Institutionalized Organizations: Formal Structure as Myth and Ceremony," Am. J. Sociology, vol. 83, no. 2, pp. 340-363, 1977.
[27] R.E. Nusenoff and D.C. Bunde, "A Guidebook and a Spreadsheet Tool for a Corporate Metrics Program," J. Systems and Software, vol. 23, pp. 245-255, 1993.
[28] R.J. Offen and R. Jeffery, "Establishing Software Measurement Programs," IEEE Software, pp. 45-53, Mar./Apr. 1997.
[29] F. O'Hara, "European Experiences with Software Process Improvement," Proc. Int'l Conf. Software Eng. (ICSE 2000), 2000.
[30] M.C. Paulk, B. Curtis, M.B. Chrissis, and C.V. Weber, "Capability Maturity Model for Software, Version 1.1," Technical Report SEI-93-TR-24, Software Eng. Inst., Pittsburgh, Penn., 1993.
[31] M.C. Paulk, "Analyzing the Conceptual Relationship between ISO/IEC 15504 and the Capability Maturity Model for Software," Proc. Int'l Conf. Software Quality, 1999.
[32] P.M. Podsakoff and D. Organ, "Self-Reports in Organizational Research: Problems and Prospects," J. Management, vol. 12, pp. 531-543, 1986.
[33] O. Port, "Will Bugs Eat Up the US Lead in Software?" BusinessWeek, Dec. 1999.
[34] S.L. Pfleeger, "Lessons Learned in Building a Corporate Metrics Program," IEEE Software, vol. 10, no. 3, pp. 67-74, 1993.
[35] A. Rainer and T. Hall, "A Quantitative and Qualitative Analysis of Factors Affecting Software Processes," J. Systems and Software, vol. 66, pp. 7-21, 2003.
[36] L.H. Rosenberg and L. Hyatt, "Developing a Successful Metrics Program," Proc. 1996 Software Technology Conf., 1996.
[37] I. Rozman, R.V. Horvat, and J. Gyorkos, "United View on ISO 9001 Model and SEI CMM," Proc. IEEE Int'l Eng. Management Conf., pp. 56-63, 1994.
[38] V.L. Saga and R.W. Zmud, "The Nature and Determinants of IT Acceptance, Routinization, and Infusion," Diffusion, Transfer and Implementation of Information Technology, 1994.
[39] R.E. Schumacker and R.G. Lomax, A Beginner's Guide to Structural Equation Modeling. New Jersey: Lawrence Erlbaum Assoc., 1996.
[40] C. Seddio, "Integrating Test Metrics within a Software Engineering Measurement Program at Eastman Kodak Company: A Follow-Up Case Study," J. Systems and Software, vol. 20, 1993.
[41] P. Thibodeau and L. Rosencrance, "Users Losing Billions Due to Bugs," Computerworld, vol. 36, no. 27, pp. 1-2, 2002.
[42] S.G. West, J.F. Finch, and P.J. Curran, "Structural Equation Models with Non-Normal Variables: Problems and Remedies," Structural Equation Modeling: Concepts, Issues, and Applications, 1995.
[43] J.D. Westphal, R. Gulati, and S.M. Shortell, "Customization or Conformity? An Institutional and Network Perspective on the Content and Consequences of TQM Adoption," Administrative Science Quarterly, vol. 42, pp. 366-394, 1997.

Anandasivam Gopal received the PhD degree in information systems from Carnegie Mellon University in 2000. He is an assistant professor of information systems at the University of Maryland, College Park. His research interests include software engineering economics, offshore software development, and software metrics programs. Specifically, his interests focus on contracts in software development and the success factors of metrics programs. His research has been published in Communications of the ACM, IEEE Transactions on Software Engineering, and Management Science.

Tridas Mukhopadhyay received the PhD degree in computer and information systems from the University of Michigan in 1987. He is the Deloitte Consulting Professor of e-Business at Carnegie Mellon University. He is also the director of the Institute for Electronic Commerce and the MSEC (Master of Science in Electronic Commerce) program at CMU. His research interests include the strategic use of IT, business-to-business commerce, Internet use at home, the business value of information technology, and software development productivity. In addition, Professor Mukhopadhyay has worked with several organizations, such as General Motors, Ford, Chrysler, PPG, the Port of Pittsburgh, Texas Instruments, IBM, Diamond Technology Partners, and LTV Steel, and with governmental agencies including the United States Post Office and the Pennsylvania Turnpike. He has published more than 60 papers. His research appears in Information Systems Research, Communications of the ACM, Journal of Manufacturing and Operations Management, MIS Quarterly, Omega, IEEE Transactions on Software Engineering, Journal of Operations Management, Accounting Review, Management Science, Journal of Management Information Systems, Decision Support Systems, Journal of Experimental and Theoretical Artificial Intelligence, Journal of Organizational Computing, International Journal of Electronic Commerce, American Psychologist, and other publications.

M.S. Krishnan received the PhD degree in information systems from the Graduate School of Industrial Administration, Carnegie Mellon University, in 1996. He is the Mary and Mike Hallman e-Business Fellow, Area Chairman, and Professor of Business Information Technology at the University of Michigan Business School. Dr. Krishnan is also a codirector of the Center for Global Resource Leverage: India at the Michigan Business School. He was awarded the ICIS Best Dissertation Prize for his doctoral thesis on "Cost and Quality Considerations in Software Product Management." His research interests include corporate IT strategy, the business value of IT investments, the management of distributed business processes, software engineering economics, and metrics and measures for quality, productivity, and customer satisfaction for products in the software and information technology industries. In January 2000, the American Society for Quality (ASQ) selected him as one of the 21 voices of quality for the 21st century. His research articles have appeared in several journals, including Management Science, Information Systems Research, Information Technology and People, Strategic Management Journal, IEEE Transactions on Software Engineering, IEEE Software, Decision Support Systems, Harvard Business Review, InformationWeek, Sloan Management Review, Optimize, and Communications of the ACM. His article "The Role of Team Factors in Software Cost and Quality" was awarded the 1999 ANBAR Electronic Citation of Excellence. He serves on the editorial boards of reputed academic journals, including Management Science and Information Systems Research. Dr. Krishnan has consulted with Ford Motor Company, NCR, HIP, IBM, BellSouth, the TVS group, and Ramco Systems.
