



Project Management Journal

3 From the Editor | Parviz F. Rad, PhD, PMP

4 Research Report | Harry Stefanou, PhD

5 Project Management in the Information Systems and Information Technologies Industries | Francis Hartman and Rafi A. Ashrafi

16 Scheduling Programs With Repetitive Projects Using Composite Learning Curve Approximations | Jean-Pierre Amor

30 A Measure of Software Development Risk | James J. Jiang, Gary Klein, and T. Selwyn Ellis

42 A Hybrid Intelligent System to Facilitate Information System Project Management Activities | Hamid R. Nemati, Dewey W. Todd, and Paul D. Brown

53 The Fall of the Firefly: An Assessment of a Failed Project Strategy | Bud Baker

58 The Impact of the Project Manager on Project Management Planning Processes | Shlomo Globerson and Ofer Zwikael

65 Cover to Cover—Book Reviews | Kenneth H. Rose, PMP

69 Guidelines for PMJ Book Reviews

70 Calendar

71 Notes for Authors

72 Advertisers Index

The Professional Journal of the Project Management Institute

Volume 33, Number 3 | September 2002


© 2002 Project Management Institute, Inc. All rights reserved.

“PMI” is a trade and service mark registered in the United States and other nations; “PMP” and the PMP logo are registered certification marks in the United States and other nations; “PMBOK” is a trademark registered in the United States and other nations; and the PMI logo, “PM Network”, “Project Management Journal”, “PMI Today”, and “Building professionalism in project management.” are trademarks of the Project Management Institute, Inc.

EDITORIAL REVIEW BOARD
Frank T. Anbari, PMP, The George Washington University; Bud Baker, Wright State University; Rick Bilbro, The Innova Group, Inc.; David Christensen, Cedar City, UT; David Cleland, University of Pittsburgh; Helen S. Cooke, Cooke & Cooke; Dan H. Cooprider, Creative People Management; Jeffrey Covin, Indiana University; Steven V. DelGrosso, IBM; Deborah Fisher, University of New Mexico; Vipul Gupta, Saint Joseph’s University; Soren Hansen, Pennoni Associates of Ohio; Kenneth O. Hartley, PMP, Parsons Brinckerhoff; Francis T. Hartman, The University of Calgary; Gary C. Humphreys, Humphreys & Associates; Lewis Ireland, PMP, Project Technologies Corp.; Peter Kapsales, Bellcore; Lee R. Lambert, PMP, Lambert Consulting Group, Inc.; Alexander Laufer, Technion-Israel Inst. of Technology; Bill Leban, Keller Graduate School of Management; Robert Loo, The University of Lethbridge; Kenneth G. Merkel, PMP, University of Nebraska; James J. O’Brien, PMP, O’Brien-Kreitzberg & Assoc.; Michael D. Okrent, Agilent Technologies Inc.; John B. Phillips, Engineering Development Institute; Peggy C. Poindexter, Great Falls, VA; Tzvi Raz, Tel Aviv University; Paul B. Robinson, The Eastern Management Group; Arnold M. Ruskin, PMP, Claremont Consulting Group; Avraham Shtub, Tel Aviv University; Richard L. Sinatra, PMP, Potomac, MD; Larry A. Smith, Applied Management Associates; Dwight Smith-Daniels, Arizona State University; James Snyder, Springfield, PA; Paul Solomon, PMP, B-2 Earned Value Management Systems; Robert G. Staples, Monroe, VA; Walter Taylor, Delta Airlines; Charles J. Teplitz, University of San Diego; Veljko M. Torbica, PMP, Florida International University; Walter A. Wawruck, Vancouver, BC, Canada; Itzhak Wirth, Saint John’s University; Janet K. Yates, San Jose State University

PUBLICATIONS ADVISORY BOARD
Linda C. Cherry, PMI Publisher; Greg Hutchins, Quality Plus Engineering; Sandy Jenkins, Sanjenco; William V. Leban Jr., PMP, Keller Graduate School of Management; Charles J. Teplitz, PMP, University of San Diego School of Business Administration

PUBLICATION & MEMBERSHIP
The Project Management Journal (USPS 8756-9728/02) is published quarterly (March, June, September, December) by the Project Management Institute. PMJ is printed in the USA by Cadmus Magazine, Richmond, VA. Periodical postage paid at Newtown Square, PA 19073 USA and at additional mailing offices. Canadian agreement #40030957. Postmaster: Send address changes to Project Management Institute, Headquarters, Four Campus Boulevard, Newtown Square, Pennsylvania 19073-3299 USA. Phone +610-356-4600, fax +610-356-4647.

The mission of the PMJ is to provide information advancing the state of the art of the knowledge of project management. PMJ is devoted to both theory and practice in the field of project management. Authors are encouraged to submit original manuscripts that are derived from research-oriented studies as well as practitioner case studies. (See Notes for Authors in the back of this journal.) All articles in PMJ are the views of the authors and are not necessarily those of PMI. Subscription rate for members is $14 per year and is included in the annual dues.

Claims for undelivered copies must be made no later than two months following the month of publication. The publisher will supply missing copies when losses have been sustained in transit and when the reserve stock will permit.

PMI is a nonprofit professional organization whose mission is to serve the professional interests of its collective membership by: advancing the state of the art in the leadership and practice of managing projects and programs; fostering professionalism in the management of projects; and advocating acceptance of project management as a profession and discipline. Membership in PMI is open to all at annual dues of $119 U.S. For information on PMI programs and membership, or to report change of address or problems with your subscription, contact:

Project Management Institute, Headquarters, Four Campus Boulevard, Newtown Square, Pennsylvania 19073-3299 USA; tel: +610-356-4600; fax: +610-356-4647; e-mail: [email protected]

EDITORIAL & ADVERTISING SERVICES
Address manuscripts and other editorial submissions, advertising and mailing list rental inquiries, and requests for reprints/bulk copies/reprint permission to:

Project Management Institute, Publishing Department, Four Campus Boulevard, Newtown Square, Pennsylvania 19073-3299 USA. Phone +610-356-4600, fax +610-356-4647; e-mail: [email protected]

Unless otherwise specified, all letters and articles sent to the PMI Publishing Department are assumed for publication and become the copyright property of PMI if published. PMI is not responsible for loss, damage, or injury to unsolicited manuscripts or other material.

READER SERVICES
Photocopies. PMJ has been registered with the Copyright Clearance Center, Inc. Consent is given for copying of articles for personal or internal use, or for the personal use of specific clients. This consent is given on the condition that the copier pays through the Center the per copy fee stated in the code on the first page of each article for copying beyond that permitted by Sections 107 or 108 of the U.S. Copyright Law. The appropriate fee should be forwarded with a copy of the first page of the article to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923 USA (tel: +508-750-8400; fax: +508-750-4744). The fee indicated on the first page of an article in this issue will apply retroactively to all articles published in the journal, regardless of the year of publication. This consent does not extend to other kinds of copying, such as for general distribution, resale, advertising and promotion purposes, or for creating new collective works. Special written permission must be obtained from the publisher for such copying.

Permissions. Requests to reprint articles published in PMJ must be made in writing to the publisher.

Reprints. Individual articles may be purchased through the Knowledge & Wisdom Department (www.pmi.org/k&wc) at a cost of $10.00 per article for members and $15.00 per article for nonmembers.

Glossy Reprints. Requests for glossy reprints of individual articles in quantities of 100 or more can be sent to [email protected].

Bulk Copies of Current Issues. Copies of the current PMJ can be obtained in quantities of 25 or more. Orders must be placed 40 days prior to date of issue. The cost is $5.50 per copy plus shipping.

Back Issues. Back issues are available on request at $20 each. Call +610-356-4600 for availability.

Project Management Journal

PROJECT MANAGEMENT JOURNAL EDITOR

Parviz F. Rad, PhD, PMP, Stevens Institute of Technology

Book Review Editor: Kenneth H. Rose, PMP

PMI PUBLISHING STAFF

Publisher: Linda Cherry; [email protected]

Editor in Chief: Dan Goldfischer; [email protected]

Publishing Support Specialist: Natasha Pollard; [email protected]

Book Development Editor: Richard Schwartz; [email protected]

Permissions Coordinator: Insuk Choe; [email protected]

Bookstore Administrator: Regina Madonna; [email protected]

Book Publishing Planner: Danielle Moore; [email protected]

Administrative Assistant: Dotti Bobst; [email protected]

General e-mail: [email protected]

PRODUCTION SERVICES PROVIDED BY IMAGINATION PUBLISHING, CHICAGO, IL, USA

Senior Editor: Ross Foti; [email protected]

Assistant Managing Editor: Lauren Strandquist; [email protected]

Art Director: Doug Kelly; [email protected]

Associate Art Director: Tonya Weiland; [email protected]

Associate Art Director: Theresa Rogulic; [email protected]

Publications Services Manager: Angela Kramer; [email protected]

From the Editor
Parviz F. Rad, PhD, PMP

Typically, internal projects either develop new tools and processes or expand and enhance the existing objectives of the organization. Enlightened project-oriented organizations have a formalized mechanism for managing the portfolio of these projects. If the organization installs any variation of a project management office (PMO), the task of managing the project portfolio falls squarely within the jurisdiction of the PMO. Further, if the PMO is a mature and sophisticated organizational entity, the tools and processes for portfolio management are highly formalized and consistently effective.

Regardless of the nature of the organizational unit in charge of the portfolio management and the formality by which this task is conducted, the task includes two sets of evaluations: the initial evaluation to determine which pending project should receive implementation funding, and the midstream evaluation/audit to decide whether the project funding should continue. If formalized procedures are used to authorize project initiation and if these procedures are sufficiently comprehensive, the same procedures and/or models can be used for project selection and as part of the midstream audit.

The initial selection process and the midstream audit process include two distinct components. One part determines whether the project deliverable is in line with the current organizational vision and, for that matter, whether the project charter is in line with organizational profitability and competitiveness strategies. The other component is more focused; it reflects the willingness of the organization to sponsor this project. The sponsorship will be demonstrated by approval of the estimated cost and duration of the project during the selection phase.

The willingness to sponsor the project is repeatedly affirmed during any subsequent updates to the values of cost and duration. The latter usually is a sobering exercise because the estimate of the cost and duration of the vast majority of projects tends to increase as the details of the deliverables are fleshed out during the project life cycle.

The metrics that determine the effectiveness of the project team in delivering the project objectives are reasonably well identified and extensively quantified, in most cases. These metrics deal with the current/updated estimate of cost, schedule, quality, and scope in comparison with the planned and/or expected values. Equally important to project success but less quantified are those metrics that assess the people-related issues such as team morale, conflict management, client satisfaction, and stakeholder relations.

The models that determine the initial attractiveness of a project and the midstream appropriateness of its deliverable probably are the most qualitative, although the indices that comprise these models are somewhat numerous. This list includes indices that deal with enterprise objectives, operational implications, financial impacts, and sales and marketing interests, as well as those that reflect the organizational willingness to dedicate the full range of resources to the project. Clearly, this facet of portfolio management is more an art than a science.

Some skillful project managers formulate portfolio management models that are largely intuitive and informal. Thus, the deliverable relevance indices become part of an implicit and unspoken model that determines the attractiveness of the project during the selection phase and the viability of the project during a midstream audit. As such, only organizations that are fortunate enough to employ seasoned and highly intuitive project management professionals would do well in this area.

Thus, there is a need for a set of comparison indices that formalize the project evaluation process and make explicit what is implicit in these seemingly subjective evaluations. A formalized structure for portfolio management will capitalize on the skills of the more experienced and innovative project managers for the good of the organization. If these intuitive indices become formalized, then managerial intuition can become the logical basis and the structural foundation for an explicit and standardized evaluation system to be used by all PMO personnel during any phase of the portfolio management process.
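To make the idea of a formalized index model concrete, the following is a minimal sketch of how such a standardized evaluation might be expressed. It is illustrative only: the index names, weights, rating scale, and funding threshold are assumptions chosen for the example, not values prescribed here, and a real PMO would calibrate them to its own strategy.

```python
# Illustrative sketch only: index names, weights, and threshold are
# hypothetical, not values prescribed by this editorial.
from dataclasses import dataclass


@dataclass
class IndexScore:
    name: str      # e.g., "enterprise objectives", "financial impact"
    weight: float  # relative importance of this index
    score: float   # 0-5 rating supplied by the evaluator


def composite_score(indices):
    """Weighted average of the relevance indices (0-5 scale)."""
    total_weight = sum(i.weight for i in indices)
    return sum(i.weight * i.score for i in indices) / total_weight


def evaluate(indices, threshold=3.0):
    """True if the project clears the (assumed) funding threshold.

    The same call can serve the initial selection and a midstream audit;
    only the scores are refreshed between evaluations.
    """
    return composite_score(indices) >= threshold


if __name__ == "__main__":
    project = [
        IndexScore("enterprise objectives", 0.30, 4),
        IndexScore("operational implications", 0.20, 3),
        IndexScore("financial impact", 0.25, 4),
        IndexScore("sales and marketing interest", 0.15, 2),
        IndexScore("willingness to commit resources", 0.10, 5),
    ]
    print(round(composite_score(project), 2), evaluate(project))
```

Recomputing the same composite score at the selection gate and again at each midstream audit is what would make such an index model a consistent decision aid rather than an ad hoc judgment.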

It would be a major advancement in the best practices of portfolio management if the analysis of the pertinent data provided guidance as to what extent project and team performance and deliverable relevance should be quantified with scales and plateaus. It would be equally intriguing to refine historical data, demonstrating to what extent the survival of a project should depend on the nonquantified judgment and professional intuition of project management professionals.

As always, on behalf of the editorial board, I invite readers to reflect on their professional experiences and empirical observations and to share these observations with the project management community by submitting articles dealing with any facet of project portfolio management.




Research Report

Knowledge Generation and Sharing—Shaping the Future

by Harry Stefanou, PhD

Growing and sharing the body of knowledge for the project management profession is a critical desire of the Project Management Institute (PMI) and a key focus of the PMI® Project Management Research Program. The PMI® Research Program, with the advice and counsel of its Member Advisory Group, has chosen to take a broad approach toward this outcome. It is committed to creating and enhancing opportunities to grow the body of knowledge and to communicate the learning. In this way, the program hopes to help you shape the future of the profession.

Clearly, the recent PMI® Research Conference 2002, held in Seattle, WA, USA, in July, is one example. Almost 300 academics and practitioners gathered to share their interest in project management research, to learn from each other, and to set the direction for future learning. Twenty-three countries were represented, and the ratio of practitioners to academics was 2 to 1. Through invited speakers, contributed presentations, panel sessions, and networking events, new knowledge has been generated and exchanged. One value of new knowledge is its implementation or practice. Time will tell, but given the attendance at the Research Conference, we can anticipate that this practice will occur. As was brought out at the conference, knowledge sharing is a two-way interaction. We expect, therefore, that the practitioners have laid the groundwork with the researchers for the next wave of research.

Other examples of our commitment to share knowledge are PMI’s recent publication of The Frontiers of Project Management Research and, of course, the long history of the Project Management Journal, which serves as a tribute to the effort of project management researchers over the years. Yet another example is the Research Topics Track at PMI 2002 in San Antonio, TX, USA, this October, where, again, the results of research will reach the minds of practitioners.

The number of opportunities and interest in project management research continues to grow. In recognition of that fact, PMI is proud to announce the creation of the PMI Research Achievement Award, which will be presented for the first time in 2003. The Institute’s hope is that recognition will encourage the quest for knowledge and the achievement of excellence in the profession. Watch for a call for nominations in the December issue of PM Network and on our Web site, www.pmi.org.

PMI also encourages research through its sponsorship of research projects. Two such projects recently have been completed and are expected to be published as PMI books later this year: Quantifying the Value of Project Management by William Ibbs and Justin Reginato and Selling Project Management to Senior Executives: Framing the Moves that Matter by Janice Thomas, Connie Delisle, and Kam Jugdev. Through this funding from PMI and separately from the PMI Educational Foundation, four additional external grants are under way this year (see PMJ’s June 2002 Research Report). PMI also plans to continue to advance the generation of new knowledge through funding additional research in 2003.


Project Management in the Information Systems and Information Technologies Industries

Francis Hartman, University of Calgary, 2500 University Dr. NW, Calgary, Alberta T2N 1N4, Canada

Rafi A. Ashrafi, University of Calgary, 2500 University Dr. NW, Calgary, Alberta T2N 1N4, Canada

Information systems (IS) and information technologies (IT) are the fastest growing industries in developed countries. Huge amounts of money continue to be invested in these industries (Abdel-Hamid & Madnick, 1990). Because of time-to-market pressure, there is a corresponding pressure to increase productivity. To maintain a competitive edge in today’s fast-changing world, an organization’s success depends on effectively developing and adopting IS. The literature has voiced concern about problems related to IT/IS development and implementation.

According to Zells (1994) and other studies, approximately 85% of software projects undertaken in Europe and North America are at level one of the Software Engineering Institute’s capability maturity model (CMM). Level one is the lowest level of the CMM. The challenges at level one are to have project planning, project management, configuration management, and software quality assurance in place—and have them working effectively. To improve project delivery performance, a number of organizations are adopting project management approaches and setting up project management offices (Barnes, 1991; Butterfield & Edwards, 1994; King, 1995; Munns & Bjeirmi, 1996; Raz, 1993; Redmond, 1991).

Current literature on software projects shows that most of the software problems are of a management, organizational, or behavioral nature, not technical (Johnston, 1995; Martin, 1994; Whitten, 1995).

A survey of high-tech firms showed that if project management improved, time and cost could be reduced by more than 25% and profits would increase by more than 5% (Fisher, 1991). This has since been validated by use of Strategically Managed Aligned Regenerative Transitional (SMART) project management, based on internal benchmarking by the companies involved in the field trials.

Objectives of the Study
In this paper, the authors report findings on current project management practices in the IT/IS industries. The purpose of the study was to find out what practices are important to IT/IS industries in successfully accomplishing their projects. Do they use proven project management practices? Whatever the IT/IS industries regard as important for the success of their projects, do they measure it? What are the project drivers? Are these three important elements aligned with each other? The authors investigated these questions not only for various phases of a project, but also from the perspective of three major stakeholders. These stakeholders include an owner or sponsor, a major contractor or supplier, and a consultant for the same project.

Abstract
For many enterprises, sustainable success is closely linked to information systems (IS) and information technologies (IT). Despite significant efforts to improve software project success, many still fail. Current literature indicates that most software project problems are related to management, organizational, human, and cultural issues—not technical problems. This paper presents results of a survey of 36 software owners/sponsors, contractors/suppliers, and consultants on 12 projects. The empirical results address questions related to success, performance metrics, and project business drivers. A lack of alignment on these critical issues emerges consistently by phase as well as across the entire project. The results of this study also are compared with others that span seven additional industry sectors. As a result, the authors have developed an approach that links project critical success factors (CSFs) to corporate strategy, and project metrics to the CSFs. An important finding of this study is the critical need to identify and manage realistic expectations of the stakeholders to achieve perceived project success.

Keywords: information systems; information technology; managing stakeholder expectations; critical success factors; software project management

©2002 by the Project Management Institute

2002, Vol. 33, No. 3, 5–15

8756–9728/02/$10.00 per article + $0.50 per page


In the next section, the authors review the current literature, summarizing major problems of IT/IS projects. In the fourth section, the authors discuss their research methodology. This is followed by a discussion of the results of the study and a summary of the findings. Finally, the authors propose an approach for managing projects based on the SMART framework and implemented on a number of software and other projects with markedly improved results, followed by conclusions. The authors hope that this study will help project managers in understanding the state of the art of project management in the software industry and how it might be improved.

Literature Survey
The horror stories about delay, cost overrun, and abandonment of software projects are widely reported in the literature (Bailey, 1996; Gibbs, 1994; Lucas, 1995; Martin, 1994; Ward, 1994). In other industries, causes of project failures are investigated and reports written, but in the computer industry their causes are covered up or ignored. As a result, the IT/IS industry keeps making the same mistakes over and over again (Johnston, 1995).

Experts differ on whether software project management is similar to or different from project management in other industries. The authors believe that the principles are the same across industries, but the terminology and some applications are specific to each industry and sometimes to each company or physical location. Many believe that software management is very different (Otto, Dhillon, & Watkins, 1993; Raybould, 1996; Roetzheim, 1993; Samuels, 1996). In Duncan’s (1991) view, however, software projects are not different from other projects. In the authors’ opinion, there are both differences and commonalities in all types of projects, software projects included. Any two projects from one industry sector can be unique, and we can benefit from other industries’ experiences.

In summary, the most commonly reported causes of software project failure are as follows (based on a content analysis of the cited literature):
■ Misunderstood requirements (business, technical, and social) (King, 1995; Lane, Palko, & Cronan, 1994; Lavence, 1996);
■ Optimistic schedules and budgets (Martin, 1994);
■ Inadequate risk assessment and management (Johnston, 1995);
■ Inconsistent standards and lack of training in project management (Jones, 1994; O’Conner & Reinsborough, 1992; Phan, Vogel, & Nunamaker, 1995);
■ Management of resources (people more than hardware and technology) (Johnston, 1995; Martin, 1994; Ward, 1994);
■ Unclear charter for a project (Lavence, 1996);
■ Lack of communication (Demery, 1995; Gioia, 1996; Hartman, 2000; Walsh & Kanter, 1988).

The authors of this paper believe that these are symptoms ofthe disease and not the root causes of the disease.

Main Reasons for Failures of IT/IS Projects
Before looking into the main causes of project failures in the IT/IS industry, we must define critical success factors (CSFs) and review the importance of metrics. The CSFs are the elements that make a project a success. These include trust, effective communication, top management support, etc. Key result areas (KRAs) are specific results that are needed to deliver a successful project. CSF methodology has been highly successful in identifying KRAs crucial for the success of a project (Atkinson, 1999; Baccarini, 1999; Belassi & Tukel, 1996; Byers & Blume, 1994; Clarke, 1999; Cooke-Davies, 2002; Fisher & L’Abbe, 1994; Forsberg & Mooz, 1996; Fowler & Walsh, 1999; Johnston, 1995; Levene, Bentley, & Jarvis, 1995; Lim & Mohamed, 1999; Martin, 1982; Pinto & Kharbanda, 1995; Raz, 1993; Shank, Boynton, & Zmud, 1985; Tan, 1996; Wateridge, 1999; Whitten, 1995; Zahedi, 1987; Zells, 1991).

With changing business conditions, half-century-old project performance metrics are no longer effective for the monitoring and control of today’s projects. Proper measurement tools and metrics are necessary for effective control of projects (Hartman & Jearges, 1996; Kiernan, 1995; Simmons, 1992; Thamhain, 1994).

Based on both consulting and earlier research, the authors found that the main reasons for most of these problems are:
■ Major stakeholders generally do not have a clear idea of project success or have differing views of what constitutes success. If a clear vision exists, it is not effectively communicated or the project team does not understand it. This leads to scope creep, inappropriate measurement, churn in developments, specification changes, delays, and other issues;
■ Generally there is a problem in identifying KRAs and CSFs and linking them to the stakeholders’ business strategy. This leads to lack of support by senior management;
■ The project team and major stakeholders are not very clear on what the performance and control metrics should be. Normally the focus is on time, cost, performance, and quality. But this focus is not consistent between stakeholders or over time. Some have recognized the importance of customer and end-user satisfaction;
■ Project control and performance metrics are not linked to KRAs and CSFs. This means we measure the wrong things and distract the team from what is important to success. It looks like inadequate or ineffective project control;
■ Generally, there is very little or sometimes no alignment among major stakeholders on success criteria, KRAs, CSFs, performance metrics, project drivers, and on the dynamics of change for these elements over the project life cycle. This leads to inappropriate decision-making and inconsistency in management style and focus.

Current literature also supports these views, albeit piecemeal in many cases, as the focus of many papers is on specific aspects. A number of researchers have commented on the lack of project success criteria and on a lack of proper project metrics (Adams, Sarkis, & Liles, 1995; Demery, 1995; Ingram, 1994; Jiang, Klein, & Balloun, 1996; Johnston, 1995; Peters, 1996; Pinto & Slevin, 1988; Raybould, 1996; Stevens, 1991; Turner, McLaughlin, Thomas, & Hastings, 1994; Wateridge, 1995). Hartman and Ashrafi (1996) reported an overview on CSFs, project drivers, and metrics of various industries. Some of the results of the current study were reported in Hartman and Ashrafi (1998).

As a first step to collecting empirical evidence to test the hypotheses, the authors decided to collect data on the current state of affairs for these aspects of project management. This included, but was not limited to:
■ Were the criteria for success clearly defined at the beginning of the project? Were KRAs and CSFs identified?
■ Was there any alignment between major stakeholders on these CSFs?
■ What project metrics were used for monitoring project performance during various phases of the project?
■ Was there an alignment of major stakeholders on what these metrics should be?
■ Were these metrics linked to KRAs and CSFs?
■ Were the project priorities set at the beginning of the project? Did the priorities change during various phases of the project life cycle?
■ Were the KRAs, CSFs, metrics, and project priorities consistent with each other?
■ Were the CSFs, metrics, and project priorities changed during various phases of the project?
■ Was there any alignment between major stakeholders on the dynamics of such change across the various phases of the project?

The first of these aspects is to identify what KRAs would be crucial to the successful accomplishment of the project. This allows the project team to keep a focus on them and not get led astray by the everyday fire fighting on project management problems. The second aspect is to link these KRAs and CSFs to corporate strategy and to get buy-in of all the major stakeholders. This linkage validates the project and helps senior management see its relevance and, thus, provide appropriate support to the project. The third aspect is to monitor, control, and measure those elements regarded as critical for project success. In other words, once we know what is important for success, project elements that contribute to this success are the ones we should be measuring to monitor performance during implementation. The fourth aspect is to identify project business drivers. This helps make project priorities very clear to everyone. The fifth aspect is to align all major stakeholders and the project team on KRAs, CSFs, project drivers, and metrics. Finally, it is important to have an understanding of the dynamics of these elements over various phases of the project.

The authors strongly believe that if project success criteria are defined at the beginning of a project, KRAs are identified and related to corporate strategy through a clear project mission, metrics are linked to these KRAs, project priorities are made clear, and buy-in is obtained by all major project stakeholders on all these aspects, most of the problems reported in the literature could be avoided. As a result, efficiency and success of projects could be significantly improved.

Research Methodology
The authors developed a survey instrument to collect data on all the stated aspects of project management. The survey was divided into five sections. The first section collected project-related and demographic information such as industry sector, experience of project manager, project value, duration, location, completion date, purpose of the project, role of the respondent, etc. The second section provided a list of 33 items identified by the authors as potential CSFs. These CSFs were synthesized from the extensive literature on this subject. The respondents were asked to rate these factors in terms of their importance on a scale of 0 to 5 on each of the four project phases (5 = very important; 1 = not important; 0 = not applicable). These four phases were definition, planning, execution, and termination.

The third section of the survey dealt with project metrics. A list of 20 different project metrics was provided to the respondents, and they were asked to rank the importance of these metrics on the scale of 0 to 5 over the four phases of a project. These metrics were drawn from standard project management texts and were guided by the Project Management Institute’s A Guide to the Project Management Body of Knowledge (PMBOK® Guide) (PMI Standards Committee, 2000). In the fourth section, a list of six project priorities was given and the respondents were asked to rank the importance of these project drivers at each of the four phases. Last, several open-ended questions were asked. Was this project successful? If so, on what basis? Other relevant information the respondent wanted to add was recorded here.

Data was collected on 12 projects in Canada through personal interviews of 36 project owners/sponsors, contractors/suppliers, and consultants—three people per project. This was part of a much larger study spanning eight industry sectors and more than 100 projects. A brief summary of projects is included in Appendix 1.

First, an owner/sponsor of a suitable project was contacted and interviewed. With permission of the owner/sponsor, a major contractor or a supplier and a consultant to the same project were identified and interviewed. The respondents were asked to reply in the context of actual project management practices and not in terms of company policy or their personal opinions or preferences. The sample used in the study was small and based in Calgary, Alberta, Canada. However, based on correlation with other findings and observations from the literature, the authors believe these results have broad application.

Results of Survey Analysis
One of the main goals of this study was to identify KRAs and CSFs and to find out if project metrics were linked to these KRAs and CSFs. The authors also wanted to establish project priorities during various phases of the project life cycle. In addition to these, the authors wanted to answer several questions including:
■ Is there a change in the CSFs, metrics, and priorities over the life of the project?
■ How consistent are the three major stakeholders (owner, contractor, and consultant) in their perceptions of CSFs, metrics, and project priorities?
■ Are the perceived CSFs consistent with the metrics used and the project priorities identified by these stakeholders?

An average of all scores of the responses in the appropriate survey groups was calculated. The most important characteristics then were defined as those that had the highest average score:
■ CSFs by Phases. Figure 1 shows the 10 most important CSFs over the four phases of projects;
■ CSFs by Stakeholders. The 10 most important CSFs by stakeholder group are shown in Figure 2;
■ Project Metrics by Phases. Figure 3 shows the 10 most important project metrics for four phases of the projects investigated;
■ Project Metrics by Stakeholders. The most important project metrics according to each of the stakeholders are shown in Figure 4;
■ Project Priority Ranking by Phase. Figure 5 shows project priority ranking during four phases of the projects studied;
■ Project Priority Ranking by Stakeholders. Figure 6 shows the most important project priority rankings by stakeholders.
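The aggregation described above is simple averaging followed by ranking. The short sketch below illustrates that calculation; the data structure, the sample ratings, and the choice to exclude 0 ("not applicable") responses from the averages are assumptions made for the illustration, not details reported by the authors.

```python
# Minimal sketch of the aggregation described above: responses rated 0-5
# are averaged within a group (a phase or a stakeholder role) and the
# highest-scoring items are reported. Sample data are hypothetical.
from collections import defaultdict
from statistics import mean

# Each response: (item, group, rating). Ratings of 0 ("not applicable")
# are excluded here -- an assumption, since the paper does not state how
# zeros were handled in the averages.
responses = [
    ("Project Mission", "Definition", 5),
    ("Project Mission", "Definition", 4),
    ("Communication", "Definition", 5),
    ("Communication", "Execution", 4),
    ("Owner's Consultation", "Execution", 5),
]


def top_items(responses, group, n=10):
    """Average the ratings for one group and return the n highest items."""
    scores = defaultdict(list)
    for item, grp, rating in responses:
        if grp == group and rating > 0:
            scores[item].append(rating)
    averaged = {item: mean(vals) for item, vals in scores.items()}
    return sorted(averaged.items(), key=lambda kv: kv[1], reverse=True)[:n]


print(top_items(responses, "Definition"))
```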

From the results, it was concluded that the value of metrics as a predictive tool was not fully exploited by the project teams. It may have been possible to place more importance on key metrics earlier in the project. This could have been done to ensure that things did not get out of hand by the time the execution stage rolled around because, at that point, the project has enough momentum that it becomes quite difficult to get it back on track.

Table 1 shows the most important overall CSFs and project metrics as identified by all three stakeholder groups over the four project phases. The respondents showed inconsistencies between what they identified as the project success factors and what were used as project metrics. It was observed that, in some cases, respondents on the same project agreed on the importance of certain CSFs, but they did not agree on how the CSFs were measured. In other cases, respondents agreed on the importance of these factors, but they indicated that they did not have a metric established for measuring them.

It also was observed that project owners, contractors, and consultants do not have a clear understanding of the methods that are used on their projects to measure how well project goals are met. Although there is some agreement as to which factors are important to the success of the project, there also should be agreement on how to measure success. If project metrics are not clearly understood, it is difficult to determine the level of success of the project. Each individual involved in the project may have a different opinion as to how successful the project is, depending on his or her own measurement. Another important point is that success factors considered to be important to a project should be measured in some way during execution to determine ahead of time whether performance objectives will be met. If the project stakeholders and the team do not formally measure the factors that they deem to be most important, they cannot hope to predict its success and take corrective action as required. The project stakeholders and the team may be spending their time measuring less important factors, which will lead to an incorrect ongoing measurement of whether or not the project is a success.

Figure 1. Project Success Factors by Phases (importance ratings, 0–5, of the 10 most important success factors for the Definition, Planning, Execution, and Termination phases)


Figure 2. Success Factors by Stakeholders (importance ratings, 0–5, of the 10 most important success factors as rated by owners, contractors, and consultants)

Figure 3. Project Metrics by Phases (importance ratings, 0–5, of the 10 most important project metrics for the Definition, Planning, Execution, and Termination phases)




Summary of Findings
Based on the results, the authors found:
■ The ratings for a particular project success factor did not change very significantly between different phases;
■ Throughout all project phases, there was general agreement among survey participants that a project mission, consultation with the project owner, good communication, and the availability of resources are important factors for project success;
■ Participants on each project agreed on certain project success factors, but they tended to either disagree on how the success factor should be measured, or they did not attempt to measure the success factor at all;
■ Project metrics were not fully utilized as a predictive tool but rather as a measure of how well the project performed to that point in time. This often is too late to allow effective corrective action;
■ The owners of the projects agreed unanimously that it is very important for the project to meet the needs of the end user;
■ Responsibility breakdown structures, work breakdown structures, and CSFs were not well utilized;
■ The owners did not have control, monitoring, or feedback systems independent of those used by the contractors and/or consultants;
■ Time taken to align stakeholders on what is important to the project probably would help improve communication, reduce rework, and enhance the possibility of success;
■ The alignment of project metrics with project success factors and priorities appears to be an opportunity for improvement in the software industry.

Recommendations
It is widely accepted that there is room for improvement in the delivery of software projects, including new software development, upgrades, or implementation. Many of the specific studies in this area suggest either what the problems may be or what needs to be in place for success. While this is useful information, it does not help the practitioner with the question: “How do I achieve greater success?” This study set out to link the symptoms for success or failure with what constructive action may be needed to achieve such success. These recommendations, which have been tested on live projects to validate them, make that critical link. Based on internal benchmarks in the test companies, savings in time and cost of between 10% and 30% were matched by improved quality and end-user acceptance.

Figure 4. Project Metrics by Stakeholders (importance ratings, 0–5, of the 10 most important project metrics as rated by owners, contractors, and consultants)



Figure 5. Project Priorities by Phase (importance ratings, 0–5, of six project priorities: time, performance, cost, end-user satisfaction, career development and training, and team development, across the Definition, Planning, Execution, and Termination phases)

Figure 6. Project Priorities by Stakeholders (importance ratings, 0–5, of the six project priorities as rated by owners, contractors, and consultants)



The recommendations that follow represent the four most significant elements identified and tested in this study:
■ Link your project to corporate business strategy;
■ Align major stakeholders on key issues;
■ Simplify project controls and metrics;
■ Make sure effective communication and expectation management is maintained throughout the project life.

Greater detail on how these aspects are implemented can be found in Hartman (2000).

Conclusions
Although the projects surveyed were rated as successes, some projects lacked defined goals or defined metrics to measure this success. If the owner, contractor, and consultant on a project all have different ideas of what success is and how success will be measured, it is unlikely that everyone (or possibly anyone) will be satisfied when the project is completed. There are many tools that can be utilized to ensure a successful project. For the software industry, it may just be a matter of learning what tools are available and how to use them properly to raise the number of successful software projects to an acceptable level.

The authors hope that this study will help in:
■ Considering a holistic approach for the project;
■ Understanding what is important for success;
■ Understanding the dynamics of project drivers and priorities and that these may shift over time;
■ Getting and maintaining alignment of major stakeholders including the immediate project team on all important strategic and tactical issues;
■ Realizing better planning and more effective control;
■ Accomplishing a successful project with satisfied stakeholders, project teams, and customers.

Some general guidelines for how this may be achieved have been offered. The suggested approaches to achieving project success have been tested on live projects with consistently successful outcomes.

Table 1. Overall 10 Most Important Critical Success Factors and Metrics

Critical success factors (rank order):
1. Owner is informed of the project status and his/her approval is obtained at each stage
2. Owner is consulted at all stages of development and implementation
3. Proper communication channels are established at appropriate levels in the project team
4. The project has a clearly defined mission
5. Top management is willing to provide the necessary resources (money, expertise, equipment)
6. The project achieves its stated business purpose
7. A detailed project plan (including time schedules and milestones) with a detailed budget is in place
8. The appropriate technology and expertise are available
9. Project changes are managed through a formal process
10. The project is completed with minimal and mutually agreed scope changes

Project metrics (rank order):
1. Project completed on time or ahead of schedule
2. Milestones are identified and met
3. Deliverables are identified
4. The scope of the project is clearly defined and quantified
5. Activities and logical sequences are determined and scheduled (CPM)
6. Project completion is precisely defined
7. The project is completed within a predetermined budget
8. Resource requirements are identified and supplied as needed
9. Responsibilities are assigned
10. A specific new technology is adopted and accepted by end users

Acknowledgments
The authors would like to thank all the students of the fundamentals of project management class of the project management specialization at the University of Calgary who conducted the interviews reported in this paper. Thanks also are due to all industry personnel who gave their time and participated in the interview surveys. The authors also would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences and Humanities Research Council of Canada (SSHRC), and industry partners who support the research program of the project management specialization at the University of Calgary. In addition, the authors thank the referees of this paper for their useful comments and constructive suggestions to improve it.

References
Abdel-Hamid, T., & Madnick, S. (1990). The elusive silver lining: How we fail to learn from software development failures. Sloan Management Review, 32(1), 39–47.
Adams, S.M., Sarkis, J., & Liles, D. (1995). The development of strategic performance metrics. Engineering Management Journal, 7(1), 24–32.
Atkinson, R. (1999). Project management: Cost, time, and quality, two best guesses and a phenomenon, its time to accept other success criteria. International Journal of Project Management, 17(6), 337–342.
Baccarini, D. (1999). The logical framework method for defining project success. Project Management Journal, 30(4), 25–32.
Bailey, R. (1996). Approving systems projects: Eight questions an executive should ask. PM Network, 10(5), 21–24.
Barnes, M. (1991). Innovation—Why project management is essential to successful business. International Journal of Project Management, 4(4), 207–209.
Belassi, W., & Tukel, O.I. (1996). A new framework for determining critical success/failure factors in projects. International Journal of Project Management, 14(3), 141–151.
Butterfield, L., & Edwards, R. (1994). PM software development using PM techniques. Proceedings of the Project Management Institute’s 25th Annual Symposium, Vancouver, Canada. Upper Darby, PA: PMI, 522–526.
Byers, R., & Blume, D. (1994). Tying critical success factors to systems development. Information and Management, 26(1), 51–61.
Clarke, A. (1999). A practical use of key success factors to improve the effectiveness of project management. International Journal of Project Management, 17(3), 139–145.
Cooke-Davies, T. (2002). The ‘real’ success factors on projects. International Journal of Project Management, 20(3), 185–190.
Demery, K. (1995). Magic schedules delivering consumer software on time. Proceedings of the Project Management Institute’s 26th Annual Symposium, New Orleans, LA. Upper Darby, PA: PMI, 662–667.
Duncan, W.R. (1991). Concern of project managers counterpoint vive la difference. PM Network, 5(6), 33–34.
Fisher, F., & L’Abbe, M. (1994). PM implementation through organizational change. Proceedings of the Project Management Institute’s 25th Annual Symposium, Vancouver, Canada. Upper Darby, PA: PMI, 277–283.
Fisher, K.J. (1991). Project management will drive revenue recognition for software providers. PM Network, 5(6), 25–28.
Fowler, A., & Walsh, M. (1999). Conflicting perceptions of success in an information systems project. International Journal of Project Management, 17(1), 1–10.
Gibbs, W. (1994). Software’s chronic crisis. Scientific American, 271(3), 86–95.
Gioia, J. (1996). Twelve reasons why programs fail. PM Network, 10(11), 16–19.
Hartman, F. (2000). Don’t park your brain outside: A practical guide to improving shareholder value with SMART management. Newtown Square, PA: PMI.
Hartman, F., & Ashrafi, R. (1996). Failed successes and successful failures. Proceedings of the Project Management Institute’s 27th Annual Symposium, Boston, MA. Upper Darby, PA: PMI, 907–911.
Hartman, F., & Ashrafi, R. (1998). Project management in the IT/IS industry. Proceedings of the Project Management Institute’s 29th Annual Symposium, Long Beach, CA. Newtown Square, PA: PMI, 706–709.
Hartman, F., & Jearges, G. (1996). Simplifying project success metrics. Proceedings of the 40th Meeting of the American Association of Cost Engineers (AACE) International, Vancouver, B.C., Canada. Morgantown, WV: AACE International, PM.7.1–PM.7.4.
Ingram, T. (1994). Managing client/server and open system projects: A 10-year study of 62 mission-critical projects. Project Management Journal, 25(2), 26–36.
Jiang, J., Klein, G., & Balloun, J. (1996). Ranking of system implementation success factors. Project Management Journal, 27(4), 49–53.
Johnston, A.K. (1995). A hacker’s guide to project management. Oxford, U.K.: Butterworth-Heinemann.
Jones, C. (1994). Assessment and control of software risks. Burlington, MA: Yourdon Press.
Kiernan, M. (1995). Get innovative or dead. Vancouver, Canada: Douglas & McIntyre.
King, J. (1995). ‘Tough love’ reins in IS projects. Computerworld, 29(23), 2.
Lane, P., Palko, J., & Cronan, T. (1994). Key issues in the MIS implementation process: An update using end user computing satisfaction. Journal of End User Computing, 6(4), 3–13.
Lavence, D. (1996). Project management in IT/MIS: An ever increasing challenge. Proceedings of the Project Management Institute’s 27th Annual Symposium, Boston, MA. Upper Darby, PA: PMI, 464–466.
Levene, R.J., Bentley, A.E., & Jarvis, G.S. (1995). The scale of project management. Proceedings of the Project Management Institute’s 26th Annual Symposium, New Orleans, LA. Upper Darby, PA: PMI, 500–507.
Lim, C.S., & Mohamed, M.Z. (1999). Criteria of project success: An exploratory re-examination. International Journal of Project Management, 17(4), 243–248.
Lucas, J.J. (1995). Work management: Why can’t information managers manage? Proceedings of the Project Management Institute’s 26th Annual Symposium, New Orleans, LA. Upper Darby, PA: PMI, 304–310.
Martin, E.W. (1982). Critical success factors of chief MIS/DP executive. MIS Quarterly, 6(2), 1–9.
Martin, J.E. (1994). Revolution, risk, runaways: Three Rs of IS projects. Proceedings of the Project Management Institute’s 25th Annual Symposium, Vancouver, Canada. Upper Darby, PA: PMI, 266–272.
Munns, A.K., & Bjeirmi, B.F. (1996). The role of project management in achieving project success. International Journal of Project Management, 14(2), 81–87.
O’Conner, M.M., & Reinborough, L.M. (1992). Quality projects in the 1990s: A review of past projects and future trends. International Journal of Project Management, 10(2), 107–114.
Otto, R.A., Dhillon, J., & Watkins, T. (1993). Implementing project management in large-scale information technology projects. In P.C. Dinsmore (Ed.), AMA handbook of project management (pp. 352–361). New York: Amacom.
Peters, L. (1996). The master’s touch in project success: Where, how, when, what of leading projects to excellence. Proceedings of the Project Management Institute’s 27th Annual Symposium, Boston, MA. Upper Darby, PA: PMI, 806–812.
Phan, D., Vogel, D., & Nunamaker, J.F., Jr. (1995). Empirical studies in software development projects: Field survey on OS/400 study. Information and Management, 28(4), 271–280.
Pinto, J., & Slevin, D. (1988). Critical success factors across the project life cycle. Project Management Journal, 19(3), 67–75.
Pinto, J.K., & Kharbanda, O.P. (1995). Successful project managers: Leading your team to success. New York: Van Nostrand Reinhold.
PMI Standards Committee. (2000). A guide to the project management body of knowledge. Newtown Square, PA: Project Management Institute.
Raybould, M. (1996). Is project management of software projects a special case? Proceedings of the Project Management Institute’s 27th Annual Symposium, Boston, MA. Upper Darby, PA: PMI, 549–554.
Raz, T. (1993). Introduction of the project management discipline in a software development organization. IBM Systems Journal, 32(2), 265–277.
Redmond, W.F. (1991). The evolution of project management in corporate systems development projects. Proceedings of the Project Management Institute’s 22nd Annual Symposium, Dallas, TX. Drexel, PA: PMI, 129–134.
Roetzheim, W. (1993). Managing software projects: Unique problems and requirements. In P.C. Dinsmore (Ed.), AMA handbook of project management (pp. 347–352). New York: Amacom.
Samuels, R. (1996). Managing software programs: A different kind of animal. Proceedings of the Project Management Institute’s 27th Annual Symposium, Boston, MA. Upper Darby, PA: PMI, 627–633.
Shank, M., Boynton, A., & Zmud, R. (1985). Critical success factor analysis as a methodology for MIS planning. MIS Quarterly, 9(2), 121–129.
Simmons, D.B. (1992). A win-win metric based software management approach. IEEE Transactions on Engineering Management, 39(1), 32–41.
Stevens, W.M. (1991). Concerns of project managers: This & that “Yes, but …”. PM Network, 5(2), 14–18, 22.
Tan, R. (1996). Success criteria and success factors for external technology transfer projects. Project Management Journal, 27(2), 45–56.
Thamhain, H. (1994). Designing modern project management systems for a radically changing world. Project Management Journal, 25(4), 6–7.
Turner, J.R., McLaughlin, J.J., Thomas, R.D., & Hastings, C. (1994). A vision of project management in 2020: Interactive session. Proceedings of the Project Management Institute’s 25th Annual Symposium, Vancouver, Canada. Upper Darby, PA: PMI, 634–635.
Walsh, J.J., & Kanter, J. (1988). Towards more successful project management. Journal of Systems Management, 39(319), 16–21.
Ward, J.A. (1994). Productivity through project management: Controlling the project variables. Information Systems Management, 11(1), 16–21.
Wateridge, J. (1995). IT projects: A basis for success. International Journal of Project Management, 13(3), 169–172.
Wateridge, J. (1998). How can IS/IT projects be measured for success? International Journal of Project Management, 16(1), 59–63.
Whitten, N. (1995). Managing software development projects (2nd ed.). New York: John Wiley & Sons.
Zahedi, F. (1987). Reliability of information systems based on the critical success factors—formulation. MIS Quarterly, 11(2), 187–204.
Zells, L. (1991). Balancing trade-offs in quality, cost, schedule, resources, and risk. Proceedings of the Project Management Institute’s 25th Annual Symposium, Dallas, TX. Drexel, PA: PMI, 112–118.
Zells, L. (1994). World-class software practices. Proceedings of the Project Management Institute’s 25th Annual Symposium, Vancouver, Canada. Upper Darby, PA: PMI, 249–253.

Francis Hartman, PhD, PMP, is a professor of project management at the University of Calgary and holder of the Natural Sciences and Engineering Research Council of Canada (NSERC) and Social Sciences and Humanities Research Council of Canada (SSHRC) Chair in Project Management. Before accepting this position in 1991, he gained more than 30 years of experience in the industry on more than $30 billion worth of diverse projects. His industrial experience spans all phases of projects from selection to decommissioning. He is the principal researcher behind the development and testing of Strategically Managed Aligned Regenerative Transitional (SMART) project management, which is used to enhance the effective management of a growing number of projects, programs, and businesses. Through Quality Enhanced Decisions Inc., he offers consulting services related to use of SMART management at the project, program, and corporate levels to Fortune 100 companies, merging enterprises, and government agencies at the local, national, and global levels.

Rafi Ashrafi, PhD, PMP, obtained his master's degree in computer science and a PhD in project management from the University of Bradford, U.K. He has more than 20 years of experience in academia and business in the United Kingdom, Middle East, and Canada. Ashrafi is a project management consultant and has worked in the information technology (IT), telecommunications, energy, and utility industries. He also is an adjunct professor at the University of Calgary and an instructor at the PMI South Alberta Chapter Project Management Professional (PMP®) preparation workshop. His research interests include project management maturity models, project management office, CSFs, and project management issues in IT/information systems and e-Business/e-Commerce. He has published 25 research papers in global journals and conference proceedings.

Appendix 1. Project Details
Data on 12 software projects was collected. A brief description of the projects follows for the interest of the readers of this paper.

Project                                                                  Value          Duration
Facilities information and reporting management system                  $1 million     15 months
Data transmission security system                                        $4 million     Two years
Network management software                                              $4 million     One year
Financial systems                                                        $14 million    Two years
Software project for a major defense project                             Not reported   Two years
Photo and driver's license information system                            $2 million     One year
Flip-Chip implementation                                                  Not reported   Not reported
Business process control system                                          Not reported   Not reported
Implementation of a new corporate reserve database                       $6.5 million   One year
Development of a new version of software                                 $1.5 million   One year
Accounting system implementation                                         $0.5 million   Three months
Design and implementation of a software system to manage
  customer contract information                                          $1.2 million   Two years



Scheduling Programs With Repetitive Projects Using Composite Learning Curve Approximations

Jean-Pierre Amor, University of San Diego, School of Business Administration, 5998 Alcala Park, San Diego, CA 92110–2492 USA

Abstract
Programs that require executing a modest number of similar projects arise in several industries, such as aerospace, construction, and defense. The scheduling of such programs often involves deciding how many projects will run simultaneously (in parallel) to "optimally" trade off resource utilization and penalty costs. One approach for scheduling such programs is to perform a quick approximation and follow it later with a "full-blown" procedure. Because it assumes a common learning rate for all project activities, the quick approximation, although computationally inexpensive, is not sufficiently accurate to eliminate the need for the full procedure, which requires so much data tracking and so many calculations that it is discouraging to practitioners. The approximation proposed in this paper is significantly more accurate than the quick approximation, while requiring only slightly more computation, making it more valuable for program managers.

Keywords: schedule development; learning curves; multiproject planning; estimating

©2002 by the Project Management Institute. 2002, Vol. 33, No. 3, 16–29. 8756–9728/02/$10.00 per article + $0.50 per page

Programs that deliver a relatively small number of similar products arise in a variety of industries. In the defense industry, the budgetary reductions that followed the end of the Cold War and the increasing complexity and cost of major weapon systems have led to the procurement of shrinking quantities of combat ships and aircraft from the prime contractors. In the aerospace industry, the nature of space missions and their related activities imply fairly small orders for spacecraft, satellites, and rocket boosters. And in the housing industry, it has long been a common practice to build the more expensive homes in relatively small tracts. These types of programs also occur in the provision of certain services, such as management consulting, the upgrade of existing equipment, the change of software, or the introduction of a process improvement or of a new monitoring system.

Frequently, these types of programs consist of a one-time order for a product that the contractor has never produced before, and due to the complexity of the product, each "unit" requires the execution of a distinct project. Thus, the scheduling of such programs, that is, programs with repetitive projects, is very complicated. It also is very important because the cost per unit produced can be quite high. The program manager, who is responsible for providing an accurate delivery schedule to all parties, must make the development of this schedule the most important planning activity.

What the Problem Entails
The scheduling of a program is used to make an a priori estimate of its overall duration and cost. This enables the manager to set a completion date, to budget resources, and to allocate resource usage. Scheduling also permits control over the timing of ongoing activities to ensure the timely completion of each project.

The scheduling of programs with repetitive projects often involves balancing two opposite tendencies. On the one hand, when the contractor has little or no previous experience with the product, it is imperative to take advantage of the learning phenomenon and execute as many projects as possible with the same resources, e.g., individuals, crews, teams, materials, or equipment. This lessens the cost of the resources used because the project performance time decreases as the number of repetitions increases. In the extreme case, all the projects would be performed in a single sequence (all in series). On the other hand, the need to deliver each unit by its contracted due date to avoid a penalty cost often requires that several projects be executed simultaneously. In the extreme case, all the projects would be performed simultaneously (all in parallel). Thus, a major challenge to the program manager in order to minimize the total cost of the program is to determine:
■ How many parallel sequences to operate;
■ How many projects to assign to each sequence.

How the Problem Has Been Addressed
This important scheduling problem has been addressed before, albeit with some restrictions. In 1996, Shtub, LeBlanc, and Cai developed an integer programming problem that seeks the least-cost assignment of repetitive projects to a given number of available teams. Thus, in their formulation, the number of parallel sequences is fixed, while the optimal assignment of projects to these sequences must be determined. Their objective function is defined as the sum of the production and penalty costs, less any incentives for early project completion. Although their objective function has no closed form, because it must accommodate any type of learning curve and of penalty/incentive cost structure, the authors efficiently obtained near-optimal solutions using the pair-wise swap algorithm (1996).

In 1991, Shtub offered a heuristic search procedure for solving a somewhat different formulation of this scheduling problem. His goal was to find the number of parallel project sequences that minimizes the sum of the production and penalty costs, less any early completion incentives. However, a constraint implied by his search procedure is that the sequences must be of equal or nearly equal length. That is, the numbers of projects assigned to the parallel sequences/teams can differ by, at most, one. For example, with eight projects and three sequences, the assignments would be as follows:
■ Sequence/Team 1: Project 1, Project 4, Project 7;
■ Sequence/Team 2: Project 2, Project 5, Project 8;
■ Sequence/Team 3: Project 3, Project 6.
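This equal-length, round-robin assignment rule is simple to state in code. The following minimal Python sketch (illustrative only; the function name is hypothetical and not from Shtub's paper) reproduces the example above:

def round_robin_assignment(n_projects, n_sequences):
    # Assign projects 1..n_projects to sequences in round-robin order,
    # so that sequence lengths differ by at most one.
    sequences = [[] for _ in range(n_sequences)]
    for j in range(1, n_projects + 1):
        sequences[(j - 1) % n_sequences].append(j)
    return sequences

# Eight projects, three sequences -> [[1, 4, 7], [2, 5, 8], [3, 6]]
print(round_robin_assignment(8, 3))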

Thus, in this formulation, the assignment of projects to parallel sequences is a fixed process while the optimal number of sequences must be determined. While many feasible projects-to-sequence assignments are ignored by this simple, partial enumeration procedure, Shtub does obtain satisfactory solutions to his formulation (1991).

Limitations of the Previous Approaches
These two formulations suffer from a common limitation. They do not explicitly address the contents of the projects, such as the activities involved, their precedence relationships, and the learning rates of the resources that perform these activities. This omission can cause substantial inaccuracies when forecasting the program schedule and costs.

Focusing on heuristic algorithms that might efficiently solve their optimization problem, Shtub, LeBlanc, and Cai (1996) ignore the project details altogether. Essentially, they assume that a project can be viewed as a macro-activity performed by a single, possibly very large, resource entity (team). Hence, the only learning rate required for their model's time and cost calculations is that of a projectwide resource (the project learning rate). Because this rate is typically unknown at the start of a new program, the authors assume its value a priori. However, because a project's activities usually are defined in terms of the specialists who perform them and because these specialized activities are learned at rates that can vary considerably (the activity learning rates), this assumption can lead to substantial errors when estimating the project completion dates and certain costs associated with the overall program. Consequently, errors also can be made when searching for a combination of sequences and projects that would minimize the total cost of the program. To address these issues, the authors recommend extending their model by breaking down the work content of each project into specific tasks, i.e., by explicitly addressing the activities.

In his earlier article, Shtub (1991) does model the various program costs down to the level of the project activities. He also assumes that each activity is, under normal conditions, performed by one unit, i.e., worker or crew, of a single type of resource. However, to reduce the extensive computations associated with repeated applications of the critical path method (CPM), Shtub further assumes that all the activities/resource types learn at the same rate. This quick approximation is tantamount to assuming a priori knowledge of the overall project learning rate, as is done later (1996). It also is subject to the same possibilities for error.

Admittedly, Shtub advises the reader to utilize this restrictive assumption/approximation "… during the early stages of the program (when data availability is limited) to analyze the tradeoffs between alternative schedules" (1991, p. 53). He subsequently recommends relaxing that assumption and using the full-blown procedure "… for fine tuning of the delivery schedule when accurate data … are accumulated" (1991, p. 53). At that point, however, many of the extensive computations that were avoided by the quick approximation still must be performed. Thus, it is doubtful that practitioners would ever carry out the full-blown procedure.

Proposed Solution
To improve the scheduling of programs with repetitive projects, the analysis must be conducted at the level of project activities. However, the data tracking and the computational workload associated with this level of detail quickly become overwhelming. Thus, it is imperative to make the schedule and cost estimation processes very efficient: sufficiently accurate and relatively inexpensive.

In a 1993 article, Amor and Teplitz do explicitly address the project contents in the scheduling of programs with repetitive projects by incorporating detailed (task-level) learning effects into the CPM (1993). However, soon thereafter, several practitioners expressed concern over the heavy computational requirements of that approach. As a result, they developed an efficient approximation method (Amor & Teplitz, 1998) by extending Badiru's composite learning rate approximation in critical resource diagramming (1995). Their approximation method dramatically reduces the computational workload, while still providing accurate estimates of project delivery dates. In both the 1993 and the 1998 articles, however, contractual due dates and program costs are not included. Thus, in the absence of penalties, the projects always are conducted in a single sequence (all in series).



In this research effort, an efficient program-scheduling tool is developed by implementing the Amor and Teplitz approximation method for project composite learning curves (Amor & Teplitz, 1998) in the context of Shtub's full-blown procedure for scheduling programs with repetitive projects (Shtub, 1991).

Evaluation of Shtub's Scheduling Approach
The Basic Tradeoff. The scheduling of programs with repetitive projects is characterized by two potentially conflicting considerations:
■ The need to complete each unit by its due date (according to the contract schedule);
■ The learning phenomenon. Frequently, the contractor has little or no experience with the product, and therefore, substantial reductions in time and cost per unit (as the number of repetitions increases) are anticipated.

Because of the first consideration, there is a tendency to set up operations all in parallel, so that meeting the due dates is assured. However, the second consideration encourages producing the units all in series, so that the learning effects are maximized. Of course, there are many other less extreme options. The delivery schedule depends on how many parallel sequences are utilized and how many projects are assigned to each sequence. Thus, to make a sound scheduling decision, it is essential to compare a variety of parallel sequencing options, which trade off the penalty associated with late deliveries against the savings due to learning and to any incentive payments for early completion. Also, the analysis should be conducted at the level of project activities to ensure sufficient accuracy.

In the articles previously discussed, the risk of overshooting the contracted due dates is not considered. Hence, a correct trade-off between penalty and resource utilization costs occurs when the total cost of the program is minimized. In the context of Shtub's approach, this means when the optimal number of (equal/nearly equal) sequences to be operated in parallel has been found (Shtub, 1991). A slightly generalized version of Shtub's cost model, which was used in this research effort, is presented in Appendix 1. In this model, the total program cost is the sum of the resource utilization, hiring, firing, and penalty costs, minus the value of any incentives resulting from early project completions. Although the cost equations are fairly complex, the following discussion is based primarily on the number of projects in the program (N), the number of activities in each project (M), the assumed common learning rate (r), and the number of parallel sequences to be used (x).

Computational Workload. Because Shtub's numerical search heuristic for identifying an optimal number of parallel sequences (1991) is quite straightforward, the core of his contribution resides in his program cost calculations. However, as mentioned earlier, his equal learning rates assumption essentially bypasses the analysis at the project level and can lead to appreciable errors in program schedule and cost calculations. This assumption does yield a quick approximation to the full-blown procedure. Shtub makes this approximation because, as he calculates it, the full-blown procedure requires applying the CPM to N program networks (one for each possible value of x), each with NM activities, and evaluating the M activity learning curves, each for N repetitions. Thus, even for moderate values of M and N, the original procedure can represent an enormous amount of data tracking and computations.

With his quick approximation (which the author refers to as the tangent approximation), on the other hand, Shtub shows that the work simply consists of executing the CPM on one project network (that of the first project) with M activities and evaluating one learning curve, i.e., the common one, for N repetitions, using the critical path duration of the first project as the duration of the first repetition. This work covers the case of a single sequence (x = 1). Following this, Shtub applies his fixed project-to-sequence assignment process (illustrated earlier) to "pick off" the estimated project delivery dates in the various parallel sequence options (x = 2 through x = N). Although the quick approximation yields a tremendous computational advantage, it is not quite as dramatic as it appears.
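As a rough illustration of that quick calculation (a minimal sketch, not code from Shtub's paper; the numbers are hypothetical), one CPM duration and one common learning rate suffice to generate the delivery dates of every project in a single sequence:

import math

def tangent_delivery_dates(first_project_duration, common_rate, n_projects):
    # All activities are assumed to learn at one common rate r, so the j-th
    # repetition of the project takes T1 * j**b days, where b = log(r)/log(2).
    # Delivery dates in a single sequence are the cumulative sums of those times.
    b = math.log(common_rate) / math.log(2)
    dates, elapsed = [], 0.0
    for j in range(1, n_projects + 1):
        elapsed += first_project_duration * j ** b
        dates.append(round(elapsed, 1))
    return dates

# Hypothetical case: first project takes 100 days, r = 0.85, five projects in series
print(tangent_delivery_dates(100.0, 0.85, 5))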

The computational reduction due to the equal learning rates assumption appears larger than it actually is because, to perform the full-blown procedure, it really suffices to deal with the all-in-series (x = 1) situation; that is, to apply the CPM to N project networks, each with M activities, and to evaluate the M activity learning curves, each for N repetitions. Following this, the fixed projects-to-sequence assignment process can be used to "pick off" the estimated project delivery dates for the other options (x = 2 through x = N). Thus, there is no need to apply the CPM to entire program networks, and the computational workload associated with the full-blown procedure really is far less than reported by the author. The author calls this computational simplification the "streamlined full-blown procedure" and, of course, it leads to exactly the same solution as Shtub's full-blown procedure.

For the all-in-series situation, the computational requirements of the streamlined full-blown procedure are similar in nature to those addressed by Teplitz and Amor (1993). Nevertheless, as mentioned earlier, such requirements still are too much for most practitioners. For each repetition of the project, each activity time must be adjusted for the learning effect, using appropriate parameters in the learning curve equation (typically of the log-linear form), before applying the CPM. Hence, there is a need to approximate the streamlined full-blown procedure in such a way that the data-tracking and computational requirements are dramatically reduced.
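The per-repetition calculation just described can be sketched as follows (a minimal, illustrative Python sketch under the paper's assumptions of one resource unit per activity and log-linear activity learning curves; the task data are hypothetical): each activity time is adjusted for learning, then a forward-pass CPM gives that repetition's project duration.

import math

def adjusted_time(first_time, rate, repetition):
    # Log-linear learning curve: time required for the j-th repetition.
    return first_time * repetition ** (math.log(rate) / math.log(2))

def project_duration(tasks, repetition):
    # Forward-pass CPM. 'tasks' maps a name to (first_time, learning_rate, predecessors)
    # and is assumed to list every predecessor before its successors.
    finish = {}
    for name, (t1, rate, preds) in tasks.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + adjusted_time(t1, rate, repetition)
    return max(finish.values())

# Hypothetical three-activity project executed three times by the same crews
tasks = {
    "A": (8.0, 0.90, []),
    "B": (5.0, 0.80, ["A"]),
    "C": (6.0, 0.85, ["A"]),
}
for j in (1, 2, 3):
    print(j, round(project_duration(tasks, j), 1))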

Table 1. Required Calculations for Various Scheduling Procedures

Scheduling procedure   Number of CPMs (number of activities)   Number of learning curves (number of reps.)
Full-Blown (FB)        N (NM)                                  M (N)
Tangent approx.        1 (M)                                   1 (N)
Streamlined FB         N (M)                                   M (N)
Secant approx.         2 (M)                                   M (2)


Table 2. Tasks, Precedences, Times, and Improvement Factors for Housing Example

Task   Description                                 Immediate predecessor   Time @ SRQ (days)   Rate of improvement
A      Excavate and pour footers                   None                    4                   90%
B      Pour concrete foundation                    A                       2                   95%
C      Erect wooden frame, including rough roof    B                       4                   85%
D      Lay brickwork                               C                       6                   85%
E      Install basement drains and plumbing        B                       1                   90%
F      Pour basement floor                         E                       2                   70%
G      Install rough plumbing                      E                       3                   85%
H      Install rough wiring                        C                       2                   85%
I      Install heating and ventilating             C, F                    4                   90%
J      Fasten plaster board and plaster            G, H, I                 10                  90%
K      Lay finish flooring                         J                       3                   85%
L      Install kitchen fixtures                    K                       1                   90%
M      Install finish plumbing                     K                       2                   85%
N      Finish carpentry                            K                       3                   80%
O      Finish roofing and flashing                 D                       2                   85%
P      Fasten gutters and downspouts               O                       1                   90%
Q      Lay storm drains for rain water             B                       1                   90%
R      Paint                                       L, M                    3                   95%
S      Sand and varnish floors                     N, R                    2                   70%
T      Finish electrical work                      R                       1                   80%
U      Finish grading                              P, Q                    2                   90%
V      Pour walks and complete landscaping         U                       5                   90%
W      Finish                                      S, T, V                 0                   n/a

Note. From "Cases and Readings in Production and Operations Management," by J.C. Latona and J. Nathan (1994), Boston, MA: Allyn and Bacon; and "Improving CPM's Accuracy Using Learning Curves," by C.J. Teplitz and J.P. Amor, 1993, Project Management Journal, 24, pp. 15–19.

Obviously, such an approximation must be sufficiently accurate to obviate the need for a follow-up with the (even streamlined) full-blown procedure, as suggested by Shtub. For this purpose, Shtub's quick approximation will not do.

Accuracy. Basically, Shtub's quick approximation requires very few computations (one CPM and N learning curve calculations), but its accuracy is not very predictable. By assuming, a priori, a common learning rate, r, for all the project activities, two types of errors arise, which eventually affect the schedule and cost estimations.

First, the composite learning curve of the M activities [which is not log-linear even if, as is usually assumed, the individual activity learning curves are log-linear (Amor & Teplitz, 1998)] is approximated by a log-linear curve, which coincides with the composite curve only at one point, corresponding to the first project. The amount and direction of the divergence between the composite curve and Shtub's "tangent approximation" is unknown and totally dependent on the value of r, the assumed common learning rate. There is no guidance regarding how to select a "good" value for r.
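The non-log-linearity is easy to verify numerically. In the following small sketch (an illustration, not taken from the cited papers), two log-linear activities performed in series yield a composite whose apparent rate changes from one doubling of output to the next:

import math

def series_time(first_times, rates, j):
    # Total time of several activities performed in series on the j-th repetition,
    # each following its own log-linear learning curve.
    return sum(t * j ** (math.log(r) / math.log(2)) for t, r in zip(first_times, rates))

times, rates = [10.0, 10.0], [0.70, 0.95]
t1, t2, t4 = (series_time(times, rates, j) for j in (1, 2, 4))
print(round(t2 / t1, 3), round(t4 / t2, 3))
# Doubling output from unit 1 to 2 cuts the composite time to 82.5% of its value,
# but doubling from 2 to 4 cuts it only to about 84.4%; a single constant rate
# (a log-linear curve) cannot reproduce both ratios.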

Because the composite learning curve of the M activities is at the heart of the resource utilization cost calculations (see Appendix 1), it follows that the amount and direction of the error in the approximated resource costs are likewise unknown and unpredictable. Strictly speaking, Shtub's quick approximation is not necessarily a tangent to the composite learning curve. It simply coincides with that curve at only one point and tends to diverge rapidly from that curve as the number of projects increases. Nonetheless, from this point on, the author shall refer to that approximation as the tangent approximation.

Second, the project composite learning curve [which also is not log-linear even if, as is usually assumed, the individual activity learning curves are log-linear (Amor & Teplitz, 1998)] is approximated by a log-linear curve, which coincides with the composite curve at only one point, corresponding to the first project. This composite curve reflects the fact that the critical path may change from repetition to repetition (Amor & Teplitz, 1998). Again, the amount and direction of the divergence between the composite curve and Shtub's "tangent approximation" is unknown and totally dependent on the value of r, the assumed common learning rate. Again, there is no guidance regarding how to select a "good" r.

Because the project composite learning curve is the basis for determining the delivery dates, and because these dates are at the heart of the penalty/incentive cost calculations (see Appendix 1), it follows that the amount and direction of the error in the approximated penalty/incentive costs are likewise unknown and unpredictable.

Essentially then, for any given number of parallel sequences, the project delivery dates and the total program cost cannot be reliably and accurately estimated with Shtub's tangent approximation. Hence, the optimal number of sequences, provided by his search heuristic, also may be in error. Consequently, there is a need for a more predictable approximation method that is economical and sufficiently accurate to obviate the need for a follow-up with the streamlined full-blown procedure.

An Efficient Approximation Procedure
The efficient approximation method mentioned earlier (Amor & Teplitz, 1998) constructs a log-linear secant to a project's composite learning curve. This approximation requires calculating the critical path times of the first and last projects of interest (using appropriate task times from the individual activity learning curves) and estimating the slope of the log-linear curve passing through these two points.


[Figure 2. Completion Times: IDEAL – Up to Six Parallel Sequences. Calendar days versus project number for one through six parallel sequences.]

[Figure 1. Activity-on-Node Network for Housing Example. Note. From "Cases and Readings in Production and Operations Management," by J.C. Latona and J. Nathan (1994), Boston, MA: Allyn and Bacon, and "Improving CPM's Accuracy Using Learning Curves," by C.J. Teplitz and J.P. Amor (1993), Project Management Journal, 24, pp. 15–19.]

Thus, the "secant approximation" generates, a posteriori, an approximate learning rate, rp, for the project. The computational requirements of this method (two CPMs, each with M activities, and 2M learning curve calculations) are not very different from those of Shtub's "tangent approximation." For the composite learning curve of the M activities, the secant approximation generates, a posteriori, another approximate learning rate, rs, which can be viewed as that of an all-in-series network of the project's M activities. This learning curve is used in the resource utilization cost calculations (see Appendix 1).
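A minimal sketch of that construction, assuming the critical-path durations of the first and last projects (T1 and TN) have already been obtained from learning-adjusted CPM runs; the function name and numbers are illustrative:

import math

def secant_learning_rate(duration_first, duration_last, n_projects):
    # Slope of the log-linear curve through (1, T1) and (N, TN),
    # expressed as an equivalent learning rate r = 2**b.
    b = math.log(duration_last / duration_first) / math.log(n_projects)
    return 2 ** b

# Hypothetical values: first house takes 100 days, thirtieth takes 45 days
print(round(secant_learning_rate(100.0, 45.0, 30), 3))  # about 0.85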

The direction of the divergence between a composite learning curve and its "secant approximation" is always the same. Because the composite curve is convex (Amor & Teplitz, 1998), the secant approximation always overestimates it. The amount of divergence cannot be bounded theoretically, but Amor and Teplitz demonstrate, via examples from several industries, that the accuracy of the approximation is well within 4%. In fact, the accuracy is within 2% in most cases (Amor & Teplitz, 1998).

For comparison purposes, the numbers of required calculations for the various procedures are summarized in Table 1. Note that the number of calculations associated with the "secant approximation" is independent of N, the number of projects in the program, an attractive feature from a practitioner's point of view.

Therefore, an efficient program-scheduling tool is developed in this research effort by implementing the Amor and Teplitz (1998) approximation method in the context of Shtub's (1991) full-blown procedure.


This tool combines the CPM, individual task learning curves, and the "secant approximation" with Shtub's cost model and search heuristic. The resulting package dramatically reduces the computational workload required by Shtub's approach (quick/tangent approximation followed by the streamlined full-blown procedure), while providing sufficiently accurate and robust results. This last point is illustrated in the next section, which compares the results from the two approximation methods (for those practitioners who might consider using only the quick approximation and ignore the follow-up) to the "ideal" results of the full-blown procedure.

Results
A spreadsheet is used to integrate the various elements of the author's approximation procedure for scheduling sequences of similar projects. An example from the house construction industry, developed by Amor and Teplitz (1998), is used to:
■ Compare the accuracies of the tangent and secant approximation methods;
■ Examine the robustness of these methods over a range of plausible scenarios.

This example is used because it is conservative; it generated the largest error (3.3%) among the various examples reported. The author's results indicate that the secant approximation method is consistently more accurate than Shtub's tangent approximation.

House Construction Example. The author uses the multi-unit construction program presented by Amor and Teplitz (1998). This example had been drawn from Latona and Nathan (1994), with the learning curve data extracted from Teplitz and Amor (1993) and adapted to that situation.

[Figure 3. Cost Curves: IDEAL – All Types of Costs. Dollars versus number of parallel sequences for the resource, hire, fire, penalty, incentive, and total cost curves.]

[Figure 4. Learning Curves: Composite – All Activities. Resource days versus project number for the IDEAL, Scnt (1–30), Tgnt (0.80), Tgnt (0.85), and Tgnt (0.90) curves.]


Table 3. Input Data for Base Case and Excursions

Description                               Notation   Value(s)                                                       Applies to
Number of projects                        N          30
Contract schedule excursions
  Contract schedule A                     Dj         120 / 180 / 240 days                                           j = 1–10 / 11–20 / 21–30
  Contract schedule B                     Dj         180 / 270 / 360 days                                           j = 1–10 / 11–20 / 21–30
  Contract schedule C                     Dj         270 / 405 / 540 days                                           j = 1–10 / 11–20 / 21–30
  Contract schedule D                     Dj         360 / 540 / 720 days                                           j = 1–10 / 11–20 / 21–30
Penalty cost excursions
  Daily penalty cost                      pj         $0, $50, $125, $250, $500                                      j = 1–30 (for each excursion)
Daily incentive cost                      gj         $0                                                             j = 1–30
Number of activities                      M          23 (the twenty-third represents project completion, 0 days)
Activity first performance time (days)    ti         8.1; 2.8; 11.8; 17.7; 2.0; 21.4; 8.8; 5.9; 8.1; 20.1; 8.8; 2.0; 5.9; 13.2; 5.9; 2.0; 2.0; 4.2; 21.4; 4.4; 4.0; 10.1 (based on data from Table 2)   i = 1–22
Daily resource cost                       ki         $300/crew                                                      i = 1–22
Hiring cost                               hi         $300/crew                                                      i = 1–22
Firing cost                               fi         $400/crew                                                      i = 1–22
Initial number of crews                   ci         1                                                              i = 1–22

Table 2 lists the tasks, precedences, times at the standard reference quantity (SRQ), and rates of improvement associated with the construction of a single home. The SRQ normally represents a unit beyond which a fully experienced worker is considered to demonstrate no perceptible future improvement. In this example, the SRQ is assumed to be unit 100. Although there are 23 tasks listed in the table, only the first 22 are time-consuming activities; the twenty-third task simply represents the project completion. Based on their precedence relationships, the tasks are arranged in a moderately complex network (see Figure 1). Coincidentally, there are 22 possible paths through the network, all of which could be a critical path at any time throughout this repetitive program.

The author examined the construction of a 30-house subdivision and, as done earlier (Amor & Teplitz, 1998), made two simplifying assumptions. The first assumption is that all workers and crews are new hires with no previous experience with their upcoming tasks. This assumption serves two purposes:
■ If all workers are equally experienced/inexperienced, it is not necessary to maintain learning curves depicting the actual repetitive position of each worker;
■ The "early stage" of a learning curve is the most demanding on the accuracy potential of the approximations.

The second assumption is that no house in a parallel sequence is started until the previous house has been completed, i.e., only x houses are under construction at any given time in the program. Normally, of course, a given task is performed on more than one house at a time (within any given sequence), resulting in a shorter program duration. While this assumption avoids the complexity of dealing with "overlapping" projects, it does tend to overstate the beneficial impact of incorporating learning curves (Amor & Teplitz, 1997).

Because, in this paper, the house construction example is used with a cost model, additional data are obtained from Shtub's paper (1991) and from informal conversations with a building contractor. These data then were adapted to the current example. Table 3 lists the values of all the inputs for the cost model defined in Appendix 1. It includes the values that are used later in the robustness excursions. For the base case, analyzed in the next section, Contract Schedule B (first 10 houses due on day 180, next 10 houses due on day 270, and last 10 houses due on day 360) and a penalty cost of $50/day are used. This table presents the information in the same order as it is introduced in Appendix 1 under "Assumptions and Notation."
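The activity first-performance times in Table 3 appear to follow from projecting each Table 2 time from the standard reference quantity (unit 100) back to unit 1 along its log-linear learning curve; the small sketch below (this derivation is an inference from the data, not stated explicitly in the paper) reproduces several of them:

import math

def first_unit_time(srq_time, rate, srq_unit=100):
    # Project a time quoted at the standard reference quantity back to unit 1
    # along a log-linear learning curve: t(1) = t(SRQ) / SRQ**b, with b = log2(rate).
    b = math.log(rate) / math.log(2)
    return srq_time / srq_unit ** b

# Task A: 4 days at unit 100 with a 90% rate -> about 8.1 days for a new crew's first house
print(round(first_unit_time(4, 0.90), 1))
# Task C (4 days, 85%) -> about 11.8; Task F (2 days, 70%) -> about 21.4
print(round(first_unit_time(4, 0.85), 1), round(first_unit_time(2, 0.70), 1))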

Comparative Accuracy Results. In this section, the author compares the accuracy of the tangent and secant approximation methods for the base case in the house construction example. The streamlined full-blown procedure is used to obtain the ideal results, to which the approximated results were compared. In Figures 2, 3, 4, 5, 6, 7, 9, and 10, the curves corresponding to the full-blown procedure are labeled "IDEAL."

To use the tangent approximation method, an a priori value for the common learning rate, r, must be selected. This value is used for approximating:
■ The composite learning curve of all the activities, which is required for calculating the resource utilization cost;
■ The project composite learning curve, which is needed for calculating the delivery dates and the penalty costs.


[Figure 5. Resource Cost. Dollars versus number of parallel sequences for the IDEAL, Scnt (1–30), Tgnt (0.80), Tgnt (0.85), and Tgnt (0.90) curves.]

[Figure 6. Learning Curves: Composite – Project. Resource days versus project number for the IDEAL, Scnt (1–30), Tgnt (0.80), Tgnt (0.85), and Tgnt (0.90) curves.]

Shtub furnishes no guidance for selecting a good value of r. In his example, he uses 0.85 and, because the activity learning rates are not provided, one can only assume that 0.85 is a relatively central value for the individual rates he has in mind. In the author's housing example, the largest and smallest learning rates are 0.95 and 0.70, respectively; their mean is 0.861, their median is 0.875, their mode is 0.90, and their Delionback weighted average (NASA, 1975), at the first unit, is 0.831. From among the activities that are on the critical path of the first project, the largest and smallest learning rates still are 0.95 and 0.70, respectively; their mean is 0.844, their median is 0.90, their mode is 0.90, and their Delionback weighted average (NASA, 1975), at the first unit, is 0.804. Which one of these numbers would make a good a priori estimate for r?

The initial estimate for r can greatly affect the computations in the tangent approximation method and, hence, the comparative accuracy results. Consequently, the author was reluctant to select only one value for r. Based on the potential choices listed, the author used three values: 0.80, 0.85, and 0.90. The values of 0.80 and 0.90 were selected because they are extreme among the calculated measures of centrality and because they provide empirical bounds on the error of the tangent approximation method. The value of 0.85 also was used because it is Shtub's choice and because it is halfway between 0.80 and 0.90. In Figures 4, 5, 6, 7, 9, 11, 12, 13, and 14, the curves corresponding to these three estimates are labeled "Tgnt (0.80)," "Tgnt (0.85)," and "Tgnt (0.90)."

As mentioned earlier, the secant approximation method produces a posteriori approximate learning rates, rs and rp, for the all-in-series and project networks. These rates are derived from the slopes of the log-linear secants (to the composite learning curves) "passing through" projects/houses 1 and 30. In this example, these rates turn out to be 0.844 and 0.831, respectively. In Figures 4, 5, 6, 7, 9, 11, 12, 13, and 14, the curves corresponding to a secant approximation are labeled "Scnt (1–30)."

Ideal Completion Times and Costs. Figure 2 displays the ideal house completion times for up to six parallel sequences of projects as obtained with the full-blown procedure. Note that this figure illustrates the rapid increase in completion times, and thus in the likelihood of penalty costs, as the number of sequences decreases. Figure 3 displays the various program cost curves as functions of the number of parallel sequences used. Because there are no daily incentive "costs" in this example, the incentive "cost" curve lies completely on the horizontal axis. Because the initial number of crews is one for all the activities, the hire and fire cost curves simply increase linearly. Note how the resource utilization cost and penalty cost curves "pull" in opposite directions and, essentially, determine the U shape of the total cost curve. For this base case, the optimal number of sequences to operate is four, which yields a minimum total cost of $1,421,277.

Comparative Resource Utilization Costs. Figure 4 displays the composite learning curves for all M activities. Note that the secant method approximates the ideal curve quite well. However, the tangent method can yield a broad spectrum of approximations between Tgnt (0.80) and Tgnt (0.90). As expected, the Tgnt (0.85) curve diverges from the ideal curve less than the other two tangents, but it does not fit the ideal curve as well as Scnt (1–30). Figure 5 displays the resource utilization cost curves as functions of the number of parallel sequences used. Again, the ideal curve is well approximated by the secant method, and the Tgnt (0.80) and Tgnt (0.90) curves bound a broad spectrum of approximations. Note, however, the convergence of the tangent curves as the number of sequences increases. This is due to:
■ The inverse relationship between the number of projects in a sequence and the number of sequences used;
■ The fact that increasingly shorter portions of the early part of the composite learning curve (of all the activities) are used as the number of sequences increases.

Comparative Penalty Costs. Figure 6 displays the project composite learning curves. Although the vertical scale is different from that of Figure 4, the general observations are similar, as should be expected, for both sets of curves. Figure 7 displays the penalty cost curves as functions of the number of parallel sequences used. Again, the ideal curve is well approximated by the secant method, and the Tgnt (0.80) and Tgnt (0.90) curves bound a broad spectrum of approximations, especially when the number of sequences is small. As with Figure 5, there is a convergence of the tangent curves. This is due to similar, though slightly more complex, reasons as for the resource utilization cost curves.

Comparative Hire and Fire Costs. Figure 8 displays the hire and fire cost curves. These curves do not depend on either of the composite learning curves (all M activities or projects). Consequently, they are the same for all the methods being compared.

Comparative Total Costs. Figure 9 displays the total cost curves as functions of the number of parallel sequences used. As expected from the results, the ideal curve is well approximated by the secant method. However, the tangent method yields a broad range of possible approximations between Tgnt (0.80) and Tgnt (0.90). Of course, Tgnt (0.85) provides a better fit, but it is not as good as that of Scnt (1–30).

Comparative Optimization Results. For each method and parameter value discussed, the optimal number of parallel sequences (x*), its associated minimum total cost (TC*), and the error and percentage error from the full-blown procedure are presented in Table 4.

Table 4. Comparative Optimization Results for the Base Case

Method and parameter   Optimal number of parallel sequences (x*)   Minimum total cost (TC*)   Error        Percentage error
IDEAL                  4                                           $1,421,277                 N/A          N/A
Scnt (1–30)            5                                           $1,454,832                 $33,554      2.4%
Tgnt (0.85)            5                                           $1,479,731                 $58,454      4.1%
Tgnt (0.80)            4                                           $1,328,746                 ($92,532)    –6.5%
Tgnt (0.90)            6                                           $1,619,996                 $198,719     14.0%

Clearly, the secant approximation method results in much smaller cost estimation errors than the tangent approximation method. Although Scnt (1–30) would recommend using five sequences rather than four, following that recommendation would result in an error of only $173 ($1,421,277 with four sequences in IDEAL vs. $1,421,450 with five sequences in IDEAL) after the fact, i.e., after program implementation and assuming that the ideal curve turns out to be a perfect forecast. Thus, for the base case example, the secant method provides an approximation to Shtub's full-blown procedure that is superior to almost any reasonable application of the tangent approximation method. However, what about the robustness of these results?

Comparative Robustness Results: Excursions from the Base Case
The base case is derived from the example in Amor and Teplitz (1998) that turned in the largest error (3.3%) when they estimated the duration of several programs (in various industries), each one using a single sequence of projects. Consequently, it does not seem beneficial to work with their other examples. Instead, to investigate the robustness of these approximation methods, the author conducted a series of excursions from the base case by varying two key parameters over relevant ranges of values. These two parameters are the contract schedule and the daily penalties. The various other parameters are far less significant.

Four contract schedules are examined, including the one that is used in the base case. From the tightest to the loosest schedule, they are labeled A, B, C, and D; their details are shown in Table 3. For each contract schedule, five daily penalties are examined; they range from $0 to $500. (As mentioned earlier, the base case uses Contract Schedule B and a $50/day penalty.) Because the total program cost is independent of the contract schedule when there are no penalty and incentive costs, there are 17 data points/excursions (including the base case) for use in the robustness analysis.

Ideal Minimum Total Cost. Figure 10 displays the ideal minimum total cost, TC*, for each of the 17 excursions mentioned, as obtained with the streamlined full-blown procedure. It also indicates in parentheses under each data point the optimal number of parallel sequences, x*, which yields the plotted value of TC*.

As expected, for any given penalty rate, the tighter the contract schedule, the more sequences are needed to minimize the total cost, because there is a greater likelihood of incurring penalties. Naturally, the tighter the schedule, the larger the minimum total cost. Also as expected, for any given contract schedule, as the penalty rate increases, the number of sequences needed to minimize the total cost tends to increase. The larger the penalty rate, the larger the minimum total cost, until x* becomes greater than the number of sequences beyond which there would be no more late projects. This turns out to be the case for Contract Schedule D, when x* becomes 3, because for that relatively loose schedule there are no late projects when x is greater than 2.

Comparative Errors Under the Various Contract Schedules. Figures 11, 12, 13, and 14 display the percentage error of each approximation method for each daily penalty under Contract Schedules A, B, C, and D. That is, for each excursion, the approximated minimum total costs are compared to the ideal minimum total cost, and the percentage errors are reported.

Table 5. Comparative Maximum Percentage Errors

              CS (A)    CS (B)    CS (C)    CS (D)
Scnt (1–30)   2.4%      2.5%      3.8%      2.9%
Tgnt (0.85)   3.9%      4.1%      6.5%      9.3%
Tgnt (0.80)   –14.8%    –14.8%    –14.8%    –14.8%
Tgnt (0.90)   25.9%     25.9%     25.9%     25.9%


[Figure 8. Hire and Fire Costs. Dollars versus number of parallel sequences for the hire and fire cost curves.]

[Figure 7. Penalty Costs. Dollars versus number of parallel sequences for the IDEAL, Scnt (1–30), Tgnt (0.80), Tgnt (0.85), and Tgnt (0.90) curves.]

Clearly, under Contract Schedule A, Scnt (1–30) outperforms the tangent approximations examined, with its error never exceeding 2.4% ($40,143).

Likewise, under Contract Schedule B, Scnt (1–30) outperforms the tangent approximations examined, with its error never exceeding 2.5% ($37,524). Again, under Contract Schedules C and D, Scnt (1–30) outperforms the tangent approximations examined, with its error never exceeding 3.8% ($51,429) and 2.9% ($34,506), respectively. Also note that, as the contract schedule loosens, the percentage error of the approximations tends to increase, regardless of the daily penalty values.

Summary
Based on the results shown in Figures 11, 12, 13, and 14, Table 5 presents the maximum percentage error in the minimum total cost for each contract schedule and penalty cost combination examined. Clearly, the secant approximation method is consistently more accurate than the tangent approximation method.

The only way that the tangent might outperform the secant is with an extremely fortunate and highly unlikely a priori estimate for r. However, it is also clear that, given the activity learning rates of the housing example, a choice of 0.85 (as Shtub made in his paper) does turn in sufficiently accurate results. Nevertheless, why gamble with the unpredictable size and direction of its error when the secant approximation errs minimally and consistently (typically less than 4% and always as an overestimation)?

Conclusion
An existing two-stage approach for scheduling programs with repetitive projects recommends performing an initial quick (tangent) approximation and following it later with a full-blown scheduling procedure. By assuming a common learning rate for all project activities, the tangent approximation, although computationally inexpensive, is not sufficiently accurate to obviate the need for the full-blown follow-up. The full-blown method, however, requires so much data tracking and so many calculations that it is discouraging to practitioners. The secant approximation presented in this paper is significantly more accurate and reliable than the tangent approximation, while requiring only slightly more computation. Because the secant approximation comes sufficiently close to the full-blown (ideal) follow-up, the latter can be avoided altogether, making the secant approximation a sufficiently accurate and extremely economical procedure, which should be attractive to most practitioners.

The scheduling of programs with repetitive projects discussed in this paper (and in the associated references) does not consider two important factors: risk management and project overlap. For example, the activity times are assumed to be known with certainty, whereas they could be treated as random variables and dealt with using risk analysis techniques. Likewise, the projects in any given sequence are assumed to begin only upon completion of their predecessor, whereas they could be overlapped and analyzed using a newly developed technique (Amor & Teplitz, 1997). These considerations, however, are beyond the scope of the present work and are left as suggestions for further research. If they can be incorporated into the secant approximation method, that procedure will make the scheduling of sequences of similar projects even more valuable for program managers.

[Figure 9. Total Costs. Dollars versus number of parallel sequences for the IDEAL, Scnt (1–30), Tgnt (0.80), Tgnt (0.85), and Tgnt (0.90) curves.]

[Figure 10. Min TC – IDEAL – Contract Schedules and Daily Penalties. Ideal minimum total cost (dollars) versus daily penalty for Contract Schedules A through D.]

References
Amor, J.P., & Teplitz, C.J. (1993). Improving CPM's accuracy using learning curves. Project Management Journal, 24 (4), 15–19.

Amor, J.P., & Teplitz, C.J. (1997). On the relative impacts of learning and overlap on the scheduling of programs with repetitive projects. Proceedings of the Twenty-Sixth Annual Meeting of the Western Decision Sciences Institute, Kamuela, HI. Madison, WI: Omnipress, 729–732.

Amor, J.P., & Teplitz, C.J. (1998). An efficient approximation procedure for project composite learning curves. Project Management Journal, 29 (3), 28–42.

Badiru, A.B. (1995). Incorporating learning curve effects into critical resource diagramming. Project Management Journal, 26 (2), 38–45.

Latona, J.C., & Nathan, J. (1994). Cases and readings in production and operations management. Boston, MA: Allyn and Bacon.

NASA. (1975). Guidelines for application of learning/cost improvement curves (Report TM X-64968). Washington, DC: Delionback, L.M.

Shtub, A. (1991). Scheduling of programs with repetitive projects. Project Management Journal, 22 (4), 49–53.

Shtub, A., LeBlanc, L.J., & Cai, Z. (1996). Scheduling programs with repetitive projects: A comparison of a simulated annealing, a genetic and a pair-wise swap algorithm. European Journal of Operational Research, 88, 124–138.

Appendix 1. The Cost Model and its Characteristics
Assumptions and Notation
For the program:
■ The program consists of N identical projects, each resulting in the completion of one unit of the product;
■ Each project, j, refers to one product unit; j = 1 through j = N. And "J" represents the set of all projects within the program;
■ A due date, Dj, is specified for each project according to the contract schedule;
■ A delivery date, Tj, eventually occurs for each project as a result of executing the program;
■ A penalty rate, pj, applies to each project completed beyond schedule, e.g., in $/day;
■ An incentive rate, gj, applies to each project completed ahead of schedule, e.g., in $/day.

For the projects:
■ Each project consists of a network of M activities;
■ Each activity, i, is performed by one unit of the specific resource type required for that activity, e.g., one worker, one crew; i = 1 through i = M. And "I" represents the set of all activities within a project;
■ A duration, ti, applies to each activity when performed for the first time (by one unit of its associated resource type);
■ A learning curve exponent, bi, corresponds to the learning rate, ri, of resource type i (bi = log ri / log 2);
■ A resource cost rate, ki, applies to each resource unit when it is in use, e.g., in $/day;
■ Each resource type has an associated hiring cost, hi, e.g., in $/crew;
■ Each resource type has an associated firing cost, fi, e.g., in $/crew;
■ Each resource type is available in a certain quantity, e.g., number of crews, ci, at the beginning of the program (ci = 0 through ci = N), and must be returned to that level at the end of the program.


[Figure 11. Percent Error in Min TC – Contract Schedule A (120/180/240). Error in minimum total cost (%) versus daily penalty ($) for the Scnt (1–30), Tgnt (0.80), Tgnt (0.85), and Tgnt (0.90) curves.]

For the cost functions and the optimization process:
■ x represents the number of (equal/nearly equal) parallel sequences under consideration, x = 1 through x = N;
■ Tj(x) represents the delivery date of project j when using x parallel sequences;
■ RC(x) represents the resource utilization cost when using x parallel sequences;
■ HC(x) represents the hiring cost when using x parallel sequences;
■ FC(x) represents the firing cost when using x parallel sequences;
■ PC(x) represents the penalty cost when using x parallel sequences;
■ IC(x) represents the incentive "cost" when using x parallel sequences;
■ TC(x) represents the total cost when using x parallel sequences;
■ [N/x]+ represents the next integer value of N/x, whenever N/x is not an integer.

The Cost Model

TC(x) = RC(x) + HC(x) + FC(x) + PC(x) − IC(x)

where:

RC(x) = x · Σ(j=1 to N/x) Σ(i ∈ I) ki ti j^bi,  if N/x is an integer

RC(x) = (x − [N/x]+·x + N) · Σ(j=1 to [N/x]+) Σ(i ∈ I) ki ti j^bi
        + ([N/x]+·x − N) · Σ(j=1 to [N/x]+ − 1) Σ(i ∈ I) ki ti j^bi,  if N/x is not an integer

HC(x) = Σ(i ∈ I) hi · Max{x − ci, 0}

FC(x) = Σ(i ∈ I) fi · Max{x − ci, 0}

PC(x) = Σ(j ∈ J: Tj(x) > Dj) pj · [Tj(x) − Dj]

IC(x) = Σ(j ∈ J: Tj(x) < Dj) gj · [Dj − Tj(x)]
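To make the model concrete, here is a minimal Python sketch of these cost functions (an illustration under the stated assumptions, not code from the paper; the delivery dates Tj(x) are assumed to have been computed separately, e.g., with a learning-adjusted CPM):

import math

def total_cost(x, N, delivery_dates, due_dates, activities, penalties, incentives):
    # activities: list of (k_i, t_i, r_i, h_i, f_i, c_i) tuples, one per activity/resource type.
    # delivery_dates, due_dates, penalties, incentives: lists indexed by project j = 0..N-1,
    # where delivery_dates already reflects the use of x parallel sequences.
    def series_cost(n_projects):
        # Resource cost of one sequence containing n_projects repetitions.
        return sum(k * t * j ** (math.log(r) / math.log(2))
                   for j in range(1, n_projects + 1)
                   for (k, t, r, _h, _f, _c) in activities)

    q, rem = divmod(N, x)
    if rem == 0:
        rc = x * series_cost(q)
    else:
        long_seqs = rem          # sequences holding [N/x]+ projects
        short_seqs = x - rem     # sequences holding [N/x]+ - 1 projects
        rc = long_seqs * series_cost(q + 1) + short_seqs * series_cost(q)
    hc = sum(h * max(x - c, 0) for (_k, _t, _r, h, _f, c) in activities)
    fc = sum(f * max(x - c, 0) for (_k, _t, _r, _h, f, c) in activities)
    pc = sum(p * (T - D) for T, D, p in zip(delivery_dates, due_dates, penalties) if T > D)
    ic = sum(g * (D - T) for T, D, g in zip(delivery_dates, due_dates, incentives) if T < D)
    return rc + hc + fc + pc - ic

The delivery dates themselves come from the critical-path calculations discussed in the body of the paper, so this function only assembles the cost terms once those dates are known.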

[Figure 12. Percent Error in Min TC – Contract Schedule B (180/270/360). Error in minimum total cost (%) versus daily penalty ($) for the Scnt (1–30), Tgnt (0.80), Tgnt (0.85), and Tgnt (0.90) curves.]

Jean-Pierre Amor, PhD, is associate professor of decision sciences in the School of Business Administration at the University of San Diego. He received his PhD in operations research from the University of California at Los Angeles. During his prior career with the U.S. Air Force, Colonel Amor was commander of a high-technology laboratory, which managed research and development projects associated with the Strategic Defense Initiative. While in the Air Force, Amor also planned and analyzed military operations at the Pentagon, NATO, and in Southeast Asia. At the university, he teaches courses in operations management and management science. His current research interests include project management, work in the 21st century, and complexity issues in organizations.

[Figure 14. Percent Error in Min TC – Contract Schedule D (360/540/720). Error in minimum total cost (%) versus daily penalty ($) for the Scnt (1–30), Tgnt (0.80), Tgnt (0.85), and Tgnt (0.90) curves.]

Characteristics of the Model
The total cost of the program, TC(x), which is to be minimized by choice of an appropriate number of (equal/nearly equal) parallel sequences, x, is simply the sum of the resource utilization, hiring, firing, and penalty costs minus the value of any incentives resulting from early project completions. Although there is but a single variable to optimize, x, the cost functions are relatively complex.

The resource utilization cost function, RC(x), has one of two forms, depending on whether or not the sequences are of equal length. In both forms, the inside summation represents the composite learning curve of the M activities, and the variable of interest, x, appears in several locations, including the upper limit of the outside summation. The hiring and firing cost functions are essentially linear.

The penalty and incentive cost functions are perhaps the most complex, because they depend in two ways on the delivery date, Tj(x), of each late/early project. First, in the summation limits, Tj(x) determines whether or not a project is included in the calculations; second, in the individual terms, Tj(x) helps determine the size of the penalty/incentive for the included projects.

Furthermore, a project's delivery date is itself based on that project's critical path as well as on the critical paths of all the projects preceding it in the (parallel) sequence to which it belongs. Hence, ultimately, Tj(x) depends on the project composite learning curve. Therefore, any analytical optimization clearly is out of the question and the search process must be numerical.
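That numerical search can be sketched in a few lines (an illustration only; the total-cost evaluation is a placeholder for the procedures discussed above):

def optimal_sequences(N, evaluate_total_cost):
    # Enumerate the single decision variable x and keep the cheapest option.
    # evaluate_total_cost(x) is assumed to return TC(x) for x parallel sequences.
    best_x = min(range(1, N + 1), key=evaluate_total_cost)
    return best_x, evaluate_total_cost(best_x)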

[Figure 13. Percent Error in Min TC – Contract Schedule C (270/405/540). Error in minimum total cost (%) versus daily penalty ($) for the Scnt (1–30), Tgnt (0.80), Tgnt (0.85), and Tgnt (0.90) curves.]



A Measure of Software Development Risk

James J. Jiang, University of Central Florida, College of Business Administration, Department of Management Information Systems, P.O. Box 160000, Orlando, FL 32816–1400 USA

Gary Klein, The University of Colorado at Colorado Springs, College of Business and Administration, 1420 Austin Bluffs Parkway, Colorado Springs, CO 80933–7150 USA

T. Selwyn Ellis, Louisiana Tech University, College of Administration and Business, Department of Computer Information Systems, Ruston, LA 71272–0001 USA

Abstract
Risks to software development are present throughout the creation of information systems (IS). The ability of researchers and practitioners to consider risk within their models and project management methods has been hampered by the lack of a rigorously tested instrument to measure risk properties. An instrument to measure software development risk based on properties described in the literature is proposed. Survey data from 152 IS project participants were collected to examine the reliability and validity of the instrument, both of which appear strong. The instrument then was used to empirically demonstrate a strong anticipated relationship between risk and project success.

Keywords: information systems implementation; risk analysis; survey research

©2002 by the Project Management Institute
2002, Vol. 33, No. 3, 30–41
8756–9728/02/$10.00 per article + $0.50 per page

The development of information technology (IT)-related projects has become a critical aspect of most businesses because of organizational dependence on computer-based systems to remain competitive. Even with the widespread use of tools such as prototyping, data modeling, structured design, computer-assisted software engineering (CASE), and other project management tools, software development still suffers extremely high failure rates (Meyer, 1998). In a recent study of 100 companies, only 37% of major IT projects were completed on time and only 42% were completed on budget (Gordon, 1999).

Data from The Standish Group International suggests that the problem is not going away. Results from a survey of more than 7,000 IT projects show that in 1998, 40% were canceled prior to completion and more than 45% of the projects were late or over budget. Leading the way in late or over-budget projects is the government sector, with more than half of its projects falling into this category. These failed projects cost U.S. companies and government agencies $145 billion annually (Gordon, 1999). Government agencies completed only 18% of all IT projects on time and on budget (Davis & Wilder, 1998).

In a recent survey of 150 IT managers, poor project management was cited as the leading reason for IT project failure (Davis & Wilder, 1998). Risk management can have a positive impact on the selection, scope, schedules, and budgets of projects (Schwalbe, 2000). Two areas of risk management are risk identification and risk quantification. Risk identification has been the topic of many research endeavors (Anderson & Narasimhan, 1979; Balloun, Jiang, & Klein, 1996; Barki, Rivard, & Talbot, 1993; Boehm, 1989; Cafasso, 1994; McFarlan, 1981; Nidumolu, 1995; Zmud, 1980). Given the volume of research addressing the risks associated with software development projects, risk identification has been properly considered.

Risk quantification involves evaluating risks and satisfactorily measuring the risks associated with a project. Very little progress in the way of theoretical development has been made in this area. Barki, Rivard, and Talbot (1993) made an early effort at the development of an instrument to measure software development risk. The instrument, which consisted of 34 variables and 144 items, was empirically tested using 120 projects. This was an important step toward current risk management research. With the availability of a reliable and valid instrument to measure software development risks, the area of project management could be strengthened and expanded. In this study, the authors tested and refined previous software risk measurement constructs. A short-form risk measurement has been proposed and empirically tested with 152 projects. The results indicate a negative relationship between the software risks and project success.

Software Development Risk Literature
According to McFarlan (1981), failure to assess individual project risk is a major source of the information systems (IS) development problem. Many researchers have attempted to identify the various risk variables threatening successful software development. Anderson and Narasimhan (1979) identified eight such factors: unwilling users; multiple implementers; turnover among users, developers, or sponsors; inability to specify requirements; inability to cushion impact on others; lack of management support; lack of development experience; and technical or cost-effectiveness problems. Cafasso (1994) identified user involvement, executive management support, proper planning, realistic expectations, and a clear statement of requirements as the top five factors influencing project success. An additional set of factors including personnel change, technological newness, technological change, top management support, team expertise, novelty of applications, and user involvement significantly affect system development success (Jiang, Klein, & Pick, 1995; Pressman, 1992; Saleem, 1996; Zmud, 1980).

Although many risks have been identified, their impacts on performance have not been empirically tested. The authors suspect two critical reasons for this lack of empirical evidence in the literature. First, development risks have been conceptualized in a variety of ways and lack a sound measurement construct (Boehm, 1989; Charette, 1989). Second, development risks, similar to system success measures, are elusive to define. Different researchers have addressed different aspects of system success, making comparisons difficult and the prospect of building a cumulative tradition for risk research similarly elusive (Barki, Rivard, & Talbot, 1993). Without such measurement, it is not surprising that one has difficulty finding reports of software risk effects in the literature.

Recognizing the importance of a risk measurement construct, Barki, Rivard, and Talbot (1993) proposed an initial measure for software development risks. Based upon a comprehensive review of the IS risk literature, a 144-item software risk measurement instrument pertaining to various characteristics of a software development project was developed. The 11 multiple-item risk dimensions include:
■ Application complexity;
■ Technical complexity/acquisition;
■ Extent of changes;
■ Resources insufficiency;
■ Application size;
■ Team’s lack of expertise with task;
■ Team’s lack of general expertise;
■ Lack of user experience;
■ Lack of user support;
■ Intensity of conflicts;
■ Lack of clarity of role definitions.

Reliability and structure validation were examined through traditional factor analysis.

This type of measurement can be applied in a variety of ways. In application settings, it may be used by IS project managers to diagnose project risks before a project starts. It also may be used during the software development process to manage the potential risks more effectively. Similarly, a risk scale may be used in organizations to make selections between contending software development projects. Researchers may choose to use these measures to better understand how risk factors influence the success of IS. More generally, it is likely to be used by researchers who are interested in understanding the diffusion of various software development risks in organizations and determining their impacts on various project outcomes, e.g., system performance, information quality, and user satisfaction. The relationships between software risks, project management practices, and project success can be established clearly.

Given the potential wide usage of a software risk measure by both IS practitioners and academicians, it is important to conduct studies that further test the psychometric properties of this instrument and examine its relationship to project success. Unfortunately, no major study was found that re-examined this proposed instrument. In addition, one limitation of applying this instrument is its relatively large number of items. An extremely large sample is needed to establish meaningful reliability and validity. The authors believe that a shorter version of the original scale may be more applicable.

Research Methodology
In an attempt to respond to these issues, the authors developed a short-form instrument for measuring software development risk by modifying the original measurement model (Barki, Rivard, & Talbot, 1993). Factor analysis was selected to examine the dimensions of the original construct (five dimensions were found), but the relationships between these factors were not clear. Overall software development risk is an aggregate of various undesirable events (Nidumolu, 1995). While each of these dimensions is distinct, the overall software development risk is a combination—and interaction—of the various dimensions. Are these dimensions independent of each other? In other words, is software project risk a higher-order phenomenon evidenced through the extent of risks across multiple dimensions?

Such a measure is valid only if there is a link between risk and project success. Researchers and practitioners believe that there is a significant relationship between software risk and project success, but there is little empirical evidence to support this conventional wisdom. In fact, Barki, Rivard, and Talbot (1993) specifically call for studies examining the relationships between the risk and project success. Therefore, another question addressed was whether there exists a significant relationship between the risk and project success. The authors examined two issues:
■ Are the proposed measurements of various software development risk factors (first-order) independent of each other and, thus, must be measured as such? Or, can an integrated overall software project risk (second-order) be obtained in the proposed measurement?
■ Are software development risk factors significantly associated with project success?



Sampling and Data Collection
In the pilot study, 300 questionnaires were mailed to IS managers in the Midwestern region of the United States. The items were derived from the set provided by Barki, Rivard, and Talbot (1993) and reduced to accommodate the factors identified in their study. Self-addressed return envelopes for each questionnaire were enclosed, and a total of 67 responses were obtained. This data was analyzed with factor analysis to determine agreement with the expected structure. The questionnaire was modified by removing items not identified with a particular factor. The modified measurement was reviewed by five IS managers and three IS researchers for clarity and appropriateness. Minor changes were made for instruction clarity. Due to the modification of the questionnaire, these 67 responses were not included in further analysis.

In the main study, the revised questionnaires were mailed to 500 IS project managers in the United States who were members of the Project Management Institute (PMI). The sample was chosen because IS members of PMI typically have expertise on software project management and represent a variety of organizational settings. PMI membership has been used widely in project management research. Self-addressed return envelopes for each questionnaire were enclosed. All the respondents were assured that their responses would be kept confidential. A total of 86 questionnaires were returned. To increase the response rate, a second set of 500 questionnaires was sent three months later. A total of 66 questionnaires were returned in this round, for a combined total of 152 responses. To examine the potential bias between the first-round respondents and the second-round respondents, Chi-square statistical tests were conducted on the demographic variables. No significant difference on the respondents’ demographic backgrounds was found. These two rounds of respondents were combined for data analysis as described in Appendix 1. A summary of the demographic characteristics of the sample is presented in Table 1. The demographic characteristics of this sample were similar to other studies sampling PMI members (Larson, 1997).

Short Form of Software Development Risk Assessment
The remaining items contained in the applied instrument are shown in Table 2. The major dimensions of the original instrument with multiple items were included. The resulting questionnaire consisted of 45 items. The questionnaire asks respondents about the presence of complexity and problems in their most recently completed IS projects as understood in the early stages of development. Each item was presented using a five-point scale. All the items were scaled so that the greater the score, the greater the presence of the risk in question.

Exploratory factor analysis is appropriate when the number and nature of the underlying measurement structures are not certain. Therefore, an exploratory factor analysis first was conducted as briefly described in Appendix 1. The results yielded six risk dimensions (technical acquisition, project size, role definition, user experience, user support, and team expertise), which were retained for confirmatory factor analysis (CFA) to further examine dimensionality, reliability, and validity (see Appendix 1).
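For readers who want to reproduce this style of item screening, a minimal sketch follows (an illustration only, not the authors' analysis: it uses unrotated principal components from scikit-learn, whereas the original study would typically have applied a rotation, and the 0.45 salience cutoff is the one described in Appendix 1):

```python
# Minimal sketch of loading-based item screening, assuming `responses` is a
# respondents x items array of 5-point scores.
import numpy as np
from sklearn.decomposition import PCA

def screen_items(responses, n_factors, cutoff=0.45):
    pca = PCA(n_components=n_factors)
    pca.fit(responses)
    # Loadings: eigenvectors scaled by the square roots of the eigenvalues.
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    kept = []
    for item, item_loadings in enumerate(np.abs(loadings)):
        # Keep an item only if it loads at or above the cutoff on exactly
        # one factor (drop cross-loading and weakly loading items).
        if (item_loadings >= cutoff).sum() == 1:
            kept.append(item)
    return kept, loadings
```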

Project Success Measurement
Project success often is measured in terms of its efficiency, effectiveness, and timeliness (Henderson & Lee, 1992). Efficiency is the ratio of outputs to inputs, the subjective perception of efficiency in team operations, and the adherence to allocated resources, e.g., time and budget. Effectiveness is the quality of the system produced. The instrument used in this study to measure project success was adopted from Henderson and Lee (1992). The items of project success include meeting project goals, conducting the required quantity and quality of work, adhering to schedules and budget, and speeding operations. Project success is the focus of this study because documented reports on software development failures often are related to delay of schedules, budget overrun, and lack of system quality.

Self-evaluation of performance has been widely adopted in the area of organizational behavior. Bandura’s (1986) work on self-efficacy suggests that self-appraisals are a valid predictor of success. When detailed measurements are made, efficacy assessments and success are highly correlated (Schunk, 1989). Individuals often are the best judges of their own project success. In addition, by becoming involved in a project, they may become more motivated to improve their project success (Campbell & Lee, 1988). Based upon the theoretical and empirical research in areas of organizational behavior, the authors expect that IS project managers’ perceptions of their project success will be a useful indicator of IS project success. Specifics may be found in Appendix 1.

Assessment Question 1. Are the proposed measurements of various software development risk factors (first-order) independent of each other and should thus be measured as such? Or, can an integrated overall software project risk (second-order) be obtained in the proposed measurement?

The initial measurement model for software development risks (see Appendix 1) implies that technical acquisition, project size, user support, user experience, team experience on application, and team members’ role definitions are associated but not governed by a common phenomenon. On the other hand, an alternative model posits a second-order factor model governing the correlations among the six factors. The theoretical interpretation of this second-order factor is an overall trait of software development risks. While each of these dimensions is distinct, overall software risk is an integration of each distinct risk factor. Previous research notes this operational perspective represents a theoretically strong basis for capturing complex software risk measures (Boehm, 1989; Barki, Rivard, & Talbot, 1993).

This second-order model explains the covariance among first-order factors in a more parsimonious way, i.e., one that requires fewer degrees of freedom. Even when the second-order model is able to explain the factor covariance, the goodness-of-fit of the second-order model can never be better than the corresponding first-order model. In this sense, the first-order model provides a target or optimum fit for the higher-order model. However, because the second-order model represents a more parsimonious representation of observed covariances, it should be accepted over the baseline as a “true” representation of model structure.

Assessment Question 2. Are software development risks significantly associated with project success?

A correlation between software development risk and project success was found to be –0.22 using a confirmatory factor analysis model (see Appendix 1). This negative relationship was anticipated due to the diametrically arranged adjectives between the software development risks and project success dimensions. Essentially, it confirms that software risk does dampen project success. In fact, the strength of the correlation shows that risk should be measured and controlled to improve success and that the measure proposed is a good indicator of total risk. The results confirm the authors’ anticipation that software development risk adversely affects the outcomes of a project.


Characteristics (First Round / Second Round / Total)

Position
  IS executive: 8 / 6 / 14
  IS manager: 12 / 10 / 22
  IS project leader: 44 / 39 / 83
  Other IS professionals: 18 / 5 / 23

Work experience
  1–5 years: 5 / 3 / 8
  6–10 years: 12 / 9 / 21
  11–15 years: 20 / 5 / 25
  16–20 years: 20 / 9 / 29
  21 or above: 25 / 33 / 58

Gender
  Male: 65 / 48 / 113
  Female: 18 / 12 / 30

Organization size (number of employees)
  Under 100 employees: 7 / 5 / 12
  100–500 employees: 12 / 13 / 25
  500–1,000 employees: 7 / 6 / 13
  1,000–2,500 employees: 10 / 6 / 16
  2,500–5,000 employees: 6 / 3 / 9
  5,000–10,000 employees: 22 / 9 / 31
  10,000 or more employees: 20 / 18 / 38

Average number of members of IS project teams
  2–3 members: 1 / 1 / 2
  4–5 members: 13 / 3 / 16
  6–10 members: 22 / 21 / 43
  11–15 members: 12 / 9 / 21
  16–20 members: 15 / 14 / 29
  21–25 members: 18 / 13 / 31

First round sample size = 86; Second round sample size = 66; Total sample size = 152

Table 1. Sample Demographics


Technological acquisition
1. Need for new hardware
2. Need for new software
3. Large number of hardware suppliers
4. Large number of software suppliers

Application size
5. Large number of people on team
6. Large number of different stakeholders on team, e.g., information systems staff, users, consultants, suppliers, customers
7. Large project size
8. Large number of users will be using this system
9. Large number of hierarchical levels occupied by users who will be using the system, e.g., office clerks, supervisors

Lack of team’s general expertise
10. Ability to work with uncertain objective
11. Ability to work with top management
12. Ability to work effectively as a team
13. Ability to understand human implications of a new system
14. Ability to carry out tasks effectively

Lack of team’s expertise with the task
15. In-depth knowledge of the functioning of user department
16. Overall knowledge of organizational operations
17. Overall administrative experience and skill
18. Expertise in the specific application area of the system
19. Familiarity with this type of application

Lack of user support
20. Users have a negative opinion about the system meeting their needs
21. Users are not enthusiastic about the project
22. Users are not an integral part of the development team
23. Users are not available to answer the questions
24. Users are not ready to accept the changes the system will entail
25. Users slowly respond to development team request
26. Users have negative attitudes regarding the use of computers in their work
27. Users are not actively participating in requirement definition

Intensity of conflicts
28. Great intensity of conflicts among team members
29. Great intensity of conflicts between users and team members

Extent of changes brought
30. The system requires a large number of users’ tasks to be modified
31. The system will lead to major changes in the organization

Resources insufficient
32. In order to develop and implement the system, the scheduled number of people per day is insufficient
33. In order to develop and implement the system, the dollar budget provided is insufficient

Lack of clarity of role definitions
34. Role of each member of the team is not clearly defined
35. Role of each person involved in the project is not clearly defined
36. Communications between those involved in the project are unpleasant

Application complexity
37. Technical complexity, e.g., hardware, software, database
38. Large number of links to existing systems
39. Large number of links to future systems

Lack of user experience
40. Excessive requirements specifications
41. Users are not very familiar with system development tasks
42. Users have little experience with the activities to be supported by the future applications
43. Users are not very familiar with this type of application
44. Users are not aware of the importance of their roles in successfully completing the project
45. Users are not familiar with data processing as a working tool

Table 2. Initial Items for Software Development Risks

Conclusion and Implications
The purpose of this study was to examine software development risk measurement in light of the importance of project risk management on system success. The relationship of software project development risk to the traditional project success also was examined. This was accomplished by administering both the proposed software risk and project success survey instruments to the same sample of IS project leaders and then deriving unique dimensions of software development risks, which contribute to the prediction of project success. It was found that the dimensions are integrated to a higher order of overall risk, and risk is adversely related to project success.

This study represents an important step in the development of valid and reliable measures of software development risk. It contributes to the IS literature with a measure that provides both IS researchers and practitioners with more specific information concerning software development risk effects on system success. In particular, it suggests that IS project leaders should obtain measures of such project risk dimensions as project complexity, user experience, user support, and project team expertise, which may be overlooked when beginning a project. In practice, the importance of these dimensions points to the need for stronger project management emphasis on estimating project development risks and a demonstration of how specific risks may affect the outcomes of project delivery, e.g., budget, schedule, and system quality.

Although the results of this study provide insight into the relationship of software development risks and project success, one obvious limitation is the single dimension of system success. Additionally, only one stakeholder is considered, the IS project manager. Examination of other interest groups may indicate the presence of other risk categories that would provide a more comprehensive picture of the development process, including the group interaction items dropped during analysis. The proposed short version of software development risk assessment may not be comprehensive enough to capture other dimensions of risk, especially those that could best be measured directly, such as budget size and number of impacted existing systems. Nevertheless, the proposed short form of risk measurement provides a starting point to establish such a comprehensive model.

References

Anderson, J.C. (1987). An approach for confirmatory measurement and structural equation modeling of organizational properties. Management Science, 33 (4), 525–541.
Anderson, J.C., & Gerbing, D.W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103 (3), 411–423.
Anderson, R.E., Hair, J.F., & Tatham, R.T. (1992). Multivariate data analysis with readings (3rd ed.). New York: MacMillan Publishing.
Anderson, J., & Narasimhan, R. (1979). Assessing implementation risk: A methodological approach. Management Science, 25 (6), 512–521.
Bagozzi, R.P., Yi, Y., & Phillips, L.W. (1991). Assessing construct validity in organizational research. Administrative Science Quarterly, 36 (3), 421–458.
Balloun, J., Jiang, J., & Klein, G. (1996). Ranking of system implementation success factors. Project Management Journal, 27 (4), 50–55.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.
Barki, H., Rivard, S., & Talbot, J. (1993). Toward an assessment of software development risk. Journal of Management Information Systems, 10 (2), 203–225.
Boehm, B.W. (1989). Software risk management. Washington, DC: IEEE Computer Society Press.
Bollen, K.A. (1989). Structural equations with latent variables. New York: John Wiley & Sons Inc.
Cafasso, R. (1994). Few IS projects come in on time, on budget. Computerworld, 28 (50), 20–21.
Campbell, D.T., & Cook, T.D. (1979). Quasi-experimentation. Boston: Houghton Mifflin Co.
Campbell, J.P., Ghiselli, E.E., & Zedeck, S. (1981). Measurement theory for behavioral science. San Francisco: W.H. Freeman.
Campbell, J.P., & Lee, C. (1988). Self-appraisal in performance evaluation: Development versus evaluation. Academy of Management Review, 13 (2), 302–314.
Charette, R.N. (1989). Software engineering risk analysis and management. New York: McGraw-Hill.
Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
Davis, B., & Wilder, C. (1998). False starts, strong finishes. InformationWeek, n711, 41–53.
Fornell, C., & Larcker, D.F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, 39–50.
Gordon, P. (1999). To err is human, to estimate, divine. InformationWeek, n711, 65–72.
Hayes, F. (1997). Managing user expectation. Computerworld, 31 (44), 8–9.
Henderson, J.C., & Lee, S. (1992). Managing I/S design teams: A control theory perspective. Management Science, 38 (6), 379–387.
Hocevar, D., & Marsh, H. (1985). Application of confirmatory factor analysis to the study of self-concept: First and higher order factor models and their invariance across groups. Psychological Bulletin, 97 (3), 562–582.
Jiang, J., Klein, G., & Pick, R. (1995). Are you sure you want to use that data in your study? IEEE Transactions on Systems, Man, and Cybernetics, 25 (2), 378–380.
Joreskog, K.G. (1993). Testing structural equation models. In K.A. Bollen and J.S. Long (Eds.). Newbury Park, CA: Sage Publications.
Joreskog, K.G., & Sorbom, D. (1989). LISREL 7: A guide to the program and applications (2nd ed.). Chicago: SPSS Inc.
Larson, E. (1997). Partnering on construction projects: A study of the relationship between partnering activities and project success. IEEE Transactions on Engineering Management, 44 (2), 188–195.
Long, J.S. (1983). Confirmatory factor analysis: An introduction to LISREL. Sage University Paper Series on Quantitative Applications in the Social Sciences. Thousand Oaks, CA: Sage Publications.
MacCallum, R.C. (1986). Specification searches in covariance structural modeling. Psychological Bulletin, 100 (1), 107–120.
McFarlan, F.W. (1981). Portfolio approach to information systems. Harvard Business Review, 59 (5), 142–150.
Meyer, R.L. (1998). Avoiding the risks in large software system acquisitions. Information Strategy, 14 (4), 18–33.
Nidumolu, S. (1995). The effect of coordination and uncertainty on software project performance: Residual performance risk as an intervening variable. Information Systems Research, 6 (3), 191–219.
Pressman, R.S. (1992). Software engineering: A practitioner’s approach (3rd ed.). New York: McGraw-Hill.
Saleem, N. (1996). An empirical test of the contingency approach to user participation in information system development. Journal of Management Information Systems, 13 (1), 145–166.
Schunk, D.H. (1989). Self-efficacy and cognitive skill learning. In C. Ames & R. Ames (Eds.), Research on motivation in education (Vol. 3). San Diego: Academic Press.
Schwalbe, K. (2000). Information technology project management. Cambridge, MA: Course Technology.
Zmud, R.W. (1980). Management of large software development efforts. MIS Quarterly, 4 (1), 45–55.

Appendix 1
Determination of Factor Structure
First, in an exploratory factor analysis using principal component analysis (PCA), if a given item has a meaningful loading, i.e., a loading greater than 0.45, on more than one factor, that item is dropped. Likewise, items with less than a 0.45 loading were


Factor, loaded items, and loadings:

Technical acquisition
  1. Need for new hardware (0.64)
  2. Need for new software (0.65)
  3. Large number of hardware suppliers (0.66)
  4. Large number of software suppliers (0.65)

Project size
  6. Large number of different stakeholders on team (0.71)
  7. Large project size (0.67)
  8. Large number of users will be using the system (0.77)
  9. Large number of hierarchical levels occupied by users who will be using this system (0.63)

Lack of clarity of role definition
  34. Role of each member of the team is not clearly defined (0.78)
  35. Role of each person involved in the project is not clearly defined (0.79)
  36. Communications between those involved in the project are unpleasant (0.77)

Lack of user experience on systems development
  41. Users are not very familiar with system development (0.67)
  42. Users have little experience with the activities to be supported by the future system (0.81)
  43. Users are not very familiar with this type of application (0.77)
  45. Users are not familiar with data processing as a work tool (0.71)

Lack of user support
  20. Users have a negative opinion about the system meeting their needs (0.77)
  21. Users are not enthusiastic about the project (0.79)
  23. Users are not available to answer the question (0.72)
  24. Users are not ready to accept the changes the system will entail (0.79)
  25. Users slowly respond to development team requests (0.69)

Lack of team expertise
  13. Ability to understand human implications of a new system (0.69)
  15. In-depth knowledge of the functioning of user department (0.70)
  16. Overall knowledge of organizational operation (0.75)
  17. Overall administrative experience and skill (0.75)
  18. Expertise in the specific application area of the system (0.71)

Table A3. Summary Results of Exploratory Factor Analysis

[Figure A1. Software Risk Measurement Model: a path diagram linking the six first-order risk factors (Technology Acquisition, Project Size, Lack of Team Experience, Lack of User Support, Lack of Role Definition, Lack of User Experience) to their retained items, with standardized loadings on the item paths and correlations among the factors; the loadings correspond to Table A4 and the factor correlations to Table A6.]


dropped. The variables remaining after the PCA are shown in Table A3. All the remaining items loaded as expected, except that the two a priori dimensions “the lack of team’s general expertise” and “lack of team’s expertise with the task” indicated that they should be combined. Four dimensions of the software development risk did not have a single item with a significant loading—including intensity of conflicts, extent of changes brought, resource insufficiency, and application complexity—and were not considered in further analysis. Application complexity may best be measured by direct observation rather than psychometric scales (Jiang, Klein, & Pick, 1995).

Subsequent confirmatory factor analysis (CFA) determined construct validity (MacCallum, 1986). The expectation was that each of the developed scales in Table A3 uniquely measures its associated factor and that the system of factors measures different aspects of software development risks. The Covariance Analysis of Linear Structural Equations (CALIS) procedure of Statistical Analysis Systems (SAS), version 6.12,

F1. Technological acquisition (0.77*); variance extracted estimate = 0.62; Cronbach alpha = 0.76
  Item 1:  standardized loading 0.80, t-value 8.74, item reliability 0.64
  Item 2:  standardized loading 0.77, t-value 8.45, item reliability 0.59

F2. Project size (0.79*); variance extracted estimate = 0.57; Cronbach alpha = 0.78
  Item 6:  standardized loading 0.62, t-value 7.19, item reliability 0.38
  Item 8:  standardized loading 0.79, t-value 9.56, item reliability 0.63
  Item 9:  standardized loading 0.83, t-value 10.04, item reliability 0.69

F3. Lack of team expertise (0.83*); variance extracted estimate = 0.62; Cronbach alpha = 0.80
  Item 15: standardized loading 0.78, t-value 9.75, item reliability 0.61
  Item 16: standardized loading 0.97, t-value 12.96, item reliability 0.94
  Item 17: standardized loading 0.56, t-value 6.62, item reliability 0.32

F4. Lack of user support (0.88*); variance extracted estimate = 0.66; Cronbach alpha = 0.88
  Item 20: standardized loading 0.84, t-value 11.38, item reliability 0.71
  Item 21: standardized loading 0.83, t-value 11.12, item reliability 0.69
  Item 24: standardized loading 0.84, t-value 11.36, item reliability 0.71
  Item 25: standardized loading 0.72, t-value 9.03, item reliability 0.52

F5. Lack of clarity of role definition (0.87*); variance extracted estimate = 0.70; Cronbach alpha = 0.85
  Item 34: standardized loading 0.97, t-value 14.15, item reliability 0.94
  Item 35: standardized loading 0.94, t-value 13.38, item reliability 0.88
  Item 36: standardized loading 0.54, t-value 6.58, item reliability 0.29

F6. Lack of user experience on system development (0.86*); variance extracted estimate = 0.71; Cronbach alpha = 0.85
  Item 42: standardized loading 0.85, t-value 11.32, item reliability 0.72
  Item 43: standardized loading 0.92, t-value 12.63, item reliability 0.85
  Item 45: standardized loading 0.67, t-value 8.25, item reliability 0.55

Note. * Indicates composite reliability of the corresponding construct.

Table A4. Properties of the Revised First-Order Software Development Risk Model

was utilized as the analytical tool for testing the measurement and structural equation models.

One important assumption of confirmatory factor modeling is multivariate normality. Because multivariate normality is difficult to test, it is recommended that univariate normality among variables be initially tested (Anderson, Hair, & Tatham, 1992). Such testing was accomplished through examination of the moments around the mean of each variate’s distribution (Bollen, 1989). Among the variables of this study, analysis of these statistics suggests no serious departures from univariate normality.

When a CFA model provides a reasonably good approximation to reality, it accounts for the observed relationships in the data set. Four fit indices are typically used to identify overall goodness of fit in a CFA:
■ Comparative fit index (CFI);
■ The Bentler-Bonett non-normed fit index (NNFI);
■ Adjusted goodness-of-fit index (AGFI);
■ The Chi-square divided by the degrees of freedom (< 3 preferred).

Values greater than 0.90 are desirable for CFI and NNFI, and AGFI should be equal to or greater than 0.80.
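These rules of thumb can be collected into a small screening helper (an illustrative sketch only; the cutoffs are the conventions quoted above, not output of any statistical package):

```python
def acceptable_fit(cfi, nnfi, agfi, chi_square, df):
    """Apply the rule-of-thumb cutoffs quoted above to a CFA solution."""
    return (cfi > 0.90 and nnfi > 0.90 and agfi >= 0.80
            and chi_square / df < 3)

# Using the first-order model figures reported later in this appendix
# (chi-square 192.54 on 137 degrees of freedom; CFI 0.95, NNFI 0.94,
# AGFI 0.81), acceptable_fit(0.95, 0.94, 0.81, 192.54, 137) returns True.
```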

The CFA measurement model is shown in Figure A1. The items correspond to those of Table A3. Table A4 provides a summary of the analysis. In the initial phase of model estimation, several items (7, 13, 17, 18, and 23) were deleted due to a significant cross-loading with other constructs. In the subsequent tests for discriminant validity (discussed in the following section), items 3 and 4 were deleted due to a lack of significant reliability. The final model, Chi-square/degrees of freedom = 1.41 < 3, AGFI = 0.81, CFI = 0.95, and NNFI = 0.94, indicates a good fit between the model and data. In addition, the significance of all parameter estimates, e.g., t-value > 3, shown in Table A4 supports the revised model.

A second-order factor model captures correlations among the first-order constructs. The efficacy of such a structure can be tested using a comparative methodology for higher-order factor models (Bollen, 1989; Joreskog, 1993). A second-order analysis also was conducted using a CFA over the six identified factors. The fit indices showed a good fit with AGFI = 0.81, CFI = 0.94, NNFI = 0.93, and Chi-square/degrees of freedom = 1.5. Further empirical support for acceptance of the higher-order factor structure is found in the significance of estimated parameters as well as the amount of variance explained by the structural equations. All structural equation parameters exhibit significantly high t-values (all > 3.18, p = 0.01). Specifically, the CFA paths between overall software risk and its underlying first-order dimensions are 0.61 for technical acquisition, 0.58 for project size, 0.69 for team expertise, 0.53 for user support, 0.44 for role definition, and 0.40 for user experience.

In the project success measurement model initial estimation, item 5 was deleted from the model due to a lack of significant loading. The final model indicated a good fit: AGFI = 0.87, CFI = 0.97, NNFI = 0.95, and Chi-square/degrees of freedom = 2.6. Internal consistency (reliability) was assessed using a measurement model and is reported in Table A5 (Joreskog & Sorbom, 1989). The composite reliability (0.92), Cronbach (1951) alpha (0.90), and variance extracted estimate (0.65 > 0.50) indicate strong internal consistency of this construct. In summary, the project success measure has strong validity and reliability.

It has been suggested that the efficacy of second-order models be assessed through examination of the target (T) coefficient [T = Chi-square (baseline model)/Chi-square (alternative model)] (Hocevar & Marsh, 1985). This coefficient has an upper bound of 1.0 with higher values implying


Table A5. Properties of the Project Success Measurement

Project success (composite reliability = 0.92); variance extracted estimate = 0.65; Cronbach alpha = 0.90
  P1. Meet project goals:    standardized loading 0.87, t-value 12.96, item reliability 0.76
  P2. Amount of work:        standardized loading 0.82, t-value 11.78, item reliability 0.67
  P3. Quality of work:       standardized loading 0.75, t-value 10.30, item reliability 0.56
  P4. Adhere to schedules:   standardized loading 0.88, t-value 13.14, item reliability 0.77
  P6. Speed of operations:   standardized loading 0.83, t-value 12.10, item reliability 0.69
  P7. Adhere to budgets:     standardized loading 0.68, t-value 9.09, item reliability 0.46

that the relationship among first-order factors is sufficiently captured by the higher-order factor. In this study, the observed Chi-square for the baseline model was 192.54 (degrees of freedom = 137; p = 0.001) and for the second-order factor model was 216 (degrees of freedom = 146; p = 0.0001). Adjusting for degrees of freedom, the normed value of Chi-square = 1.40 for the baseline model and Chi-square = 1.48 for the higher-order model, indicating good model fit and no evidence of over-fitting. The calculated target coefficient between the baseline and hypothesized model is a very high 0.95. This value suggests that the addition of the second-order effect does not significantly increase Chi-square.
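Laid out explicitly, and using only the figures quoted above, the normed values and their ratio are (the reported target coefficient of 0.95 corresponds to the ratio of the normed Chi-square values):

$$\frac{\chi^2_{\text{baseline}}}{df} = \frac{192.54}{137} \approx 1.40, \qquad \frac{\chi^2_{\text{second-order}}}{df} = \frac{216}{146} \approx 1.48, \qquad T \approx \frac{1.40}{1.48} \approx 0.95.$$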

Reliability, Convergent Validity, and Discriminant Validity. External validity refers to the extent to which the findings can be generalized to or across times, persons, and settings (Campbell & Cook, 1979). External validity of the findings is threatened if the sample itself is systematically biased, e.g., if


the responses were generally from more (or less) risky projects. The mean of software risk items (Table 2) ranged from 2.87 to 3.92, median from 2.67 to 4.00, skewness from –0.04 to 0.78, and kurtosis from –1.23 to 0.18. The responses had good distribution on project risks since the mean and median were similar, skewness was less than 2, and kurtosis was less than 5 (Campbell, Ghiselli, & Zedeck, 1981). Overall, project risk-related bias seemed unlikely because of the considerable variation shown on the distributional characteristics.

Additional threats to external validity occur if the sample showed other systematic bias in the relation of demographics to the project measures. An analysis of variance (ANOVA) was conducted by using project success (as the dependent variable) against each demographic category (independent variables). Results did not indicate any significant relationships. Similar results held for each software risk factor as the dependent variable. In summary, no significant threats to external validity were discovered in the data.

Item reliability (indicator reliability) is defined as the square of the correlation between a latent factor and that item. This reliability indicates the percent of variation in the indicator that is explained by the factor that it is supposed to measure (Long, 1983). The reliability of an indicator can be computed by squaring the standardized factor loading obtained in the analysis model. However, this test is quite conservative; very often item reliability will be below 0.36, even when reliabilities are acceptable (Fornell & Larcker, 1981). For low reliability items in any factor, the composite reliability for the factor must be examined to determine whether the low reliability items should be eliminated. A 0.70 acceptable level of composite reliability is strongly desired. The item reliabilities are shown in Table A4; the composite reliabilities there all are 0.77 or greater.
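As a worked example using the first indicator of the technological acquisition factor in Table A4 (standardized loading 0.80):

$$\text{item reliability} = \lambda^2 = 0.80^2 = 0.64,$$

which matches the value reported for that item in Table A4.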

Composite reliability represents the internal consistency of the items measuring a given factor. Computationally, composite reliability is the square of the sum of the standardized factor loadings for that factor divided by the sum of the error variance of the individual indicator variables and the square of the sum of the standardized factor loadings (Fornell & Larcker, 1981). The composite reliability for each risk factor is shown in Table A4. In addition, the Cronbach alpha scores were calculated and are reproduced in Table A4. The Cronbach alpha and composite reliability scores for each construct are acceptable.
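Written out, the computation described above (a standard rendering of the Fornell and Larcker (1981) formula, with lambda_i the standardized loadings and theta_i the corresponding indicator error variances) is:

$$\rho_c = \frac{\left(\sum_i \lambda_i\right)^2}{\left(\sum_i \lambda_i\right)^2 + \sum_i \theta_i}.$$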

If all factor loadings for the indicators measuring the same construct are statistically significant (greater than twice their standard error), there is evidence supporting convergent validity of those indicators (Anderson & Gerbing, 1988). The fact that all t-tests are significant shows that all indicators are effectively measuring the same construct. The t-values for each indicator loading are shown in Table A4. The constructs demonstrated acceptable convergent validity.

Evidence of discriminant validity is obtained through the comparison of an unconstrained model that estimates the correlation between a pair of constructs and a constrained model that fixes the value of the construct correlation to unity. The resulting Chi-square difference also is a Chi-square variate with

Test (construct correlation / T-value / Chi-square difference)

Technology acquisition (F1)
  F2: 0.50 / 5.54* / 53.39*
  F3: 0.23 / 2.32* / 73.25*
  F4: 0.27 / 2.70* / 71.54*
  F5: 0.32 / 3.37* / 70.64*
  F6: 0.22 / 2.16* / 74.07*

Project size (F2)
  F3: 0.31 / 3.30* / 103.89*
  F4: 0.29 / 3.01* / 103.69*
  F5: 0.20 / 2.13* / 118.65*
  F6: 0.09 / 0.89 / 124.44*

Team expertise (F3)
  F4: 0.43 / 5.36* / 124.56*
  F5: 0.22 / 2.47* / 147.40*
  F6: 0.46 / 5.78* / 123.01*

User support (F4)
  F5: 0.43 / 5.41* / 225.81*
  F6: 0.42 / 5.08* / 148.48*

Role definition (F5)
  F6: 0.30 / 3.40* / 170.62*

Note. * Indicates significant at p < 0.05 level.

Table A6. Discriminant Validity Tests: Software Development Risk Constructs

James J. Jiang, PhD, is a professor of management information systems (IS) at the University of Central Florida. He obtained his PhD in IS from the University of Cincinnati. His research interests include IS project management and IS personnel management. He has published more than 70 academic articles in journals such as the IEEE Transactions on Systems, Man, and Cybernetics, Decision Support Systems, IEEE Transactions on Engineering Management, Decision Sciences, Journal of Management Information Systems (JMIS), Communications of the ACM, Information & Management, Journal of Systems & Software, Data Base, and Project Management Journal. He is a member of the Institute of Electronic and Electrical Engineering, Association for Computing Machinery, Association for Information Systems, and Decision Science Institute.

Gary Klein, PhD, is the Couger Professor of information systems (IS) at the University of Colorado at Colorado Springs. He obtained his PhD in management science from Purdue University. He served with Arthur Andersen & Company in Kansas City and was director of the IS department for a regional financial institution. His interests include project management, knowledge management, system development, and mathematical modeling, with more than 60 academic publications in these areas. He has made professional presentations on decision support systems in the United States and Japan, where he served as a guest professor to Kwansei Gakuin University. He is a member of the Institute of Electronic and Electrical Engineering, Association for Computing Machinery, Institute for Operations Research and the Management Sciences, Society of Competitive Intelligence Professionals, Decision Science Institute, and Project Management Institute.

T. Selwyn Ellis is an assistant professor of computer information systems (IS) at Louisiana Tech University. He earned a BS in computer science and mathematics and an MBA from Mississippi College and a DBA from Louisiana Tech University. He has publications in Journal of Computer Information Systems, Data Base, Computer Personnel, and others. His current research interests include investments in technology, electronic commerce, and other IS/information services topics related to corporate strategy.

one degree of freedom. A significant difference means the unconstrained model is a better fit. This condition supports the existence of discriminant validity (Anderson, 1987; Bagozzi, Yi, & Phillips, 1991). Table A6 contains the results of the pairwise difference tests among constructs. As shown, all differences are significant at the p < 0.001 level. Hence, each scale seems to capture a construct that is significantly unique from other constructs, providing evidence of discriminant validity.
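Stated compactly (standard chi-square difference logic; the 3.84 critical value is the conventional 0.05 cutoff for one degree of freedom and is not a figure taken from the paper):

$$\Delta\chi^2 = \chi^2_{\text{constrained}\,(\phi_{ij}=1)} - \chi^2_{\text{unconstrained}}, \qquad \Delta df = 1,$$

with discriminant validity between constructs i and j supported when the difference exceeds the relevant critical value (e.g., 3.84 at the 0.05 level).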


A Hybrid Intelligent System to Facilitate Information System Project Management Activities

Hamid R. Nemati, Department of Information Systems and Operations Management, The University of North Carolina at Greensboro, 440 Bryan Building, P.O. Box 26165, Greensboro, NC 27014–6165 USA

Dewey W. Todd, Department of Decision Sciences, Georgia State University, University Plaza, Atlanta, GA 30303–3083 USA

Paul D. Brown, Department of Decision Sciences, Georgia State University, University Plaza, Atlanta, GA 30303–3083 USA

In most large companies, management has long recognized a deficiency in project management and quality assurance (QA) concerning information systems development and maintenance projects. Traditional “seat of your pants” management practices and scattered innovative project management techniques have been limiting at times. This paper provides a practical, prototypical artificial intelligence (AI)-based approach to solving these ill-structured problems. It covers two key areas of need for most large businesses and their information technology (IT) organizations: a hybrid network of systems proposed to facilitate project management and IT-associated activities of project estimation, tracking, development, testing and implementation, and QA.

Project management has long been viewed as a domain where automation is limited to some type of timeline tool, if that much. When starting this project, the authors asked, “Why?” It appears that this domain has resisted AI for the very reason that AI was developed. The newness of AI sometimes makes potential recipients of AI systems skeptical of their possible benefits. Project management requires so much human reasoning, and each project typically bears so much uniqueness, that on the surface, developing a system to mimic that process seems impossible. To show the potential power that AI could bring to this vital arena, this paper will present the development of a prototypical hybrid intelligence system. This system consists of the expert system (ES) to aid in project estimate validation and two artificial neural network (ANN) models to be used for delivery and quality prediction.

This paper first will provide an overview of project management and QA. Next, the research process for the ES component of this hybrid system to aid in project estimate validation will be described. The authors also will describe the database and research process for developing the neural network models used for predicting the delivery time and quality of the final deliverable. The authors also will provide a breakdown of the project management process and an analysis of how AI systems can be used to develop a hybrid network of systems to support this process. The last part of this paper is a detailed description of the total project management picture and how it can be supported through the development of an architecture of AI systems similar to the one described in the paper.

Project Management: A Historical Perspective
Project management is the planning, organizing, directing, and controlling of company resources for a relatively short-term objective that has been established to

Abstract
In most large companies, management has long recognized a deficiency in project management and quality assurance (QA) concerning information systems development and maintenance projects. This paper provides a practical, prototypical artificial intelligence-based approach to solving these ill-structured problems. The authors show how a hybrid intelligence system with expert system and artificial neural network components can be used to aid in project estimate validation and quality prediction of the deliverables. The output from such a hybrid system then can be used in a traditional project management timeline tool for collecting the estimate information and tracking the project through its completion. The authors also provide guidelines for large-scale development of such systems.

Keywords: artificial intelligence; expert systems; hybrid systems; information systems; neural networks; project management; quality assurance

©2002 by the Project Management Institute
2002, Vol. 33, No. 3, 42–52
8756–9728/02/$10.00 per article + $0.50 per page


complete specific goals and objectives (Kerzner, 1995). Producing high levels of productivity and quality and low levels of uncertainty also are objectives of project management. It is the management of all the factors that surround and enable the technical work to be accomplished.

Project management is an organized or structured approach for managing a variety of independent and interdependent events and activities leading toward a common outcome. The objectives of the approach are to:
■ Complete planned activities according to a stated schedule;
■ Deliver the completed project or product on time, with minimal slippage;
■ Manage the costs of the process to ensure attainment or reduction of budgetary projections;
■ Monitor the results of the process to ensure that the purpose and benefits of the system or project have been accomplished.

Project management aims to bring a project to completion on time, within budgeted cost, and to meet the planned performance by synthesizing all resources assigned to the project effectively and efficiently (Simpson, 1987). Project management assists management in defining the proper level of quality and then acts to ensure the project delivers a product of that quality. A plan is based on estimates of the size and duration of activities, and estimates are based on probabilities (Kitchenham, 1991).

Making estimates is necessary to predict cost and time requirements for the project (Ellwood & Maurer-Williford, 1994). However, estimating the effort or costs required for a project is difficult because there are always many unknowns (Jeffrey & Low, 1990). According to Finnie and Wittig (1997), estimating software development remains a complex problem, and improving the estimation techniques available to project managers would facilitate more effective control of time and budgets.

There are five bases for estimation (McDermid, 1990): professional judgment, a historical database, the use of experts, standard times, and formulae. Very few companies have bothered to store previous project data in a way that can be used later; therefore, a historical database often cannot be used. But if such a historical database were available, project estimation would be enhanced greatly (Jeffrey & Low, 1990). Standard times are factors that have been constructed by analyzing many previous project times so that a standard time is obtained.

Research has shown that the reliability and accuracy of existing models and tools is very limited (Bennatan, 1995; Ellwood & Maurer-Williford, 1994; Finnie & Wittig, 1997; Mcleod & Smith, 1996; Srinivasan & Fisher, 1995). Results from cost-estimating models do not correlate very well with each other when tried on a common problem (Macro, 1990). Lines of code, a metric used for estimating software development efforts, has a number of problems (Albrecht & Gaffney, 1983). One problem is language dependence: it is not possible to directly compare projects developed using different languages (Jeffrey & Low, 1990). A second problem is that it is difficult to estimate the number of lines of code that will be needed to develop a system.

The project plan is the primary tool used for any project undertaken. The purpose of the project plan is to provide the foundation and framework for the project. The main elements of a standard project plan are project tasks and activities, a responsible person, weeks required, beginning date, target and actual completion dates, and status. Project plans ensure that all activities are recorded and accounted for. When developing the outline of activities to be done, the project manager and team must consider the scope of the project to know the boundaries within which to define what must be accomplished. The development of this portion of the project plan will involve the identification of activities, events, and milestones. Activities relate to the physical work that must be undertaken to complete the desired outcome. Events are categories of activities that summarize the result of the activities into a final planned outcome, and a milestone is an event very important to the project.

Once these objectives have been defined, the beginning date and target completion date must be determined. Both dates are targets or planned events. They must be planned and recorded to manage the entire process. A third date is the actual completion date. The actual completion date is used to compare the planned timing with the actual, thus enabling closure and a measure of performance as to slippage (Taff, Borchering, & Fisher, 1995). Slippage shows the time by which the actual completion date exceeded the planned target date.

Planning entails the future, and in dealing with the future, project managers deal with uncertainty (Kitchenham, 1991; Matson & Mellichamp, 1994). Often, estimates are quite rough because what is supposed to be done has never before been done in precisely the same way (Taff, Borchering, & Fisher, 1995). It is important that project teams recognize how uncertainty bears on the planning effort. The level of uncertainty of the proposed project largely determines the character of the plan. Good planning may mean phased planning, in which planning is divided into phases over the duration of the project.

One reality of project management is the threat that project costs will be exceeded. To cope with this threat, project managers commonly build some “fat” into their cost estimates. Project costs typically are composed of four components: direct labor costs, overhead, fringe benefits, and auxiliary costs. In most service projects, which are not capital intensive, direct labor costs are the largest single component of project costs. Two approaches to cost estimating are the bottom-up cost estimation procedure and parametric cost estimating. In bottom-up cost estimation, if project managers know the labor costs, they can make good estimates of total project costs. Here, they estimate how much labor is needed to carry out project tasks.
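A minimal sketch of the bottom-up arithmetic described above follows (the task list, hourly rate, and percentage adders are hypothetical illustrations, not figures from the paper):

```python
# Hypothetical illustration of bottom-up cost estimation: estimate labor
# hours per task, price them, then add the other three cost components.
def bottom_up_estimate(task_hours, hourly_rate, overhead_pct,
                       fringe_pct, auxiliary_costs):
    direct_labor = sum(task_hours.values()) * hourly_rate
    overhead = direct_labor * overhead_pct
    fringe = direct_labor * fringe_pct
    return direct_labor + overhead + fringe + auxiliary_costs

# Example with made-up numbers:
tasks = {"requirements": 120, "design": 200, "coding": 400, "testing": 180}
total = bottom_up_estimate(tasks, hourly_rate=60.0, overhead_pct=0.30,
                           fringe_pct=0.25, auxiliary_costs=5000.0)
```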

A Hybrid Intelligence System for Project Management
SunTrust Bank Inc. is one of the largest regional banks in the United States with total assets in excess of $80 billion. Like many other large businesses, it maintains a large IT



Note. The flow diagram begins with the box labeled “Similar project done in the past?” It then flows through the qualifiers and variables and finally to the choices. The flow diagram does not attempt to display or determine any weights used in the decision process. All alternatives are treated equally and displayed as such.

Figure 1. Flow Diagram Showing the Influences and Relationships Among Qualifiers and Variables with the Choices

[Figure 1 content: the qualifiers include whether a similar project was done in the past; how comfortable the estimator is with the estimates (very, fairly, slightly); whether any tasks exceed 80 hours and, if so, the percentage of the total estimate in such tasks; the expected duration of the project (< 2 weeks, 2–4 weeks, 1–3 months, > 3 months); how often the project will be reviewed by management (at least once every 2 weeks, once a month, rarely); and the experience levels of the estimator and the programmer (each on a scale of 0 to 100). The choices are: acceptable estimate, add 15% management overhead and 10% contingency; acceptable estimate, adjust management overhead; acceptable estimate, adjust contingency; and highly questionable estimate, needs to be reviewed.]
organization called Application Systems Division (ASD). ASD is a part of SunTrust Service Corp. (STSC), which is a wholly owned subsidiary of SunTrust Bank Inc. In the past, organizations such as ASD found it much easier to maneuver through IT requirements than is the case today with fast-moving and complex technical environments. Continual change combined with large size means that having control over all IT efforts is vital. This control is not possible without the previously mentioned standardization. It means tactically utilizing available tools and maintaining a strategic vision for additional tools and methods.

One of the problems with managing projects at ASD was a lack of consistency. Even if standards were implemented for estimating, logging, and tracking projects, differences in managers’ leadership styles would lead to misinformation and a general breakdown in comparing and planning future projects. In 1996, ASD assembled a team consisting of management and project management experts. The ultimate goal for the team was to build a strategy of cost reduction through process improvement. It focused on standardization, faster development times, higher quality systems delivery, and on-time delivery.

This team, whose establishment was mandated by senior executives at STSC, was a mixture of internal employees and contract consultants and was charged with the development of an efficient and effective project management methodology for all STSC projects. The resulting methodology was labeled Vision Methodology (STSC, 1997) and mostly consists of documented methodologies, system standardization (using Microsoft Project®), and periodic hiring of external experts to help guide major projects. Vision was a documented methodology whereby STSC could create more consistency within IT departments surrounding the management and implementation of IT projects.

There are six major phases in the methodology: definition, requirements, analysis, design, development, and implementation. Each of these major phases is subdivided into major activities. Part of the process, for example, is definition. Within this phase, all projects must be defined along the same steps and activities. This allows for common entry into the appropriate project management programs (in the current case, Microsoft Project®). It also streamlines management reporting, project tracking, and statistical analysis. As steering committees and managers review requested projects, the standardized format gives them a common point of reference with which to compare the benefits and costs.

Upon reviewing this methodology, the authors sensed a potential application for AI solutions. The idea of AI is not new. In fact, there are numerous examples available where AI solutions have been applied successfully to practical problems. However, in the current situation, the ultimate objective of this AI-based approach was to facilitate project management activities as outlined in the Vision document. The approach presented in this paper for creating a hybrid network of systems can provide support for achieving the overall goals of the desired methodologies requested by management. However, the AI-based approach presented also can facilitate managing projects in situations lacking overall development methodologies similar to Vision (STSC, 1997).

For example, at first glance, one can envision using an ES for estimating projects. However, this approach may not be realistic. The rules involved would be extremely numerous and complex. This would hamper development of the system and make ongoing maintenance unattractive to a potential user organization. Instead, the authors classified a smaller portion of the process, namely, validation of provided estimates, as suitable for an ES. Even that system is supported by others, including a separate ES designed to analyze and profile team members. Yet another ES is envisioned for determining project quality. The total AI system approach, though, is much bigger.

The authors feel that the addition of a neural network to analyze the database of project information over time and predict delivery time and quality is more viable than the results that could be expected from an ES. This approach makes perfect sense as an "umbrella" system under which to develop the overall project management schema. A neural network is exceptional in managing domains where records in a database are similar, yet typically different in many ways, and where a new, better solution is preferred over a near-match alone. The authors believe an effective, maintainable project management and quality assessment environment can be properly administered and applied to the typical IT shop.

There usually is a wealth of general domain knowledge available to help with estimating. This almost certainly could be developed into some sort of rule set for validation of estimates and, perhaps, for the estimating itself. For example, an ES can be used to allow management to develop profiles on each employee making up project teams, regarding experience and knowledge levels, and a different ES can allow a manager to evaluate an estimate done by an employee and validate the estimate based on certain domain knowledge rules. A neural network also could support this function once the database of completed projects is large enough. A traditional project management tool for collecting the estimate information, tracking the project through to completion, and recording project statistics into a database can be an integral part of such a system. An ES will allow some task force, e.g., a user group, to evaluate and score a delivered application for quality. Again, a neural network also could support this function once the database of completed projects is large enough.

This paper demonstrates how AI tools could be used to facilitate project management activities. This paper also describes the development of a prototype hybrid intelligence system as proof of concept. For this prototypical implementation, the authors developed a hybrid intelligence system that consists of an ES to aid in project estimate validation and two ANN models to be used for delivery time and quality prediction.

Description of Expert System Model to Aid in Project Estimate Validation. An ES is a system that uses human knowledge captured in a computer to solve problems that ordinarily require human expertise (Turban & Aronson, 2001). ESs are used today by most large- and medium-sized organizations as a tool for improving decision-making as well as increasing productivity and quality. The development of an ES involves the construction of a problem-specific knowledge base by acquiring knowledge from experts or documented sources. The most widely used form of representing knowledge in the knowledge base is rules. The system then uses the rules in the knowledge base to draw conclusions similar to those drawn by the experts.

The first component of this intelligence system is an ES. It is designed to help management validate estimates of the delivery time and quality of a given project. After extensive review of project management activities and conducting interviews with a number of project leaders, a set of rules was compiled. This was accomplished according to the following steps:
1. The qualifiers used to decide the accuracy of the estimate were written up;
2. The variables that would be used with the qualifiers were determined;
3. The possible choices were determined;
4. A flow diagram was developed to show how these elements would interact to move from qualifying to choosing (Figure 1);
5. A table was developed (using Lotus 1-2-3®) as a matrix of the possible effects of each qualifier's choice and the range of values in the variables, and how they would affect the certainty factor for each choice (Table 1 and Table 2);
6. The elements were all entered into EXSYS Professional® according to the matrix;
7. The functional screens were developed using EXSYS Editor and an editor called Brief®;
8. Multiple iterations of sample estimates were tested to validate the system.

Rule Set Definition. The elements that make up the ES for project estimate validation and contingency are shown in Figure 1. The qualifiers for this exercise are:
1. Was the estimate developed from a similar past project (similar implies that at least 40% to 60% of the tasks would be roughly similar in nature)? A yes to this question takes the user to Qualifier 3, otherwise to Qualifier 2;
2. How comfortable are you with the estimates considering the lack of familiarity with the tasks? The user has four "fuzzy" options here ranging from "very comfortable" to "uncomfortable." These responses have a weighted effect on the outcome;
3. Are any of the tasks in the project estimated to be more than 80 hours? There is an "80 hour rule" that basically says that any estimate more than 80 hours is wrong (imprecise). The authors' domain expert recommended that tasks greater than 80 hours be broken down, if possible. If no, the user continues to Qualifier 4, otherwise to Variable 1 to determine a percentage;
4. What is the expected duration of the project? This question is asked to determine the need for management review meetings. It has been shown conclusively that frequent reviews positively influence the overall development time because delays are caught and corrected early. If the project duration is less than two weeks, the user proceeds to Variable 2. There are three other choices ranging from "two to four weeks" to "greater than three months." These values do not currently affect the weighting but may in the future. The user proceeds to Qualifier 5;
5. How often will the project be reviewed by management? If once a week is selected, there is no weighting, but the other three choices, ranging from "every two weeks" to "rarely," do weigh accordingly.

Table 1. Summary of Each Possible Choice and How It Would be Influenced by Each of the Qualifiers and Variables

Similar project The choices are yes or no. The letters “NC” indicate that this choice will not affect the weighting. A negative value means this selection reduces the associated “preferable choice” by that number (scale of 1 to 10) and a positive number means it increases a “non-preferred” choice by that number.

Comfort level These weights only come into play if “Similar Project” is No. The numbers 1 through 4 correspond to the four selections.

Tasks > 80 This selection by itself does not impact the weights.

Percentage > 80 If there are tasks greater than 80 hours, the choice from the 0 to 100 range is divided into four discrete weights and applied.

Duration This selection by itself does not impact the weights.

Review frequency These weights only come into play if “duration” is greater than two weeks. The numbers 1 through 4 correspond to the four selections.

Experience of estimator The choice from the 0-to-100 range is divided into five discrete weights and applied.

Experience of programmer The choice from the 0-to-100 range is divided into five discrete weights and applied.


The variables that affect ES estimate validation and contingency are:
1. Enter the percentage of the total estimate that the tasks over 80 hours represent. This is a slide bar, which ranges from 0 to 100. The range is divided into four discrete points and weighted appropriately;
2. What is the experience level of the estimator (scale from 0 to 100)? This is a slide bar that allows the user to evaluate how closely he or she feels the estimator would come to the true estimate. This is broken down into five discrete ranges and is used to weight the outcome appropriately;
3. What is the experience level of the programmer (scale from 0 to 100)? This is a slide bar that allows the user to evaluate how closely he or she feels the programmer will come to the estimated development time. This is broken down into five discrete ranges and is used to weight the outcome appropriately.

The choices that go into ES estimate validation and contingency are:
1. Acceptable Estimate. This implies that the estimate seems accurate, with a certainty factor ranging from 1 to 10;
2. Add 15% management overhead and 10% contingency. These values are standard within ASD for an accurately estimated project. The assumption is that direct supervisors and middle management also will charge time to the project for management, and the contingency allows for typical slowdowns and interruptions. This choice always will accompany Choice 1, but its certainty factor may differ;
3. Adjust management overhead to 20%. This, too, will accompany Choice 1, but weighting factors surrounding the previously selected qualifiers may introduce the need to increase the predicted management time;
4. Adjust management overhead to 25%. Same as Choice 3;
5. Adjust management overhead to 30%. Same as Choice 3;
6. Adjust contingency to 15%. This, too, will accompany Choice 1, but weighting factors surrounding the previously selected qualifiers may introduce the need to increase the contingency provision;
7. Adjust contingency to 20%. Same as Choice 6;
8. Adjust contingency to 25%. Same as Choice 6;
9. Highly questionable estimate; needs to be reviewed. This choice suggests that the weighting factors are not in favor of this estimate being accurate. Its lack of accuracy may dictate the need to re-estimate the project with better experience and management involvement.

Rule Set Matrix. A rule set matrix was developed for this ES. This matrix is used to define each of the possible choices and how each qualifier and variable would influence their weighting. The weighting is explained individually in Table 1.
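To make the weighting mechanics concrete, a minimal Python sketch follows. The base certainty values, the handful of matrix cells included, and all identifier names are illustrative assumptions; they only approximate a small slice of the full EXSYS Professional® rule base described above.

# Illustrative sketch only: a tiny slice of the Table 1/Table 2 rule matrix,
# showing how qualifier and variable answers could adjust certainty factors.
BASE_CERTAINTY = {"accept": 10, "adjust_contingency": 0}   # assumed starting values

# (qualifier_or_variable, answer) -> adjustment; "NC" cells are simply omitted.
RULE_MATRIX = {
    "accept": {
        ("similar_project", "no"): -3,
        ("comfort_level", 3): -6,
        ("estimator_experience", "0-33"): -7,
    },
    "adjust_contingency": {
        ("similar_project", "no"): 5,
        ("comfort_level", 3): 8,
        ("pct_over_80", ">66"): 8,
    },
}

def bucket(value):
    """Map a 0-100 slider value onto the discrete ranges used in Table 2."""
    return "0-33" if value <= 33 else "34-66" if value <= 66 else ">66"

def score_choices(answers):
    """Apply every matching matrix cell to the base certainty of each choice."""
    scores = dict(BASE_CERTAINTY)
    for choice, cells in RULE_MATRIX.items():
        for (factor, answer), adjustment in cells.items():
            if answers.get(factor) == answer:
                scores[choice] += adjustment
    return scores

print(score_choices({"similar_project": "no",
                     "comfort_level": 3,
                     "estimator_experience": bucket(25)}))

Run against the sample answers above, the "accept" certainty drops sharply while "adjust_contingency" rises, which mirrors the qualitative behavior described for the rule set.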

The Neural Network Models for Predicting Delivery Time and Level of Quality
Artificial neural networks recently have attracted the attention of researchers from various disciplines, including computer science, psychology, mathematics, physics, and operations research/management science. They have been applied successfully to solve many practical problems, such as time series prediction, classification, pattern recognition, and mathematical optimization. During the past decades, a number of paradigms of ANNs have been introduced to carry out a wide variety of computational tasks, such as feed-forward ANNs (Chauvin & Rumelhart, 1995; Rumelhart, Hinton, & Williams, 1986), Hopfield neural networks (Hopfield & Tank, 1985), the Boltzmann Machine (Aarts & Korst, 1989; Hinton & Sejnowski, 1983), and self-organizing networks (Kohonen, 1990). In this section, the authors provide an overview of ANNs as applied to this project. For a more comprehensive and detailed description of ANNs, their learning algorithms, and different network topologies, the authors recommend Rumelhart, Hinton, and Williams (1986) and Wasserman (1989).

ANNs are biologically inspired systems of highly interconnected simple processing elements that achieve learning. These systems are composed of many simple processing elements, called neurons, operating in parallel, whose function is determined by the network structure (the way neurons are connected to each other), the connection strengths between neurons, and the information processing that is performed at each neuron. Each neuron is connected to other neurons by means of interconnections, or links, with associated weights. Signals are passed between neurons over these interconnecting links. The net input to a neuron is the weighted sum of all the signals it receives from all other neurons to which it is connected. The vector of inputs to neuron j is (X1j through Xnj), where Xij is the signal sent from neuron i to neuron j. This input vector is multiplied by the weight vector of interconnections (W1j through Wnj), where Wij is the strength of the connection between neuron i and neuron j. The resulting inner (dot) product is referred to as the action potential. The action potential, or the weighted sum of the inputs to the neuron, is transformed using a mathematical function called the activation function. When the activation function reaches a given level, the neuron fires and sends a message to the other neurons connected to it.

Memories are stored or represented in a neural network in the pattern of interconnection strengths among the neurons. Learning is achieved by changing the strengths of these interconnecting weights. Therefore, to train a neural network, a method is needed to allow the network to modify its weights such that a desired outcome is achieved.

The training method is a mathematical function that updates the strength of the interconnecting weights such that the difference between the desired output of the network and its actual output is minimized. To train a neural network, a random set of weights is assigned as the initial set of interconnecting weights. A training set of data consisting of the inputs and the corresponding outputs is then presented to the network. The network is allowed to generate its output based on its current set of weights. The output from the network is then compared to the actual output, and the sum of squared errors is calculated.

In each succeeding iteration, the network attempts to minimize the sum of squared errors by adjusting its weight set using the training algorithm. The adjustment to the weight set can be done in either a single update, in which the weights are changed every time the network is presented with a new case of data, or an epoch update, in which the network changes the weights only after it receives the whole training data set. For faster training, it is possible to specify an epoch size. For example, an epoch size of five means that the network adjusts the weights only after it has been presented the entire training set data five times. The training continues until a prespecified condition is reached. Training a neural network is an iterative process: The developer goes through designing the system, training the system, and evaluating the system repeatedly.

In a neural network, neurons are grouped into layers, or slabs, that include neurons of the same type. There also are different types of layers. The input layer consists of neurons that receive input from the external environment. The output layer consists of neurons that communicate to the user or external environment. The hidden layer consists of neurons that only communicate with other layers of the network. The authors define a multilayer neural network as a network in which the interconnections between more than the two I/O layers of the network change in the learning process.

The network has additional layers, hidden from the environment. The most widely used multilayer neural network today is the Backpropagation neural network (Chauvin & Rumelhart, 1995; Rumelhart, Hinton, & Williams, 1986). The basic idea in a Backpropagation network is that the neurons of a lower layer send their outputs up to the next layer. It is a neural network that is very good at generating input-to-output mappings based on computations across the interconnections of its nodes. It is a fully connected, feed-forward, hierarchical multilayer network, with hidden layers and no intraconnections.

Figure 2 shows a typical Backpropagation neural network. The network shown has four input neurons, four output neurons, and a hidden layer with three neurons. The weights between the output layer and the hidden layer are updated based on the amount of error at the output layer, using the generalized delta rule (an iterative steepest descent procedure, minimizing the least-mean-square error measure between the desired and actual outputs). The error of the output layer propagates back to the hidden layer via the backward connections.

Table 2. Impact on Choices

Note. Rows are the possible choices; the entries show how each qualifier or variable adjusts the weighting for that choice (NC = no change).

Accept
  Similar project?           Y = NC; N = –3
  Comfort level              1 = NC; 2 = –2; 3 = –6
  Tasks > 80                 NC
  Percentage > 80            0–33 = –1; 34–66 = –3; > 66 = –6
  Duration                   NC
  Review frequency           1 = NC; 2 = –2; 3 = –6
  Experience of estimator    0–33 = –7; 34–66 = –3; > 66 = NC
  Experience of programmer   0–33 = –7; 34–66 = –3; > 66 = NC

Add 15% mgmt overhead and 10% contingency
  Similar project?           Y = NC; N = –3
  Comfort level              1 = NC; 2 = –2; 3 = –6
  Tasks > 80                 NC
  Percentage > 80            0–33 = –1; 34–66 = –3; > 66 = –6
  Duration                   NC
  Review frequency           1 = NC; 2 = –2; 3 = –7
  Experience of estimator    0–33 = –7; 34–66 = –3; > 66 = NC
  Experience of programmer   0–33 = –7; 34–66 = –3; > 66 = NC

Adjust management overhead
  Similar project?           Y = NC; N = 4
  Comfort level              1 = NC; 2 = 3; 3 = 8
  Tasks > 80                 NC
  Percentage > 80            NC
  Duration                   NC
  Review frequency           1 = NC; 2 = 3; 3 = 7
  Experience of estimator    0–33 = 7; 34–66 = 3; > 66 = NC
  Experience of programmer   0–33 = 7; 34–66 = 3; > 66 = NC

Adjust contingency
  Similar project?           Y = NC; N = 5
  Comfort level              1 = NC; 2 = 3; 3 = 8
  Tasks > 80                 NC
  Percentage > 80            0–33 = 2; 34–66 = 4; > 66 = 8
  Duration                   NC
  Review frequency           1 = NC; 2 = 3; 3 = 8
  Experience of estimator    0–33 = 7; 34–66 = 3; > 66 = NC
  Experience of programmer   0–33 = 7; 34–66 = 3; > 66 = NC

Review completely
  Similar project?           Y = 2; N = 3
  Comfort level              1 = NC; 2 = 3; 3 = 8
  Tasks > 80                 NC
  Percentage > 80            0–33 = 2; 34–66 = 4; > 66 = 8
  Duration                   NC
  Review frequency           1 = NC; 2 = 3; 3 = 8
  Experience of estimator    0–33 = 8; 34–66 = 4; > 66 = NC
  Experience of programmer   0–33 = 8; 34–66 = 4; > 66 = NC

The connection weights between the input layer and the hidden layer are updated based on the generalized delta rule as well.

There is a forward flow of outputs and a backward update of the weights based on the error at the output layer. During the training process, the network seeks weights that minimize the sum of the squares of the output layer errors. There is a backward connection from the output layer to the hidden layer, from the hidden layer to the next lower hidden layer, and from the lowest hidden layer to the input layer. The backward connections between each pair of layers have the same weight values as those of the forward connections and are updated together.
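For concreteness, the following is a compact numpy sketch of such a Backpropagation network, using the four-input, three-hidden-neuron, four-output topology of Figure 2 and an epoch-style update of the weights. The toy training data, learning rate, and sigmoid activation are illustrative assumptions, not the configuration the authors used.

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 4))                   # 20 training cases, 4 inputs each
T = rng.random((20, 4))                   # desired outputs for each case

W1 = rng.normal(scale=0.5, size=(4, 3))   # input-to-hidden weights
W2 = rng.normal(scale=0.5, size=(3, 4))   # hidden-to-output weights
lr = 0.5                                  # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # Forward flow of outputs.
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)

    # Backward update of the weights based on the output-layer error
    # (generalized delta rule).
    error = T - Y
    delta_out = error * Y * (1 - Y)
    delta_hidden = (delta_out @ W2.T) * H * (1 - H)
    W2 += lr * H.T @ delta_out
    W1 += lr * X.T @ delta_hidden

sse = float(np.sum((T - sigmoid(sigmoid(X @ W1) @ W2)) ** 2))
print("final sum of squared errors:", sse)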

Taking the parameters from the ES component described earlier, along with the additional factors that may influence project delivery time and the quality level of the resulting deliverable from the project, a database was constructed. This database was used to train two ANN models. The database consisted of relevant data from 200 previously completed projects. The fields contained in the database are derived from project information that is typically gathered and/or available on any given project. The data set initially contained 57 variables.

Nonparametric methods, i.e., neural networks, tend to generalize better from fixed-size data sets if the dimensionality of the data set is lowered without losing significant information, as reflected by the relationships in the data set. A common method used to reduce the dimensionality of the data set without significant loss of information is principal component analysis (PCA). In essence, PCA reduces the input set from n to m variables by identifying an m-dimensional subset of the n-dimensional input space that seems to be the most significant. Using PCA, the authors reduced the initial 57 variables to a final set of 15. The model consisted of 15 input variables and one output variable for each model. The inputs are designed to produce an estimate of a project score given a set of new inputs.
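As an illustration of this step only, here is a small scikit-learn sketch; the random matrix merely stands in for the project database, which is not reproduced in the paper.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

raw = np.random.default_rng(1).random((200, 57))   # 200 projects, 57 fields

scaled = StandardScaler().fit_transform(raw)       # put the fields on one scale
reduced = PCA(n_components=15).fit_transform(scaled)

print(reduced.shape)   # (200, 15) -- the inputs for the neural network models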

For each of the 200 records in the database, 15 fields were eventually selected:
■ Project Number. This field identifies each record with a four-digit project number that is typically used by SunTrust to identify projects for tracking and billing purposes. It is not used in the neural network (ANN);
■ Project Manager. This is the last name of the application manager assigned to the project. This person is responsible for estimation, approval, development, testing, implementation, and follow-up. For the ANN, only four fictional managers were used, and their names were substituted with numbers (1 = Smith, 2 = Jones, 3 = Carlisle, 4 = Free);
■ Estimate Date. This is the date the estimate was completed; it is not used in the ANN;
■ Estimated Hours. This is the total estimated actual hours, including documentation, development, testing, and implementation;
■ Management Overhead. This value (entered as a percentage) is an additional percentage of the overall project estimate that will be used for the hours of managers who are not directly involved but are directly responsible for the project team assigned to the project. This includes the project manager. This value is typically set to 15%; however, it may be set to a higher figure if the project manager feels the project will demand more time than normal, i.e., an inexperienced lead programmer, and on some rare occasions may be set to less;
■ Contingency. This value (entered as a percentage) is an additional percentage of the overall project estimate that is set aside for normal project additions, i.e., unplanned project meetings, requirements discrepancies, etc. It is typically set at 10%;
■ Adjusted Estimate. This is the estimated hours plus the management overhead and contingency factored hours;
■ Comfort Level. This field takes on a value from 1 to 3, with 1 indicating "Very Comfortable," 2 indicating "Comfortable," and 3 indicating "Slightly Comfortable;"
■ Percent > 80. This is a value representing what percentage of the project tasks have been estimated to be greater than 80 hours. The lower the percentage the better, because any task greater than 80 hours automatically can be assumed to be wrong to a certain extent;
■ Number of Reviews. This is a value representing how often the project will undergo reviews: 1 = At least every two weeks, 2 = At least monthly, and 3 = Rarely;
■ Estimator Experience. This is a percentage value representing the estimator's experience level. It ranges from 0 to 100%, with 100% being the most experience possible;
■ Programmer Experience. This is a percentage value representing the lead programmer's experience level. It ranges from 0 to 100%, with 100% being the most experience possible;
■ Actual Hours. This is the total number of hours charged to the project, including management overhead. It is factored in as part of the output for the neural network;
■ Quality Rating. This is a rating that the authors arbitrarily assigned for this project. It, too, is factored in as part of the predictive output for the neural network. It is initially entered as a normalized value from 0 to 1 but is converted to a value from 0 to 100;
■ Score. This is a compilation score of the overall project and represents the output for the ANN. It is constructed using the following formula:

Score = (Adjusted Hours / Actual Hours) x Quality Rating x 100     (1)

Of these fields, comfort level, percent > 80, number of reviews, estimator experience, and programmer experience come directly from the ES project. Actual hours and quality rating are follow-up values entered after the project's completion. The quality rating is conceptual, but an ES for evaluating and determining a quality rating is quite necessary for valid project estimation.
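A small worked example of Equation (1) follows. Note that the text describes the quality rating both as a 0-to-1 value and as a 0-to-100 value; the sketch assumes the 0-to-1 form, under which a project that finishes exactly on its adjusted estimate with perfect quality scores 100.

def project_score(adjusted_hours, actual_hours, quality_rating):
    """Equation (1): composite project score used as the ANN output."""
    return (adjusted_hours / actual_hours) * quality_rating * 100

# A project with an adjusted estimate of 500 hours that actually took
# 550 hours and earned a 0.9 quality rating:
print(project_score(500, 550, 0.9))   # about 81.8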

The authors constructed neural network models to predict the delivery time and quality for a given project. The neural network models developed for this project were Backpropagation neural networks consisting of an input layer, a hidden layer with 10 neurons, and an output layer. The authors used one hidden layer with multiple neurons in this study because this configuration is sufficient to approximate any continuous function to an arbitrary precision (Hornik, Stinchcombe, & White, 1989). To build the prediction model for "Score," it actually was necessary to build two prediction models. The first, which in theory was based on the variables "manager" through "programmer experience," was developed to predict actual hours, because this is, of course, important in predicting how close the actual hours will come to the estimate. For example, a project that completed exactly on time and achieved a perfect quality rating would have a score of 100. A score of less than 100 would indicate a project that suffered from quality of less than 100%, actual hours exceeding its estimate, or a combination of both. A score higher than 100 indicates a project with high quality or with actual hours that were less than estimated, or some combination of both.

The authors experimented with using ANN models to predict the actual hours required for a project. This neural network model consisted of the 15 input variables and one output variable. In training the network, the authors employed the BackPack Neural Network software package using its default settings. The network had one hidden layer with 10 neurons, an epoch size of five, and 20,000 as the maximum number of iterations. Training the network using the training set and testing it using the testing data set, the authors obtained a high correlation between the model and the predicted actual hours [the correlation coefficient was 0.9895 and the root mean squared error (RMSE) was 0.03350]. The results indicate that the neural network model is a powerful predictive tool.

Because this produced a satisfactory network for predicting actual hours, the authors then developed the second neural network, using the same database. This one was called "Score" and contained a neural network for predicting the score of a given project based on all of the previous variables plus the actual hours. Here, quality is an input from the user, because no ES has yet been developed for this function. Training the network using the training set and testing it using the testing data set, the authors obtained a high correlation between the model and the predicted score (the correlation coefficient was 0.738 and the root MSE was 0.0898).
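For readers who want to reproduce the general approach, the following is a hedged sketch that substitutes scikit-learn's MLPRegressor for the BackPack package; the synthetic data and remaining hyperparameters are assumptions, with only the single hidden layer of 10 neurons and the 200-project scale taken from the description above.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.random((200, 15))          # the 15 reduced input variables per project
actual_hours = X @ rng.random(15) + rng.normal(scale=0.05, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, actual_hours, test_size=0.25, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=20000, random_state=0)
model.fit(X_train, y_train)

corr = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print("correlation between predicted and actual hours:", round(corr, 4))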

Future Development of Hybrid Systems
In this section, the authors provide a vision for future development of hybrid intelligence systems for project management and QA of information systems projects. The authors provide a breakdown of project management/QA processes and analyze the details of developing the individual components of an AI-based hybrid system that can be used to support these processes.

The steps that are typically taken in an IT development project, and the associated recommended AI systems for each piece, are:
■ Team Member Profiles. Recommended AI Approach: Expert System. This is a crucial step in developing a tactical estimating system. In studying project management, one does not have to delve too deeply to realize how much influence the make-up of a team has on the completion of the project.

Figure 2. Typical Backpropagation Artificial Neural Network (showing the input layer, hidden layer, and output layer)

If a highly experienced individual develops the estimates for the tasks, these estimates may be very different from the actual hours of development done by a very inexperienced programmer. This process can be very limited (as shown in the ES example the authors developed as part of this project) or very in-depth, in which managers' and team members' cognitive evaluations, experience levels, time availabilities, work progress ratings, etc., are taken into account. The result is an overall standardized score for the team that helps to evaluate the estimate;
■ Problem Sensing. Recommended AI Approach: This is the origination of the idea for a system or system change; no AI solution is recommended. Someone must discover the need for a new system or enhancement, calling for a potential project. This is not a function that realistically can be relegated to an AI system;
■ Ballpark Estimate. Recommended AI Approach: This would typically be done by a human, because little is known about realistic requirements or potential team members. This is typically done for initial funding approvals. Because user requirements probably have not been developed by this point, this responsibility would best be left to humans. This estimate usually is not used to limit funding or measure performance, but is merely a source for making a tentative decision to go ahead with a project. Once the actual estimate is made, then funding decisions usually are finalized (this is sometimes true only if the final estimate is beyond some preapproved tolerance level from the ballpark);
■ Requirements. Recommended AI Approach: No AI solution initially; however, an ES could be developed to aid in this process. This is usually an in-depth process involving development staff, end users, training sections, etc. It is, as with the preceding steps, not usually considered a part of "project management," so probably would be handled manually. It is covered here because an ES could be developed to facilitate this process by developing a shell based on the many similar functions between projects;
■ Detailed Project Estimate. Recommended AI Approach: This initially would be done manually; however, an ES eventually could be used here too. Although this would initially have to be done manually while the base case is being developed, it could be supported eventually by an ES estimating tool using past projects that are similar in nature;
■ Project Validation. Recommended AI Approach: Expert System (as discussed in this paper). Although an ES was used here, it may be that another AI tool, e.g., a neural network, could be better used, especially once data have been accumulated from prior projects;
■ Long-term Base Case Development. Recommended AI Approach: Neural Network (discussed in this paper); eventually, a case-based reasoning (CBR) case base and search tool could be used. In developing the base case for estimating, a neural network could be valuable for establishing trends and predictive models;
■ Quality Assessment. Recommended AI Approach: Expert System. This may be an appropriate platform for an ES.

A decision tree approach could be used to guide end-users, trainers, support personnel, management, and even the development team members in evaluating quality points, such as the following (a brief sketch of how such answers might be rolled up into a quality rating appears after this list):
– How smooth was the implementation?
– How closely did the deliverable meet your expectations?
– Is the system relatively "bug" free?
– Was training handled well considering the level of changes or degree of complexity?
■ Prediction Case-Base. Recommended AI Approach: Case-based reasoning system. This CBR case base would become the primary platform for estimating future projects.
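The following is a purely illustrative Python sketch of such a quality-scoring aid; the equal weighting and the 0-to-10 answer scale are assumptions, not part of the authors' system.

QUALITY_QUESTIONS = [
    "How smooth was the implementation?",
    "How closely did the deliverable meet your expectations?",
    "Is the system relatively bug free?",
    "Was training handled well given the degree of change?",
]

def quality_rating(answers):
    """Average a list of 0-10 answers (one per question) into a 0-1 rating."""
    if len(answers) != len(QUALITY_QUESTIONS):
        raise ValueError("one answer per question is expected")
    return sum(answers) / (10.0 * len(answers))

print(quality_rating([8, 9, 7, 10]))   # 0.85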

Conclusion
In this paper, the authors note that, in most large companies, management has long recognized a deficiency in project management and QA concerning information systems development and maintenance projects. The authors outlined the idea of using a hybrid network of systems for supporting project management and QA, which is an integral part of any successful project management process related to developing information system projects.

Given the nature of the tasks involved in project management, a hybrid intelligence approach may be a more realistic solution than an AI-based method consisting of a single component. The authors discussed specific project management activities that can be supported using this hybrid approach. These activities include project estimation, tracking, development, testing, and implementation.

The authors hope that the guidelines provided in this paper can be used to harness potentially powerful AI techniques to support this vital IS area. To show the potential uses of AI techniques in project management, the authors presented the development of a prototypical hybrid intelligence system. This system consists of the ES to aid in project estimate validation and two ANN models to be used for delivery and quality prediction.

Acknowledgment
This research was partially supported by a Faculty Development Research Grant from the Bryan School of Business and Economics, The University of North Carolina at Greensboro.

References
Aarts, E., & Korst, H. (1989). Simulated annealing and Boltzmann machines: A stochastic approach to combinatorial optimization and neural computing. New York: John Wiley & Sons Inc.
Albrecht, A., & Gaffney, J., Jr. (1983). Software function, source lines of code, and development effort prediction: A software science validation. IEEE Transactions on Software Engineering, 9 (4), 639–648.
Bennatan, E.M. (1995). On time, within budget: Software project management practices and techniques. New York: John Wiley & Sons Inc.
Chauvin, Y., & Rumelhart, D. (1995). Backpropagation: Theory, architectures, and applications. Hillsdale, NJ: Erlbaum.
Ellwood, C., & Maurer-Williford, M. (1994). Rethinking project management for systems. International Association of Systems Analysts, 4 (1), 111–117.
Finnie, G., & Wittig, G. (1997). A comparison of software effort estimation techniques: Using function points with neural networks, case-based reasoning and regression models. Journal of Systems Software, 5 (2), 281–289.
Hinton, G., & Sejnowski, T. (1983). Optimal perceptual inference. Proceedings of the Institute of Electrical and Electronics Engineers Conference on Computer Vision and Pattern Recognition, Washington, D.C. New York: IEEE, 448–453.
Hopfield, J., & Tank, D. (1985). Neural computation of decisions in optimization problems. Biological Cybernetics, 52 (1), 141–152.
Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2 (3), 359–366.
Jeffrey, D., & Low, G. (1990). Calibrating estimation tools for software development. Software Engineering Journal, 7 (1), 215–221.
Kerzner, H. (1995). Project management: A systems approach to planning, scheduling, and controlling (5th ed.). New York: Van Nostrand Reinhold.
Kitchenham, B. (1991). Making process predictions, software metrics: A rigorous approach. New York: Chapman and Hall.
Kohonen, T. (1990). Self-organization and associative memory (2nd ed.). Berlin, Germany: Springer-Verlag.
Macro, A. (1990). Software engineering: Concept and management. Upper Saddle River, NJ: Prentice Hall.
Matson, J.B., & Mellichamp, J. (1994). Software development cost estimation using function points. IEEE Transactions on Software Engineering, 20 (4), 212–231.
McDermid, D.C. (1990). Software engineering for information systems. Oxford, UK: Blackwell Scientific Publications.
Mcleod, G., & Smith, D. (1996). Managing information technology projects. Cambridge, MA: Course Technology.
Rumelhart, D., Hinton, G., & Williams, R. (1986). Learning internal representations by error propagation. In D.E. Rumelhart and J.L. McClelland (Eds.), Parallel distributed processing: Explorations into the microstructure of cognition I (pp. 318–362). Cambridge, MA: MIT Press.
Simpson, W. (1987). New techniques in software project management. New York: John Wiley & Sons Inc.
Srinivasan, K., & Fisher, D. (1995). Machine learning approaches to estimating software development effort. IEEE Transactions on Software Engineering, 21 (2), 114–131.
SunTrust Service Corporation (STSC). (1997). Vision methodology. Atlanta, GA: STSC Information Systems Division.
Taff, L., Borchering, J., & Hudgins, R. (1995). Estimating: Development estimates and a front-end process for a large project. IEEE Transactions on Software Engineering, 17 (8), 178–201.
Turban, E., & Aronson, J. (2001). Decision support systems and intelligent systems. Upper Saddle River, NJ: Prentice Hall.
Wasserman, P. (1989). Neural computing: Theory and practice. New York: Van Nostrand Reinhold.

Hamid Nemati, PhD, is an assistant professor of information systems at the Information Systems and Operations Management Department of The University of North Carolina at Greensboro. He holds a doctorate from the University of Georgia and a master's degree from The University of Massachusetts. His research specialization is in the areas of decision support systems, data warehousing, data mining, and knowledge management. He has presented nationally and internationally, and his papers have appeared in a number of leading journals. He is currently finishing work on two books, on organizational data mining and global knowledge management.

Dewey W. Todd is a doctoral candidate in decision sciences at Georgia State University's J. Mack Robinson College of Business. He teaches courses in statistics and decision-making/problem-solving, and his research areas include group problem-solving, decision-making biases, organizational psychology, organizational learning, cognitive style, and conflict management. He also has 18 years of experience in banking, management information systems, and information systems project management.

Paul Brown is a doctoral student in decision sciences at Georgia State University. His research interests are data mining, supply chain management, and electronic procurement. He is a member of the Decision Sciences Institute, the Institute for Operations Research and the Management Sciences, and the National Association of Project Managers.



The Fall of the Firefly: An Assessment of a Failed Project Strategy

Bud Baker, Wright State University, Department of Management, 270 Rike Hall, Dayton, OH 45435-0001, USA

Evaluating the success of projects is rarely a precise science. Examples of project ambiguity abound, including the Hubble Telescope, which started its operational life as a national joke and a case study of project failure. Yet, today it continues to reveal never-before-seen views of the heavens, views unobtainable from any other source. At its completion, the Sydney Opera House was seen by most as a stupendous failure: a music hall with poor acoustics, stunningly over cost and behind schedule. Decades later, that same structure is a unique national treasure, its massive cost and schedule overruns long ago forgotten.

On occasion, though, a project ends in such a manner that there can be no doubt about its failure. The U.S. Air Force's effort to acquire the T-3A Firefly trainer aircraft is such a case. The Firefly was to improve the screening process for pilot candidates, saving the government money while helping the Air Force produce more skilled pilots. The results proved to be different: a loss of the more than $40 million invested, a reduction in the Air Force's ability to select pilots, significant damage to the Air Force's reputation, and, worst of all, the deaths of six young men.

Birth of the Firefly Project
The Air Force has used a variety of small aircraft to screen pilot candidates since at least 1952. The rationale for such screening was based primarily on economic efficiency: Not all pilot candidates had the necessary motivation, aptitude, and skills to fly high-performance military aircraft. Given that fact, if such candidates could be identified earlier and eliminated from training, less time, money, and resources would be wasted on them (Secretary of the Air Force Inspector General, 1998). In the mid-1960s, the Air Force chose a single-engine Cessna model 172, dubbed it the T-41, and made it the primary flight screening aircraft. The reliable-but-never-glamorous T-41 did its job well for the next 30 years: Despite the inability and inexperience of thousands of student pilots over three decades, not a single fatality occurred in T-41 operations.

By the late 1980s, though, some in the Air Force argued that the T-41 needed to be replaced. The old Cessna design could not handle—nor was it ever designed to handle—the high stresses of aerobatic flight. And such aerobatics were deemed, by Air Force leadership, to be necessary as "a means of evaluating a candidate's ability to react quickly and accurately while flying more complex maneuvers representative of follow-on trainers and operational USAF aircraft" (Secretary of the Air Force Inspector General, 1998, p. 3).

Abstract
Choices made early in a project determine future success. Missteps in early phases will cause trouble later in the project's life cycle. The U.S. Air Force's acquisition of the T-3A "Firefly" trainer was just such a troubled project. Rather than develop a new aircraft, the Air Force decided to save time and money by buying a commercial off-the-shelf (COTS) trainer. But significant aircraft modifications undermined the integrity of the COTS strategy. This paper suggests four project lessons: Any project must be managed as a system of interrelated parts; a project strategy must be flexible to accommodate changing circumstances; testing must be done in realistic environments; and concurrency carries with it benefits and dangers.

Keywords: troubled execution; project strategy; risk management; commercial off-the-shelf acquisition; government projects

©2002 by the Project Management Institute. 2002, Vol. 33, No. 3, 53–57. 8756–9728/02/$10.00 per article + $0.50 per page.

The Air Force Chief of Staff at the time, himself a former fighter pilot, was a strong proponent of replacing the T-41. In a pithy remark that would become more widely reported when fatalities began to occur, he claimed, "The T-41 is your grandmother's airplane. Our mission is to train warrior-pilots, not dentists to fly their families to Acapulco" (Thompson, 1998, electronic version).

Not everyone shared the general's sentiments. Most Air Force pilot training graduates don't move on to highly maneuverable fighter/attack aircraft, but instead to heavier, more stable platforms—bombers, aerial tankers, or cargo jets—where the spins, loops, and rolls of aerobatic flying are not exactly commonplace. Furthermore, others thought that USAF leaders were losing sight of the mission: The purpose of these aircraft was only to screen prospective pilots, not to train them, per se. The training would come later, in other aircraft, after the initial screening. In the words of one instructor pilot:

A common question at the time was "Why are we spinning students during flight screening?" The plane was simply a screener to determine who qualified to enter Undergraduate Pilot Training ... We wondered why spinning and advanced acrobatics were involved in an aircraft designed to screen applicants. Some of us had a philosophy that functioning in a flying pattern and being able to land an aircraft solo was enough criteria to determine who should progress to pilot training ("The making of a trainer," 1998).

The Failed Project Strategy: "Commercial Off-the-Shelf ... Sort of ..."
With the strong support of Air Force leadership, and in spite of the misgivings of others involved in the project, the acquisition of the new aircraft, called the Enhanced Flight Screening Program (EFSP), proceeded. One of the first decisions involved acquisition strategy.

A number of aerobatic-capable flight trainers existed throughout the world, so the Air Force decided to select one of these, rather than to develop a new aircraft from scratch. This strategy, generically known as commercial off-the-shelf (COTS), is approved and required by the U.S. Department of Defense:

Market research and analysis shall be conducted to determine the availability and suitability of existing commercial and nondevelopmental items prior to the commencement of a development effort ... Preference shall be given to the use of commercial items ... The overriding concern is to use the most cost-effective source of supply throughout a system's life cycle (U.S. Department of Defense, 2000).

One reason that commercial items tend to cost less is that there is not so great a need to do extensive testing and evaluation, because such presumably already was done when the product was introduced commercially. Still, the EFSP moved ahead rapidly, by Air Force acquisition standards. The program management directive authorizing the project was released in July 1990, with initial flight demonstration by seven competing manufacturers held during the next month.

One of the competitors was the Firefly. Offered by Slingsby Aviation Ltd. of England, it was judged to be underpowered and slow to climb, and it had the lowest cruising speed of any of the competitors. Brake effectiveness was poor, seating adjustments were difficult, and visibility was limited, both over the nose and over the low-mounted wings. But handling earned the Firefly higher marks, with overall stability and responsiveness both judged to be very good (Secretary of the Air Force Inspector General, 1998).

Over the next year, a critical thing happened: The slow rate of climb and sluggish cruise performance of the Firefly in the 1990 tests caused Slingsby to replace the original 200-hp engine with a much larger, stronger, and heavier powerplant, generating 260 hp. While this solved the power problem, it created a host of other difficulties.

In the summer of 1991, when the new, higher-powered Firefly again was evaluated, along with the other competitors, the problems were apparent. Spins, which were cited as easy to enter and correct a year earlier, were now cited as a problem, especially for a low-time pilot. The brakes were even more problematic than before. Perhaps most ominously, the Firefly's new engine showed a tendency to just quit, both on the ground and in flight. In seven missions, the engine stopped four times, once in the air (during a spin) and three times on the ground. The engine stoppages were attributed to vapor locks in the fuel system feeding the new engine (Secretary of the Air Force Inspector General, 1998).

In hindsight, a larger problem is clear. An aircraft is a system where everything affects everything else. Perhaps the most critical of all those elements is the engine. When Slingsby replaced the original four-cylinder, 200-hp engine with the much larger six-cylinder, 260-hp motor, changes rippled throughout the entire system. The fuel pump had to be moved and fuel lines repositioned. The exhaust system moved closer to the fuel filter/screener, causing fuel to overheat and suggesting that the vapor locks were systemic problems, not mere anomalies. The new engine weighed 80 pounds more than the original, which pushed the Firefly's critical center of gravity beyond the forward limit: Even more repositioning and rerouting of other systems were required to get the aircraft's weight distribution back in balance ("The making of a trainer," 1998). The brakes, marginal to begin with, were now incapable of holding the re-engined aircraft in place on the ground (Secretary of the Air Force Inspector General, 1998).

The deeper problem is equally clear: The engine change, along with the ripple effect through the rest of the aircraft, effectively destroyed the integrity of the COTS acquisition strategy. The whole project approach was undermined: The aircraft being bought still may have been "commercial," but there was no longer anything "off the shelf" about it. In its brief life, the Air Force's Firefly was the subject of wholesale changes, generating 131 service/modification bulletins, an average of about two per month (Secretary of the Air Force Inspector General, 1998). The re-engined Firefly had become—or at least should have become—an experimental aircraft.


Pressing On
Following the evaluation of all competing aircraft (an assessment which lasted only 12 days, from 26 July to 7 August 1991 [Secretary of the Air Force Inspector General, 1998]), the Aeronautical Systems Center issued a request for proposal in September 1991. The Slingsby Firefly with the larger engine, by now designated the T-3A, was selected on 29 April 1992, a decision immediately protested by some of the losing bidders. Following a review by the General Accounting Office, the contract award was upheld.

The first prototype was delivered to the Air Force on 15 June 1993 (Secretary of the Air Force Inspector General, 1998). The total buy was set at 113; 56 would be at the Air Force Academy in Colorado Springs, CO, USA, and 57 would go to a flying training squadron at Hondo, TX, USA (Secretary of the Air Force Inspector General, 1998).

Testing
Because the T-3A was at least nominally a COTS acquisition, testing and evaluation was very much abbreviated. From 23 September through 1 October 1993—a period of just eight days—test pilots evaluated the T-3A. Surprisingly, most of the testing was done by the contractor, with the Air Force in only a supporting role: "Slingsby primarily conducted the test, with participation by the 4950th Test Wing ... Slingsby's final report stated that the T-3A demonstrated full compliance with system specifications" (Secretary of the Air Force Inspector General, 1998, p. 16).

While some might question allowing a contractor's own employees to assess the degree of that contractor's own compliance, there is another troubling point here: The pilots chosen for such work are likely to be highly skilled and experienced test pilots. Yet the people who would fly the T-3A operationally were 20-year-old college juniors, supervised by instructor pilots of widely varying backgrounds. This concern was to be addressed in another phase of testing, called Qualification Operational Test and Evaluation (QOT&E).

QOT&E was designed to get the aircraft out of the hands oftest pilots and into the hands of pilots who would have moretypical qualifications. The goal was to see how the aircraftwould behave in its operational environment. QOT&E tookplace at Hondo, TX, USA (at an elevation of 930 feet, Hondowas not representative of the Air Force Academy, at whose6,572-foot elevation airfield half of the T-3A’s would operate)(Secretary of the Air Force Inspector General, 1998).

QOT&E was to be in two phases. Phase I, scheduled for 14weeks, was slashed to just five weeks because the test aircraftwere delivered late. Phase II had to be cut short because of“extended grounding of the fleet due to uncommandedengine stoppages during the test” (Secretary of the Air ForceInspector General, 1998, p. 17). Still, the testing agency ratedthe T-3A as “operationally effective but not suitable,” notingin November 1994 that the criteria for aircraft availabilitywas 81%, yet the T-3A was fully mission capable only 15.8%of the time (Secretary of the Air Force Inspector General,1998, p. 17).

The Fall of the Fireflies
But by now the pipeline was open. The official "acceptance ceremony" for the T-3A had taken place at Hondo in October 1994, the month before the "operationally effective but not suitable" assessment. In January 1995, the first T-3As arrived at the Air Force Academy, and it was the very next month that the first disaster occurred.

On 22 February 1995, an instructor pilot and student were killed instantly when their T-3A plummeted into a Colorado pasture in a spin. Investigators concluded that the young student inadvertently had put the Firefly into a spin from which the instructor pilot could not recover. Following the accident, spins—a major justification for the T-3A in the first place—were banned from the flight screening program ("The making of a trainer," 1998). Morale in the flight training squadrons dropped (Secretary of the Air Force Inspector General, 1998).

In September 1996, a second T-3A crashed, again in Colorado. The pilots had been practicing simulated forced landings, an especially prudent part of the curriculum given the Firefly's propensity for engine trouble. As in the first crash, both pilots died, and simulated forced landings were soon banned as a result ("The making of a trainer," 1998).

Just nine months later, the third and last fatal accident occurred. While executing an approach to the Air Force Academy's airfield, the Firefly fell into a stall and spin, impacting the ground before the crew could recover. Again, both pilots died. A few days later, another Firefly lost power on landing, and the entire fleet was grounded ("The making of a trainer," 1998).

The Firefly's Last Days
Following attempts by the Air Force, Slingsby, and others to find the cause of the Firefly's engine stoppages, the Air Force hired defense contractor Science Applications International Corp. (SAIC) to evaluate the problem. While SAIC never was able to isolate a single cause for the failures, it proposed a list of changes to the fuel system. Ten of those changes were incorporated, tested, and approved by the FAA (Scott, 1998).

In 1998, the year following the last of the fatal crashes and the grounding of the fleet, the Air Force Flight Test Center finally was tasked to test four Fireflies, both with and without the SAIC-recommended modifications. At last, the Firefly was subjected to the rigorous testing that it should have undergone years before. For 15 months, Air Force test pilots flew 417 flights for a total of 604 hours, subjecting the Fireflies to intentional mishandling and even "abusive conditions." Their conclusions were that the Firefly was "safe for training," although they recommended 27 different changes to the aircraft, flight procedures, and training curricula. Most of the 27 recommendations were related to just two areas: aircraft handling and control, and the fuel system (United States Air Force, n.d.).

Ultimately, the tests again were cut short. On 9 October 1999, the Air Force decided to ground the fleet permanently (United States Air Force, n.d.). Following unsuccessful negotiations to sell the remaining 110 Fireflies back to Slingsby, the Air Force planned to scrap the entire fleet, selling the aircraft for parts (Diedrich, 2001).

Lessons for Project Managers
If the Firefly program is to be of any value, it is in the lessons it holds for future project managers.

Lesson One. Like an aircraft, a project is a total system, in which all parts must fit together. This was not the case with the T-3A. All three accidents, for example, occurred at the Air Force Academy, with no crashes at the contractor-operated flight school at Hondo, TX, USA. At least two systemic differences existed between the Academy T-3A operation and its Texas counterpart: First, the Academy airfield was a mile higher, with the thinner air causing a significant drop in the T-3A's performance. Second, the Air Force Academy instructor pilots tended to be experienced in large jet aircraft, not small aerobatic planes such as the Firefly. Nor were most of them full-time instructor pilots: Almost half held full-time jobs as academic faculty members, flying only a few hours per week (Secretary of the Air Force Inspector General, 1998). In contrast, the commercial flying school instructors at Hondo flew full time, and on average had seven times as much single-engine experience as the Air Force Academy instructors (Secretary of the Air Force Inspector General, 1998).

The testing that was eventually done by the Air Force Flight Test Center suggests that in the highly skilled hands of expert pilots, the flaws of the Firefly were not necessarily fatal ones. But replace those pilots with the less experienced Academy instructors, and the integrity of the T-3A flight screening system appears to have been compromised in a deadly manner.

Lesson Two. The COTS acquisition strategy proved to be inappropriate for the T-3A, given the substantial modifications that the aircraft required. It is true that a well-executed COTS strategy can save time and money by reducing both development and testing effort. But as soon as the Firefly's engine was changed, with all the other resultant modifications to the aircraft, the COTS strategy was no longer feasible.

This lesson is acknowledged in a Department of Defense policy statement issued well after the grounding of the Firefly fleet:

A commercial off-the-shelf (COTS) item is one that is sold, leased, or licensed to the public; offered by a vendor trying to profit from it; ... available in multiple, identical copies; and used without modification of the internals [emphasis added] (Commercial Item Acquisition, 2000, p. 3).

Certainly, the scores of changes made and/or recommended to the Firefly qualify as pretty substantial "modification of the internals." But there are other references in the Department of Defense policy statement that seem specifically tailored to prevent another T-3A-style failure. For example, when considering COTS items, the Department of Defense acknowledges that COTS is not a panacea, and there likely will be differences between what the user/client wants and what already is on the market:

A gap will exist between DoD and commercial use—and the gap may be large ... Modifying the commercial items is not the best way to bridge the gap ... If the gap is too great, commercial items may not be appropriate ... Don't modify the commercial item (Office of the Secretary of Defense, 2000, pp. 7–8).

The adage about "not having one's cake and eating it, too" applies here: It is unwise to choose a strategy, accept the benefits of that strategy, and then try to ignore its inherent penalties.

Lesson Three. A project must be tested in the environment in which it will actually operate and with the people who will actually operate it. As obvious as that statement is, it is important to note that it never really happened with the Firefly. The initial testing was done largely by the contractor's own test pilots, and the Air Force Flight Test Center evaluation, performed after the fatal accidents began, also was carried out by highly skilled professional test pilots. Even the brief operational testing that was scheduled at Hondo was cut short by the Firefly's engine problems. The testing done there was performed by the vastly more experienced commercial instructor pilots, not the full-time professors/part-time pilots prevalent at the Air Force Academy.

Lesson Four. Concurrency kills. Normally, concurrency refers to overlap between project stages: to start testing while designing or to start producing before testing is complete. Here, one could argue that the stages of the project actually were reversed: Purchase the plane, change the design, deliver and operate it, learn its shortcomings, and only afterwards, subject it to thorough and rigorous testing.

Concurrency can be, of course, a necessary project tactic, often for competitive reasons: Without concurrency, the three-year product development cycles common to the automotive industry or the much shorter cycles of high-tech firms would be impossible.

In this case, though, one wonders: What was the rush? Why was such a high degree of concurrency necessary? This was no wartime emergency, no crisis response. The previous screener aircraft was performing safely, reliably, and—except in the eyes of some senior Air Force leaders—effectively. If there really was a need to move to an aerobatic airplane, there was time to do the job in a careful, measured manner: a rigorous source selection with thorough and operationally representative testing.

Peter Drucker used to tell his graduate students that when intelligent, moral, and rational people make decisions that appear inexplicable, it's because they see a reality different than the one seen by others. With that in mind, what reality did the T-3A project managers see?

Meredith and Mantel in their book, Project Management, offer a possible answer. They refer to a project model that they call "The Sacred Cow":

In this case, the project is suggested by a senior and powerful official in the organization. Often the project is initiated with a simple comment such as "If you have a chance, why don't you look into ...," and there follows an undeveloped idea for a new product ... The immediate result is the creation of a "project" to investigate whatever the boss has suggested (Meredith & Mantel, 2000, p. 45).


What was the fatal flaw in the Firefly tragedy? Was it a lack of consideration for the "systems approach"? A COTS buy that evolved into a developmental program? Or, possibly, was it inadequate testing in a true-to-life operational environment, too much concurrency, or a rushed favorite project of senior leaders? The most likely answer is that the Firefly failed as a result of a combination of these issues. In years to come and as the trauma caused by the project subsides, more complete assessments may be able to explain the seemingly inexplicable causes of the Firefly's failure.

References
The making of a trainer: The Air Force's acquisition of the hapless Slingsby T-3A [Electronic version]. (1998, February). Light Plane Maintenance, 20(2), 5–11; 22–23.
Diedrich, J. (2001). Air Force might sell troubled T-3 Fireflies for scrap. Air Force Times, 61(26), 10.
Meredith, J.R., & Mantel, S.J., Jr. (2000). Project management: A managerial approach (4th ed.). New York: John Wiley & Sons Inc.
Office of the Secretary of Defense. (2000). Commercial item acquisition: Considerations and lessons learned [CD-ROM]. Retrieved January 8, 2001, from http://www.deskbook.osd.mil
Secretary of the Air Force Inspector General. (1998). Broad area review of the enhanced flight screening program. Retrieved December 16, 2002, from http://www.af.mil/lib/misc/t3bar.html
Scott, W.B. (1998). USAF modifying Slingsby trainers to correct inflight engine shutdowns [Electronic version]. Aviation Week and Space Technology, 148(2), 38.
Thompson, M. (1998). The deadly trainer [Electronic version]. Time, 151(1), 42–45.
United States Department of Defense. (2000). Mandatory procedures for major defense acquisition programs (MDAPS) and major automated information system (MAIS) acquisition programs (DoD 5000.2-R). Retrieved January 8, 2001, from http://www.deskbook.osd.mil
United States Air Force. (n.d.). T-3A system improvement program final report executive summary. Edwards Air Force Base, CA: 412th Test Wing.

Bud Baker, PhD, is a professor of management at Wright State University, where he also directs the MBA program in project management. He holds an MBA from the University of North Dakota and masters and doctoral degrees from the Peter Drucker Center of the Claremont Graduate School. Prior to 1991, Baker spent more than two decades as a U.S. Air Force officer, including a number of years as a project manager. He is a contributing editor to the PM Network magazine, a member of the editorial review board for the Project Management Journal, and a charter member of the Department of Defense research integrated product team.



The Impact of the Project Manager on Project Management Planning Processes

Shlomo Globerson, School of Business Administration, Tel Aviv University, Ramat Aviv, P.O. Box 39010, Tel Aviv 69978, Israel

Ofer Zwikael, Technology Management Department, Holon Academic Institute of Technology, 25 Golomb St., Holon 58102, Israel

Project success is measured as the ability to complete the project according to desired specifications and within the specified budget and the promised time schedule, while keeping the customer and stakeholders happy. For proper project completion, both planning and execution must be properly implemented. Control is the monitoring mechanism that ensures each of the two phases is properly implemented, with corrective actions being introduced where there are undesired discrepancies between the project's plan and its execution.

Much has been written about control (Cleland, 1994; Fleming & Koppelman, 1994; Kimmons, 1990; Shtub, Bard, & Globerson, 1994; Wysocki, Beck, & Crane, 1995; Zwikael, Globerson, & Raz, 2000). However, most of this literature relates to the use of control during the execution phase, the plan being used as the baseline for evaluating progress during the execution phase.

The main reason for the scarcity of literature on planning control is the difficulty in defining a baseline for monitoring progress during the planning phase. One may say that stakeholders' requirements should be used as the baseline for evaluating planning. However, requirements are expressed in terms of functional needs, whereas planning is expressed by technical parameters. As these two areas use different "units of measurement," they are difficult to compare. Despite the evaluation and control difficulties, it is of the utmost importance to verify that planning is properly done and to develop tools that will improve its quality. Poor planning will result in poor execution.

The purpose of this paper is to evaluate the actual impact of the project manager on the quality of project planning processes and to determine how intervention can be made more effective. The methodology used in this study is based on A Guide to the Project Management Body of Knowledge (PMBOK® Guide) (PMI Standards Committee, 2000). According to the PMBOK® Guide, a project manager is concerned with nine different knowledge areas, in which he or she has to properly manage 39 different processes. The processes are grouped into four life-cycle phases: initiation, planning, execution, and closure. Because this study concentrates on planning, Table 1 identifies the processes that support just the planning phase.

Out of the 39 processes listed, 21 are identified by the PMBOK® Guide as related to planning. If a project is to be properly planned, these 21 processes must be properly executed. To evaluate the quality of planning process implementation, the products of each single process must be evaluated. Although each process may have multiple sets of outputs and each set may have multiple products, one major product can be identified for each planning process.

Abstract
If a project is to be successfully completed, both planning and execution must be properly implemented. Poor planning will not allow appropriate execution and control processes or achievement of the project's targets. The objective of the study reported in this paper is to evaluate the impact of the project manager on the quality of project planning processes within the nine knowledge areas defined by A Guide to the Project Management Body of Knowledge (PMBOK® Guide) and to determine ways of increasing the effectiveness of the manager's intervention. Participants in the study evaluated their use of the 21 processes that relate to planning, out of the 39 processes required for proper project management. The results of the study reveal risk management and communications as the processes with the lowest planning quality. Poor quality in these areas results when project managers lack the formal tools and techniques for dealing with communications and the functional managers are not equipped with the tools and techniques that will allow them to effectively contribute to the risk management process. Improving quality planning processes requires the development of new tools in areas such as communications, as well as organizational training programs designed for the functional managers.

Keywords: project manager; functional manager; planning; impact


Table 1. Separating the 39 Processes Belonging to the Nine Knowledge Areas by Planning Processes and Other Processes

Knowledge area | Planning processes | Other processes
Integration | Project plan development | Project plan execution; Integrated change control
Scope | Scope planning; Scope definition | Initiation; Scope verification; Scope change control
Time | Activity definition; Activity sequencing; Activity duration estimating; Schedule development | Schedule control
Cost | Resource planning; Cost estimating; Cost budgeting | Cost control
Quality | Quality planning | Quality assurance; Quality control
Human resources | Organizational planning; Staff acquisition | Team development
Communications | Communications planning | Information distribution; Performance reporting; Administrative closure
Risk | Risk management planning; Risk identification; Qualitative risk analysis; Quantitative risk analysis; Risk response planning | Risk monitoring and control
Procurement | Procurement planning; Solicitation planning | Solicitation; Source selection; Contract administration; Contract closeout

For example, the major product of the scope definition process is the work breakdown structure (WBS). Table 2 lists the major products for all planning processes.

The Study
A field study was conducted to evaluate the extent of the project manager's involvement in the planning processes and to evaluate their quality. A major problem in designing this study was to establish a way to evaluate the extent to which planning processes were used in projects and their quality level. For this purpose, the following assumption was made: The quality of a process is a function of the frequency with which it is used to obtain the major product of the process. This assumption is based on learning curve theory, which has demonstrated ongoing improvement as a function of the number of repetitions (Griffith, 1996; Snead & Harrell, 1994; Yiming & Hao, 2000; Watson & Behnke, 1991).
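As a point of reference, the classic unit learning curve underlying this assumption can be written as follows (a standard textbook form, shown here only for illustration; it is not derived or estimated in this study):

$$ y_x = y_1 \, x^{b}, \qquad b = \frac{\ln \varphi}{\ln 2}, $$

where $y_x$ is the effort (or error rate) on the $x$th repetition of a process, $y_1$ is the effort on the first repetition, and $\varphi$ is the learning rate, so that every doubling of the number of repetitions multiplies effort by $\varphi$ (for example, $\varphi = 0.8$ for an 80% curve).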

Participants in the study were project managers and others who are involved in project management activities. The 282 participants came from different project management workshops administered as part of internal or external training seminars. The portion of the questionnaire that they were asked to complete, which served as the database for this study, appears in Appendix 1.

The following scale was used for evaluating the intensity of use of the different products:
■ 5 = The product is always obtained;
■ 4 = The product is obtained quite frequently;
■ 3 = The product is obtained frequently;
■ 2 = The product is seldom obtained;
■ 1 = The product is hardly ever obtained;
■ 9 = The product is irrelevant to the projects I am involved in;
■ 0 = I do not know whether the product is being obtained.

Table 3 summarizes the results. As may be seen from Table 3, the quality of the processes ranges from a low of 2.0 for qualitative risk analysis up to 4.2 for activity duration estimating. In other words, qualitative risk analysis is hardly practiced. There is a high correlation among the quality of products belonging to the same knowledge area; that is, they are either low or high. For example, the quality of all risk processes is below 2.9, whereas the quality of all the scope processes is above 3.5. The quality of each knowledge area is calculated by the average quality of the processes belonging to it, as presented in Table 3, and charted in descending order in Figure 1.
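To make the scoring arithmetic concrete, the following minimal Python sketch (added here for illustration; it is not part of the original study) recomputes knowledge-area quality as the mean of the per-process scores reported in Table 3. The assumption that responses of 9 (irrelevant) and 0 (do not know) are dropped before averaging is ours, not stated by the authors.

from statistics import mean

# Survey scale used in the study: 1-5 = how frequently the planning
# product is obtained; 9 = irrelevant; 0 = "do not know".
VALID_SCORES = {1, 2, 3, 4, 5}

def process_quality(responses):
    """Mean frequency-of-use score for one planning product.
    Assumption: 9 and 0 responses are excluded before averaging."""
    usable = [r for r in responses if r in VALID_SCORES]
    return mean(usable) if usable else None

def area_quality(process_scores):
    """Knowledge-area quality = average of its planning-process scores."""
    return mean(process_scores)

# Illustrative check against the per-process values reported in Table 3:
risk = [2.2, 2.8, 2.0, 2.3, 2.3]   # five risk planning processes
time = [4.1, 3.4, 4.2, 4.0]        # four time planning processes
print(round(area_quality(risk), 1))   # 2.3, matching Table 3
print(round(area_quality(time), 1))   # 3.9, matching Table 3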

The three groups of knowledge areas identified in Figure 1 are:
■ High quality areas, to which integration, scope, time, and human resources belong. The score for this group is around 4;
■ Medium quality areas, to which cost, quality, and procurement belong. The score for this group is around 3;
■ Poor quality areas, to which risk and communications belong. The score of both is around 2.3.

Analysis and Discussion
Assuming that for successful completion of a project, all processes in the nine knowledge areas should be high quality, we should discuss ways to improve the poor performance areas. Performance in a specific area is a function of the project manager's know-how of that area, the know-how of other professionals, such as functional managers, who are involved in the specific process, and the project manager's ability to affect the area and its attendant processes.

In general, functional managers are accountable for the proper execution of the specific work packages assigned to them, whereas the project manager is responsible for integration and infrastructure-related work packages and activities. For example, a project manager is heavily involved with the process of establishing the WBS for the whole project and is directly accountable for it.

Table 2. Major Product of Each Planning Process

Knowledge area | Planning process | Major product
Integration | Project plan development | Project plan
Scope | Scope planning | Project deliverables
Scope | Scope definition | Work breakdown structure
Time | Activity definition | Project activities
Time | Activity sequencing | PERT or Gantt chart
Time | Activity duration estimating | Activity duration estimates
Time | Schedule development | Activity start and end dates
Cost | Resource planning | Activity required resources
Cost | Cost estimating | Resource cost
Cost | Cost budgeting | Time-phased budget
Quality | Quality planning | Quality management plan
Human resources | Organizational planning | Role and responsibility assignments
Human resources | Staff acquisition | Project staff assignments
Communications | Communications planning | Communications management plan
Risk | Risk management planning | Risk management plan
Risk | Risk identification | Risk list
Risk | Qualitative risk analysis | Project overall risk ranking
Risk | Quantitative risk analysis | Prioritized list of quantified risks
Risk | Risk response planning | Risk response plan
Procurement | Procurement planning | Procurement management plan
Procurement | Solicitation planning | Procurement documents

Table 3. Quality of Major Planning Processes Within the Different PMBOK® Guide Knowledge Areas

Knowledge area | Planning process | Process quality | Area average | Area STD
Integration | Project plan development | 4.0 | 4.0 | 1.1
Scope | Scope planning | 4.1 | 3.8 | 0.9
Scope | Scope definition | 3.6 | |
Time | Activity definition | 4.1 | 3.9 | 0.7
Time | Activity sequencing | 3.4 | |
Time | Activity duration estimating | 4.2 | |
Time | Schedule development | 4.0 | |
Cost | Resource planning | 3.7 | 3.3 | 1.0
Cost | Cost estimating | 3.0 | |
Cost | Cost budgeting | 3.2 | |
Quality | Quality planning | 2.9 | 2.9 | 1.2
Human resources | Organizational planning | 3.8 | 3.7 | 0.9
Human resources | Staff acquisition | 3.6 | |
Communications | Communications planning | 2.3 | 2.3 | 1.1
Risk | Risk management planning | 2.2 | 2.3 | 1.2
Risk | Risk identification | 2.8 | |
Risk | Qualitative risk analysis | 2.0 | |
Risk | Quantitative risk analysis | 2.3 | |
Risk | Risk response planning | 2.3 | |
Procurement | Procurement planning | 3.3 | 3.3 | 1.2
Procurement | Solicitation planning | 3.3 | |

However, a process such as quantitative risk analysis, which must be done separately for each individual work package, requires heavy involvement of the functional manager and those of his employees who are responsible for executing the work package (assuming a matrix organization). Therefore, to perform the process properly, all the involved parties should possess know-how of risk management methods.

Process know-how means being familiar with the required inputs, the tools and techniques used, and the desired outputs of the process. If this is not the case, then risk management or any other relevant processes can't be effectively handled on a work package level or on the integrated level. A project manager functioning in a matrix environment gets work packages done by "contracting out" to internal suppliers, who are the functional managers in the organization, and purchasing other work packages from external sources via procurement processes. Because procurement requires the signing of formal documents, as compared to working with functional managers within the same organization, much care is taken to include all relevant expectations in the contract, to ensure that the external suppliers follow all requirements. The combination of employees working on a work package without the relevant know-how in risk management processes, and a project manager without the ability to endow them with this knowledge, makes it unrealistic to expect effective implementation of project management processes. In other words, if a certain process is to be carried out in an appropriate manner, the people involved in the process need the relevant know-how.

Table 4 divides the planning processes according to the individual who expends most effort in properly executing a process (the project manager or the functional manager), keeping in mind that the project manager is held responsible for all processes. In general, in a matrix organization, a functional manager is responsible for carrying out single work-package-related activities, whereas a project manager deals with work packages that are integrated in nature. For example, a functional manager carries the major responsibility for activity definition, whereas the project manager has to integrate all project activities into activity sequencing, and so on.

Keep in mind that a project manager is regarded as a functional manager when he or she works on the professional work packages that fall within his or her direct area of accountability. For example, the project manager is directly accountable for the scope-definition process, the major output of which is the WBS. However, the project manager will not be able to carry this process out without the cooperation and the know-how of functional managers. The same holds true for all processes, regardless of the manager who is directly accountable.

Further, the two areas that belong to the "poor quality" group, that is, risk management and communications, displayed poor results. Because risk management is part of the PMBOK® Guide and, therefore, also part of the Project Management Professional (PMP®) examination, one may assume that project managers are familiar with it. However, functional managers do not have any formal risk management training, and one may assume that they suffer from lack of the requisite know-how with regard to risk management processes. It is therefore not surprising that functional managers are of little help in performing risk management processes. The project manager is thus left alone to struggle with it, with limited success according to the results of this study and others (Ibbs & Kwak, 2000; Mullaly, 1997).

The situation is not the same for the communications area, because its only planning process—communications planning—is an integrated work package with which the project manager is the most heavily involved. Although other professionals, such as functional managers, must be involved in the communications planning process, the project manager should lead the overall effort and have a strong command of the relevant tools and techniques. Therefore, lack of know-how on the part of the functional manager is not the main reason for poor performance in this area. The explanation has to be sought elsewhere.

According to the PMBOK® Guide, "Communications planning determines the information and communications needs for the stakeholders: Who needs what information, when they will need it, and how it will be given to them" (p. 119). This area is probably the most difficult for the project manager to plan, because it requires getting present and future information needs from all the stakeholders. Very few formal tools and techniques are available to the project manager for supporting the communications area. The PMBOK® Guide offers only one unstructured tool, namely, stakeholder analysis. The lack of proper and easily accessible tools coupled with the difficult task of identifying future information needs of stakeholders makes communication a very difficult process to plan.

[Figure 1. Planning Quality of the Nine PMBOK® Guide Knowledge Areas. Bar chart of planning quality (scale 0–5) by knowledge area, in descending order: Integration 4.0, Time 3.9, Scope 3.8, Human Resources 3.7, Cost 3.3, Procurement 3.3, Quality 2.9, Risk 2.3, Communications 2.3; overall average 3.3.]

The areas that belong to the middle quality group are cost, quality, and procurement. From Table 4, we can identify the manager who needs most of the know-how required to properly execute each of the processes. While the project manager leads the process in the quality area and the functional manager in the procurement area, both are involved in the cost area. Therefore, the key to success in these areas is different, tailor-made training modules for the project manager and the functional manager.

Conclusion
As the person who is fully accountable for the success of the project as a whole, the project manager is responsible for overcoming the difficulties encountered in guaranteeing that all planning processes are properly executed. To resolve the problems, the project manager should identify the events that have a negative impact on the successful completion of the project and develop explicit mitigating plans to accommodate them.

In some areas, such as communications and quality, the project management community should develop better tools and techniques to support the project manager's efforts. In other areas, such as risk and cost, more emphasis should be placed on the training of functional managers in the use of the relevant tools and techniques. In other words, the functional manager also should get intensive but adapted project management training. The agent for such a fundamental change in the organizational culture can't be the project manager alone. It is essential that it be sponsored at a high level of the organization and even treated as a project by itself.

References
Cleland, D.I. (1994). Project management: Strategic design and implementation. New York: McGraw-Hill.
Fleming, Q.W., & Koppelman, J.M. (1994). The earned value concept: Back to the basics. PM Network, 8(11), 27–29.
Griffith, T.L. (1996). Negotiating successful technology implementation: A motivation perspective. Journal of Engineering & Technology Management, 13(1), 29–53.
Ibbs, C.W., & Kwak, Y.H. (2000). Assessing project management maturity. Project Management Journal, 31, 32–43.
Kimmons, R.L. (1990). Project management basics. Houston, TX: Kimmons-Asaro Group Ltd. Inc.
Mullaly, M. (1998). 1997 Canadian project management baseline study. Proceedings of the Project Management Institute's 29th Annual Symposium, Long Beach, CA. Newtown Square, PA: PMI, 375–384.
PMI Standards Committee. (2000). A guide to the project management body of knowledge. Newtown Square, PA: Project Management Institute.


Table 4. Mapping of the 21 PMBOK® Guide Planning Processes According to the Individual (Project or Functional Manager) Who Performs Most of the Activities Required to Execute the Planning Process

Knowledge area quality | Knowledge area | Project manager | Functional manager
High | Integration | Project plan development | (none)
High | Scope | Scope planning; Scope definition | (none)
High | Time | Activity sequencing; Schedule development | Activity definition; Activity duration estimation
High | Human resources | Organizational planning | Staff acquisition
Medium | Cost | Cost budgeting | Resource planning; Cost estimating
Medium | Procurement | (none) | Procurement planning; Solicitation planning
Medium | Quality | Quality planning | (none)
Poor | Communications | Communications planning | (none)
Poor | Risk | Risk management planning | Risk identification; Qualitative analysis; Quantitative analysis; Risk response planning

Shtub, A., Bard, J.F., & Globerson, S. (1994). Project management: Engineering, technology and implementation. Englewood Cliffs, NJ: Prentice Hall.
Snead, K.C., & Harrell, A.M. (1994). An application of expectancy theory to explain a manager's intention to use a decision support system. Decision Sciences, 25(4), 499–513.
Watson, W.E., & Behnke, R.R. (1991). Application of expectancy theory and user observations in identifying factors which affect human performances on computer projects. Journal of Educational Computing Research, 7(3), 363–376.
Wysocki, R.K., Beck, R., & Crane, D.B. (1995). Effective project management. New York: John Wiley & Sons Inc.
Yiming, C., & Hau, L. (2000). Toward an understanding of the behavioral intention to use a groupware application. Proceedings of the 2000 Information Resource Management Association International Conference, Anchorage, AK. Hershey, PA: Idea Group Publishing, 419–422.
Zwikael, O., Globerson, S., & Raz, T. (2000). Evaluation of models for forecasting the final cost of a project. Project Management Journal, 31(1), 53–57.

Shlomo Globerson, PMP, PhD, a professor at the Graduate School of Business Administration, Tel Aviv University, is an internationally known researcher, educator, and consultant in the fields of project management and operations management. As a researcher and writer, he has published more than 70 refereed articles and seven books and is a frequent contributor to PMI® Seminars & Symposium.

Ofer Zwikael is a lecturer at the faculty of management at Tel Aviv University and a faculty member at the Technology Management Department at the Holon Academic Institute of Technology. He also acts as academic counselor and senior lecturer of project management at John Bryce Training.

Appendix 1. Project Planning Assessment Questionnaire
For each planning product listed, please mark the most suitable answer regarding the projects you are involved in, according to the following scale.
5 = The product is always obtained.
4 = The product is quite frequently obtained.
3 = The product is frequently obtained.
2 = The product is seldom obtained.
1 = The product is hardly ever obtained.
9 = The product is irrelevant to the projects I am involved in.
0 = I do not know whether the product is obtained.

Planning product (circle one: 1 = Never ... 5 = Always; 9 = Irrelevant; 0 = Do not know)
Project plan: 1 2 3 4 5 9 0
Project deliverables: 1 2 3 4 5 9 0
Work breakdown structure: 1 2 3 4 5 9 0
Project activities: 1 2 3 4 5 9 0
PERT or Gantt chart: 1 2 3 4 5 9 0
Activity duration estimates: 1 2 3 4 5 9 0
Activity start and end dates: 1 2 3 4 5 9 0
Activity required resources: 1 2 3 4 5 9 0
Resource cost: 1 2 3 4 5 9 0
Time-phased budget: 1 2 3 4 5 9 0
Quality management plan: 1 2 3 4 5 9 0
Role and responsibility assignments: 1 2 3 4 5 9 0
Project staff assignments: 1 2 3 4 5 9 0
Communications management plan: 1 2 3 4 5 9 0
Risk management plan: 1 2 3 4 5 9 0
Risk list: 1 2 3 4 5 9 0
Project overall risk ranking: 1 2 3 4 5 9 0
Prioritized list of quantified risks: 1 2 3 4 5 9 0
Risk response plan: 1 2 3 4 5 9 0
Procurement management plan: 1 2 3 4 5 9 0
Procurement documents: 1 2 3 4 5 9 0


Cover to Cover
Book Review Editor, Kenneth H. Rose, PMP

Successful Information System Implementation: The Human Side, Second Edition
by Jeffrey K. Pinto and Ido Millet

This book is a cautionary tale: Don't try to install an information system (IS) unless you're willing to suffer through all the stages of getting it right. It's affirmative pain on behalf of a good result. Successful Information System Implementation: The Human Side first tells you what, then shows you how.

The most succinct statement of Jeffrey K. Pinto's and Ido Millet's worthy thesis comes on p. 43: "One of the most important points that past research and experience have taught us is that the client is the ultimate determinant of successful system implementation. This lesson, so fundamental to practicing managers, is one that continually escapes the attention of researchers and IS theoreticians and must be continually relearned."

The authors may be forgiven for beating this horse so often (on almost every one of the book's 196 pages), because apparently the horse isn't dead! The preposterous idea that information systems themselves (whatever that may mean) are the key factor in their own success has apparently been the guiding lack-of-light for decades. Failure rates "continue to hover at 67%, an astounding figure when one considers that companies have spent enormous amounts of money in the past decade not only trying to install these systems, but also trying to understand what went wrong with the previous failures."

The first two chapters, The Problem With Information System Projects and Implementation Theory: What the Past Has Taught Us, detail the dismal history of IS implementation failures—and their costs in dollars, human frustration, and business failures. Pinto and Millet make a compelling case for the need to see the territory differently and provide a map to take managers and designers from problem definition to successful resolution.

The foundation for this map-making comes in the book's crucial Chapter 3, Defining Implementation Success and Failure. If system designers, managers, and users don't know—and agree on—what an answer should look like, how will anyone know if we have or have not found it?

Reviewing earlier partial definitions, Pinto and Millet add what's needed for a full picture of implementation success. Their three-part definition starts with system traits, under which come two measures: system quality ("The system adheres to satisfactory standards in terms of its operational characteristics.") and information quality ("The material provided by the system is reliable, accurate, timely, user friendly, concise, and unique.").

Second, under characteristics of data usage comes use ("The material provided by the information system will be readily employed by the organization in fulfillment of its operations.") and user satisfaction ("Clients making use of the system will be satisfied with the manner in which it influences their jobs, through the nature of the data provided.").

Last, under impact assessment comes individual impact ("Members of the departments using the information system will be satisfied with how the system helps them perform their jobs, through positively impacting both efficiency and effectiveness.") and organizational impact ("The organization as a whole will perceive positive benefits from the information system, through making better decisions and/or receiving cost reductions in operations.").

I quote the definition in full both because it seems to me to be correct—both necessary and sufficient—and also because, like much of the prose in the book, it is avoidably ugly. That is, it has been very well written down, but it has not been well written up. The ideas are too good and too needed to be so unpleasant to take in. This is the editor's fault and should be corrected if a third edition is needed. Judging from the magnitude and pervasiveness of the problems the authors discuss, I'm afraid that need will indeed arise. While the editor's at it, how about an index? I hate books without indexes, and I know I'm not alone. The lack is easy to fix and the benefit substantial.

The remainder of the book is, appropriately, a detailed how-to, starting with critical success factors in information system projects and moving through techniques for project selection, planning, scheduling, the politics of implementation, team-building and cross-functional cooperation, implementation champions, and finishing the project on a high note.

The final chapter, Conclusions: Quo Vadis?, stresses the importance of realizing that no matter how complete and comprehensive the process for IS design and implementation, it always will be "characterized by enormous difficulty, ambiguity, and personal challenge." The human side of IS implementation is the hard part, the stumbling block, the reason for the incredible, intolerable, but true 67% IS failure rate.

Project Management Institute, 1999, ISBN: 1880410664, paperback, 196 pp., $32.95.

Reviewed by Louis I. Middleman, ICF Consulting, who specializes in facilitating communication and organizational development workshops for federal agencies.

World Class Contracting: How Winning Companies Build Successful Partnerships in the e-Business Age
by Gregory A. Garrett, PMP, CPCM

Finally, a book on project procurement management written by a Project Management Professional (PMP®)! World Class Contracting: How Winning Companies Build Successful Partnerships in the e-Business Age by Gregory A. Garrett is not a book that will gather dust as it sits unopened on your bookshelf, but a ready reference source that you will want to keep within arm's reach as you manage your project's procurement process.

Today's project managers face increased organizational flattening, downsizing, and outsourcing. They must manage critical contractors and suppliers and manage the risks that come with establishing long-term business relationships. Critical new skills include creating, negotiating, and administering contracts—the contract management process. World Class Contracting meets this new need by putting the language of project procurement management at the reader's fingertips.

A certified professional contracts manager and PMP®, Garrett has extensive experience as a program manager and contract manager on a wide range of high-technology products, services, and customized solutions for multinational organizations.

The book provides both new and well-seasoned project managers a thorough understanding of the contract management process. Garrett expands on the project procurement management section of the Project Management Institute's A Guide to the Project Management Body of Knowledge (PMBOK® Guide)—2000 Edition and traces the contract management process through each of the six steps from the perspectives of both buyer and seller. This is a unique benefit of the book. Traditional procurement and contract management books are written from the perspective of the buying organization. Because contracts are about developing and maintaining business partnerships between buyers and sellers, Garrett's book provides excellent insight into both parties of the contracting process. It goes to the essence of contract management: the business partnerships, people, processes, and tools to be successful in the e-business age. The author discusses more than 100 best practices from leading global companies involved in contracting for a wide range of goods and services.

Garrett begins with a discussion of how companies go about building successful partnerships and the importance of these partnerships in the e-business age and then delves into the process of how companies build trust with their vendors and customers by managing expectations and honoring commitments. His checklist of 20 techniques for managing expectations and building trust between partners provides practical and functional information to the reader.

Garrett then dives straight into the contract management process, breaking down the process into the three general phases: pre-award, award, and post-award. He spends the majority of his book discussing the major steps that constitute the three phases of the contracting process, from both a buyer and seller perspective. His coverage of the buyer's activities (procurement planning, solicitation planning, solicitation, source selection, contract administration, and close-out/termination) is compared and contrasted with the seller's activities (presales activity, bid/no-bid decision, bid/proposal preparation, contract negotiation/formation, contract administration, and contract close-out/termination).

This coverage not only is unique to a text on contract management but also provides valuable insight for the project manager and contract manager in building trust, managing expectations, and honoring commitments in the project environment. Garrett's standard format of describing each step in terms of inputs, tools/techniques, and outputs makes this book easy to understand and apply to real-world projects. His in-depth analysis of the contract management process and his treatment of the buyer and seller activities provide practical and valuable contributions to the contract management body of knowledge. His use of figures and diagrams also helps to simplify the complex contracting process.

The chapter on teamwork focuses on the intricate relationship between the project manager and the contract manager and the importance of establishing "who's in charge." Garrett's treatment of basic contracting concepts and principles, as well as his coverage of e-procurement methods, provides a concise-yet-complete treatment of this critical area of contract management, from competitive contracting methods to noncompetitive methods, including single-source and sole-source negotiations.

Garrett closes his book with a discussion of common misconceptions about contract management and then provides a summary of 15 of the most important best practices. His appendix of more than 20 templates, checklists, matrixes, and forms, ranging from a sample proposal compliance matrix to a contract close-out checklist, helps to streamline and demystify the contract management process. This, along with an enriching glossary of contract management terms and an extensive bibliography of resource materials, makes his book a valuable contract management desktop reference and guide that should be kept within arm's reach of any serious contract manager.

CCH Inc., 2001, ISBN: 0-8080-0563-4, hardcover, 341 pp., $55.00.

Reviewed by U.S. Air Force Lieutenant Colonel Rene G. Rendon, PMP, director of contracts for the Air Force's Space-Based Infrared Systems program and a member of the PMI Los Angeles Chapter.

Project Management: Strategic Design and Implementation, Fourth Edition
by David I. Cleland and Lewis R. Ireland

"There is nothing permanent except change," advised Heraclitus of Greece in 513 B.C. Project Management: Strategic Design and Implementation, Fourth Edition, by David I. Cleland and Lewis R. Ireland provides contemporary evidence that both change and improvement are the natural order of things in project management literature.

This new edition contains much of the content of the previous edition, but puts forth, in the parlance of electronic Web site developers, a whole new look and feel. Its design follows the format of the authors' previous collaborative success, Project Manager's Portable Handbook, with decimally numbered paragraphs for structure and bullet lists for detail. This form of layout and presentation results in content that is modularly organized and easily accessible for readers.

The authors have improved individual chapters by adding a brief introduction paragraph that outlines the central points of the chapter and warms up readers for what follows. Each chapter now concludes with four additional sections: a listing of additional sources of information in the form of a generously annotated bibliography; a listing of project management principles that summarize chapter content in pithy statements of enduring, universal value; a project management situation—a brief, descriptive case study that illuminates chapter content by way of a practical example; and a student/reader assignment that offers food for thought, discussion, or investigation.

In whole, the book comprises 22 chapters divided into seven parts that march progressively forward from history through current practice to a view of the future. This new edition provides a guiding graphic that appears at the head of each chapter as a structural map showing readers where they are and maintaining a sense of global reference throughout their journey through the text.

In a very general way, the content of this edition follows closely that of the previous edition, as most sequential editions do. But this is no mere tweaking of text to generate a more current publication date and perhaps rejuvenate sagging sales. It is a significant improvement in both form—described above—and substance. The authors have rearranged technical material in places to make it more logical and readable and updated references throughout to make them more relevant to readers and reflective of recent research.

The chapter on strategic issues in project management is a good example. The old material is still there, but newly designed with headings, bullets, and graphics that are more appealing in appearance and more facilitating in function. Bringing things up to date, the authors have added a timely discussion of project portfolio management and references to articles as recent as June 2001.

Cleland and Ireland have combined previous separate chapters on project organization charting and authority into a single chapter that presents a complete integrated view of these interrelated topics. They also have recast the previous chapter on working with project teams as a discussion of effective project teamwork that is more action oriented and that prescribes specific steps to take when building project teams.

A new and much needed chapter addresses project management maturity. The authors briefly discuss the history of maturity models, specific models developed by the Software Engineering Institute and by the Federal Aviation Administration, and two general approaches to model development. They provide a tool for assessing project management maturity and describe the contributing roles of benchmarking and business intelligence. While no cookbook solution to maturity management currently exists, the information in this chapter leaves readers well informed and well armed to deal with this important emerging issue on their own as individual project needs demand.

Another welcome addition is a separate chapter on earned value management systems (EVMS). Unlike texts that limit discussion of earned value to syntax and formulas, Cleland and Ireland address concepts, meaning, and application concisely and completely. They temper the unique and essential utility of EVMS with the candid and practical observation that it is not a tool for all projects. It requires a rigorous detailed plan and disciplined management processes. A poorly defined project will condemn an EVMS effort to frustration and failure.

Project Management: Strategic Design and Implementation, Fourth Edition, is not the last word on project management theory and practice. Given the evolutionary nature of the domain, such a book probably will never be written. But it significantly raises the bar for books of its kind and sets a standard against which others may be measured now and in the future.

McGraw-Hill, 2002, ISBN: 0-07-139310-2, hardcover, 656 pp., $70.00.

Reviewed by Ken Rose, PMP, an instructor for ESI International residing in Hampton, VA, USA, and vice president for programs, PMI Hampton Roads Chapter.


Cultivating Communities of Practice: A Guide to Managing Knowledge
by Etienne Wenger, Richard McDermott, and William M. Snyder

A persistent barrier to project management's evolution as a discipline has been the distribution of knowledge. There's no shortage of innovative solutions or solid methodologies, but unless they are shared within a company, industry, or profession, their value never will be realized fully. At best, these practices will be disseminated through formal channels such as publications or conferences and may slowly enter the mainstream; at worst, they will be lost—to be reinvented time and time again. Similarly, most problems in this domain could benefit from the collective brainpower of the greater project management community. While professional collaboration opportunities are more numerous now than in recent years, it's likely that many project managers still lack adequate sounding boards for their ideas.

The Project Management Institute (PMI) and technological innovations such as the Internet have advanced project management by quantum leaps by lessening these knowledge transfer and collaboration impediments. By examining why efforts similar to this are successful, the book Cultivating Communities of Practice: A Guide to Managing Knowledge creates a framework that can be used to leverage and expand intellectual capital. Neither cookbook nor textbook, Cultivating Communities of Practice distills knowledge management (KM)—a topic fraught with abstraction and high-minded ideas—into a series of effective principles and guidelines illustrated through case studies at companies such as Hewlett-Packard, McKinsey, and DaimlerChrysler. While the book is not targeted at the project management community, there are numerous examples of project-related KM practices.

Authors Etienne Wenger, Richard McDermott, and William Snyder define three fundamental concepts—community, domain, and practice—upon which their methodologies are based. A community can range from an informal gathering of professionals to an institutionalized group to just short of a functional department or division. The common bond is the domain, which is the purpose of the community or the topic that it was organized to address, such as project management for PMI. Last, as the authors state, the practice defines "a set of socially defined ways of doing things ... that create a basis for action, communication, problem-solving, performance and accountability." Even within communities that share domains and practices, there are variances in missions that include helping members solve everyday problems; recognizing and publicizing best practices; institutionalizing knowledge; and encouraging innovation.

Armed with this basic model, Wenger and his co-authors explore how communities of practice are born, live, and sometimes die. Seeking to increase the efficiency of deepwater exploration projects, Shell Oil created networks of like-minded professionals to share ideas, regardless of their associated project or functional unit. One group consisted of geoscientists who studied turbidite rock formations in the Gulf of Mexico, and it consequently earned the colorful moniker "Turbodudes." The informal meetings are marked by members posing questions, and the group responding with observations or suggestions. A coordinator shepherds the discussion, ensures that knowledge is captured, and works outside of meetings to keep members engaged while fine-tuning the community's approach. "With so many meetings that aren't relevant to your work, it's nice to go to one where we talk about rocks," one geologist mused. Over their multi-year life, the Turbodudes have increased the return of Shell's exploration efforts and added more than $100 million annually to the company's bottom line.

So where's the revolutionary thinking in groups of professionals gathering to talk about common problems? While communities such as the Turbodudes may emerge organically within companies or society, work is required to nurture them, as is true with any meaningful relationship. In some situations, more formal efforts are required to launch communities of practice, which sometimes run up against conflicting corporate cultures. The book's value comes in its detailed dissection of how this is done. Drawing upon learning theory, industrial psychology, and organizational change management, the authors instruct the reader how to build strength into communities and how to combat the pitfalls. There is no "one-size-fits-all" way of creating communities, and a number of hybrid approaches are explained for different types of organizations. Especially relevant for modern project managers is the discussion on how to build geographically distributed communities of practice. (Hint: It takes more than having an e-mail address and a home page.) In a few instances, the authors can be accused of overreaching by alluding to how communities of practice can cure a wide variety of societal ills, but their other advice is mostly pragmatic.

Project managers must be attuned to project-related quality assurance, risk management, and knowledge-transfer issues, as well as how to increase their personal skills and efficiency within the discipline. Easy to do but hard to do well, communities of practice offer a proven method to accomplish these goals. This book provides both a comprehensive overview and suggested steps on how to create effective communities.

Harvard Business School Press, 2002, ISBN: 1-57851-330-8, hardcover, 284 pp., $29.95.

Reviewed by Joseph Galarneau, PMP, a senior manager in the media practice of KPMG Consulting in New York.


Guidelines for PMJ Book Reviews

Selecting Books for Review
PMJ welcomes recommendations from project managers and others regarding books that may be of professional value to fellow PMI associates. Areas of potential interest include: new ideas about the theory, concepts, and techniques of project management; new approaches to technology and management; getting business results; competing in today's complex workplace; and global changes. Recommendations should include the title, author, and publisher, and a brief statement as to why the book should be considered for review. PMJ will select books for review and identify a reviewer. Individuals recommending books for review also may volunteer to write the review. However, individuals should not submit a review before PMJ has selected the book. PMJ receives many books from publishers and authors and cannot review them all.

Guidelines for Writers
Reviews should begin with a strong, brief opening paragraph that identifies the book and author, and tells the reader why the book is important. The review should not only describe the content of the book, but also what the content means; that is, why it is a contribution to the project management body of knowledge. Reviewers may include the following elements:
■ A summary of key or unique concepts;
■ Favorite quote, graphic, chart, etc.;
■ Important tips or guidelines;
■ New terms or phrases, such as "knowbots" or "teamocracy;"
■ Message from the book that should be remembered for future use, or should have been disclosed years ago.
Reviews should include the book's strong points and any weak points if this information will be useful to the reader. Reviews should be written in a conversational style that maintains academic rigor. Reviewers should avoid use of the first person ("I") and focus on the book and its contents. Reviewers should also avoid use of extensive lists as a means of describing or duplicating content. Instead, focus on what the content means to readers. Reviews should be no longer than 750 words in length (please use your computer word count to verify length of the review).

Reviews should include complete publishing information, if possible: title, author(s), publisher (city and state), year published, ISBN number, total pages, and price in U.S. dollars. PMJ will add any information that is not available to reviewers.

Reviews should be prepared using MS Word and may be submitted by e-mail (preferred) or on disk. Submissions should include the name, title, company, address, phone/fax/e-mail, and brief (one or two sentence) biosketch of the reviewer. Reviews should be submitted to:

Book Review Editor
PMI Publications
[email protected]

PMI reserves the right to edit all material submitted for publication.



Calendar of Events

September 16–17 Successful Project Management for IT Professionals. Atlanta, GA, USA. For more information, visit www.TLTraining.com or call +908-789-2800.

September 18–19 Successful Project Management for IT Professionals. Ft. Lauderdale, FL, USA. For more information, visit www.TLTraining.com or call +908-789-2800.

September 24–27 Third International NAISO Symposium on Engineering of Intelligent Systems and 2002 Workshop on Information Systems for Mass Customization. Sponsored by the Natural and Artificial Intelligence Systems Organization. University of Malaga, Malaga, Spain. For more information, visit www.icsc-naiso.org/conferences/eis2002/index.html.

September 25–27 NORDNET 2002—International Project Management Conference. Reykjavik, Iceland. Sponsored by the Project Management Association of Iceland (VSF). For more information, visit www.vsf.is/nordnet2002.

October 2–4 First UK International Performance Management Symposium. Bristol, U.K. For more information, visit www.mtc.aust.com/symposium/uk2002/main.html.

October 3–10 PMI® 2002 Annual Seminars & Symposium. Sponsored by the Project Management Institute. Henry B. Gonzales Convention Center, San Antonio, TX, USA. For more information, visit www.pmi.org/pmi2002.

October 10–11 Managing People in Projects. Princeton, NJ, USA. For more information, visit www.kepner-tregoe.com or call +609-921-2806.

October 10–11 Managing People in Projects. San Francisco, CA, USA. For more information, visit www.kepner-tregoe.com or call +609-921-2806.

October 14–16 Technical Project Management. New York, NY, USA. Sponsored by the American Management Association. For more information, visit www.amanet.org.

October 16–18 Technical Project Management. San Francisco, CA, USA. Sponsored by the American Management Association. For more information, visit www.amanet.org.

October 18 Project Management: The Journey, Not the Destination. PMI South Florida Chapter Professional Development Seminar 2002. Fort Lauderdale, FL, USA. For more information, visit www.southfloridapmi.org.

October 18 Puerto Rico Chapter Project Management Annual Symposium. Tropimar Convention Center, Isla Verde, Carolina, Puerto Rico. For more information, e-mail [email protected].

November 11–15 Project World. Navy Pier, Chicago, IL, USA. For more information, visit www.projectworld.com.

November 13–16 Professional Development Days. Minneapolis, MN, USA. Sponsored by the PMI Minnesota Chapter. For more information, visit www.pmi-mn.org.

November 14 PMI Information Technology & Telecommunications SIG member teleconference. For more information, visit www.pmi-ittelecom.org.

November 16 Project Management in the Government. Ritz Carlton, San Juan, Puerto Rico.



Notes for Authors

Scope
The Project Management Journal is the professional journal of the Project Management Institute (PMI®). The mission of PMJ is to advance the state of the art of project and program management knowledge. PMJ presents useful information on both theory and practice in the field of project management. Authors are encouraged to submit the following types of original manuscripts: descriptions of innovative practices; summaries of research results; reviews of current literature; surveys of current practices; critical analyses of concepts, theories, or practices; developments of concepts, theories, or practices; analyses of failure. Manuscript length should not exceed 12,000 words. The selection of manuscripts for publication is based on the extent to which they advance the knowledge and understanding of project management. PMI® neither approves nor disapproves of any data, claims, opinions, or conclusions presented.

Manuscript Review
PMJ uses a double-blind review process. The first review of every manuscript is performed by two or three anonymous referees (members of the PMJ Editorial Review Board). The manuscript is then either accepted, rejected, or returned to the author for revision (with reviewer comments furnished to the author). Revised manuscripts are sent to the Editor, who makes a final disposition. PMJ strives to respond to all authors within three months of the date the manuscript is received at the PMI® Publishing Division. Accepted manuscripts are subject to editorial changes. The author is solely responsible for all statements made in the manuscript, including editorial changes.

Original Publication
It is the policy of PMI® to be the sole, original publisher of manuscripts. Manuscripts that have been submitted simultaneously to other magazines or journals will be rejected outright and will not be reconsidered. Republication of a manuscript, possibly revised, that has been disseminated via conference proceedings or newsletter is permitted if the Editor judges there are significant benefits to be gained from publication.

Copyright
Upon acceptance of a manuscript, authors will be asked to transfer copyright of the article to the publisher. This transfer will ensure the widest possible dissemination of information. This transfer of copyright enables PMI® to protect the copyrighted material for the authors, but does not relinquish the author’s proprietary rights. The copyright transfer gives PMI® the exclusive rights to republish or reprint the manuscript in any form or medium, as well as to grant or refuse permission to third parties to republish all or parts of the manuscript.

Short Items
Short items do not need rigorous academic scrutiny and are not refereed. Upon receipt, however, these items become the copyright property of PMI®.
■ Opinion presents thoughtful discussion of project management issues.
■ Correspondence pertains to the project and program management profession, including references to literature, practice, and scholarship as well as discussion and replies related to articles published in PMJ.
■ Book Reviews express opinion about books related to the project management profession, or about general management or technical books that cover topics of particular value to the project manager.
■ Calendar of Events offers notices of forthcoming meetings, conferences, and calls for papers.

Submissions
Send manuscripts to: Editor, Project Management Journal, Four Campus Boulevard, Newtown Square, PA 19073-3299 USA. Submit five copies of the manuscript on 8 1/2 x 11 inch paper, double spaced throughout, and printed on one side only; and/or send an electronic copy via e-mail to [email protected]. Manuscripts should include the following in the order listed:
■ A title page that includes the title of the manuscript and each author’s name, affiliation, mailing address, and phone, fax, and e-mail address. Correspondence will be directed only to the first author listed.
■ An abstract of 100 words or less that outlines the purpose, scope, and conclusions of the manuscript, and selected keywords.
■ Text (use headings and no more than two levels of subheadings). To permit objective reviews by two referees, the abstract and first page of the text should not reveal the authors and/or affiliations, but only the manuscript title.
■ References.
■ Figures and Tables (titled, numbered in Arabic, with captions, each on a separate sheet, preferred location indicated within the body of the text).
■ Biographical details of each author.

Upon manuscript acceptance, authors must also provide a final electronic version with the information above, a black-and-white passport-style professional photograph, and a signed copyright agreement.

Computer-Generated Text and Illustrations
Authors are requested to submit the final text and illustrations on 3.5" IBM/compatible disks or via e-mail. As with the requirements for manuscript submission, the main text, list of references, table and figure captions, and author biographies should be stored in separate text files with clearly identifiable file names. Keep the layout of the text as simple as possible and save text in its original application format and/or Rich Text Format (RTF). It is essential that the name and version of the word processing program and the format of the text files are clearly indicated (example: Word for Windows 95 doc).

Upon acceptance of the manuscript for publishing, authors will also be asked to provide illustrations in computer format. Preferred formats are CorelDraw, Macromedia Freehand, Freelance Graphics, PowerPoint, Harvard Graphics, Harvard Chart, or Canvas. If one of these formats is unavailable, submit in Windows Metafile (WMF), Adobe Illustrator (AI or EPS), or Encapsulated PostScript (EPS). Please use default/standard extensions. If illustrations are available in paper form only, they will be recreated electronically for publication. Contact the PMI® Publishing Division for further details.

Style of Text
You should write in clear and concise English. Spelling should follow Webster’s New World Dictionary. Authors whose native tongue is not English are assured that in-house editorial attention to their manuscript will improve clarity and acceptability to readers. For questions regarding style and format of text, refer to the Publication Manual of the American Psychological Association, Fifth Edition.

References
For questions regarding reference format, refer to the Publication Manual of the American Psychological Association, Fifth Edition, Bibliographic Forms for Journal Articles. References used in the text should be identified by author name and publication date in parentheses, e.g., (Cleland & King, 1983), and listed alphabetically at the end of the manuscript. Page numbers should be cited for all quotations. Follow the format examples shown below:

Baker, B. (1993). The project manager and the media: Some lessons from the stealth bomber program. Project Management Journal, 24(3), 11–14.

Cleland, D.I., & King, W.R. (1983). Systems analysis and project management. New York: McGraw-Hill.



Hartley, J.R. (1992). Concurrent engineering. Cambridge, MA: Productivity Press.

Please ensure that references are complete and that they include, where relevant, the author’s name, article or book title, volume and issue number, publisher, date, and page reference.

The use of page footnotes should be kept to a minimum. Footnotes should be numbered consecutively and listed at the end of the text as endnotes.

Keywords
Keywords categorize your manuscript. They cover project management methodologies and processes, tools and techniques, PMBOK® Guide knowledge areas, industries, types of projects, and geography. Please list three or four keywords that best categorize your manuscript. Choose from the following list of suggested keywords (this is not a comprehensive list) or you may use your own.

Accounting, Activity Duration Estimating, Agriculture, Arrow Diagramming Method, Baselines, Benchmarking, Benefit/Cost Analysis, Budgeting, Change Control, Communications Management, Concurrent Engineering, Configuration Management, Conflict Resolution, Constraints, Construction, Contingency Planning, Contract Closeout, Cost Estimating, Cost Management, Critical Path, Delegation, Deliverables, Design, Documentation, Earned Value, Engineering, Environment, Estimating, Fast-Tracking, Feedback, Finance, Float, Funding, Human Resource Management, Information Systems, Integration Management, Large Project, Leadership, Life-cycle Costing, Manufacturing, Management Skills, Matrix Organization, Milestones, Mitigation, Monte Carlo Analysis, Multiproject Planning, Negotiating, Networking, New Product Development, Organizational Planning, Organizational Structure, Parametric Modeling, Performance Reporting, Pharmaceuticals, Procurement Management, Productivity, Project Life Cycle, Project Management Software, Project Plan Development, Quality Assurance, Quality Management, Reengineering, Resource Planning, Responsibility, Risk Management, Risk Response Development, Schedule Development, Schedule Control, Scope Management, Scope Definition, Scope Change Control, Simulation, Staff Acquisition, Stakeholders, Standards, Statistical Sampling, Team Development, Time Management, Tools, Training, Transportation, Utilities, Variance, Virtual Organization, Work Breakdown Structure, Work Packages

Checklist
■ 5 copies of manuscript or sent via e-mail
■ IBM/PC compatible disk (upon acceptance)
■ 100-word abstract
■ Illustrations
■ Author biographies
■ Black-and-white passport-style professional author photographs (upon acceptance)
■ Signed copyright agreement (upon acceptance)

Proofs
Correspondence and proofs for correction will be sent to the first-named author unless otherwise indicated. Copyediting of manuscripts is performed by PMI® staff. The authors are asked to check proofs for typographical errors and to answer queries from editors. To improve publication times, it is important that proofs be returned within three days. Authors may be charged for extensive corrections at the proofing stage.

Copies and Reprints
Authors will receive ten copies of the Journal free of charge. Additional copies of the Journal and/or article reprints can be ordered at any time from the PMI® Publishing Division.

Project Management Institute Publishing Division
Four Campus Boulevard
Newtown Square, PA 19073-3299 USA
Tel: +610-356-4600
Fax: +610-356-4647
E-mail: [email protected]

Index of Advertisers

Mindjet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C2
Primavera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C4

CALL FOR PAPERS

Project Management Journal solicits unpublished papers in project management and allied fields.

The editor of the Project Management Journal is actively seeking submissions of previously unpublished research papers, commentaries, and dissertations in project management as well as a number of related disciplines, including:

Organizational behavior
Organizational development
Software engineering
Construction engineering
Human resource management
Communications
General management

Papers from researchers and practitioners in allied fields should have a project management slant.

For more information on publishing in PMJ, please see the Notes for Authors on the PMI Web site at www.pmi.org.

Questions about submissions may be addressed to the PMJ Editor at [email protected] or via mail to:

PMJ Editor
PMI Publishing Dept.
Four Campus Boulevard
Newtown Square, PA 19073 USA