
The ROI of Systems Engineering: Some Quantitative Results for Software-Intensive Systems
Barry Boehm,1 Ricardo Valerdi,2,* and Eric Honour3

1 Center for Systems & Software Engineering, University of Southern California, Los Angeles, CA 90089

2 Massachusetts Institute of Technology, Cambridge, MA 02139

3 University of South Australia & Honourcode, Inc., Cantonment, FL 32533


Received 1 August 2007; Accepted 16 December 2007, after one or more revisions
Published online in Wiley InterScience (www.interscience.wiley.com)
DOI 10.1002/sys.20096

ABSTRACT

This paper presents quantitative results on the return on investment of systems engineering (SE-ROI) from an analysis of the 161 software projects in the COCOMO II database. The analysis shows that, after normalizing for the effects of other cost drivers, the cost difference between projects doing a minimal job of software systems engineering—as measured by the thoroughness of their architecture definition and risk resolution—and projects doing a very thorough job was 18% for small projects and 92% for very large software projects as measured in lines of code. The paper also presents applications of these results to project experience in determining “how much up front systems engineering is enough” for baseline versions of smaller and larger software projects, for both ROI-driven internal projects and schedule-driven outsourced systems of systems projects. © 2008 Wiley Periodicals, Inc. Syst Eng

Key words: return on investment; systems engineering measurement; COCOMO; COSYSMO; value of systems engineering; systems architecting


* Author to whom all correspondence should be addressed (e-mail: [email protected]; [email protected]; [email protected]).

Systems Engineering © 2008 Wiley Periodicals, Inc.


1. INTRODUCTION: MOTIVATION AND CONTEXT

1.1. Motivation: The Need for a Business Case for Systems Engineering Investments

How much systems engineering is enough? Some decision-makers draw on analogies such as, “We pay an architect 10% of the cost of a building, so that’s what we’ll pay for systems engineering.” But is 10% way too little, or way too much? Many cost-cutting decision-makers see systems engineering as an activity that doesn’t directly produce the product, and as a result try to minimize its cost. But this often leads to an increased amount of late rework and embarrassing overruns.

Despite its recognition since the 1940s, the field of systems engineering is still not as well understood as the much later field of software engineering. It is defined by the International Council on Systems Engineering [Crisp, 2005] as “an interdisciplinary approach and means to enable the realization of successful systems,” with further explanation clarifying that the field “focuses on defining … required functionality early,” “integrates all disciplines and specialty groups into a team effort” with “structured development from concept to production to operation,” and “considers both business and technical needs.” The definition is purposefully vague, focusing on the thought processes, because successful systems engineering practitioners vary widely in the application of those processes. The field includes elements of both technical and management expertise—technical definition and control to architect the structures that will become the system, and management leadership to motivate and guide the interdisciplinary effort necessary to create the system.

Despite the lack of full understanding, it is clear that systems engineering is viewed as an essential field with high value, one whose value increases significantly with the size and complexity of the development effort. Evidence for this view is contained in the high salaries and leadership roles entrusted to systems engineers. An exploration of the ontology (shared understanding) of systems engineering [Honour and Valerdi, 2006] shows the following elements are widely considered to be part of the field:

• Mission/Purpose Definition. Describing the mission and quantifying the stakeholder preferences.

• Requirements Engineering. Creation and management of requirements.

• System Architecting. Synthesizing a design for the system in terms of its component elements and their relationships. Component elements may include software, hardware, or process.

• System Implementation. System-level efforts to integrate the components of the first system(s) into a configuration that meets the defined mission or purpose while complying with requirements.

• Technical Analysis. Multidisciplinary analysis focused on system emergent properties, usually used either to predict system performance or to support decision tradeoffs.

• Technical Management/Leadership. Efforts to guide and coordinate the technical personnel toward the appropriate completion of technical goals, including among others formal risk management.

• Scope Management. Technical definition and management of acquisition and supply issues to ensure that a project performs only the tasks necessary.

• Verification and Validation. Proof of the system through comparison with requirements (verification) and comparison with the intended mission (validation).

Using data from 25 years of calibration and analysis of the Constructive Cost Model (COCOMO) collection of project data, this paper explores the business case for systems engineering in terms of system architecting and risk resolution. We follow Rechtin [1991] in defining systems architecting as including many of the key elements of systems engineering, including definition and validation of the system’s operational concept, requirements, and life cycle plans.

Recent systems engineering research is beginning to quantify the value of the field [Honour and Mar, 2002]. Such quantification is one step toward better understanding the field. In a pragmatic way, however, the quantification also seeks to provide useful tools for management decisions. Systems engineering has suffered from a lack of productivity measures. Because the field includes highly varied work elements, and because many of the work elements are subjective in nature, no effective productivity measures have yet been devised. As a result, the field has had decade-long cycles of acceptance and rejection. While systems engineers have been retained as technical leaders, funding of the efforts has varied widely. Research on 43 systems projects [Honour, 2004a] shows that systems engineering efforts varied from less than 1% of the project total funding to greater than 25%. Survey participants could not explain the variation, nor could they justify it. In many cases, participant emotions were raw about the quality level allowed by lower funding profiles. This research showed a distinct correlation between the systems engineering effort and the cost and schedule success, as shown in Figures 1 and 2.

In a more general survey [Honour, 2004b], anecdotal evidence from seven separate research efforts provided the following conclusions:

• Better technical leadership correlates to program success.

• Better/more systems engineering correlates to shorter schedules by 40% or more, even in the face of greater complexity.

• Better/more systems engineering correlates to lower development costs, by 30% or more.

• Optimum level of systems engineering is about 15% of a total development program.

• Programs typically operate at about 6% systems engineering.

(See Honour [2004b] for the list of references.)

Such heuristics are helpful, but fall short of the kind of information needed by a manager making budget decisions. Systems engineering needs definitive information about the levels and kinds of tasks that matter to the results of a project.

Figure 1. Cost overrun as a function of SE effort. [Color figure can be viewed in the online issue, which is available at www.interscience.wiley.com.]

Figure 2. Schedule overrun as a function of SE effort. [Color figure can be viewed in the online issue, which is available at www.interscience.wiley.com.]


INCOSE has made the determination of the return on investments in systems engineering a high-priority research topic in its Vision 2020 document [Crisp, 2005]. A partial answer to this question in the domain of software-intensive systems development is provided below.

1.2. Context: Analysis of Contributing Factors to Software Development Productivity

Most of the quantitative analyses done to date on SE-ROI have shown statistical correlations between the percentage of system development cost and development time devoted to systems engineering and the percentage of additional cost and time needed to produce a satisfactory system. This is not a direct measure of business value or mission effectiveness, but it is a good proxy.

In general, though, the data available for these analyses have not included data that could help determine how much of the correlation is due to systems engineering effectiveness or to other factors such as requirements volatility, contractual budget and schedule stretchouts, domain experience, or personnel capability.

The 161 software projects in the COCOMO II database, collected over a 25-year period, contain data on these attributes as part of each project’s report on 23 size, product, process, project, and personnel factors. Its attribute for systems engineering effectiveness is the degree of thoroughness of the project’s architecture definition and risk resolution by its Preliminary Design Review or equivalent, based on seven factors discussed below.

Emerging models for estimating systems engineering cost and time such as COSYSMO [Valerdi, 2005] have databases including many of these attributes, but they are limited to addressing the cost aspect of ROI since they only estimate systems engineering costs and not their effects on development. The cost and schedule data in the COCOMO II database include both software systems engineering and software development effort, allowing for analysis of their corresponding effect on cost.

2. FOUNDATIONS OF THE COCOMO II ARCHITECTURE AND RISK RESOLUTION (RESL) FACTOR

2.1. Experiential Origins of the RESL Factor

The original Constructive Cost Model (COCOMO) for software cost and time estimation [Boehm, 1981] did not include a factor for systems engineering thoroughness, or any factors reflecting management control over a project’s diseconomies of scale. The closest factor to systems engineering thoroughness was called Modern Programming Practices, which included such practices as top-down development, structured programming, and design and code inspections. Diseconomies of scale were assumed to be built into a project’s development mode: a low-criticality project had an exponent of 1.05 relating software project size to project development effort. This meant that doubling the product size increased effort by a factor of 2.07. A mission-critical project had an exponent of 1.20, which meant that doubling product size increased effort by a factor of 2.30.
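As a quick illustration of these diseconomies of scale, the following minimal Python sketch shows how the size exponent turns a doubling of product size into more than a doubling of effort; only the two exponents come from the COCOMO modes cited above, and the functional form effort = a · size^exponent is the standard COCOMO shape.

```python
def effort_multiplier_for_doubling(exponent: float) -> float:
    """Factor by which effort grows when product size doubles,
    for an effort model of the form effort = a * size**exponent."""
    return 2.0 ** exponent

# Size exponents of the original COCOMO development modes cited in the text.
for mode, b in [("low-criticality", 1.05), ("mission-critical", 1.20)]:
    print(f"{mode}: doubling size multiplies effort by "
          f"{effort_multiplier_for_doubling(b):.2f}")
# Prints 2.07 and 2.30, the factors quoted above.
```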

Subsequent experience and analyses at TRW during the 1980s indicated that some sources of software development diseconomies of scale were management controllables, and that thoroughness of systems engineering was one of the most significant sources. For example, some large TRW software projects that did insufficient software architecture and risk resolution had very high rework costs [Boehm, 2000], while similar smaller projects had smaller rework costs.

2.1.1. Reducing Software Rework via Architecture and Risk Resolution

Analysis of project defect tracking cost-to-fix data (a major source of rework costs) showed that 20% of the defects accounted for 80% of the rework costs, and that these 20% were primarily due to inadequate architecture definition and risk resolution.

For example, in TRW Project A in Figure 3, most of the rework was the result of development of the network operating system to a nominal-case architecture, and finding that the systems engineering of the architecture neglected to address the risk that the operating system architecture would not support the project requirements of successful system fail-over if one or more of the processors in the network failed to function. Once this was discovered during system test, it turned out to be an “architecture-breaker” causing several sources of expensive rework to the already-developed software. A similar “architecture-breaker,” the requirement to handle extra-long messages (over 1 million characters), was the cause of most of the rework in Project B, whose original nominal-case architecture assumed that almost all messages would be short and easy to handle with a fully packet-switched network architecture.

Figure 3. Steeper cost-to-fix for high-risk elements.

Earlier, analyses of cost-to-fix data at IBM [Fagan, 1976], GTE [Daly, 1977], Bell Labs [Stephenson, 1976], and TRW [Boehm, 1976] found consistent results showing the high payoff of finding and fixing defects as early as possible. As seen in Figure 4, relative to an effort of 10 units to fix a requirements defect in the Code phase, fixing it in the Requirements phase involved only about 2 units of effort, while fixing it in the Operations phase involved about 100 units of effort, sometimes going as high as 800 units. These results caused TRW to develop policies requiring thorough risk analyses of all requirements by the project’s Preliminary Design Review (PDR). With TRW’s adoption of the Ada programming language and associated ability to verify the consistency of Ada module specifications, the risk policy was extended into an Ada Process Model for software, also requiring that the software architecture pass an Ada compiler module consistency check prior to PDR [Royce, 1998].

2.1.2. A Successful Example: CCPDS-R

The apparent benefits of fixing requirements at early phases of the life cycle motivated subsequent projects to perform much of systems integration before providing the module specifications to programmers for coding and unit test. As a result of this and the elimination of architecture risks prior to Preliminary Design Review, subsequent projects were able to significantly reduce late architecture-breaker rework and the steep slope of the cost-to-fix curve. A good example was the Command Center Processing and Display System-Replacement (CCPDS-R) project described in Royce [1998], whose flattened cost-to-fix curve is shown in Figure 5. It delivered over a million lines of Ada code within its original budget and schedule. Its PDR was held in month 14 of a 35-month initial-delivery schedule and included about 25% of the initial-delivery budget, including development and validation of its working high-risk software, such as its network operating system and the key portions of its user interface software.

2.2. The RESL Factor in Ada COCOMO and COCOMO II

The flattened cost-to-fix curve for large projects exemplified in Figure 5 confirmed that increased emphasis on architecture and risk resolution led to reduced rework and diseconomies of scale on large projects. In 1987–1989, TRW developed a version of COCOMO for large mission-critical projects using the Ada Process Model, called Ada COCOMO [Boehm and Royce, 1989]. It reduced the 1.20 exponent relating product size to project effort as a function of the degree that the project could follow the Ada Process Model. This was difficult to do on some projects required by government standards and contracts to use sequential waterfall-model processes. Thus, it made reduction of software project diseconomies of scale via architecture and risk resolution operate as a management-controllable factor, and helped government and industry people evolve toward more risk-driven, concurrently engineered processes rather than documentation-driven processes.

2.2.1. Resulting Risk-Driven Concurrent Engineering Software Process Models

The Ada Process Model and the CCPDS-R project showed that it was possible to reinterpret sequential waterfall process model phases, milestones, and reviews to enable projects to perform risk-driven concurrent engineering of their requirements, architecture, and plans, and to apply review criteria focusing on the compatibility and feasibility of these artifacts.

Subsequently, these practices were elaborated into general software engineering—and systems engineering for software-intensive systems—process models emphasizing risk-driven concurrent engineering and associated milestone review pass-fail criteria. These included the Rational Unified Process (RUP) [Royce, 1998; Jacobson, Booch, and Rumbaugh, 1999; Rumbaugh, Jacobson, and Booch, 2004; Kruchten, 2000], and the USC Model-Based (System) Architecting and Software Engineering (MBASE) model [Boehm and Port, 1999, 2001], which integrated the risk-driven concurrent engineering spiral model [Boehm et al., 1998] with the Rechtin concurrent engineering Systems Architecting approach [Rechtin, 1991; Rechtin and Maier, 1997]. Both RUP and MBASE used a set of anchor point milestones, including the Life Cycle Objectives (LCO) and Life Cycle Architecture (LCA), as their model phase gates. These milestones were originally determined in a series of workshops involving the USC Center for Software Engineering and its 30 government and industry affiliates, including Rational, Inc., as phase boundaries for COCOMO II cost and schedule estimates [Boehm, 1996]. Table I summarizes the pass/fail criteria for the LCO and LCA anchor point milestones.


Figure 4. Risk of delaying risk management.


More recently, the MBASE approach has been extended into an Incremental Commitment Model (ICM) for overall systems engineering. It uses the anchor point milestones and feasibility rationales to synchronize and stabilize the concurrent engineering of the hardware, software, and human factors aspects of a system’s architecture, requirements, operational concept, plans, and business case [Pew and Mavor, 2007; Boehm and Lane, 2007]. A strong feasibility rationale will include results of architecture tradeoff and feasibility analyses such as those discussed in [Clements, Kazman, and Klein, 2002] and [Maranzano et al., 2005].

2.2.2. The RESL Factor in COCOMO II

The definition of the COCOMO II software cost estimation model [Boehm et al., 2000] was evolved during 1995–1997 by USC and its 30 industry and government affiliates. Its diseconomy-of-scale factor is a function of RESL and four other scale factors, two of which are also management controllables: Capability Maturity Model maturity level and developer-customer-user team cohesion. The remaining two are Precedentedness and Development Flexibility. The definition of the RESL rating scale was elaborated into the seven contributing factors shown in Table II. As indicated in Table I, “architecture and risk resolution” includes the concurrent engineering of the system’s operational concept, requirements, plans, business case, and feasibility rationale as well as its architecture, thus covering most of the key elements that are part of the systems engineering function.

The values of the rating scale for the third characteristic, percent of development schedule devoted to establishing architecture, were obtained through a behavioral assessment of the range of possible values that systems engineers might face. The minimum expected level of effort spent on architecting was assumed to be 5%, or 1/20, of the total project effort. To operationalize the remaining rating levels, a similar logic was applied. It was assumed that the subsequent rating levels were 10% (1/10), 17% (1/6), 25% (1/4), or 33% (1/3) of the project effort. In the best case, 40% or more effort would be invested in architecting.

Each project contributing data to the COCOMO II database used Table II as a guide for rating its RESL factor. The ratings for each row could have equal or unequal weights as discussed between data contributors and COCOMO II researchers in data collection sessions. The distribution of RESL factor ratings of the 161 projects in the COCOMO II database is approximately a normal distribution, as shown in Figure 6.

Table I. Anchor Point Milestone Pass/Fail Feasibility Rationales

Figure 5. Reducing software cost-to-fix: CCPDS-R (adapted from Royce [1998]).


The contribution of a project’s RESL rating to its diseconomy of scale factor was determined by a Bayesian combination of expert judgment and a multiple regression analysis of the 161 representative software development projects’ size, effort, and cost driver ratings in the COCOMO II database. These include commercial information technology applications, electronic services, telecommunications, middleware, engineering and science, command and control, and real-time process control software projects. Their sizes range from 2.6 thousand equivalent source lines of code (KSLOC) to 1300 KSLOC, with 13 projects below 10 KSLOC and 5 projects above 1000 KSLOC. Equivalent lines of code account for the software’s degrees of reuse and requirements volatility.

The expert-judgment means and standard deviations of the COCOMO II cost driver parameters were treated as a priori knowledge in the Bayesian calibration, and the corresponding means and standard deviations resulting from the multiple regression analysis of the historical data were treated as an a posteriori update of the parameter values. The Bayesian approach produces a weighted average of the expert and historical data values, which gives higher weights to parameter values with smaller standard deviations. The detailed approach and formulas are provided in Chapter 4 of the COCOMO II text [Boehm et al., 2000].

Figure 6. RESL ratings for 161 projects in the COCOMO database.

Table II. RESL Rating Scale
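The precision-weighting idea can be sketched as follows. This is a minimal one-parameter Python analogue of the principle that the estimate with the smaller standard deviation receives the larger weight; the actual COCOMO II Bayesian calibration formulas (Chapter 4 of the COCOMO II text) operate on the full regression covariance structure, and the numbers below are hypothetical, not calibration values from the paper.

```python
def bayesian_combine(mu_expert: float, sd_expert: float,
                     mu_data: float, sd_data: float) -> float:
    """Precision-weighted average of an expert-judgment estimate and a
    regression (historical-data) estimate for a single model parameter."""
    w_expert = 1.0 / sd_expert**2   # precision of the a priori (expert) value
    w_data = 1.0 / sd_data**2       # precision of the a posteriori (data) value
    return (w_expert * mu_expert + w_data * mu_data) / (w_expert + w_data)

# Hypothetical numbers for illustration only:
print(bayesian_combine(mu_expert=0.07, sd_expert=0.02, mu_data=0.08, sd_data=0.04))
# -> 0.072, closer to the expert value because it has the smaller standard deviation.
```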

2.2.3. RESL Calibration Results

Calibrating the RESL scale factor was a test of the hypothesis that proceeding into software development with inadequate architecture and risk resolution results (i.e., inadequate systems engineering results) would cause project effort to increase due to the software rework necessary to overcome the architecture deficiencies and to resolve the risks late in the development cycle—and that the rework cost increase percentage would be larger for larger projects.

The regression analysis to calibrate the RESL factor and the other 22 COCOMO II cost drivers confirmed this hypothesis with a statistically significant result. The calibration results determined that, for this sample of 161 projects, the difference between a Very Low RESL rating and an Extra High rating was an extra contribution of 0.0707 added to the exponent relating project effort to product size. This translates to an extra 18% effort for a small 10 KSLOC project, and an extra 92% effort for an extra-large 10,000 KSLOC project.
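The arithmetic behind these two percentages follows directly from the calibrated exponent difference. The short Python sketch below assumes only the 0.0707 exponent delta quoted above and an effort model proportional to size raised to that exponent; the 10 and 10,000 KSLOC values match the text, and the intermediate sizes are computed from the same formula.

```python
def extra_effort_percent(size_ksloc: float, exponent_delta: float = 0.0707) -> float:
    """Percentage increase in project effort when the size exponent grows by
    exponent_delta (Very Low vs. Extra High RESL), for effort ~ size**exponent."""
    return (size_ksloc ** exponent_delta - 1.0) * 100.0

for size in (10, 100, 1000, 10000):
    print(f"{size:>6} KSLOC: +{extra_effort_percent(size):.0f}% effort")
# 10 KSLOC -> ~+18%, 10,000 KSLOC -> ~+92%, matching the calibration results above.
```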

Figure 7 summarizes the results of the analysis. It shows that, at least for this sample of 161 software projects, the difference between a project doing a minimal job of systems engineering—as measured by its degree of architecture and risk resolution—and one doing a very thorough job is an increasingly large increase in overall project effort and cost, independent of the effects of the other 22 COCOMO II cost drivers. This independence is because the regression analysis also accounts for variations in effort due to the other 22 factors in its statistical results. The level of statistical significance of the RESL parameter was above 1.96, the critical value for an analysis of 23 variables and 161 data points, as shown in the Appendix. Moreover, the pairwise correlation analysis shows that no variable was correlated more than 0.4 with RESL.

3. RESULTING ROI FOR SOFTWARE SYSTEMS ENGINEERING IMPROVEMENT INVESTMENTS

Investing in improved software systems engineering involves a stronger, more focused effort on risk-driven concurrent engineering of software system requirements, architecture, plans, budgets, and schedules. It also requires assurance of their consistency and feasibility via prototyping, modeling, analysis, and success-critical stakeholder review and commitment to support the next phase of project activity, as discussed at the end of Section 2.1.

The results of the COCOMO II calibration of the RESL factor shown in Figure 7 enable us to determine the ROI for such investments, in terms of the added effort required for architecture and risk resolution, and the resulting savings for various sizes of software systems measured in KSLOC. A summary of these results is provided in Table III for a range of software system sizes from 10 to 10,000 KSLOC.

The percentage of time invested in architecting is provided for each RESL rating level together with:

• Level of effort. The numbers reflect the fraction of the average project staff level on the job doing systems engineering if the project focuses on systems engineering before proceeding into development for 5%, 10%, 17%, etc. of its planned schedule; it looks roughly like a Rayleigh curve observed in the early phases of software projects [Boehm, 1981].

• RESL investment cost %. The percent of proposed budget allocated to architecture and risk resolution. This is calculated by multiplying the RESL percentage of calendar time invested by the fraction of the average level of project staffing incurred for each rating level. For example, the RESL investment cost for the Very Low case is calculated as: 5 ∗ 0.3 = 1.5.

• Incremental investment. The difference between the RESL investment cost % of the nth rating level and that of the (n – 1)th level. The incremental investment for the Low case is calculated as: 4 – 1.5 = 2.5%.

• Scale factor exponent for rework effort. The exponential effect of the RESL driver on software project effort as calibrated from 161 projects.

Figure 7. Added cost of minimal software systems engineering.


Return on Investment values are calculated for five different rating scale levels across four different size systems (a computational sketch follows this list) through the calculation of:

• Added effort. Calculated by applying the scale factor exponent for rework (i.e., 1.0707) to the size of the system (i.e., 10 KSLOC) and calculating the added effort introduced. For the 10 KSLOC project, the added effort for the Very Low case is calculated as follows: Added effort = (10^1.0707 − 10) / 10 ∗ 100 = 17.7.

• Incremental benefit. The difference between the added effort for the nth case and the (n – 1)th case. The incremental benefit for the Low case is calculated as: 17.7 – 13.9 = 3.8.

• Incremental cost. Same as the value for incremental investment.

• Incremental ROI. Calculated as the difference between the benefit and the cost, divided by the cost. For the 10 KSLOC project, the incremental ROI for the Low case is calculated as follows: ROI = (3.8 − 2.5) / 2.5 = 0.52.
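The Python sketch below strings these steps together for the four system sizes. The architecting time percentages and the 0.0707 exponent range come from the text above; the per-level scale-factor deltas are assumed to step down in equal increments from 0.0707 to 0, and the later staffing fractions are hypothetical placeholders in the spirit of the 0.3 and 0.4 values implied by the worked examples, so the output approximates rather than reproduces Table III.

```python
# Rating levels, architecting schedule % (from the RESL rating scale), and the exponent
# delta each level adds relative to Extra High (assumed equal steps from 0.0707 to 0).
levels = ["Very Low", "Low", "Nominal", "High", "Very High", "Extra High"]
arch_time_pct = [5, 10, 17, 25, 33, 40]
exp_delta = [0.0707 * (5 - i) / 5 for i in range(6)]  # 0.0707, 0.0566, ..., 0.0

# Average staffing fraction during architecting. The 0.3 and 0.4 values follow from the
# worked examples above (5 * 0.3 = 1.5 and 10 * 0.4 = 4); the rest are placeholders.
staff_fraction = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8]

def added_effort_pct(size_ksloc: float, delta: float) -> float:
    """Rework penalty, as % of nominal effort, for stopping at this RESL level."""
    return (size_ksloc ** (1 + delta) - size_ksloc) / size_ksloc * 100

for size in (10, 100, 1000, 10000):
    print(f"\n{size} KSLOC project")
    for i in range(1, len(levels)):
        incr_cost = (arch_time_pct[i] * staff_fraction[i]
                     - arch_time_pct[i - 1] * staff_fraction[i - 1])
        incr_benefit = (added_effort_pct(size, exp_delta[i - 1])
                        - added_effort_pct(size, exp_delta[i]))
        incr_roi = (incr_benefit - incr_cost) / incr_cost
        print(f"  {levels[i - 1]:>9} -> {levels[i]:<10}: incremental ROI = {incr_roi:5.2f}")
```

With these assumptions, the Very Low to Low step for the 10 KSLOC project comes out at roughly 0.5, close to the 0.52 worked example above, while the larger sizes show much higher early-step ROIs and the smallest projects show the negative increments discussed below.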

Table III. Software Systems Engineering/RESL Return on Investment


It is evident that architecting has a decreasing amount of incremental ROI as a function of RESL effort invested. Larger projects enjoy higher levels of ROI, which supports the idea that the point of diminishing returns (negative incremental ROI) is dependent on the size of the system. These results are presented graphically in Figure 8.

4. DETERMINING “HOW MUCH ARCHITECTING IS ENOUGH”

The results above can also be used in the increasingly frequent situation of determining “how much architecting is enough” for schedule-driven software-intensive systems projects involving outsourcing. Frequently, such projects are in a hurry to get the suppliers on the job, and spend an inadequate amount of time in system architecture and risk resolution before putting supplier plans and specifications into their Requests for Proposals (RFPs). As a result, the suppliers will frequently deliver incompatible components, and any earlier schedule savings will turn into schedule overruns due to rework, especially as shown above for larger projects. On the other hand, if the project spends too much time on system architecting and risk resolution, not enough time is available for the suppliers to develop their system components. This section shows how the COCOMO II RESL factor results can be used to determine an adequate architecting “sweet spot” for various sizes of projects.

The full set of effects for each of the RESL rating levels and corresponding architecting investment percentages are shown in Table IV for projects of size 10, 100, and 10,000 KSLOC. Also shown are the corresponding total-delay-in-delivery percentages, obtained by adding the architecting investment time to the rework time, assuming constant team size during rework to translate added effort into added schedule. Thus, in the bottom two rows of Table IV, we can see that the added investments in architecture definition and risk resolution are more than repaid by savings in rework time for a 10,000 KSLOC project up to an investment of 33%, after which the total delay percentage increases.
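A minimal Python sketch of this total-delay calculation follows. Because the Table IV delay and rework percentages are not reproduced in this extract, the rework values below are hypothetical placeholders; the sketch only illustrates the mechanics of adding architecting time to rework time and picking the investment level with the smallest total delay.

```python
def total_delay(arch_invest_pct, rework_pct):
    """Total delivery delay (% of nominal schedule): architecting investment time plus
    rework time, with constant team size so added effort maps to added schedule."""
    return [a + r for a, r in zip(arch_invest_pct, rework_pct)]

def sweet_spot(arch_invest_pct, rework_pct):
    """Architecting investment level giving the minimum total delay."""
    delays = total_delay(arch_invest_pct, rework_pct)
    best = min(range(len(delays)), key=delays.__getitem__)
    return arch_invest_pct[best], delays[best]

# Investment levels (% of schedule) from the RESL rating scale, paired with hypothetical
# rework percentages for a very large project (placeholders, not the Table IV values),
# chosen so that rework savings eventually stop outpacing the added investment.
invest = [5, 10, 17, 25, 33, 40]
rework = [90, 68, 48, 30, 14, 10]
print(sweet_spot(invest, rework))  # -> (33, 47): minimum total delay at a 33% investment
```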

This identifies the minimum-delay architecting investment “sweet spot” for a 10,000 KSLOC project to be around 33%. Figure 9 shows the results of Table IV graphically. It indicates that for a 10,000 KSLOC project, the sweet spot is actually a flat region around a 37% architecting investment. For a 100 KSLOC project, the sweet spot is a flat region around 20%. For a 10 KSLOC project, the sweet spot is at around 5% investment in architecting. The term “architecting” is adapted from Rechtin’s Systems Architecting book [Rechtin, 1991] to include the overall concurrent effort involved in developing and documenting a system’s operational concept, requirements, architecture, life-cycle plan, and resulting feasibility rationale. Thus, the results in Table IV and Figure 9 confirm that investments in architecting are less valuable for small projects, but increasingly necessary as the project size increases.

Table IV. Effect of Architecting Investment Level on Total Project Delay

Figure 8. Incremental software systems engineering ROI.

However, the values and sweet spot locations presented are for nominal values of the other COCOMO II cost drivers and scale factors. Projects in different situations will find that “their mileage may vary.” For example, a 10 KSLOC safety-critical project—with a corresponding Very High RESL rating—will find that its sweet spot will be upwards and to the right of the nominal-case 10 KSLOC sweet spot. A 10,000 KSLOC highly volatile project—with a corresponding Requirements Volatility factor of 50%—will find that its sweet spot will be higher and to the left of the nominal-case 10,000 KSLOC sweet spot, due to costs of requirements, architecture, and other product rework. Also, various other factors can affect the probability and size of loss associated with the RESL factor, such as staff capabilities, tool support, and technology uncertainties [Boehm et al., 2000]. And these tradeoffs consider only project delivery time and productivity, not the effects of delivered system shortfalls on business value, which would push the sweet spot for safety-critical projects even further to the right.

5. CONCLUSIONS

There is little doubt that doing the right amount of systems engineering has value. To date, the difficulty has been to determine how much value. Better understanding of the field requires that the effect of systems engineering tasks be quantified. Such quantification assists managers to set appropriate budgets, and it assists practitioners to select the appropriate tasks for a project of given characteristics.

Evidence has been provided for the return on investment for systems engineering in the context of software-intensive systems. While the numbers may be different for non-software-intensive systems, we feel that the general framework provides significant evidence that larger systems enjoy larger systems engineering ROI values compared to smaller systems, and that the most cost-effective amount of systems engineering has an inherent sweet spot based on the size of the system.

In this review of data from 25 years of COCOMO software projects, the ROI of some systems engineering tasks is quantified. The RESL parameter added in COCOMO II specifically addresses the degree to which a software project achieves (or has plans and resources to achieve) a thoroughly defined architecture package (also including its operational concept, requirements, and plans) along with risks properly identified and managed, all of which are major characteristics of the systems engineering effort that defines the software. The calibration of the RESL parameter provides data about the ROI of that systems engineering effort that is based on 161 project submissions.

Therefore, in relation to the RESL systems engineering efforts (architecting and risk reduction) as used in software development projects, the data indicate the following important conclusions:

• Inclusion of greater RESL effort can improve software productivity by factors from 18% (small software projects) to 92% (very large software projects).

• Incremental addition of greater RESL effort can result in cost ROI of up to 8:1. The greatest ROI occurs when very large software projects using Very Low RESL effort (5% of project time, 1.5% of project cost) move to somewhat greater effort.

• In some cases, incremental addition of greater RESL effort is counterindicated. This is particularly true for small software projects that are already using in excess of 15% RESL effort.

• For schedule-driven projects, optimum RESL effort varies from 10% of project time (small software projects) to 37% of project time (very large software projects).

These results strengthen the argument for the value of systems engineering by providing quantitative evidence that doing a minimal job of software systems engineering significantly reduces project productivity. Even higher ROIs would result from including the potential operational problems in business or mission cost, schedule, and performance that could surface as a result of inadequate systems architecting and risk resolution.

Figure 9. How much architecting is enough?


REFERENCES

B. Boehm, Software engineering, IEEE Trans Comput C-25(12) (December 1976), 1226–1241.

B. Boehm, Software engineering economics, Prentice-Hall, Upper Saddle River, NJ, 1981.

B. Boehm, Anchoring the software process, Software 13(4) (July 1996), 73–82.

B. Boehm, Unifying software engineering and systems engineering, Computer 33(3) (March 2000), 114–116.

B. Boehm and J. Lane, Using the incremental commitment model to integrate system acquisition, systems engineering, and software engineering, CrossTalk 20(10) (October 2007), 4–9.

B. Boehm and D. Port, Escaping the software tar pit: Model clashes and how to avoid them, ACM Software Eng Notes 24(1) (January 1999), 36–48.

B. Boehm and D. Port, Balancing discipline and flexibility with the spiral model and MBASE, CrossTalk 14(12) (December 2001), 23–28.

B. Boehm and W. Royce, Ada COCOMO and the Ada process model, Proc 5th COCOMO User’s Group, Software Engineering Institute, Pittsburgh, PA, 1989.

B. Boehm, C. Abts, A.W. Brown, S. Chulani, B.K. Clark, E. Horowitz, R. Madachy, D. Reifer, and B. Steece, Software cost estimation with COCOMO II, Prentice-Hall, Upper Saddle River, NJ, 2000.

B. Boehm, A. Egyed, Kwan, D. Port, A. Shah, and R. Madachy, Using the WinWin spiral model: A case study, IEEE Comput 31(7) (July 1998), 33–44.

P. Clements, R. Kazman, and M. Klein, Evaluating software architectures, Addison-Wesley Professional, Boston, MA, 2002.

H.E. Crisp (Editor), Systems engineering vision 2020—Version 1.5, International Council on Systems Engineering, Seattle, WA, 2005.

E. Daly, Management of software engineering, IEEE Trans SW Eng SE-3(3) (May 1977), 229–242.

M. Fagan, Design and code inspections to reduce errors in program development, IBM Syst J 15(3) (1976), 182–211.

E.C. Honour, Understanding the value of systems engineering, INCOSE Int Symp, Toulouse, France, 2004a.

E.C. Honour, Value of systems engineering, Cambridge, MA, 2004b.

E.C. Honour and B. Mar, Value of systems engineering—SECOE research project progress report, INCOSE Int Symp, Las Vegas, NV, 2002.

E.C. Honour and R. Valerdi, Advancing an ontology for systems engineering to allow consistent measurement, Conf Syst Eng Res, Los Angeles, CA, 2006.

I. Jacobson, G. Booch, and J. Rumbaugh, The unified software development process, Addison-Wesley, Reading, MA, 1999.

P. Kruchten, The rational unified process: An introduction, Addison-Wesley, Reading, MA, 2000.

J.F. Maranzano, S.A. Rozsypal, G.H. Zimmerman, G.W. Warnken, P.E. Wirth, and D.W. Weiss, Architecture reviews: Practice and experience, Software (March/April 2005), 34–43.

R. Pew and A. Mavor (Editors), Human-system integration in the system development process, National Academies Press, Washington, DC, 2007.

E. Rechtin, Systems architecting, Prentice-Hall, Englewood Cliffs, NJ, 1991.

E. Rechtin and M. Maier, The art of systems architecting, CRC Press, Boca Raton, FL, 1997.

W. Royce, Software project management: A unified framework, Addison-Wesley, Reading, MA, 1998.

J. Rumbaugh, I. Jacobson, and G. Booch, Unified modeling language reference manual, Addison-Wesley, Reading, MA, 2004.

W. Stephenson, An analysis of the resources used in safeguard software system development, Bell Labs draft paper, Murray Hill, NJ, August 1976.

R. Valerdi, The constructive systems engineering cost model (COSYSMO), PhD Dissertation, University of Southern California, 2005.

Table A.I. COCOMO II Regression Run

Barry Boehm is the TRW professor of software engineering and director of the Center for Systems and Software Engineering at the University of Southern California. He was previously in software engineering, systems engineering, and management positions at General Dynamics, Rand Corp., TRW, and the Defense Advanced Research Projects Agency, where he managed the acquisition of more than $1 billion worth of advanced information technology systems. Dr. Boehm originated the spiral model, the Constructive Cost Model, and the stakeholder win-win approach to software management and requirements negotiation. He is a Fellow of INCOSE.

Ricardo Valerdi is a Research Associate at the Lean Advancement Initiative at MIT and a Visiting Associate at the Center for Systems and Software Engineering at USC. He earned his BS in Electrical Engineering from the University of San Diego, and MS and PhD in Industrial and Systems Engineering from USC. He is a Senior Member of the Technical Staff at the Aerospace Corporation in the Economic & Market Analysis Center. Previously, he worked as a Systems Engineer at Motorola and at General Instrument Corporation. He is on the Board of Directors of INCOSE.

Eric Honour was the 1997 INCOSE President. He has a BSSE from the US Naval Academy and an MSEE from the US Naval Postgraduate School, with 37 years of systems experience. He is currently a doctoral candidate at the University of South Australia (UniSA). He was the founding President of the Space Coast Chapter of INCOSE, the founding chair of the INCOSE Technical Board, and a past director of the Systems Engineering Center of Excellence. Mr. Honour provides technical management support and systems engineering training as President of Honourcode, Inc., while continuing research into the quantification of systems engineering.
