
Proposed Approach to SERC EM Task: Assessing SysE Effectiveness in Major Defense Acquisition Programs (MDAPs)

Barry Boehm, USC-CSSE
26 November 2008

Outline

• EM Task Statement of Work
  – Definition of “Systems Engineering Effectiveness”
• EM Task Schedule and Planned Results
• Candidate Measurement Methods, Evaluation Criteria, and Evaluation Approach
• Survey and evaluation instruments
• Workshop objectives and approach


EM Task Statement of Work

• Develop measures to monitor and predict systems engineering effectiveness for DoD Major Defense Acquisition Programs
  – Define SysE effectiveness
  – Develop measurement methods for contractors, DoD program managers and PEOs, and oversight organizations
    • For weapons platforms, SoSs, and net-centric services
  – Recommend a continuous process improvement approach
  – Identify a DoD SysE outreach strategy
• Consider the full range of data sources
  – Journals, tech reports, organizations (INCOSE, NDIA), DoD studies
    • Partial examples cited: GAO, SEI, INCOSE, Stevens/IBM
    • GFI: Excel version of SADB
• Deliverables: report and presentation
  – Approach, sources, measures, examples, results, recommendations


Measuring SysE Effectiveness (and Measuring SysE Effectiveness Measures)

• Good SysE correlates with project success
  – INCOSE definition of systems engineering: “An interdisciplinary approach and means to enable the realization of successful systems”
• Good SysE is not a perfect predictor of project success
  – A project can do bad SysE, get lucky at the last minute by finding a new COTS solution, and produce a great success
  – A project can do great SysE, but poor managers and developers can turn it into a disaster
• Goodness of a candidate SysE effectiveness measure (EM)
  – Whether it can detect when a project's SysE is leading the project more toward success than toward failure
• Heuristic for evaluating a proposed SysE EM
  – Role-play as an underbudgeted, short-tenure project manager
  – Ask, “How little can I do and still get a positive rating on this EM?”



RESL as Proxy for SysE Effectiveness: A COCOMO II Analysis Used on FCS


Added Cost of Minimal Software Systems Engineering

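The added-cost trend comes from the COCOMO II RESL (Architecture and Risk Resolution) scale factor: lowering RESL raises the exponent on size, so skipped systems engineering costs proportionally more on larger systems. Below is a minimal sketch of that arithmetic, assuming the commonly published COCOMO II.2000 values (RESL weight 7.07 at Very Low, 0.00 at Extra High); treat the numbers as an approximation of the chart, not its source data.

    # Illustrative sketch, not the official COCOMO II tool: effort penalty of
    # minimal vs. thorough architecture and risk resolution.
    # Assumed COCOMO II.2000 calibration: RESL scale-factor weight 7.07 (Very Low)
    # vs. 0.00 (Extra High); Effort = A * Size**E with E = B + 0.01 * sum(scale factors).
    RESL_VERY_LOW = 7.07    # minimal architecting / risk resolution
    RESL_EXTRA_HIGH = 0.00  # thorough architecting / risk resolution

    def effort_penalty(ksloc: float) -> float:
        """Ratio of estimated effort with minimal vs. thorough SysE; only the
        RESL difference matters, since the other scale factors cancel out."""
        delta_e = 0.01 * (RESL_VERY_LOW - RESL_EXTRA_HIGH)
        return ksloc ** delta_e

    for size in (10, 100, 10_000):
        added = (effort_penalty(size) - 1) * 100
        print(f"{size:>6} KSLOC: ~{added:.0f}% added effort from minimal SysE")

    # Approximate output: ~18% at 10 KSLOC, ~38% at 100 KSLOC, ~92% at 10,000 KSLOC.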

How Much Architecting Is Enough?

[Figure: percent of time added to overall schedule (y-axis, 0–100) versus percent of project schedule devoted to initial architecture and risk resolution (x-axis, 0–60), plotted for 10 KSLOC, 100 KSLOC, and 10,000 KSLOC projects. Curves show the percent of time added for architecture and risk resolution, the added schedule devoted to rework (the COCOMO II RESL factor), and the total percent added schedule. Each project size has a “sweet spot” that minimizes total added schedule. Sweet spot drivers: rapid change moves the sweet spot leftward; high assurance moves it rightward.]
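The sweet-spot shape can be reproduced with a toy model: total added schedule is up-front architecting time plus downstream rework, where rework shrinks as architecting grows but starts higher for larger systems. The decay form and constants below are illustrative assumptions chosen only to give the qualitative shape, not the COCOMO II calibration behind the chart.

    # Toy sweet-spot model (illustrative assumptions only): total added schedule =
    # up-front architecting time + rework, with rework decaying as architecting
    # investment grows and starting higher for larger systems.
    import math

    def rework_percent(arch_percent: float, size_ksloc: float) -> float:
        initial_rework = 26.0 * math.log10(size_ksloc)        # assumed size scaling
        return initial_rework * math.exp(-arch_percent / 20.0)  # assumed decay rate

    def total_added_schedule(arch_percent: float, size_ksloc: float) -> float:
        return arch_percent + rework_percent(arch_percent, size_ksloc)

    for size in (10, 100, 10_000):
        best = min(range(0, 61), key=lambda a: total_added_schedule(a, size))
        print(f"{size:>6} KSLOC: sweet spot near {best}% architecting "
              f"(total added schedule ~{total_added_schedule(best, size):.0f}%)")

    # Under these assumptions the minimum moves rightward with size
    # (roughly 5%, 19%, and 33% here), matching the chart's qualitative message.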

EM Task Schedule and Results


12/8/08 – 1/28/09
  Activity: Candidate EM assessments, surveys, interviews, coverage matrix
  Results: Initial survey results, candidate EM assessments, coverage matrix

1/29–30/09
  Activity: SERC-internal joint workshop with MPT
  Results: Progress report on results, gaps, and plans for gap follow-ups

2/1 – 3/27/09
  Activity: Sponsor feedback on results and plans; sponsor identification of candidate pilot organizations; execution of plans; suggested EMs for weapons platform (WP) pilots
  Results: Identification of WP pilot candidate organizations at contractor, PM/PEO, and oversight levels; updated survey, EM evaluation, and recommended EM results

Week of 3/30 – 4/3/09
  Activity: SERC-external joint workshop with MPT, sponsors, collaborators, pilot candidates, and potential EM users
  Results: Guidance for refining recommended EMs; candidates for pilot EM evaluations

4/7 – 5/1/09
  Activity: Tailor lead EM candidates for weapons platform (WP) pilots; SADB-based evaluations of candidate EMs
  Results: Refined, tailored EM candidates for WP pilots at contractor, PM/PEO, and oversight levels; pilot evaluation guidelines

5/6–7/09
  Activity: Joint workshop with MPT, sponsors, collaborators, pilot candidates, and stakeholders; select WP EM pilots
  Results: Selected pilots; guidance for final preparation of EM candidates and evaluation guidelines

5/11 – 7/10/09
  Activity: WP EM pilot experiments; analysis and evaluation of guidelines and results; refinement of initial SADB evaluation results based on EM improvements
  Results: EM pilot experience database and survey results; refined SADB EM evaluations

7/13 – 8/14/09
  Activity: Analyze WP EM pilot and SADB results; prepare draft report on results, conclusions, and recommendations
  Results: Draft report on WP and general EM evaluation results, conclusions, and recommendations for usage and research/transition/education initiatives

8/17–18/09
  Activity: Workshop on draft report with sponsors, collaborators, WP EM deep-dive evaluators, and stakeholders
  Results: Feedback on draft report results, conclusions, and recommendations

8/19 – 9/4/09
  Activity: Prepare, present, and deliver final report
  Results: Final report on WP and general EM evaluation results, conclusions, and recommendations for usage and research/transition/education initiatives

Outline

• EM Task Statement of Work
  – Definition of “Systems Engineering Effectiveness”
• EM Task Schedule and Planned Results
• Candidate Measurement Methods, Evaluation Criteria, and Evaluation Approach
• Survey and evaluation instruments
• Workshop objectives and approach


Candidate Measurement Methods

• NRC Pre-Milestone A & Early-Phase SysE top-20 checklist
• Air Force Probability of Program Success (PoPS) Framework
• INCOSE/LMCO/MIT Leading Indicators
• Stevens Leading Indicators (new; using SADB root causes)
• USC Anchor Point Feasibility Evidence progress
• UAH teaming theories
• NDIA/SEI capability/challenge criteria
• SISAIG Early Warning Indicators / USC Macro Risk Tool


Independent EM Evaluations and Resolution


Candidate EM USC Stevens FC-MD UAH

PoPS Leading Indicators X X X

INCOSE LIs X X

Stevens LIs X X X

SISAIG LIs/ Macro Risk X X X

NRC Top-20 List X X X

SEI CMMI-Based LIs X X X

USC AP-Feasibility Evidence X X X

UAH Team Effectiveness X X X

Extended SEPP Guide EM Evaluation Criteria

Quality Attributes

Accuracy and Objectivity. Does the EM accurately and objectively identify the degree of SE effectiveness? Or is it easy to construct counterexamples in which it would be inaccurate?

Level of Detail. How well does the EM cover the spectrum between strategic and tactical data collection and analysis?

Scalability. How well does the EM cover the spectrum between simple weapons platforms and ultra-large systems of systems with deep supplier chains and multiple concurrent and interacting initiatives?

Ease of Use/Tool Support. Is the EM easy to learn and apply by non-experts, and is it well supported by tools, or does it require highly specialized knowledge and training?

Adaptability. Is the EM easy to adapt or extend to apply to different domains, levels of assessment, changing priorities, or special circumstances?

Maturity. Has the EM been extensively used and improved across a number and variety of applications?

Cost Attributes

• Cost to collect data
• Cost to relate data to effectiveness
• Schedule efficiency
• Key personnel feasibility

Functional Attributes

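In the Task 1 evaluation forms, each candidate EM is rated against these attributes. The sketch below shows one simple way such ratings could be recorded and summarized; the 1-5 scale, equal weighting, and sample numbers are assumptions for illustration, not the SERC team's actual instrument.

    # Minimal sketch of recording one evaluator's ratings of a candidate EM
    # against the Extended SEPP Guide criteria. The 1-5 scale, equal weighting,
    # and the sample numbers are assumptions for illustration only.
    from statistics import mean

    QUALITY_ATTRIBUTES = [
        "Accuracy and Objectivity", "Level of Detail", "Scalability",
        "Ease of Use/Tool Support", "Adaptability", "Maturity",
    ]
    COST_ATTRIBUTES = [
        "Cost to Collect Data", "Cost to Relate Data to Effectiveness",
        "Schedule Efficiency", "Key Personnel Feasibility",
    ]

    def summarize(em_name: str, ratings: dict) -> None:
        """Print the per-group mean rating for one candidate EM (1 = poor, 5 = strong)."""
        for group, attrs in (("Quality", QUALITY_ATTRIBUTES), ("Cost", COST_ATTRIBUTES)):
            scores = [ratings[a] for a in attrs if a in ratings]
            if scores:
                print(f"{em_name} - {group}: mean {mean(scores):.1f} over {len(scores)} attributes")

    # Hypothetical ratings, solely to show the data shape.
    summarize("Hypothetical EM", {
        "Accuracy and Objectivity": 4, "Level of Detail": 3, "Scalability": 2,
        "Ease of Use/Tool Support": 4, "Adaptability": 3, "Maturity": 5,
        "Cost to Collect Data": 3, "Cost to Relate Data to Effectiveness": 2,
        "Schedule Efficiency": 4, "Key Personnel Feasibility": 3,
    })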

Criteria validation (EM) - 2


Candidate EM Coverage Matrix


SERC EM Task Coverage Matrix V1.0

Candidate EMs (columns): NRC; Probability of Success; SE Leading Indicators; LIPSF (Stevens); Anchoring SW Process (USC); PSSES (U. of Alabama); SSEE (CMU/SEI); Macro Risk Model/Tool

Concept Development
• At least 2 alternatives have been evaluated: x, x, x, x (w.r.t. NPR), (x)
• Can an initial capability be achieved within the time that the key program leaders are expected to remain engaged in their current jobs (normally less than 5 years or so after Milestone B)? If this is not possible for a complex major development program, can critical subsystems, or at least a key subset of them, be demonstrated within that time frame?: x, (x), x, x (5 years is not explicitly stated), (x) (seems to be inferrable from the conclusions), (x) (implies this)
• Will risky new technology mature before Milestone B? Is there a risk mitigation plan?: x, x, x, (x), x, x
• Have external interface complexities been identified and minimized? Is there a plan to mitigate their risks?: x, x, x, x, x, x

KPP and CONOPS
• At Milestone A, have the KPPs been identified in clear, comprehensive, concise terms that are understandable to the users of the system?: x, (x), x, (x), x (strongly implied), (x) (implied), x, x
• At Milestone B, are the major system-level requirements (including all KPPs) defined sufficiently to provide a stable basis for the development through IOC?: x, x, (x), x, x, (x), (x) (there is no direct reference to this, but it is inferrable), x
• Has a CONOPS been developed showing that the system can be operated to handle the expected throughput and meet response time requirements?: x, x, (x), (x), x, (x) (there is a mention of a physical solution; that is the closest in this regard), x, x

Legend: x = covered by EM; (x) = partially covered (unless stated otherwise)

EM “Survey” Instruments

• Two web-/PDF-based “surveys” now in place
• EM-set exposure, familiarity, and use
  – Participants: industry, government, academia
  – Short survey, seeking experience with EM sets
  – Presently distributed only to a limited audience
  – Finding individual metrics used, less so EM sets
• EM evaluation form
  – Participants: Task 1 EM team members
  – Detailed survey, requiring in-depth evaluation of EMs
  – Follow-up interviews planned for qualitative assessments
  – Evaluation still in progress; too early to expect results

Exposure to EM Sets

• Only two EM sets used (one by DoD, the other by industry)
• Even in this limited audience, EM sets are not widely used
• Several respondents have had no exposure at all
• Some “union of EM sets” might be required


Target EM Task Benefits for DoD

• Identification of the best available EMs for DoD use
  – Across 3 domains, 3 review levels, and both planning and execution
• Early warning rather than late discovery of SysE effectiveness problems
• Identification of current EM capability gaps
  – Recommendations for the most cost-effective enhancements and for research on new EM approaches
  – Ways to combine EM strengths and avoid weaknesses
• Foundation for continuous improvement of DoD SysE effectiveness measurement
  – Knowledge base of evolving EM cost-effectiveness
  – Improved data for evaluating SysE ROI


Candidate EM Discussion Issues

• Feedback on evaluation criteria and approach
  – Single-number vs. range assessment
  – Effectiveness may vary by domain and management level
• Experience with candidate EMs
• Additional candidate EMs
  – Dropped: GAO, COSYSMO parameters, NUWC open-systems
• Feedback on EM survey and evaluation instruments
  – Please turn in your survey response
• Feedback on coverage matrix
• Other issues or concerns



Macro Risk Model Interface

[Screenshot of the USC Macro Risk Model spreadsheet interface (copyright USC-CSSE). Each question is rated for Impact (1-5), and the evidence supporting a “yes” answer is rated on a six-level scale: Unavailable (U), Meager (M), Fragmentary (F), Competent but Incomplete (C/I), Strong (S), Externally Validated (EV). Evidence ratings should be done independently and should address the degree to which the evidence provided supports a “yes” answer to each question. Additional columns record the evidence rationale and artifacts and the resulting CSF Risk score (1-25).]

Goal 1: System and software objectives and constraints have been adequately defined and validated.

Critical Success Factor 1 (CSF Risk: 6): System and software functionality and performance objectives have been defined and prioritized.
• 1(a), Impact 3: Are the user needs clearly defined and tied to the mission?
• 1(b), Impact 3: Is the impact of the system on the user understood?
• 1(c), Impact 2: Have all the risks that the software-intensive acquisition will not meet user expectations been addressed?

Critical Success Factor 2 (CSF Risk: 15): The system boundary, operational environment, and system and software interface objectives have been defined.
• 2(a), Impact 3: Are all types of interface and dependency covered?
• 2(b), Impact 4: For each type, are all aspects covered?
• 2(c), Impact 5: Are interfaces and dependencies well monitored and controlled?

Critical Success Factor 3 (CSF Risk: 21): System and software flexibility and evolvability objectives have been defined and prioritized.
• 3(a), Impact 5: Has the system been conceived and described as “evolutionary”?

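The slide does not state how the CSF Risk score (1-25) is derived from the per-question ratings. The sketch below shows one plausible scheme, assuming risk = impact (1-5) times an evidence-shortfall likelihood (1-5); the evidence-to-likelihood mapping and the worst-case roll-up are assumptions, not the published USC Macro Risk Model algorithm.

    # Illustrative sketch only: one plausible way a CSF Risk score in the 1-25
    # range could be derived from per-question Impact (1-5) and evidence ratings.
    # The evidence-to-likelihood mapping and the max() roll-up are assumptions,
    # not the published USC Macro Risk Model algorithm.

    EVIDENCE_SHORTFALL = {  # assumed likelihood (1-5) that the answer is really "no"
        "EV": 1,   # Externally Validated
        "S": 2,    # Strong
        "C/I": 3,  # Competent but Incomplete
        "F": 4,    # Fragmentary
        "M": 5,    # Meager
        "U": 5,    # Unavailable
    }

    def question_risk(impact, evidence):
        """Risk exposure for one question: impact (1-5) x shortfall likelihood (1-5)."""
        return impact * EVIDENCE_SHORTFALL[evidence]

    def csf_risk(questions):
        """Roll a CSF's question risks up to a single 1-25 score (here: worst case)."""
        return max(question_risk(impact, evidence) for impact, evidence in questions)

    # Hypothetical evidence ratings for Critical Success Factor 2's three questions:
    print(csf_risk([(3, "S"), (4, "C/I"), (5, "F")]))  # -> 20 under these assumptions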