Test and Evaluation Issues for Systems of Systems:
Creating Sleep Aids for Those Sleepless Nights

Beth Wilson, Raytheon
Tom Wissink, Lockheed Martin
Darlene Mosser-Kerner, OSD DT&E
(NDIA T&E Division, Developmental Test and Evaluation Committee)

Judith Dahmann, MITRE
John Palmer, Boeing
(NDIA SE Division, Systems of Systems Committee)

Rob Heilman, TRMC
Bob Aaron, ATEC
(Strategic Initiative Co-leads)

NDIA Test and Evaluation Conference
Paper #11651
Abstract
In 2009, the NDIA System of Systems Committee developed a white paper describing test and evaluation issues that cause "sleepless nights". In 2010, the NDIA SoS and DT&E Committees collaborated in a joint workshop to translate these issues into strategic initiatives and collaborative go-do activities as improvement areas. The issues included future T&E for systems brought together as SoS, requirements, metrics, systems changes, and end-to-end testing with systems not yet available. This paper summarizes the results of that workshop and the progress being made to mitigate SoS T&E sleepless nights.
NDIA DT&E Committee
Moved from the SE Division to the T&E Division
Focus of this Paper: DT&E Collaboration with SoS

[Figure: organizational diagram. The NDIA DT&E Committee now sits under the NDIA Test and Evaluation Division, alongside the NDIA Systems Engineering Division. Issues identified by OSD SE, OSD DT&E, and OSD OT&E flow to the NDIA Systems Engineering Conference (Oct) and the NDIA Test and Evaluation Conference (Mar), with results shared through the NDIA Industrial Committee On Test and Evaluation (ICOTE). The SE Division committee collaborates on tasking; the T&E Division is tasked for core T&E issues from OSD DT&E and OT&E.]
Sleepless Nights:
Test and Evaluation for SoS

• Systems of Systems Topics Discussed in 2009:
  – Compiled list of "what keeps me awake at night" topics for SoS
  – Test and evaluation for SoS topped the "Sleepless Nights" list
• NDIA SoS and DT&E Committees Worked Jointly in 2009:
  – Identified key T&E challenges for SoS
  – White paper described 5 top issues
  – Presented at 2009 NDIA SE Conference in joint SoS/T&E track
• Focus for 2010: Joint Workshop August 17th
  – Define a path from Sleepless Nights to Sominex
  – Evaluate challenges and underlying issues
  – Transition specific issues into strategic initiatives
• Resulting Effort:
  – 3 Strategic Initiatives
  – 1 Collaborative Go-Do

Workshop Defined Path to Find Sleep Aids
Reminder from 2009:
T&E Challenges for SoS

1) Future T&E: If SoS are not programs of record (and not subject to T&E regulations), why should we worry about this at all?
2) Requirements: If "requirements" are not clearly specified up front for a SoS, what is the basis for T&E of an SoS?
3) Metrics: What is the relationship between SoS metrics and T&E objectives?
4) Systems Changes: Are expected cumulative impacts of systems changes on SoS performance the same as SoS performance objectives?
5) End-to-End Testing: How do you test the contribution of a system to the end-to-end SoS performance in the absence of other SoS elements critical to the SoS results? What if the systems are all implemented to their specifications, but the overall expected SoS changes cannot be verified?

White Paper was the Starting Point
Facilitated Workshop:
The Technique

Transition from Problem Space to Solution Space

Data Collection:
• SoS White Paper
• SE Conference Papers

Potential Problem Areas:
1) Future T&E for systems brought together as SoS
2) Requirements
3) Metrics
4) Systems Changes
5) End-to-End Testing with systems not yet available

Potential Causes:
"If we could only fix one thing, it would be ________"
[Figure: example leverage matrix (EXAMPLE ONLY). Rows list potential causes grouped by category: Facility (signal cables wrong length or incorrect, power/cooling connections, safety interconnect, test equipment, equipment in NFR); Schedule Dependencies (signal cable delivery, REX/NBDC/SDP availability, REX array availability, test results prior to string integration, detailed tasks for integration/prep); Security (classified shipment, classified workstations, classified field returns); and Integration Test Conduct (procedures incomplete or incorrect, SDP emulator, work-arounds required). Columns are problem areas (Setup, Testing, Tasks, Cable Security), with an X marking each cause/problem-area intersection.]
Leverage Matrix: map causes to problem areas (sketched in code below)
Improvement Areas: Strategic Initiatives and a Collaborative Go-Do
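The leverage-matrix step lends itself to a small illustration. The sketch below uses invented causes and marks (not the workshop's data): it represents the matrix as a mapping from each cause to the problem areas it touches, then ranks the causes by coverage, which is one way to answer the "if we could only fix one thing" prompt.

```python
# Hypothetical leverage matrix: candidate causes mapped to the SoS T&E
# problem areas they touch (the X marks), ranked by coverage.

PROBLEM_AREAS = ["Future T&E", "Requirements", "Metrics",
                 "Systems Changes", "End-to-End Testing"]

matrix = {  # invented example data
    "No SoS-level requirements owner": {"Requirements", "Metrics", "End-to-End Testing"},
    "Constituent systems change independently": {"Systems Changes", "End-to-End Testing"},
    "No funding line for SoS T&E": {"Future T&E", "End-to-End Testing"},
}

def rank_causes(matrix):
    """Sort causes by how many problem areas they touch (highest leverage first)."""
    return sorted(matrix.items(), key=lambda kv: len(kv[1]), reverse=True)

for cause, areas in rank_causes(matrix):
    print(f"{cause}: touches {len(areas)} of {len(PROBLEM_AREAS)} problem areas")
```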
Facilitated Workshop:
Attendees

Mr. Robert Aaron, Army (Government)
Col (Ret) Suzanne M. Beers, MITRE (FFRDC)
Dr. William D. Bell, MITRE (FFRDC)
Mr. Aumber Bhatti, MITRE (FFRDC)
Clyneice Chaney, MITRE (FFRDC)
Mr. Peter H. Christensen, MITRE (FFRDC)
Mr. David W. Coleman, MITRE (FFRDC)
Dr. Judith S. Dahmann, MITRE (FFRDC)
Ms. Indira Deonandan, MIT (Government)
Mr. John W. Diem, OSD/MSCO (Government)
Mr. Mark E. Fenicle, DoD (Government)
Ms. Tanya Gobel, SAIC (Industry)
Mr. Robert Heilman, DoD (Government)
CDR (Ret) Bryan Herdlick, JHU APL (Government)
Dr. JoAnn Lane, USC CSSE (Industry)
Mr. Steven S. Lee, DoD (Industry)
Mr. Marty Leek (Facilitator), Raytheon (Industry)
Mr. Favio L. Lopez, Army (Industry)
Mr. John R. Palmer, Boeing (Industry)
Mr. George Rebovich Jr., MITRE (FFRDC)
Mr. Frank J. Serna, Draper (Industry)
Mr. Michael Shanahan, USMC (Government)
Dr. Carol A. Sledge, SEI (FFRDC)
CDR (Ret) James D. Smith II, SEI (FFRDC)
Mr. Thomas Wissink, Lockheed Martin (Industry)
Mr. Jack Zavin, OSD NII/DoD CIO (Government)
Dr. Janice A. Ziarko, MITRE (Industry)
Ms. Robin E. Zivadinovic, SAIC (Industry)

Affiliation mix: Government 28%, FFRDC 36%, Industry 36%
Workshop Results
Initiatives Identified with Action Plans
Strategic Initiatives:

1) Best Practices Model for SoS T&E
   Action Plan: Define a best practices model
   Vision: SoS T&E as a continuous improvement process supporting capabilities and limitations information for end users and feedback to SoS and system SE teams toward evolution of the SoS

2) Radical Approach to SoS T&E
   Action Plan: Define SoS capability test approach
   Vision: Rethink T&E of systems in an operational context and systems interoperability, away from system testing toward integrated capability SoS testing

3) SoS Governance
   Action Plan: Define characteristics of successful SoS T&E
   Vision: Identify the process by which we can change and influence the governance of SoS; mature and improve templates to define a minimum set of characteristics required to govern SoS T&E efforts

Collaborative Go-Do:

4) SoS SE Policy and Guidance
   Action Plan: Recognize and employ SoS guidance
   Vision: Ensure that guidance for SoS SE (DoD SoS SE Guide) is recognized and employed on a growing number of SoS
Initiative Teams
2 Initiatives Launched; Results Will Feed into the 3rd

#1 Best Practices
Leads: Judith Dahmann (MITRE & ASD R&E/SE), Rob Heilman (TRMC)
Team Members: George Rebovich (MITRE), Jim Buscemi (GBL & TRMC), Paola Pringle (Navy), Kent Pickett (MITRE), Chris Scrapper (MITRE), Aaron Budgor (GBL Systems, TRMC), Laura Feinerman (MITRE), Joe Lucidi (Army OTC)

#2 Define SoS Capability Test (not yet launched)

#3 Governance
Leads: Bob Aaron (ATEC), James Smith (SEI)
Team Members: John Palmer (Boeing), Carol Sledge, PhD (SEI), Robin Zivadinovic (JFCOM/Ctr)
#1: Best Practices Model
Approach and Status

1. Form core team (Complete)
   – Core team will implement activities
   – Share results for feedback from the SoS and DT&E committees
2. Define scope (Complete)
   – Focus on Acknowledged SoS (the SoS has objectives, management, funding, and authority; however, systems retain their own management, funding, and authority in parallel with the SoS)
   – Investigating potential for Directed SoS (the SoS has objectives, management, funding, and authority; systems are subordinated to the SoS)
3. Develop a draft description of the proposed model
   – Review the workshop discussions (Complete)
   – Review current SoS SE guidance on T&E (Complete)
   – Framework for model and implementation approaches (In Progress)
   – Draft model description and circulate for review (Planned)
4. Review use cases to support and/or adapt the model
5. Update the model based on use cases
6. Review and assess state and utility of the model

Identifying T&E Inserts into the SoS Wave Model
Soliciting Use Case Recommendations
[Figure: SoS Wave Model. Initiate SoS; Conduct SoS Analysis; Develop SoS Architecture; then repeating cycles of Plan SoS Update, Implement SoS Update, Continue SoS Analysis, and Evolve SoS Architecture, all within the external environment.]
#1: Best Practices Model
Role of T&E in SoS Models

• SoS SE Guide Trapeze Model
  – "Assessing Performance" is a core element of SoS SE
• SoS SE Artifacts
  – Performance Measures and Metrics
• Wave Model
  – SoS T&E begins with SoS analysis and is addressed throughout the other steps

[Figure: Trapeze Model. Core elements: Translating Capability Objectives, Understanding Systems, Developing & Evolving SoS Architecture, Monitoring Change, Assessing Performance, Orchestrating Upgrades, and Assessing Requirements & Solution Options, within the external environment.]

[Figure: SoS Wave Model, as above.]
#1: Best Practices Model
Framework for Description

• SoS Wave Model: describe the key activities at each stage as they relate to T&E of the SoS (illustrated in the sketch below)
  – Conduct (and Continue) SoS Analysis
  – Develop and Evolve SoS Architecture
  – Plan SoS Updates
  – Implement SoS Updates
• What actions are taken at each step to support the model of SoS T&E as a "continuous improvement process supporting capabilities and limitations information for end users and feedback to the SoS and system SE teams toward evolution of the SoS"?
• Why are these important?
• What value do they add?
• How do they contribute to the larger SoS SE and T&E outcomes?
• How do they address the challenges?
• What methods or tools apply?
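One way to make the framework concrete is to treat the wave-model stages as a data structure and attach candidate T&E activities to each stage. The sketch below is illustrative: the stage names follow the wave model figure, but the per-stage T&E inserts are hypothetical, not the initiative team's list.

```python
# Sketch: SoS wave-model stages with hypothetical T&E inserts per stage,
# reflecting SoS T&E as a continuous improvement process.

from enum import Enum

class WaveStage(Enum):
    INITIATE_SOS = "Initiate SoS"
    SOS_ANALYSIS = "Conduct/Continue SoS Analysis"
    SOS_ARCHITECTURE = "Develop/Evolve SoS Architecture"
    PLAN_SOS_UPDATE = "Plan SoS Update"
    IMPLEMENT_SOS_UPDATE = "Implement SoS Update"

# Hypothetical T&E activities inserted at each stage (illustrative only).
TE_INSERTS = {
    WaveStage.SOS_ANALYSIS: ["Baseline end-to-end capability metrics",
                             "Fold in feedback from fielded systems"],
    WaveStage.SOS_ARCHITECTURE: ["Derive testable measures from the architecture"],
    WaveStage.PLAN_SOS_UPDATE: ["Plan cross-system test events and resources"],
    WaveStage.IMPLEMENT_SOS_UPDATE: ["Assess end-to-end capability",
                                     "Report capabilities/limitations to end users"],
}

for stage in WaveStage:
    for activity in TE_INSERTS.get(stage, []):
        print(f"{stage.value}: {activity}")
```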
#3: Governance
Approach and Status

1. Form core team (Complete)
2. Define scope (Complete)
   – Purpose: provide an integrated governance perspective for SoS development, deployment, and life cycle
   – Scope: governance for overall acquisition, including T&E, as a holistic/comprehensive view (focus on Directed and Acknowledged SoS)
3. Identify Governance As-Is State (Complete)
   – Fundamental governance concepts
   – Architecture concepts and DoDAF for managing complexity
4. Develop Governance To-Be Fundamental Concepts (In Process)
   – Organizations that produce reference models, reference architectures, and data engineering components, including T&E considerations for measuring performance
   – Synchronized and aligned organizations (structures), policy, tools, technical approaches, and resources that support the selected option
5. Draft Recommendations to Achieve To-Be State

Reference Architecture as a Framework to Discuss Governance
#3: Governance
Current and Objective State

[Figure: Current State: solution architectures and test technology derived directly from a reference model. Objective State: a reference architecture, derived from the reference model, drives solution architectures, solution-specific models, and test technology.]

How should our architecture/technology organizations be designed?
Creating the Environment for NR KPP: OV-1 (NR KPP WG Draft, 3 Dec 2008)

[Figure: defense acquisition framework overlay showing the Materiel Solution Analysis, Technology Development, Engineering and Manufacturing Development, Production & Deployment, and Operations & Support phases; Milestones A, B, and C; the Materiel Development Decision; PDR, Post-PDR and Post-CDR Assessments; the FRP Decision Review; IOC and FOC; with user needs and technology opportunities and resources as inputs. The focus of major changes is the use of an M&S environment across the life cycle:
• Use the M&S environment to develop/test the Reference Architecture
• Use the M&S environment to develop the Architecture: begins with reference models; technology/cost trade-offs; Model Driven Architecture (MDA) and an Integrated Development Environment (IDE), all under configuration management; develop the architecture and, to the extent possible, the SoS DOTMLPF components; common/interoperable tools and tool kits for this acquisition domain
• Use the M&S environment to develop/test the Solution Architecture
• Use the M&S environment to deploy the Solution Architecture: the user pulls down what he needs; each phase is virtual using DoD COEs; designers push new patches
• Use the M&S environment to continually improve the Architecture
Notes: the Materiel Development Decision precedes entry into any phase of the acquisition framework; entrance criteria are met before entering phases; evolutionary acquisition or single step to full capability.]
ATEC TTD Architecture
Test Technology "To-Be" System Evolution (SV-8) (EXAMPLE ONLY)

[Figure: five-step evolution of the ATEC test technology reference architecture and reference repository:
1. Initial Reference Architecture for TTD review; initial Reference Repository with the first AEC data models and search, submit, and download functions.
2. Initial Reference Architecture for community input; functioning Reference Repository with an initial Common Reference Model (CRM) containing the AEC T&E Reference Model and other available data models, TENA TSPI, and an engineering units database.
3. Reference Architecture with the community involved in evolution; initial automation support; operational Reference Repository with initial visualization functionality for data model comparison, an enhanced CRM that includes planning metadata, and additional and evolved data models.
4. Reference Architecture with the community active in evolution; improved automation support; operational Reference Repository with improved visualization, aggregation, and disaggregation functionality for data models, an evolved CRM that includes execution metadata, additional and evolved data models, and an initial test technology investment decision support system.
5. Community-driven Reference Architecture; full automation support; Reference Repository with sophisticated visualization, aggregation, and disaggregation functionality for data models, an evolved CRM that includes metadata for all T&E phases, additional and evolved data models, and an improved test technology investment decision support system.
Companion diagrams show the DTC (x9), OTC (x8), and AEC (x13) test centers, with models, simulations, and instrumentation, evolving from local SCAs and individual test-center standard interfaces with shared databases and data models, to ATEC/Army standard interfaces with common databases and a Common Reference Model within the ATEC Reference Repository (ATEC T&E services), to international, joint, Army, and ATEC standard interfaces providing joint test and training services for M&S, instruments, threat representations, and facilities/infrastructure.]
#2: Capability Testing
Approach Planned

1. Assess inputs from Strategic Initiatives #1 and #3
2. Form core team
3. Define scope
4. Define SoS T&E As-Is State
   – Build-up of systems testing in an operational context
   – Build-up of systems interoperability
5. Define SoS Capability T&E To-Be State
   – Define gaps in implementation as an integrated capability SoS
   – Identify barriers responsible for these gaps
6. Draft Recommendations to Achieve Capability SoS T&E

Rethink T&E of SoS in an Operational Context
Summary

• Successful workshop with SoS and T&E practitioners
• Framework established for continuing collaboration
• Transitioned the discussion from challenges to solutions
• Strategic initiatives to develop T&E solutions for SoS:
  1. Define a best practices model
  2. Define SoS capability test
  3. Define characteristics of successful SoS T&E
• Collaborative go-do: recognize and employ existing guidance for SoS (DoD SoS SE Guide)

Not Too Late to Join a Team!
BACKUP
Details on T&E Issue Discussions
Issue 1: If SoS are not programs of record (and not subject to T&E regulations), why should we worry about this at all?
Discussion

• Restatement of issue:
  – How do we define, articulate, and enforce the relationship between the SoS and the constituent systems?
  – How does T&E support/help this?
• Governance/Roles/Stakeholders:
  – Need a shepherd (architect?) and support from users
  – Need to educate stakeholders
  – What are the rules of governance?
  – What are the regulations, standards, and policies?
  – Need to obtain resources (funding, test assets, time)
  – SoS leadership focus: architecture views, who "owns" the SoS
  – Potential conflicts between SoS and constituents
  – Business case for PMs to do SoS
• SoS T&E Focus:
  – SoS T&E is operationally driven (vs. DT-ish)
  – SoS edge of the envelope
  – What is an AoA of an SoS?
  – Emergent behaviors (good and bad)
  – SoS resource consumption (e.g., data pipeline)
  – Continual assessment (joint exercises, deployments)
  – How to define test strategies to test continuously and efficiently?
  – How do we help the T&E process help the SoS work?
• Understand SoS Capabilities:
  – What is the SoS expected to do?
  – Define and articulate the relation between the SoS and its systems
  – Flexible composition
  – Artfully sub-optimize the systems in favor of the SoS
  – System performance bounds are not rigid in real operation
• Candidate solution: an SoS requirements document with an annex for each constituent system stating that constituent's contribution to the SoS capability (a minimal sketch follows below)
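The candidate solution above, an SoS requirements document with an annex per constituent system, can be pictured as a simple data structure. The sketch below is a minimal illustration with hypothetical names and an invented example capability; it is not from the committee's work.

```python
# Sketch: an SoS requirements document with one annex per constituent system
# stating that system's contribution to the SoS capability (hypothetical data).

from dataclasses import dataclass, field

@dataclass
class ConstituentAnnex:
    system: str
    contribution: str              # the system's contribution to the SoS capability
    interfaces: list = field(default_factory=list)

@dataclass
class SoSRequirementsDoc:
    capability: str
    end_to_end_measure: str
    annexes: list = field(default_factory=list)

doc = SoSRequirementsDoc(
    capability="Time-sensitive targeting",            # invented example
    end_to_end_measure="sensor-to-shooter latency",
    annexes=[
        ConstituentAnnex("Sensor A", "detect and geolocate targets", ["track message"]),
        ConstituentAnnex("C2 Node B", "task a weapon within 60 s",
                         ["track message", "tasking message"]),
    ],
)

for annex in doc.annexes:
    print(f"{annex.system}: {annex.contribution}")
```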
Issue 1: If SoS are not programs of record (and not subject to T&E regulations), why should we worry about this at all?
Approach to addressing issue

• Define a minimal set of SoS governance characteristics of a successful Acknowledged SoS:
  – Roles/resources
  – Rules/regulations/standards/policies
  – Managing conflicts
  – Establishing cooperation of constituent systems
  – Includes responsibility to define SoS capabilities, architecture, and the associated test strategy
  – Concept of continual change and test in operational and training environments
  – Lean management, taking advantage of available opportunities
  – Recognize the large number of SoS across the DoD, the fact that many systems support multiple SoS, and the potential impacts of governance
Issue 2: If "requirements" are not clearly specified up front for a SoS, what is the basis for T&E of an SoS?
Discussion

• Requirements vs. expectations; mission objectives vs. technical requirements
• Mission threads linked to capability strands as an architecture model
• Who/what has responsibility for architecture/requirements: another DoD layer?
• Standards for participation or acceptance of each system into the SoS
• Requirements model for architecture encompassing time and space changes
• What is the balance of SoS-level requirement T&E at the program level vs. the SoS level?
• T&E of the aggregation of system-level requirements (SoS-level TEMP)
• Integrated development environment / reference architecture as model
• Need an operations/architecture view of the SoS that individual systems must plug into, and someone responsible for it
• Prioritization of SoS capabilities at a high (OSD) level is required to permit constituent PMs to manage development and delivery, with funding at the SoS level
• Measure and baseline SoS capability through T&E without requirements; where do we get metrics?
• Must have an "enforcer" capability manager with carrots and sticks
• Measure SoS capabilities when changes are made to the SoS baseline
• CONOPS vs. innovative use of systems in the face of a changing threat
• Move from paper to four dimensions to capture SoS capability requirements
• Use modeling tools of SoS components, delivered with each component, to communicate requirements
• Capability flow-down to systems; demonstrate meeting system capability
Issue 2: If "requirements" are not clearly defined up front for a SoS, what is the basis for T&E of an SoS?
Approach to addressing issue

• The DoD needs a top-down flow process (architecture, requirements, context, expectations) to systems within the SoS
• Needs authority and funding to enforce capability fulfillment
• Needs to be flexible enough to meet changing needs, threats, and CONOPS/operator innovation
• Determine the right balance between system-level test and SoS-level test
Issue 3: What is the relationship between SoS metrics and T&E objectives?
Discussion

• SoS T&E is focused on continuous improvement of the SoS (as compared to system T&E, which is focused on the field, fix, or don't-field decision)
• Continuous SoS T&E requires:
  – Stable/consistent metrics
  – A consistent approach to defining the evolving baseline (see the sketch below)
  – A way to deal with emergent behavior (technical, organizational, human), positive or negative
  – Leveraging a wide range of opportunities for test environments
  – Continuous improvement means continuous testing; built-in test instrumentation for feedback from the field
• SoS metrics:
  – Do not address discrete behaviors of systems (as system metrics do)
  – Do address end-to-end performance across systems in the SoS toward the capability objectives of the SoS
• What is the objective of T&E for an SoS?
  – Development of information on capabilities and limitations of the SoS to inform end users and ongoing SoS evolution (as compared to system T&E, which assesses whether the system meets requirements)
• SoS T&E customers?
  – End user and SoS SE team (as compared to system T&E, where the acquisition community is the customer)
• SoS T&E should be risk driven: focus on areas of risk to the SoS or systems
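The "continuous improvement" framing above can be made concrete with a small sketch: each assessment compares a stable end-to-end metric against the current baseline, reports improvement or regression rather than a one-time pass/fail verdict, and then rolls the baseline forward. The metric, values, and tolerance below are hypothetical.

```python
# Sketch: continuous SoS T&E against an evolving baseline (hypothetical data).

def assess(observed, baseline, tolerance=0.10):
    """Compare an end-to-end latency metric to the baseline (lower is better)."""
    delta = (observed - baseline) / baseline
    if delta > tolerance:
        status = "regression"       # latency grew beyond tolerance
    elif delta < -tolerance:
        status = "improvement"
    else:
        status = "stable"
    return status, delta

baseline = 12.0   # invented baseline: end-to-end latency in seconds
for event, observed in [("joint exercise", 11.2), ("deployment", 13.8)]:
    status, delta = assess(observed, baseline)
    print(f"{event}: {observed}s ({delta:+.1%}) -> {status}")
    baseline = observed   # evolve the baseline for the next assessment
```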
Issue 3: What is the relationship between SoS metrics and T&E objectives?
Approaches to addressing issue

• Characterize SoS T&E as continuous improvement; document the approach and share it with the community
• Radically change how we look at testing, given the growing prevalence of SoS:
  – Concepts of DT and OT don't really fit
  – It is inefficient to address systems in an operational SoS environment on a system-by-system basis (OT today)
  – Continue to test individual systems to assess whether we have developed what we asked for
  – Create a new approach to OT, with cross-system support for testing capabilities
Issue 4: Are expected cumulative impacts of systems changes on SoS performance the same as SoS performance objectives?
Discussion

• To address these issues you need to:
  – Define the SoS and its performance objectives:
    • The constituent systems that are part of the SoS
    • Which parts of the constituents contribute to the SoS objectives
  – Describe the current and future states of the changing systems (baselines)
  – Assign ownership of SoS performance objectives
  – A big challenge; a leadership issue
• More collaborative approach for stakeholders of the SoS
• Emergent behavior: interaction of systems, humans, and organizations, along with constant change of the parts
• Bounds of human impact:
  – Operator, leader, mission
  – The people side of systems
• Training and development of the evaluators (and the end users)
• Expensive to assess whether capabilities are realized (hard to do):
  – Doing more with less?
  – Disconnect between thinking and reality?
• Leadership understanding of SE and SoS:
  – Is there competency to make decisions and know the impact and implications?
  – Trades made without knowing whether the desired outcome can be achieved
  – Evaluation on an SoS basis vs. individual systems and their acquisitions
  – Timing and who benefits (lack of reward systems)
  – Accountability for the SoS
• Continued improvement, assessment, and alignment because objectives have changed:
  – More data from fielded systems
  – Connections to the fielded side of the house (which doesn't deal well with change)
• "Measurement system" for the SoS:
  – Analysis of impacts
  – M&S?
  – Risks: "we are not sure, but..." with some mitigation
  – Regression testing and configuration of the SoS
  – Comparative analysis
Issue 4: Are expected cumulative impacts of systems changes on SoS performance the same as SoS performance objectives?
Approaches to addressing issue

• Influence the assignment of leadership responsibility and ownership of defined SoS capabilities and associated performance objectives
• Establish incentives for constituent systems to collaborate and achieve SoS performance objectives
• Map SoS capabilities and performance objectives to constituent systems, under configuration control (see the sketch below)
• Continual assessment, improvement, and realignment is required (an incremental approach focused on the end user)
• Create a guidance framework for emergent behaviors of change to be measured and managed
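One lightweight way to picture the configuration-controlled mapping in the third bullet: keep the capability-to-system mapping as data and hash it, so any change to the mapping or to a constituent's declared contribution yields a new baseline identifier against which cumulative impacts can be assessed. The names and objective below are hypothetical.

```python
# Sketch: an SoS performance objective mapped to constituent systems, with a
# stable hash serving as a lightweight configuration-control tag.

import hashlib
import json

mapping = {                     # invented example data
    "objective": "track continuity >= 98%",
    "constituents": {
        "Radar X": "provides raw tracks",
        "Fusion Node Y": "correlates tracks across sensors",
    },
    "version": 3,
}

def config_id(mapping):
    """Deterministic hash of the mapping; changes whenever the mapping changes."""
    blob = json.dumps(mapping, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

print("SoS baseline configuration:", config_id(mapping))
```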
Issue 5: How do you test the contribution of a system to the end-to-end SoS performance in the absence of other SoS elements critical to the SoS results?
Discussion

• Trying to assemble all the piece parts for T&E
• So many variables can impact the T&E outcome
• Reliance on other programs (e.g., JTRS) for capabilities that can slip in schedule or are never delivered
• Spanning the "use-case" space with a reasonable set of resources and schedule
• Need a defined set of requirements (but, of course, this is part of the problem space)
• What does a T&E strategy look like?
• How do we account for "the network" and stresses to it?
• DoD should require programs to share (make transparent to other programs) their development, DT, and other data (obstacles: proprietary and security concerns)
• Recommend ways to instrument systems to enable post-fielding collection of "test" data during operations, exercises, and training
• DoD should develop a common approach to accounting for "the network" as a constituent of all SoS for purposes of T&E
• DoD should articulate the purpose of SoS T&E:
  – Is it a capability demo ("what do we have?")?
  – Is it a classical check against requirements?
  – The real purpose of SoS T&E is to answer: Is the new capability operationally useful (whether or not it "met" requirements)? What are the risks? How can the new capability be used? What further changes are required?
Issue 5: How do you test the contribution of a system to the end-to-end SoS performance in the absence of other SoS elements critical to the SoS results?
Approach

• M&S of piece parts that are not yet ready to be tested (though there are issues between M&S for individual system performance and effects-based M&S); a potential solution to issue #1
• Architectures, and synchronizing them, as an enabler of T&E (provides a well-defined baseline; deltas can be measured against the baseline)
• Combinatorial test and design (suggested as a potential solution to issue #2); see the sketch after this list
• A model-test-model approach, suggested as a way to accommodate emergent behavior
• Field exercises, with instrumentation to collect data
• Training as a T&E opportunity
• No SoS requirement means no TEMP for SoS capabilities, and therefore no SoS T&E funding. Hence the need for a capability (SoS) focused, cross-system, integrated test schedule that builds to a graduation-level event (there was some disagreement about the existence of such an event). Push SoS T&E to the fleet/operators as proof of IOC (needs fleet experimentation funding).
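Combinatorial test design, suggested above, can be sketched in a few lines: rather than exercising the full cross product of SoS factor settings, greedily select configurations until every pair of factor values appears in at least one test. The factors and values below are hypothetical.

```python
# Sketch: greedy pairwise (combinatorial) test design over hypothetical
# SoS test factors, covering all value pairs with fewer configurations
# than the full cross product.

from itertools import combinations, product

factors = {                      # invented example factors
    "network load": ["low", "high"],
    "JTRS radio": ["present", "emulated"],
    "threat level": ["benign", "contested"],
}

names = list(factors)
all_configs = [dict(zip(names, vals)) for vals in product(*factors.values())]

def pairs(cfg):
    """All factor-value pairs exercised by one configuration."""
    return {frozenset([(a, cfg[a]), (b, cfg[b])]) for a, b in combinations(names, 2)}

uncovered = set().union(*(pairs(c) for c in all_configs))
suite = []
while uncovered:                 # greedily pick the config covering the most pairs
    best = max(all_configs, key=lambda c: len(pairs(c) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(f"{len(suite)} of {len(all_configs)} configurations cover all value pairs:")
for cfg in suite:
    print(cfg)
```

With two values per factor this covers all pairs in roughly half the runs of the full factorial; the savings grow quickly as factors and values are added.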