Information Knowledge Systems Management 5 (2005/2006) 261–280, IOS Press

System of systems integration and test

Roland T. Brooks a and Andrew P. Sage b

a Lockheed Martin Corporation, Bethesda, MD, USA

b Department of Systems Engineering and Operations Research, George Mason University, Fairfax, VA, USA

Abstract: The environment in many contemporary organizations, including the US Department of Defense, has changed and evolved such that operations now require successful joint integration and interoperability of multiple complex systems to achieve an integrated operating capability. Many of these complex systems were not necessarily designed to communicate with one another, and a number of interesting problems arise when they are not properly integrated toward the goal of establishing an integrated system of systems (SoS) capability. There has been very little research with respect to the integration and testing of SoS and, in particular, the verification and validation activities that assure the systems are successfully integrated to achieve the integrated capability, accomplish the mission, and satisfy requirements.

This paper describes an approach to SoS integration and testing, including an evolutionary SoS overarching test process, driven by capabilities-based testing, for reducing integration risks when engineering a SoS. The effectiveness of the approach is validated by means of a case study. This paper describes several contributions:

1. a system requirements approach based on three models: top-down decomposition, horizontal threads-based analysis, and bottom-up operational capability reengineering;

2. a cross-program independent verification, validation, and test process; and

3. key systems engineering lessons learned and learning principles on the basis of the case study.

The major intent of this effort is to obtain an overarching SoS integration and test process that will enable risk reduction before the SoS is operationally deployed.

1. Introduction

Complex systems, which were not originally designed to be integrated, are now being connected as a system of systems (SoS) in an attempt to rapidly achieve an integrated operating capability. This has created a number of interesting problems, especially those of integration and interoperability risks when connecting interdependent component systems without a suitable process to support joint integration and interoperability. This is especially true in today's environment, where there is a need for information, knowledge, and intelligence sharing among organizations, programs, and systems because of a larger, more diverse set of operations.

System of systems engineering requires architectural definition and system of systems integration planning, which are different from the traditional systems engineering process related activities. SoS integration is a complex issue and requires an evolutionary system of systems engineering approach because of the emergent SoS properties, including: ambiguous boundaries, diverse stakeholders, fluid situations and requirements, ill-defined problems, dysfunctional barriers, sophisticated technology, and uncertain resources [11]. There is a need for a flexible architectural framework and an integration and test process, which are focused on the overall SoS as an integrated capability. Each component system of a system of systems must work collaboratively and cooperatively towards a coherent mission.


One of the main contemporary challenges is that SoS are often formed from a variety of component systems that are engineered from the bottom up as customized systems. These systems are typically composed heavily of commercial-off-the-shelf (COTS) elements and systems. The component systems have often been procured by different organizational units and in different time frames, even when intended for the same type of operation, and thus often work in quite dissimilar ways. Also, to exacerbate the problem, most of these component systems are rarely decommissioned or retired, which not only results in significant support and maintenance life-cycle costs, but also complicates integration issues. The reason is that any new capability that is deployed to an operational environment must be tested for interoperability with legacy systems as well as with new, evolving systems.

A major task in evolving a SoS is how to effectively integrate the component systems so that they work as a combined whole. The answer lies in achieving interoperability between the component systems; but interoperability has rarely been as successful as intended. Therefore, interoperability is one of the most essential Measures of Effectiveness of a SoS. It is important to mention that the overall performance of a SoS relies just as much on human and organizational actions as it does upon the performance of technical system components. The human and organizational elements have to be considered as integral parts of the SoS.

Interoperability is a quality required by all inter-communicating component systems. In the SoS context, interoperability can be defined as the ability of a system to correctly exchange with other systems, by means of mutually supportive actions, useful information and knowledge in support of end-to-end operational effectiveness for strategic and operational success. Joint interoperability does not mean that the human computer interface must be identical for all participants in a SoS. Rather, each should be in the form most suitable for the particular task and take into account such factors as the natural language and culture of operators.

The increasing size, power, and complexity of a SoS are such that complete comprehension of the SoS is probably beyond the intellectual capability of any one person. However, a SoS can be specified effectively using careful system of systems analysis of the operational capabilities and requirements (what do we want to do and what information or knowledge do we need to do it?), and decomposition or partitioning into smaller and understandable elements. This is significant for reducing the problem space, risk, and complexity of a SoS. The delivery of an operationally effective SoS is critically dependent upon successful integration of key interfaces to achieve an integrated capability. Common standards, specifications, rules, and definitions are fundamental and helpful when constructing or connecting a SoS, but cannot alone guarantee interoperability. Designing a new system to be incorporated into a SoS requires not only a clear understanding of the capabilities and requirements of the system to be developed, but also a clear understanding of the implementations of the systems with which it is to interoperate. This is why an open and flexible architecture is important; and it must describe a common language and set of rules.

Figure 1 illustrates a prototypical SoS diagram and helps distinguish some of the differences between systems engineering and SoS engineering.

Fig. 1. Sample SoS diagram.

In this figure, the SoS is the overarching system comprising System A, System B, and System C. Each of the component systems accomplishes independently useful functionality, but successful integration enables an overarching integrated set of capabilities not possible for any individual system. The traditional systems engineering focus is reflected by System A, which could be decomposed into subsystems one and two. The system of systems is not just a sum of all the parts, because when the component systems are integrated there are often interesting interactions and dynamics that are not anticipated and are beyond the capability of any one system. The system of systems behavior is emergent and non-linear, and will continue to evolve over time based on feedback and the environment. A key issue here is the possibility of conflicting objectives among component systems. For example, it could occur that System A cannot perform its role in the SoS without compromising its own individual objectives. In such situations, the system of systems will never really reach a full operating capability or completion state.

In the next section, we present a brief survey of relevant literature and a discussion of relevant work to date. Next, we present an overview of our approach that addresses the problems and challenges. The subsequent section contains a case study of the actual work conducted, including conclusions and key findings, to validate the approach taken. This is followed by our conclusions.

2. Literature review

There has been limited research on SoS, especially from the integration and testing perspective. Most work has been focused on SoS architectural frameworks and is a necessary prelude to formal integration and testing. Formal testing activities are critical because these activities are methods for providing verification and validation (V&V). This section contains a discussion and survey of various SoS research efforts to date, which are helpful as a foundation for formal testing.

Keating and Rogers [16] assert that systems of systems require an evolutionary systems engineering approach that is different from the traditional systems engineering process. The authors identify six problem themes of SoS knowledge development: fragmented perspectives, lack of rigorous development, lack of theoretical grounding, information technology dominance, limitations of the traditional SE single-system focus, and whole systems analysis. This work describes some key distinctions between system of systems engineering and SE, which include: the focus on multiple integrated complex systems vs. a single complex system; the objective of satisficing vs. optimizing; the approach of a methodology vs. a process; the expectation of an initial response vs. a solution; the problem of emergent vs. defined; the analysis of contextual influence dominance vs. technical dominance; the goals of pluralistic vs. unitary; and the boundaries of fluid vs. fixed.

Sage and Cuppan [19] assert that SoS exhibit the characteristics of complex adaptive systems. They describe five primary characteristics, based on work by Maier [15], that make the SoS designation: operational independence of the individual systems, where each system has purpose and functionality in its own right; managerial independence of the systems, where there is no centralized control; geographic distribution, such that systems tend to be distributed geographically; emergent behavior, such that the whole is not the sum of all the parts; and evolutionary development, such that the SoS emerges and adapts over time. Sage and Cuppan [19] also explore a concept called "New Federalism", which is a useful concept related to SoS. It pertains to a federation of systems (e.g., a coalition system) and applies in situations where there is little central command-and-control-like authority and power.

Morganwalp and Sage [16] describe an Enterprise Architectural Framework (EAF), an Architecture Development Process (ADP), and Measures of Effectiveness for SoS. The Enterprise Architectural Framework is an extension of the Zachman Architectural Framework with a three-dimensional view to provide adequate insight and visibility to the SoS. The authors assert that an enterprise architecture is typically also a SoS. There is also a discussion of a multi-stroke decomposition process for a SoS from the enterprise level to the parts level.

Krygiel [13] describes the integration environment and some key lessons learned from two case studies based on experiences from NIMA and the Army when building a SoS. While the approaches taken by NIMA and the Army were similar, the implementation strategies were different: the Army generally used a top-down process and NIMA a bottom-up process. The author extrapolated nine common lessons learned from the case studies as categories to consider when integrating a SoS.

Kasser [11] discusses techniques for managing a SoS based on information and knowledge management perspectives for controlling the systems life-cycle of the component systems. The author also establishes an overall management framework to reduce the complexity of managing a SoS, to help solve issues such as cost and schedule overruns and program failures.

Correa and Keating [2] assert the use of multimethodologies (methodology enhancement) as a framework for model formulation of SoS to resolve system of systems engineering issues. The authors also apply standard complex adaptive systems (CAS) characteristics to a SoS to help understand the complexity of a SoS. The characteristics include: aggregation, the integration of multiple complex systems; tagging, where a new set of mechanisms must emerge to enable integration; non-linearity, such that SoS behavior cannot be derived by just analyzing the component systems; flow, whereby component systems exchange information as required to achieve the mission; diversity, in which, from many potential component systems, one set must be chosen to constitute the SoS; internal models, whereby multiple alternatives of states and actions have to be explored due to the uncertain environments of a SoS; and building blocks to enable evolutionary development.

Sage [18] presents an overview and description of the various conflict and risk situations that involve SoS, including a partial taxonomy. With the trend toward distributed systems and globalization, the author contends that numerous organizational difficulties and dysfunctional behaviors arise, which result in potential conflict situations. These issues are challenges to systems management involving SoS and, specifically, large distributed systems engineering and integration. The author asserts an eight-step approach that allows meaningful implementation of an overall risk and conflict management strategy.

Eisner [7] describes a process of architecting a unified SoS in development situations where a SoS engineering discipline needs to be formulated and applied. This paper provides a formalism called "S2 Engineering", and addresses how computer tools may be and are applied to this new discipline. An example is given of two legacy systems, S1 and S2, and a general suggested process for the architect to upgrade both for additional capability and functionality.

Bodeau [1] proposes a systems security engineering process for SoS. This process addresses such issues as how to identify and mitigate the security risks resulting from connectivity between component systems of a SoS, how to integrate security into a target architecture, and how to address the constraints associated with legacy systems. The author asserts that the collection of component systems must not be treated as one entity, because the integration could cause or result in vulnerabilities that were not apparent in any particular system. This might well cause an individual system's role to be compromised due to compliance with SoS needs.

3. Overview of SoS architecture and test approach

As previously mentioned, there are several interesting problems encountered when connecting systems to establish a system of systems capability. This section describes the approach, including good system of systems architecting and integration and test planning. One of the key tenets of this approach is the need to identify and resolve integration risk early in the SoS engineering life-cycle to support such efforts as capabilities-based planning and implementation. Figure 2 provides a description of the system of systems based life-cycle process in the form of a traditional V diagram. The rest of this lengthy section will be concerned with describing implementation details for this process model. Traditional systems engineering is often based on rigid requirements (requirements-centric) and non-agile processes, where systems are deployed as platform-centric solutions with little interoperability. Among the current processes, especially within DoD, is that of Capabilities-Based Planning (CBP), based on mission needs and overarching capabilities, joint vision, and joint operating concepts or concepts of operations. This capabilities-based SoS engineering process continues with SoS requirements and architecture definition, architecture-based design, spiral or evolutionary development, SoS integration and verification, and deployment and operations.

Capability-Based Planning [3,5,17] is a top-down methodology for identifying and describing existing and future capability shortcomings. This methodology also includes the key attributes of effective solutions and a description of the most effective approach or combination of approaches to resolve the capability shortcomings. This approach leverages the expertise of multiple agencies and organizations, industry, and academia in order to identify improvements to existing capabilities and to develop new or modified capabilities required to support missions. One of the advantages of this approach is that it moves away from a program-by-program solution in favor of building a capabilities-based solution. This often leads to a Capabilities-Based Acquisition decision to fill the capability gap or shortcoming by building a network-centric system of systems or family of systems. The analyses to support this process include functional needs analysis and functional solutions analysis, which are documented in a specific set of capabilities-based documents.

Within DoD, these documents include an Initial Capabilities Document (ICD), a Capabilities Development Document (CDD), and a Capabilities Production Document (CPD). The Initial Capabilities Document describes gaps in capability for a particular functional or mission area. The Capability Development Document defines an increment of operational capability to support the System Development and Demonstration phase of the acquisition process. The Capability Development Document also provides the measurable and verifiable operational performance parameters, including Key Performance Parameters. The Key Performance Parameters are those system attributes or characteristics considered most essential for the desired capability. The Capability Production Document provides the information necessary to support production, testing, and deployment of a capability increment via the Production and Deployment phase of system of systems acquisition. The Capabilities Production Document also refines the performance attributes and Key Performance Parameters initially developed in the Capability Development Document.

Fig. 2. Capabilities-based lifecycle V process model. (The figure also depicts the supporting systems management and control processes: program management and control; risk/opportunity management; decision analysis; configuration control and management; program finance; subcontract, supplier, and quantitative management; contract management; quality; systems engineering and management support; integrated logistics support (ILS); system of systems readiness; and systems analysis and modeling.)
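As a concrete illustration of the capability-document chain just described, the sketch below represents an ICD's capability gaps, a CDD increment with its Key Performance Parameters, and a simple check for gaps not yet addressed by any increment. The names and fields are hypothetical, not drawn from the paper or any DoD schema.

```python
# Hypothetical sketch of the ICD -> CDD -> CPD chain; fields are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class KeyPerformanceParameter:
    name: str           # attribute considered essential for the desired capability
    threshold: str      # minimum acceptable value
    objective: str      # desired value

@dataclass
class InitialCapabilitiesDocument:          # ICD: capability gaps for a mission area
    mission_area: str
    capability_gaps: List[str]

@dataclass
class CapabilityDevelopmentDocument:        # CDD: one increment of operational capability
    increment: str
    addressed_gaps: List[str]               # ICD gaps this increment is meant to cover
    kpps: List[KeyPerformanceParameter]

@dataclass
class CapabilityProductionDocument:         # CPD: refined attributes for production/deployment
    increment: str
    refined_kpps: List[KeyPerformanceParameter]

def untraced_gaps(icd: InitialCapabilitiesDocument,
                  cdds: List[CapabilityDevelopmentDocument]) -> List[str]:
    """Return ICD gaps not yet addressed by any planned capability increment."""
    covered = {g for cdd in cdds for g in cdd.addressed_gaps}
    return [g for g in icd.capability_gaps if g not in covered]
```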

Architecture-based design is a multi-dimensional engineering decision practice that establishes an architectural solution for a system to allow for its assembly through subsequent development and implementation processes. This concept is appropriate in a SoS context because it enables the transformation from a rigid requirements-centric approach to a holistic architectural-design approach. This includes using and applying, for example, the DoD Architectural Framework (DoDAF) [4] views and building the associated architectural products or artifacts. These views are the integrated Operational Views (OVs), System Views (SVs), and Technical Standards Views (TVs). These three DoDAF views are major perspectives that logically combine to define an integrated architectural description. The Operational View describes the tasks and activities necessary to successfully perform a mission. The Systems View describes the systems of concern and the interfaces among those systems in context with the Operational View. The Technical Standards View describes a profile of the minimal set of time-phased standards and rules governing implementation. System of systems architects can use these to produce the architectural products needed to establish an integrated architecture. This type of integrated architecture description is imperative for good system of systems architecting to guide design, development, and integration. It also provides a framework or construct in support of Capabilities-Based Planning for identifying issues such as capability gaps, duplications, and systems interoperability concerns.

System of systems architects can use a variety of automated software tools to build the appropriate architectural products or artifacts in a standard format such as the Unified Modeling Language (UML) or the Systems Modeling Language (SysML). An automated tool can be helpful for system of systems configuration management and traceability because it makes it easier to integrate and maintain the architectural products. The system requirements specification for the system of systems is derived and translated from the overarching capabilities, operational mission scenarios, operational requirements, and integrated architecture. The system requirements specification and the overall desired integrated capability are the basis for formal testing, including verification and validation. The system requirements have an important role in allocating responsibility to development programs to build the component systems and interfaces that provide key functionality to enable integration. The integrated architecture helps in this regard because, if there are gaps, a requirement can be derived and allocated to a development program to build the functionality required. This assures that development programs are under contract to build the essential functions and interfaces required to support the SoS. The challenge is to write the system requirements specification with the essential performance characteristics so as to enable developers to rapidly build systems effectively in an incremental or evolutionary manner, and to enable testers to verify or measure the system performance iteratively against the requirements specification. This must be done while concurrently keeping a focus on building and deploying the overall desired capabilities quickly in order to support the mission, while maintaining process discipline and reducing risks associated with cost, schedule, and performance. The system requirements specification must define the minimal essential constraints (e.g., standards, information technology infrastructure, etc.) and behaviors that enable interoperability and integration. Also, the requirements must maintain traceability to the integrated architecture to allow for configuration management.
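The sketch below shows one way the allocation and traceability bookkeeping just described might be held; the identifiers and structure are illustrative assumptions, since the paper does not prescribe a tool or schema. Each SoS requirement carries its architecture references and its allocated development program, so unallocated requirements and architecture interfaces with no covering requirement can be surfaced.

```python
# Illustrative sketch (assumed structure): SoS requirements traced to architecture
# artifacts and allocated to the development program responsible for building them.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SoSRequirement:
    req_id: str
    text: str
    architecture_refs: List[str]        # e.g., operational activity or interface IDs
    allocated_program: Optional[str]    # development program under contract, if any

def allocation_gaps(reqs: List[SoSRequirement]) -> List[str]:
    """Requirements with no development program on contract to implement them."""
    return [r.req_id for r in reqs if r.allocated_program is None]

def untraced_interfaces(reqs: List[SoSRequirement], interfaces: List[str]) -> List[str]:
    """Architecture interfaces not covered by any requirement; each is a candidate
    for a derived requirement allocated to a development program."""
    covered = {ref for r in reqs for ref in r.architecture_refs}
    return [i for i in interfaces if i not in covered]

if __name__ == "__main__":
    reqs = [
        SoSRequirement("SoS-001", "Exchange track data within 5 s", ["IF-A-B"], "Program A"),
        SoSRequirement("SoS-002", "Publish status reports", ["IF-B-C"], None),
    ]
    print(allocation_gaps(reqs))                                      # ['SoS-002']
    print(untraced_interfaces(reqs, ["IF-A-B", "IF-B-C", "IF-A-C"]))  # ['IF-A-C']
```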

Top-down requirements management seeks to assure that the existing system requirements are mapped to overarching requirements documents (e.g., the Capabilities Development Document) and architectural artifacts. These are allocated down to all of the programs and component systems. The requirements database helps to accomplish this by establishing vertical requirements traceability. All requirements pertaining to an integrated capability, across all the component systems, should be mapped to an integrated capability deployment. This helps identify the key system requirements and documents, including the SoS Specification and Interface Requirement Specifications, that pertain to each integrated initial operating capability.

Horizontal threads-based requirements and interface management employs a requirements function that allows architecture products to be imported from a software tool and establishes traceability between thread sequences of functions and data exchanges and the corresponding functional and interface requirements. This allocation of requirements to threads (behavior analysis) enables the Cross-Program Independent Verification and Validation Process and test plans to incorporate architectural artifacts to establish the testing of the capabilities provided by the multiple programs.
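As a minimal sketch of this thread-to-requirement bookkeeping (the system, step, and requirement names are invented for illustration), a thread can be modeled as an ordered sequence of functions and data exchanges across component systems, with requirements allocated to each step; a cross-program test of the thread then claims coverage of exactly those requirements.

```python
# Sketch of the threads-based idea: a thread is an ordered sequence of functions and
# data exchanges across systems, with requirements allocated to each step.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class ThreadStep:
    system: str                 # component system performing the step
    action: str                 # function or data exchange
    requirement_ids: List[str]  # functional/interface requirements allocated to this step

@dataclass
class Thread:
    name: str
    steps: List[ThreadStep]

    def participating_programs(self) -> Set[str]:
        return {s.system for s in self.steps}

    def requirements_covered(self) -> Set[str]:
        return {rid for s in self.steps for rid in s.requirement_ids}

# Example: a hypothetical end-to-end reporting thread
thread = Thread("Collect-process-report", [
    ThreadStep("System A", "collect raw data", ["SoS-010"]),
    ThreadStep("System A", "send data to System B", ["IRS-A-B-01"]),
    ThreadStep("System B", "process and publish report", ["SoS-011", "IRS-B-C-02"]),
])
print(thread.participating_programs())   # e.g. {'System A', 'System B'}
print(thread.requirements_covered())     # requirements a test of this thread would exercise
```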

Operational capability reengineering assures that derived requirements created by tactical modification of the operational baselines are captured and properly accounted for in the development and planning baselines. Requiring that changes to the operational baselines at all levels be coordinated assures that the baselines are kept synchronized. This synchronization allows integration of each development baseline into an integrated initial operating capability.

Systems analysis, including cross-program analysis and modeling, enables SoS trades to be performed in which predictions can be made when prioritizing system requirements across the various component systems. This leads to decision analysis when making trades regarding performance, schedule, and cost. These SoS trades are important for performing what-if analysis (e.g., what if a certain capability was deployed in a particular sequence to various sites), especially in the area of deployment planning and synchronization. Modeling and simulation have an important interaction as tools within the Cross-Program IV&V and Test Process.
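A trivial decision-analysis sketch of such a what-if trade is given below; the criteria, weights, alternatives, and scores are purely illustrative assumptions, not values from the paper, and a real trade study would rest on modeling and simulation results.

```python
# Illustrative what-if trade sketch: rank alternative deployment sequences by a
# weighted score over assumed performance, schedule, and cost criteria.
from typing import Dict

def weighted_score(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted sum of criterion scores (higher is better)."""
    return sum(weights[c] * scores[c] for c in weights)

weights = {"performance": 0.5, "schedule": 0.3, "cost": 0.2}
alternatives = {
    "Deploy capability 1 to Site X first": {"performance": 0.8, "schedule": 0.6, "cost": 0.7},
    "Deploy capability 2 to Site Y first": {"performance": 0.7, "schedule": 0.9, "cost": 0.5},
}
ranked = sorted(alternatives.items(), key=lambda kv: weighted_score(kv[1], weights), reverse=True)
for name, scores in ranked:
    print(f"{weighted_score(scores, weights):.2f}  {name}")
```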

The capabilities-based SoS engineering process described here takes a more holistic view of systems than does the traditional systems engineering process. In other words, it attempts architecting and design of the whole, rather than only architecting and design of the individual systems and components. This enables the mission need and desired capabilities to be transformed into an overarching systems concept and architecture. Figure 2 illustrates the suggested Capabilities-Based SoS Engineering V Process. This process is driven by capability gaps or shortcomings. It also helps set the stage for taking a holistic view of the SoS by focusing on good system of systems architecting, integration, and test planning. The left side of the engineering V comprises the decomposition and definition phases. The right side comprises the implementation, verification, and validation phases. The SoS integration and verification area in Fig. 2 starts concurrently with the SoS requirements and architecture definition area. The SoS integration and verification area is specifically highlighted to illustrate the focus of this research and to set the context for the Cross-Program Independent Verification, Validation, and Test Process that is introduced, defined, and implemented in this project as a significant contribution.

SoS architecture integration must be based on a domain-proven SoS process that will enable capabilities developed under multiple programs to function as a coherent whole, through support for identification of interfaces, identification of gaps, and adjudication of functional overlaps. The interdependent systems provide unique challenges for large-scale SoS integration. The programs and systems are exceedingly complex, and the first challenge to SoS integration is to make sure their architectures align quickly and effectively to support rapidly evolving mission needs. Additional challenges include identifying and resolving redundancies, objectively measuring progress towards a more efficient architecture, and building the architecture interface for transition and change management issues. The end result will be identification of gaps and inconsistencies between the programs and the recommendation of corrective actions for each program. The key attributes and critical architectural principles, coupled with the SoS requirements, help facilitate changing missions and the uncertain environment for SoS. The key attributes and critical architecture principles in Table 1 are essentially the Measures of Effectiveness at the SoS level, which can be augmented by individual system Measures of Effectiveness and Measures of Performance. All of the key attributes are important, but interoperability will be the focus because it is essential for successful SoS integration.

We will now describe the Cross-Program Independent Verification, Validation, and Test Overarching Process, including implementation procedures. This specific test process, as a significant contribution in this project, was designed and implemented to identify and resolve integration risk in this particular situation for the SoS. Also, this process entails system-level verification and validation procedures and SoS-level verification and validation procedures as key elements of the process.

Table 1
Generic SoS measures of effectiveness (key attributes and critical architecture principles, with the corresponding SoS requirements)

– Scalability: The ability of a system to incrementally expand or shrink in response to changing mission requirements. Specified through the ability to respond to increasing demand for a function or service through service replication, increased hardware system resource availability, or improved system performance.

– Interoperability: The technical exchange of information and the end-to-end operational effectiveness of that exchange. Specified through integratability, adaptability, business rules, data access, information architecture, and data objects. Measured by the Levels of Information Systems Interoperability capability maturity model.

– Information Assurance: Information operations that protect and defend information and information systems by assuring their availability, integrity, authentication, confidentiality, and non-repudiation. Specified by confidentiality, integrity, availability, authentication, non-repudiation, and other requirements.

– Robustness: The ability to accommodate unexpected or unplanned events, requirements, or demands. Specified through evolvability (extensibility), survivability (mission assurance), fault-tolerance, self-healing, no central point of failure, high granularity, and loose coupling.

– Agility: The ability to accommodate multiple and changing missions and priorities. Specified through conjunction, application and business rule context separation architecture to mimic business, user-managed business rules and metadata, platform-neutral rules, and business rule components.

– Flexibility: The accommodation and integration of new technologies and experimentation. Specified through abstraction, late binding, indirection, encapsulation, implementation separation, message-based integration, and continuous technology insertion.

– Service Orientation: The separation of how and where an operation, function, method, or state is implemented from what the operation, function, method, or state is supposed to do or represent. Specified through component-based and service-based design, layering and tiers, self-forming and self-describing services, and generalized public interfaces.

– Adherence to Enterprise Standards: Assurance that the system is designed in accordance with organizational strategy, policies, procedures, and standards. Specified through requirements for services and frameworks to adhere to these standards, with measurement and assessment against these standards repeated throughout the SoS life-cycle to assure compliance.

Planning for Cross-Program IV&V and Test is initiated concurrently with the SoS requirements and architecture definition phase of the Capabilities-Based SoS Engineering V Process. Phase-appropriate test metrics must be collected, analyzed, and reported. Inherent to each phase of testing are planning, commitment, execution, and reporting activities. Each test commences with a test readiness review, a formal decision point to determine that the component systems are suitable or ready for integration testing, and completes when the test report is delivered to systems management. During each test phase, any discrepancies that are encountered must be documented, assigned, worked, and tracked to closure in accordance with the test team tracking process. Product anomalies are tracked as discrepancy reports, typically through an Engineering Review Board, which is the technical group that makes decisions and recommendations and assures that the engineering is sound. During each test phase, updates to the development programs' configuration items must be coordinated through a Configuration Control Board, which is the final decision authority for changes to configuration items and for assessment of the impact of those changes on program costs, schedules, and performance. It is critical that the Lead Systems Integrator or Systems Integrator reviews and provides input to each phase of the life-cycle to assure that program risks are reduced and that an efficient, cohesive, and robust test and evaluation program is implemented.
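To make the tracking-to-closure workflow concrete, here is a minimal state-machine sketch; the states and allowed transitions are an assumed simplification of the Engineering Review Board / Configuration Control Board flow described above, not a prescribed DoD process.

```python
# Illustrative sketch: a discrepancy report tracked from documentation through
# ERB/CCB disposition to closure, with only legal state transitions allowed.
from dataclasses import dataclass, field
from typing import List

ALLOWED = {
    "documented": {"assigned"},
    "assigned":   {"in_work"},
    "in_work":    {"erb_review"},            # Engineering Review Board disposition
    "erb_review": {"ccb_review", "closed"},  # configuration changes go to the CCB
    "ccb_review": {"in_work", "closed"},     # CCB approves the change or sends it back
    "closed":     set(),
}

@dataclass
class DiscrepancyReport:
    dr_id: str
    state: str = "documented"
    history: List[str] = field(default_factory=lambda: ["documented"])

    def move_to(self, new_state: str) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.dr_id}: illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

dr = DiscrepancyReport("DR-042")
for s in ["assigned", "in_work", "erb_review", "ccb_review", "closed"]:
    dr.move_to(s)
print(dr.history)
```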

Modern verification and validation methods, compatible with Service Oriented Architectures (SOA) and current fast-paced asynchronous development and deployment models, are needed to assure that the capabilities-based SoS test program does not impede rapid development. These verification and validation methods must employ incremental and iterative techniques, tightly coupled with associated development activities, to effect early integration testing, reduce risk, and assure interoperability, while remaining flexible enough to enable rapid verification and deployment of quick-reaction capabilities considering real-world constraints.

The Cross-Program Test Lead facilitates cross-program interface testing, leads the verification of the SoS-level requirements specification, supports the transition to operations, and validates that the capabilities integrated into the operational baseline satisfy the needs and objectives of the users. The Cross-Program Test Lead leverages laboratory resources developed for system-level testing, together with an Integrated Test Facility Federation (ITFF), to establish a cross-program integration test environment that supports both verification and validation. The Cross-Program Test Lead also leverages the test personnel from the development systems to aid in the development of test scenarios, test cases, and/or test procedures for the conduct of cross-program tests. The Cross-Program Test Lead should use best-of-breed tools, proven processes, techniques, and the expertise required to implement a robust test and evaluation program. One of the key early activities is determination of the verification methods (e.g., demonstration, analysis, testing, and examination) associated with each requirement or capability. It is important that the appropriate methods, based on cost and other conditions, be allocated early in the process. In many cases, verification testing of performance requirements and capabilities is not a cost-effective verification or validation method because of the number of conditions. The Cross-Program Test Lead, where appropriate, should use analysis as a method, including cross-program analysis and modeling and other analytical techniques, to verify and validate performance requirements and capabilities. This includes verifying and validating the associated Measures of Effectiveness, Measures of Performance, and Technical Performance Measures. These are very important factors in a SoS context.
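The heuristic below is only an illustrative sketch of such an early verification-method allocation; the paper does not give allocation rules, so the requirement kinds and the condition-count threshold are assumptions.

```python
# Illustrative heuristic: allocate a verification method to each requirement early,
# preferring analysis when exhaustively testing a performance requirement would not
# be cost-effective because of the number of conditions.
from dataclasses import dataclass

METHODS = ("demonstration", "analysis", "test", "examination")

@dataclass
class Requirement:
    req_id: str
    kind: str              # "performance", "interface", "functional", "documentation"
    condition_count: int   # rough number of conditions to cover

def allocate_method(req: Requirement, max_testable_conditions: int = 20) -> str:
    if req.kind == "documentation":
        return "examination"
    if req.kind == "performance" and req.condition_count > max_testable_conditions:
        return "analysis"          # cross-program analysis / modeling and simulation
    if req.kind == "interface":
        return "test"
    return "demonstration"

for r in [Requirement("SoS-020", "performance", 500),
          Requirement("IRS-A-B-01", "interface", 12),
          Requirement("SoS-030", "functional", 3)]:
    print(r.req_id, "->", allocate_method(r))
```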

System-level verification is the formal verification testing of the development programs and systems, as described in the individual programs' test plans and System Requirements Documents. The Program Managers of the development programs are responsible for ensuring that these test plans and requirements are complete and accurate. The purpose of the Cross-Program Test Lead's participation at this level of testing is to verify that system-level testing is complete and suitable. The Cross-Program Test Lead documents the assessments and test activities. Any discrepancies raised are documented, assigned, worked, and tracked to closure using the tracking and discrepancy reporting process.

A key component of the overarching testing process is working closely with the development programs' test teams to influence Early Interface Testing. Early Interface Testing is the informal testing of interface compatibility between two systems at a time, before the full system-level capability is complete. This is a critical technical risk reduction activity to assure interface compatibility early in the development cycle. To promote the success of this test activity, the Cross-Program Test Lead identifies risk areas as candidates for Early Interface Testing for each integrated capability. This planning and coordination includes a schedule and a list of products and tools needed to conduct this level of testing. The Cross-Program Test Lead plans, coordinates, and facilitates Early Interface Testing between selected development programs. Also, the Cross-Program Test Lead adjudicates any interface discrepancies between systems. This is important because sometimes the independent Systems Integrator or Lead Systems Integrator is required as an honest broker to determine which side of the interface failed and to help take corrective actions to resolve the problem before formal cross-program integration testing begins.

Early Interface Testing is also important in a service-oriented architecture with one-to-many and many-to-many interfaces between the systems created by different development programs. This test activity verifies the system design early on, and verifies that changes to a service developed by one program will not adversely affect already deployed services in the operational baseline. It is not enough to just verify that each side of an interface conforms to an Interface Control Document or Interface Requirements Specification without actually conducting a test or demonstration. Early Interface Testing therefore can greatly reduce technical and schedule risks for improved cross-program integration.
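As a sketch of what an early pairwise interface check might exercise, the example below validates a sample producer message against an assumed interface definition; the field names and the spec itself are hypothetical, not taken from any Interface Control Document.

```python
# Sketch of an early interface check: compare one system's sample message against the
# agreed message definition before full system-level capability exists.
from typing import Any, Dict, List

# Assumed interface spec for illustration: field name -> required Python type
INTERFACE_SPEC_A_TO_B: Dict[str, type] = {
    "track_id": str,
    "latitude": float,
    "longitude": float,
    "timestamp": str,
}

def check_message(msg: Dict[str, Any], spec: Dict[str, type]) -> List[str]:
    """Return a list of discrepancies between a sample message and the interface spec."""
    issues: List[str] = []
    for name, expected in spec.items():
        if name not in msg:
            issues.append(f"missing field: {name}")
        elif not isinstance(msg[name], expected):
            issues.append(f"{name}: expected {expected.__name__}, got {type(msg[name]).__name__}")
    for name in msg:
        if name not in spec:
            issues.append(f"unexpected field: {name}")
    return issues

sample = {"track_id": "T-100", "latitude": 38.8, "longitude": "-77.3", "timestamp": "2005-06-01T12:00Z"}
print(check_message(sample, INTERFACE_SPEC_A_TO_B))   # type mismatch -> discrepancy report candidate
```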

Prioritized testing will focus on cross-program interfaces identified as either high risk or key to the integrated capabilities. A high-risk interface can be defined as an interface whose failure would hinder the SoS in performing critical functionality or providing even minimal capabilities. Cross-program analysis plays a useful role in helping determine and identify such high-risk interfaces, as key integration points, to support integration testing. Cross-Program Testing is the formal testing of interfaces and interactions among the component systems delivered for an integrated capability. This testing uses the Interface Control Documents and Interface Requirements Specifications. It is the first phase of testing the integrated capability. The Cross-Program Test Lead plans, coordinates, and leads the activity by developing and maintaining a Cross-Program Test Plan in collaboration with the development programs, their Test Managers, and test teams. This plan documents the cross-program testing scope, objectives, requirements, test environment, responsibilities, data evaluation, entrance and exit criteria, and schedule. The Cross-Program Test Lead integrates the detailed test procedures and engineering instructions provided by the development programs and their test teams into comprehensive test procedures, with support from the developers. This practice helps the Cross-Program Test Lead orchestrate testing in accordance with the Cross-Program Test Plan and Test Procedures. The major emphasis in Cross-Program Testing is on verifying Interface Requirements Specifications and interoperability. This testing includes both functionality and performance of multiple interfaces and their information exchanges. Any anomalies found are documented and tracked to closure using the discrepancy reporting process. Once the Cross-Program Test Lead has successfully demonstrated the key individual system interfaces, the integrated capability is verified and validated by means of SoS-level verification and validation procedures, and this must occur before transition to an operational baseline.

Independent verification and validation consists of two levels of testing: SoS-level verification and SoS-level validation. This is the final phase of testing the integrated capability. The Cross-Program Test Lead is responsible for coordinating and leading the planning, scheduling, and orchestration of these two levels of testing. The Cross-Program Test Lead develops an Integrated Capability Test Plan that documents the strategy, roles and responsibilities, schedule, coordination plans, objectives, requirements, entrance and exit criteria, and description of test scenarios. This plan supports both the SoS verification and SoS validation activities.

The objective of verification is to assure that the SoS satisfies the SoS Specification for the integrated capability and performs as intended in an operational environment. Using the Integrated Capability Plan, the Cross-Program Test Lead develops detailed integrated capability verification procedures. These procedures include verification of Key Performance Parameters and key technical performance requirements (including load and stability). In addition to testing normal operations and performance requirements, the Cross-Program Test Lead must perform capabilities-based test activities for the SoS. The Cross-Program Test Lead works closely with the operational personnel and the system developers to coordinate test data input and determine the necessary configurations to allow test data to flow or traverse through the key SoS interfaces and be identified at various user interface and integration points. This requires joint collaboration between the Cross-Program Test Lead and developer contractors. Any anomalies found are documented and tracked to closure using the discrepancy reporting process.

The objective of validation is to determine whether the SoS satisfies user needs and mission objectives in an operational environment. Validation is also measured by operational utility to end-users and by any comparative testing against legacy interfaces. The Integrated Capability Plan and procedures address maintainability, supportability, and operability. Operational personnel or an operational test agent focus on evaluating operational suitability using deliverable user manuals and user documentation. They are an integral part of conducting formal validation testing or acceptance testing of the integrated capability, in accordance with detailed procedures developed by the Cross-Program Test Lead. Therefore, the validation process assures that integrated capabilities satisfy the operational needs of the mission, that users can operate the SoS effectively, and that the capabilities are supportable and maintainable. Any anomalies found are documented and tracked to closure using the discrepancy process.

Fig. 3. Cross-program independent verification, validation, and test overarching process. (The figure shows the key process designed and implemented in this project to reduce integration risk of the SoS before deployment to operators. Inputs: SoS Engineering V Process, lessons learned, mission and system CONOPs, architecture views and artifacts, SoS Specification, Interface Requirement Specifications, SoS RVTM, and discrepancy reports and status measures. Test levels: system-level verification, SoS-level verification, and SoS-level validation, supported by the Integrated Test Facility Federation, test environment, and test tool suite. Outputs: Cross-Program Test Plan and Test Procedures, updated SoS RVTM, Capabilities Plan and Procedures, test reports, DR summary, lessons learned, RFCs, ECPs, and discrepancy reports. Metrics: capabilities verified, requirements tested/passed/failed, interfaces tested, discrepancy reports opened/closed, test durations, and services integrated.)

The Cross-Program Independent Verification, Validation, and Test Overarching Process, including its procedures, is illustrated in Fig. 3.

Some of the key inputs to this overarching process include the Capability-Based SoS Engineering Process, architectural products, the SoS Specification, and the Operational Mission and System Concept of Operations. The outputs from the system-level verification procedure and the SoS-level verification and validation procedures include the Cross-Program Test Plan and Procedures, the Integrated Capability Plan and Procedures, a SoS Requirements Verification Traceability Matrix (RVTM), test reports, discrepancy reports, etc. A key approach within the SoS-level verification and validation procedures is Capability-Based Testing. Capability-Based Testing is an evolutionary SoS test approach for verifying and validating the desired capabilities of the SoS beyond the developer's environment, in an integrated systems context, with a focus on performance and interoperability. This approach differs from traditional requirements-based testing, which is focused on detailed component system requirements. Capability-Based Testing uses the operational mission and system scenarios, behavior analysis (threads), and architecture products to drive the capability-based test scenarios. These scenarios are dynamic and more efficient than traditional requirements-based testing. The capability-based test scenarios coupled with the integration test environment form the test harness supporting verification and validation. This is important because the system scenarios and threads describe specific logical cross-program key interface sequences and behaviors, which demonstrate end-to-end functionality for providing a capability. Traditional requirements-based testing is not effective or suitable in a SoS context because the key timing and sequencing cannot be captured well in a static requirements list or document, but they are captured in the capability-based test scenarios as part of the Capability-Based Test Program. However, the key system performance requirements captured in the SoS Specification trace to the capability-based test scenarios and test cases for test coverage and traceability. The SoS Specification ultimately is verified and validated using the capability-based test scenarios and test cases, and analysis, via the Capability-Based Test Program. There is often a one-to-many relationship between a test scenario or test case and system capabilities and requirements.
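An RVTM-style coverage check of this one-to-many tracing might look like the sketch below; the scenario and requirement identifiers are invented for illustration, not drawn from the case study.

```python
# Illustrative RVTM-style coverage sketch: each capability-based test scenario traces
# to many SoS Specification requirements; report requirements still lacking coverage.
from typing import Dict, List, Set

scenario_to_requirements: Dict[str, List[str]] = {
    "CBT-01 end-to-end report generation": ["SoS-010", "SoS-011", "IRS-A-B-01"],
    "CBT-02 degraded-mode data exchange":  ["SoS-012", "IRS-B-C-02"],
}

sos_spec_requirements: Set[str] = {
    "SoS-010", "SoS-011", "SoS-012", "SoS-013", "IRS-A-B-01", "IRS-B-C-02",
}

covered = {rid for rids in scenario_to_requirements.values() for rid in rids}
uncovered = sorted(sos_spec_requirements - covered)
print("Requirements without a capability-based test scenario:", uncovered)   # ['SoS-013']
```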

This Capability-Based Testing approach enables the Cross-Program Test Lead to use and leverage the integrated architecture and architecture-based design artifacts. The sequence or interaction diagrams help describe the key sequences, interfaces, and events for the systems of concern, which feed the capability-based test scenarios. The sequence diagram also includes the trigger or stimulus that initiates the sequence, where the desired output can be measured. The output includes the Measures of Effectiveness and Measures of Performance analyses. This approach enables traceability between the capability-based test scenarios and the integrated architecture for configuration management. This way, when the architecture evolves, so will the capability-based test scenarios, in order to assure that both are synchronized appropriately. The value added by the Capability-Based Testing approach for this SoS is that it helps reduce integration risk by focusing on verifying integrated capabilities of the aggregate system and performance attributes (e.g., timeliness, accuracy, quantity, speed, capacity, etc.). The integrated capabilities are the key system requirements defined in the SoS Specification, and the performance attributes are the key system qualities or architectural constraints. Also, Capability-Based Testing verifies interoperability capabilities and the readiness of the SoS to be deployed to operators in an operational environment.

The major success criterion for this Cross-Program IV&V and Test Process is to identify any system of systems discrepancies or integration risks that would not have been identified at this phase in the absence of this process. Without it, such discrepancies or integration issues would have been detected and reported only after the SoS was deployed to the operational environment. That situation could cause significant cost and disruption to the mission, depending on the severity, because of the rework required to resolve the discrepancies or issues. In order to truly reduce integration risks early for this SoS, the identification and disposition of discrepancies and integration risks must be done via the Cross-Program IV&V and Test Process before deployment to operations.

Interoperability is one of the key SoS attributes and Measures of Effectiveness because it is essential for successful SoS integration. Interoperability in a SoS context is the ability of a component system to correctly exchange with other component systems, by means of mutually supportive actions, useful information and knowledge in support of end-to-end operational capabilities and the mission need. Interoperability is not just a technical measure; it is also a test of cross-program collaboration between the component systems.

There is a DoD capability maturity model for interoperability, which defines multiple levels of interoperability for information systems. This capability maturity model, called Levels of Information Systems Interoperability (LISI) [14], can be applied in a SoS context as an approach to help measure and assess interoperability capabilities between the component systems. This structured approach supports both top-down and bottom-up assessments of systems interoperability levels. The focus of LISI is on the overall system-to-system interactions and dynamics, and the various interchanges based on each system's capabilities. LISI is a powerful tool in support of the Cross-Program IV&V and Test Process.
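As a rough sketch of how a pairwise LISI-style assessment could be tabulated (a deliberately simplified reading of the model; the system names and assessed levels are assumptions), the level achievable between two systems is bounded by the lower of their individual maturity levels.

```python
# Simplified LISI-style sketch: the interoperability level achievable between a pair
# of systems is taken as the minimum of their individually assessed levels.
from itertools import combinations
from typing import Dict

LISI_LEVELS = {0: "Isolated", 1: "Connected", 2: "Functional", 3: "Domain", 4: "Enterprise"}

assessed_levels: Dict[str, int] = {"System A": 3, "System B": 2, "System C": 4}  # assumed assessments

def pairwise_level(a: str, b: str, levels: Dict[str, int]) -> int:
    return min(levels[a], levels[b])

for a, b in combinations(assessed_levels, 2):
    lvl = pairwise_level(a, b, assessed_levels)
    print(f"{a} <-> {b}: level {lvl} ({LISI_LEVELS[lvl]})")
```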

4. Case study

This section describes an actual case study in which Lockheed Martin was the large-scale integrator, termed by DoD the Lead Systems Integrator (LSI). Due to security and other concerns, aliases are used instead of the specific organization and program names. However, this is a real case study that was conducted in a DoD environment and not a hypothetical one. This section introduces the case study framework in which the material was organized. The case study also validates the effectiveness of the Cross-Program IV&V and Test Overarching Process that was designed and implemented specifically to identify and reduce integration risk for this SoS, as a key contribution of this project. This test process and its execution enabled interesting things to be learned about the system of systems.

The Government awarded separate acquisition and major development contracts for each development program to develop or acquire a component system (with multiple spirals for each increment). The Government also awarded one Lead Systems Integrator contract, under Program XYZ, for performing large-scale SoS integration and testing, to provide oversight and help the Government by identifying and mitigating integration risks during the SE life-cycle. The component systems of the SoS were being developed asynchronously by different developer contractors and were in different stages of the acquisition and systems engineering life-cycles. Most of the development programs were considered Major Defense Acquisition Programs (MDAPs), which require additional Government oversight because of their large acquisition cost values. These component systems were interdependent and required proper integration and test, with suitable processes, in order to provide integrated capabilities and support mission objectives. These were some of the primary reasons a Lead Systems Integrator or large-scale integration contract was awarded, including the contractor's (henceforth called the Integrator) past performance and proven ability to achieve SoS integration while reducing integration risk. The Integrator was unbiased and impartial with respect to the developer contractors and their parent companies. This arrangement was enforced through an Organizational Conflict of Interest and Non-Disclosure Agreement. This was done so that the Integrator could have visibility into the procurement of the development programs, adequately monitor and assess program development efforts, and examine data, processes, or methods, whether proprietary or not. This allowed the Integrator to remain independent while advising the Government when program or system discrepancies or risks that could negatively affect the overall SoS integration effort were discovered.

The joint Integrator and Government responsibilities (over a two-year period of performance) are the basis and focus of this case study, including the key findings and conclusions from the Cross-Program Test Lead, who designed and managed the implementation of the Cross-Program IV&V and Test Process. The Cross-Program Test Lead also included key findings and conclusions via observations of other team members involved with the Lead Systems Integrator effort on Program XYZ. The SoS effort for Program XYZ comprised several major functional areas, including a processing system, an analysis system, a reporting system, and an information technology infrastructure system.

This case study combined data collection methods and techniques such as: technical exchange meetings with Government and contractor personnel; participation in the Government Engineering Review Board and Configuration Control Board sessions; participation in developer contractor technical reviews (e.g., System Requirements Review, Preliminary Design Review, Critical Design Review, etc.) and readiness events (e.g., Test Readiness Review, Deployment Readiness Review, etc.); demonstrations of programs' implementations and executions; analysis of documentation and programs; and witnessing or orchestrating program and cross-program testing and analysis. The evidence was both qualitative and quantitative, including engineering processes and plans, procedures, instructions, records, models, reports, and tools.

Table 2
Case study framework (after Friedman and Sage [12])

Concept Domain (rows):
A. Mission Needs/Drivers, Overarching Capabilities
B. Requirements Definition and Management
C. Systems Architecting and Design
D. Systems and Interface Integration
E. Verification and Validation
F. Life Cycle Support
G. Risk Assessment and Management
H. System and Program Management

Responsibility Domain (columns):
1. SE Contractors Responsibility
2. Shared LSI and Government Responsibility
3. Government Responsibility

A case study framework was implemented to organize the essential data, key findings, and conclusions. It also provides common terms, definitions, and perspectives. This framework (Friedman and Sage [9]) of the key systems engineering concepts and responsibilities for case studies of systems engineering and management was tailored as appropriate for a SoS context. The case study framework was implemented in a matrix format to include some of the major phases in the Capabilities-Based System of Systems Engineering V Process. The key phases documented in this case study were: mission needs/drivers and overarching capabilities; requirements definition and management; systems architecting/integrated architecture; systems and interface integration; verification and validation (including testing and deployment); life-cycle support; risk assessment and management; and system and program management. The matrix in Table 2 (Friedman and Sage [9]) depicts the framework by identifying the engineering phase as one column for the Concept Domain, and the Responsibility Domain with SE contractor, shared LSI and Government, and Government responsibilities as the other columns. It also highlights cells within the matrix for the specific joint LSI and Government responsibilities and learning principles. These highlighted cells for the shared LSI and Government responsibilities were the focus of the case study, since the Enterprise Lead Systems Integrator was hired to take a holistic view of the SoS and help the Government identify and resolve risks in the systems engineering process life-cycle.

We now provide a summary description of the case study framework conclusions, key findings, and some key systems engineering lessons learned (about LSI issues) and learning principles.

– Mission Needs/Drivers, Overarching Capabilities had gaps and duplication. The DoD capabilities-based development documents had gaps and overlap for the major development programs comprising the SoS. This required the Lead Systems Integrator to perform additional engineering analysis to build an integrated Capability Development Document for the Enterprise, to adjudicate the capability gaps and overlap in the documents for the major development programs. This document set the foundation for the Lead Systems Integrator to perform SoS capabilities and requirements analyses to support the Requirements Definition and Management domain.

– Requirements Definition and Management should focus on SoS performance specifications and behaviors. The key system performance requirements and behaviors at the SoS level should be the focus of the Lead Systems Integrator in documenting the SoS Requirements Specification. This conclusion is based on the finding that the system performance requirements and behaviors specify the architectural constraints or system qualities that are essential for the SoS operating capabilities and optimization of the aggregate system. Also, a key finding was that the system requirements approach for this SoS must be based on the use of three interrelated models or major perspectives – top-down decomposition, horizontal threads-based analysis, and operational capability reengineering – as the vantage points for conducting effective system requirements analysis given this complex problem. This provided a technical vision of how the aggregate system would operate, what behaviors and performance characteristics are expected, and how the aggregate system would integrate into the operational environment. The key SoS performance requirements specified by the Lead Systems Integrator included properties such as timeliness, capacity, sizing, throughput, etc. These SoS performance requirements specified the extent to which a key function is executed. The key functions of the SoS are further decomposed and defined at the system level and documented in the System Requirements Documents maintained by the major development programs and their Program Managers.

The Lead Systems Integrator and Government concluded that it was effective for the Enterprise to manage the requirements and performance measures in a repository. This enabled enterprise stakeholders to develop the mindset that corporate knowledge (e.g., system requirements, performance measures, analysis, etc.) should be managed in an enterprise repository (knowledge management concept) environment so that it could be used by the Enterprise to support Government decision-making. Also, system requirements were mapped by the Lead Systems Integrator to architectural products in the repository (using database attributes and links) as an integrated approach, so that the architects and systems engineers could maintain traceability for how an architectural function related to the system requirements for synchronization. In many cases the requirements and architectural products are not well integrated and are maintained by different organizations, which results in two or more baselines and reduced value of the overall architecture. On Program XYZ, a concerted effort was made by the Lead Systems Integrator to synchronize requirements and architectural products for the Enterprise by using the Dynamic Object-Oriented Requirements System and Systems Architect COTS products.
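To make the repository-based traceability idea concrete, the following is a minimal sketch, not the program's actual tooling: the class names, attributes, and link structure are illustrative assumptions of how SoS requirements can be linked to architectural functions so that coverage gaps and orphaned architecture elements are easy to report.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str          # e.g., "SoS-PERF-001"
    text: str            # requirement statement
    category: str        # e.g., "timeliness", "capacity", "sizing"

@dataclass
class ArchFunction:
    func_id: str         # architectural function identifier
    name: str
    satisfies: list = field(default_factory=list)  # linked requirement IDs

def traceability_report(requirements, functions):
    """Return requirements with no architectural coverage and
    architectural functions that trace to no known requirement."""
    covered = {req_id for f in functions for req_id in f.satisfies}
    uncovered_reqs = [r.req_id for r in requirements if r.req_id not in covered]
    known_reqs = {r.req_id for r in requirements}
    unlinked_funcs = [f.func_id for f in functions
                      if not any(r in known_reqs for r in f.satisfies)]
    return uncovered_reqs, unlinked_funcs

if __name__ == "__main__":
    reqs = [Requirement("SoS-PERF-001", "End-to-end message latency under 5 s", "timeliness"),
            Requirement("SoS-PERF-002", "Sustain 200 messages/s across interface A-B", "capacity")]
    funcs = [ArchFunction("F-ROUTE", "Cross-program message routing", satisfies=["SoS-PERF-001"])]
    missing, orphans = traceability_report(reqs, funcs)
    print("Requirements without architecture coverage:", missing)
    print("Architecture functions without requirement links:", orphans)
```

A report of this kind is the sort of output that helps keep the requirements and architecture baselines from drifting apart when they are maintained by different organizations.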

– Systems Architecting and Design/Integrated Architecture should provide a holistic view of the SoS. The Lead Systems Integrator discovered that it was effective to take a holistic view of the SoS and establish an early architectural solution and framework, with a set of reference models derived from the Federal Enterprise Architecture Framework [8]. These reference models were simple representations of an aspect of the architecture. The Lead Systems Integrator concluded that using an integrated architecture team consisting of Government, Lead Systems Integrator, and development contractors to develop the architecture design was effective for both buy-in and validation. The architectural framework not only influenced the design of the SoS for the major development programs and their Program Managers, but it served as the construct for cross-program architecture integration and cross-program integration and test by the Cross-Program Test Lead.

– Systems and Interface Integration requires key cross-program interfaces to be managed by the Lead Systems Integrator or Systems Integrator. One of the key findings in this area for the Lead Systems Integrator was that it was necessary to build three primary products to enable systems and interface integration of the SoS. These three products were an Interface Management Plan (which defined the Enterprise process and policy for developing and deploying key cross-program interfaces and interface profiles, roles and responsibilities, etc.), an Interface Context Description Document (which provided a context diagram illustrating the cross-program interfaces graphically), and an Interface Status Report (which documented the Lead Systems Integrator's evaluation, status, and maturity of the key cross-program interfaces). The Lead Systems Integrator also concluded that the Interface Control Working Group was essential in helping to identify, document, and coordinate key cross-program interfaces with the key stakeholders in the Enterprise. This working group was an important tool used by the Lead Systems Integrator to identify and mitigate interface integration risk for the Government and the major development programs and their Program Managers. The technical reviews (e.g., System Requirements Review, Preliminary Design Review, Critical Design Review, etc.) recommended by the Lead Systems Integrator and supported by the major development programs provided an effective and suitable opportunity for the Lead Systems Integrator to implement good SE management controls for interface integration risk reduction, given that this is one of the most complex aspects of integration. The Lead Systems Integrator also found that service-oriented software adaptors and translators were useful in mitigating legacy interface integration risk while providing required cross-program functionality.
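The adaptor/translator idea can be sketched as follows. This is a hypothetical illustration: the legacy record layout, field names, and JSON schema are assumptions invented for the example, not the program's actual interface definitions. It shows the general pattern of wrapping a legacy fixed-format message so it can be consumed through a service-oriented cross-program interface.

```python
import json

def parse_legacy_track(record: str) -> dict:
    """Parse a hypothetical fixed-width legacy track record
    (field names and widths are illustrative only)."""
    return {
        "track_id": record[0:8].strip(),
        "latitude": float(record[8:17]),
        "longitude": float(record[17:27]),
        "timestamp": record[27:42].strip(),
    }

def to_cross_program_message(track: dict) -> str:
    """Translate the parsed legacy record into an assumed JSON schema
    exchanged across the service-oriented cross-program interface."""
    payload = {
        "messageType": "TrackReport",
        "schemaVersion": "1.0",
        "body": {
            "id": track["track_id"],
            "position": {"lat": track["latitude"], "lon": track["longitude"]},
            "reportedAt": track["timestamp"],
        },
    }
    return json.dumps(payload)

if __name__ == "__main__":
    legacy = "TRK00042" + " 38.89511" + " -77.03637" + " 20051002104424"
    print(to_cross_program_message(parse_legacy_track(legacy)))
```

The design point is that the legacy system remains untouched: the adaptor absorbs the format mismatch, which is what makes this a useful legacy-interface risk mitigation.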

– Verification and Validation requires a robust Capabilities-Based Testing and Analysis Program. The Cross-Program Test Lead was successful in establishing one baselined overarching test and verification/validation plan (Integrated Capabilities Verification/Validation Plan) for the Enterprise. This plan was a compliance document for the major development programs comprising the SoS and included the test strategy, test process, roles and responsibilities, and success criteria. The Cross-Program Test Lead discovered that the Enterprise Integrated Master Schedule was an important SoS engineering management tool for enterprise testing because it allowed the lead to prioritize and perform top-down and bottom-up test planning and test execution of key integrated capabilities based on mission priorities and needs. Also, by requiring the development Program Managers to establish 6 to 9 month integration and test cycles, integrated capabilities could be aligned for synchronized deployment. One of the key findings was that Early Interface Tests for high-risk cross-program interfaces should be conducted to reduce integration risk early in the Capabilities-Based Test Program so that issues can be identified and mitigated. The Cross-Program Test Lead should also act as a formal witness and monitor the major system-level factory acceptance testing to verify the readiness of a system to enter integration testing with the cross-program interfaces. Another key finding was the need to integrate multiple models (three in total) into an Integrated Capabilities-Based Model in MS Excel to address multiple perspectives, the key enterprise system performance questions, and the dimensions of the SoS problem space. These integrated models in Excel were essential in supporting the Cross-Program IV&V and Test Process for verifying SoS performance requirements.

Using the Integrated Capabilities-Based Model, the Cross-Program Test Lead was able to make predictions on SoS performance and to identify and mitigate integration risks, especially in the areas of sizing, timeliness, and capacity, before deployment to specific operational sites. Some of these integration risks would likely have resulted in critical discrepancies being identified and written up by the operational sites had it not been for the Integrated Capabilities-Based Model and the analysis conducted. Such critical discrepancies would have required the development programs to perform fixes and rework to resolve. This model was also useful to the Cross-Program Test Lead in providing objective data for measuring the interoperability level (level 3) of the SoS in terms of its ability to share and exchange useful data. One lesson learned in this area was that the Government should have established a more robust integration test facility federation early, as this would have made it easier for the Cross-Program Test Lead to orchestrate a cross-program test.
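As a rough illustration of the kind of timeliness and capacity roll-up such a capabilities-based model performs, the following sketch expresses a spreadsheet-style calculation in code. The mission-thread structure, component figures, and thresholds are invented for the example and are not Program XYZ data.

```python
# Minimal sketch of a capabilities-based performance roll-up:
# an end-to-end mission thread is modeled as a chain of systems,
# each contributing latency and limiting throughput.
# All numbers below are illustrative assumptions.

thread = [
    # (system, processing latency in seconds, max throughput in msgs/sec)
    ("Sensor Program A",   1.2, 400),
    ("Cross-Program Link", 0.8, 250),
    ("Fusion Program B",   2.1, 180),
    ("Dissemination C",    0.5, 300),
]

TIMELINESS_REQ_S = 5.0     # assumed SoS timeliness requirement
CAPACITY_REQ_MPS = 200     # assumed SoS capacity requirement

end_to_end_latency = sum(latency for _, latency, _ in thread)
thread_capacity = min(tput for _, _, tput in thread)   # bottleneck system limits the thread

print(f"Predicted end-to-end latency: {end_to_end_latency:.1f} s "
      f"(requirement {TIMELINESS_REQ_S} s) -> "
      f"{'PASS' if end_to_end_latency <= TIMELINESS_REQ_S else 'RISK'}")
print(f"Predicted thread capacity: {thread_capacity} msgs/s "
      f"(requirement {CAPACITY_REQ_MPS} msgs/s) -> "
      f"{'PASS' if thread_capacity >= CAPACITY_REQ_MPS else 'RISK'}")
```

Even this crude roll-up shows how a capacity bottleneck in one component system can be flagged as an SoS-level risk before cross-program testing or deployment.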

– Life-Cycle Support requires a total life-cycle viewpoint, including management and control processes, engineering implementation processes, and engineering support processes. The SoS Engineering Process was effective for the Lead Systems Integrator in taking a total life-cycle viewpoint of the SoS through its requirements and architecture definition, interface design, development and assembly, integration and test, and deployment to operations. The management and control processes and engineering support processes, including configuration management, quality assurance, decision analysis, risk assessment and management, quantitative management, analysis and modeling, etc., should support the engineering implementation processes for improved life-cycle support and risk reduction. These support processes, coupled with the engineering processes, enable the Lead Systems Integrator to look at the whole system, including its required life-cycle support, to understand how design decisions and system trades impact total life-cycle support. These processes are also important given that, historically, the most costly activities of life-cycle support are operations and maintenance, primarily due to design decisions and trades made earlier in the SE life-cycle.

– Risk Assessment and Management should be integrated within the Integration and Test Process to effect enterprise integration. One of the findings in this area for this SoS was the need for the Cross-Program Test Lead to integrate risk analysis and risk mitigation inherently within the Cross-Program IV&V and Test Process. The Early Interface Testing procedure, for example, was a critical risk reduction activity in identifying and resolving interface integration compatibility concerns before the key cross-program interfaces were fully developed. Also, this procedure was helpful in reducing integration risk before executing the Cross-Program Testing and SoS-level verification and validation procedures. Overall, the SoS Engineering V Process generally enabled schedule, cost, and technical risk reduction during the Requirements Definition and Management, Systems Architecting, and Interface Integration Concept Domains, as documented by those learning principles.

– System and Program Management should develop a Program Plan. The Lead Systems Integrator developed a Program Plan that documented the processes and controls for managing Program XYZ.

This Program Plan documented 16 key systems engineering processes that were enforced by the Program Manager across Program XYZ and monitored by Lockheed corporate oversight. One of the key findings was the use of the Integrated Master Schedule as a tool comprising the 4th-level Work Breakdown Structure tasks. This tool provided a management and reporting capability on task status, progress, and critical path analysis. It was also tightly coupled with the Earned Value Management System for financial cost control and reporting, value added, and overall program performance, which provided very effective output for both the Lead Systems Integrator and the Government in setting expectations. It provided the capability to report the status and health of Program XYZ to the Government. This was helpful in identifying and resolving issues and in adjudicating resources and priorities to satisfy program objectives and product specifications.
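The Earned Value Management coupling mentioned above rests on a small set of standard calculations. The sketch below applies those standard EVM formulas to invented sample figures for a few WBS tasks; the task names and values are illustrative, not Program XYZ data.

```python
# Standard Earned Value Management metrics, applied to invented
# sample figures for a handful of WBS tasks (values in $K).

tasks = [
    # (task, planned value PV, earned value EV, actual cost AC)
    ("Interface design",         120.0, 110.0, 130.0),
    ("Adapter development",      200.0, 180.0, 175.0),
    ("Early interface testing",   80.0,  80.0,  90.0),
]

pv = sum(t[1] for t in tasks)   # budgeted cost of work scheduled
ev = sum(t[2] for t in tasks)   # budgeted cost of work performed
ac = sum(t[3] for t in tasks)   # actual cost of work performed

cv, sv = ev - ac, ev - pv       # cost variance, schedule variance
cpi, spi = ev / ac, ev / pv     # cost and schedule performance indices

print(f"PV={pv:.0f}K  EV={ev:.0f}K  AC={ac:.0f}K")
print(f"Cost variance {cv:+.0f}K (CPI={cpi:.2f}), "
      f"schedule variance {sv:+.0f}K (SPI={spi:.2f})")
```

Indices below 1.0 in a roll-up like this are the kind of objective signal that lets the Lead Systems Integrator and Government adjudicate resources and priorities before problems surface at deployment.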

5.Summary

This paper has presented an evolutionary approach and process for integrating and testing a complex system of systems. The key finding of this research project was that the Cross-Program Independent Verification, Validation, and Test Overarching Process can be effective and useful in identifying and mitigating integration risk before deployment. Although SoS integration is rather complex, much value is added in the integration and test process. The implementation of this test process, as documented by the case study, enabled the Cross-Program Test Lead to identify and mitigate several critical SoS discrepancies, especially in performance (including sizing, timeliness, and capacity attributes). These results and findings validated the usefulness and effectiveness of both the integration and test approach and the process for this specific SoS.

The critical discrepancies discovered would not have been identified before deploying the SoS integrated capabilities had it not been for the effort of implementing the Cross-Program Independent Verification, Validation, and Test Process. This could have resulted in significant cost, schedule, and performance risks to the operational capabilities. This research project demonstrated the value of a Lead Systems Integrator in taking a holistic view of the SoS and applying a Capabilities-Based Systems Engineering Process to effectively identify and mitigate life-cycle risks in support of integration and interoperability. The Cross-Program Independent Verification, Validation, and Test Process enabled interesting things to be learned about this specific system of systems. The lessons learned from the integration and testing activities of this complex SoS, especially the application of the Capabilities-Based Test approach, are a contribution to systems engineering and management [20] on the evolution of system of systems engineering.

The key findings of this project can also be compared and contrasted with the nine common lessons learned identified by the Krygiel [13] case study. There were many similarities and commonalities, including establishing an agreed-to overarching test plan early and iterative integration, which was reflected in this project by Early Interface Testing, and the notion of a test process, such as Cross-Program IV&V and Test, to help establish who is in charge and firmly establish the role of the Lead Systems Integrator. Also, the lessons on defining a SoS architecture before integration and test, following an evolutionary acquisition approach, and planning resources effectively and efficiently were all key tenets of the approach in this project. One difference was that in this project the concept and technical merits of one integration facility were not implemented; rather, a lesson learned for the Government was to establish a federation of test facilities (given the distributed nature of the cross-program interfaces) to help orchestrate Cross-Program Testing.

An extension of this research could be to study the issue of connecting multiple systems of systems from different domains and coalition partners to establish a family-of-systems capability to support a joint mission. This will be an interesting issue for some time given the advent of a Director of National Intelligence, where there is an objective to establish an overarching joint and collaborative environment to enable strategic cross-domain interoperability and integration capabilities between the U.S. and coalition partners to support missions like Homeland Security. There is a major need to define and implement an appropriate Cross-Program Independent Verification, Validation, and Test Process to mitigate interoperability and integration risks for network-centric systems, which must interoperate effectively and efficiently to achieve mission success in future environments. The Capabilities-Based Test Approach introduced and described here could be effective and suitable, with proper tailoring, to support other network-centric systems of systems in verifying and validating integrated capabilities and meeting user needs.

References

[1] D.J. Bodeau, System-of-Systems Security Engineering, Proceedings IEEE Computer Security Applications Conference, December 1994, 228–235.
[2] Y. Correa and C. Keating, An Approach to Model Formulation for Systems of Systems, Proceedings IEEE Conference on Systems, Man, and Cybernetics, October 2003, 3553–3558.
[3] P.K. Davis, Analytical Architectures for Capability Based Planning, RAND Monograph, 2003.
[4] DoD Architectural Framework, available at http://www.defenselink.mil/nii/doc/DoDAF v1 Volume I.pdf.
[5] DoD Instruction CJCSI 3170.01D, Joint Capabilities Integration and Development System (JCIDS), available at http://dod5000.dau.mil/.
[6] DoD Directive 8500.1, Information Assurance, available at http://www.dtic.mil/whs/directives/corres/html/85001.htm.
[7] H. Eisner, A Systems Engineering Approach to Architecting a Unified System of Systems, Proceedings IEEE Conference on Systems, Man, and Cybernetics, October 1994, 204–208.
[8] Federal Enterprise Architecture, available at http://www.feapmo.gov/.
[9] G. Friedman and A. Sage, Case Studies of Systems Engineering and Management in Systems Acquisition, Systems Engineering 7(1) (2004), 84–97.
[10] Global Information Grid Architecture, available at http://kips.disa.mil/new/d81001p.pdf.
[11] J. Kasser, The Acquisition of a System of Systems is Just a Simple Multi-phased Parallel Processing Paradigm, Proceedings IEEE Conference on Engineering Management, August 2002, 510–514.
[12] C. Keating, R. Rogers, R. Unal, D. Dryer, A. Sousa-Poza, W. Peterson and G. Rabadi, System of Systems Engineering, Engineering Management Journal 15(2) (June 2003), 36–45.
[13] A.J. Krygiel, Behind the Wizard's Curtain, Command and Control Research Press, Washington DC, 1999, available at http://www.dodccrp.org/publications/pdf/Krygiel Wizards.pdf.
[14] Levels of Information Systems Interoperability (LISI), available at http://lisi.ncr.disa.mil/references.html.
[15] M.W. Maier, Architecting Principles for a System of Systems, Systems Engineering 1(4) (1998), 267–284.
[16] J. Morganwalp and A.P. Sage, A System of Systems Focused Enterprise Architecture Framework and an Associated Architecture Development Process, Information Knowledge & Systems Management 3 (2003), 1–19.
[17] Naval Studies Board (ed.), Naval Analytical Architectures: Improving Capability Based Planning, National Academy Press, 2005.
[18] A.P. Sage, Conflict and Risk Management in Complex System of Systems Issues, Proceedings IEEE Conference on Systems, Man, and Cybernetics, October 2003, 3296–3301.
[19] A.P. Sage and C. Cuppan, On the Systems Engineering and Management of Systems of Systems and Federations of Systems, Information Knowledge & Systems Management 2(4) (2001), 325–345.
[20] A.P. Sage and W.B. Rouse, eds, Handbook of Systems Engineering and Management, Wiley, New York, 1999.

Roland T. Brooks received his Bachelor of Science from Morgan State University in 1992 and his Master of Science from the University of Baltimore in 1997. Mr. Brooks received his Engineer of Information Technology degree from George Mason University in 2006. Mr. Brooks has over 14 years of systems engineering and integration experience, which includes designing, building, integrating, and testing complex systems and systems of systems for the DoD and the Intelligence Community. He is currently employed as a systems engineering manager by Lockheed Martin Corporation in Hanover, Maryland.

Andrew P. Sage received the BSEE degree from the Citadel, the SMEE degree from MIT, and the Ph.D. from Purdue, the latter in 1960. He received honorary Doctor of Engineering degrees from the University of Waterloo in 1987 and from Dalhousie University in 1997. He has been a faculty member at several universities, including holding a named professorship and being the first chair of the Systems Engineering Department at the University of Virginia. In 1984 he became First American Bank Professor of Information Technology and Engineering at George Mason University and the first Dean of the School of Information Technology and Engineering. In May 1996, he was elected Founding Dean Emeritus of the School and also was appointed a University Professor. He is an elected Fellow of the Institute of Electrical and Electronics Engineers, the American Association for the Advancement of Science, and the International Council on Systems Engineering. He is editor of the John Wiley textbook series on Systems Engineering and Management and the INCOSE Wiley journal Systems Engineering, and is coeditor of Information, Knowledge, and Systems Management. He edited the IEEE Transactions on Systems, Man, and Cybernetics from January 1972 through December 1998, and also served a two-year period as President of the IEEE SMC Society. In 1994 he received the Donald G. Fink Prize from the IEEE, and a Superior Public Service Award from the US Secretary of the Navy for his service on the CNA Corporation Board of Trustees. In 2000, he received the Simon Ramo Medal from the IEEE in recognition of his contributions to systems engineering and an IEEE Third Millennium Medal. In 2002, he received an Eta Kappa Nu Eminent Membership Award and the INCOSE Pioneer Award. He was elected to the National Academy of Engineering in 2004 for contributions to the theory and practice of systems engineering and systems management. His interests include systems engineering and management efforts in a variety of application areas, including systems integration and architecting, reengineering, engineering economic systems, and sustainable development.
