
OMP Quality Assurance Plan
May 15, 2006

OMPArchitectability Team
Version 1.1

Team Members:
Eunyoung Cho
Minho Jeung
Kyu Hou
Varun Dutt
Monica Page

REVISION LIST

Document Name: OMP Quality Assurance Plan

Document Number: OMP-SQAP-001

No | Revision | Date | Author | Comments
1 | 0.1 | May 2, 2006 | All team members | Created initial draft
2 | 0.2 | May 3, 2006 | Kyu Hou | Added tools for functional requirements
3 | 0.3 | May 3, 2006 | Varun Dutt | Added tools for quality attribute requirements
4 | 0.4 | May 4, 2006 | Minho Jeung | Added quality assurance strategy
5 | 0.5 | May 4, 2006 | Eunyoung Cho | Added testing approach
6 | 0.6 | May 4, 2006 | Monica Page | Added organization
7 | 1.0 | May 4, 2006 | All members | Final review
8 | 1.1 | May 15, 2006 | All members | Revised SQAP according to review comments


Table of Contents

1 INTRODUCTION
2 QUALITY ASSURANCE STRATEGY
3 REFERENCE DOCUMENTS
  Software Verification and Validation Plan (SVVP)
  Software Verification and Validation Report (SVVR)
  User Documentation
  Software Project Management Plan (SPMP)
  Software Configuration Management Plan (SCMP)
4 GOALS
  4.1 QA GOALS OF EACH PHASE
5 REVIEWS AND AUDITS
  5.1 WORK PRODUCT REVIEWS
  5.2 QUALITY ASSURANCE PROGRESS REVIEWS
6 TOOLS AND TECHNIQUES
  6.1 TOOLS AND TECHNIQUES FOR ASSURING QUALITY OF FUNCTIONAL REQUIREMENTS
  6.2 TOOLS AND TECHNIQUES FOR ASSURING THE QUALITY ATTRIBUTE REQUIREMENTS
7 TESTING STRATEGY
  7.1 UNIT TESTING
  7.2 INTEGRATION TESTING
  7.3 ACCEPTANCE TESTING
  7.4 REGRESSION TESTING
  7.5 CRITERIA OF COMPLETENESS OF TEST
8 SPECIAL REQUIREMENT (EQUIVALENCE CLASS)
9 ORGANIZATION
  9.1 AVAILABLE RESOURCES THAT THE TEAM INTENDS TO DEVOTE
  9.2 QUALITY ASSURANCE TEAM
  9.3 MANAGING THE QUALITY OF ARTIFACTS
  9.4 PROCESS FOR PRIORITIZING QUALITY ASSURANCE TECHNIQUES
  9.5 QA STRATEGY BREAKDOWN INTO TASKS
  9.6 QUALITY ASSURANCE PROCESS MEASURES
10 BIBLIOGRAPHY
11 GLOSSARY
  11.1 DEFINITION
  11.2 ACRONYMS
12 APPENDIX

Quality Assurance Plan 3

1 INTRODUCTION

Purpose: This document outlines the actions our team will take to make our object system, the "Overlay Multicast Protocol" (hereafter referred to as OMP), and other related artifacts conform to the stakeholders' requirements and the applicable quality standards within the specified project resources and constraints. The format of this document follows IEEE Std. 730.1-1989.

Scope: The primary audience of this document is the OMPArchitectability MSE/MSIT team members. Every member of our team is responsible for the actions planned in this document, such as developing the overlay multicast protocol, documenting results throughout the development of the project, reviewing project progress, and testing project quality, all controlled by this plan.

The following portions of the software lifecycle are covered by the SQAP: Requirements, Design, Implementation, Test, Verification and Acceptance.

The software items to be covered in each of the above-mentioned lifecycle phases are given below:

Software Lifecycle Phase | Software Item
Requirements | SRS, SOW
Design | SDD
Implementation | SJCS
Test | STP
Verification and Acceptance | SVVP

The Software Quality Assurance Plan covers these software items. In addition, the SQAP itself is also subject to this quality assurance plan.

Project Overview

Background and Context: The major stakeholder in this project is POSDATA Co., Ltd. (POSDATA), an IT service provider. With the advent of the ubiquitous era, POSDATA is going a step further than just providing IT services (which consist of system integration and outsourcing services) by actively providing strategic business solutions for the future. POSDATA has now extended its business to DVR (Digital Video Recorder) products, which allow users to monitor, store, and control video streams in real time from a remote location over a wide area network.

POSDATA wishes to enlarge the DVR system into the N-DVR (Next generation Digital Video Recorder) system. A major objective of the N-DVR system is that many users should be able to monitor traffic status via the N-DVR system at the same time. Currently, if many users attempt to watch the traffic status concurrently, the video is not displayed smoothly because the large volume of data can exceed the available Internet bandwidth. It is therefore necessary to reduce network load, particularly since POSDATA has many branches and factories across Korea.


The client will apply a new protocol to the N-DVR, where the N-DVR will be used to transfer video streams in an in-house broadcasting system or a factory monitoring system. Applying OMP (Overlay Multicast Protocol) to the N-DVR will provide added value to POSDATA as they continue to provide IT solutions and seek to improve their business operations.

Project Objectives
- Apply OMP to the N-DVR Server in order to provide efficient video streaming to clients
- Solve the network congestion problem that occurs when many clients attempt to view the stream at the same time via the N-DVR Server

Architectural Objectives
- Evaluate the architecture for OMP with N-DVR according to how well the quality attributes pertaining to the studio objectives mentioned above are addressed and how representative they are of the business drivers for POSDATA
- Develop and prioritize quality attribute scenarios
- Conduct an ATAM evaluation on the architecture for POSDATA

Technical Constraints
- Hardware constraints: use a Linux server and Windows (operating system) clients
- Development software: C++

Business Constraints
- The OMP will run in a Linux environment on the N-DVR server. The client OS is Windows 2000 or XP.

The SQAP will cover the project resources and constraints.

Requirements

Major functional requirements of the project:

- Group Configuration: A group consists of dynamic members (logging into and out of the network in real time) that dynamically share data, where each group has a unique group id.

- Member Configuration: Each member of a group (as described above in Group Configuration) can be registered or unregistered at any time.

- Multicast Routing: The data should be transmitted through an optimized path.

- Data Replication: Each parent node (which could be a router, for example) in a group should duplicate incoming data according to the number of child nodes it possesses.

Non-functional (quality attribute) requirements of the project:

- Performance: The client should be able to watch the video stream within 3 seconds of requesting the stream.

- Usability: The configuration of groups and group members should be user friendly. Moreover, the system should provide POSDATA with a workable user interface (UI) to manage the network.


- Security: Access by unregistered users should not be allowed.

- Availability: The system tries to reestablish the transmission of the video stream between the appropriate client nodes within 5 seconds and logs the transmission failure message.

- Portability: The system should be able to work on the diverse hardware and software platforms of connecting client nodes.

- Modifiability: The system should be able to accommodate enhanced security requirements that may come from the POSDATA client in the future.

Potential extensions of the project: N:N conference or network connection services may be provided over the network, extending the current 1:N conference or network connection services.

2 QUALITY ASSURANCE STRATEGY

To assure the quality of software deliverables in each software development phase, we will use the 'test factor/test phase matrix'. The matrix has two elements: the test factor and the test phase. The risks arising from software development, and the process for reducing them, are addressed by this strategy. A test factor is a risk or issue that needs to be tackled, and a test phase is the phase of software development in which the corresponding test is conducted. The matrix should be customized and developed for each project. Thus, we will adapt the strategy to our studio project through four steps.

In the first step, we will select the test factors and rank them. The selected test factors, such as reliability, maintainability, portability, etc., will be placed in the matrix according to their ranks.

The second step is identifying the phases of the development process. Each phase should be recorded in the matrix.

The third step is identifying the business risks of the software deliverables. The risks will be ranked into three levels: high, medium, and low.

The last step is deciding in which test phase each risk will be addressed. In this step, we will decide which risks will be placed in each development phase.

For example, the table given below shows a ranked list of test factors on the project and also lists the various lifecycle phases of the project. One risk has been highlighted, and a strategy to mitigate it is also marked. Whenever the team enters a phase, the corresponding risks associated with that phase are identified. The table below serves only as an example.

Test factor | Requirements | Design | Build | Dynamic test | Integrate | Maintain
Correctness | Risk: The SRS may not be correct as per the goals of the SQAP. Strategy: formal technical review of the SRS. | (other risks) | | | |
Performance | | | | | |
Availability | | | | | |
Continuity of Processing | | | | | |
Compliance | | | | | |
Ease of Use | | | | | |
Coupling | | | | | |
Ease of Operations | | | | | |
Access Control | | | | | |
File Integrity | | | | | |

Test factors/test phase matrix [Perry 2000]

The matrix forms a part of the quality assurance strategy and, as mentioned above, will be used in each of the project lifecycle phases to identify the risks associated with each phase with respect to the test factors. The risks are accompanied by their mitigation strategies, and should a risk materialize into a problem, the respective mitigation will be applied. For these reasons, the matrix is described here in its own section of the document rather than repeated throughout other sections.
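To make the matrix concrete, the following minimal sketch shows one way it could be kept in code rather than in a spreadsheet. All class and enum names are our own illustration, not part of [Perry 2000] or of any project tool; only the one highlighted cell from the example table is filled in.

    import java.util.EnumMap;
    import java.util.Map;

    // Illustrative sketch only: a programmatic form of the test
    // factor/test phase matrix shown above.
    public class TestFactorMatrix {
        enum Phase { REQUIREMENTS, DESIGN, BUILD, DYNAMIC_TEST, INTEGRATE, MAINTAIN }
        enum Factor { CORRECTNESS, PERFORMANCE, AVAILABILITY, EASE_OF_USE }
        enum RiskRank { HIGH, MEDIUM, LOW }

        record Cell(RiskRank rank, String risk, String mitigation) {}

        private final Map<Factor, Map<Phase, Cell>> matrix = new EnumMap<>(Factor.class);

        void put(Factor f, Phase p, Cell c) {
            matrix.computeIfAbsent(f, k -> new EnumMap<>(Phase.class)).put(p, c);
        }

        public static void main(String[] args) {
            TestFactorMatrix m = new TestFactorMatrix();
            // The highlighted example: a correctness risk in the Requirements
            // phase, mitigated by a formal technical review (FTR) of the SRS.
            m.put(Factor.CORRECTNESS, Phase.REQUIREMENTS,
                  new Cell(RiskRank.HIGH,
                           "SRS may not be correct as per the goals of the SQAP",
                           "Formal technical review of SRS"));
        }
    }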

3 REFERENCE DOCUMENTS

Purpose: This section identifies the documents and work products that will govern our main project activities (Requirements, Design, Implementation, Test, Verification and Acceptance of the software) and lists which documents are to be reviewed or audited for adequacy and completeness. For each document, the review or audit to be conducted and the criteria used to judge adequacy are specified.

Minimum documentation requirements

Software Requirements Document (SRS) (See the review and audit section for review process)
A software specification review is used to check the adequacy and completeness of this document. The Software Requirements Document defines all the functional requirements, quality attribute requirements, and constraints of the OMP project.

Software Architecture and Design (ADD) or Software Design Document (SDD) (See the review and audit section for review process)
A software architecture and design review and a detailed design review are used to check the adequacy and completeness of this document. It describes the quality attributes of the project and the architectural decisions the team took to meet those quality attributes.

Software Verification and Validation Plan (SVVP) (See the review and audit section for review process)

A software verification and validation plan review is used to check the adequacy and completeness of this document. Although the document does not yet exist, it will provide the steps for verifying and validating the created work products.

Software Verification and Validation Report (SVVR) (See the review and audit section for review process)

This document, which does not yet exist, should include the following information pertaining to the task results of the SVVP:
- A summary of all life cycle V&V tasks and the results of these activities
- A recommendation as to whether the software is, or is not, ready for operational use

User Documentation

This is to be included in the Software Project Management document. The team has not yet produced this document. It provides the end user with all the information needed for successful execution and operation of the software.

Software Project Management Plan (SPMP) (See the review and audit section for review process)

The SPMP should identify the following items, which should be reviewed and assessed by all team members. The items and their corresponding checks include:

Items to check:
- Full description of the software development activity, as defined in the SPMP
- Software development and management organizations' responsibilities, as defined in the SPMP
- Process for managing the software development, as defined in the SPMP
- Technical methods, tools, and techniques to be used in support of the software development, as defined in the SPMP
- Assignment of responsibilities for each activity, as defined in the SPMP
- Schedule and interrelationships among activities, as defined in the SPMP
- Process improvement activities, as defined in the SPMP
- Goals deployment activities, as defined in the SPMP
- A list of deliverables, as defined in the SPMP
- Strategic quality planning efforts triggered by reviews, as defined in the SPMP

Figure 1: SPMP Review Checklist

Software Configuration Management Plan (SCMP) (See the review and audit section for review process)

This document should describe the methods to be used for:
- Maintaining information on all the changes made to the software

Other Miscellaneous Documents

Statement of Work
This document defines the work as negotiated with the client.

IEEE Std. 730-2002
IEEE Standard for Software Quality Assurance Plans. This standard defines the requirements for preparing the SQAP document.


4 GOALS

4.1 QA GOALS OF EACH PHASE

Phase | Goals
Requirement gathering | The SRS should have no more than one defect per page as per the client's review of the SRS.
Architecture | The ADD should have no more than two defects per architectural representation during its formal technical review (FTR).
Development | Each application program should have no more than 10 defects per 1 KLOC found in FTR.
Testing | All tested work products should be checked so as to find at least one defect per page, or 10 defects per 1 KLOC of code, in FTR.
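As a worked example of the development-phase goal, the short sketch below checks a defect count against the 10 defects per KLOC threshold; the input numbers are invented for illustration.

    // Minimal sketch: checking the development-phase goal (no more than
    // 10 defects per KLOC found in FTR). Numbers are illustrative.
    public class PhaseGoalCheck {
        static boolean meetsCodeGoal(int defectsFound, int linesOfCode) {
            double defectsPerKloc = defectsFound / (linesOfCode / 1000.0);
            return defectsPerKloc <= 10.0;
        }

        public static void main(String[] args) {
            // e.g., 18 defects in a 2,500-line program -> 7.2 defects/KLOC: goal met
            System.out.println(meetsCodeGoal(18, 2500));  // prints true
        }
    }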

5 REVIEWS AND AUDITS

5.1 WORK PRODUCT REVIEWS

The general strategy for the reviews is given below. The checklist for reviews (see Appendix) is the same as the one given in section 4 of the final project quality assurance plan assignment.

Formal Reviews:

1. One week prior to the release of a document to the client, the SQA will review the document list generated by the Software Product Engineers (team members on the project team).

2. The SQA will ensure that the necessary revisions to the document have been made and that the document will be released by the stated date. Any shortcomings will be reported to software project management.

Informal Reviews:

A. Design Walk-throughs
The SQA will convene design walk-throughs to encourage peer and management review of the design. Software project management will ensure that all reviews are conducted in a verifiable way and that the results are recorded for easy reference. The SQA will ensure that all action items are addressed.

B. Code Walk-throughs
The SQA will convene code walk-throughs to ensure that a peer review is conducted on the underlying code. Software project management will ensure that the process is verifiable, whereas the SQA will ensure that all action items are addressed.

C. Baseline Quality Reviews
The SQA will review any document or code that is baselined, as identified by the revision number of the work product. This ensures that:

1. Modules and code are tested and inspected before release
2. Changes to the software module design document have been recorded and made
3. Validation testing has been performed
4. The functionality has been documented
5. The design documentation conforms to the standards for the document as defined in the SPMP
6. The tools and techniques to verify and validate the subsystem components are in place

Work Product: Requirements (Software Requirements Specification)
When Reviewed by Quality Assurance (Status or Criteria): After a new release or modification
How Reviewed by Quality Assurance (Standards or Method): The Requirements Specification document is reviewed and approved by the assigned reviewer(s). If supplied by the customer, the Requirements Specification document is also reviewed by the designated reviewer(s), and any issues or gaps between the requirements stipulated in the contract and those covered in the document are resolved. The reviewed document is presented to the customer for acceptance, as stipulated in the contract. The Requirements Specification document forms the baseline for the subsequent design and construction phases. Changes, if any, to the Requirements Specification document after its release are studied and their impact evaluated, documented, reviewed, and approved before they are agreed upon and incorporated.

Work Product: Design and Construction (SDD)
When Reviewed: After a new release or modification
How Reviewed: The design phase is carried out using an appropriate system design methodology, standards, and guidelines, taking into account design experience from past projects. The design output is documented in a design document and is reviewed by the reviewer to ensure that:
- The requirements, including the statutory and regulatory requirements stated in the Requirements Specification document, are satisfied
- The acceptance criteria are met
- Appropriate information for service provision (in the form of user manuals and operating manuals, as appropriate) is provided
Acceptance of the design document is obtained from the customer, if required by the contract. The design document forms the baseline for the construction phase. Changes, if any, to the design document after its release are studied and their impact evaluated, documented, reviewed, and approved before they are agreed upon and incorporated.

Work Product: Construction (Code)
When Reviewed: After a new release or modification
How Reviewed: The project team constructs the software product to be delivered to meet the design specifications, using:
- Suitable techniques, methodology, standards, and guidelines
- Reusable software components, generative tools, etc., as appropriate
- Appropriate validation and verification techniques as identified in the Project Plan
Changes, if any, to the software programs after release are studied and their impact evaluated, documented, reviewed, and approved before they are agreed upon and incorporated.

Work Product: Testing and Inspection (Code and other work products such as test plans, SVVP, SVVR, SCMP, and SPMP)
When Reviewed: After a new release or modification
How Reviewed: Before delivery of the product, the PL ensures that all tests, reviews, approvals, and acceptances stipulated in the Project Plan have been completed and documented. No product is delivered without these verifications.

Work Product: Acceptance (Final software deliverables: code, user manual, SRS, SDD, etc., as per the SOW)
When Reviewed: As per the SOW
How Reviewed: The customer generally reviews and tests the final product. The customer may also review or test intermediate deliveries as stipulated in the contract. The project team assists the customer in planning and conducting the acceptance test, as stipulated in the contract. The customer may prepare the Acceptance Test Plan covering schedules, evaluation procedures, test environment, and resources required, and then conduct acceptance tests. Any problems and defects reported during the acceptance testing phase are analyzed and rectified by the project team as stipulated in the contract.

5.2 QUALITY ASSURANCE PROGRESS REVIEWS

In order to remove defects from the work products early and efficiently, and to develop a better understanding of the causes of defects so that defects might be prevented, a methodical examination of software work products is conducted within the following framework:

1. Reviews of project plans and all deliverables to the customer are carried out as stated in the quality plan of the project. A project may identify additional work products for review.
2. Specialists and/or cross-functional teams, as appropriate, carry out reviews of software work products to ensure proper depth and coverage of the review.
3. Reviews emphasize evaluating the ability of the intended product to meet customer requirements. The reviewer also checks whether regulatory, statutory, and unstated requirements, if any, have been addressed.
4. Personnel independent of the activity being performed carry out the reviews.
5. Reviews focus on the work product being reviewed and not on the developer. The result of the review in no way affects the performance evaluation of the developer.
6. The defects identified in the reviews are tracked to closure. If a work product must be released without tracking the defects to closure, a risk analysis is carried out to assess the risk of proceeding further.

6 TOOLS AND TECHNIQUES

The OMP project uses the following strategy for tool selection:

1. The tool selection focuses mainly on the core functionality the project needs.
2. The appropriateness of the tool is mapped to the lifecycle phase in which the tool will be used.
3. The tool is matched to the expertise of the testing team.
4. The tool selected must be not only affordable but also appropriate.


6.1 TOOLS AND TECHNIQUES FOR ASSURING QUALITY OF FUNCTIONAL REQUIREMENTS

In order to ensure the quality of the functional requirements, we are going to apply the techniques described below.

1. Peer review: The requirement statements should be free of fuzziness. In other words, the statements should be written precisely enough to be measurable. Unambiguity is a major property that each requirement should have. For example, the requirement statement "the statistic data should be shown fast enough" does not express how fast that is. By conducting peer reviews, such expressions are found and changed.

2. Customer review: After the peer review, the customer reviews the requirements documentation to check whether any incomplete or misleading statements exist. Once the customer review has been done, the customer signs off on the documentation.

3. Traceability checking: Once a requirement is gathered, it should remain traceable in order to make sure that every change to the requirement is recorded. To check traceability, a traceability matrix should be maintained (a sketch of one matrix row follows this list).

4. Establish a Change Control Board (CCB): Each change request will be audited by the CCB members, so that requirement changes are made based on CCB decisions. The CCB enables our team to decide whether a requirement change or a new requirement is acceptable.

5. Regression testing: The objective of regression testing is to assure that all aspects of the application system still work well after changes. A detailed explanation is given in the testing strategy section.
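As a rough illustration of what one row of the traceability matrix could hold (the actual matrix will live in Excel, as noted in the tools list below), the following sketch uses hypothetical field names, requirement ids, and artifact references of our own invention:

    import java.util.List;

    // Illustrative sketch only: one row of the requirements traceability
    // matrix. All ids and references below are invented examples.
    public class Traceability {
        record TraceRow(String reqId, String source, List<String> designRefs,
                        List<String> testCaseIds, String changeHistory) {}

        public static void main(String[] args) {
            TraceRow row = new TraceRow(
                "FR-03",                            // e.g., Multicast Routing
                "SRS v1.1, section 3.2",            // where the requirement is stated
                List.of("SDD 4.1"),                 // where it is designed
                List.of("STP-TC-12", "STP-TC-13"),  // which tests verify it
                "Changed per CCB decision, May 2006");
            System.out.println(row);
        }
    }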

The tools for ensuring the quality of the functional requirements are described below.

1. Excel: Our team will use Excel as a requirements management tool. We plan to keep the requirements in a list and manage it there. Traceability is also managed with Excel.

2. Bugzilla: Our team plans to use Bugzilla, a bug tracking tool. We will use it to find the functional requirements associated with many bugs. By prioritizing the requirements based on their bugs, our team can detect risks and manage the requirements.

According to [Perry 2000], the usage of each testing tool mentioned above is as follows.

Testing tool | Testing use
Acceptance test criteria | Provides the standards that must be achieved for the system to be acceptable to the user.
System logs | Provides an audit trail of monitored events occurring in the environment area controlled by system software.
Instrumentation | Measures the functioning of a system structure by using counters and other monitoring instruments.
Confirmation/examination | Verifies that a condition has or has not occurred.


Based on our functional requirements, our team also chose specific testing tools for each functional requirement.

Functional requirement | Testing tools
Group configuration | Acceptance test criteria; Confirmation/examination
Member configuration | Acceptance test criteria; System logs
Multicast routing | Acceptance test criteria; Instrumentation
Data replication | System logs; Confirmation/examination

The rationale for choosing these tools is described below.

Testing tool | Rationale
Acceptance test criteria | Acceptance test criteria set the boundary where the defined requirements meet the limitations of our system. By applying the acceptance test criteria, each requirement can be checked based on the result of system execution. The first three functional requirements in the previous table include the boundary acceptance criteria that the customer wishes to have, so by using acceptance test criteria our team can evaluate those requirements.
System logs | The entire interaction between the OMP server and clients should be logged in order to collect data about joins and disjoins, to analyze whether a join request is actually carried out right after the parent node disjoins, and to measure how much time the joining process takes.
Instrumentation | In order to make sure that OMP finds the fastest path, network monitoring tools such as NS-2 will be used.
Confirmation/examination | As many duplicates should be made as there are child nodes of a parent node. Confirmation and examination verify whether the duplication condition has occurred.

6.2 TOOLS AND TECHNIQUES FOR ASSURING THE QUALITY ATTRIBUTE REQUIREMENTS

For assuring the quality attribute requirements, we are going to proceed in two stages. In the design phase, we develop quality attribute scenarios and review those scenarios among team members. After implementation, we will use specific tools to measure whether or not our system meets the quality attributes.

Quality Attribute / Scenario | Tool/Technique Used | Rationale for using the tool/technique

Performance
Scenario: A video stream (internal) event is sent from the N-DVR server to a number of clients during normal conditions. The number of clients receiving the video stream is a predefined number N (which could be 100 or more) from the N-DVR server using the maximum available bandwidth of 11 Mbps.
Tool: Multicast Beacon
Rationale: As per the real-time transport protocol standard (RFC 3550), the Multicast Beacon can be used to measure multicast protocol performance. The tool sends small data packets to a multicast session and also receives packets from a particular session, which enables it to measure network performance. The tool is distributed in nature and allows networked clients to start and stop at any time without affecting the other clients the tool may be serving. The communication between the client and the server is based on UDP. The tool also helps a developer gather data for various parameters, such as delivery ratio versus the number of connections and disconnections of a node.

Performance
Scenario: A video stream (internal) event is sent from the N-DVR server to the client during normal conditions. The client receives the video stream within 5 seconds of its dispatch from the N-DVR server using the maximum available bandwidth of 11 Mbps.
Tool: Multicast Beacon
Rationale: Same as above.

Availability
Scenario: An unanticipated transmission-failure message is received at the N-DVR server (informing it that link(s) may be broken or that video stream data could not be sent between client nodes) during normal operation. The N-DVR server tries to reestablish the transmission of the video stream between the appropriate client nodes within 5 seconds, logs the transmission-failure message with performance information, and continues in normal mode.
Tool/Technique: Fault injector; analysis of log data
Rationale: In order to make sure that our system delivers the video stream without disconnection, we have to demonstrate the result. Therefore our team will build a fault injector that kills OMP processes periodically. By analyzing recovery-time data from the logs, we can make sure that our system meets its availability requirement.
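The fault injector itself has not been built yet. The sketch below shows one plausible shape for it, assuming the OMP server runs as a Linux process named "omp" and that a 60-second kill period is acceptable; both assumptions are ours, not the plan's.

    import java.time.Instant;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Sketch of the planned fault injector: periodically kill the OMP
    // process on the Linux test host and log the kill time, so that
    // recovery time can later be read off the server logs.
    // The process name "omp" and the 60-second period are assumptions.
    public class FaultInjector {
        public static void main(String[] args) {
            ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
            timer.scheduleAtFixedRate(() -> {
                try {
                    // pkill sends SIGTERM to every process named "omp"
                    new ProcessBuilder("pkill", "omp").start().waitFor();
                    System.out.println("killed omp at " + Instant.now());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }, 0, 60, TimeUnit.SECONDS);
        }
    }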

7 TESTING STRATEGY

Testing is a well-planned process carried out by a testing team in which a software unit, several integrated software units, or a package is examined by running the programs. All the associated tests are performed according to defined test procedures, or with tools, using test cases.

A Software Test Plan (STP) will be written to satisfy the requirements described in the SRS. Testing executes code directly on test data in a controlled environment.

The goals of testing OMP are as follows:

1) To detect failures of OMP
2) To measure the quality of OMP
3) To identify inconsistencies between the SRS and the OMP code
4) To learn about the OMP program by investigating its behavior under various conditions, and to provide feedback to the rest of the team

Commonly used testing techniques are structural (white box) testing, functional (black box) testing, and equivalence partitioning. In addition, mathematical/formal verification and checks for paradigm/language hazards are also reliable and logical methods. In the OMP project, the following techniques will be used.

OMP will be analyzed statically. BLAST, Insure++, and CodeWizard are candidate tools. If Purify or PREfast are available at our client, these can also be applied. The OMP test strategy is to spend the minimum effort with a minimal set of testing tools, because the whole project must be completed in two semesters. Therefore, three phases will be included: unit testing, integration testing, and acceptance testing.

Testing Techniques

White Box (Structural) Testing
- Testing type: Unit testing
- Characteristics: Look at the code (white box) and try to systematically cause it to fail; function coverage; statement (line) coverage
- Rationale: The key portion of OMP is the group management algorithm among nodes. Therefore, testing the join/leave algorithms and the control data transmission between nodes is well suited to white box testing.
- Advantages in OMP: Tool support for measuring coverage; helps to evaluate the test suite; finds untested code; tests the program one part at a time; considers code-related boundary conditions
- Disadvantages in OMP: Uses large resources; cannot test availability (response time), reliability, or load durability

Black Box (Functional) Testing
- Testing type: Integration testing; acceptance testing
- Characteristics: Verifies each piece of functionality of the system; black box (not at code level); more common in practice than white box
- Rationale: For the verification of group co-work and for validation prior to delivery to the customer.
- Advantages in OMP: Finds bugs that white box testing doesn't; approaches the system from a user's view; the code has already been checked by the programmer; covers issues such as timing, unanticipated errors, configuration issues, performance, and hardware failures
- Disadvantages in OMP: No insight into the code structure; relies on good testers' guesses

7.1 UNIT TESTING

The target of a unit test is a small piece of source code, typically one function at a time, often specified by the developer. Each logic path in the component specification and every line of code is tested with white box testing. Unit testing is also useful for detecting bugs early and improving test coverage measures such as function, statement, branch, and condition coverage. The validation of internal interface/API designs is also executed. Unit testing allows bugs to be detected before full implementation by using test suites, independently of client/server code.

In the object-oriented OMP project, all code units must be tested according to pre-defined testing scripts, including the inputs and the expected correct outputs of the unit, to verify the unit against the software architecture and design.

The actual results will be recorded and compared against the expected results. The unit tests are done as white box testing by the developers. When all the units have been tested under code coverage, the team will initiate inspections of all the test results.

In unit testing, four major functionalities are tested: group management, member management, multicast routing, and data replication. Test procedures, including test cases, will be generated next semester with customer cooperation. This testing will be done by the developers in three phases from June 12 to July 10. Approval of the test design is given by the Development Manager.
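A minimal sketch of such a pre-defined unit test is shown below for the data replication functionality. ReplicationUnit and its replicate() method are hypothetical stand-ins of our own; the real interfaces will come from the SDD, and the actual test cases will be written next semester with the customer.

    import static org.junit.Assert.assertEquals;
    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Test;

    // Minimal sketch of a pre-defined unit test with an input and an
    // expected correct output. ReplicationUnit is a hypothetical stand-in
    // for the data replication component.
    public class DataReplicationTest {

        // Hypothetical stub so the sketch is self-contained.
        static class ReplicationUnit {
            private final int children;
            ReplicationUnit(int children) { this.children = children; }
            List<byte[]> replicate(byte[] packet) {
                List<byte[]> copies = new ArrayList<>();
                for (int i = 0; i < children; i++) copies.add(packet.clone());
                return copies;
            }
        }

        @Test
        public void parentDuplicatesPacketOncePerChild() {
            ReplicationUnit parent = new ReplicationUnit(3);  // parent with 3 child nodes
            byte[] packet = {0x01, 0x02};

            // Expected output: one copy of the incoming packet per child node.
            assertEquals(3, parent.replicate(packet).size());
        }
    }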


7.2 INTEGRATION TESTING

Integration testing executes several modules together and requires stand-ins for any untested modules. However, "big bang" integration, going directly from unit tests to whole-program tests, should be avoided: it is likely to surface many big issues at once, and in such a test it is hard to identify which component causes each issue. Testing the interactions between modules ultimately leads to an end-to-end system test. Each test is written to verify one or more requirements specified in the SRS. The scenarios or use cases specified in the requirements are used in this test. Above all, stress and volume testing for a large number of users is executed.

The tasks of OMP integration testing are tailored from the following steps.

Step | Tasks
Integration Test Plan | Finalize test types; finalize test schedules; organize test team; establish test environment; install test tools
Integration Test Cases | Design/script performance tests; design/script security tests (*); design/script volume tests; design/script stress tests; design/script compatibility tests; design/script conversion tests; design/script usability tests; design/script documentation tests; design/script backup tests; design/script recovery tests; design/script installation tests; design/script other types of tests
Review/Approve Tests | Schedule/conduct review; obtain approvals
Execute Tests | Regression test system fixes; execute new system tests; document system defects

(*) Security tests will be considered after the addition of the security functionality.

Team's approval: The approval procedure is critical in a testing project. It requires agreement among the project members. The owner of each approval is defined by the Test Manager. As an example, the test plan, as a test deliverable, requires the approval of the Project Manager, Development Manager, and Sponsor.
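For the stress and volume tests mentioned above, a test driver could fire a large number of concurrent client joins at once. The sketch below shows the shape of such a driver; the Subscriber client call is a hypothetical placeholder for OMP client code and is therefore left as a comment.

    import java.util.concurrent.CountDownLatch;

    // Illustrative volume-test sketch: N concurrent subscribers join the
    // stream at the same instant. The client count and the commented-out
    // Subscriber call are assumptions, not project code.
    public class VolumeTest {
        public static void main(String[] args) throws InterruptedException {
            final int clients = 100;
            CountDownLatch start = new CountDownLatch(1);
            CountDownLatch done = new CountDownLatch(clients);

            for (int i = 0; i < clients; i++) {
                new Thread(() -> {
                    try {
                        start.await();  // hold every thread until the latch opens
                        // new Subscriber("server:5000").join();  // hypothetical OMP call
                    } catch (InterruptedException ignored) {
                    } finally {
                        done.countDown();
                    }
                }).start();
            }
            start.countDown();  // fire all joins at once
            done.await();
            System.out.println(clients + " concurrent joins completed");
        }
    }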

7.3 ACCEPTANCE TESTING

Acceptance testing consists of functional tests that the customer uses to evaluate the quality of the system. It is the test through which the customer accepts the delivered result. The test script is smaller, but the important functionality is covered in this testing. It tests the whole system and can use the scenarios or use cases specified in the requirements with live data. This is the 'validity' test of the system. The tasks of OMP acceptance testing are tailored from the following steps.

Step | Tasks
Complete Acceptance Test Planning | Finalize acceptance test types; finalize acceptance test schedules; organize acceptance test team; establish acceptance test environment; install acceptance test tools
Complete Acceptance Test Cases | Subset system-level test cases; design/script additional acceptance tests
Review/Approve Acceptance Test Plan | Schedule/conduct review; obtain approvals
Execute Acceptance Tests | Regression test new acceptance fixes; execute new acceptance tests; document acceptance test details

7.4 REGRESSION TESTING

The purpose of regression testing is to catch any new bugs introduced by modification. Therefore the test suites should be run every time the system changes. Regression testing is inevitable when new functionality is added. In the OMP project, the security function will be added in the near future, per the customer's milestones. In that case, the tests that revealed defects are run together with all of the old functionality tests to check for side effects. It is common for bug fixes to introduce new issues, so several test-fix cycles should be planned. This includes rerunning previously conducted tests to ensure that the unchanged system segments still function properly, and reviewing previously prepared manual procedures to ensure that they remain correct after changes have been made to OMP.

Team's approval: After a minimum of two test-fix cycles for the high risks that new changes may affect unchanged areas of OMP, the old and new functionalities are approved by the test team.
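Operationally, the regression suite can simply be an aggregation that reruns the old functional tests alongside the tests for the newly added function. A JUnit 4 sketch is shown below; DataReplicationTest is the unit-test sketch from section 7.1, and the commented-out class names are hypothetical.

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Sketch: the regression suite reruns the old functional tests
    // together with the tests for the new change. Class names are
    // illustrative only.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        DataReplicationTest.class    // old functionality (sketch from section 7.1)
        // GroupConfigurationTest.class,  // further old-functionality tests
        // SecurityFunctionTest.class     // tests for the newly added function
    })
    public class RegressionSuite {}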

7.5 CRITERIA OF COMPLETENESS OF TEST

In each development phase, tests will be planned, and their completion will be indicated by certain criteria. The criteria are based on the test strategy and, for some test methods, also on test coverage. The factors that govern when testing is complete are as follows:

Unit testing: (1) When the entirety of the test suite has been run, and issues have been fixed and/or logged to be fixed at a specified time in the future (i.e., during another testing round or using another method), testing can be deemed complete. (2) When at least 90% of the code (with all critical sections included within that percentage) has been tested, and all major and minor bugs found have been logged and/or fixed, testing can be deemed complete; this applies to unit tests and regression tests.

Regression testing: When all code related to the modified code has been covered, and all issues/defects have been logged and corrected, regression testing is complete.

Integration testing: 100% of all coupled modules must be tested in order for integration testing to be deemed complete.

Acceptance testing: Completed when the customer is satisfied with the product (signs off on the deliverable).

8 SPECIAL REQUIREMENT (EQUIVALENCE CLASS)

As part of the Mini Project, the team assessed NEMO, an example of an Overlay Multicast Protocol (OMP), using Daikon; NEMO could be used to provide an OMP solution for the POSDATA client.


Consider a piece of software code taken from NEMO (which provides an existing, research-based example of an Overlay Multicast Protocol). The piece of code is NEMO's main method (i.e., the Main Method class). When we run NEMO, the main method could at runtime:

1. Create a bootstrap (i.e., a single networking node or member)
2. Create a number of publishers (i.e., a number of bootstrap nodes that cater to other nodes)
3. Create a number of subscribers (i.e., a number of client nodes that connect to the existing bootstrap nodes or the publishers)

The main method branches to one or more of the above three steps based upon the parameters passed to it from the command line when it is executed. A switch statement in the code determines which of the three cases (where more than one case can be executed in a single run) is taken when the code runs.

The code is given below for convenience. The team was interested in finding the equivalence classes for forming test cases to test the three cases of bootstrap node, publisher nodes, and subscriber nodes, and, beyond that, in finding a representative sample from each equivalence class. The code is as follows:

public static void main(String[] args) {
    try {
        SocketAddress[] bootstrap = new SocketAddress[0];
        int publishRate = -1;
        IPacketSocketFactory socketFactory = null;
        IReefService service = null;
        IReefConfiguration config = new NemoConfiguration();
        MulticastPublisher publisher = null;

        // Fall-through switch: each extra command-line argument
        // enables one more configuration step.
        switch (args.length) {
            case 3:
                publishRate = Integer.parseInt(args[2]);
                // falls through
            case 2:
                bootstrap = new SocketAddress[] {
                    AgentId.parseAgentId(args[1]).getSocketAddress()
                };
                // falls through
            case 1:
                int port = Integer.parseInt(args[0]);
                InetSocketAddress addr = null;
                if (port > 0) {
                    addr = new InetSocketAddress(InetAddress.getLocalHost(), port);
                }
                socketFactory = new SharedPacketSocket(new PacketSocket(addr));
                break;
            default:
                usage();
                System.exit(-1);
        }
        // ... (remainder of the method elided in this excerpt)
    } catch (Exception e) {
        e.printStackTrace();  // the catch block is not shown in the excerpt
    }
}

The code can be explained as follows:

1. The input can contain at most 3 values.
2. The three values are:
   a. Subscriber's IP address (like 128.145.1.2)
   b. Publisher's/bootstrap's IP address (like 128.145.1.2)
   c. Publisher's/bootstrap's port number (like 80)

If all three inputs (a, b, c) are passed, the code goes through cases 1, 2, and 3.
If inputs (a, b) are passed, the code goes through cases 1 and 2.
If input (a) is passed, the code goes through case 1.

Analysis of Equivalence Classes:

Input variable: the input arguments to the program. For each risk, the equivalence classes are listed with a representative sample and the desired output (failure or non-failure class).

Risk 1: A person may give more than three or fewer than three inputs; the program may crash.
- Number of inputs > 3: (126.128.2.3, 126.128.2.4, 126.128.2.5, 80) -> Error (failure class)
- Number of inputs = 3: (126.128.2.3, 126.128.2.4, 80) -> goes through cases 1, 2, and 3 (non-failure class)
- Number of inputs < 3: (126.128.2.3, 80) -> Error (non-failure class)
Boundary values: number of inputs = 3 and number of inputs < 3.

Risk 2: A person may enter the port as a string or a float rather than an integer; the program may crash.
- Input with the port number as an integer: (80) -> goes through case 1 (non-failure class)
- Input with the port number as a string: (bootstrap) -> Error (failure class)
- Input with the port number as a float: (80.0) -> Error (failure class)
Boundary values: port numbers = 80, 80.0, 80.0001, and 0 <= port <= 65536.

Risk 3: A person may supply the three arguments as three different IP addresses; the program may crash.
- Input with all arguments as IP addresses: (126.128.2.3, 126.128.2.4, 126.128.2.5, 126.128.2.6) -> Error (failure class)
- Input with two IP addresses followed by an integer port number: (126.128.2.3, 126.128.2.4, 80) -> goes through cases 1, 2, and 3 (non-failure class)
Boundary values: all three arguments are IP addresses; two are IP addresses, taking all permutations; one is an IP address, taking all permutations. An IP address has the form a.b.c.d, where each of a, b, c, d is between 0 and 255 and there are dots between a, b, c, d.

Risk 4: A person may write the two IP addresses as continuous numbers (without dots); the program may crash.
- Input with 2 continuous "IP addresses" and a port number: (12612823, 12612824, 80) -> Error (failure class)
- Input with 2 proper IP addresses followed by a port number: (126.128.2.3, 126.128.2.4, 80) -> goes through cases 1, 2, and 3 (non-failure class)
Boundary values: a.b.c.d with each of a, b, c, d between 0 and 255 and dots between them; there may be 3 dots (a.b.c.d), 2 dots (ab.c.d and similar permutations), 1 dot (abc.d and similar permutations), or 0 dots (abcd).

Risk 5: A person may write IP addresses containing negative numbers; the program may crash.
- (-126.128.2.3, 126.128.2.4, 80) -> Error (failure class)
- (126.-128.2.3, 126.128.2.4, 80) -> Error (failure class)
- (-126.128.-2.3, 126.128.2.4, 80) -> Error (failure class)
- (-126.128.2.-3, 126.128.2.4, 80) -> Error (failure class)
- Input with 2 proper IP addresses followed by a port number: (126.128.2.3, 126.128.2.4, 80) -> goes through cases 1, 2, and 3 (non-failure class)
Boundary values: a.b.c.d where each of a, b, c, d is not between 0 and 255, with dots between a, b, c, d.

Risk 6: A person may write IP addresses containing strings; the program may crash.
- Input with 2 string "IP addresses" and a port number: (Kyu, Minho, 80) -> Error (failure class)
- Input with 2 proper IP addresses followed by a port number: (126.128.2.3, 126.128.2.4, 80) -> goes through cases 1, 2, and 3 (non-failure class)
Boundary values: a.b.c.d where each of a, b, c, d is not between 0 and 255 and consists of strings like "Kyu", and there may or may not be dots between a, b, c, d (3 dots such as a.b.c.d, or 2 dots such as ab.c.d and similar permutations, or 1 dot such as abc.d and similar permutations, or 0 dots such as abcd).

Similarly, based on more risk statements, more equivalence classes can be derived.
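To show how the representative samples above could drive actual test cases, the following JUnit 4 sketch feeds one sample per class to a parameterized test. The isValidArgs oracle is ours, not NEMO code; it mirrors only the argument checks visible in the excerpt (note that in the excerpt, args[0] is the value parsed as the port).

    import static org.junit.Assert.assertEquals;
    import java.util.Arrays;
    import java.util.Collection;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    // Sketch: one representative sample per equivalence class, checked
    // against a simple argument-validation oracle of our own.
    @RunWith(Parameterized.class)
    public class EquivalenceClassTest {
        @Parameters
        public static Collection<Object[]> samples() {
            return Arrays.asList(new Object[][] {
                {new String[]{"80"}, true},                             // 1 arg: case 1
                {new String[]{"80", "126.128.2.3:5000"}, true},         // 2 args: cases 1-2
                {new String[]{"80", "126.128.2.3:5000", "30"}, true},   // 3 args: cases 1-3
                {new String[]{}, false},                                // <1 arg: usage()
                {new String[]{"a", "b", "c", "d"}, false},              // >3 args: usage()
                {new String[]{"80.0"}, false},                          // float port: parse error
                {new String[]{"bootstrap"}, false},                     // string port: parse error
            });
        }

        private final String[] args;
        private final boolean expectedValid;

        public EquivalenceClassTest(String[] args, boolean expectedValid) {
            this.args = args;
            this.expectedValid = expectedValid;
        }

        // Our oracle: argument count must be 1..3 and args[0] must parse
        // as an integer, as in the excerpt's case 1.
        static boolean isValidArgs(String[] args) {
            if (args.length < 1 || args.length > 3) return false;
            try { Integer.parseInt(args[0]); return true; }
            catch (NumberFormatException e) { return false; }
        }

        @Test
        public void sampleBehavesAsItsClassPredicts() {
            assertEquals(expectedValid, isValidArgs(args));
        }
    }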

9 ORGANIZATION

9.1 AVAILABLE RESOURCES THAT THE TEAM INTENDS TO DEVOTE

People: The OMP team consists of 5 people, 3 of whom are on the project directly and 2 who have knowledge of the project but not sole responsibility for it. This will be taken into consideration in allocating tasks: the context of the tasks and the knowledge and familiarity levels of team members will be used to decide who gets which tasks. SQA activities will be allocated equally among the team.

Figure 2: OMP Team organization

Time: It is expected that the standard rate of 150-200 LOC/hour will be allotted for all document reviews. Time will also help determine the focus of a review if its scope needs to be limited for lack of time. Assessment of time will also incorporate how familiar team members are with particular tools and techniques, and how much time it may take to become familiar with them in order to use them efficiently and effectively. There are 1,440 hours remaining in the project, of which 99 hours will be dedicated to quality assurance. This amount may change as the project progresses and project activities are planned, based on quality needs and project status.

Money: It is important to realize that the cost of particular tools and methods (i.e., training) may constrain which techniques and methods can be considered. At the moment there is no allotted budget, so tools and techniques must be free or as close to free as possible. This may result in short-term usage of tools (i.e., evaluation versions).

9.2 QUALITY ASSURANCE TEAM

All SQA team members will have access to SQA plans and guidelines so that they are aware of the SQA activities and how their roles and tasks fit within these guidelines. As professionals, all team members are trusted to make sure that they have the knowledge and information needed to fulfill their tasks. Discussion of QA activities may be scheduled and conducted so that members can discuss roles and responsibilities with one another. In addition, team members will collaborate to select roles for reviews so that each role is filled by the team member who best fits its characteristics in relation to the project (e.g., ensuring that the author is not the moderator).

The SQA Leader will be in charge of managing the SQA team and will be the tie-breaker when the team hits roadblocks during decision making. The SQA Leader also has the responsibility of making sure that all other team members are carrying out their responsibilities in the correct manner.

For each task, activity, method, or technique, each team member will have a defined role incorporating specific tasks related to such activities, so that each team member is aware of his or her obligations as far as contribution to the activity. Throughout the SQA process, each team member is responsible for knowing:

- Their roles and responsibilities
- The goals of each activity with which they are associated
- The processes that are to be carried out

Role | Responsibilities
Quality Coordinator | Responsible for ensuring all quality activities are planned and carried out accordingly; responsible for ensuring all team members are properly trained and equipped for their given roles and responsibilities; ensures SQA activities align with available resources
Module Leader | Coordinates activities related to a specific system module
Software Quality Leader | Responsible for leading SQA activities (i.e., coordinating reviews and walkthroughs)
Quality Reviewer | Reviews and identifies defects in project artifacts; provides feedback for improved quality in software artifacts
SQA Team Member | Responsible for providing support during SQA activities by carrying out assigned tasks as they relate to the quality of the system


* Some roles will be shared because of the small team size.

9.3 MANAGING THE QUALITY OF ARTIFACTS

When changes to the system result in implementations that may have introduced new bugs, reviews will be conducted on the code related to those changes. A CVS repository (or an integrated quality management system in which people have management responsibility enabling them to manage resources, realize products, and measure, analyze, and improve) will be kept for the purpose of comparing modified code to the original, and so that the original may be accessed in the event that it is needed for revision due to bugs in the modified code. All testing and reviews will have documentation indicating:

- Process: How a particular method or technique will be carried out
- Goals: The purpose of the quality activities associated with the artifacts
- Results: Outputs of the methods and techniques, and the analysis and conclusions formed as a result of them
- Persons involved: Roles and responsibilities of SQA team members in relation to the artifacts
- Notes: Any comments concerning the artifact that will be useful for successfully using the artifact

9.4 PROCESS FOR PRIORITIZING QUALITY ASSURANCE TECHNIQUES

- Create a prioritized checklist of testing characteristics/interests of the system; these will be relative to the functional requirements (Group Configuration, Member Configuration, Multicast Routing, Data Replication) and the quality attributes (Performance and Availability).

- Choose techniques (e.g., design and code reviews) that seem to fit the characteristics identified (i.e., from common knowledge or based on research).

- The SQA team should engage in dialogue and assign a weight to each technique for each checklist item, in terms of how useful the technique is for testing against the criteria (in the checklist) that are of interest; the rating will be 1-10, with 1 being the weakest and 10 the strongest. A sketch of this weighting step follows this list.

- The SQA team conducts an assessment session of techniques that could be useful for testing purposes; the SQA Leader will be in charge of this session.

- The team should come to an agreement about a specific technique and engage in dialogue to address any issues with that technique.

- Weighting and majority team agreement should be the deciding factors for a technique.
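The weighting step reduces to simple arithmetic: sum each technique's 1-10 ratings across the checklist items and rank the techniques by total. The sketch below illustrates this; all technique names and ratings are invented for illustration.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch of the weighting step: one 1-10 rating per checklist item
    // (Multicast Routing, Data Replication, Performance), totals give
    // the priority order. Ratings are illustrative only.
    public class TechniquePrioritization {
        public static void main(String[] args) {
            Map<String, int[]> ratings = new LinkedHashMap<>();
            ratings.put("Code review",   new int[]{7, 8, 3});
            ratings.put("Design review", new int[]{8, 6, 6});
            ratings.put("Load testing",  new int[]{4, 3, 10});

            ratings.forEach((technique, scores) -> {
                int total = 0;
                for (int s : scores) total += s;  // sum across checklist items
                System.out.println(technique + " -> total weight " + total);
            });
        }
    }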


9.5 QA STRATEGY BREAKDOWN INTO TASKS

Task | Effort (total hours) | Duration | Exit criteria | Deliverables

Product realization (73 hours total):
- Requirement | 8 | Phase 1 | SRS reviewed and baselined | SRS
- Design | 16 | Phase 2 | ADD reviewed and baselined | ADD
- Coding | 16 | Phase 3 | Code walkthrough and formal technical review done | Code with unit test cases
- Testing | 20 | Phase 4 | Fewer than 5 bugs per 1 KLOC | Test plan report and test code
- Verification & validation | 8 | Phase 5 | Reviewed and approved by customer | SPMP
- Customer delivery | 5 | Phase 6 | System installed and operation started at the customer site | Verified and validated code

Measurement, analysis and improvement to SQAP (3 hours total):
- Performance management | 2 | Phases 1-6 | Acceptance criteria as defined in the SPMP are met | SPMP
- Process appraisal | 1 | Phases 1-6 | All team members are satisfied with the process | Practice of the new process

Management responsibility (3 hours total):
- Quality management and management review | 3 | Phases 1-6 | All quality management roles assigned and baselined in the SPMP | SPMP

Support processes (20 hours total):
- Configuration management | 5 | Phases 1-6 | RFCs processed and tracked to completion | Configuration management plan with completed RFCs
- Planning | 15 | Phases 1-6 | Planning for new activities is done by team members | Timeline and work application schedule

9.6 QUALITY ASSURANCE PROCESS MEASURES

Measurements of SQA processes provide evaluation criteria that indicate how useful the processes are in increasing the quality of the system and suggest areas in which the processes can be improved. Evaluation looks at the processes in their current states and identifies areas that may be extended, excluded, or modified so that the productivity of SQA activities is increased.


Quality Assurance Processes will be evaluated based on:

Reviews:
- Number of defects found
- Length of time to find defects
- Types of errors identified
- Number of occurrences of each error type
- Number of errors introduced after modification
- Defects/LOC: efficiency of the code review
- LOC/Hour: rate of the code review

Testing:
- Number of defects found
- Length of time to complete testing
- Types of errors identified
- Number of errors introduced after modification

(A minimal sketch computing two of the review measures follows.)
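The sketch below computes the two review-rate measures listed above, Defects/KLOC and LOC/Hour. It is a minimal illustration; all figures in it are hypothetical, not measurements from the OMP project.

#include <iostream>

int main() {
    // Hypothetical figures for a single code review session.
    const int    defects_found  = 12;
    const int    lines_reviewed = 3000;
    const double review_hours   = 4.0;

    // Efficiency of the code review: defects per thousand lines of code.
    double defects_per_kloc = defects_found / (lines_reviewed / 1000.0);

    // Rate of the code review: lines of code examined per hour.
    double loc_per_hour = lines_reviewed / review_hours;

    std::cout << "Defects/KLOC: " << defects_per_kloc << '\n'   // 4
              << "LOC/Hour:     " << loc_per_hour     << '\n';  // 750
    return 0;
}

With these assumed figures the review yields 4 defects per KLOC, a value that would, for comparison, meet the "less than 5 bugs per 1K LOC" testing exit criterion of Section 9.5.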

Use of Process Measures

Process measures will provide valuable information, such as the types of defects that are common within the system; this lets testers and reviewers know, to some degree, what types of defects to expect. In addition, the measures help with planning QA activities, since they show how long QA activities take and thus how much time to allot to them.

Process Improvement Plan (PIP)

Process measures will also be used to determine how much time is spent on quality assurance activities and how productively that time is spent. By determining the number of bugs found, the severity of those bugs, and how much money is saved by catching them, we can determine how effective the process is. If the code and design reviews are not productive, we will evaluate the processes to determine whether the problem lies in how they are carried out or whether these processes are simply not needed for specific work products. If the processes can still be useful (i.e., a significant number of defects or issues of a specific type, or of any type, are being missed during the reviews and/or walkthroughs), the process will be modified to help ensure that these types of issues are addressed to a greater extent in future similar reviews and/or walkthroughs. The testing process will be addressed in the same manner, except that the chosen tools, and how they are being used, will be the points of evaluation with respect to the process measures.

Follow-up and Tracking:

After reviews and testing are over, it is to be recorded whether the review or the test result was successful per the criteria laid out in the goals of the SQAP. If the results are within limits, the process ensures that the work product is packaged for release by the team members or that the documents are baselined. If the results are not consistent with the goals, the bugs are tracked in a repository such as Bugzilla, responsibility for the corrections is assigned, and a reevaluation is carried out after the corrections have been made, as sketched below.
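The pass/fail branch of this follow-up can be illustrated with a small sketch. The QaResult record and its fields are hypothetical stand-ins for the team's actual tracking repository (e.g., Bugzilla), not an integration with it.

#include <iostream>
#include <string>
#include <vector>

// Hypothetical record for one review or test result; the real
// project would track the defects themselves in the repository.
struct QaResult {
    std::string work_product;
    int         defects_found;
    int         defect_limit;   // exit criterion from the SQAP goals
};

int main() {
    // Hypothetical results for two QA activities.
    std::vector<QaResult> results = {
        {"SRS review", 2, 5},
        {"Unit tests", 9, 5}};

    for (const auto& r : results) {
        if (r.defects_found <= r.defect_limit) {
            // Within limits: baseline the document or package for release.
            std::cout << r.work_product << ": baseline/package for release\n";
        } else {
            // Outside limits: log defects, assign corrections,
            // and schedule a reevaluation after the fixes.
            std::cout << r.work_product << ": track defects and reevaluate\n";
        }
    }
    return 0;
}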

Exit Criteria:


The exit criteria defined in the plan depend upon the goals set for the specific sections of the plan. Thus, whenever a review or a test takes place, the goal (defined in the goals section) specific to the deliverable or work product being reviewed or tested serves as the exit criterion for the aforementioned SQAP activities.


10 BIBLIOGRAPHY

1. [Perry 2000] William E. Perry, Effective Methods for Software Testing, 2nd Edition, Wiley and Sons.
2. [EST03] Elfriede Dustin, Effective Software Testing: 50 Ways to Improve Your Testing, Addison-Wesley.
3. [STCQ00] William Lewis, Software Testing and Continuous Quality Improvement, Auerbach.
4. [AST99] Elfriede Dustin, Jeff Rashka, John Paul, Automated Software Testing: Introduction, Management, and Performance, Addison-Wesley.
5. [PGST03] Lee Copeland, A Practitioner's Guide to Software Test Design, Artech House Publishers.
6. [IEEE730] IEEE Std 730, IEEE Standard for Software Quality Assurance Plans.

11 GLOSSARY

11.1 DEFINITION

Audit An independent examination of a software product, software process, or set of software processes to assess conformance with specifications, standards, contractual agreements, or other criteria.

Inspection A visual examination of a software product to detect and identify software anomalies, including errors and deviations from standards and specifications. Inspections are peer examinations led by impartial facilitators who are trained in inspection techniques.

Management Review A systematic evaluation of a software acquisition, supply, development, operation, or maintenance process performed by or on behalf of management that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches used to achieve fitness for purpose.

Review A process or meeting during which a software product is presented to project personnel, managers, users, customers, user representatives, or other interested parties for comment or approval.

Walk-through A static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about possible errors, violations of development standards, and other problems.


11.2 ACRONYMS

ADD Architectural Design Document
CMU Carnegie Mellon University
ICU Information and Communication University
PL Project Leader
RFC Request For Change
SCA Software Change Authorization
SCMP Software Configuration Management Plan
SCMPR Software Configuration Management Plan Review
SCR Software Change Request
SEPG Software Engineering Process Group
SJCS Standard of Java Coding Specification
SOW Statement of Work
SPMP Software Project Management Plan
SRS Software Requirements Specification
SSR Software Specification Review
STP Software Test Plan
SVVP Software Verification and Validation Plan
SVVPR Software Verification and Validation Plan Review
SVVR Software Verification and Validation Report
TM Team Members

12 APPENDIX

Checklists

SRS:


SDD:

The checklists as defined by the Open Process Framework (OPF) for the SDD and similar documents follow.

12.1.1 Ambiguity

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Single Interpretation: Does every design decision documented in the SDD have only a single interpretation that is the same for both those who produce it and those who read it? | | |


12.1.2 Completeness

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Package Designs: Does the SDD document all significant package design decisions? | | |
| Unit Designs: Does the SDD document all significant unit design decisions? | | |
| Thoroughly Documented: Are design decisions for the current release documented as completely and as thoroughly as is known at the present time? Note that information relevant to future releases need not be completely documented. | | |
| Current TBDs: Is the acronym "TBD" used to signify that the associated design decisions have not yet been determined and documented? | | |
| No TBDs at Release: Does the final SDD for a release not contain any "TBDs" for that release? | | |

12.1.3 Consistency

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Upward Consistency: Is the SDD consistent with higher-level documents (e.g., System Requirements Specification, Project Glossary, Domain Object Model, Software Architecture Document)? | | |
| Lateral Consistency: Is the SDD consistent with documents and models at the same level (e.g., Database Design Document, Human Interface Design Document)? | | |
| Internal Consistency: Is the SDD internally consistent in that all design decisions that it contains are compatible? | | |

12.1.4 Modifiability

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Organization: Does the SDD have a coherent, easy-to-use organization? | | |
| Redundancy: Are the design decisions neither redundantly stated nor intermingled? | | |

12.1.5 Verifiability

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Standards Conformance: Is the SDD verifiable in the sense that its content and format conform to this standard? | | |

12.1.6 Headers and Footers

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Header: Does every page of the SDD contain a header with the following information? - Application Name - Document Title (i.e., "Software Design Document") - Document Identifier - Document Version Number - Document Version Date | | |
| Footer: Does every page of the SDD contain a footer with the following information? - Distribution Type (e.g., "Public", "Confidential", or "Secret") - Copyright Notice - Document Page Number | | |

12.1.6.1 1.1) Definition

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Existence: Does paragraph 1.1 (Definition) of the introduction section exist? | | |
| Identification: Is it properly identified as paragraph 1.1 and titled (Definition)? | | |
| Correct: Is the definition of the document correct? | | |
| Reuse: Does paragraph 1.1 reuse the standard boilerplate definition of a software design document? | | |

12.1.6.2 1.2) Objectives

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Existence: Does paragraph 1.2 (Objectives) of the introduction section exist? | | |
| Identification: Is it properly identified as paragraph 1.2 and titled (Objectives)? | | |
| Objectives: Does it provide a list of the software design document's objectives? | | |
| Complete: Is this list of objectives complete? | | |
| Correct: Are each of the objectives correct? | | |
| Reuse: Does paragraph 1.2 reuse the standard boilerplate objectives? | | |

12.1.6.3 1.3) Intended Audiences

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Existence: Does paragraph 1.3 (Intended Audiences) of the introduction section exist? | | |
| Identification: Is it properly identified as paragraph 1.3 and titled (Intended Audiences)? | | |
| Intended Audiences: Does it provide a list of the specification's intended audiences? | | |
| Roles: Is this list of intended audiences complete in terms of roles? | | |
| Teams: Is this list of intended audiences complete in terms of teams? | | |
| Organizations: Is this list of intended audiences complete in terms of organizations? | | |
| Correct: Are each of the intended audiences correct? | | |
| Reuse: Does paragraph 1.3 reuse the standard boilerplate list of intended audiences? | | |

12.1.6.4 1.4) References

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Existence: Does paragraph 1.4 (References) of the introduction section exist? | | |
| Identification: Is it properly identified as paragraph 1.4 and titled (References)? | | |
| References: Does it provide a list of the software design document's references? | | |
| Customer Documentation: Is this list of references complete in terms of customer organization documentation? | | |
| Third-Party Documentation: Is this list of references complete in terms of third-party documentation? | | |
| Development Organization Documentation: Is this list of references complete in terms of development organization documentation? | | |
| Correct: Is this list of references correct? | | |
| Reuse: Does paragraph 1.4 reuse the standard boilerplate list of development organization references? | | |

12.1.6.5 1.5) Document Overview

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Existence: Does paragraph 1.5 (Document Overview) of the introduction section exist? | | |
| Identification: Is it properly identified as paragraph 1.5 and titled (Document Overview)? | | |
| Complete: Does it describe each major section of the software design document? | | |
| Correct: Are these descriptions correct summaries of the associated sections? | | |
| Reuse: Does paragraph 1.5 reuse the standard boilerplate section descriptions? | | |
| Well-Written: Is the document overview well-written and easy to read? | | |

12.1.7 2) Package Design

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Existence: Does section 2 (Package Design) exist? | | |
| Name: Is it properly identified as section 2 and titled (Package Design)? | | |
| Purpose: Does it summarize the purpose of the package design section? | | |
| Completeness: Is it complete in that it lists all packages? | | |
| Coupling: Is the overall decomposition of the software component into packages one that minimizes package-to-package coupling? Are the dependencies between the packages consistent with the dependencies between their component units and packages? | | |
| Size: Is the total number of packages proportional to the size and complexity of the associated application or component? Is the total number of packages proportional to the total number of units they contain? | | |
| Disjointness: Are the packages disjoint in that they do not redundantly have the same purpose and responsibilities and do not implement the same services? | | |
| Well-Written: Is the package design section well-written and easy to read? | | |


12.1.7.1 2.X) Individual Package X

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Package Existence: Does paragraph 2.X (Package Name X) exist for each package? | | |
| Package Name: Is paragraph 2.X properly identified and titled (Package Name X)? Is the name of the package clearly descriptive in that it captures the package's purpose, abstraction, and the role that it plays within the software architecture? Does the name of the package follow standard package naming conventions? Is the name of the package unique within the scope of the software design document? | | |
| Package Purpose: Is the purpose and abstraction of the package properly documented? Is the purpose of the package consistent with and implied by its name? Is the package cohesive in terms of having a single abstraction? | | |
| Package Responsibilities: Are the responsibilities of the package properly documented? Is the package cohesive in terms of having a cohesive set of responsibilities? Is the set of package responsibilities consistent with the responsibilities of its component units and packages? Is each responsibility consistent with the name of the package? Is each responsibility consistent with the purpose (abstraction) of the package? Are all package responsibilities at the same design level of abstraction? | | |
| Requirements: Are the collaborating contents of the package able to fulfill all requirements allocated to it? Does this include functional, data, quality, and API requirements? Does the package comply with all design constraints? | | |
| Package-Level Interfaces: Does the package have well-defined interface(s)? If the package has multiple interfaces, is each one cohesive in that it provides a cohesive set of services? Is the package loosely coupled with other packages? Are the internal contents of the package more tightly coupled to each other than they are to the contents of other external packages? Are the visibilities of these contents properly identified? Is information hiding (encapsulation) maximized? | | |
| Package Contents: Are all contents of the package (e.g., classes, packages, procedures) properly listed? Is the package cohesive in terms of its contents? Are all of the components of the package needed to fulfill its responsibilities? | | |
| Package Size: Does the package contain a reasonable number of components? Note that too many or too few components may imply that the package should be decomposed or merged with another package. | | |
| Package Feasibility: Is the package feasible to implement given current resources including staffing, schedule, and technology? | | |
| Package Diagrams: Do adequate static and dynamic diagrams (e.g., class diagrams, collaboration diagrams, sequence diagrams, state transition diagrams) exist that properly document the structure and behavior of the package? Are these diagrams syntactically correct? Are these diagrams up-to-date? | | |
| State Machine: If the package has a state machine, is the state machine consistent with the package responsibilities, interfaces, and contents? Is the state machine as simple as practical with no extraneous states or transitions? Are all state and transition names understandable and unique within the state machine? Are all referenced objects visible to the class? Have composite states been used where appropriate to simplify the state machine? | | |
| Well-Written: Is the package X paragraph well-written and easy to read? | | |

12.1.7.2 2.X.Y) Individual Unit Y

| Inspection Questions | Yes (Pass) | No (Fail) |
|---|---|---|
| Unit Existence: Does paragraph 2.X.Y (Unit Name Y) exist inside paragraph 2.X for each unit contained within the corresponding package? | | |
| Unit Name: Is paragraph 2.X.Y properly identified and titled (Unit Name Y)? Is the name of the unit clearly descriptive in that it captures the unit's purpose, abstraction, and the role that it plays within the package? Does the name of the unit follow standard naming conventions by using the correct part of speech (e.g., singular noun or noun phrase for a class, active verb or verb phrase for a procedure)? Is the name of the unit unique within the scope of its name space (e.g., enclosing package or software component)? | | |
| Unit Purpose: Is the purpose and abstraction of the unit properly documented? Is the purpose of the unit consistent with and implied by its name? Is the unit cohesive in terms of having a single abstraction? Does the unit have the correct kind of abstraction (e.g., object abstraction for a class, functional abstraction for a procedure)? | | |
| Unit Responsibilities: Are the responsibilities of the unit properly documented? Is the unit cohesive in terms of having a cohesive set of responsibilities? Is each responsibility consistent with the name of the unit? Is each responsibility consistent with the purpose (abstraction) of the unit? Are all unit responsibilities at the same design level of abstraction? | | |
| Requirements: Are the collaborating contents of the unit able to fulfill all requirements allocated to it? Does this include functional, data, quality, and API requirements? Does the unit comply with all design constraints? | | |
| Unit-Level Interfaces: Does the unit have well-defined interface(s)? If the unit has multiple interfaces, is each one cohesive in that it provides a cohesive set of services? Is the unit loosely coupled with other units? Are the internal contents of the unit more tightly coupled to each other than they are to the contents of other external units? Are the visibilities of these contents properly identified? Is information hiding (encapsulation) maximized? | | |
| Inheritance: Are inheritance relationships properly documented? Is inheritance only used to capture generalization/specialization relationships and design abstractions rather than implementation details? Are subclasses and subtypes clearly distinct from their superclasses and supertypes? Are inheritance hierarchies balanced, being neither too flat nor too deep? | | |
| Association: Are all association relationships properly documented, including multiplicity? Are different types of relationships (e.g., simple association, aggregation, collection, membership) clearly differentiated? | | |
| Disjointness: Are the units disjoint in that they do not redundantly have the same purpose and responsibilities and do not implement the same services? | | |
| Unit Contents: Are all contents of the unit (e.g., for classes: attributes, methods, and exceptions) properly documented? Are all of the elements of the unit needed to fulfill its responsibilities? | | |
| - Class Attributes: Is every attribute documented with name, type, and visibility? | | |
| - Class Invariants: Are all invariants (if any) properly documented? | | |
| - Class Methods: Are all methods properly documented, including name, purpose, signature, preconditions and postconditions, and structure (if appropriate)? Are method signature parameters properly documented, including names, types, range restrictions (if any), and data flow (in, out, in/out)? Are method signature exceptions properly documented? Are method signatures consistent with the syntax of the target programming languages? Is the method structure (e.g., PDL) properly documented for all large and complex methods? Are the methods cohesive, modular, and of correct size? Is the behavior of the class completely described by its method descriptions (e.g., purpose and assertions)? | | |
| - Class Exceptions: Are all exceptions properly documented, including name, abstraction, type, and direction (raised or handled)? Are all exception handlers properly documented? | | |
| Unit Size: Does the unit contain a reasonable number of components? Note that too many or too few components may imply that the unit should be decomposed or merged with another unit. | | |
| Unit Feasibility: Is the unit feasible to implement given current resources including staffing, schedule, and technology? | | |
| Unit Diagrams: Do adequate static and dynamic diagrams (e.g., class diagrams, collaboration diagrams, sequence diagrams, state transition diagrams) exist that properly document the unit's structure and behavior? Are these diagrams syntactically correct? Are these diagrams up-to-date? | | |
| State Machine: If the unit has a state machine, is the state machine consistent with the unit responsibilities, interfaces, and contents (e.g., attributes and methods)? Is the state machine as simple as practical with no extraneous states or transitions? Are all state and transition names understandable and unique within the state machine? Are all referenced objects visible to the class? Have composite states been used where appropriate to simplify the state machine? | | |
| Well-Written: Is the unit X.Y paragraph well-written and easy to read? | | |

Coding:

The coding standard as defined by Watts Humphrey for C++ (the programming language to be used) is given below:

13 C++ CODING STANDARD

Purpose: To guide the development of C++ programs.

Program headers: All programs begin with a descriptive header.

Header format

/**********************************************/
/* Program Assignment: the program number     */
/* Name:               your name              */
/* Date:               the date program       */
/*                     development started    */
/* Description:        a short description of */
/*                     the program function   */
/**********************************************/

If the program is to be submitted for grading, use this heading instead.

Listing contents: Provide a summary of the listing contents.

Contents example

/**********************************************/
/* Listing Contents:                          */
/*  Reuse instructions                        */
/*  Includes                                  */
/*  Class declarations:                       */
/*   CData                                    */
/*   ASet                                     */
/*  Source code in c:\classes\CData.cpp:      */
/*   CData                                    */
/*   CData()                                  */
/*   Empty()                                  */
/**********************************************/

Reuse instructions: Describe how the program is used. Provide the declaration format, parameter values and types, and parameter limits. Provide warnings of illegal values, overflow conditions, or other conditions that could potentially result in improper operation.

Example:

/**********************************************/
/* Reuse Instructions                         */
/*   int PrintLine(char *line_of_character)   */
/*   Purpose: to print string,                */
/*    'line_of_character', on one print line  */
/*   Limitations: the maximum line length is  */
/*    LINE_LENGTH                             */
/*   Return: 0 if printer not ready to print, */
/*    else 1                                  */
/**********************************************/

Identifiers: Use descriptive names for all variables, function names, constants, and other identifiers. Avoid abbreviations or single-letter variables.

Identifier example:

int number_of_students;   /* This is GOOD */
float x4, j, ftave;       /* These are BAD */

Comments: Sufficiently document the code so the reader can understand its operation. Comments should explain both the purpose and the behavior of the code. Comment variable declarations to indicate their purpose.

Good comment:

if (record_count > limit)  /* have all the            */
                           /* records been processed? */

Bad comment:

if (record_count > limit)  /* check if record_            */
                           /* count is greater than limit */

Major sections: Major program sections should be preceded by a block comment that describes the processing done in the next section.

Example:

/**********************************************/
/* This program section will examine the      */
/* contents of the array "grades"             */
/* and will calculate the average grade       */
/* for the class                              */
/**********************************************/

Blank space: Write programs with sufficient spacing so they are easy to read. Separate every program construct with at least one space.

Indenting: Indent every level of bracket from the previous one. Open and closed brackets should be on lines by themselves and aligned with each other.

Indenting example:

while (miss_distance > threshold)
{
   success_code = move_robot(target_location);
   if (success_code == MOVE_FAILED)
   {
      printf("The robot move has failed.\n");
   }
}

Capitalization: All defines are capitalized. All other identifiers and reserved words are lowercase. Messages being output to the user can be mixed case so as to make a clean user presentation.

Capitalization example

#define DEFAULT_NUMBER_OF_STUDENTS 15

int class_size=DEFAULT_NUMBER_OF_STUDENTS;
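For reference, a short program applying the above standard end to end is sketched below. It is an illustrative composite assembled for this plan; the header fields, identifiers, and grade data are placeholders, not code from the OMP system.

/**********************************************/
/* Program Assignment: standard example       */
/* Name:               OMP team member        */
/* Date:               May 15, 2006           */
/* Description:        computes the average   */
/*                     of a list of grades    */
/**********************************************/
#include <cstdio>

#define DEFAULT_NUMBER_OF_STUDENTS 5

/**********************************************/
/* This program section examines the contents */
/* of the array "grades" and calculates the   */
/* average grade for the class                */
/**********************************************/
int main()
{
   int grades[DEFAULT_NUMBER_OF_STUDENTS] = {82, 91, 77, 68, 95};
   int grade_total = 0;      /* running sum of all grades */
   int student_index;        /* loop index over students  */

   for (student_index = 0; student_index < DEFAULT_NUMBER_OF_STUDENTS;
        student_index++)
   {
      grade_total = grade_total + grades[student_index];
   }

   printf("Average grade: %d\n",
          grade_total / DEFAULT_NUMBER_OF_STUDENTS);
   return 0;
}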

User Manual:

The standard is the one defined by the US government's housing and urban development organization; its MIS documentation standard is very versatile and was found suitable for the present system. The checklist is given below:


In the checklist below, the X-Reference and Author Comments columns are to be completed by the author; the Comply (Y/N) and Reviewer Comments columns are to be completed by the reviewer.

| Requirement | X-Reference (Page #/Section #) | Author Comments | Comply (Y/N) | Reviewer Comments |
|---|---|---|---|---|
| 1.0 GENERAL INFORMATION | | | | |
| 1.1 System Overview: Explain in general terms the system and the purpose for which it is intended. | | | | |
| 1.2 Project References: Provide a list of the references that were used in preparation of this document, in order of importance to the end user. | | | | |
| 1.3 Authorized Use Permission: Provide a warning regarding unauthorized usage of the system and making unauthorized copies of data, software, reports, and documents, if applicable. | | | | |
| 1.4 Points of Contact: | | | | |
| 1.4.1 Information: Provide a list of the points of organizational contact that may be needed by the document user for informational and troubleshooting purposes. | | | | |
| 1.4.2 Coordination: Provide a list of organizations that require coordination between the project and its specific support function. Include a schedule for coordination activities. | | | | |
| 1.4.3 Help Desk: Provide help desk information, including responsible personnel phone numbers for emergency assistance. | | | | |
| 1.5 Organization of the Manual: Provide a list of the major sections of the User's Manual (1.0, 2.0, 3.0, etc.) and a brief description of what is contained in each section. | | | | |
| 1.6 Acronyms and Abbreviations: Provide a list of the acronyms and abbreviations used in this document and the meaning of each. | | | | |
| 2.0 SYSTEM SUMMARY | | | | |
| 2.1 System Configuration: Briefly describe and depict graphically the equipment, communications, and networks used by the system. | | | | |
| 2.2 Data Flows: Briefly describe or depict graphically the overall flow of data in the system. | | | | |
| 2.3 User Access Levels: Describe the different users and/or user groups and the restrictions placed on system accessibility or use for each. | | | | |
| 2.4 Contingencies and Alternate Modes of Operation: Explain the continuity of operations in the event of emergency, disaster, or accident. Explain what effect degraded performance will have on the user. | | | | |
| 3.0 GETTING STARTED | | | | |
| 3.1 Logging On: Describe the procedures necessary to access the system, including how to get a user ID and log on. | | | | |
| 3.2 System Menu: | | | | |
| 3.2.x [System Function Name]: (Each system function in this subsection should be under a separate header. Generate new subsections as necessary for each system function from 3.2.1 to 3.2.x.) Provide a system function name and identifier here for reference in the remainder of the subsection. Describe the function and pathway of the menu item. Provide an average response time to use the function. | | | | |
| 3.3 Changing User ID and Password: Describe how the user obtains a user ID. Describe the actions a user must take to change a password. | | | | |
| 3.4 Exit System: Describe the actions necessary to properly exit the system. | | | | |
| 4.0 USING THE SYSTEM (ONLINE): This section is only used for online systems. | | | | |
| 4.x [System Function Name]: (Each system function in this section should be under a separate header. Generate new sections as necessary for each system function from 4.1 to 4.x.) Provide a system function name and identifier here for reference in the remainder of the subsection. Describe the function in detail and depict it graphically. Include screen captures and descriptive narrative. | | | | |
| 4.x.y [System Sub-Function Name]: (Each system sub-function should be under a separate header. Generate new subsections as necessary for each system sub-function from 4.1.1 to 4.x.y.) Where applicable, for each sub-function referenced within a section in 4.x, describe in detail and depict graphically the sub-function name(s) referenced. | | | | |
| 4.2 Special Instructions for Error Correction: Describe all recovery and error correction procedures, including error conditions that may be generated and corrective actions that may need to be taken. | | | | |
| 4.3 Caveats and Exceptions: If there are special actions the user must take to ensure that data is properly saved or that some other function executes properly, describe those actions here. | | | | |
| 5.0 USING THE SYSTEM (BATCH): This section is only used for batch systems. | | | | |
| 5.x [System Function Name]: (Each system function in this section should be under a separate header. Generate new sections as necessary for each system function from 5.1 to 5.x.) Provide a system function name and identifier here for reference in the remainder of the subsection. Describe the function in detail and depict it graphically. Include screen captures and descriptive narrative. | | | | |
| 5.x.y [System Sub-Function Name]: (Each system sub-function should be under a separate header. Generate new subsections as necessary for each system sub-function from 5.1.1 to 5.x.y.) Where applicable, for each sub-function referenced within a section in 5.x, describe in detail and depict graphically the sub-function name(s) referenced. | | | | |
| 5.2 Special Instructions for Error Correction: Describe all recovery and error correction procedures, including error conditions that may be generated and corrective actions that may need to be taken. | | | | |
| 5.3 Caveats and Exceptions: If there are special actions the user must take to ensure that data is properly saved or that some other function executes properly, describe those actions here. | | | | |
| 5.4 Input Procedures and Expected Output: Prepare a detailed series of instructions (in non-technical terms) describing the procedures the user will need to follow to use the system. | | | | |
| 6.0 QUERYING | | | | |
| 6.1 Query Capabilities: Describe or illustrate the pre-programmed and ad hoc query capabilities provided by the system. Include the query name or code the user would invoke to execute the query. | | | | |
| 6.2 Query Procedures: Develop detailed descriptions of the procedures necessary for file query, including the parameters of the query and the sequenced control instructions to extract query requests from the database. | | | | |
| 7.0 REPORTING | | | | |
| 7.1 Report Capabilities: Describe all reports available to the end user. If the user is creating ad hoc reports with special formats, describe them here. | | | | |
| 7.2 Report Procedures: Provide instructions for executing and printing the different reports available. | | | | |