

[IEEE 2011 2nd International Conference on Computer and Communication Technology (ICCCT) - Allahabad, India, 2011.09.15-2011.09.17]

A Metric Suite for Early Estimation of Software Testing Effort using Requirement Engineering Document and its validation

Ashish Sharma Department of Computer Engineering & Applications

GLA University, Mathura, India [email protected]

Dharmender Singh Kushwaha Department of Computer Science & Engineering

MNNIT, Allahabad, India [email protected]

Abstract— Software testing is one of the most important and critical activities of the software development life cycle; it ensures software quality and directly influences the development cost and the success of the software. This paper empirically proposes a test metric for estimating software testing effort from the IEEE Software Requirement Specification (SRS) document, in order to avoid budget overruns, schedule escalation, etc., at a very early stage of software development. The effort required to develop or test software also depends on the complexity of the proposed software; the proposed test metric therefore computes the requirement based complexity of yet-to-be-developed software on the basis of its SRS document. The proposed measure is then compared with other proposals for test effort estimation, such as code based, cognitive value based and use case based measures. The results validate that the proposed test metric is a comprehensive one and compares well with various other prominent measures, while its computation involves the least overhead.

Keywords- Requirement based complexity, requirement based function points, requirement based test effort estimation

I. INTRODUCTION

Software testing is a vital activity that directly influences the quality of the software. In order to carry out systematic testing, it is imperative to predict the effort required to test the software. Hence this paper proposes a metric for estimating software testing effort from software requirements. Most of the methodologies proposed in the past are code based, but by the time the code is available, it is too late. The effort required for testing can therefore be reduced if the test effort is computed in the early phases of the software development life cycle (SDLC). In this direction, we capture attributes from the SRS of the proposed software to compute the improved requirement based complexity (IRBC) [16]. The obtained IRBC then serves as the basis for estimating and analysing the test effort. Since the SRS is a verifiable document, estimating test effort from the SRS is an early-warning, systematic, less complex, faster and comprehensive approach. To prove its effectiveness, the proposed test metric is also compared with various prevalent test effort estimation practices. These test effort measures can be classified and individually compared under three broad categories: use case based, code based and complexity value based test effort estimation.

Use case based methods [9, 10, 12, 13] are also derived from software requirements and consider actor ranking, use case points, normal and exceptional scenarios, and various technical and environmental factors in order to compute test effort with a constant conversion factor. The limitation of these methods is that they depend on the judgment of the analyst.

Code based test effort estimation methods [4, 7, 8, 11], on the other hand, first consider the code, then generate the test cases, execution points and productivity factors, and finally compute the test effort. These methodologies are computation intensive, and the amount of rework also increases.

Cognitive complexity based test effort estimation measures [6, 17] compute the cognitive complexity of the software and correlate it with the test effort. These measures use the lines of code and their underlying basic control structures, and require a tedious calculation to finally arrive at the test effort.

In contrast, the proposed measure considers requirements written in the standard IEEE 830:1998 [1] format in order to obtain effective and precise results. The strength of the measure lies in computing the requirement based complexity from the requirement engineering document prior to estimating the software test effort. This leads to early defect detection, which has been shown to be much less expensive than finding defects during integration testing or later. The Standish Group research [14] shows the staggering result that 31.1% of projects are cancelled before they are ever completed, and that 52.7% of projects cost 189% of their original estimates. The top three reasons for most project failures are, first, that requirements and specifications are incomplete; second, that requirements and specifications change too often; and finally, that there is a lack of user input while freezing the requirements.

The reason for estimating software testing effort in the early phases of the SDLC is simple economics. The study in [14] also brings out two important aspects: first, that the majority of defects have their root cause in poorly defined requirements, and second, that the earlier an error is found, the cheaper it is to fix.

International Conference on Computer & Communication Technology (ICCCT)-2011

978-1-4577-1386-6/11/$26.00 ©2011 IEEE 373


II. RELATED WORKS

During the last few decades, various models, methods and techniques have been developed for the estimation of software test effort. This section surveys some of the leading papers describing the work carried out so far on estimating software testing effort.

The work presented in this paper is based on the SRS. The Software Engineering Standards Committee of the IEEE Computer Society [1] presents guidelines for documenting software requirements in the IEEE 830:1998 format. Antonio Bertolino [2] presents a software testing roadmap covering the achievements, challenges and dreams of software testing. Symons [3] discusses the computation of function points for estimating the size and cost of software, also taking technical and environmental factors into account. Boehm [5] discusses the constructive cost model (COCOMO) and its various categories for estimating software cost using cost driver attributes.

Eduardo Aranha [4] uses a controlled natural language (CNL) tool to convert test specifications into natural language and estimate test effort; the approach also computes test size and a test execution complexity measure. Zhu Xiaochun [9] presents an experience based approach for test suite size estimation. Aranha and Borba [8] discuss a test execution and test automation effort estimation model for test selection; the model is based on test specifications written in a controlled natural language and uses manual coverage and automated test generation techniques. Nageswaran [13] presents a use case based approach for test effort estimation that considers weight and environmental factors, with a constant conversion factor, for the computation of test effort. Zhu Xiaochun et al. [10] present an empirical study on early test execution effort estimation based on test case number prediction and test execution complexity. Erika Almeida et al. [12] discuss a method for test effort estimation based on use cases; it uses parameters such as actor ranking and technical and environmental factors related to testing (test tools, inputs, environment, distributed systems, interfaces, etc.) for the calculation of test effort.

Kushwaha and Misra [17] discuss the CICM and the modelling of test effort for commercial off-the-shelf (COTS) components and component based software engineering (CBSE). The CICM is also compared with the cyclomatic number, and the measure demonstrates that both the cyclomatic number and the test effort increase as software complexity increases.

Sharma and Kushwaha [15, 16] discuss the improved requirement based complexity (IRBC), computed from the SRS of the proposed software, and also present an object based semi-automated model for the tagging and categorization of software requirements. Many other researchers have contributed to this issue; a roadmap of the various approaches used for test effort estimation is shown in Table 1.

TABLE 1. COMMONLY USED MEASURES AND THEIR EVOLUTION

2010        Requirement based test effort estimation
2009        Use case based test estimations
2008        Test execution complexity based estimation
2007        Tools and natural language based estimation
2004        Formal method based estimations
2002        Extended finite model based estimations
Late 1990s  Object oriented based estimations; heuristic and solution based estimations
1990s       Model based estimation
1980        Test data based estimation

III. COMPLEXITY COMPUTATION FROM IEEE:SRS DOCUMENT

It has been established that the complexity of code has a direct bearing on the amount of test effort [17]. For accurate and systematic estimation of test effort, it is necessary to compute the complexity of the yet-to-be-developed software in the requirement analysis phase itself. Hence, this paper uses the improved requirement based complexity (IRBC) measure [16], which is derived from the recommendations of the IEEE 830:1998 SRS document [1] and is mathematically expressed as:

IRBC = ((PC × PCA) + DCI + IFC + SFC) × SDLC

The details of this computation can be found in [16].

The next section discusses the application of IRBC for the estimation of testing effort for the software to be developed.

IV. PROPOSED APPROACH FOR TEST EFFORT ESTIMATION

Since practitioners are generally short of time and resources, they tend to evade systematic testing, which is not considered a very rewarding job. This affects overall software development, because every testing technique demands adequate test case generation, modelling and documentation. Though many software testing measures have been proposed in the past, the field is still far from mature. If quality standards are to be followed, the software testing effort has to be estimated before the testers start writing test cases. Without a test effort estimate, the testing manager cannot create a test plan, on which decisions such as manpower requirements, the schedule for writing test cases, performing the testing, drawing out the testing results, submitting the result report to the development team and receiving back the resolved bugs all depend. To move a step closer, the proposed measure empirically estimates the software testing effort so that the testing process can be properly planned, the testing cost reduced, and the effort computed in the early phases of the SDLC. The following paragraphs discuss the estimation of software testing effort from the improved requirement based complexity (IRBC) measure derived from software requirements.



Requirement Based Function Points (RBFP)

IRBC comprises all the attributes that are sufficient to compute function point analysis (FPA) [3] for any software. The function point measure includes five parameters, i.e. external inputs, external outputs, interfaces, files and enquiries; these parameters are the basis for estimating function points and thereby the size of the proposed software. Beyond the parameters used by FPA, software requirements contain certain other parameters as well, and IRBC makes use of this exhaustive set of parameters in computing the Requirement Based Function Point (RBFP) measure. IRBC is capable of computing the complexity value of yet-to-be-developed software; however, in order to estimate RBFP, we also require the technical and environmental factors (TEF) pertaining to the testing activity. The nine TEF are described in Table 2 along with their assigned weights.

TABLE 2. TECHNICAL AND ENVIRONMENTAL FACTORS

Factor  Description              Assigned Value
1       Test tools               5
2       Documented inputs        5
3       Development environment  2
4       Test environment         3
5       Test suite reuse         3
6       Distributed system       4
7       Performance objectives   2
8       Security features        4
9       Complex interfaces       5

These parameters contribute significantly to obtaining RBFP. The need for and applicability of each factor is determined by its degree of influence (DI), which ranges from zero (harmless) to four (essential): Harmless = 0, Needless = 1, Desirable = 2, Necessary = 3, Essential = 4. TEF is then computed by summing the DI scores of the nine factors. Mathematically, TEF can be computed as:

TEF = 0.65 + 0.01 × ∑DI

IRBC has a bearing on two basic parameters, i.e. functionality and input, and these parameters are sufficient to decide or generate the test cases in both black box and white box scenarios. In addition, we have the nine TEF specifically meant for testing purposes. Hence, the requirement based function point (RBFP) can be described as the product of IRBC and TEF. This is expressed as:

RBFP = IRBC × TEF

RBFP acts as the basis for finding the optimal number of test cases required to carry out exhaustive testing of the yet-to-be-developed software.

Number of Requirement Based Test Cases (NRBTC)

The number of requirement based test cases (NRBTC) is a function of the requirement based function points (RBFP), because the number of function points dictates the number of test cases to be designed [10]. This can be expressed as:

NRBTC = (RBFP)^1.2

The estimated number of test cases for the proposed software serves as the basis for estimating the test effort in man-hours.

Requirement Based Test Team Productivity (RBTTP)

Test team productivity depends on the number of staff and the talent available to test the software. A model [10] for estimating tester rank along two dimensions, i.e. experience in testing and knowledge of the target application, is illustrated in Figure 1. The tester rank [10] helps in understanding tester behaviour during test execution: the higher the rank of the test team, the lower the number of testers needed. Hence, the requirement based test team productivity (RBTTP) is derived from the number of testers and their relative ranks, which can be expressed as:

RBTTP = T × R

where T is the total number of testers and R is the respective rank from the tester rank model.

Figure 1: Tester Rank Model (a 2×2 grid that assigns ranks 1-4 according to whether a tester is junior or senior in testing expertise and in application knowledge)

Having obtained the number of test cases (NRBTC) and the productivity of the test team members (RBTTP), we can now compute the requirement based test effort estimate in man-hrs.

Requirement Based Test Effort Estimate (RBTEE)

To compute the software testing effort for the proposed software, two significant parameters must be known: first, the number of test cases, and second, the productivity of the test team. We have already derived the contributing measures, i.e. NRBTC for the number of test cases and RBTTP for the test team productivity. These measures are multiplied to obtain the final requirement based test effort estimate (RBTEE) in man-hours for the proposed software. This is expressed as:

RBTEE = NRBTC × RBTTP man-hrs

Early estimation of software testing effort using requirement based complexity will save a tremendous amount of time, cost and manpower for yet-to-be-developed software. The next section carries out a case study to elaborate the proposed approach.

V. SOFTWARE DOCUMENTATION USING IEEE 830:1998

This section carries out a case study of the FCFS scheduling


algorithm in order to illustrate the proposed metric and to compare it with other existing measures.

A. Example SRS: FCFS Scheduling

1. Introduction: The CPU is one of the primary computer resources. The first come first serve (FCFS) scheduling algorithm is based on the concept that the process that requests the CPU first is assigned the CPU first.

2. Purpose: The main purpose of the FCFS scheduling algorithm is to increase CPU utilization and throughput and to reduce waiting time and response time. With this algorithm one can achieve fairness in allocating processes to the CPU based on the order of their arrival.

3. Scope: The major scope of the FCFS scheduling algorithm lies in batch systems, where the waiting time can be large if short requests wait behind longer ones. Therefore, FCFS is used where the burst times of the processes are comparatively small and in ascending order. It is not suitable for time sharing systems, but it is used in multilevel feedback queues.

4. Definitions: FCFS is a scheme in which the process that requests the CPU first is assigned the CPU first. Throughput is the number of processes completed per unit time. Turnaround time is the interval between the submission of a process and its completion. Waiting time is the sum of the periods spent waiting in the ready queue. Response time is the time until the processor starts responding.

5. References: Abraham Silberschatz and Galvin, Operating System Concepts, 7th Edition, Wiley.

6. Overview: The algorithm executes requests in order of arrival. The average waiting time under this policy can be quite long. The FCFS policy is non-preemptive.

7. Overall Problem Description: The process that requests the CPU first is allocated the CPU first. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue; the running process is then removed from the queue.

8. Product Perspective: The perspective of the FCFS scheduling case study can be described with a block diagram in which the processes in the ready queue are assigned to the processor on a first come first serve basis.

9. Product Functions:
9.1 Inputs: process arrival time, number of processes, burst time, process waiting time, turnaround time.
9.2 Outputs: display of waiting time, display of turnaround time.

10. User Characteristics: The user is an end user only, providing the input and viewing the result on the output screen. The computation is performed on a single machine without any client server environment.

11. Constraints: FCFS is non-preemptive in nature and is particularly troublesome for time sharing systems. Also, no process should be allowed to keep the CPU for an extended period.
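As a quick illustration of the definitions in the example SRS (this simulator is not part of the paper, only a sketch of FCFS behaviour), waiting and turnaround times can be computed by serving processes strictly in arrival order:

```python
def fcfs(arrivals, bursts):
    """Non-preemptive FCFS: serve processes strictly in arrival order.
    Returns per-process waiting and turnaround times."""
    order = sorted(range(len(arrivals)), key=lambda i: arrivals[i])
    clock = 0
    waiting, turnaround = {}, {}
    for i in order:
        start = max(clock, arrivals[i])      # CPU may idle until arrival
        waiting[i] = start - arrivals[i]     # time spent in the ready queue
        clock = start + bursts[i]            # process runs to completion
        turnaround[i] = clock - arrivals[i]  # submission to completion
    return waiting, turnaround

w, t = fcfs(arrivals=[0, 1, 2], bursts=[5, 3, 2])
# w == {0: 0, 1: 4, 2: 6}, t == {0: 5, 1: 7, 2: 8}
```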

B. Illustration of the Requirement Based Test Effort Estimation (RBTEE) Measure

Method 1: Proposed Approach (RBTEE)
Name of Program/Module: FCFS
Input Complexity = 2×1×1 = 2; Output Complexity = 2×1×1 = 2; Storage Complexity = 1; IOC = 5
Functionality: FCFS; Sub-processes: execution time, computation, display; FR = 1×3 = 3
No. of attributes: functionality, usability; No. of sub-attributes: accuracy, operability; NFR = 2
Requirement Complexity = 5; Product Complexity = 25
Personal Complexity Attributes: PCA = 1.17
Design Constraints Imposed: DCI = 0; Interface Complexity: IFC = 0; User Class Complexity: UCC = 1 (casual end user); System Feature Complexity: SFC = 0
SDLC = 1×1 = 1; IRBC = ((25 × 1.17) + 0 + 0 + 0) × 1 = 29.25
RBFP = IRBC × TEF = 29.25 × 0.95 = 27.79
NRBTC = (RBFP)^1.2 = 54.028; RBTTP = 1×1 = 1
RBTEE = NRBTC × RBTTP = 54.028 × 1 = 54.028 man-hrs

Method 2: Scenario Based
Use Cases: validate inputs; validate burst time input to make process

Unadjusted Actor Weight (UAW):

Actor      No. of Use Cases  Weight  UAW
Manager    2                 1       2
Attendant  2                 1       2

Calculation of Scenarios:

Use Case  Normal Scenario (N)  Exceptional Scenario (E)
U1        Valid input          User inputs special characters
U2        Ascending order      Processes taken in arbitrary order

PT = PN×N + PE×E. Now the Unadjusted Use Case Weight (UUCW):

Use Case  Scenarios  Normal  Exceptional  Weight
U1        2          1       1            1.4
U2        3          1       2            1.8

UUCW = 3.2; UUCP = UAW + UUCW = 4 + 3.2 = 7.2. Now the Technical Environment Factor:

Factor  Description  Assigned  Weight  Extended
T1      Input        1         4       4
T2      Environment  1         0       0
T3      Test Env.    1         3       3
T4      Performance  2         1       2

Total = 9; AUCP = UUCP × (0.65 + 0.01 × 9) = 7.2 × 0.74 = 5.328; Effort = 5.328 × 3 = 15.984 mh

Method 3: Test Execution Effort Estimation
Test Cases: validate input values, burst time. Test-1: No. of executions = 5; Test-2: No. of executions = 14; Total = 5 + 14 = 19. Data inputs = 15, Screen items = 18, Execution Complexity = 23, Effort = 437 mh

Method 4: Experience Based Approach
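The use case point arithmetic in Method 2 follows the usual adjusted-use-case-point shape. In the sketch below, the UUCP = UAW + UUCW and AUCP = UUCP × (0.65 + 0.01 × TF) steps are reconstructed from the totals shown above, not quoted verbatim from the paper:

```python
uaw = 2 + 2                  # Manager + Attendant actor weights
uucw = 1.4 + 1.8             # U1 + U2 use case weights
uucp = uaw + uucw            # unadjusted use case points = 7.2
tf_total = 4 + 0 + 3 + 2     # extended technical factor values (T1..T4)
aucp = uucp * (0.65 + 0.01 * tf_total)
effort_mh = aucp * 3         # conversion factor of 3 man-hours per AUCP
print(round(aucp, 3), round(effort_mh, 3))  # → 5.328 15.984
```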



Calculation of Function Points: EI = 8, EO = 40, EQ = 0, ILF = 20, ELF = 0; CAF = 1.07; Function Points = (8 + 40 + 0 + 20 + 0) × 1.07 = 72.76
No. of Test Cases = (Function Points)^1.2 = 171.50

Method 5: Use Case Point Based Approach
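Treating the listed EI/EO/EQ/ILF/ELF values as already weighted contributions (an assumption made here for illustration; standard FPA would weight raw counts first), Method 4's numbers can be checked as:

```python
fp = (8 + 40 + 0 + 20 + 0) * 1.07   # weighted counts, adjusted by CAF = 1.07
test_cases = fp ** 1.2               # same power law as NRBTC = RBFP^1.2
print(round(fp, 2), round(test_cases, 1))  # → 72.76 171.5
```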

Unadjusted Actor Weight (UAW):

Actor  Use Cases  Factor  UAW
User   1          2       2

Total = 2. Now the Unadjusted Use Case Weight (UUCW):

Use Case       Type    Factor
Input          Simple  5
User Creation  Simple  5
Resources      Simple  5

Total = 5 + 5 + 5 = 15; Unadjusted Use Case Points = UAW + UUCW = 2 + 15 = 17. Technical Factor Description:

Factor  Description  Assigned  Weight  Extended
T1      Input        1         2       2
T2      Environment  1         1       1
T3      Test Env.    1         1       1
T4      Performance  2         1       2

Total = 2 + 1 + 1 + 2 = 6. Adjusted UCP Calculation: AUCP = UUCP × (0.65 + 0.01 × 6) = 17 × 0.71 = 12.07; Final Effort = 12.07 × 20 = 241.4 mh

Method 6: Test Specification Based Approach
Validate input values (burst time): Test-1: No. of executions = 5; Test-2: No. of executions = 14; Total execution points = 5 + 14 = 19. Now the Productivity Factor:

Productivity Factor     Weight
Customer Participation  1
Staff Availability      2
Logical Complexity      2
Requirement Volatility  1
Efficiency Requirement  1

Total = 1 + 2 + 2 + 1 + 1 = 7; Effort = 19 × 7 = 133 man-hours

Method 7: Cognitive Complexity Based Estimation
WICS = 4/(25−5) + 7/(25−10) + 7/(25−16) + 5/(25−20) = 2.44
SBCS = 1 + 3 + 3 = 7; CICM = 7 × 2.44 = 17.11 CU

Method 8: Test Effort Estimation
FP = 72.76; UTP = 72.76 × 4.5 = 327.42; ATP = 327.42 × 1.375 = 450.20; TE = 450.20 × 0.2 = 90.04 man-hrs

Having seen the various proposals for computing test effort estimates, we evaluate them on a variety of programs in order to validate the proposed metric and results.
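Methods 7 and 8 reduce to straightforward arithmetic; in the sketch below, the parenthesization of WICS is the one that reproduces the stated value of 2.44 (the flattened original text left the grouping ambiguous):

```python
# Method 7: cognitive complexity (WICS / SBCS / CICM)
wics = 4/(25 - 5) + 7/(25 - 10) + 7/(25 - 16) + 5/(25 - 20)
sbcs = 1 + 3 + 3
cicm = sbcs * wics

# Method 8: function point based test effort chain
fp = 72.76
utp = fp * 4.5       # unadjusted test points
atp = utp * 1.375    # adjusted test points
te = atp * 0.2       # test effort in man-hours

print(round(wics, 2), round(cicm, 2), round(te, 2))  # → 2.44 17.11 90.04
```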

VI. RESULTS AND VALIDATION

This section categorically compares the proposed test effort estimation measure with various established measures proposed in the past. To analyse the validity of the proposed measure, SRSs for ten different problem statements were developed, and, in order to compare with code based test effort estimation measures, the source code of all ten problems was also written. The comparison covers the categories of use case based, complexity value based, and code and execution point based test effort estimation. The use case based metrics include parameters such as actor weight, use case weight and constant conversion factors, but important contributing factors such as inputs, outputs, interfaces, storage, functional requirements and, most importantly, non-functional requirements are not taken into consideration by these established measures. The other two categories are code based and complexity based estimation. The code based methods use the lines of code, compute the execution points and derive the test cases, applying a constant to compute the test effort. The complexity based methods likewise use the lines of code and control structure, and finally correlate the test effort with the complexity of the code. These two approaches can work only when the code of the software is available. Test effort was estimated using all three approaches in order to analyse the validity of the proposed approach.

The use case based methods can be further classified into two broad categories, i.e. use case point based and scenario based. Comparing the values obtained from the proposed approach with these two methods shows that the proposed measure is comprehensive, well aligned and comparable. The test effort in man-hrs computed by the proposed approach is also more realistic, because it is systematically derived from the requirement based complexity (IRBC), which in turn is derived from the IEEE 830:1998 SRS document; no other established measure uses this document for test effort estimation. The ten problems considered were: bit stuffing, matrix addition, matrix multiplication, first come first serve algorithm implementation, knapsack problem, LZW algorithm, round robin scheduling, cyclic redundancy check (CRC), prime number, and semaphore implementation.

Figure 2 shows the comparison between the proposed measure and the other requirement based approaches, i.e. the use case point and scenario based approaches. The plot shows that the use case point based values are higher because the adjusted use case points are multiplied by the constant 20, whereas the scenario based measure yields lower values because its conversion factor is the constant 3. The scenario based measure specifically estimates test effort from the identification of normal and exceptional scenarios for the proposed software, which the use case point approach does not do.

Figure 2: Plot of RBTEE vs. use case based measures (test effort values in man-hrs for programs 1-10)

Figure 3 shows the values obtained from the test execution point and test specification based approaches, which are purely code dependent. The code and execution point based approaches yield higher test effort values in man-hrs

[Figure 2 plot: x-axis, No. of Programs (1-10); y-axis, Test effort values in man-hrs; series: Use Case (Man-hr), Scenario based Use Case (Man-hr), RBTEE (Man-hr).]


than the proposed measure because these approaches are based on execution points, which depend on the variables used in the program. The higher values for programs 3, 6 and 10 are due to higher execution points and more screen items, which in turn increase the execution complexity and the test effort.
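The dependence just described can be sketched as a toy execution-point model: points accumulate per variable and per screen item, and effort grows with points. The weights and the man-hrs conversion rate below are hypothetical, not the cited methods' actual values.

```python
# Hypothetical weights and rate; illustrates why variable- and screen-heavy
# programs (e.g. problems 3, 6 and 10 in the study) score higher.

def execution_points(num_variables, num_screen_items,
                     var_weight=1.0, screen_weight=2.0):
    # More variables and screen items -> more execution points.
    return var_weight * num_variables + screen_weight * num_screen_items

def code_based_effort(points, hours_per_point=0.5):
    # Effort assumed proportional to execution points.
    return points * hours_per_point

print(code_based_effort(execution_points(40, 10)))  # 30.0 man-hrs
```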

Figure 3. Plot between RBTEE and test execution point based measures.

Figure 4 shows the comparison between IRBC and RBTEE. It can be deduced that the test effort estimation depends on the software complexity. Though the two parameters have different measuring units, the dependency can still be observed: IRBC is a complexity value, whereas test effort is measured in man-hrs. The purpose is to show the relationship between the two parameters, namely that higher complexity requires higher test effort.
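A linear dependency of this kind can be checked with an ordinary least-squares fit. The sketch below does exactly that; the (IRBC, RBTEE) pairs are hypothetical stand-ins, not the paper's measured values.

```python
# Ordinary least-squares fit of y = slope * x + intercept.
# The data points are hypothetical, for illustration only.

def linear_fit(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

irbc = [10, 25, 40, 55, 70]        # hypothetical complexity values
rbtee = [2 * c + 5 for c in irbc]  # effort rising linearly with complexity
slope, intercept = linear_fit(irbc, rbtee)
print(slope, intercept)  # 2.0 5.0
```

A slope significantly greater than zero, with small residuals, is what "higher complexity requires higher test effort" amounts to quantitatively.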

Figure 4. Comparison between IRBC and RBTEE

VII. CONCLUSION

The proposed work computes the requirement based test effort estimation from the IEEE 830:1998 standard requirement engineering document, soon after the requirements are frozen. The proposed measure is reliable because it is derived from the SRS of the software to be developed, which makes the estimation precise. The robustness of the proposed measure is demonstrated by comparing it with different categories of test effort estimation. Although there have been various proposals for test effort estimation, important contributing factors such as inputs, outputs, interfaces, data storage, functional requirement decomposition and, most importantly, non-functional requirements were not taken into consideration in existing measures, even though they play a very significant role in complexity computation and test effort estimation. On the basis of the results and validation, it is observed that the proposed measure follows the trend of all the other established measures in a comprehensive fashion. Other requirement based measures, such as the use case and scenario based test effort estimation measures, depend on the judgment of the analyst, which can introduce unintended errors during analysis and modeling of the requirements. The proposed work has also been successful in establishing a linear relationship between the improved requirement based complexity and the requirement based test effort.


[Figure 3 plot: x-axis, No. of Programs (1-10); y-axis, Test effort values in man-hrs (0-1000); series: Test Specification (Man-hr), Test Execution Effort (Man-hr), Test Effort (Man-hr), RBTEE (Man-hr).]

[Figure 4 plot: x-axis, No. of Programs (1-10); y-axis, Estimated values (0-120); series: IRBC, RBTEE.]
