
A Survey of OO and Non-OO Metrics
Preethika Devi K., Monica S.
Dept. of Computer Science, College of Engineering, Guindy

Abstract: This paper presents the results of our survey on metrics used in object-oriented environments. The survey covers a small set of the most well-known and commonly applied traditional software metrics that can also be applied to object-oriented programming, together with a set of object-oriented metrics (i.e. those designed specifically for object-oriented programming). Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred a number of new and improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics, with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. A software metric is a measure of some property of a piece of software or its specifications. Since quantitative measurements are essential in all sciences, there is a continuous effort by theoreticians to bring similar approaches to software development. The goal is to obtain objective, reproducible and quantifiable measurements, which have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance testing, software debugging, software performance optimization, and optimal personnel task assignment.

Keywords: Object Oriented, Design, Inheritance, Metric, Measure, Coupling, Cohesion, Test metrics, Size estimation, Effort estimation, Test effectiveness evaluation.

I. INTRODUCTION

Object-Oriented Analysis and Design (OOAD) of software provides many benefits, such as reusability, decomposition of a problem into easily understood objects, and support for future modification. But the OOAD software development life cycle is not easier than the typical procedural approach. Therefore, it is necessary to provide dependable guidelines that one may follow to help ensure good OO programming practices and write reliable code. Object-oriented programming metrics are one such aspect: a set of standards against which one can measure the effectiveness of object-oriented analysis techniques in the design of a system. Five characteristics of object-oriented metrics are as follows:

Localization: operations are used in many classes
Encapsulation: metrics apply to classes, not modules
Information hiding: should be measured and improved
Inheritance: adds complexity and should be measured
Object abstraction: metrics represent the level of abstraction

Software metrics are used to evaluate the software development process and the quality of the resulting product. Software metrics aid evaluation of the testing process and the software product by providing objective criteria and measurements for management decision making. Their association with early detection and correction of problems makes them important in software development. Software metrics are all about measurement, which in turn involves numbers: the use of numbers to make things better, to improve the process of developing software, and to improve all aspects of the management of that process.

Importance of Metrics

Metrics are used to improve the quality and productivity of products and services, thus achieving customer satisfaction.
They make it easy for management to digest one number and drill down, if required.
Trends in different metrics act as a monitor when the process is going out of control.
Metrics provide improvement for the current process.

Software metrics hold importance in testing phase, as software testing metrics acts as indicators of software quality and fault proneness. In order to measure the actual values such as software size, defect density, verification effectiveness and productivity, records of these values must be maintained. Ideally, these actual values will be tracked against estimates that are made at the start of a project and updated during project execution.
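As a simple illustration of how such actual values can be recorded and tracked against estimates, the sketch below derives defect density and productivity for a release. It is not taken from the survey; the record fields and the numbers are assumptions made for the example.

```python
# Illustrative sketch (assumed names and values): tracking actuals against estimates
# for quantities named above: software size, defect density and productivity.

from dataclasses import dataclass

@dataclass
class ReleaseRecord:
    kloc: float           # new or changed source lines of code, in thousands
    defects: int          # defects recorded so far
    person_months: float  # effort spent so far

def defect_density(rec: ReleaseRecord) -> float:
    """Defects per KLOC."""
    return rec.defects / rec.kloc

def productivity(rec: ReleaseRecord) -> float:
    """KLOC produced per person-month."""
    return rec.kloc / rec.person_months

# Hypothetical estimate made at project start vs. actuals tracked during execution.
estimate = ReleaseRecord(kloc=12.0, defects=60, person_months=10.0)
actual = ReleaseRecord(kloc=14.5, defects=81, person_months=12.5)

for name, fn in [("defect density", defect_density), ("productivity", productivity)]:
    print(f"{name}: estimated {fn(estimate):.2f}, actual {fn(actual):.2f}")
```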

II. TABLES

The following table gives a brief description of the OO metrics, their definition, formula and author.

TABLE 1.1 LIST OF OO METRICS AND THEIR FORMULA

Name | Description | Formula | Author

Average Service State Dependency (ASSD)
It is used in web service testing where multiple services share the same state, which can be updated and retrieved by these service components. If a component is stateful then Ck = 1, otherwise Ck = 0.
ASSD = (1/n) * sum(Ck, k = 1..n), where n = total number of components in the domain and Ck indicates whether component k has state.
Kai Qian, Jigang Liu, Decoupling Metrics for Service Composition

Average Service Persistent Dependency (ASPD)
ASPD shows the average number of times a service component participates in indirect ties with other service components. The lower the ASPD, the looser the coupling may be.
ASPD = (1/n) * sum(P(i, j), 0 < i <= n, 0 < j <= m), where n = total number of components, m = number of repositories, and P(i, j) = 1 if service component i participates in persistent data j, otherwise 0.
Kai Qian, Jigang Liu, Decoupling Metrics for Service Composition

Required Service Dependency
One composite component may consist of basic components and composite components recursively, until it reaches a basic component. This is the way a composite service component is composed, either by aggregation or by containment composition.
---
Kai Qian, Jigang Liu, Decoupling Metrics for Service Composition

Average Required Service Dependency (ARSD)
It is the average of the required service dependency. The lower the ARSD, the looser the coupling will be.
ARSD = (1/n) * sum(Ri, i = 1..n), where n = total number of components and Ri = number of required services that service component i needs in order to provide its services.
Kai Qian, Jigang Liu, Decoupling Metrics for Service Composition

Average Service Invocation Coupling (ASIC)
It is a web service invocation coupling metric. The lower the ASIC, the looser the coupling between service components. This index also measures the portability and performance quality attributes.
ASIC = (1/n) * sum(ICi, i = 1..n), where ICi = Wnb * Nnb + Wb * Nb + Wsyn * Nsyn; Nnb = number of non-blocking asynchronous operations, Nb = number of blocking asynchronous operations, Nsyn = number of synchronous operations (the W terms are the corresponding weights).
Kai Qian, Jigang Liu, Decoupling Metrics for Service Composition

Reuse Ratio (U)
It is the ratio of the number of superclasses to the total number of classes. It should be high.
U = number of superclasses / total number of classes
B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall, 1996

Specialisation Ratio (S)
It is the ratio of the number of subclasses to the total number of superclasses.
S = number of subclasses / number of superclasses
B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall, 1996

Polymorphism Factor (PF)
The PF metric is proposed as a measure of polymorphism. It measures the degree of method overriding in the class inheritance tree.
Defined over Mn(Ci) = number of new methods in class Ci and DC(Ci) = descendant count of class Ci.
R. Harrison, S.J. Counsell, and R.V. Nithi, An Evaluation of the MOOD Set of Object-Oriented Software Metrics, IEEE Trans. Software Engineering

Number of Methods Overridden by a Subclass (NMO)
When a method in a subclass has the same name and type signature as in its superclass, the method in the superclass is said to be overridden by the method in the subclass.
B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall, 1996

Number of Attributes per Class (NOA)
It is the total number of attributes present in a particular class.
NOA = total number of attributes present in a class
B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall, 1996

Number of Methods per Class (NOM)
It is the total number of methods present in a particular class.
NOM = total number of methods present in a class
B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall, 1996

Weighted Methods per Class (WMC)
WMC is the sum of the complexities of all methods in a class.
WMC = sum(Ci, i = 1..n), where n = number of methods in the class and Ci = complexity of method i.
S.R. Chidamber and C.F. Kemerer, A Metrics Suite for Object-Oriented Design, IEEE Trans. Software Engineering, 1994

Response For a Class (RFC)
The response set of a class is defined as the set of methods that can potentially be executed in response to a message received by an object of that class.
RFC = Mi U (union over all j of {Rij}), where Mi = set of all methods in the class and {Rij} = set of methods called by method Mi.
S.R. Chidamber and C.F. Kemerer, A Metrics Suite for Object-Oriented Design, IEEE Trans. Software Engineering, 1994

Coupling Between Objects (CBO)
CBO for a class is the count of the number of other classes to which it is coupled. Two classes are coupled when methods declared in one class use methods or instance variables defined by the other class.
----
S.R. Chidamber and C.F. Kemerer, A Metrics Suite for Object-Oriented Design, IEEE Trans. Software Engineering, 1994

Data Abstraction Coupling (DAC)
Data abstraction is a technique of creating new data types suited to the application to be programmed. It provides the ability to create user-defined data types called Abstract Data Types (ADTs).
DAC = number of Abstract Data Types (ADTs) defined in the class
B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall, 1996

Message Passing Coupling (MPC)
It is the number of send statements defined in a class.
MPC = number of send statements in a class
B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall, 1996

Coupling Factor (CF)
It is the ratio of the actual number of couplings in the system that are not imputable to inheritance to the maximum possible number of such couplings.
Defined over TC = total number of classes and is_client(Ci, Cj) = 1 if a relationship exists between Ci and Cj, 0 otherwise.
R. Harrison, S.J. Counsell, and R.V. Nithi, An Evaluation of the MOOD Set of Object-Oriented Software Metrics, IEEE Trans. Software Engineering

Lack of Cohesion in Methods (LCOM)
LCOM measures the dissimilarity of methods in a class by looking at the instance variables or attributes used by the methods. A high value of LCOM implies that the class is less cohesive, so a low value of LCOM is desirable.
LCOM = |P| - |Q| if |P| > |Q|, 0 otherwise; where P = {(Ii, Ij) : Ii ∩ Ij = ∅}, Q = {(Ii, Ij) : Ii ∩ Ij ≠ ∅}, and Ii = set of instance variables used by method Mi.
S.R. Chidamber and C.F. Kemerer, A Metrics Suite for Object-Oriented Design, IEEE Trans. Software Engineering, 1994

Tight Class Cohesion (TCC)
TCC is defined as the percentage of pairs of public methods of the class with common attribute usage.
TCC = [method pairs with common attribute usage / number of pairs of methods] * 100
L. Briand, W. Daly and J. Wust, A Unified Framework for Coupling Measurement in Object-Oriented Systems, IEEE Transactions on Software Engineering

Loose Class Cohesion (LCC)
LCC is defined as the percentage of pairs of public methods of the class that are directly or indirectly connected. It is the same as TCC, but it also considers indirect connections.
L. Briand, W. Daly and J. Wust, A Unified Framework for Coupling Measurement in Object-Oriented Systems, IEEE Transactions on Software Engineering

Information-Flow-Based Cohesion (ICH)
ICH for a class is defined as the number of invocations of other methods of the same class, weighted by the number of parameters of the invoked method.
---
Y. Lee, B. Liang, S. Wu and F. Wang, Measuring the Coupling and Cohesion of an Object-Oriented Program Based on Information Flow, 1995

Information-Flow-Based Inheritance Coupling (IFCIC)
Same as ICH, but it only counts a method's invocations of methods of ancestor classes.
---
Y. Lee, B. Liang, S. Wu and F. Wang, Measuring the Coupling and Cohesion of an Object-Oriented Program Based on Information Flow, 1995

RFC-1
Same as RFC, except that indirectly invoked methods are not included.
S.R. Chidamber and C.F. Kemerer, A Metrics Suite for Object-Oriented Design, IEEE Trans. Software Engineering, 1994

Depth of Inheritance Tree (DIT)
The depth of a class within the inheritance hierarchy is the maximum number of steps from the class node to the root of the tree, measured by the number of ancestor classes. In cases involving multiple inheritance, the DIT is the maximum length from the node to the root of the tree.
DIT = number of ancestor classes
S.R. Chidamber and C.F. Kemerer, A Metrics Suite for Object-Oriented Design, IEEE Trans. Software Engineering, 1994

Number of Children (NOC)
NOC is the number of immediate subclasses of a class in the hierarchy.
NOC = number of immediate subclasses
S.R. Chidamber and C.F. Kemerer, A Metrics Suite for Object-Oriented Design, IEEE Trans. Software Engineering, 1994

Method Inheritance Factor (MIF)
It is a system-level metric. MIF is defined as the ratio of the sum of the inherited methods in all classes of the system to the total number of available methods in all classes.
MIF = sum(Mi(Ci)) / sum(Ma(Ci)), where Ma(Ci) = Mi(Ci) + Md(Ci), TC = total number of classes, Md(Ci) = number of methods declared in class Ci, and Mi(Ci) = number of methods inherited in class Ci.
R. Harrison, S.J. Counsell, and R.V. Nithi, An Evaluation of the MOOD Set of Object-Oriented Software Metrics, IEEE Trans. Software Engineering

Attribute Inheritance Factor (AIF)
AIF is defined as the ratio of the sum of inherited attributes in all classes of the system to the total number of available attributes in all classes. It is defined in a manner analogous to MIF and provides an indication of the impact of inheritance in object-oriented software.
AIF = sum(Ai(Ci)) / sum(Aa(Ci)), where Aa(Ci) = Ai(Ci) + Ad(Ci), TC = total number of classes, Ad(Ci) = number of attributes declared in class Ci, and Ai(Ci) = number of attributes inherited in class Ci.
R. Harrison, S.J. Counsell, and R.V. Nithi, An Evaluation of the MOOD Set of Object-Oriented Software Metrics, IEEE Trans. Software Engineering

Method Hiding Factor (MHF)
The MHF metric is the sum of the invisibilities of all methods in all classes. The invisibility of a method is the percentage of the total classes from which the method is hidden. If the value of MHF is high (100%), all methods are private, which indicates very little exposed functionality; it is then not possible to reuse the methods.
Defined over Md(Ci) = number of methods declared in the class, Mv(Ci) = number of methods visible in the class, Mh(Ci) = number of methods hidden in the class.
R. Harrison, S.J. Counsell, and R.V. Nithi, An Evaluation of the MOOD Set of Object-Oriented Software Metrics, IEEE Trans. Software Engineering

Attribute Hiding Factor (AHF)
The AHF metric is the sum of the invisibilities of all attributes in all classes. The invisibility of an attribute is the percentage of the total classes from which the attribute is hidden. If the value of AHF is high (100%), all attributes are private; a low value (0%) indicates all attributes are public.
Defined over Ad(Ci) = number of attributes declared in the class, Av(Ci) = number of attributes visible in the class, Ah(Ci) = number of attributes hidden in the class.
R. Harrison, S.J. Counsell, and R.V. Nithi, An Evaluation of the MOOD Set of Object-Oriented Software Metrics, IEEE Trans. Software Engineering

DAC’ This counts the unique classes used

S.R.Chidamber and C.F.Kamerer, A metrics Suite for Object-OrientedDesign. IEEE Trans. Software Engineering,1994

McCabe Cyclomatic Complexity (MCC)
It directly measures the number of linearly independent paths through a program's source code. The concept, although not the method, is somewhat similar to general text complexity as measured by the Flesch-Kincaid readability test.
MCC = L - N + 2P, where L = number of edges/links, N = number of nodes, P = number of connected components.
T.J. McCabe, "A Complexity Measure", IEEE Transactions on Software Engineering, December 1976, pp. 308-320

Source Lines of Code (SLOC)
SLOC is used to estimate the total effort that will be needed to develop a program, as well as to calculate approximate productivity. The SLOC metric measures the number of physical lines of active code, that is, excluding blank and commented lines.
Lorenz, Mark & Kidd, Jeff: Object-Oriented Software Metrics, Prentice Hall, 1994

Comment Percent (CP)
The CP metric is defined as the number of commented lines of code divided by the number of non-blank lines of code, i.e. the total number of comment lines divided by the total lines of code less the number of blank lines.
CP = number of commented lines / number of non-blank lines of code
Laing V., Coleman C.: "Principal Components of Orthogonal OO Metrics", Software Assurance Technology Center (SATC), 2001

Specialisation Index per class(SIX)

The specialisation index measures the extent to which subclasses override(replace) behaviour of their superclasses. SIX provides a measure of the quality of sub-classing.

SIX=[No of overridden methods * Class hierarchy nesting level]/Total no of methods

Lorenz, Mark & Kidd Jeff: “Object-Oriented Software Metrics”, PrenticeHall, 1994.

Number of Messages Sent (NOM)
NOM measures the number of messages sent in a method, segregated by type of message. The types include: unary (messages with no arguments), binary (messages with one argument, separated by a special selector name, e.g. concatenation and math functions), and keyword (messages with one or more arguments).
----
Lorenz, Mark & Kidd, Jeff: Object-Oriented Software Metrics, Prentice Hall, 1994

Conceptual Similarity between Methods (CSM)
The conceptual similarity between methods mk ∈ M(C) and mj ∈ M(C), CSM(mk, mj), is computed as the cosine between the vectors vmk and vmj corresponding to mk and mj in the semantic space constructed by LSI.
Denys Poshyvanyk, Andrian Marcus, The Conceptual Coupling Metrics for Object-Oriented Systems

Conceptual Similarity between a Method and a Class (CSMC)
Let ck ∈ C and cj ∈ C be two distinct (ck ≠ cj) classes in the system. Each class has a set of methods M(ck) = {mk1, ..., mkr}, where r = |M(ck)|, and M(cj) = {mj1, ..., mjt}, where t = |M(cj)|. Between every pair of methods (mk, mj) there is a similarity measure CSM(mk, mj).
Denys Poshyvanyk, Andrian Marcus, The Conceptual Coupling Metrics for Object-Oriented Systems

Conceptual Similarity between Two Classes (CSBC)
The average of the similarity measures between all unordered pairs of methods from class ck and class cj. The definition ensures that the conceptual similarity between two classes is symmetrical, i.e. CSBC(ck, cj) = CSBC(cj, ck).
Denys Poshyvanyk, Andrian Marcus, The Conceptual Coupling Metrics for Object-Oriented Systems

Conceptual Coupling of a Class (CoCC)
It is measured by the degree to which the methods of a class are conceptually related to the methods of other classes (di ∈ C, c ≠ di), where n = number of classes and c = the class under consideration.
Denys Poshyvanyk, Andrian Marcus, The Conceptual Coupling Metrics for Object-Oriented Systems

Number of Associations (NAS)
It measures the number of associations between a class and its peers.
----
R. Harrison, S. Counsell, R. Nithi, Coupling Metrics for OO Design, 1998

Average Inheritance Depth (AID)
It is the average of the inheritance depth over the classes; classes without parents have depth 0.
AID = sum(DIT) / n, where DIT = inheritance depth of a class and n = number of classes.
Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall

Class to Leaf Depth (CLD)
The maximum number of levels in the hierarchy that are below the class.
CLD = maximum number of levels in the hierarchy below the class
J. Bansiya and C.G. Davis, "A Hierarchical Model for Object-Oriented Design Quality Assessment", IEEE Transactions on Software Engineering, Vol. 28, No. 1, 2002

Number of Parents (NOP)
The number of classes that a given class directly inherits from.
NOP = number of parents of a particular class
J. Bansiya and C.G. Davis, "A Hierarchical Model for Object-Oriented Design Quality Assessment", IEEE Transactions on Software Engineering, Vol. 28, No. 1, 2002

Number of Descendants (NOD)
The number of classes directly or indirectly inherited from this class.
Li, W. and Henry, S., "Object-oriented metrics that predict maintainability", Journal of Systems and Software, vol. 23, no. 2, 1993

Number of Ancestors (NOA)
The number of classes from which the class inherits, directly or indirectly.
Li, W. and Henry, S., "Object-oriented metrics that predict maintainability", Journal of Systems and Software, vol. 23, no. 2, 1993

Number of Methods Inherited (NMinh)
The number of methods in a class that the class inherits from its ancestors and does not override.
Li, W. and Henry, S., "Object-oriented metrics that predict maintainability", Journal of Systems and Software, vol. 23, no. 2, 1993

Number of Methods Added (NMA)
The number of methods that are neither inherited nor overridden.
Li, W. and Henry, S., "Object-oriented metrics that predict maintainability", Journal of Systems and Software, vol. 23, no. 2, 1993

Static Polymorphism in Ancestors (SPA)
The number of function members that implement the same operator in ancestors and in the current class (at compile time).
Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software", IEEE Transactions on Software Engineering

Dynamic Polymorphism in Ancestors (DPA)
The number of function members that implement the same operator in ancestors and in the current class (at run time).
Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software", IEEE Transactions on Software Engineering

Static Polymorphism in Descendants (SPD)
The number of function members that implement the same operator in descendants and in the current class (at compile time).
Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software", IEEE Transactions on Software Engineering

Dynamic Polymorphism in Descendants (DPD)
The number of function members that implement the same operator in descendants and in the current class (at run time).
Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software", IEEE Transactions on Software Engineering

Static Polymorphism (SP)
The sum of static polymorphism in ancestors and static polymorphism in descendants.
SP = SPA + SPD, where SPA = Static Polymorphism in Ancestors and SPD = Static Polymorphism in Descendants.
Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software", IEEE Transactions on Software Engineering

Dynamic Polymorphism (DP)
The sum of dynamic polymorphism in ancestors and dynamic polymorphism in descendants.
DP = DPA + DPD, where DPA = Dynamic Polymorphism in Ancestors and DPD = Dynamic Polymorphism in Descendants.
Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software", IEEE Transactions on Software Engineering

Overloading in standalone classes(OVO)

Count of number of functions with the same name in a class

Arisholm, E., Briand, L. C., and Foyen, A., "Dynamiccoupling measurement for object-oriented software", IEEETransactions on Software Engineering

Export Object Coupling (EOC)
EOC with respect to a scenario x between objects oi and oj is the number of messages sent from oi to oj as a percentage of the total number of messages exchanged during the scenario.
EOC = [ {Mx(oi, oj) | oi, oj ∈ O ∧ oi ≠ oj} / MTx ] * 100, where Mx(oi, oj) = the number of messages sent from oi to oj and MTx = total number of messages exchanged during the execution of scenario x.
S. Yacoub, H. Ammar, and T. Robinson, "Dynamic Metrics for Object-Oriented Designs", Proc. IEEE 6th International Symposium on Software Metrics (Metrics'99), pp. 50-61

Import Object Coupling (IOC)
IOC with respect to a scenario x between objects oi and oj is the number of messages received by oi from oj as a percentage of the total number of messages exchanged during the scenario.
IOC = [ {Mx(oi, oj) | oi, oj ∈ O ∧ oi ≠ oj} / MTx ] * 100, where Mx(oi, oj) = the number of messages received by oi from oj and MTx = total number of messages exchanged during the execution of scenario x.
S. Yacoub, H. Ammar, and T. Robinson, "Dynamic Metrics for Object-Oriented Designs", Proc. IEEE 6th International Symposium on Software Metrics (Metrics'99), pp. 50-61

Dynamic CBO of a Class
This metric is a direct translation of the C&K CBO metric, except that it is defined at run time.
Dynamic CBO = number of couplings of a class with other classes at run time.
Misook Choi, JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System", 5th ACIS International Conference on Software Engineering Research, Management & Applications

Degree of Dynamic Coupling between Two Classes at Run Time (DDCR)
The number of times class A accesses methods or instance variables of class B, as a percentage of the total number of methods or instance variables accessed by class A.
DDCR = [number of times A accesses methods of class B / total number of methods accessed by class A] * 100
Guru Nandha Rao, Measurement of Dynamic Coupling in an Object-Oriented System, American Journal of Scientific Research

Degree of Dynamic Coupling within a Given Set of Classes
This is an extension of the above metric that indicates the level of dynamic coupling within a set of classes.
[sum of the number of accesses to methods or instance variables outside each class / sum of the total number of accesses from these classes] * 100
Guru Nandha Rao, Measurement of Dynamic Coupling in an Object-Oriented System, American Journal of Scientific Research

Runtime Import Coupling between Objects (RI)
RI = number of classes from which a given class accesses methods or instance variables at run time.
Misook Choi, JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System", 5th ACIS International Conference on Software Engineering Research, Management & Applications

Runtime Import Degree of Coupling (RDI)
RDI = number of accesses made by a class / total number of accesses.
Misook Choi, JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System", 5th ACIS International Conference on Software Engineering Research, Management & Applications

Runtime Export Coupling between Objects (RE)
RE = number of classes that access methods or instance variables of a given class at run time.
Misook Choi, JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System", 5th ACIS International Conference on Software Engineering Research, Management & Applications

Runtime Export Degree of Coupling (RDE)
RDE = number of accesses made to a class / total number of accesses.
Misook Choi, JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System", 5th ACIS International Conference on Software Engineering Research, Management & Applications

Coupling Between Services Metric (CBS)
CBS is built directly from the CBO metric of the C&K suite. For a service A, the CBS metric is calculated from the number of relationships between A and the other services in the system.
Defined over n = number of services in the system; AiBj = 0 if Ai does not connect to Bj and AiBj = 1 if Ai connects to Bj.
Pham Thi Quynh, Huynh Quyet Thang, "Dynamic coupling metric for service oriented software", International Journal on Electronics Engineering, 2009

Instability Metric for Services (IMS)
Measures the instability of a service.
Defined over fan-in = number of functions that call function A and fan-out = number of functions that are called by function A.
Joost Visser, Departamento de Informática, Universidade do Minho, Braga, Portugal, "Structure metrics for XML schema"

Degree of Coupling between Two Services Metric (DC2S)
The DC2S metric identifies the relationship between two services to detect the dependency between them; it identifies the level of coupling between two services at run time.
Defined over n = number of services in the system and N(A, Bi) = number of connections from service A to Bi.
Pham Thi Quynh, Huynh Quyet Thang, "Dynamic coupling metric for service oriented software", International Journal on Electronics Engineering, 2009

Degree of Coupling within a Given Set of Services Metric (DCSS)
A dynamic metric in which the weight of an edge is the number of requests from the requesting service to the provider service. The lower the value of this metric, the looser the coupling in the system, and vice versa. It helps to distinguish between two systems that have the same nodes but differ in the connections between nodes.
Max = K * V * (V - 1); Min = V * (V - 1); d(u, v) = length of the shortest path from u to v; K = maximum length of the shortest path between any two nodes; V = vertex set of the graph G(U, V).
Pham Thi Quynh, Huynh Quyet Thang, "Dynamic coupling metric for service oriented software", International Journal on Electronics Engineering, 2009

Operation Hiding Effectiveness Factor (OHEF)
It belongs to the MOOD2 metrics suite. OHEF measures the goodness of scope settings on class operations (i.e. methods). When OHEF = 1, the scope settings are perfect.
OHEF = classes that do access operations / classes that can access operations
Fernando Brito e Abreu: Using OCL to formalize object-oriented metrics definitions. Technical Report ES007

Attribute Hiding Effectiveness Factor (AHEF)
AHEF is related to AHF. AHF measures the general level of attribute hiding, whereas AHEF measures how well the hiding succeeds.
AHEF = classes that do access attributes / classes that can access attributes
Fernando Brito e Abreu: Using OCL to formalize object-oriented metrics definitions. Technical Report ES007

Internal Inheritance Factor (IIF)
IIF measures the amount of internal inheritance in the system. Internal inheritance happens when a class inherits from another class in the same system. If there is no inheritance, IIF = 0.
IIF = classes that inherit a VB class / all classes that inherit something
Fernando Brito e Abreu: Using OCL to formalize object-oriented metrics definitions. Technical Report ES007

Parametric Polymorphism Factor (PPF)
This metric is simply the percentage of the classes that are parameterized. A parameterized class is also called a generic class.
PPF = parameterized classes / all classes
Fernando Brito e Abreu: Using OCL to formalize object-oriented metrics definitions. Technical Report ES007

Number of interfaces implemented by class (IMPL)

Counts the number of Implements statements. A class may implement either another class (VB Classic) or an interface definition.

R. Marinescu. A Multi-Layered System of Metrics for the Measurement of Reuse by Inheritance. Paper submitted to TOOLS USA'99, March 1999.

Non-private methods defined by class(WMCnp)

This is the same as WMC(Weighted methods of class from C&K metrics) but Private methods are ignored

WMCnp=WMC- private methods

WMC=weighted methods per class

Yu, Z. and Rajlich, V., "Hidden Dependencies inProgram Comprehension and Change Propagation",

Methods defined and inherited by class (WMCi)

This is the same as WMC but all inherited methods are also counted

WMCi=WMC+inherited methods

WMC=weighted methods per class

Yu, Z. and Rajlich, V., "Hidden Dependencies inProgram Comprehension and Change Propagation",

Variables defined by class(VARS)

Number of variables and arrays defined at the class-level. Inherited and procedure-level variables are not counted.

VARS=no.of variables and arrays defined in the class Yu, Z. and Rajlich, V., "Hidden Dependencies inProgram Comprehension and Change Propagation",

Non-private variables defined by class(VARSnp)

The same as VARS, but Private variables are excluded.

VARSnp=VARS-private variables VARS=Variables defined by class

Yu, Z. and Rajlich, V., "Hidden Dependencies inProgram Comprehension and Change Propagation",

Variables Defined and Inherited by Class (VARSi)
The same as VARS, but inherited variables are included.
VARSi = VARS + inherited variables, where VARS = variables defined by the class.
Yu, Z. and Rajlich, V., "Hidden Dependencies in Program Comprehension and Change Propagation"

Events defined by class(EVENT)

Number of Event statements (event definitions) in a class. Inherited events and event handlers are not counted.

EVENT=no of event statements Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev,A., "IRiSS - A Source Code Exploration Tool", in Industrialand Tool Proceedings of 21st IEEE International Conferenceon Software Maintenance,2005

Constructors defined by class(CTORS)

Number of constructors (Sub New) in a class. Class_Initialize in VB Classic is an event handler, and not counted in CTORS.

Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev,A., "IRiSS - A Source Code Exploration Tool", in Industrialand Tool Proceedings of 21st IEEE International Conferenceon Software Maintenance,2005

Class size(CSZ) Number of methods + variables defined by class. Measures the size of the class in terms of operations and data.

CSZ = WMC + VARS.

WMC=weighted methods per classVARS=Variables defined by class

Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev,A., "IRiSS - A Source Code Exploration Tool", in Industrialand Tool Proceedings of 21st IEEE International Conferenceon Software Maintenance,2005

Class interface size(CIS)

Number of non-private methods + variables defined by class. Measures the size of the interface from other parts of the system to the class.

CIS = WMCnp + VARSnp.

WMCnp= Non-private methods defined by classVARSnp=Non-private variables defined by class

Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev,A., "IRiSS - A Source Code Exploration Tool", in Industrialand Tool Proceedings of 21st IEEE International Conferenceon Software Maintenance,2005

Number of classes(CLS)

If zero, no OO metrics are meaningful.

Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev,A., "IRiSS - A Source Code Exploration Tool", in Industrialand Tool Proceedings of 21st IEEE International Conferenceon Software Maintenance,2005

Number of abstract classes(CLSa)

Number of abstract classes defined in project.

Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev,A., "IRiSS - A Source Code Exploration Tool", in Industrialand Tool Proceedings of 21st IEEE International Conferenceon Software Maintenance,2005

Number of concrete classes(CLSc)

Number of concrete classes defined in project. A concrete class is one that is not abstract.

CLSc = CLS – CLSa

CLS= Number of classesCLSa= Number of abstract classes

Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev,A., "IRiSS - A Source Code Exploration Tool", in Industrialand Tool Proceedings of 21st IEEE International Conferenceon Software Maintenance,2005

Number of root classes(ROOTS)

Number of distinct class hierarchies.

Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev,A., "IRiSS - A Source Code Exploration Tool", in Industrialand Tool Proceedings of 21st IEEE International Conferenceon Software Maintenance,2005

Number of leaf classes(LEAFS)

A leaf class is one that other classes don't inherit from.

Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev,A., "IRiSS - A Source Code Exploration Tool", in Industrialand Tool Proceedings of 21st IEEE International Conferenceon Software Maintenance,2005

Number of interfaces(INTERFS)

Number of .NET Interfaces. Abstract classes are not counted as interfaces even though they can be thought of as interfaces.

Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev,A., "IRiSS - A Source Code Exploration Tool", in Industrialand Tool Proceedings of 21st IEEE International Conferenceon Software Maintenance,2005

Maximum Depth of Inheritance Tree (maxDIT)
It is the maximum value of the Depth of Inheritance Tree (DIT) metric from the C&K suite. maxDIT should not exceed 6.
Succi, G., Pedrycz, W., Djokic, S., Zuliani, P., and Russo, B., "An Empirical Exploration of the Distributions of the Chidamber and Kemerer Object-Oriented Metrics Suite", Empirical Software Eng., 10 (1), Jan. 2005

Afferent coupling(Ca)

the number of classes outside this module that depend on classes inside this module

Ca = no.of classes outside the module Mi that depends on classes inside module Mi

Robert Martin, OO Design Quality Metrics, An Analysis of Dependencies 

Efferent coupling(Ce) the number of classes inside this module that depend on classes outside this module

Ce = no.of classes inside the module Mi that depends on classes outside module Mi

Robert Martin, OO Design Quality Metrics, An Analysis of Dependencies 

Instability(I) It is the ratio of efferent coupling to the total no of efferent and afferent coupling

I = Ce / (Ca + Ce)Ca = no.of classes outside the module Mi that depends on classes inside module Mi

Ce = no.of classes inside the module Mi that depends on classes outside module Mi

Robert Martin, OO Design Quality Metrics, An Analysis of Dependencies 

Module Coupling (MC)
It is the inverse of the sum of the number of input parameters, output parameters, global variables, modules called and modules calling. A value of 0.5 indicates low coupling; 0.001 indicates high coupling.
MC = 1 / (Pi + Po + G + Mcalled + Mcalling), where Pi = number of input parameters, Po = number of output parameters, G = number of global variables, Mcalled = number of modules called from class Ci, Mcalling = number of modules calling class Ci.
Dhama, H., "Quantitative Models of Cohesion and Coupling in Software", in Selected Papers of the Sixth Annual Oregon Workshop on Software Metrics (Silver Falls, Oregon, United States), W. Harrison and R.L. Glass, Eds., Elsevier Science, New York, NY, 1995, pp. 65-74

Person days per class(PDC)

It is used to estimate the effort required to develop a system.

Lorenz, Mark & Kidd Jeff: “Object-Oriented Software Metrics”, PrenticeHall, 1994.

Classes per Developer (CPD)
It is an estimate of how much code a single developer can reasonably expect to own.
CPD = number of key classes / number of people
Lorenz, Mark & Kidd, Jeff: Object-Oriented Software Metrics, Prentice Hall, 1994

Breadth of Inheritance Tree (BIT)
It is equal to the number of leaves in the tree. A higher BIT means a higher number of methods/attributes reused in the derived classes.
BIT = number of leaves in the tree
Dr. Kadhim M. Breesam, "Metrics for Object-Oriented Design Focusing on Class Inheritance Metrics", IEEE 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX'07), 2007

Method Reuse per Inheritance Relation (MRPIR)
MRPIR computes the total number of methods reused per inheritance relation in the inheritance hierarchy. It applies to the whole inheritance hierarchy in the system.
Defined over MIk = number of methods inherited through the kth inheritance relationship and r = total number of inheritance relationships.
Dr. Kadhim M. Breesam, "Metrics for Object-Oriented Design Focusing on Class Inheritance Metrics", IEEE 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX'07), 2007

Attribute Reuse per Inheritance Relation (ARPIR)
It computes the total number of attributes reused per inheritance relation in the inheritance hierarchy.
Defined over AIk = number of attributes inherited through the kth inheritance relationship and r = total number of inheritance relationships.
Dr. Kadhim M. Breesam, "Metrics for Object-Oriented Design Focusing on Class Inheritance Metrics", IEEE 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX'07), 2007

Generality of Class (GC)
The generality of a class is the measure of its relative abstraction level. The higher the generality of a class, the more likely it is to be reused.
GC = a / al, where a = abstraction level of the class and al = total number of possible abstraction levels.
Dr. Kadhim M. Breesam, "Metrics for Object-Oriented Design Focusing on Class Inheritance Metrics", IEEE 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX'07), 2007

Reuse Probability (RP)
It is the probability of reusing classes in the inheritance hierarchy.
Defined over Ni = total number of classes that can be inherited, Nlg = total number of classes that can be inherited but have the lowest possible generic level, and N = total number of classes in the inheritance hierarchy.
Dr. Kadhim M. Breesam, "Metrics for Object-Oriented Design Focusing on Class Inheritance Metrics", IEEE 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX'07), 2007

Abstractness (A)
This is the ratio of abstract classes in the category to the total number of classes in the category. It ranges from 0 to 1: 0 means completely concrete and 1 means completely abstract.
A = abstract classes in the category / total number of classes in the category
Mariano Ceccato and Paolo Tonella, Measuring the Effects of Software Aspectization, International Conference on Object Technology, 1998

Number of Catch Blocks per Class (NCBC)
The metric counts the percentage of catch blocks in each method of the class. The NCBC denominator represents the maximum number of possible catch blocks for the class; this would be the case where all possible exceptions have a corresponding catch block to handle them.
Defined over n = number of methods in a class, m = number of catch blocks in a method, Cij = jth catch block in the ith method, l = maximum number of possible catch blocks in a method.
K.K. Aggarwal, Yogesh Singh, Arvinder Kaur and Ruchika Malhotra, Software Design Metrics for Object-Oriented Software, Journal of Object Technology, 2008
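To make a few of the definitions in Table 1.1 concrete, the following sketch computes WMC, DIT, NOC and the C&K LCOM for a small hand-built hierarchy. It is not from the surveyed papers; the class-model representation and the unit complexity weight per method are assumptions made for this example.

```python
# Illustrative sketch (assumed data structures): a few C&K metrics on a toy class model.
from itertools import combinations

class ClassInfo:
    """Toy class model: name, single parent, and methods mapped to the instance variables they use."""
    def __init__(self, name, parent=None, methods=None):
        self.name = name
        self.parent = parent
        self.methods = methods or {}

def wmc(cls, complexity=lambda m: 1):
    """WMC: sum of method complexities (each method weighted 1 here)."""
    return sum(complexity(m) for m in cls.methods)

def dit(cls):
    """DIT: number of ancestor classes up to the root."""
    depth, cur = 0, cls.parent
    while cur is not None:
        depth, cur = depth + 1, cur.parent
    return depth

def noc(cls, all_classes):
    """NOC: number of immediate subclasses."""
    return sum(1 for c in all_classes if c.parent is cls)

def lcom(cls):
    """LCOM (C&K): |P| - |Q| if positive, else 0, over pairs of methods' variable sets."""
    p = q = 0
    for vars_a, vars_b in combinations(cls.methods.values(), 2):
        if vars_a & vars_b:
            q += 1   # pair shares at least one instance variable
        else:
            p += 1   # pair uses disjoint instance variables
    return max(p - q, 0)

# Hand-built hierarchy: Shape <- Circle
shape = ClassInfo("Shape", methods={"area": {"dims"}, "name": {"label"}})
circle = ClassInfo("Circle", parent=shape,
                   methods={"area": {"radius"}, "perimeter": {"radius"}, "describe": {"label"}})
classes = [shape, circle]

for c in classes:
    print(c.name, "WMC =", wmc(c), "DIT =", dit(c), "NOC =", noc(c, classes), "LCOM =", lcom(c))
```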

The following table gives a brief description of the non-OO metrics, their definition, formula and author.

TABLE 1.2 LIST OF NON-OO METRICS AND THEIR FORMULA

Quality of Code (QC)
It captures the relation between the number of weighted defects and the size of the product release. The purpose is to deliver a high-quality product.
(WTP + WF) / KCSI, where WTP = number of weighted defects found in the product under test (before official release), WF = number of weighted defects found in the product after release, KCSI = number of new or changed source lines of code in thousands.
Dr. Arvinder Kaur, Mrs. Bharti Suri, Ms. Abhilasha Sharma, "Software Testing Product Metrics - A Survey"

Quality of the Product (QP)
It shows the relation between the number of weighted defects shipped to customers and the size of the product release. The purpose is to deliver a high-quality product.
WF / KCSI, where WF = number of weighted defects found in the product after release and KCSI = number of new or changed source lines of code in thousands.
K.K. Aggarwal & Yogesh Singh, "Software Engineering Programs Documentation Operating Procedures (Second Edition)", New Age International Publishers, 2005

Test Improvement (TI)
It shows the relation between the number of weighted defects detected by the test team and the size of the product release. The purpose is to deliver a high-quality product.
WTTP / KCSI, where WTTP = number of weighted defects found by the test team in the test cycle of the product and KCSI = number of new or changed source lines of code in thousands.
Yanping Chen, Robert L. Probert, Kyle Robenson, "Effective Test Metrics for Test Strategy Evolution"

Test Effectiveness (TE)
It shows the relation between the number of weighted defects detected during testing and the total number of weighted defects in the product. The purpose is to deliver a high-quality product.
WT / (WTP + WF) * 100%, where WTP = number of weighted defects found in the product under test (before official release), WF = number of weighted defects found in the product after release, WT = number of weighted defects found by the test team during the product cycle.
P. Dhavachelvan, G.V. Uma, V.S.K. Venkatachalapathy, "A new approach in development of distributed framework for automated software testing using agents"

Test Time (TT)
It shows the relation between the time spent on testing and the size of the product release. The purpose is to decrease time-to-market.
TT / KCSI, where TT = number of business days used for product testing and KCSI = number of new or changed source lines of code in thousands.
N.E. Fenton and S.L. Pfleeger, "Software Metrics: A Rigorous and Practical Approach", Second Revised Edition, Boston

Test Time over Development Time (TD)
It shows the relation between the time spent on testing and the time spent on developing. The purpose is to decrease time-to-market.
TT / TD * 100%, where TT = number of business days used for product testing and TD = number of business days used for product development.
L. Finkelstein, "Theory and Philosophy of Measurement", in Theoretical Fundamentals, Vol. 1, Handbook of Measurement Science, P.H. Sydenham, Ed., Chichester: John Wiley & Sons

Test Cost Normalized to Product Size (TCS)
It shows the relation between the resources or money spent on testing and the size of the product release. The purpose is to decrease cost-to-market.
CT / KCSI, where CT = total cost of testing the product in dollars and KCSI = number of new or changed source lines of code in thousands.
Paul C. Jorgensen, "Software Testing - A Craftsman's Approach", Second Edition

Test Cost as a Ratio of Development Cost (TCD)
It shows the relation between the testing cost and the development cost of the product. The purpose is to decrease cost-to-market.
CT / CD * 100%, where CT = total cost of testing the product in dollars and CD = total cost of developing the product in dollars.
S.H. Kan, J. Parrish and D. Manclove, "In-process metrics for software testing"

Cost per Weighted Defect Unit (CWD)
It shows the relation between the money spent by the test team and the number of weighted defects detected during testing. The purpose is to decrease cost-to-market.
CT / WT, where CT = total cost of testing the product in dollars and WT = number of weighted defects found by the test team during the product cycle.
Stephen H. Kan, "Metrics and Models in Software Quality Engineering", Second Edition

Test Improvement in Product Quality
It shows the relation between the number of weighted defects detected and the size of the product release.
WP / KCSI, where WP = number of weighted defects found in one specific test phase and KCSI = number of new or changed source lines of code in thousands.
Cem Kaner, "Software Engineering Metrics: What do they measure and how do we know?"

Test Time Needed Normalized to Size of Product
It shows the relation between the time spent on testing and the size of the product release.
TTP / KCSI, where TTP = number of business days used for a specific test phase and KCSI = number of new or changed source lines of code in thousands.
N. Nagappan, "Toward a Software Testing and Reliability Early Warning Metric Suite"

Test Cost Normalized to Size of Product
It shows the relation between the resources or money spent on a test phase and the size of the product release.
CTP / KCSI, where CTP = total cost of a specific test phase in dollars and KCSI = number of new or changed source lines of code in thousands.
Dr. Arvinder Kaur, Mrs. Bharti Suri, Ms. Abhilasha Sharma, "Software Testing Product Metrics - A Survey"

Cost per Weighted Defect Unit
It shows the relation between the money spent on a test phase and the number of weighted defects detected.
CTP / WT, where CTP = total cost of a specific test phase in dollars and WT = number of weighted defects found by the test team during the product cycle.
E. Osterweil, "Strategic Directions in Software Quality"

Test Effectiveness for Driving Out Defects in Each Test Phase
It shows the relation between the number of defects of one type detected in one specific test phase and the total number of defects of that type in the product.
WD / (WD + WN) * 100%, where WD = number of weighted defects of this defect type detected in the test phase and WN = number of weighted defects of this defect type (any particular type) that remain uncovered after the test phase (missed defects).
Ramesh Pusala, "Operational Excellence through Efficient Software Testing Metrics"

Software Reliability
It is the probability of failure-free operation of a computer program for a specified time in a specified environment.
Z(t) = h * exp(-h*t/N), where Z(t) = instantaneous failure rate, h = failure rate prior to the start of testing, N = number of faults inherent in the program prior to the start of testing.
Linda H. Rosenberg, Theodore F. Hammer, Lenore L. Huffman, "Requirements, Testing, and Metrics", NASA, GSFC

Test Session Efficiency
The goal of the test session efficiency metric is to identify trends in the effectiveness of the scheduled test time.
SYSE = active test time / scheduled test time; TE = total number of good runs / total runs; where SYSE = system efficiency and TE = tester efficiency.
Norman F. Schneidewind, "Measuring and Evaluating Maintenance Process Using Reliability, Risk, and Test Metrics", IEEE Transactions on Software Engineering

Test Focus
The goal is to identify the amount of effort spent finding and fixing real faults versus the effort spent either eliminating false defects or waiting for a hardware fix.
TF = number of DRs closed with a software fix / total number of DRs, where DR = Discrepancy Report and TF = test focus.
S.M.K. Quadri and Sheikh Umar Farooq, "Notable Metrics in Software Testing"

Software Maturity
The goals are (i) to quantify the relative stabilization of a software subsystem and (ii) to identify any possible over-testing or testing bottlenecks by examining the fault density of the subsystem over time. The three components are T, O and H.
T = total number of DRs charged to a subsystem / 1000 SLOC; O = number of currently open subsystem DRs / 1000 SLOC; H = active test hours per subsystem / 1000 SLOC; where T = total density, O = open density, H = test hours.
Stephen H. Kan, "Metrics and Models in Software Quality Engineering"

Test Coverage
The goal of the metric is to examine the efficiency of testing over time.
Percentage of code branches that have been executed during testing.
Ramesh Pusala, "Operational Excellence through Efficient Software Testing Metrics", Infosys, 2006

Test Execution Productivity
This metric gives the test case execution productivity, which on further analysis can give a conclusive result.
Number of test cycles executed / actual effort for testing
George E. Stark, Robert C. Durst, Tammy M. Pelnik, "An Evaluation of Software Testing Metrics"

Test Case Productivity (TCP)
This metric gives the test case writing productivity, on the basis of which one can make a conclusive remark.
Dr. Arvinder Kaur, Mrs. Bharti Suri, Ms. Abhilasha Sharma, "Software Testing Product Metrics - A Survey"

Defect Acceptance (DA)
This metric determines the number of valid defects that the testing team has identified during execution.
Ljubomir Lazić, Nikos Mastorakis, "Cost Effective Software Test Metrics"

Defect Rejection (DR)
This metric determines the number of defects rejected during execution. It gives the percentage of invalid defects the testing team has opened, which one can control whenever required.
Mr. Premal B. Nirpal, Dr. K.V. Kale, "A Brief Overview of Software Testing Metrics"

Bad Fix Defect (B)
A defect whose resolution gives rise to new defect(s) is a bad fix defect. This metric determines the effectiveness of the defect resolution process. It gives the percentage of bad defect resolutions, which needs to be controlled.
Roger S. Pressman, "Software Engineering: A Practitioner's Approach", 5th Edition, McGraw Hill, 1997

Test Efficiency (TE)
This metric determines the efficiency of the testing team in identifying defects. It also indicates the defects missed during testing that migrated to the next phase.
Where DT = number of valid defects identified during testing and DU = number of valid defects identified by the user after release of the application.
Fenton, N., S.L. Pfleeger, "Software Metrics: A Rigorous and Practical Approach", PWS Publishing Co.

Defect Severity Index (DSI)
This metric determines the quality of the product under test and at the time of release, based on which one can take the decision to release the product, i.e. it indicates quality.
Mr. Premal B. Nirpal, Dr. K.V. Kale, "A Brief Overview of Software Testing Metrics"

Performance Scripting Productivity (PSP)
This metric gives the scripting productivity for performance test scripts and can be trended over a period of time.
Mr. Premal B. Nirpal, Dr. K.V. Kale, "A Brief Overview of Software Testing Metrics"

Performance Test Efficiency (PTE)
This metric determines the quality of the performance testing team in meeting the requirements, which can be used as an input for further improvement, if required.
Mr. Premal B. Nirpal, Dr. K.V. Kale, "A Brief Overview of Software Testing Metrics"

Requirements Volatility (RV)
Requirements volatility refers to additions, deletions and modifications of requirements during the systems development life cycle. Ignoring requests for requirement changes can cause system failure due to user rejection, and failure to manage RV can increase development time and cost.
{(No. of requirements added + No. of requirements deleted + No. of requirements modified) / No. of initially approved requirements} * 100%
Mr. Premal B. Nirpal, Dr. K.V. Kale, "A Brief Overview of Software Testing Metrics"

Defect Rejection Ratio
The ratio of the number of defect reports that were rejected (perhaps because they were not actually bugs) to the total number of defects.
(No. of defects rejected / total no. of defects raised) * 100%
Mr. Premal B. Nirpal, Dr. K.V. Kale, "A Brief Overview of Software Testing Metrics"

Regression Defect
This shows the ability to keep the product right while fixing defects.
(No. of regression bugs / total no. of bugs) * 100%
Mr. Premal B. Nirpal, Dr. K.V. Kale, "A Brief Overview of Software Testing Metrics"

Defect Validation
This report shows the ratio of validated (closed + reopened) defects to fixed defects.
(No. of validated (closed + reopened) defects / no. of fixed defects) * 100%
Mr. Premal B. Nirpal, Dr. K.V. Kale, "A Brief Overview of Software Testing Metrics"

Defect Validation Rate
The number of defects validated per week. This is a measure of QA productivity in terms of validating fixed defects.
No. of defects validated per week per person = (total no. of validated defects) * 40 / (total hours to validate the defects * no. of resources)
Mr. Premal B. Nirpal, Dr. K.V. Kale, "A Brief Overview of Software Testing Metrics"

Defect Removal Effectiveness (DRE)
Efficiency of the testing process (size defined in KLOC, FP or requirements): Testing Efficiency = size of software tested / resources used.
DRE = (defects removed during the development phase * 100%) / defects latent in the product, where defects latent in the product = defects removed during the development phase + defects found later by the user.
Mr. Premal B. Nirpal, Dr. K.V. Kale, "A Brief Overview of Software Testing Metrics"

Lines of Code (LOC)
It is a direct approach and requires a higher level of detail by means of decomposition and partitioning. Once the expected value for the estimation variable has been determined, historical LOC or FP data are applied and person-months, costs, etc. are calculated using the following formulas.
Productivity = KLOC / person-month; Quality = defects / KLOC; Cost = $ / LOC; Documentation = pages of documentation / KLOC; where KLOC = number of lines of code (in thousands), person-month = the time (in months) taken by developers to finish the product, and defects = total number of errors discovered.
Mr. Premal B. Nirpal, Dr. K.V. Kale, "A Brief Overview of Software Testing Metrics"
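As a worked illustration of some of the test metrics in Table 1.2, the sketch below applies the formulas above to made-up values. The function names and input numbers are assumptions for the example only, not part of the surveyed sources.

```python
# Illustrative sketch (assumed inputs): a few of the test metrics from Table 1.2.

def defect_removal_effectiveness(removed_in_dev, found_after_release):
    """DRE = defects removed during development / defects latent in the product * 100%."""
    latent = removed_in_dev + found_after_release
    return 100.0 * removed_in_dev / latent

def test_effectiveness(wt, wtp, wf):
    """TE = WT / (WTP + WF) * 100% (weighted defects found by the test team vs. total)."""
    return 100.0 * wt / (wtp + wf)

def defect_rejection_ratio(rejected, total_raised):
    """(No. of defects rejected / total defects raised) * 100%."""
    return 100.0 * rejected / total_raised

def requirements_volatility(added, deleted, modified, initial_approved):
    """RV = (added + deleted + modified) / initially approved requirements * 100%."""
    return 100.0 * (added + deleted + modified) / initial_approved

# Made-up values for demonstration.
print("DRE             :", defect_removal_effectiveness(removed_in_dev=180, found_after_release=20), "%")
print("Test effect.    :", test_effectiveness(wt=150, wtp=160, wf=40), "%")
print("Defect rejection:", defect_rejection_ratio(rejected=12, total_raised=200), "%")
print("Req. volatility :", requirements_volatility(added=5, deleted=2, modified=8, initial_approved=100), "%")
```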

III. CONCLUSION AND FUTURE WORK

This paper introduces a basic metric suite for object-oriented and non-object-oriented design. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. Metric data provides quick feedback for software designers and managers. Collecting and analyzing these data can predict design quality. If appropriately used, it can lead to a significant reduction in the cost of the overall implementation and improvements in the quality of the final product. The improved quality, in turn, reduces future maintenance effort. Using early quality indicators based on objective empirical evidence is therefore a realistic objective.

IV. REFERENCES

Lorenz, Mark & Kidd, Jeff: "Object-Oriented Software Metrics", Prentice Hall, 1994.

Dr. Kadhim M. Breesam, "Metrics for Object-Oriented Design Focusing on Class Inheritance Metrics", IEEE 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX'07), 2007.

S.R. Chidamber and C.F. Kemerer, "A Metrics Suite for Object-Oriented Design", IEEE Trans. Software Engineering, 1994.

L. Briand, W. Daly and J. Wust, "A Unified Framework for Coupling Measurement in Object-Oriented Systems", IEEE Transactions on Software Engineering.

Y. Lee, B. Liang, S. Wu and F. Wang, "Measuring the Coupling and Cohesion of an Object-Oriented Program Based on Information Flow", 1995.

R. Harrison, S.J. Counsell, and R.V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics", IEEE Trans. Software Engineering.

T.J. McCabe, "A Complexity Measure", IEEE Transactions on Software Engineering, December 1976, pp. 308-320.

Laing V., Coleman C.: "Principal Components of Orthogonal OO Metrics", Software Assurance Technology Center (SATC), 2001.

Denys Poshyvanyk, Andrian Marcus, "The Conceptual Coupling Metrics for Object-Oriented Systems".

B. Henderson-Sellers, "Object-Oriented Metrics: Measures of Complexity", Prentice Hall, 1996.

Aine Mitchell, James F. Power, "Toward a Definition of Run-Time Object-Oriented Metrics", 2003.

Sencer Sultanoğlu, Ümit Karakaş, "Software Size Estimating", Web Document, 1998.

David N. Card, Khaled El Emam, Betsy Scalzo, "Measurement of Object-Oriented Software Development Projects", 2001.

K.K. Aggarwal, Yogesh Singh, Arvinder Kaur and Ruchika Malhotra, "Software Design Metrics for Object-Oriented Software", Journal of Object Technology, 2008.

Mariano Ceccato and Paolo Tonella, "Measuring the Effects of Software Aspectization", International Conference on Object Technology, 1998.

Dhama, H., "Quantitative Models of Cohesion and Coupling in Software", in Selected Papers of the Sixth Annual Oregon Workshop on Software Metrics (Silver Falls, Oregon, United States), W. Harrison and R.L. Glass, Eds., Elsevier Science, New York, NY, 1995, pp. 65-74.

Robert Martin, "OO Design Quality Metrics: An Analysis of Dependencies".

Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", in Industrial and Tool Proceedings of the 21st IEEE International Conference on Software Maintenance, 2005.

Mr. Premal B. Nirpal, Dr. K.V. Kale, "A Brief Overview of Software Testing Metrics".

Yanping Chen, Robert L. Probert, Kyle Robenson, "Effective Test Metrics for Test Strategy Evolution", Proceedings of the 2004 Conference of the Centre for Advanced Studies on Collaborative Research (CASCON'04).

Dr. Arvinder Kaur, Mrs. Bharti Suri, Ms. Abhilasha Sharma, "Software Testing Product Metrics - A Survey".

Norman F. Schneidewind, "Measuring and Evaluating Maintenance Process Using Reliability, Risk, and Test Metrics", IEEE Transactions on Software Engineering.

S.H. Kan, J. Parrish and D. Manclove, "In-process Metrics for Software Testing", IBM Systems Journal, Vol. 40, No. 1, 2009.

Ljubomir Lazić, Nikos Mastorakis, "Cost Effective Software Test Metrics".

Bennett, Ted L., and Paul W. Wennberg, "Eliminating Embedded Software Defects Prior to Integration Test", Dec. 2005.

V.R. Basili, G. Caldiera, H.D. Rombach, "The Goal Question Metric Approach", Encyclopedia of Software Engineering, Volume 1, John Wiley & Sons, 1994, pp. 528-532.

S.M.K. Quadri and Sheikh Umar Farooq, "Notable Metrics in Software Testing".

George E. Stark, Robert C. Durst, Tammy M. Pelnik, "An Evaluation of Software Testing Metrics for NASA's Mission Control Center", 1992.

Paul C. Jorgensen, "Software Testing - A Craftsman's Approach", Second Edition.