
Object-Oriented (OO) estimation

Martin Vigo, Gabriel H. Lozano M.

General estimation process

Independent variable x → estimation process → f(x)
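Spelled out, the diagram above is just a fitted mapping; a generic sketch (the symbols are placeholders, not any particular study's variables):

\hat{y} = f(x), \quad x = \text{an independent variable measured early in the life cycle}, \quad \hat{y} = \text{the estimated dependent variable (e.g., size or effort)}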

What is special in OO estimation?

• Artifacts (different metrics, OO metrics).

• Relations found.

“... metrics which reflect the specificities of the OO paradigm must be defined and validated in order to be used in industry. Some studies have concluded that ‘traditional’ product metrics are not sufficient for characterizing, assessing, and predicting the quality of OO software systems.” [2]

What we did?

• Looked at several OO estimation studies.
• Focused on:
  • the metrics used (OO metrics);
  • the independent variables and the associated life-cycle phase;
  • the dependent variable(s) and their relation with the independent variables.
• Did not focus on the process of discovering the relations.

What we did? (cont.)

• Did not focus on the process of discovering f(x).

OO software metrics

OO metrics sources

• [5] is a very well-known source of theoretically grounded OO design metrics (based on the ontology of Bunge [7][8][9]).

• Different kinds of studies.

External complexity of a class (EC)

EC = \sum_{i=1}^{n} AC(A_i), where n is the number of associations of the class, A_i is each association of the class, and AC(A) is the complexity of association A.

Internal complexity of a class (IC)

IC = \sum_{i=1}^{n} MC(M_i), where n is the number of methods of the class, M_i is each method, and MC(M) is the complexity of method M.

Weighted Methods per Class (WMC)

• If all method complexities are considered to be unity, then WMC = n, the number of methods (the general formula is sketched below).
• (This is the definition of NoM in [1].)
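The WMC formula itself did not survive the transcript; the standard Chidamber–Kemerer definition, which the unit-complexity remark above presupposes, is (with c_i the complexity assigned to method M_i of a class that has n methods):

\mathrm{WMC} = \sum_{i=1}^{n} c_i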

Depth of the inheritance tree (DIT)

• Maximum length of the path from the class to the root of the inheritance tree.

(Diagram: a three-level inheritance chain with DIT = 1, DIT = 2, and DIT = 3.)
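A minimal sketch of computing DIT over a toy hierarchy (the class names are made up for illustration; counting starts at 1 for a class whose only parent is the implicit root, matching the diagram's DIT = 1, 2, 3):

class Root: pass          # DIT = 1 in the diagram's counting
class Middle(Root): pass  # DIT = 2
class Leaf(Middle): pass  # DIT = 3

def dit(cls) -> int:
    # Maximum path length from cls up to the root of the inheritance tree.
    if cls is object or not cls.__bases__:
        return 0
    return 1 + max(dit(base) for base in cls.__bases__)

print(dit(Root), dit(Middle), dit(Leaf))  # -> 1 2 3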

Coupling between object classes (CBO)

• CBO for a class is a count of the number of other classes to which it is coupled.

(Diagram: example classes with CBO = 0, CBO = 1, and CBO = 2.)
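A minimal sketch of CBO over a toy dependency map (the class names and couplings are invented, chosen to reproduce the diagram's CBO = 0, 1, 2):

# Each class maps to the set of other classes it uses (via method calls or attribute access).
uses = {
    "Logger": set(),                    # coupled to no other class -> CBO = 0
    "Customer": {"Logger"},             # coupled to one class      -> CBO = 1
    "Invoice": {"Customer", "Logger"},  # coupled to two classes    -> CBO = 2
}

def cbo(name: str) -> int:
    # CBO: count of other classes this class is coupled to.
    return len(uses[name] - {name})

for name in ("Logger", "Customer", "Invoice"):
    print(name, cbo(name))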

Response for a class (RFC)

• The Response set of a class is a set of methods that can potentially be executed in response to a message received by an object of that class.
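The formula for RFC did not survive the transcript; a sketch of the usual Chidamber–Kemerer formulation the slide is describing, where M is the set of methods of the class and R_i is the set of methods called by method i:

RS = \{M\} \cup \bigcup_{i} \{R_i\}, \qquad \mathrm{RFC} = |RS|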

Lack of cohesion (LCOM)

• LCOM is a count of the number of method pairs whose similarity is 0, minus the count of method pairs whose similarity is not zero.
• The similarity of two methods M_1 and M_2 is given by \sigma(M_1, M_2) = I_1 \cap I_2, where I_1 and I_2 are the sets of instance variables used by M_1 and M_2.
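A minimal sketch of the LCOM count described above (the class and its instance variables are invented for illustration; p counts dissimilar method pairs, q similar ones, and the result is floored at 0 as in the usual CK formulation):

from itertools import combinations

# Each method maps to the set of instance variables it uses.
methods = {
    "deposit":   {"balance"},
    "withdraw":  {"balance"},
    "set_owner": {"owner"},
}

def lcom(method_vars: dict) -> int:
    p = q = 0  # p: pairs sharing no variables, q: pairs sharing at least one
    for (_, a), (_, b) in combinations(method_vars.items(), 2):
        if a & b:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

print(lcom(methods))  # 2 dissimilar pairs - 1 similar pair = 1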

Other, self-defined metrics

• Number of children of a class (NoC).

• Number of Attributes of a class (NoA).

Studies reviewed

• Early estimation of software size in object-oriented environments: a case study in a CMM level 3 software firm [1].

• A validation of Object-Oriented Design Metrics as Quality Indicators [2].

• Predicting Maintainability with Object-Oriented Metrics: An Empirical Comparison [4].

Early estimation of software size in object-oriented environments: a case study in a CMM level 3 software firm [1]

Purpose

• To study if any property of the analysis objects can be used to infer the size of the final code in an OO environment.

Independent variables

• From the analysis objects they took:
  • External complexity of a class (ECC).
  • Internal complexity of a class (ICC).
  • Depth of the inheritance tree of a class (DIT).
  • Number of methods of a class (NoM).
  • Number of children of a class (NoC).
  • Number of attributes of a class (NoA).

Input data for the analysis

• Two different projects at a European software firm at CMM level 3.
• Telecommunications domain.
• The firm has a defined OO software development process.
• C++.

Method for obtaining the f(x)

• Parametric correlation between the analysis metrics and LOC.
• Identification of a linear parametric model.
• Study of the possibility of extending the model.
• For more details, see [1]. A sketch of this kind of analysis follows.
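A minimal sketch of the analysis (the NoM and LOC values are invented placeholders, not data from [1]; scipy's pearsonr and numpy's polyfit stand in for whatever statistical tooling the authors actually used):

import numpy as np
from scipy import stats

# Made-up per-class data: number of analysis methods (NoM) vs. final lines of code (LOC).
nom = np.array([3, 5, 8, 12, 4, 9, 15, 7])
loc = np.array([120, 180, 260, 400, 150, 310, 520, 230])

r, p_value = stats.pearsonr(nom, loc)       # parametric (Pearson) correlation
slope, intercept = np.polyfit(nom, loc, 1)  # linear parametric model: LOC ~ a + b * NoM

print(f"r = {r:.2f} (p = {p_value:.3f})")
print(f"LOC ~ {intercept:.1f} + {slope:.1f} * NoM")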

Final results

• The number of analysis methods correlates well with software size (r > 0.77).
• Inferential statistics indicate that the results are applicable beyond the two projects studied.
• Two linear models were proposed; see [1] for the models themselves.

A validation of Object-Oriented Design Metrics as Quality Indicators [2]

Purpose

• Build a predictive model of fault-prone classes.
• Make inspections of design or code artifacts more efficient; testing a large system exhaustively is cost-prohibitive.

Hypotheses (one per metric, each stating that a class with a higher value of the metric is more fault-prone)

• H-WMC.
• H-DIT.
• H-NOC.
• H-CBO.
• H-RFC.
• H-LCOM.

The process

• Empirical study over 4 months.
• Students of an upper-division undergraduate/graduate CS course at the University of Maryland.
• The development environment and technology used were representative of what was in industrial use at the time.
• C++.

Data collection

• C++ source code.
• Data about these programs.
• Data about errors found during the testing phase.
• The repaired source code delivered at the end of the life cycle.

(Table: distributions of the metrics for the 180 classes in the studied systems.)

Method for obtaining the f(x)

• The response variable used to validate the OO design metrics is binary (was a fault detected in the class during testing?).
• Logistic regression was used; a sketch follows.
• For more details, see [2].
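A minimal sketch of the modelling step (the metric values and fault labels are random placeholders rather than the study's 180 classes; scikit-learn's LogisticRegression stands in for the paper's logistic-regression analysis):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 20, size=(180, 6)).astype(float)  # columns: WMC, DIT, NOC, CBO, RFC, LCOM
y = rng.integers(0, 2, size=180)                      # 1 = at least one fault found during testing

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.coef_)                       # one coefficient per metric (sign ~ direction of effect)
print(model.predict_proba(X[:5])[:, 1])  # predicted fault-proneness for the first five classes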

Final results

• H-WMC: supported.
• H-DIT: supported.
• H-RFC: supported.
• H-NOC: not supported; the inverse relation held (more children, fewer faults). Good design?
• H-LCOM: insignificant.
• H-CBO: significant, and particularly so for UI classes.

Validation of the estimation

• The classes with the most faults are detected.
• 80/180 classes inspected.
• 48/58 faulty classes would be identified.
• 250/268 faults detected.

Comparison with usual code metrics

• 112/180 classes inspected.
• 51/58 faulty classes would be identified.
• 231/268 faults detected.
• Versus the OO-metric model: 3 more faulty classes identified, but 32 more classes inspected.
• 250 vs. 231 faults detected.
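Expressed as rates (simple arithmetic on the counts above, not figures quoted from [2]): the OO-metric model inspects 80/180 ≈ 44% of the classes and catches 48/58 ≈ 83% of the faulty classes and 250/268 ≈ 93% of the faults; the code-metric model inspects 112/180 ≈ 62% of the classes and catches 51/58 ≈ 88% of the faulty classes and 231/268 ≈ 86% of the faults.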

References

• [1] M. Ronchetti, G. Succi, W. Pedrycz, B. Russo, "Early estimation of software size in object-oriented environments: a case study in a CMM level 3 software firm" (2004).
• [2] V.R. Basili, L.C. Briand, W.L. Melo, "A validation of object-oriented design metrics as quality indicators" (1996).
• [7] M. Bunge, Treatise on Basic Philosophy: Ontology I: The Furniture of the World (1977).
• [8] M. Bunge, Treatise on Basic Philosophy: Ontology II: The World of Systems (1979).
• [9] Y. Wand and R. Weber, "An ontological evaluation of systems analysis and design methods" (1989).
