An Industrial Case Study on Requirements Volatility Measures
Annabella Loconsole
Department of Computing Science, Umeå University
[email protected]
Requirements: how do we get them?

A requirement is something that the product must do or a quality that the product must have.

Customer: "I would like a system that does…"
→ Requirements engineer
→ Requirements specification (in natural language): Req. 1: …, Req. 2: …, Req. 3: …
Requirements: what happens to them?

Requirements specification (use case model, with use cases and actors)
→ Design
→ Source code (begin, if a < b then … else …)
→ Final product, delivered to the customer
Definitions of volatility

Volatile = easily changing [Hornby]
Intensity and distribution of changes [Rosenberg et al.]
Ratio of requirements changes (additions, deletions, and modifications) to the total # of requirements for a given period of time [Stark et al.]
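Stark et al.'s definition above is a simple ratio; as a sketch, it could be computed as follows (the example numbers are illustrative only, not data from the study):

```python
def volatility_ratio(added, deleted, modified, total_requirements):
    """Stark et al.: (additions + deletions + modifications) / total # of
    requirements, for a given period of time."""
    return (added + deleted + modified) / total_requirements

# e.g. 4 additions, 2 deletions, 6 modifications against 60 requirements
print(volatility_ratio(4, 2, 6, 60))  # 0.2
```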
Entity, attributes, measures (in general)

Entity: (e.g., a person)
Internal attribute: age. Measures: number of years, number of months, number of grey hairs, …
External attribute: wisdom
Entity, attributes, measures (in this study)

Entity: the use case model (UCM), consisting of use cases and actors
Internal attributes: size of UCM; size of change of UCM
Measures: # lines, # words, # use cases, # actors; total # changes; # minor, moderate, major changes; # revisions
External attribute: volatility
Internal attributes and measures of volatility

Internal attribute       Measures
Size of UCM              # lines, # words, # use cases, # actors
Size of change of UCM    total # changes; # minor, moderate, major changes; # revisions
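For illustration, the size-of-UCM measures above could be collected from a textual use case model with a short script. This is a sketch only: the sample document and its "Actor:" / "Use case:" markers are a hypothetical format, not the format used in the study.

```python
# Sketch: collecting the size-of-UCM measures from a textual use case model.
# The one-marker-per-element format below is a hypothetical illustration.
ucm = """Actor: Customer
Actor: Clerk
Use case: Place order
The customer selects items and confirms the purchase.
Use case: Cancel order
The customer cancels a previously placed order."""

lines = ucm.splitlines()
measures = {
    "# lines": len(lines),
    "# words": len(ucm.split()),
    "# use cases": sum(1 for l in lines if l.startswith("Use case:")),
    "# actors": sum(1 for l in lines if l.startswith("Actor:")),
}
print(measures)
```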
So, what have I done?

1. An empirical validation of a subset of measures from my previous work [Loconsole 2001, Loconsole and Börstler 2003]
2. An investigation of whether the size of requirements affects their volatility

Why: to find out whether the measures can be predictors of volatility
How: an industrial case study
Results:
1. The data analysis did not support the empirical validation
2. There is a significant correlation between the size of a use case model and the measure total number of changes
Context of the study

The company
The project: size, people, process, time, …
Subjects, objects, variables, instruments
Hypotheses formulation

Null hypotheses H0:
1. There is no significant correlation between the measures of size of change and size of UCM and the subjects' rating of volatility of the UCMs. In formulas: volatility ≠ f(size of UCM); volatility ≠ f(size of change of UCM)
2. There is no significant correlation between the size of change and the size of UCM: size of change of UCM ≠ f(size of UCM)
Data collected on UCMs

UC model   # revisions   Total # changes   # actors   # uc   # lines   # words   Subjective volatility
Ucm1       4             8                 1          3      99        750       0.328125
Ucm2       7             12                1          3      111       657       0.71875
Ucm3       6             8                 1          1      64        218       0.25
Ucm4       4             8                 1          2      98        792       0.421875
Ucm5       9             11                1          4      102       498       0.46875
Ucm6       6             10                1          2      94        670       0.546875
Ucm7       8             13                2          6      139       937       0.390625
Ucm8       5             19                2          3      119       763       0.453125
Ucm9       7             10                1          1      73        350       0.15625
Ucm10      8             21                2          6      143       928       0.46875
Ucm11      8             14                2          3      106       641       0.546875
Ucm12      8             19                3          3      163       1068      0.265625
Ucm13      4             3                 1          1      58        216       0.203125
Ucm14      6             5                 2          1      67        283       0.25
Subjective volatility of UCMs

For each UCM (Ucm1 to Ucm14) and development phase (Inception, Elaboration, Construction, Transition), the subjects (a to f) rated the volatility of the model as low, medium, or high. (The per-cell ratings did not survive the transcript's formatting.)
Spearman correlation coefficients

                        Total # changes   # min    # mod   # maj   # revisions   Subjective volatility
# lines                 0.861             0.409    0.713   0.564   0.504         0.380
# words                 0.747             0.394    0.618   0.493   0.264         0.408
# actors                0.656             0.326    0.649   0.155   0.418        -0.070
# use cases             0.670             0.498    0.442   0.838   0.550         0.447
Subjective volatility   0.369            -0.294    0.315   0.100   0.265         1

(The correlations with subjective volatility concern the first hypothesis; the correlations between the size measures and the change measures concern the second hypothesis.)
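The reported coefficients can be sanity-checked by recomputing Spearman's rho from the data table, here for # lines versus total # changes. This is a pure-Python sketch; the exact value may differ slightly from the slide's 0.861 depending on how ties were handled in the original analysis.

```python
# Recompute Spearman's rho between size (# lines) and total # changes,
# using the values from the "Data collected on UCMs" table (Ucm1..Ucm14).
lines   = [99, 111, 64, 98, 102, 94, 139, 119, 73, 143, 106, 163, 58, 67]
changes = [8, 12, 8, 8, 11, 10, 13, 19, 10, 21, 14, 19, 3, 5]

def ranks(xs):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

rho = spearman(lines, changes)
print(round(rho, 3))  # strongly positive, consistent with the table
```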
Surprising results!!!

1. We accepted H0.1: there is no consistent relationship between the two internal attributes of size and the subjects' rating of volatility of the UCMs. So what is volatility, then??
2. We rejected H0.2 (at significance level 0.05) for the measure total number of changes: there is a strong relationship between the size of a use case model and the total number of changes to it.
Threats …

To conclusion validity: size of the sample.
To construct validity: minimal, because we used the Goal Question Metric approach and theoretical validation.
To external validity: minimal, since this is an industrial study (real data, real projects, …).
To internal validity: accuracy of the collected data and of the responses, subjects' motivation, plagiarism.
Conclusions

In this case study the measures were not validated.
The bigger the use case model, the greater the total number of changes: larger use case models are more volatile than smaller ones.
These results are preliminary; more studies are needed.
Future work

Analyse the data in more detail: by phase, by reason of change, …; at different levels of abstraction
Investigate multiple correlation: volatility = f(size of UCM, size of change of UCM)
Validate the results through a replication of the study
References
Hornby, A.S., The Oxford Advanced Learner's Dictionary of Current English, Oxford University Press, 1980.
Loconsole, A., "Measuring the Requirements Management Key Process Area", Proceedings of ESCOM - European Software Control and Metrics Conference, London, UK, April 2001.
Loconsole, A., and Börstler, J., "Theoretical Validation and Case Study of Requirements Management Measures", Umeå University Internal Report, UMINF 03.02, July 2003.
Rosenberg, L., and Hyatt, L., "Developing a Successful Metrics Program", The Second Annual Conference on Software Metrics, Washington, DC, June 1996.
Stark, G., Oman, P., Skillicorn, A., and Ameele, R., "An Examination of the Effects of Requirements Changes on Software Maintenance Releases", Journal of Software Maintenance: Research and Practice, Vol. 11, 1999, pp. 293-309.
Selected publications
1. Loconsole, A., and Börstler, J. (2005) An Industrial Case Study on Requirements Volatility Measures, submitted to RE'05, IEEE International Conference on Requirements Engineering, Paris, France, 10 pages.
2. Loconsole, A. (2004) Empirical Studies on Requirements Management Activities, ICSE '04, 26th IEEE/ACM International Conference on Software Engineering, Edinburgh, Scotland, UK, May 2004, doctoral symposium, 3 pages.
3. Loconsole, A., and Börstler, J. (2004) A Comparison of Two Academic Case Studies on Cost Estimation of Changes to Requirements - Preliminary Results, SMEF, 28-30 January 2004, Rome, Italy, 10 pages.
4. Loconsole, A., and Börstler, J. (2003) Theoretical Validation and Case Study of Requirements Management Measures, Umeå University Internal Report UMINF 03.02, July 2003, 20 pages.
5. Loconsole, A. (2002) Non-Empirical Validation of Requirements Management Measures, in Proceedings of WoSQ - Workshop on Software Quality, ICSE, Orlando, FL, May 2002, 4 pages.
6. Loconsole, A. (2001) Measuring the Requirements Management Key Process Area - Application of the Goal Question Metric to the Requirements Management Key Process Area of the Capability Maturity Model, Proceedings of ESCOM - European Software Control and Metrics Conference, London, UK, April 2001, 10 pages.
7. Loconsole, A. (2001) Measuring the Requirements Management Process Area, in Proceedings of the Thirty-Second SIGCSE Technical Symposium on Computer Science Education, Charlotte, NC, 21-25 February 2001, abstract.
Other publications
1. Börstler, J., and Loconsole, A. (editors) (2002) Proceedings of Umeå's Sixth Student Conference in Computing Science, Technical Report UMINF-02.06, Department of Computing Science, Umeå University, Sweden, June 2002.
2. Loconsole, A., Rodriguez, D., Börstler, J., and Harrison, R. (2001) Report on Metrics 2001: The Science & Practice of Software Metrics Conference, Software Engineering Notes - ACM SIGSOFT Newsletter, Vol. 26, No. 6, November 2001.
3. Börstler, J., Loconsole, A., and Pederson, T. (editors) (2001) Proceedings of USCCS'01, Umeå's Fifth Student Conference in Computing Science, Technical Report UMINF-01.10, Department of Computing Science, Umeå University, Sweden, May 2001.
4. Loconsole, A. (2000) Application of the Goal Question Metric to the Requirements Management Key Process Area, Proceedings of USCC&I'00 - Umeå's 4th Student Conference in Computing Science & Informatics, Umeå University Report, UMINF 00.08, ISSN 0348-0542, pp. 32-43.
5. Loconsole, A. (1998) Generazione automatica di metafore per le interfacce utente di database multimediali [Automatic generation of metaphors for multimedia database user interfaces], MSc thesis, Bari University report.
Research background

Software requirements are a key issue for project success.
It is important to control changes, to be able to anticipate and respond to change requests.
Software measurement can help provide guidance to requirements management activities by quantifying changes to requirements and by predicting the costs related to changes.
Few empirical studies have been performed in this field…
Measures definition → Introduction → Measures validation → Case study
What precisely was your contribution?

1. What question did you answer?
   What kind of questions do software engineers investigate? A method for analysis or evaluation (how can I evaluate the quality of the requirements?)
2. Why should the reader care?
3. What larger question does this address?

A clear statement of the specific problem you solved, and an explanation of how the answer will help solve an important software engineering problem.
What is your new result?

1. What new knowledge have you contributed that the reader can use elsewhere?
2. What previous work do you build on?
3. What do you provide a superior alternative to?
4. How is your work different from, and better than, this prior work?
5. What, precisely and in detail, is your new result?
   What kinds of results do software engineers produce, and which are the most common?
Why should the reader believe your result?

What standard should be used to evaluate your claim?
What concrete evidence shows that your result satisfies your claim?
- There is no other set of validated RM measures.
- In case of another goal, I will show that my measures are better predictors of volatility.
The problem I am trying to solve

Poor management of requirements in industry (poor measurement):
- Measures can help to control, understand, and predict requirements.
Low formality in software engineering:
- In particular, in software measurement there is a lack of validated measures.
In case of a different goal, the problem is: can we predict the volatility and stability of requirements through my measures? Can we improve the predictions?
What I have done (and how)

Definition (or collection) of 38 RM measures in paper 1 (ESCOM).
- How? By applying the GQM to the RM KPA of the CMM.
Theoretical validation of 10 of the 38 measures in paper 2 (internal report, not published internationally).
- How? By applying two theoretical validation definitions to the measures.
What I am doing (and how)

Performing an empirical validation of those measures connected to volatility:
- showing the connection between the measures and the attributes associated with them. In particular, I am showing that some (how many?) of the 38 measures are connected to the attribute volatility of requirements.
- How? I am measuring requirements on 2 historical projects at a company and checking the volatility of the requirements. I am creating a mapping from (my) RM measures to what is measurable at the company, and doing the data collection.
What is left to do

Finish the empirical validation:
- Once the data collection on the 2 projects is completed, I will predict the volatility of the requirements on a third project based on the historical data, and check how correct my predictions are.
Write a journal paper which includes the definition and validation of the measures.
- This is the way for me to validate my results.
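The prediction step described above could look like this in outline. This is a sketch under stated assumptions only: the least-squares fit of total # changes against # lines, and the new UCM's size of 120 lines, are illustrative choices, not the study's actual prediction method.

```python
# Sketch: fit total # changes as a linear function of # lines on the
# historical UCM data, then predict the change count for a new,
# hypothetical 120-line UCM from a third project.
lines   = [99, 111, 64, 98, 102, 94, 139, 119, 73, 143, 106, 163, 58, 67]
changes = [8, 12, 8, 8, 11, 10, 13, 19, 10, 21, 14, 19, 3, 5]

n = len(lines)
mx = sum(lines) / n
my = sum(changes) / n
slope = sum((x - mx) * (y - my) for x, y in zip(lines, changes)) / \
        sum((x - mx) ** 2 for x in lines)
intercept = my - slope * mx

predicted = slope * 120 + intercept  # a hypothetical 120-line UCM
print(round(predicted, 1))
```

The point of the sketch is only that historical (size, change) pairs yield a usable predictor; checking such predictions against a third project is exactly the validation step planned above.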
Questions/problems I have…

Are my (eventual) results enough for a PhD thesis?
- If not, I could demonstrate that some of my measures are better predictors of stability/volatility and compare my results with some other results.
Doubts about abstraction levels and atomic requirements.