

Design for Trustworthy Software: Tools, Techniques, and Methodology of Developing Robust Software
By Bijay K. Jayaswal, Peter C. Patton
Publisher: Prentice Hall
Pub Date: August 31, 2006
Print ISBN-10: 0-13-187250-8
Print ISBN-13: 978-0-13-187250-9
Pages: 840

    Table of Contents | Index | Name Index

An Integrated Technology for Delivering Better Software, Cheaper and Faster!

This book presents an integrated technology, Design for Trustworthy Software (DFTS), to address software quality issues upstream such that the goal of software quality becomes that of preventing bugs in implementation rather than finding and eliminating them during and after implementation. The thrust of the technology is that major quality deployments take place before a single line of code is written!

This customer-oriented integrated technology can help deliver breakthrough results in cost, quality, and delivery schedule, thus meeting and exceeding customer expectations. The authors describe the principles behind the technology as well as their applications to actual software design problems. They present illustrative case studies covering various aspects of DFTS technology, including CoSQ, AHP, TRIZ, FMEA, QFD, and Taguchi Methods, and provide ample questions and exercises to test the reader's understanding of the material, in addition to detailed examples of the applications of the technology.

The book can be used to impart organization-wide learning, including training for DFTS Black Belts and Master Black Belts. It helps you gain rapid mastery, so you can deploy DFTS technology quickly and successfully.

    Learn how to

    Plan, build, maintain, and improve your trustworthy software development system

    Adapt best practices of quality, leadership, learning, and management for the unique software development milieu

    Listen to the customer's voice, then guide user expectations to realizable, reliable software products

    Refocus on customer-centered issues such as reliability, dependability, availability, and upgradeability

    Encourage greater design creativity and innovation

    Validate, verify, test, evaluate, integrate, and maintain software for trustworthiness

    Analyze the financial impact of software quality

    Prepare your leadership and infrastructure for DFTS

Design for Trustworthy Software will help you improve quality whether you develop in-house, outsource, consult, or provide support. It offers breakthrough solutions for the entire spectrum of software and quality professionals, from developers to project leaders, chief software architects to customers.

Bijay K. Jayaswal, CEO of Agilenty Consulting Group, has held senior executive positions and consulted on quality and strategy for 25 years. His expertise includes value engineering, process improvement, and product development. He has directed MBA and Advanced Management programs, and helped to introduce enterprise-wide reengineering and Six Sigma initiatives.

Dr. Peter C. Patton, Chairman of Agilenty Consulting Group, is Professor of Quantitative Methods and Computer Science at the University of St. Thomas. He served as CIO of the University of Pennsylvania and CTO at Lawson Software, and has been involved with software development since 1955.

Table of Contents

Copyright
Foreword
Preface
Acknowledgments
About the Authors

Part I: Contemporary Software Development Processes, Their Shortcomings, and the Challenge of Trustworthy Software

Chapter 1. Software Development Methodology Today
    Overview; Software Development: The Need for a New Paradigm; Software Development Strategies and Life-Cycle Models; Software Process Improvement; ADR Method; Seven Components of the Robust Software Development Process; Robust Software Development Model; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Endnotes

Chapter 2. The Challenge of Trustworthy Software: Robust Design in Software Context
    Overview; Software Reliability: Myth and Reality; Limitations of Traditional Quality Control Systems; Japanese Quality Management Systems and the Taguchi Approach; The Nitty-Gritty of Taguchi Methods for Robust Design; The Challenge of Software Reliability: Design for Trustworthy Software; A Robust Software Development Model: DFTS Process in Practice; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Endnotes

Chapter 3. Software Quality Metrics
    Overview; Measuring Software Quality; Classic Software Quality Metrics; Total Quality Management; Generic Software Quality Measures; Current Metrics and Models Technology; New Metrics for Architectural Design and Assessment; Common Architectural Design Problems; Pattern Metrics in OOAD; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Endnotes

Chapter 4. Financial Perspectives on Trustworthy Software
    Overview; Why DFTS Entails Different Financial Analyses; Cost and Quality: Then and Now; Cost of Software Quality; Cost of Software Quality Over the Life Cycle; CoSQ and Activity-Based Costing; Quality Loss Function in Software; Financial Evaluation of a DFTS Investment; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions; Problems; Endnotes

Chapter 5. Organizational Infrastructure and Leadership for DFTS
    Overview; Organizational Challenges of a DFTS Deployment; DFTS Implementation Framework; Putting It All Together; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Endnotes

Part II: Tools and Techniques of Design for Trustworthy Software

Chapter 6. The Seven Basic (B7) Tools of Quality
    Overview; The Seven Basic (B7) Tools; B7 in a DFTS Context; Other DFTS Tools, Techniques, and Methodologies; Flowcharts; Pareto Charts; Cause-and-Effect Diagrams; Scatter Diagrams; Check Sheets; Histograms; Graphs; Control Charts; Key Points; Additional Resources; Review Questions; Discussion Questions; Endnotes

Chapter 7. The 7 MP Tools: Analyzing and Interpreting Qualitative and Verbal Data
    Overview; The N7 and 7 MP Tools; Typical Applications of 7 MP Tools; Affinity Diagram; Interrelationship Diagraph (I.D.); Tree Diagram; Prioritization Matrices; Matrix Diagram; Process Decision Program Chart (PDPC); Activity Network Diagram; Behavioral Skills for 7 MP Tools; Key Points; Additional Resources; Review Questions; Discussion Questions and Projects; Endnotes

Chapter 8. The Analytic Hierarchy Process
    Overview; Prioritization, Complexity, and the Analytic Hierarchy Process; Multiobjective Decision-Making and AHP; Case Study 8.1; Solution Using Expert Choice; Approximations to AHP with Manual Calculations; Conclusion; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Problems; Endnotes

Chapter 9. Complexity, Mistakes, and Poka Yoke in Software Development Processes
    Overview; Poka Yoke as a Quality Control System; Principles of Poka Yoke; Causes of Defects: Variation, Mistakes, and Complexities; Situations in Which Poka Yoke Works Well; Mistakes as Causes of Defects; Controlling Complexity in Software Development; Mistakes, Inspection Methods, and Poka Yoke; Deploying a Poka Yoke System; Identifying a Poka Yoke Solution; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Endnotes

Chapter 10. 5S for Intelligent Housekeeping in Software Development
    Overview; 5S: A Giant Step Toward a Productive Workplace Environment; Implementation Phases of the 5S System; The 5S System and the DFTS Process; Overcoming Resistance; Implementing 5S; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Endnotes

Chapter 11. Understanding Customer Needs: Software QFD and the Voice of the Customer
    Overview; QFD: Origin and Introduction; Problems with Traditional QFD Applied to Software; Modern QFD for Software; The Blitz QFD Process; Implementing Software QFD; Conclusion; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions; Endnotes; About the Author

Chapter 12. Creativity and Innovation in the Software Design Process: TRIZ and Pugh Concept Selection Methodology
    Overview; The Need for Creativity in DFTS; Creativity and TRIZ; TRIZ in Software Development; TRIZ, QFD, and Taguchi Methods; Brainstorming; Pugh Concept Selection Methodology; Software as Intellectual Property; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Endnotes

Chapter 13. Risk Assessment and Failure Modes and Effects Analysis in Software
    Overview; FMEA: Failure Modes and Effects Analysis; Upstream Application of FMEA; Software Failure Tree Analysis; Software Failure Modes and Their Sources; Risk Assignment and Evaluation at Each Stage of DFTS; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Endnotes

Chapter 14. Object and Component Technologies and Other Development Tools
    Overview; Major Challenges in Enterprise Business Applications; Object-Oriented Analysis, Design, and Programming; Component-Based Software Development Technology; Extreme Programming for Productivity; N-Version Programming for Reliability; Modern Programming Environments; Trends in Computer Programming Automation; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Endnotes

Part III: Designing for Trustworthy Software

Chapter 15. Quality Measures and Statistical Methods for Trustworthy Software
    Overview; Trustworthy Software; Microsoft's Trustworthy Computing Initiative; Statistical Process Control for Software Development Processes; Statistical Methods for Software Architects; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Problems; Endnotes

Chapter 16. Robust Software in Context
    Overview; The Software Specification Process; What Is Robust Software?; Requirements for Software to Be Robust; Specifying Software Robustness; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Problems; Endnotes

Chapter 17. Taguchi Methods and Optimization for Robust Software
    Overview; Taguchi Methods for Robust Software Design; An Example from Engineering Design; An Example from Software Design and Development; Orthogonal Matrices for Taguchi Parameter Design Experiments; Applications to the Design of Trustworthy Software; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions; Problems; Endnotes

Chapter 18. Verification, Validation, Testing, and Evaluation for Trustworthiness
    Overview; Continuing the Development Cycle; Verification; Validation; Testing and Evaluation; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Problems; Endnotes

Chapter 19. Integration, Extension, and Maintenance for Trustworthiness
    Overview; Completing the Development Cycle; Integration; Extension; Maintenance; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Problems; Endnotes

Part IV: Putting It All Together: Deployment of a DFTS Program

Chapter 20. Organizational Preparedness for DFTS
    Overview; Time to Ponder; Leadership Challenges for Transformational Initiatives; Assessing Key Organizational Elements; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions and Projects; Endnotes

Chapter 21. Launching a DFTS Initiative
    Overview; DFTS and the PICS Framework; Plan; Implement; Control; Secure; Application in Small Software Firms and e-Cottages; What's Next?; Key Points; Additional Resources; Internet Exercises; Review Questions; Discussion Questions; Endnotes

Part V: Six Case Studies

Chapter 22. Cost of Software Quality (CoSQ) at Raytheon's Electronic Systems (RES) Group*
    Introduction; RES and Its Improvement Program; Cost of Software Quality; Experiences and Lessons Learned; Case Study Implications; Endnotes

Chapter 23. Information Technology Portfolio Alignment
    Overview; Part One: The Challenge; Part Two: A New, Rational Approach; Risk Extensions; Summary; Endnote

Chapter 24. Defining Customer Needs for Brand-New Products: QFD for Unprecedented Software
    Overview; Introduction; Defining Brand-New Needs; Tools; Last Steps; Layers of Resistance; Conclusion; Acknowledgments; References; About the Author

Chapter 25. Jurassic QFD: Integrating Service and Product Quality Function Deployment
    Overview; Company Profile of MD Robotics; Why QFD?; Triceratops Encounter at Universal Studios Florida Islands of Adventure; Summary; About the Authors; References; Endnotes

Chapter 26. Project QFD: Managing Software Development Projects Better with Blitz QFD
    Overview; Introduction; Problems with New Development; Focus on Value with Project QFD; Summary; Acknowledgments; References; About the Author

Chapter 27. QFD 2000: Integrating QFD and Other Quality Methods to Improve the New-Product Development Process
    Overview; Demand for New Products; Quality and New-Product Development; Resources for QFD and Other Quality Methods; About the Author; References

Glossary of Technical Terms
Index
Name Index

Copyright

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals.

The author and publisher have taken care in the preparation of this book, but they make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising from the use of the information or programs contained herein.

The publisher offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales, which may include electronic versions and/or custom covers and content particular to your business, training goals, marketing focus, and branding interests. For more information, please contact:

    U.S. Corporate and Government [email protected]

    For sales outside the United States, please contact:

    International [email protected]

    Visit us on the Web: www.prenhallprofessional.com

Library of Congress Cataloging-in-Publication Data:
Jayaswal, Bijay K., 1949-
  Design for trustworthy software : tools, techniques, and methodology of developing robust software / Bijay K. Jayaswal, Peter C. Patton.
    p. cm.
  Includes index.
  ISBN 0-13-187250-8 (hardback : alk. paper)
  1. Computer software--Reliability. 2. Computer software--Quality control. 3. Computer software--Development. I. Patton, Peter C. II. Title.
  QA76.76.R44J39 2006
  005--dc22
  2006016484

    Copyright 2007 Pearson Education, Inc.

All rights reserved. Printed in the United States of America. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. For information on permissions, write to:

Pearson Education, Inc.
Rights and Contracts Department
One Lake Street
Upper Saddle River, NJ 07458
Fax: (201) 236-3290

Text printed in the United States on recycled paper at R.R. Donnelley in Crawfordsville, IN.

    First printing, September 2006

    Dedication

    From Bijay:

To my wife, Sheila, and our two children, Avinash and Anishka, and to my parents, Siya Prasad and Urmila Devi, who are no more but are always present in my life.

    From Peter:

To my wife, Naomi, and our late son, Peter, Jr., who was a prize-winning Web site designer, among many other talents.

Foreword

I have spent my career writing business enterprise software, but I can foresee the day when application software will no longer be written by programmers like myself. For more than twenty years, my advanced development team and I have been working on realizing the Holy Grail of programming: specification-based software. This is application software generated automatically from a very precise specification written by a domain expert rather than a system analyst or a programmer. Today's business application programmers will become either domain experts or the system programmers who create the metacompilers for the specification languages or Domain-Specific Design Languages (DSDLs) that will automatically generate full application systems. After years of research, we at Lawson have announced such a tool, called Landmark, which is being used to prepare our new application software releases.

It is impossible to automate any process that cannot be done manually in a reliable and repeatable way. The unfortunate reliability situation with computer software historically has retarded its automation for years. The sort-merge generator is more than fifty years old, but this achievement has not been repeated for more-complex applications. It is one thing to write a precise, unambiguous specification for a sort-merge program and quite another to write one for even a piece of an HR or supply-chain application. The first problem is to develop a specification language that has the unambiguous expressive power to do for any application what a sort-merge specification does for its application. Considerable progress has been made with this issue in recent years with the advent of Pattern Languages and the DSDLs that implement them. The second problem is understanding the software development process in a prescriptive rather than a merely descriptive way. The five design technologies described and presented in both theory and example in this book are one of the first attempts to do that with software. They are tried and tested in hardware manufacturing processes and product design but have yet to be applied to software development in any systematic way. This book represents a first attempt to do so, as a combined primer and handbook. The fact that this book incorporates as a case study almost every published example of the application of these design technologies to software design and development identifies it as a pioneering work.

I agree with the authors that the future of business application software lies in specification-based languages. I also think that their book will be a bridge from today's untrustworthy, manually created software to tomorrow's automatically generated, fully trustworthy software. I recommend this book very highly.

H. Richard Lawson
Vice Chairman, Lawson Software

Preface

The fastest-growing phenomenon in the world today is computer end-user expectation. The computer revolution that began with the announcement of the ENIAC on Valentine's Day 1946 in the New York Times has completely changed the world. Computer hardware has become so reliable that we cast it in silicon microchips and even embed it in other machines. We assume that if hardware survives its "infant mortality" period, it will never need to be repaired, as do other machines. (Frequently upgraded to meet demand, perhaps!) Software has likewise come a long way, but it remains the Achilles' heel of truly trustworthy computing. No hard-goods manufacturer today would deliberately ship goods with known defects into a high-tech market, yet software vendors do so routinely. They simply have no other choice, given the relentless demand of computer end-user expectation, software's inherent complexity, and the general lack of the kind of strong "quality cultures" that pervade high-tech hard-goods manufacturing.

The authors bring more than 30 years of quality experience and 50 years of software development experience to bear on the problem of designing software to be inherently trustworthy. We were inspired by Craig Mundie's Trustworthy Computing Initiative at Microsoft Corporation. After reading the literature on software quality and attending numerous conferences, we were convinced that Taguchi Methods had much to offer. We were further emboldened to find that Taguchi Methods had been recommended for such applications by none other than Dr. Taguchi himself. They had been applied in only a half-dozen cases, and all successfully. The major premise of this book is that although software is designed like hardware, nothing in its development process is akin to the manufacturing of hardware. Therefore, any quality method employed to improve software reliability, and hence trustworthiness, would have to be applied as far upstream as possible. The genius of Taguchi Methods is that they can treat both controllable (inherent design) factors and uncontrollable noise (exogenous) factors at design time. By using a statistical experiment technique employing orthogonal matrices or Latin Squares, Taguchi Methods can consider all factors simultaneously. This means the end of downstream testing, bottleneck analysis, and finding and fixing one bug at a time in software products. The goal of software quality now becomes preventing bugs in implementation rather than finding and eliminating them during and after implementation. Like other quality methods, Taguchi Methods are not a "black box" that you simply insert into the software development process. Nor are they used alone. They are used in the context of other upstream customer-oriented methods, such as Analytic Hierarchy Process (AHP), Quality Function Deployment (QFD), TRIZ, Pugh Concept Selection, and Failure Modes and Effects Analysis (FMEA), all of which may be applied before a single line of code is written!
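The orthogonal-matrix idea behind Taguchi parameter design can be sketched in a few lines of Python. This is an illustrative sketch, not material from the book: the L8 array below is the standard two-level orthogonal array, but the helper names (`is_orthogonal`, `sn_smaller_the_better`, `main_effect`) and the use of defect counts as the response are our own assumptions.

```python
import math
from itertools import combinations

# Standard L8 (2^7) orthogonal array: 8 runs accommodate up to 7 two-level factors.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

def is_orthogonal(array):
    """Check the balance property: in every pair of columns, each
    combination of levels appears equally often."""
    cols = list(zip(*array))
    for a, b in combinations(range(len(cols)), 2):
        pairs = list(zip(cols[a], cols[b]))
        counts = [pairs.count(p) for p in set(pairs)]
        if len(counts) != 4 or len(set(counts)) != 1:
            return False
    return True

def sn_smaller_the_better(responses):
    """Taguchi signal-to-noise ratio for a smaller-the-better response
    (e.g., a defect count): -10 * log10(mean of squared responses)."""
    return -10.0 * math.log10(sum(y * y for y in responses) / len(responses))

def main_effect(array, run_sn, factor):
    """Average S/N ratio at each level of one factor (column index)."""
    by_level = {}
    for row, sn in zip(array, run_sn):
        by_level.setdefault(row[factor], []).append(sn)
    return {level: sum(v) / len(v) for level, v in by_level.items()}
```

Because the array is balanced, comparing each factor's average S/N ratio at level 1 against level 2 isolates that factor's main effect, which is how a handful of runs can screen many design factors at once instead of testing one factor at a time.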

The essence of Taguchi Methods is listening to the "voice of the customer." By listening carefully, the software architect or designer can get in front of computer end-user expectation and guide it to realizable and reliable products. This is better than being dragged behind end users in an endless cycle of product "fix and repair" without any hope of ever catching up. This book offers a framework of tools, techniques, and methodologies for developing robust software. This framework is an integrative technology based on the principles of transformational leadership, best practices of learning organizations, management infrastructure, and quality strategy and systems, all blended into the unique context of the software development milieu. We call it Design for Trustworthy Software (DFTS).

This book is intended to meet the needs of software development organizations, big and small, that want to build the kind of trustworthy or highly reliable software demanded by today's increasingly sophisticated computer user. It is designed to be a resource for organization-wide learning that helps you understand, implement, improve, and maintain a trustworthy software development system. It is meant for organizations that are led by visionary leaders who understand and value such user needs and who are ready to lead their organizations to develop such robust capability. Although we have emphasized enterprise software, this book can be used by any organization in which software development is an important activity, whether for developing proprietary software, providing internal software support, or delivering outsourced vendor services. Organizations can use it for formal DFTS Black Belt, Master Black Belt, and other certifications. Such formal certification can greatly enhance organization-wide DFTS learning and deployment. This book can also be used as a practical reference by software developers as well as quality professionals and senior management, who play a crucial role in such organizations.

This book is equally useful for students of software development technology, MIS, product design and development, operations, quality management, and technology management, at both undergraduate and graduate levels. It particularly complements Master of Science programs in engineering, MIS, IT, and computer science, as well as MBA programs that focus on operations, product development, and technology. It is also a useful resource for the American Society for Quality's (ASQ's) Certified Software Quality Engineer (CSQE) examination.

This book contains examples, sidebars, and case studies. It is supported by key points, review questions, discussion questions, projects, exercises, and problems. It is further supported by additional learning material on the Internet to provide intensive and continually updated material for learning in corporate settings or classrooms, or for self-study.

The book is not a "handbook" in the classic sense. Instead, it is an exposition of the principles and practices involved in several proven quality methodologies that interact well and that are suitable for software development. They are particularly applicable at design time, before implementation begins. Smaller software and other engineering design case studies and examples are presented throughout the book to illustrate the application of the principles. Software architects will find examples that support their design concepts. Software engineers will find examples that support building in quality at the detailed design stage. Although all the DFTS techniques are applicable throughout the development process, the emphasis changes as a product goes from end-user need to concept, architecture, engineering design, implementation, testing, and finally support in the field. All five parts deal with relevant leadership and management infrastructure for successful learning and deployment of DFTS technology.

    How This Book Is Organized

The book is organized into five parts. Part I, containing Chapters 1 through 5, presents contemporary software development practices, with their shortcomings and the challenges of, and a framework for, developing trustworthy software. This is supported by chapter-length treatment of two critical software quality issues, namely software quality metrics and financial perspectives on trustworthy software. Part II, containing Chapters 6 through 14, presents the tools and techniques advocated by the authors for developing trustworthy software and is the primary focus of the book. Part III, containing Chapters 15 through 19, shows you how to apply these tools and techniques upstream in the design process, before program implementation even begins. Part IV, containing Chapters 20 and 21, lays the groundwork for deploying a DFTS initiative in your organization. Like all quality initiatives, DFTS must be supported from the top to succeed and must become a part of the organization's "culture." Part V, containing Chapters 22 through 27, presents six major case studies of the software quality techniques presented in Parts I and II. We have sought out world-class practitioners of these techniques, and they have generously contributed their leading examples for your consideration and study.

    Useful Software

You can benefit from using several software packages that facilitate learning and the deployment of quality methodologies such as AHP, Taguchi Methods, and QFD. A number of Web sites provide free limited-use or limited-time downloads. In particular, the following software is available:

AHP: You can find a free 15-day trial version of Expert Choice at http://www.expertchoice.com/software/grouptrialreg.htm

Special prices are available for students, instructors, and corporate bulk purchases. Call 1-888-259-6400 for pricing details.

QFD: Modern Blitz QFD templates for Microsoft Excel are included in QFD Institute training programs. Details are available at http://www.qfdi.org

Taguchi Methods: Qualitek-4 DEMO software lets you review over 50 examples and use an L8 array to design your own experiments. It can be downloaded from http://www.nutek-us.com/wp-q4w.html

    You may also want to visit the following Web sites that we found useful:

http://www.nutek-us.com/wp-q4w-screen.html
http://www.nutek-us.com/wp-q4w-eval.html

You may try the DEMO version for experiments involving L8 arrays. A license for the full version may be negotiated with the vendor.

    This Book's Web Site

This book's Web site keeps the book current between editions, providing new material, examples, and case studies for students and instructors. The Web site also provides materials for other users of this book: quality professionals and corporate leaders who play a crucial role in the DFTS process. The book's two Web sites are:

    http://www.prenhallprofessional.com/title/0131872508

    http://www.agilenty.com/publications

Instructors may contact the publisher for answers to the exercises and problems. We look forward to comments and feedback on how the material can be further enhanced and continually improved. Tell us about your experience, what you like about the book, how it has been useful, and, above all, how we can improve it. We trust that you will.

Bijay Jayaswal, Minneapolis, MN ([email protected])

Peter Patton, St. Paul, MN ([email protected])

Acknowledgments

We are indebted to many individuals who have contributed to the development of this book over the last few years.

    We want to thank the reviewers and critics of various drafts:

    Richard A. DeLyser, University of St. Thomas

    Paul Holser

    Steve Janiszewski, PS & J Software Six Sigma

    Patrick L. Jarvis, University of St. Thomas

    H. Richard Lawson, Lawson Software, Inc.

    Bhabani Misra, University of St. Thomas

    Richard D. Patton, Lawson Software, Inc.

    German J. Pliego, University of St. Thomas

In particular, we would like to express our deepest gratitude to Prof. C. V. Ramamoorthy of the University of California, Berkeley for his wise and patient counsel throughout the last three years. This book would have been a very different product without his guidance.

We are grateful to two generations of scholars who have influenced our work. In particular, we would like to mention Yoji Akao, Genrich Altshuller, Philip Crosby, W. Edwards Deming, Eliyahu Goldratt, Hiroyuki Hirano, Kaoru Ishikawa, Joseph Juran, Shigeru Mizuno, Taiichi Ohno, Stuart Pugh, Thomas L. Saaty, Walter A. Shewhart, Shigeo Shingo, and Genichi Taguchi. We would also like to thank Barry W. Boehm, Maurice H. Halstead, and B. Kanchana, whose work we have cited extensively. The scholars who have influenced us are too numerous for us to name without missing some. We acknowledge the work of all of them.

Numerous practitioners have inspired us too. We want to mention Craig Mundie of Microsoft Corporation, whose white paper on trustworthy computing triggered our own thought process on the formidable challenges of trustworthy software. We thank Craig for his work and friendship over the years. Two corporate titans we have never met or discussed our project with were inspirational to our work: the late Eiji Toyoda of Toyota Motor Corporation and Jack Welch, the former chairman of General Electric. Their work and able stewardship of two of the world's foremost corporations are great examples of leadership by quality. We have extensively quoted GE and Toyota in this book. We thank both of them.

We would like to express our gratitude to Glenn Mazur, Mike Jones of Expert Choice, Inc., and Ranjit Roy of Nutek, Inc. for software support for the book. We would like to thank Paul O'Mara and Alice Haley of ASQ, Linda Nicol and Linda Hart of Cambridge University Press, Peter O'Toole of GE, Tina B. Gordon of Johnson & Johnson, Michelle Thibodeau of Pearson Education, Lia Rojales of Productivity Press, and Richard Zultner of Zultner & Co. for permission to use relevant copyrighted material.

It is a great pleasure to express our deep appreciation to the Prentice Hall team led by Bernard Goodwin, who is simply the best. He, Michelle Housley, Stephane Nakib, and Beth Wickenhiser were just wonderful. The project editor Andrew Beaster and copy editor Gayle Johnson have indeed done a superb job. A big thank you to all of them!

Finally, we would like to thank the contributing authors: Andrew Bolt, Jack Campanella, Ernest Forman of George Washington University, Herb Krasner of the University of Texas at Austin, Glenn Mazur of QFD Institute, and Richard Zultner of Zultner and Co. They have enriched the book with their contributions. We are forever indebted to them for their insight, wisdom, and generosity.

About the Authors

Bijay K. Jayaswal holds a B.Eng. (Hons.) degree in electrical engineering from the Birla Institute of Technology and Science, Pilani, India, and an MBA and a master's degree in electrical engineering from Aston University in England. He is the CEO of Agilenty Consulting Group, LLC. He has held senior executive positions and has consulted in quality and strategy for the last 25 years. His consulting and research interests include value engineering, process improvement, and product development. He has taught engineering and management at the University of Mauritius and California State University, Chico and has directed MBA and Advanced Management programs. He has helped introduce corporate-wide initiatives in reengineering, Six Sigma, and Design for Six Sigma and has worked with senior executive teams to implement such initiatives. He can be contacted at [email protected].

Dr. Peter C. Patton is Professor of Quantitative Methods and Computer Science at the University of St. Thomas, St. Paul, Minnesota. He also is Chairman of Agilenty Consulting Group. He has taught at the Universities of Minnesota, Paris, and Stuttgart and has held the position of Chief Information Officer at the University of Pennsylvania. He has engineering and mathematics degrees from Harvard, Kansas, and Stuttgart. He was Chief Technologist at Lawson Software from 1996 to 2002. He was Lawson's representative on the Technical Advisory Committee of IBM's SanFrancisco Java Framework project. He has been involved in computer hardware and software development since 1955. He can be contacted at [email protected].

Part I: Contemporary Software Development Processes, Their Shortcomings, and the Challenge of Trustworthy Software

    Chapter 1 Software Development Methodology Today

Chapter 2 The Challenge of Trustworthy Software: Robust Design in Software Context

    Chapter 3 Software Quality Metrics

    Chapter 4 Financial Perspectives on Trustworthy Software

Chapter 5 Organizational Infrastructure and Leadership for DFTS

Chapter 1. Software Development Methodology Today

    Cease dependence on inspection to achieve quality.

    W. Edwards Deming

Quality is a many-splendored thing, and every improvement of its attributes is at once an advance and an advantage.

    C. V. Ramamoorthy

Overview

Both personal productivity and enterprise server software are routinely shipped to their users with defects, called bugs from the early days of computing. This error rate and its consequent failures in operation would not be tolerated for any manufactured or "hardware" product sold today. But software is not a manufactured product in the same sense as a mechanical device or household appliance, even a desktop computer. Since programming began as an intellectual and economic activity with the ENIAC in 1946, a great deal of attention has been given to making software programs as reliable as the computer hardware they run on. Unlike most manufactured goods, software undergoes continual redesign and upgrading in practice because the system component adapts the general-purpose computer to its varied and often-changing, special-purpose applications. As needs change, so must the software programs that were designed to meet them. A large body of technology has developed over the past 50 years to make software more reliable and hence trustworthy. This introductory chapter reviews the leading models for software development and proposes a robust software development model based on the best practices of the past, while incorporating the promise of more recent programming technology. The Robust Software Development Model (RSDM) recognizes that although software is designed and "engineered," it is not manufactured in the usual sense of that word. Furthermore, it recognizes an even stronger need in software development to address quality problems upstream, because that is where almost all software defects are introduced. Design for Trustworthy Software (DFTS) addresses the challenges of producing trustworthy software using a combination of the iterative Robust Software Development Model, Software Design Optimization Engineering, and Object-Oriented Design Technology.

    Chapter Outline

    Software Development: The Need for a New Paradigm

    Software Development Strategies and Life-Cycle Models

    Software Process Improvement

    ADR Method

    Seven Components of the Robust Software Development Process

    Robust Software Development Model

Key Points

    Additional Resources

    Internet Exercises

    Review Questions

    Discussion Questions and Projects

    Endnotes

Software Development: The Need for a New Paradigm

Computing has been the fastest-growing technology in human history. The performance of computing hardware has increased by more than a factor of 10^10 (10,000 million times) since the commercial exploitation of the electronic technology developed for the ENIAC 50 years ago, first by the Eckert-Mauchly Computer Corporation, later by IBM, and eventually by many others. In the same amount of time, programming performance, a highly labor-intensive activity, has increased by about 500 times. A productivity increase of this magnitude for a labor-intensive activity in only 50 years is truly amazing, but unfortunately it is dwarfed by productivity gains in hardware. It's further marred by low customer satisfaction resulting from high cost, low reliability, and unacceptable development delays. In addition, the incredible increase in available computer hardware cycles has forced a demand for more and better software. Much of the increase in programming productivity has, as you might expect, been due to increased automation in computer software production. Increased internal use of this enormous hardware largesse to offset shortcomings in software and "manware" has accounted for most of the gain. Programmers are not 500 times more productive today because they can program faster or better, but because they have more sophisticated tools such as compilers, operating systems, program development environments, and integrated development environments. They also employ more sophisticated organizational concepts in the cooperative development of programs and more sophisticated programming language constructs such as Object-Oriented Programming (OOP), class libraries, and object frameworks. The first automation tools developed in the 1950s by people such as Betty Holberton[1] at the Harvard Computation Laboratory (the sort-merge generator) and Mandalay Grems[2] at the Boeing Airplane Company (interpretive programming systems) have emerged again. Now they take the form of automatic program generation, round-tripping, and of course the ubiquitous Java Virtual Machine, itself an interpretive programming system.

Over the years, a number of rules of thumb or best practices have developed among enterprise software developers, both in-house and commercial or third-party vendors. Enterprise software is the set of programs that a firm, small or large, uses to run its business. It is usually conceded that it costs ten times as much to prepare (or "bulletproof") an enterprise application for the marketplace as it costs to get it running in the "lab." It costs another factor of 2 from that point to market a software package to the break-even point. The high cost of software development in both time and dollars, not to mention political or career costs (software development is often referred to as an "electropolitical" problem, and a high-risk project as a "death march"), has encouraged the rise of the third-party application software industry and its many vendors. Our experience with leading both in-house and third-party vendor enterprise software development indicates that the cost of maintaining a software system over its typical five-year life cycle is equal to its original development cost.
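Taken together, these rules of thumb imply a rough lifecycle cost picture that a little arithmetic makes concrete. The sketch below is our own illustration of the multipliers, not a formula from the literature; the function name and the reading that the factor of 2 applies to the cumulative cost at that point are assumptions.

```python
def lifecycle_cost(lab_cost):
    """Rough lifecycle total implied by the rules of thumb above.

    All multipliers are the chapter's heuristics, not measured data:
    bulletproofing for the marketplace costs ~10x the lab cost; the
    cumulative cost doubles again by break-even; and five-year
    maintenance roughly equals the development cost.
    """
    development = 10 * lab_cost           # prepare ("bulletproof") for market
    through_break_even = 2 * development  # another factor of 2 to break-even
    maintenance = development             # maintenance ~= development cost
    return through_break_even + maintenance

# A program costing 1 unit to run in the "lab" implies roughly 30 units
# spent by the end of its five-year life cycle.
print(lifecycle_cost(1))  # 30
```

Under these assumptions, only about 3 percent of total lifecycle spending goes to getting the program running in the lab, which is why the chapter's emphasis falls on everything that happens afterward.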

Each of the steps in the software life cycle, as shown in Figure 1.1, is supported by numerous methods and approaches, all well-documented by textbooks and taught in university and industrial courses. The steps are also supported by numerous consulting firms, each having a custom or proprietary methodology, and by practitioners well-trained in it. In spite of all of this experience supported by both computing and organizational technology, the question remains: "Why does software have bugs?" In the past two decades it has been popular to employ an analogy between hardware design and manufacture and software design and development. Software "engineering" has become a topic of intense interest in an effort to learn from the proven practices of hardware engineering, that is, how we might design and build bug-free software. After all, no reputable hardware manufacturer would ship products known to have flaws, yet software developers do this routinely. Why?

Figure 1.1. Essential Steps in the Traditional Enterprise Software Development Process


One response is that software is intrinsically more complex than hardware because it has more states, or modes of behavior. No machine has 1,000 operating modes, but any integrated enterprise business application system is likely to have 2,500 or more input forms. Software complexity is conventionally described as proportional to some factor, say N, depending on the type of program, times the number of inputs, I, multiplied by the number of outputs, O, raised to some power, P. Thus

    software complexity = N * I * O^P

This can be thought of as increasing with the number of input parameters but growing exponentially with the number of output results.
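To make the heuristic concrete, it can be written out directly. The sketch below is our illustration; the values of N and P are hypothetical, chosen only to show how the estimate scales with inputs and outputs.

```python
def software_complexity(n, inputs, outputs, p):
    """Rule-of-thumb complexity estimate: N * I * O^P (see text)."""
    return n * inputs * outputs ** p

# With hypothetical N = 1 and P = 2: doubling the inputs doubles the
# estimate, but doubling the outputs quadruples it.
base = software_complexity(1, inputs=100, outputs=50, p=2)
assert software_complexity(1, inputs=200, outputs=50, p=2) == 2 * base
assert software_complexity(1, inputs=100, outputs=100, p=2) == 4 * base
```

The asymmetry is the point of the rule of thumb: output behavior, not input volume, is what drives the combinatorial growth in states that must be designed for and tested.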

Computers, controlled by software, naturally have more states, that is, larger performance envelopes, than do other, essentially mechanical, systems. Thus, they are more complex.

Sidebar 1.1: Computer Complexity

When one of the authors of this book went from being an aircraft designer to a computer architect in 1967, he was confronted by the complexity of the then newly developing multiprocessor computer. At the time, Marshall McLuhan's book Understanding Media was a popular read. In it, this Canadian professor of English literature stated that a supersonic air transport plane is far simpler than a multiprocessor computer system. This was an amazing insight for a professor of English literature, but he was correct.

One of the authors of this book worked on the structural optimization of the Concorde and on a structural aspect of the swing-wing of the Boeing SST. In 1968 he was responsible for making the Univac 1108 function as a three-way multiprocessor. Every night at midnight he reported to the Univac test floor in Roseville, Minnesota, where he was assigned three 1108 mainframe computers. He connected the new multiprocessor CRT console he had designed and loaded a copy of the Exec 8 operating system modified for this new functionality. Ten times in a row the OS crashed at a different step of the bootstrap process. He began to wonder if this machine were a finite automaton after all. Of course it was, and the diverse halting points were a consequence of interrupt races, but he took much comfort from reading Marshall McLuhan. Today, highly parallel machines are commonplace in business, industry, and the scientific laboratory, and they are indeed far more complex than supersonic transport aircraft (none of which are still flying now that the Concorde has been taken out of service).

Although software engineering has become a popular subject of many books and is taught in many university computing curricula, we find the engineering/manufacturing metaphor to be a bit weak for software development. Most of a hardware product's potential problems become apparent in testing. Almost all of them can be corrected by tuning the hardware manufacturing process to reduce product and/or process variability. Software is different. Few potential problems can be detected in testing, due to the complexity difference between software and hardware. None of them can be corrected by tuning the manufacturing process, because software has no manufacturing process! Making copies of eight CD-ROMs for shipment to the next customer along with a box of installation and user manuals offers little chance for fine-tuning and in any case introduces no variability. It is more like book publishing, in which you can at most slip an errata sheet into the misprinted book before shipping, or, in the case of software, an upgrade or fix-disk.

So, what is the solution? Our contention is that because errors in software are almost all created well upstream in the design process, and because software is all design and development, with no true manufacturing component, everything that can be done to create bug-free software must be done as far upstream in the design process as possible. Hence our advocacy of Taguchi Methods (see Chapters 2, 15, and 17) for robust software architecture. Software development is an immensely more taxing process than hardware development. Although there is no silver bullet, we contend that the Taguchi Methods described in the next chapter can be deployed as a key instrument in addressing software product quality upstream at the design stage. Processes are often described as having upstream activities such as design and downstream activities such as testing. This book advocates moving the quality-related aspects of development as far upstream in the development process as possible. The RSDM presented in this book provides a powerful framework to develop trustworthy software in a time- and cost-effective manner.

This introductory chapter is an overview of the software development situation today in the view of one of the authors. Although he has been developing both systems and applications software since 1957, no single individual's career can encompass the entire spectrum of software design and development possibilities. We have tried in this chapter to indicate when we are speaking from personal experience and sharing our personal opinions, and when we are referring to the experience of others.

Software Development Strategies and Life-Cycle Models

Here we will describe from a rather high altitude the various development methods and processes employed for software today. We focus on designing, creating, and maintaining large-scale enterprise application software, whether developed by vendors or in-house development teams. The creation and use of one-off and simple interface programs is no challenge. Developing huge operating systems such as Microsoft Windows XP with millions of lines of code (LOC), or large, complex systems such as the FAA's Enroute System, brings very special problems of its own and is beyond the scope of this book. This is not to say that the methodology we propose for robust software architecture is not applicable; rather, we will not consider such applications here. The time-honored enterprise software development process generally follows these steps (as shown in Figure 1.1):

Specification or functional design, done by system analysts in concert with the potential end users of the software to determine why to do this, what the application will do, and for whom it will do it.

Architecture or technical design, done by system designers as the way to achieve the goals of the functional design using the computer systems available, or to be acquired, in the context of the enterprise as it now operates. This is how the system will function.

Programming or implementation, done by computer programmers together with the system designers.

Testing of new systems (or regression testing of modified systems) to ensure that the goals of the functional design and technical design are met.

Documentation of the system, both intrinsically for its future maintainers, and extrinsically for its future users. For large systems this step may involve end-user training as well.

Maintenance of the application system over its typical five-year life cycle, employing the design document now recrafted as the Technical Specification or System Maintenance Document.

This model and its variations, which we overview in this chapter, are largely software developer-focused rather than being truly customer-centric. They have traditionally attempted to address issues such as project cost and implementation overruns rather than customer satisfaction issues such as software reliability, dependability, availability, and upgradeability. It may also be pointed out that all these models follow the "design-test-design" approach. Quality assurance is thus based on fault detection rather than fault prevention, the central tenet of this book's approach. We will also discuss, in Chapters 2, 4, and 11 in particular, how the model that we propose takes a fault-prevention route that is based not only on customer specifications but also on meeting the totality of the user's needs and environment.

A software development model is an organized strategy for carrying out the steps in the life cycle of a software application program or system in a predictable, efficient, and repeatable way. Here we will begin with the primary time-honored models, of which there are many variants. These are the build-and-fix model, the waterfall model, the evolutionary model, the spiral model, and the iterative development model. Rapid prototyping and extreme programming are processes that have more recently augmented the waterfall model. The gradual acceptance of OOP over the past decade, together with its object frameworks and sophisticated integrated development environments, has been a boon to software developers and has encouraged new developments in automatic programming technology.

These life-cycle models and their many variations have been widely documented. So have current technology enhancements in various software development methods and process improvement models, such as the Rational Unified Process (RUP), the Capability Maturity Model (CMM), and the ISO 9000-3 Guidelines. Therefore, we will consider them only briefly. We will illustrate some of the opportunities we want to address using the RSDM within the overall framework of DFTS technology. It is not our purpose to catalog and compare existing software development technology in any detail. We only want to establish a general context for introducing a new approach.

    Build-and-Fix Model

The build-and-fix model was adopted from an earlier and simpler age of hardware product development. Those of us who bought early Volkswagen automobiles in the 1950s and '60s remember it well. As new models were brought out and old models updated, the cars were sold apparently without benefit of testing, only to be tested by the customer. In every case, the vehicles were promptly and cheerfully repaired by the dealer at no cost to their owners, except for the inconvenience and occasional risk of a breakdown. This method clearly works, but it depends on having a faithful and patient customer set almost totally dependent on the use of your product! It is the same with software. A few well-known vendors are famous for their numerous free upgrades and the rapid proliferation of new versions. This always works best in a monopolistic or semimonopolistic environment, in which the customer has limited access to alternative vendors. Unfortunately, in the build-and-fix approach, the product's overall quality is never really addressed, even though some of the development issues are ultimately corrected. Also, there is no way to feed back to the design process any proactive improvement approaches. Corrections are put back into the market as bug fixes, service packs, or upgrades as soon as possible as a means of marketing "damage control." Thus, little learning takes place within the development process. Because of this, build-and-fix is totally reactive and, by today's standards, is not really a development model at all. However, the model shown in Figure 1.2 is perhaps still the approach most widely used by software developers today, as many will readily, and somewhat shamefully, admit.

    Figure 1.2. Build-and-Fix Software Development Model


    Waterfall Model

The classic waterfall model was introduced in the 1970s by Win Royce at Lockheed. It is so named because it can be represented or graphically modeled as a cascade from establishing requirements, to design creation, to program implementation, to system test, to release to customer, as shown in Figure 1.3. It was a great step forward in software development as an engineering discipline. The figure also depicts the single-level feedback paths that were not part of the original model but that have been added to all subsequent improvements of the model; they are described here. The original waterfall model had little or no feedback between stages, just as water does not reverse or flow uphill in a cascade but is drawn ever downward by gravity. This method might work satisfactorily if design requirements could be perfectly addressed before flowing down to design creation, and if the design were perfect when program implementation began, and if the code were perfect before testing began, and if testing guaranteed that no bugs remained in the code before the users applied it, and of course if the users never changed their minds about requirements. Alas, none of these things is ever true. Some simple hardware products may be designed and manufactured this way, but this model has been unsatisfactory for software products because of the complexity issue. It is simply impossible to guarantee the correctness of any program of more than about 169 lines of code by any process as rigorous as mathematical proof. Proving program functionality a priori was advantageous and useful in the early days of embedded computer control systems, when such programs were tiny, but today's multifunction cell phones may require a million lines of code or more!

    Figure 1.3. Waterfall Model for Software Development

    Rapid Prototyping Model

Rapid prototyping has long been used in the development of one-off programs, based on the familiar model of the chemical engineer's pilot plant. More recently it has been used to prototype larger systems in two variants: the "throwaway" model and the "operational" model, which is really the incremental model to be discussed later. This development process produces a program that performs some essential or perhaps typical set of functions for the final product. A throwaway prototype approach is often used if the goal is to test the implementation method, language, or end-user acceptability. If this technology is completely viable, the prototype may become the basis of the final product development, but normally it is merely a vehicle to arrive at a completely secure functional specification, as shown in Figure 1.4. From that point on the process is very similar to the waterfall model. The major difference between this and the waterfall model is not just the creation of the operational prototype or functional subset; the essence is that it be done very quickly, hence the term rapid prototyping.[3]

    Figure 1.4. Rapid Prototyping Model

Incremental Model

The incremental model recognizes that software development steps are not discrete. Instead, Build 0 (a prototype) is improved and functionality is added until it becomes Build 1, which becomes Build 2, and so on. These builds are not the versions released to the public but are merely staged compilations of the developing system at a new level of functionality or completeness. As a major system nears completion, the project manager may schedule a new build every day at 5 p.m. Heaven help the programmer or team who does not have their module ready for the build or whose module causes compilation or regression testing to fail! As Figure 1.5 shows, the incremental model is a variant of the waterfall and rapid prototyping models. It is intended to deliver an operational-quality system at each build stage, but it does not yet complete the functional specification.[4] One of the biggest advantages of the incremental model is that it is flexible enough to respond to critical specification changes as development progresses. Another clear advantage is that analysts and developers can tackle smaller chunks of complexity. Psychologists teach the "rule of seven": the mind can think about only seven related things at once. Even the trained mind can juggle only so many details at once. Users and developers both learn from a new system's development process, and any model that allows them to incorporate this learning into the product is advantageous. The downside risk is, of course, that learning exceeds productivity and the development project becomes a research project, exceeding time and budget or, worse, never delivering the product at all. Since almost every program to be developed is one that has never been written before, or hasn't been written by this particular team, research program syndrome occurs all too often. However, learning need not exceed productivity if the development team remains cognizant of risk and focused on customer requirements.

    Figure 1.5. Incremental Model

Extreme Programming

Extreme Programming (XP) is a rather recent development of the incremental model that puts the client in the driver's seat. Each feature or feature set of the final product envisioned by the client and the development team is individually scoped for cost and development time. The client then selects features that will be included in the next build (again, a build is an operational system at some level of functionality) based on a cost-benefit analysis. The major advantage of this approach for small to medium-size systems (10 to 100 man-years of effort) is that it works when the client's requirements are vague or continually change. This development model is distinguished by its flexibility because it can work in the face of a high degree of specification ambiguity on the user's part. As shown in Figure 1.6, this model is akin to repeated rapid prototyping, in which the goal is to get certain functionality in place for critical business reasons by a certain time and at a known cost.[5]

    Figure 1.6. Extreme Programming Model

Adapted from Don Wells: www.extremeprogramming.org. Don Wells' XP Web site gives an excellent overview of the XP development process. A more exhaustive treatment is given in Kent Beck, Extreme Programming Explained (Boston: Addison-Wesley, 2000).


    Spiral Model

The spiral model, developed by Dr. Barry Boehm[6] at TRW, is an enhancement of the waterfall/rapid prototype model, with risk analysis preceding each phase of the cascade. You can imagine the rapid prototyping model drawn in the form of a spiral, as shown in Figure 1.7. This model has been successfully used for the internal development of large systems and is especially useful when software reuse is a goal and when specific quality objectives can be incorporated. It does depend on being able to accurately assess risks during development. This depends on controlling all factors and eliminating or at least minimizing exogenous influences. Like the other extensions of and improvements to the waterfall model, it adds feedback to earlier stages. This model has seen service in the development of major programming projects over a number of years and is well documented in publications by Boehm and others.

    Figure 1.7. Spiral Model

Adapted from B. W. Boehm, "A Spiral Model of Software Development and Enhancement," IEEE Computer, 21 (May 1988), pp. 61-72.


    Object-Oriented Programming

    Object-Oriented Programming (OOP) technology is not a software developmentmodel. It is a new way of designing, writing, and documenting programs that

  • came about after the development of early OOP languages such as C++ andSmalltalk. However, OOP does enhance the effectiveness of earlier softwaredevelopment models intended for procedural programming languages, becauseit allows the development of applications by slices rather than by layers. Thecentral ideas of OOP are encapsulation and polymorphism, which dramaticallyreduce complexity and increase program reusability. We will give examples ofthese from our experience in later chapters. OOP has become a majordevelopment technology, especially since the wide acceptance of the Javaprogramming language and Internet-based application programs. OOPanalysis, design, and programming factor system functionality into objects,which include data and methods designed to achieve a specific, scope-limitedset of tasks. The objects are implementations or instances of program classes,which are arranged into class hierarchies in which subclasses inherit properties(data and methods) from superclasses. The OOP model is well supported byboth program development environments (PDEs) and more sophisticated team-oriented integrated development environments (IDEs), which encourage or atleast enable automatic code generation.
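A minimal sketch can make encapsulation, inheritance, and polymorphism concrete. The classes below are invented for illustration, not drawn from the book's examples; Python is used for brevity, though the same structure applies in Java or C++.

```python
class Sensor:
    """Superclass: encapsulates its state behind a small method interface."""

    def __init__(self, name):
        self.name = name
        self._reading = 0.0   # encapsulated state: callers use read(), not _reading

    def read(self):
        return self._reading

    def describe(self):       # inherited unchanged by every subclass
        return f"{self.name}: {self.read()}"


class CalibratedSensor(Sensor):
    """Subclass: inherits data and methods, and overrides read()."""

    def __init__(self, name, offset):
        super().__init__(name)
        self._offset = offset

    def read(self):           # polymorphism: same call, specialized behavior
        return self._reading + self._offset


# Client code works against the superclass interface only; describe()
# automatically dispatches to the right read() for each object.
sensors = [Sensor("raw"), CalibratedSensor("adjusted", offset=1.5)]
print([s.describe() for s in sensors])  # ['raw: 0.0', 'adjusted: 1.5']
```

This is what "development by slices" looks like in miniature: a new subclass adds a vertical slice of behavior without touching the client code or the superclass, which is where the complexity reduction and reuse come from.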

    OOP is a different style of programming than traditional procedural programming. Hence, it has given rise to a whole family of software development models. Here we will describe the popular Booch Round-Tripping model,[7] as shown in Figure 1.8. This model assumes a pair of coordinated tool sets: one for analysis and design and another for program development. For example, you can use the Unified Modeling Language (UML) to graphically describe an application program or system as a class hierarchy. The UML can be fed to the IDE to produce a Java or C++ program, which consists of the housekeeping and control logic and a large number of stubs and skeleton programs. The various stub and skeleton programs can be coded to a greater or lesser extent to develop the program to a given level or "slice" of functionality. The code can be fed back or "round-tripped" to the UML processor to create a new graphical description of the system. Changes and additions can be made to the new UML description and a new program generated. This general process is not really new. The Texas Instruments TEF tool set and the Xcellerator tool set both allowed this same process with procedural COBOL programs. These tools proved their worth in the preparation for the Y2K crisis. A working COBOL application with two-digit year dates could be reverse-engineered to produce an accurate flowchart of the application (not as it was originally programmed, but as it was actually implemented and running). Then it could be modified at a high level to add four-digit year date capability. Finally, a new COBOL program could be generated, compiled, and tested. This older, one-time reverse engineering is now built into the design feedback loop of the Booch Round-Trip OOP development model. It can be further supported with code generators that can create large amounts of code based on recurring design patterns.

    Figure 1.8. Round-Tripping Model
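The stub-and-skeleton output of such a tool chain can be suggested in miniature. This is a hypothetical sketch, not actual IDE output; the class and method names are invented.

```python
# Hypothetical shape of IDE-generated code in the round-tripping model:
# the control ("housekeeping") logic is emitted complete, while behavior
# is left as stubs to be filled in slice by slice. Names are invented.

class InvoicePrinter:
    """Skeleton generated from a UML class description."""

    def format_header(self, invoice):
        raise NotImplementedError("stub: fill in for a later slice")

    def format_lines(self, invoice):
        raise NotImplementedError("stub: fill in for a later slice")

    def print_invoice(self, invoice):            # generated control logic
        return self.format_header(invoice) + self.format_lines(invoice)

class WorkingInvoicePrinter(InvoicePrinter):     # the developer codes one slice
    def format_header(self, invoice):
        return f"Invoice {invoice['id']}\n"

    def format_lines(self, invoice):
        return "\n".join(invoice["lines"])

out = WorkingInvoicePrinter().print_invoice({"id": 42, "lines": ["2 x widget"]})
```

Round-tripping would then feed the filled-in class back to the UML processor, so the next generated skeleton reflects the slice of functionality that now exists.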

    Iterative Development or Evolutionary Model

    The iterative development model is the most realistic of the traditional software development models. Rather than being open-loop like the build-and-fix or original waterfall models, it has continuous feedback between each stage and the prior one. Occasionally it has feedback across several stages in well-developed versions, as illustrated in Figure 1.9. In its most effective applications, this model is used in an incremental iterative way. That is, applying feedback from the last stage back to the first stage results in each iteration's producing a usable executable release of the software product. A lower feedback arrow indicates this feature, but the combined incremental iterative method schema is often drawn as a circle. It has been applied to both procedural and object-oriented program development.

    Figure 1.9. Iterative Model of Software Development

    [View full size image]

    Comparison of Various Life-Cycle Models

    Table 1.1 is a high-level comparison between software development models that we have gathered into groups or categories. Most are versions or enhancements of the waterfall model. The fundamental difference between the models is the amount of engineering documentation generated and used. Thus, a more "engineering-oriented" approach may have higher overhead but can support the development of larger systems with less risk and can support complex systems with long life cycles that include maintenance and extension requirements.

    Table 1.1. Comparison of Traditional Software Development Models

    Model                Pros                                   Cons
    Build-and-fix        OK for small one-off programs          Useless for large programs
    Waterfall            Disciplined, document-driven           Result may not satisfy client
    Rapid prototyping    Guarantees client satisfaction         May not work for large applications
    Extreme programming  Early return on software development   Has not yet been widely used
    Spiral               Ultimate waterfall model               Large system in-house development only
    Incremental          Promotes maintainability               Can degenerate to build-and-fix
    OOP                  Supported by IDE tools                 May lack discipline
    Iterative            Can be used by OOP                     May allow overiteration

    Software Process Improvement

    Although the legacy models for software development just discussed are honored by time and are used extensively even today, they are surely not the latest thinking on this subject. We will describe only briefly the RUP, CMM, and ISO 9000 software process improvement models, because they will receive attention in later chapters. These are very different things, but they are considered here as a diverse set of technologies that are often "compared" by software development managers. RUP and CMM are the result of considerable government-sponsored academic research and industrial development. When rigorously applied, they yield good, even excellent, results. They also provide a documentation trail that eases the repair of any errors and bugs that do manage to slip through a tightly crafted process net. These newer methods are widely used by military and aerospace contractors who are required to build highly secure and reliable software for aircraft, naval vessels, and weapons systems. In our experience, they have had relatively little impact on enterprise software development so far, whether internally or by way of third-party vendors.

    Rational Unified Process

    The Rational Unified Process (RUP) is modeled in two dimensions, rather than linearly or even circularly, as the previously described models are. The horizontal axis of Table 1.2 represents time, and the vertical axis represents logical groupings of core activities.[8]

    Table 1.2. A Two-Dimensional Process Structure: Rational Unified Model

    Workflow / Phase: Inception | Elaboration | Construction | Transition to Next Phase

    Application model: Definition | Comparison | Clarification | Consensus
    Requirements: Gathering | Evaluation | User review | Approval
    Architecture: Analysis | Design | Implementation | Documentation
    Test: Planning | Unit test | System test | Regression testing
    Deployment: User training | User planning | Site installation | User regression testing
    Configuration management: Long-range planning | Change management | Detailed plan for evolution | Planning approvals
    Project management: Statements of work | Contractor or team identification | Bidding and selection | Let contracts or budget internal teams
    Environment: Hiring or relocation | Team building | Training | Certification

    The Rational Model is characterized by a set of software best practices and the extensive application of use cases. A use case is a set of specified action sequences, including variant and error sequences, that a system or subsystem can perform interacting with outside actors.[9] Use cases are very effective at defining software functionality[10] and even at planning to accommodate error or "noise." However, the RUP's most important advantage is its iterative process, which allows changes in functional requirements to be accommodated as they inevitably arise during system development. Not only do external circumstances drive changes to the design, but the user's understanding of system functionality also becomes clearer as that functionality develops. The RUP has been developing since 1995 and can claim well over 1,000 user organizations.
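The definition above of a use case as action sequences plus variant and error sequences can be suggested as a small data structure. The notation and the cash-withdrawal example are our own illustration, not RUP's.

```python
# Sketch (our own notation) of a use case: a named set of action
# sequences, including variants and the error sequences that plan
# for "noise." The example content is invented.

use_case = {
    "name": "Withdraw cash",
    "actor": "Bank customer",
    "main_sequence": ["insert card", "enter PIN", "choose amount",
                      "dispense cash"],
    "variants": {
        "foreign card": ["insert card", "choose language", "enter PIN",
                         "choose amount", "dispense cash"],
    },
    "error_sequences": {
        "bad PIN three times": ["insert card", "enter PIN", "reject",
                                "retain card"],
    },
}

def all_sequences(uc):
    """Enumerate every specified sequence: normal, variant, and error."""
    return [uc["main_sequence"], *uc["variants"].values(),
            *uc["error_sequences"].values()]
```

Enumerating the sequences this way is what makes use cases testable: each listed sequence, error paths included, becomes a scenario the system must handle.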

    Capability Maturity Model

    The Capability Maturity Model (CMM) for software development was developed by the Software Engineering Institute at Carnegie Mellon University. CMM is an organizational maturity model, not a specific technology model. Maturity involves continuous process improvement based on evaluation of iterative execution, gathering results, and analyzing metrics. As such, it has a very broad universe of application. The CMM is based on four principles:[11]

    Evolution (process improvement) is possible but takes time. The process view tells us that a process can be incrementally improved until the result of that process becomes adequately reliable.

    Process maturity has distinguishable stages. The five levels of the CMM are indicators of process maturity and capability and have proven effective for measuring process improvement.

    Evolution implies that some things must be done before others. Experience with CMM since 1987 has shown that organizations grow in maturity and capability in predictable ways.

    Maturity will erode unless it is sustained. Lasting changes require continued effort.

    The five levels of the CMM, in order of developing maturity, are as follows:

    Level 1 (Ad Hoc): Characterized by the development of software that works, even though no one really understands why. The team cannot reliably repeat past successes.

    Level 2 (Repeatable): Characterized by requirements management, project planning, project tracking, quality assurance, and configuration management.

    Level 3 (Defined): Organization process focus and process definition, training program, integrated software management, software product engineering, intergroup coordination, and peer reviews.

    Level 4 (Managed): Quantitative process management and software quality management.

    Level 5 (Optimizing): Defect prevention, technology change management, and process change management.

    Note that level 3 already seems to be higher than most software development organizations attain, and it would seem to be a very worthy goal for any development organization. However, the CMM has two levels of evolutionary competence/capability maturity above even this high-water mark. CMM, as well as Capability Maturity Model Integration (CMMI) and the People Capability Maturity Model (PCMM), has had enthusiastic acceptance among software developers in India. In 2000, the CMM was upgraded to CMMI, and the Software Engineering Institute (SEI) no longer maintains the CMM model. IT firms in India accounted for 50 out of 74 CMM level 5-rated companies worldwide in 2003.[12] They are also leading in other quality management systems, such as Six Sigma, ISO 9001, ISO 14001, and BS 7799. It would seem that embracing a multitude of systems and models has helped software developers in India take a rapid lead in product and process improvement, but still there is no silver bullet!

    ISO 9000-3 Software Development Guidance Standard

    This guidance standard is a guideline for the application of standards to the development, supply, and maintenance of computer software. It is not a development model like RUP, or even an organizational development model like CMM. Neither is it a certification process. It is a guidance document that explains how ISO 9001 should be interpreted within the software industry (see Figure 1.10). It has been used since 1994, having been introduced as ISO 9001 Software Quality Management.[13] It was updated in 2002 as ISO 9000-3. Prudent compliance with ISO 9000-3 may result in the following benefits:

    Increases the likelihood of quality software products

    Gives you a competitive advantage over non-ISO 9000 certified development vendors

    Assures customers of the end product's quality

    Defines the phases, roles, and responsibilities of the software development process

    Measures the efficiency of the development process

    Gives structure to what is often a chaotic process

    Figure 1.10. ISO 9000-3 Software Development Model

    [View full size image]

    The document was designed as a checklist for the development, supply, and maintenance of software. It is not intended as a certification document, like other standards in the ISO 9000 series. Copies of the guideline can be ordered from the ISO in Switzerland. Also, many consulting firms have Web sites that present the ISO 9000-3 guidelines in a cogent, simplified, and accessible way.[14]

    The TickIT process was created by the British Computer Society and the United Kingdom Department of Trade and Industry for actually certifying ISO 9000-3 software development.[15] This partnership has turned the ISO 9000-3 guideline standard into a compliance standard. It allows software vendors to be certified for upholding the ISO 9000-3 standard after passing the required audits. As with other ISO 9000 standards, there is a great deal of emphasis on management, organization, and process that we will not describe in this brief overview. Rather, we will emphasize the ISO development procedures that control software design and development. These include the use of life-cycle models to organize development and to create a suitable design method by reviewing past designs and considering what is appropriate for each new project. The following three sets of issues are addressed:

    Preparation of a software development plan to control:

    Technical activities (design, coding, testing)

    Managerial activities (supervision, review)

    Design input (functional specs, customer needs)

    Design output (design specs, procedures)

    Design validation

    Design verification

    Design review

    Design changes

    Development of procedures to control the following documents and data:

    Specifications

    Requirements

    Communications

    Descriptions

    Procedures

    Contracts

    Development of procedures to plan, monitor, and control the production, installation, and service processes for managing the following:

    Software replication

    Software release

    Software installation

    Develop software test plans (for unit and integration testing)

    Perform software validation tests

    Document testing procedures

    Much of this sounds like common sense, and of course it is. The advantage of incorporating such best practices and conventional wisdom into a guidance standard is to encourage uniformity among software vendors worldwide and a leveling of software buyers' expectations so that they are comfortable with purchasing and mixing certified vendors' products.

    Comparison of RUP, CMM, and ISO 9000

    A brief comparison of these process improvement systems is provided in Table 1.3. Such a comparison is a bit awkward, like comparing apples and oranges, but apples and oranges are both fruit. In our experience, software development managers often ask each other, "Are you using RUP, CMM, or ISO 9000?" as if these were logically discrete alternatives, whereas they are three different things.

    Table 1.3. Comparison of RUP, CMM, and ISO 9000

    Method      Pros                             Cons
    RUP         Well supported by tools          Expensive to maintain
                Supports OOP development         High training costs
                More than 1,000 users            Used downstream with RSDM
    CMM         Sets very high goals             Completely process-oriented
                Easy to initiate                 Requires long-term top
                Hundreds of users                management support
    ISO 9000-3  Provides process guidelines      Some firms may seek to gain
                Documentation facilitated        certification without process
                Comprehensive, detailed          redesign

    The RUP is very well supported by an extensive array of software development and process management tools. It supports the development of object-oriented programs. It is expensive to install and has a rather steep learning curve with high training costs, but it is well worth the time and cost to implement. RUP is estimated to be in use by well over 1,000 firms. Its usability with the RSDM will be detailed later. The CMM sets very high ultimate goals but is easy to initiate. However, it does require a long-term commitment from top management to be effective over time and to be able to progress to maturity level 3 and beyond. It is estimated to have well over 400 users in the United States. As stated earlier, it is very popular in India, where the majority of CMM user firms are located. ISO 9000-3 was updated in 2002. It is essential for the development of third-party enterprise software to be sold and used in the EEC. A large number of consulting firms in both Europe and North America are dedicated to training, auditing, and compliance coaching for ISO 9000. Users report that it works quite well, although at first it appears to be merely institutionalized common sense. Perhaps the only downside is that, because it is a required certification, some firms may just try to get the certification without really redesigning their software development processes to conform to the guidelines.

    Table 21.4 in Chapter 21 compares different quality systems currently common in software companies. These systems serve different needs and can coexist. The need for integration is discussed in Chapter 21 (see Case Study 21.1) and Chapter 27.

    ADR Method

    ADR stands for assembly (A), disassembly (D), and reassembly (R), the major aspects of component-based software development.[16] Software components in enterprise systems are fairly large functional units that manage the creation and processing of a form, which usually corresponds to an actual business form in its electronic instance. For example, a general ledger (GL) system may consist of 170 components, some 12 or more of which must be used to create a general ledger for a firm from scratch. Each component in the GL application corresponds to an accounting function that the application is designed to perform. This approach arose in the early days of 4GL (Fourth-Generation Language) software development and has continued to be popular into the OOP era. OOP components tend to be somewhat smaller than 4GL components due to the class factoring process that naturally accompanies Object-Oriented Analysis and Design. In the cited paper,[16] Professor Ramamoorthy describes the evolution of software quality models and generalizes and classifies them.
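The assembly/disassembly/reassembly cycle can be sketched minimally. This is a hypothetical illustration; the component names and the tiny `Application` registry are invented, and a real GL application would of course involve far more.

```python
# Sketch (invented names) of the ADR idea: an application is assembled
# from large functional components, any one of which can be removed and
# a replacement swapped in without rebuilding the whole system.

class Component:
    def __init__(self, name):
        self.name = name

class Application:
    def __init__(self):
        self.components = {}

    def assemble(self, component):              # A: add a component
        self.components[component.name] = component

    def disassemble(self, name):                # D: remove a component
        return self.components.pop(name)

    def reassemble(self, component):            # R: swap in a replacement
        self.components[component.name] = component

gl = Application()
for name in ["chart-of-accounts", "journal-entry", "trial-balance"]:
    gl.assemble(Component(name))
old = gl.disassemble("journal-entry")
gl.reassemble(Component("journal-entry"))       # e.g., an upgraded version
```

The point of the sketch is the granularity: each component stands for a whole accounting function, so maintenance becomes a matter of swapping components rather than editing a monolith.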

    Seven Components of the Robust Software Development Process

    Software has become an increasingly indispensable element of a wide range of military, industrial, and business applications. But it is often characterized by high costs, low reliability, and delays that are often downright unacceptable (see Sidebar 1.2). Software life-cycle costs (LCC) typically far exceed the hardware costs. Low software quality has a direct impact on cost. Some 40% of software development cost is spent on testing to remove errors and to ensure high quality, and 80 to 90% of the software LCC goes to fix, adapt, and expand the delivered program to meet users' unanticipated, changing, or growing needs.[17] While software costs far exceed hardware costs, the corresponding ratio of software failure frequency to hardware failure frequency can be as high as 100:1, and even higher for more advanced microprocessor-based systems.[18] Clearly, these are huge issues that cannot be addressed effectively by continuing to deploy traditional software development approaches.

    It is well known that various quality issues are interrelated. Moreover, high costs and delays can be attributed to low software reliability.[18] Thus, it is conceivable that several objectives may be met with the correct strategic intervention. Quality has a great many useful attributes, and you must clearly understand the customer perspectives throughout the software life cycle. This helps you not only understand the changing user needs but also avoid cost escalation, delays, and unnecessary complexity. You need to deploy a multipronged strategy to address quality issues in large and complex software such as enterprise applications. The Seven Components of a Robust Software Development Process are shown in Figure 1.11. They are as follows:

    1. A steadfast development process that can provide interaction with users by identifying their spoken and unspoken requirements throughout the software life cycle.

    2. Provision for feedback and iteration between two or more development stages as needed.

    3. An instrument to optimize design for reliability (or other attributes), cost, and cycle time at once at upstream stages. This particular activity, which addresses software product robustness, is one of the unique features of the RSDM, because other software development models do not provide it.

    4. Opportunity for the early return on investment that incremental development methods provide.

    5. Step-wise development to build an application as needed and to provide adequate documentation.

    6. Provision for risk analyses at various stages.

    7. Capability to provide for object-oriented development.

    Figure 1.11. Seven Components of the Robust Software Development Process

    Robust Software Development Model

    Our proposed model for software development is based on DFTS technology, as shown in Figure 2.6 in Chapter 2. DFTS technology consists of the Robust Software Development Model, Software Design Optimization Engineering, and Object-Oriented Design Technology. As you will soon see, it is a more elaborate combined form of the cascade and iterative models with feedback at every level. In fact, it attempts to incorporate the best practices and features from various development methodologies and collectively provides for a customer-focused robust software technology. It is intended to meet all seven key requirements for a robust software architecture development method just identified. Although Taguchi Methods have been applied to upstream software design in a few cases,[19], [20] there is not yet an extensive body of literature devoted to this area.

    The primary focus of this book is to explain this model in the context of robust software design and to show you how you can use it for DFTS. The purpose of this book is to give you a map for carrying robust design from the hardware design arena to that of software design and development. We will also establish a context for methodologies such as Taguchi Methods and Quality Function Deployment (QFD) in the software arena. We will show you how they can be used as the upstream architectural design process for some of the established software quality models in Professor Ramamoorthy's taxonomy, as well as for the software quality management processes that will allow the development organization using them to become a learning organization.

    Sidebar 1.2: Mission-Critical Aircraft Control Software

    The control computer of a Malaysian Airlines Boeing 777 seemed intent on crashing itself on a trip from Perth to Kuala Lumpur on August 1, 2005. According to The Australian newspaper, the Malaysian flight crew had to battle for control of the aircraft after a glitch occurred in the computerized control system. The plane was about an hour into the flight when it suddenly climbed 3,000 feet and almost stalled. The Australian Air Transport Safety Bureau report posted on its Web site said the pilot was able to disconnect the autopilot and lower the nose to prevent the stall, but the auto throttles refused to disengage. When the nose pitched down, they increased power.[a] Even pushing the throttles to idle didn't deter the silicon brains, and the plane pitched up again and climbed 2,000 feet the second time. The pilot flew back to Perth on manual, but the auto throttles wouldn't turn off. As he was landing, the primary flight display gave a false low airspeed warning, and the throttles jammed again. The display also warned of a nonexistent wind shear. Boeing spokesman Ken Morton said it was the only such problem ever experienced on the 777, but airlines have been told via an emergency directive to load an earlier software version just in case. The investigation is focusing on the air data inertial reference unit, which apparently supplied false acceleration figures to the primary flight computer.

    More recently, a JetBlue Airbus 320 flight from Burbank, California to New York on September 21, 2005 attracted several hours of news coverage when the control software locked its front landing gear wheels at a 90-degree angle at takeoff. After dumping fuel for three hours, the plane landed without injuries at LAX. However, the front landing gear was destroyed in the process in a blaze of sparks and fire. An NTSB official called the problem common.[b] A Canadian study issued last year reported 67 nose wheel incidents with Airbus 319, 320, and 321 models. The NTSB official leading the investigation said that "if we find a pattern, we will certainly do something" (from the Los Angeles Times, September 22, 2005). Software failures in aircraft control systems are likely to incur a much higher social and economic cost than an error in a client's invoice, or even an inventory mistake. Unfortunately, they are much harder to find and correct as well.

    [a] http://www.atsb.gov.au/aviation/occurs/occurs_detail.cfm?ID=767

    [b] http://www.airweb.faa.gov/Regulatory_and_Guidance_Library/rgad.nsf/0/25F9233FE09B613F8625706C005D0C53?OpenDocument

    Key Points

    In spite of 50 years of software development methodology and process improvement, we need a new paradigm to develop increasingly complex software systems.

    Productivity gains in software development have not kept up with the performance increases in hardware. New hardware technology enables and encourages new applications, which require much larger and more complex programs.

    Perhaps a dozen models of software development aim to improve development productivity and/or enhance quality. They all work reasonably well when faithfully and diligently applied.

    The Department of Defense, as a leader in the use of sophisticated computer applications and dedicated or embedded applications, has sponsored a number of software development process improvement initiatives.

    The Design for Trustworthy Software (DFTS) technology addresses the challenges of producing trustworthy software using a combination of the iterative Robust Software Development Model, Software Design Optimization Engineering, and Object-Oriented Design Technology.

    Additional Resources

    http://www.prenhallprofessional.com/title/0131872508

    http://www.agilenty.com/publications

    Internet Exercises

    1. Search the Internet for U.S., Canadian, and Australian government reports on failure in aircraft control software. Is this problem getting better or worse?

    2. Search the Internet for sites dedicated to the Rational Unified Process. How would you present an argument to your management to employ this process for software development in your owno