
AN AMERICAN NATIONAL STANDARD

ASME V&V 10.1-2012

An Illustration of the Concepts of Verification and Validation in Computational Solid Mechanics


Date of Issuance: April 16, 2012

This Standard will be revised when the Society approves the issuance of a new edition.

ASME issues written replies to inquiries concerning interpretations of technical aspects of this Standard. Periodically certain actions of the ASME V&V Committee may be published as Cases. Cases and interpretations are published on the ASME Web site under the Committee Pages at http://cstools.asme.org/ as they are issued.

Errata to codes and standards may be posted on the ASME Web site under the Committee Pages to provide corrections to incorrectly published items, or to correct typographical or grammatical errors in codes and standards. Such errata shall be used on the date posted.

The Committee Pages can be found at http://cstools.asme.org/. There is an option available to automatically receive an e-mail notification when errata are posted to a particular code or standard. This option can be found on the appropriate Committee Page after selecting “Errata” in the “Publication Information” section.

ASME is the registered trademark of The American Society of Mechanical Engineers.

This code or standard was developed under procedures accredited as meeting the criteria for American National Standards. The Standards Committee that approved the code or standard was balanced to assure that individuals from competent and concerned interests have had an opportunity to participate. The proposed code or standard was made available for public review and comment, which provides an opportunity for additional public input from industry, academia, regulatory agencies, and the public-at-large.

ASME does not “approve,” “rate,” or “endorse” any item, construction, proprietary device, or activity.

ASME does not take any position with respect to the validity of any patent rights asserted in connection with any items mentioned in this document, and does not undertake to insure anyone utilizing a standard against liability for infringement of any applicable letters patent, nor assumes any such liability. Users of a code or standard are expressly advised that determination of the validity of any such patent rights, and the risk of infringement of such rights, is entirely their own responsibility.

Participation by federal agency representative(s) or person(s) affiliated with industry is not to be interpreted as government or industry endorsement of this code or standard.

ASME accepts responsibility for only those interpretations of this document issued in accordance with the established ASME procedures and policies, which precludes the issuance of interpretations by individuals.

No part of this document may be reproduced in any form, in an electronic retrieval system or otherwise, without the prior written permission of the publisher.

The American Society of Mechanical Engineers
Three Park Avenue, New York, NY 10016-5990

Copyright © 2012 by
THE AMERICAN SOCIETY OF MECHANICAL ENGINEERS

All rights reserved
Printed in U.S.A.


CONTENTS

Foreword
Committee Roster
Correspondence With the V&V Committee

1 Executive Summary
2 Introduction
3 Purpose and Scope
4 Background
5 Verification and Validation Plan
6 Model Development
7 Verification
8 Validation Approach 1
9 Validation Approach 2
10 Summary
11 Concluding Remarks
12 References

Figures
1 V&V Activities and Products
2 Validation Hierarchy Illustration for an Aircraft Wing
3 Schematic of the Hollow Tapered Cantilever Beam
4 Estimating a Probability Density Function From an Uncertainty Estimate, ε
5 Illustration of the Two Validation Approaches
6 Illustration of the Basis of the Area Metric
7 Errors in Normalized Deflections
8 Area Between the Experimental and Computed CDF
9 Empirical CDF of the Validation Experiment Data
10 Random Variability in Modulus, E, Used in the Computational Model
11 Random Variability in Support Flexibility, fr, Used in the Computational Model
12 Input Uncertainty Propagation Process
13 Computed CDF of Beam Tip Deflection
14 CDF of the Model-Predicted Tip Deflection, Empirical CDF of the Validation Experiment Tip Deflections, and Area Between Them (Shaded Region)

Tables
1 Normalized Deflections
2 Numerical Solutions for Tip Deflections
3 Measured Beam-Tip Deflections From the Validation Experiments
4 Test Measurements of the Modulus of Elasticity, E
5 Test Estimates of the Support Flexibility


FOREWORD

From comments by the readership of the ASME Guide to Verification and Validation in Computational Solid Mechanics [1] (henceforth referred to as V&V 10), the ASME V&V 10 Committee on Verification and Validation in Computational Solid Mechanics recognized the need for another document that would provide a more detailed step-by-step description of a V&V application. The present document strives to fill that need by applying the general concepts of V&V to an illustrative example.

The authority of a standards document derives from the consensus achieved by the members of a standards committee (about 20 active members in V&V 10), whose interests span a broad range. Achieving such consensus is a long and difficult task, but the ultimate benefit to the computational mechanics community justifies the effort. Many compromises were made in the creation of the present illustrative example document. The main balance sought was to communicate to the reader on a basic level without distorting the many nuances associated with the exacting principles of verification and validation. The danger with being too basic is that the reader might take simplified concepts and statements out of the context of the illustrative example, and generalize them to situations not intended by the authors. The corresponding danger with being too exacting is that the reader might neither understand nor desire to understand the subtle points introduced by repeated qualification of terms (e.g., a “validated model” versus “a model validated for its intended use”). In most cases, the Committee favored clarity over completeness.

The scope of the document has evolved considerably since its inception. For example, the initial intent was to include as a lead-off example a one-experiment-to-one-calculation comparison without regard for uncertainties in either, since this is easiest to communicate and relate to readers and their possible past experience with validation. However, as a result of internal discussions and external reviews, the Committee came to accept that to maintain consistency with V&V 10, the recommended validation procedures must always account for uncertainties in both the calculations and the data that are compared. This led the Committee to restrict attention to validation requirements and metrics that depend directly on the underlying distributions characterizing the uncertainties in the calculations and data.

ASME V&V 10.1-2012 was approved by the V&V 10 Committee on December 2, 2011, the V&V Standards Committee on February 20, 2012, and the American National Standards Institute as an American National Standard on March 7, 2012.


ASME V&V COMMITTEE
Verification and Validation in Computational Modeling and Simulation

(The following is the roster of the Committee at the time of approval of this Standard.)

STANDARDS COMMITTEE OFFICERS

C. J. Freitas, Chair
S. W. Doebling, Vice Chair
R. L. Crane, Secretary

STANDARDS COMMITTEE PERSONNEL

J. A. Cafeo, General Motors R&D
H. W. Coleman, University of Alabama, Huntsville
R. L. Crane, The American Society of Mechanical Engineers
S. W. Doebling, Los Alamos National Laboratory
K. J. Dowding, Sandia National Laboratories
C. J. Freitas, Southwest Research Institute
R. R. Schultz, Idaho National Engineering Laboratory
B. H. Thacker, Southwest Research Institute

V&V 10 SUBCOMMITTEE — VERIFICATION AND VALIDATION IN COMPUTATIONAL SOLID MECHANICS

S. W. Doebling, Chair, Los Alamos National Laboratory
J. A. Cafeo, Vice Chair, General Motors R&D
R. L. Crane, Secretary, The American Society of Mechanical Engineers
M. C. Anderson, Los Alamos National Laboratory
R. M. Ferencz, Lawrence Livermore National Laboratory
J. K. Gran, SRI International
T. K. Hasselman, ACTA, Inc.
S. R. Hsieh, Lawrence Livermore National Laboratory
E. H. Khor, ANSYS, Inc.
H. M. Kim, The Boeing Company
R. W. Logan, Consultant
H. U. Mair, Johns Hopkins University Applied Physics Laboratory
D. M. Moorcroft, Federal Aviation Administration
W. L. Oberkampf, William L. Oberkampf Consulting
J. L. O’Daniel, U.S. Army Corps of Engineers, Engineer Research and Development Center
T. L. Paez, Thomas Paez Consulting
R. Rebba, General Motors R&D
C. P. Rogers, Crea Consultants Ltd.
J. F. Schultze, Los Alamos National Laboratory
L. E. Schwer, Schwer Engineering and Consulting Services
D. A. Simons, TASC, Inc.
B. H. Thacker, Southwest Research Institute
Y. Zhao, BetterLife Medical, LLC


CORRESPONDENCE WITH THE V&V COMMITTEE

General. ASME Standards are developed and maintained with the intent to represent the consensus of concerned interests. As such, users of this Standard may interact with the Committee by requesting interpretations, proposing revisions, and attending Committee meetings. Correspondence should be addressed to:

Secretary, V&V Standards Committee
The American Society of Mechanical Engineers
Three Park Avenue
New York, NY 10016-5990

Proposing Revisions. Revisions are made periodically to the Standard to incorporate changes that appear necessary or desirable, as demonstrated by the experience gained from the application of the Standard. Approved revisions will be published periodically.

The Committee welcomes proposals for revisions to this Standard. Such proposals should be as specific as possible, citing the paragraph number(s), the proposed wording, and a detailed description of the reasons for the proposal, including any pertinent documentation.

Proposing a Case. Cases may be issued for the purpose of providing alternative rules when justified, to permit early implementation of an approved revision when the need is urgent, or to provide rules not covered by existing provisions. Cases are effective immediately upon ASME approval and shall be posted on the ASME Committee Web page.

Requests for Cases shall provide a Statement of Need and Background Information. The request should identify the Standard, the paragraph, figure or table number(s), and be written as a Question and Reply in the same format as existing Cases. Requests for Cases should also indicate the applicable edition(s) of the Standard to which the proposed Case applies.

Interpretations. Upon request, the V&V Committee will render an interpretation of any requirement of the Standard. Interpretations can only be rendered in response to a written request sent to the Secretary of the V&V Standards Committee.

The request for interpretation should be clear and unambiguous. It is further recommended that the inquirer submit his/her request in the following format:

Subject: Cite the applicable paragraph number(s) and the topic of the inquiry.
Edition: Cite the applicable edition of the Standard for which the interpretation is being requested.
Question: Phrase the question as a request for an interpretation of a specific requirement suitable for general understanding and use, not as a request for an approval of a proprietary design or situation. The inquirer may also include any plans or drawings that are necessary to explain the question; however, they should not contain proprietary names or information.

Requests that are not in this format may be rewritten in the appropriate format by the Committee prior to being answered, which may inadvertently change the intent of the original request.

ASME procedures provide for reconsideration of any interpretation when or if additional information that might affect an interpretation is available. Further, persons aggrieved by an interpretation may appeal to the cognizant ASME Committee or Subcommittee. ASME does not “approve,” “certify,” “rate,” or “endorse” any item, construction, proprietary device, or activity.

Attending Committee Meetings. The V&V Standards Committee regularly holds meetings that are open to the public. Persons wishing to attend any meeting should contact the Secretary of the V&V Standards Committee.


ASME V&V 10.1-2012

AN ILLUSTRATION OF THE CONCEPTS OF VERIFICATION AND VALIDATION IN COMPUTATIONAL SOLID MECHANICS

1 EXECUTIVE SUMMARY

This Standard describes a simple example of verification and validation (V&V) to illustrate some of the key concepts and procedures presented in V&V 10. The example is an elastic, tapered, cantilever, box beam under nonuniform static loading. The validation problem entails a uniform loading over half the length of the beam. The response of interest is the tip deflection. The validation test plan and the metrics and accuracy requirements for comparing the calculated responses with measurements are specified in the V&V Plan, which is developed in the first phase of the V&V program. In setting validation requirements and establishing a budget for the V&V program, the V&V Plan considers the level of risk in using the model for its intended purpose. Successfully meeting the V&V requirements means that the computational model for the tapered beam has been validated for the intended use discussed in this document, viz., predicting the response of a tapered beam tested in the laboratory.

To encompass as much of the general V&V process as possible in this example, a computational model was developed specifically for the tapered beam problem, even though it is more likely that a general-purpose finite-element code would be used in practice. The conceptual model is a Bernoulli–Euler beam for which the governing equations are solved with the finite-element method. The computational model was verified (checked for proper programming of the mathematical model and the solution procedure) by comparing computed values of tip displacement with an analytical solution to a relevant but simpler problem. A mesh refinement study initially revealed that the model did not converge at the expected theoretical rate. Further diagnosis revealed a programming error, correction of which led to the proper convergence rate. Knowing the allowable error due to lack of convergence then allowed an appropriate level of mesh refinement to be selected.

For validation (comparing with experimental results), 10 virtual trials of the same test were performed to quantify the distribution of results due to unintended variations in material properties, construction of the test specimens, and test execution. Other virtual tests were conducted to characterize uncertainties in selected model input parameters, namely rotational support stiffness and elastic modulus.


The following two validation approaches were considered:

(a) a case where uncertainty data were not available from tests and were instead obtained from subject matter experts

(b) a case where uncertainty data were available from repeat tests and calculations

In both cases the same metric was employed to demonstrate the use of uncertainty information in the model-test comparison. The validation metric is a measure of the relative error between the calculated and measured tip deflection of the beam.

The same model was used in both validation cases. In each case, the metric was compared to an accuracy requirement of 10%. In both cases the model was validated successfully. Had the validation been unsuccessful, it would have been necessary to correct any model deficiencies, collect additional or improved experimental data, or relax the validation requirement.

2 INTRODUCTION

This Standard is the first in a planned series of documents elaborating on the verification and validation topics addressed initially in the ASME V&V 10 Committee’s seminal document, Guide to Verification and Validation in Computational Solid Mechanics (ASME V&V 10) [1]. V&V 10 was intentionally written as a high-level summary of the essential principles of verification and validation.

The present document provides a step-by-step illustration of the key concepts of verification and validation. It is intended as a primer that illustrates much of the methodology comprising verification and validation through a consistent example.

The example selected is a tapered cantilever beam under a distributed load. The deformation of the beam is modeled with traditional Bernoulli–Euler beam theory. The supported end of the beam is fixed against deflection but constrained by a rotational spring. This non-ideal boundary condition, along with variation in the beam’s elastic modulus, enables us to illustrate the treatment of uncertain model parameters.

The illustrative portion of the document begins with the Verification and Validation Plan (section 5). This plan is the recommended starting point for all verification and validation activities. The V&V Plan provides the framework for conducting the verification and validation assessment and provides an outline for the timing of activities and estimating required resources. The V&V Plan is developed as a team effort with participation by the customer (who provides the requirements), decision makers, experimentalists, modelers, and those who will perform the validation comparisons.

Having agreed on a V&V Plan, the next step is model development, which includes three types of models: conceptual, mathematical, and computational (section 6).

Developed on a parallel path with the mathematical and computational models are the validation experiments — the physical realizations of the reality of interest (tapered cantilever beam) that will eventually serve as the referent against which the computational model predictions are compared in the validation phase.

Once the computational model has been developed, an assessment of the agreement between the statement of the mathematical model and the results from the computational model is required. This activity is called verification and comprises two main parts: code and calculation verification (section 7). Code verification is typically performed by comparing the results from analytical solutions to the corresponding computational model results. Calculation verification is performed with successive grid refinements and estimations of the discretization error using techniques based on Richardson extrapolation [5].

Having verified that the computational model is mistake-free for the cases tested, and having established a level of mesh refinement that produces an acceptable discretization error, the predictive calculation of the validation experiments can proceed. In parallel, the validation experiments can be conducted and results recorded. The validation assessment is then made by comparing the outcomes of the model prediction and the validation experiment to determine whether the validation requirement has been satisfied.

The remainder of this document is organized as follows. The Purpose and Scope (section 3) describes which parts of the V&V process are, and are not, covered by the illustrative example. Next, the Background (section 4) provides a review of the verification and validation process and describes how the illustrative example fits into an overall validation hierarchy — a key element in the validation process. Sections 5 through 9 contain the illustrative example. The document ends with a brief Summary (section 10), which restates the key results from the illustrative example, and finally some Concluding Remarks (section 11) providing a look to the future of ASME V&V 10 verification and validation activities.

3 PURPOSE AND SCOPE

The purpose of this document is to illustrate, by detailed example, the most important aspects of V&V described in the Committee’s framework document, Guide to Verification and Validation in Computational Solid Mechanics (V&V 10). V&V 10 intentionally omitted examples, as its purpose was to provide “a common language, a conceptual framework, and general guidance for implementing the process of computational model V&V,” an already broad scope for a 27-page consensus document. The present document is the first in a series of more detailed and practical ones the Committee has planned to incrementally fill the gap between V&V 10 and a set of recommended practices.

To appeal to a broad range of mechanics backgrounds, a cantilever beam problem has been selected to illustrate the following aspects of V&V. The numbers in parentheses are references to sections and paragraphs in V&V 10 as follows:

(a) validation plan (para. 2.6)
(1) validation testing (para. 2.6.1)
(2) selection of response features (para. 2.6.2)
(3) accuracy requirements (para. 2.6.3)
(b) model development (section 3)
(1) conceptual model including intended use (para. 3.1)
(2) mathematical model (para. 3.2)
(3) computational model (para. 3.3)
(c) verification (section 4)
(1) code verification: comparisons using analytical solution (para. 4.1)
(2) calculation verification: mesh convergence (para. 4.2)
(d) parameter estimation (para. 3.4.1)
(e) validation (section 5)
(1) validation experiments (paras. 5.1 and 5.2)
(2) comparison of experimental results and model prediction (para. 5.3)
(3) decision of model adequacy (para. 5.3.2)
(f) uncertainty quantification (para. 5.2)
(g) documentation (paras. 2.7, 4.3, and 5.4)

4 BACKGROUND

V&V is required to provide confidence that the results from computational models used to solve complex problems are sufficiently accurate and indeed solve the intended problem. The conceptual aspects of V&V are described in detail in V&V 10. V&V is used with increasing frequency in recognition that confidence in increasingly complex simulations can only be established through a formal, standardized process.

V&V includes assessment activities that are performed in the process of creating and applying computational models to address technical questions about the performance of physical systems. The overall process is summarized in Fig. 1, which is taken from V&V 10. The present document describes and provides examples of most activities in the figure in a presentation unencumbered by the complexities of a typical real-world physical system and its associated mathematical model.

Fig. 1 V&V Activities and Products

[Flowchart. From the Reality of Interest (Component, Subassembly, Assembly, or System), abstraction yields the Conceptual Model, which feeds two parallel branches. Modeling branch: Mathematical Modeling → Mathematical Model → Implementation (code verification) → Computational Model → Calculation (calculation verification, preceded by preliminary calculations) → Simulation Results → Uncertainty Quantification → Simulation Outcomes. Experimental branch: Physical Modeling → Physical Model → Implementation → Experiment Design → Experimentation → Experimental Data → Uncertainty Quantification → Experimental Outcomes. The outcomes meet in a Quantitative Comparison (validation). If agreement is acceptable, proceed to the next reality of interest in the hierarchy; if not, revise the appropriate model or experiment.]

GENERAL NOTE: This figure also appears as Fig. 4 in V&V 10 [1].

4.1 Verification

Verification is defined as the process of determining that a computational model accurately represents the underlying mathematical equations and their solution. Verification has two aspects: code verification and calculation verification. Code verification is defined as the process of ensuring that there are no programming errors and that the numerical algorithms for solving the discrete equations yield accurate solutions with respect to the true solutions of the governing equations. In addition to numerical code verification — illustrated here via a convergence analysis — code verification also includes formal software quality engineering (SQE) processes; SQE will not be discussed in this Standard. Calculation verification is defined as the process of determining the solution accuracy of a particular calculation. Both numerical code verification and calculation verification will be demonstrated by applying a simple beam element code to successively more finely meshed models of statically loaded beams.

4.2 Validation

Validation is defined as the process of determining the degree to which a computational model is an accurate representation of the real world from the perspective of the intended uses of the model. One of the steps in the validation process is a comparison of the results predicted by the model with corresponding quantities observed in the validation experiments, so that an assessment of the model accuracy is obtained. Several possible validation activities will be mentioned with regard to the simple beam example.

Fig. 2 Validation Hierarchy Illustration for an Aircraft Wing

[Hierarchy, top to bottom. Complete System: Aircraft. Subsystems: Propulsion System, Structure, Control System, Passenger System. Assemblies: Propulsion Structure, Wings, Nose Section, Fuselage, Tail Section. Subassemblies: Airfoil, Control Surfaces. Components: Aluminum Box Structure, Hydraulic System, Electrical Cabling, Fuel.]

An important initial step in the model development process is to define what is to be predicted and how accurate the prediction needs to be. These will in turn depend on system design requirements and on possible consequences of system or subsystem failure, as well as project cost and schedule requirements. The validation example used in this document is driven by customer requirements for the accurate prediction of aircraft wing tip deflection. For the example, however, we shall only address validation of a computational model for a simplified laboratory structure that is related to an aircraft wing.

Having defined the top-level reality of interest as the wing of a complete aircraft, the model development team can proceed to construct a validation hierarchy. An example of such a hierarchy is shown in Fig. 2. The validation hierarchy starts as a top-down decomposition of the physical system into its subsystems, assemblies, subassemblies, and components. From the bottom up, the hierarchy can be seen as a sequence of specific and relevant models representing distinct realities of interest leading to the one ultimately to be modeled.

Careful construction of a validation hierarchy is of paramount importance, because it defines the various problem characteristics to be captured by different elements in the hierarchy, it encapsulates the coupling and interactions among the various elements in the hierarchy, and it suggests the validation experiments that should be performed for each element. Each element in the hierarchy requires a model, the validation of which provides an estimate of its degree of accuracy and evidence of its adequacy for use in the complete system model. The component model labeled “Aluminum Box Structure” in Fig. 2 represents the tapered cantilever beam discussed from this point forward. The cantilever beam model represents one particular reality of interest in the overall aircraft wing validation program.

If experimental data cannot, or will not, be obtained at some higher tier, then model accuracy assessment cannot be accomplished at that tier. As a result, a prediction must be made for the higher tier using the general concept of extrapolation of the model for the conditions of the intended use at the higher tier. For this situation, an estimate of the total uncertainty in the prediction must be made for the system response quantities of interest. The total uncertainty at the higher tier is a combination of

(a) the uncertainty in the model and the experimental data at the tier where experimental data are available
(b) the uncertainty in the modeling of the coupling among the models occurring at the higher tier
(c) the uncertainty in the extrapolation to the higher tier of the model along with all of the input data to the model

4.3 Documentation

When a V&V effort ends, the most definitive statement of that effort that will endure is the V&V documentation. Considering the significant resources typically required for any V&V effort, to leave the effort undocumented, or poorly documented, squanders the future benefit of those expended resources.


Just as the V&V process goes beyond simply conducting calculations in its attempt to establish the credibility of the results, so too the documentation of the V&V process needs to go beyond merely reporting quantitative results. Therefore, it is not sufficient for V&V documentation to provide only physics equations and tables of numbers that summarize a set of complex simulations. Rather, the essence of V&V documentation is to provide the rationale for the selected physics equations, list assumptions, define metrics, explain the relationship between numerical and experimental results, and catalog uncertainties. For example, an explanation of the rationale for the choice of a particular constitutive model will have more future value than a mere listing of the constitutive model parameters. The V&V documentation reader will want to know the answers to “why” questions as much as the quantitative results.

The scope and level of detail of the V&V documentation will depend on the potential consequences of the intended uses of the model, with routine in-house projects probably requiring fairly brief and straightforward documentation. At the other end of the spectrum, much more detailed documentation will be required for high-consequence model predictions or, for example, when results will be submitted to regulatory agencies for approval to proceed with new designs or products.

To be done properly, V&V documentation should start at the beginning of the effort by documenting the V&V Plan (section 5), and then proceed through all phases of the V&V effort, recording not only the successes but also the failures; failures often teach more than successes.

V&V documentation should comply with available local and formal guidance related to the intended model use and area of application. For example, MIL-STD-3022 [2] has templates for the V&V Plan and V&V Report for simulations within Department of Defense activities.

5 VERIFICATION AND VALIDATION PLAN

The V&V Plan is a document that describes how the V&V process will be conducted. It is prepared after the validation hierarchy has been defined but before many of the details of the model or experiments have been fleshed out. It is primarily driven by customer requirements, but should reflect the views of all interested parties — decision makers, analysts, and experimentalists — regarding what the modeling effort is to accomplish and — to a limited extent — how it is to be accomplished.

The starting point for a V&V Plan is the customer’s description of the top-level reality of interest and of the intended use of the model. Next, with a level of customer involvement that depends on the customer’s modeling or testing experience, a validation hierarchy should be developed as exemplified in Fig. 2. The key items in the V&V Plan are

(a) description of the top-level reality of interest
(b) statement of the intended uses of the top-level model
(c) validation hierarchy, including descriptions of all lower level realities of interest and intended uses
(d) for each item in the hierarchy, the following:
(1) selection of response features [i.e., system response quantities (SRQs)] to be measured, computed, and compared
(2) statement of verification requirements, including (as applicable) software engineering methods and iterative or spatial convergence checks
(3) metrics to be used for comparing computed results with experimental measurements
(4) model accuracy requirements (i.e., ranges of the metrics for which the simulation will be deemed adequate for its intended use)
(5) recommended courses of action if model accuracy requirements are not satisfied
(6) a list of validation experiments to be performed
(7) estimated cost and schedule
(8) programmatic assumptions and limitations (e.g., assumptions about being able to conduct particular types of experiments, or decisions to use previously validated submodels or available experimental data)

All of these items except the last two will be briefly discussed in the next few subsections in the context of the selected example. Further detail about SRQs, validation metrics, and validation accuracy requirements will be provided in sections 8 and 9.

5.1 Reality of Interest, Intended Use, and Response Features

Referring to Fig. 2, an assemblies-level reality of interest is an aircraft wing. However, the Committee’s collective experience has shown that failure to validate models at lower levels in the validation hierarchy is often the cause for subsequent model validation failures at higher levels in the hierarchy. Thus, among the various lowest level components shown in Fig. 2, the decision was made to validate the response of the “Aluminum Box Structure.” This is the present illustration’s reality of interest. Specifically, the structure to be considered, shown schematically in Fig. 3, is a hollow, tapered, cantilever beam under static loading in a laboratory environment.

The intended use of the model, in the context of this simple example, is to demonstrate competence in modeling a low-level component. If such low-level component models cannot be successfully validated, there is no hope of moving up the hierarchy to validate more complex subassemblies.

The SRQ of interest is the transverse tip deflection of the beam. In general, it is recommended to consider multiple SRQs to enhance credibility in the model as a whole; however, for conciseness only the tip deflection is considered here.

Fig. 3 Schematic of the Hollow Tapered Cantilever Beam

[Schematic: a hollow, tapered cantilever beam, fixed at its wide end and loaded over the outer half of its length.]

For the planned experiments, tapered beams are to be embedded at their wide end into a stiff fixture approximating a “fixed-end” or cantilevered boundary condition. The beams are to be loaded continuously along the outer half of their lengths.

During the experiment planning, it was acknowledged that the “fixed-end” boundary condition can only be approximated in the laboratory. Thus in the model development the translational constraint at the boundary will be assumed fixed, but the rotational constraint will be assumed to vary linearly with the magnitude of the moment reaction.

Additionally, the beam model to be developed is assumed to have negligible shear deformation, and thus shear deformation is ignored in the mathematical model. For the prescribed magnitude of the loading, the deflection of the beam will be small relative to the beam depth, so a small-displacement theory will be used, and the beam material is assumed to be linear elastic.

These assumptions feed directly into the conceptual model of the physical structure, which will be defined precisely in para. 6.1, and which guides both the development of the validation experiments and the definition of the mathematical model.

5.2 Verification Requirements

In this example, both code and calculation verification will be performed. The requirements for code verification are as follows:

(a) It is conducted using the same system response quantities as will be measured and used for validation.

(b) It demonstrates that the numerical algorithm converges to the correct solution of a problem closely related to the reality of interest as the grid is refined. This can be difficult or impractical in many cases, but without it, the code is not verified.
(c) It demonstrates that the algorithm converges at the expected rate.

The requirement for calculation verification in general is to demonstrate that the numerical error (due to incomplete spatial or iterative convergence) in the SRQs of interest be a small fraction of the validation requirement. In this example, the validation requirement will be 10%, and the numerical error is required to be no greater than 2% of that (i.e., 0.2%).

5.3 Validation Approaches, Metrics, and Requirements

Two different validation approaches are demonstrated in this Standard. They differ mainly in the source of information used to quantify the uncertainties in the computed and measured values of the SRQ. The V&V Plan must specify which approach is to be taken. The plan must also specify a metric that incorporates the uncertainties in a way that provides a single measure of the relative difference between the simulation outcomes and the validation experiments. It must also specify a validation requirement, representing the maximum acceptable difference between simulation and experiment in terms of the selected metric. Measured results from a validation experiment form the referent against which the computational model predictions are compared (via the metric) to assess the accuracy of the computational model. After the experiments and calculations are completed, the metric is calculated and compared to the requirement to determine whether the model has been validated.

Fig. 4 Estimating a Probability Density Function From an Uncertainty Estimate, ε

[Figure: a normal PDF centered at the mean μ, with the estimated half-width ε marked on either side of the mean.]

The metric that will be used is defined in such a way that the validation requirement may loosely be regarded as a maximum acceptable percentage difference between calculation and experiment. Validation requirements should be developed based on the needs and goals of the application of interest. Some considerations in setting validation requirements are

(a) predictive accuracy of the computational model based on the requirements of the customer
(b) limitations on obtaining experimental measurements because of diagnostic sensor capabilities, project schedule, experimental facilities, and financial resources
(c) the current point in the engineering development cycle from conceptual design to final design
(d) consequences of the engineering system not meeting performance, reliability, and safety requirements

Validation Approach 1 is a possible approach to take when only a single experiment and a single simulation are available. In this case, the uncertainties associated with the single values of the SRQ must be estimated. Specifically, subject matter experts must be identified for both the experiments and the modeling. They may or may not be the experimentalist and modeler in the current assessment. Based on their past experience in related work, each is asked to estimate a symmetric interval within which all practical results are expected to lie. The half-width, ε, of the interval is shown in Fig. 4. For convenience and simplicity in the absence of data, ε is taken as the basis for constructing a Gaussian (normal) distribution of the uncertainty. In Validation Approach 1, the measured or calculated value of the SRQ is assumed to be the mean (μ) of the distribution, and the estimated half-width, ε, is interpreted to be equal to three standard deviations (3σ). As shown in Fig. 4, the total range provided by ±ε covers 99.7% of the probability. The standard deviation, σ, is easily computed from the given ε.
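As a concrete illustration of this construction, the short Python sketch below turns an expert-estimated half-width ε into a normal distribution with σ = ε/3, centered on the single available SRQ value. All numerical values here are hypothetical placeholders, not data from this Standard.

```python
import numpy as np
from scipy import stats

# Hypothetical single outcomes and expert-estimated half-widths
# (placeholders for illustration only; not values from this Standard).
srq_exp, eps_exp = 0.0300, 0.0030   # measured tip deflection and half-width, m
srq_mod, eps_mod = 0.0290, 0.0015   # calculated tip deflection and half-width, m

# Approach 1: the single value is taken as the mean, and the half-width
# is interpreted as 3 standard deviations, so the interval mean +/- eps
# covers 99.7% of the probability of the normal distribution.
exp_dist = stats.norm(loc=srq_exp, scale=eps_exp / 3.0)
mod_dist = stats.norm(loc=srq_mod, scale=eps_mod / 3.0)

print(exp_dist.std(), mod_dist.std())  # sigma = eps / 3 for each
```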

Validation Approach 2 is similar to Approach 1, except the uncertainty in the experimental outcome is quantified through replicate tests,¹ and the uncertainty in the simulation outcome is quantified through a probabilistic analysis with uncertain model inputs, which are derived from different types of replicate tests. The resulting probability density functions (PDFs) therefore are not necessarily Gaussian or even symmetric.

¹ In constructing and analyzing this illustrative example, no physical experiments were performed. Throughout this document, any reference to a specific test article, measurement, experiment, or experimental result is to an item that has been conjured only for illustrative purposes.

The two validation approaches are notionally illustrated in Fig. 5.
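The probabilistic analysis in Approach 2 amounts to Monte Carlo propagation of the input uncertainty (cf. the process of Fig. 12). The sketch below is a minimal illustration of that process only: the input distribution families and spreads are assumptions, not the characterization data of Tables 4 and 5, and a crude uniform-section closed-form formula stands in for the verified computational model.

```python
import numpy as np

rng = np.random.default_rng(seed=2012)
n = 1000  # number of model evaluations

# Assumed input distributions standing in for the characterization tests
# of Tables 4 and 5 (families and spreads are illustrative assumptions).
E_s = rng.normal(69.1e9, 0.7e9, n)            # modulus of elasticity, Pa
fr_s = rng.lognormal(np.log(1.0e-6), 0.2, n)  # support flexibility, rad/(N*m)

L, q, I0 = 2.0, 500.0, 8.2e-7  # length (m), load on outer half (N/m),
                               # representative moment of inertia (m^4)

def tip_deflection(E, fr):
    """Crude stand-in for the verified model: a uniform-section cantilever
    loaded on its outer half (bending term), plus the rigid-body rotation
    permitted by the support's rotational flexibility."""
    bending = 41.0 * q * L**4 / (384.0 * E * I0)
    support = fr * (3.0 * q * L**2 / 8.0) * L  # support rotation times tip arm
    return bending + support

# One model run per sampled input pair; sorting the outcomes yields the
# empirical CDF of the predicted tip deflection (cf. Fig. 13).
tips = np.sort(tip_deflection(E_s, fr_s))
cdf = np.arange(1, n + 1) / n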

In this example, the metric employed in either validation approach is based on the area between the measured and calculated SRQs’ cumulative distribution functions (CDFs; the CDF is the integral of the PDF). This metric, sometimes referred to as the “area” metric, is illustrated in Fig. 6, and more detail about it is given below.

The area metric M_SRQ is the area between the experiment and model CDFs [3], normalized by the absolute mean of the experimental outcomes. Thus, if F_SRQ(y) is the CDF of either the model-predicted or measured SRQ values, then

$$M_{\mathrm{SRQ}} = \frac{1}{\left|\overline{\mathrm{SRQ}}_{\mathrm{exp}}\right|} \int_{-\infty}^{\infty} \left| F_{\mathrm{SRQ_{mod}}}(y) - F_{\mathrm{SRQ_{exp}}}(y) \right| dy \tag{1}$$

where $\overline{\mathrm{SRQ}}_{\mathrm{exp}}$ is the mean of the experimental outcomes.

This metric is nonnegative and vanishes only if the two CDFs are identical. To help understand what the metric represents, it can be shown that in the special case where the two CDFs do not cross, the integral in eq. (1) is the absolute value of the difference between the means, and in general, it is a lower bound on the mean of the absolute value of the difference between SRQ_mod and SRQ_exp [4]. In the deterministic case, where both CDFs are step functions, the area is simply the absolute value of the difference between the two unique values.
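For sample-based results, both CDFs in eq. (1) are step functions, so the integral can be evaluated exactly segment by segment. A minimal Python sketch follows; the sample values in the final check are hypothetical.

```python
import numpy as np

def area_metric(model_samples, exp_samples):
    """Area between two empirical CDFs, normalized by the absolute mean
    of the experimental outcomes [eq. (1)]."""
    model = np.sort(np.asarray(model_samples, dtype=float))
    exp = np.sort(np.asarray(exp_samples, dtype=float))

    def ecdf(sorted_vals, y):
        # Fraction of sample values <= y (right-continuous step CDF).
        return np.searchsorted(sorted_vals, y, side="right") / len(sorted_vals)

    # Both step CDFs are constant between consecutive sample values,
    # so integrating |difference| segment by segment is exact.
    grid = np.unique(np.concatenate([model, exp]))
    widths = np.diff(grid)
    lefts = grid[:-1]  # CDF values on each interval [grid[i], grid[i+1])
    area = np.sum(np.abs(ecdf(model, lefts) - ecdf(exp, lefts)) * widths)
    return area / abs(exp.mean())

# Deterministic special case: single values 0.029 and 0.030 give their
# absolute difference, normalized by the experimental value.
assert np.isclose(area_metric([0.029], [0.030]), 0.001 / 0.030)
```

The validation decision then reduces to comparing the returned value against the requirement of eq. (2) below.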

For both validation approaches in this example, the validation requirement is taken as

$$M_{\mathrm{SRQ}} \le 0.1 \tag{2}$$

Obviously, satisfaction of a particular validation requirement is the desired outcome of the validation assessment. However, the V&V Plan should include a recommended course of action if the validation requirement is not met. Such contingency plans could include improvements in the model, improvements in the validation experiments, better quantification of uncertainties, relaxation of the requirements, or some combination of these. The choice of which course to pursue depends on the application of interest, the consequences of failure to validate, and available resources.

Fig. 5 Illustration of the Two Validation Approaches

[Two panels of PDFs plotted against the SRQ. (a) Validation Approach 1: a single experimental outcome and a single simulation outcome, each assigned an estimated uncertainty. (b) Validation Approach 2: experimental outcomes from replicate tests and simulation outcomes from uncertain inputs.]

Fig. 6 Illustration of the Basis of the Area Metric

[Plot of CDF versus system response quantity, showing the model CDF, the experimental CDF, and the area between them.]

The metric we have defined is normalized by the mean of the experimental outcomes. This is in contrast with some other validation assessment procedures that suggest using a metric normalized by the sample standard deviation of the measured SRQ, or equivalently, requiring the absolute error to be within some specified number of standard deviations of the measurements. This is not recommended because it confounds two unrelated issues:
– the uncertainty in the experimental measurements
– the predictive accuracy required of the model for the application of interest

For example, one could have large experimental uncertainty, making it easy for the model result to fall within two standard deviations of the measurements. Stated differently, model accuracy requirements should be set independently, based on how well the model must reproduce features of the experimentally measured response. These requirements are determined by such things as engineering design and system performance requirements, and not by sources of measurement uncertainty or the impact of input parameter uncertainty on system responses.

6 MODEL DEVELOPMENT

This section discusses the application of general modeling concepts to different phases of model development, with emphasis on the lowest level of the hierarchy, which is where the present example resides.

6.1 Conceptual Model

The conceptual model is the initial product of the process of abstracting the reality of interest. It includes the set of assumptions that permits construction of both the mathematical model and definition of the validation requirements. As indicated in Fig. 1, a conceptual model is required for each element of the hierarchy shown in Fig. 2. The conceptual model should be developed with a clear view of the reality of interest, the intended use of the model, the response features of interest, and the validation accuracy requirements. All of these items are defined in the V&V Plan. The conceptual model should consider physical processes, geometric characteristics, influence of the surroundings (e.g., loading), and uncertainties in each of these.

The example problem at hand concerns quasistatic deformation of a built-in, tapered, statically loaded, elastic beam. When the beam deforms, plane sections are assumed to remain plane and normal to the middle surface of the beam (i.e., Bernoulli–Euler beam theory applies). The beam structure and loading are assumed to be symmetric such that there is no twisting deformation. Shear deformation is neglected. Material properties are assumed to be isotropic, homogeneous, and linear elastic. Beam deflections are assumed to be small enough that geometric nonlinearity can be neglected.


Although physical realizations of any system will vary randomly, the variations of most of the system-characterizing quantities in this illustrative example are considered to be small and will be neglected. The two exceptions in this example are the rotational stiffness at the wall and the modulus of elasticity of the aluminum. The variability in these two parameters cannot be eliminated or tightly controlled in the validation experiments, so in Validation Approach 2 the approach taken is to measure the variability (from separate characterization tests) and then include this variability in the probabilistic model. It is furthermore assumed that these two parameters are independent of each other. The foregoing assumptions about the conceptual model provide the basis on which the mathematical model is constructed. That none of them will be precisely satisfied in the validation experiment, and that many will vary from one repetition of the experiment to another, suggests why the model predictions, especially from a deterministic model, may not exactly match experimentally measured results.

In summary, the major assumptions to be used in development of the beam model are as follows:

(a) The beam material is homogeneous, isotropic, and linear elastic.
(b) The beam undergoes only small deflections.
(c) Beam deflections are governed by static Bernoulli–Euler beam theory.
(d) The beam and its boundary conditions are perfectly symmetric from side to side, and all loads are applied symmetrically; therefore, beam deflections occur in a plane.
(e) The beam boundary constraint is fixed in translation and constrained against rotation by a linear rotational spring.

6.2 Mathematical Model

The mathematical model uses the information from the conceptual model, including idealizing assumptions concerning the behavior of the beam, to derive equations governing the structure’s behavior. For the beam considered here, the assumptions listed when defining the conceptual model in para. 6.1 combine to yield the equations of static Bernoulli–Euler beam theory:

$$\frac{d^2}{dx^2}\left[ E I(x) \frac{d^2 w(x)}{dx^2} \right] = q(x), \quad 0 \le x \le L$$

$$w(0) = 0, \qquad \left.\frac{dw}{dx}\right|_{x=0} = f_r \, E I(0) \left.\frac{d^2 w}{dx^2}\right|_{x=0}$$

$$\left.\left[ E I(x) \frac{d^2 w(x)}{dx^2} \right]\right|_{x=L} = 0, \qquad \left.\frac{d}{dx}\left[ E I(x) \frac{d^2 w(x)}{dx^2} \right]\right|_{x=L} = 0 \tag{3}$$

$$I(x) = \frac{1}{12}\left\{ b_0\left(1 - \alpha\frac{x}{L}\right) h^3 - \left[ b_0\left(1 - \alpha\frac{x}{L}\right) - 2t \right](h - 2t)^3 \right\}$$

where
b_0 = width at the support
E = modulus of elasticity of the beam material
f_r = flexibility of the linear rotational spring restraining the beam at its constrained end
h = depth of the beam
I(x) = area moment of inertia of the beam
L = length of the beam
q(x) = distributed load in the y-direction
t = wall thickness
w(x) = beam deflection in the direction normal to the undeformed centerline
x = distance measured from the supported end
α = taper factor, 1 − (tip width)/(base width)

The boundary conditions at the supported end represent the assumption of zero translation and a linear rotational spring constraint. At the free end of the beam there are the conditions of zero moment and shear. Under some circumstances eq. (3) can be easily solved in closed form, but when the area moment of inertia varies along the length, a numerical solution is usually sought.

The development of mathematical models in present-day computational solid mechanics typically does not include explicitly writing the differential equations. Rather, the computational model is constructed directly from the conceptual model via selection of element types, material models, boundary conditions, and associated options. This is not to say the differential equations are omitted from the modeling process; rather, the differential equations are documented in the software’s user or theory manual (e.g., in descriptions of various beam element types and options). In that case, it is highly recommended that the analyst review those models, equations, and assumptions (given the options chosen in the code) to ensure they are consistent with the intended use of the model. Without carefully considering these equations, an error or inconsistency in the mathematical modeling can easily occur.

6.3 Computational Model

The computational model provides the numerical solution of the mathematical model, and normally does so in the framework of a computer program. The range of discretization approaches (e.g., finite element, finite difference) and options within each approach in commercial software is often extensive. The analyst needs to find a balance between representing the physics required by the conceptual model and the computational resources required by the resulting computational model. For example, finite element type options for the mathematical/computational model for the airfoil aluminum skin would include the following:
(a) continuum elements: use solid elements through the thickness of the aluminum skin
(b) shell elements: plane stress assumption through the skin thickness
(c) plate elements: same as shell elements but omit surface curvatures
(d) hollow-section beam elements: strains are assumed to be primarily axial and torsional
(e) closed-section beam elements: equivalent constant cross-sectional properties of a solid section
(f) uniform closed-section beam elements: average cross-sectional properties along the length

In this example, the computer code used to solve for the beam deflections and rotations was specially written for this application example. It can be used only to analyze beam structures. It is a finite element program employing Bernoulli–Euler beam elements with constant cross section. If the element itself is loaded by any combination of uniform distributed load, transverse end forces, and couples at the ends, then the relative displacements and rotations at the ends are exact in the context of Bernoulli–Euler beam theory. On the other hand, when used to model a tapered beam, these relative deformations are only approximations.

The beam considered here is shown schematically in Fig. 3. The length of the beam is 2 m, the depth is 0.05 m, the width varies linearly from 0.20 m at the supported end to 0.10 m at the free end, and the wall thickness is 0.005 m. The material is aluminum, with a modulus of elasticity of 69.1 GPa. A uniform distributed load of 500 N/m is applied vertically in the downward direction on the outer half of the beam.

7 VERIFICATION

Code verification seeks to ensure that there are no programming errors and that the code yields the accuracy expected of the numerical algorithms used to approximate the solutions of the underlying differential equations. This is in contrast with calculation verification, which is concerned with estimating the discretization error in the numerical solution of the specific problem of interest. The distinction is subtle but important, because code verification requires an independent, highly accurate reference solution and can (and usually will) operate on a problem that is different from the problem of interest.

What is important in code verification is that all portions of the code relevant to the problem at hand be fully exercised to ensure that they are mistake free. This is done by comparing numerical results with analytical solutions and, in the process, confirming that the numerical solution converges to the exact one at the expected rate as the mesh is refined.

At the root of both code and calculation verification is the concept of the order of accuracy of a numerical algorithm. Under h-refinement (variation of element size, as contrasted with p-refinement, or variation of the algebraic order of the interpolation functions), it is defined as the exponent p in the power series expansion


\[
w_{exact} = w_h + A h^p + \mathrm{H.O.T.} \quad \text{as } h \rightarrow 0 \tag{4}
\]

where
A = constant
H.O.T. = higher order terms, which tend to zero faster than the lowest order error term (the second term on the right) as h tends to zero
h = grid size
w_exact = exact solution
w_h = numerical solution at grid size h

In the manipulations to follow, grid size is assumed to be small enough that the H.O.T. are negligible compared to the lowest order error term [5]. When this is so, the numerical solution is said to be in the asymptotic convergence regime.

Observed order of accuracy is the value of p inferred from eq. (4) when it is applied, as detailed below, to two or three numerical solutions at different grid resolutions. Theoretical order of accuracy is the value of p derived from a mathematical analysis of the algorithm. According to eq. (4), the slope of a log–log plot of the absolute value of the numerical error w_h − w_exact vs. grid size will tend to p as grid size tends to zero. Since in a linear array grid size is inversely proportional to the number of elements, the slope of a plot of error vs. number of elements will tend to −p as the number of elements becomes large, as will be illustrated in para. 7.1.

7.1 Code Verification

There happens to exist an analytical solution to the main problem of interest, viz., a tapered cantilever loaded over half its length. While it would be tempting to use it for both code and calculation verification, we shall do neither, because in general such a solution will not be available. (If it were, there would be no reason to undertake a numerical solution in the first place.) Therefore in this example, numerical code verification will be performed on a slightly different problem, viz., the same tapered cantilever beam but with a uniform load along its entire length.

The key points here are as follows:

(a) While different, this problem is closely related to the problem of interest, as in general it must be to serve as the reference solution for code verification.

(b) The problem has an exact analytic solution.

Tip deflections at various grid refinements will be compared to that analytical solution, leading to estimates of the observed order of accuracy based on eq. (4). This will then be compared to the theoretical order of accuracy.

The analytic solution of a linearly tapered, uniformly loaded cantilever beam has been derived by integration of eq. (3) with q = constant. The resulting normalized transverse deflection for a beam that tapers linearly in width is

\[
\frac{EI_0}{qL^4}\,w(x) = \frac{1}{2\alpha}\left\{\left(\frac{1}{\alpha}-1\right)^{2}\left[\frac{x}{L}+\left(\frac{1}{\alpha}-\frac{x}{L}\right)\ln\!\left(1-\alpha\,\frac{x}{L}\right)\right]-\frac{1}{6}\left(\frac{x}{L}\right)^{3}+\left(1-\frac{1}{2\alpha}\right)\left(\frac{x}{L}\right)^{2}\right\} \tag{5}
\]

The taper factor α for the problem at hand is 0.5, so from eq. (5) the exact normalized tip deflection is 5/6 − ln(2) ≈ 0.14018615, which will be the basis for code verification comparisons.
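As a cross-check on eq. (5), the normalized deflection is straightforward to evaluate numerically. The following sketch (plain Python; the function name is ours) reproduces the exact tip value just quoted:

```python
import math

def normalized_deflection(xi, alpha):
    """Normalized deflection EI0*w/(q*L^4) of eq. (5), at xi = x/L."""
    bracket = (1.0 / alpha - 1.0) ** 2 * (
        xi + (1.0 / alpha - xi) * math.log(1.0 - alpha * xi)
    )
    return (bracket - xi**3 / 6.0
            + (1.0 - 1.0 / (2.0 * alpha)) * xi**2) / (2.0 * alpha)

# Taper factor 0.5, evaluated at the tip (x = L): 5/6 - ln(2) = 0.14018615...
print(normalized_deflection(1.0, 0.5))
```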

Table 1 Normalized Deflections, EI_0 w/(qL^4)

Number of Elements    Initial Coding    Final
2                     0.13281250        0.14642858
4                     0.13549805        0.14172980
8                     0.13769015        0.14057124
16                    0.13890418        0.14028238
32                    0.13953705        0.14021021
64                    0.13985962        0.14019217
128                   0.14002239        0.14018766

Table 1 gives the numerically derived normalized tip deflections from two different sets of calculations. Figure 7 shows absolute values of the differences between these and the exact value. Based on algorithms similar to the one used here, the theoretical order of accuracy is expected to be 2; further, it can be shown analytically that the theoretical order of accuracy of the numerical algorithm when applied to this specific problem is in fact 2. First consider the results of the initial coding. The error plot strongly implies that the results are systematically converging to the exact solution, but even without performing further calculations, the slope of the line indicates an order of accuracy much closer to 1 than 2. During the early stages of the development of this example, these were the computed results. The unexpectedly low observed order of accuracy prompted a detailed review of the coding, and an error was found. Upon correction of that error, the second set of results was obtained. Now the observed order of accuracy as indicated by the slope of the error plot is seen to be much closer to the theoretical value of 2. The error was unintentional, and could not possibly have provided a better illustration of the value of numerical code verification.

It now remains to extract a precise numerical estimate for the observed order of accuracy of the corrected numerical solution. This is accomplished by computing the logarithmic slope between the last two points on the lower line in Fig. 7. Using the numbers to 8 significant figures as shown in Table 1, this comes out to −1.995.

Fig. 7 Errors in Normalized Deflections [log–log plot of the error in EI_0 w/(qL^4), from 1.E−07 to 1.E−02, vs. number of elements, from 1 to 1,000, comparing the initial coding with the results after correction. GENERAL NOTE: Note log–log scale.]
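This slope computation is simple to reproduce. A minimal sketch, assuming the Table 1 values and the exact solution 5/6 − ln(2); since the element size halves each time the element count doubles, the observed order is the base-2 logarithm of the error ratio:

```python
import math

w_exact = 5.0 / 6.0 - math.log(2.0)   # exact normalized tip deflection

# Final (corrected) solutions from Table 1 at 64 and 128 elements
w_64, w_128 = 0.14019217, 0.14018766

# Doubling the element count halves h, so p = log2(error ratio)
p_observed = math.log(abs(w_64 - w_exact) / abs(w_128 - w_exact), 2)
print(p_observed)   # ~2.0 (about 1.997 from the 8-digit table values;
                    # the text quotes a slope of -1.995 vs. element count)
```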

When using commercial or other "black box" software, the theoretical order of accuracy p is typically not known. In such a case, the user can compare the software solution to an appropriate analytical solution and perform mesh convergence studies to derive the observed order of accuracy, exactly as was done in the previous paragraphs.

7.2 Calculation Verification

The goal of calculation verification in the present example is to estimate the numerical error in tip deflection as a function of discretization. In conjunction with an independently specified numerical accuracy requirement, this can then be used to guide the choice of discretization in the numerical solution of the problem at hand.


Rotational compliance is not treated in this section, as it would unnecessarily complicate the exposition. Further, in both the conceptual and mathematical model, the contribution of the support rotation to tip deflection can simply be added to that due to beam deformation. Therefore, the solutions presented here are for a beam with its supported end perfectly fixed against translational and rotational motion. In practice, this sort of simplification must be carefully evaluated on a case-by-case basis.

We shall present two closely related methods for estimating discretization error. The first uses three grids; the second uses only two but requires an assumed value for the order of convergence. Both are based on Richardson extrapolation [5], which will now be explained. As in the prior section, it is assumed that the numerical value of the quantity of interest is related to the exact solution by eq. (4) with higher order terms neglected (i.e., the numerical solution is in the asymptotic convergence regime). Assuming that w_h and h are known, eq. (4) then contains three unknowns: w_exact, A, and p. Richardson extrapolation is the process of computing w_exact by using eq. (4). The first version we shall apply simply entails writing it three times, using three pairs (w_h, h) based on three numerical solutions at different grid resolutions. This provides three independent equations that may be solved for the three unknowns. To be specific, writing eq. (4) three times, then eliminating A and w_exact leads to

\[
\frac{w_2-w_1}{w_3-w_2}=\frac{h_1^{\,p}-h_2^{\,p}}{h_2^{\,p}-h_3^{\,p}} \tag{6}
\]

where now the subscripts 1, 2, 3 refer respectively to the finest, intermediate, and coarsest numerical solutions for the SRQ of interest. This is a transcendental equation that may be solved for p by any suitable numerical method. Although certainly not necessary, in some cases it may happen that the grid refinement ratio is constant (i.e., h2/h1 = h3/h2 = r), in which case eq. (6) can be solved in closed form to yield

\[
p=\ln\!\left(\frac{w_3-w_2}{w_2-w_1}\right)\bigg/\ln(r) \tag{7}
\]

Once p is determined, eliminating A from the two finer mesh instances of eq. (4) leads to

\[
w_{exact} = w_1+\frac{w_1-w_2}{(h_2/h_1)^p-1} \tag{8}
\]

This value of w_exact is a Richardson extrapolation based on three grids. [Although neither w_3 nor h_3 appears explicitly in eq. (8), they both implicitly affect p through either eq. (6) or eq. (7).] Equation (8) can be easily rearranged to provide an estimate of the numerical error in the fine-mesh solution w_1, by rewriting it as

\[
w_{exact}-w_1=\frac{\varepsilon}{(h_2/h_1)^p-1}\,w_1 \tag{9}
\]

where ε = (w_1 − w_2)/w_1.

For some purposes eq. (9) may be all that is needed to estimate the numerical error in the fine-grid solution. However, to standardize reporting of numerical error estimates, Roache [5] defined a grid convergence index (GCI) based on the right side of eq. (9), and it has been fairly widely adopted in the computational fluid dynamics community. Dividing eq. (9) by w_1 and multiplying by a factor of safety F_s (which Roache takes as 1.25 for "convergence studies with a minimum of 3 grids to...demonstrate the observed order of convergence..."), one obtains the GCI:


Table 2 Numerical Solutions for Tip Deflections

Grid Number                    Number of Elements    h, m          w, mm
3                              4                     0.5           13.098739
2                              8                     0.25          13.008367
1                              12                    0.16666667    12.991657
Surrogate for exact solution   200                   0.01          12.978342

\[
\mathrm{GCI}=F_s\,\frac{|\varepsilon|}{(h_2/h_1)^p-1} \tag{10}
\]

Thus the GCI is a dimensionless indicator of the mesh convergence error relative to the finest-zoned solution. Specifically, when multiplied by the finest-zoned solution, it provides the width of an error band, centered on that solution, within which the exact solution is very likely to be contained. Its validity rests on the assumption that the numerical solutions from which p was determined are in the asymptotic convergence range. In terms of the GCI, the error band is w_1(1 ± GCI). Equation (10) can be used for any discretization method with any order of spatial convergence. The only other requirement for eq. (10) is that h_1 < h_2. It is also recommended that successive grid refinement be greater than 1.3 (i.e., h2/h1 > 1.3).

The reasons for referring to the GCI-based interval as a band rather than a bound, and for applying the factor of safety, are the same: there is no guarantee that the converged numerical solution will fall within the band, just a high likelihood. The converged solution could fall outside the band for various reasons, mostly related to the numerical solutions not being in the asymptotic convergence regime [which means that the higher order terms in eq. (4) were not negligible]. A strong piece of evidence that the numerical solution is in the asymptotic regime is that the inferred order of convergence is close to the theoretical one.

The foregoing analysis will now be applied to numerical solutions of the problem at hand, which are summarized in Table 2.

Using the first three of these displacements in eq. (6) and solving for p yields p = 2.00256154.

Since this is so close to the theoretical value of 2, the solutions are judged to be in the asymptotic convergence regime. From eq. (10) with F_s = 1.25, we have GCI = 0.00128381. The error band defined by this GCI about the fine-zoned solution w_1 is (12.9750, 13.0083) mm. The last line in Table 2 lists a 200-zone solution, which we regard as a surrogate for the exact numerical solution, and note that it does indeed fall within the GCI-defined interval.
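The three-grid procedure of eqs. (6), (8), and (10) is compact enough to script directly. A minimal sketch, assuming the Table 2 values (the bisection solver and variable names are ours):

```python
# Table 2 values: subscript 1 = finest grid, 3 = coarsest
h1, h2, h3 = 0.16666667, 0.25, 0.5
w1, w2, w3 = 12.991657, 13.008367, 13.098739

def residual(p):
    """Eq. (6) rearranged to zero; monotonic in p over the bracket below."""
    return (w2 - w1) / (w3 - w2) - (h1**p - h2**p) / (h2**p - h3**p)

# Solve the transcendental equation for p by bisection on [1, 4]
lo, hi = 1.0, 4.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
p = 0.5 * (lo + hi)
print(p)   # ~2.0026, the observed order of accuracy

# Eq. (10) with Fs = 1.25
eps = (w1 - w2) / w1
gci = 1.25 * abs(eps) / ((h2 / h1) ** p - 1.0)
print(gci)   # ~0.0012838

# Error band w1*(1 +/- GCI); the 200-element surrogate, 12.978342 mm,
# falls inside it
print(w1 * (1.0 - gci), w1 * (1.0 + gci))   # ~(12.9750, 13.0083) mm
```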

The recommended method of error estimation is based on three numerical solutions, as just described. If resources are insufficient to permit three numerical solutions, it is still possible to estimate discretization error based on only two solutions, although this approach is to be avoided if possible. When only two solutions are available, then rather than computing the observed order of accuracy p, the theoretical value must be used. This is what Richardson did when he first proposed his extrapolation, and accordingly, the two-grid process is referred to as standard Richardson extrapolation. Having thus selected a value for p, there are only two remaining unknowns in eq. (4), and the two available instances of it combine once again to yield eq. (8). The GCI can still be defined by eq. (10), similarly to the three-grid case, but now Roache recommends a larger factor of safety, viz., F_s = 3, to account for the greater uncertainty in the order of accuracy. Using the two finer grid solutions for displacement in eq. (10) with p = 2 and F_s = 3 yields the two-grid GCI = 0.003087, larger than the three-grid value in this case by almost exactly the ratio of the safety factors 3/1.25. Naturally the surrogate exact solution still falls within the wider error band defined by the two-grid GCI. If instead we ignore solution 1, regard grid 2 as the finer solution, and use grids 2 and 3, the two-grid GCI relative to w_2 is 0.00694727, more than twice the one based on the finer two solutions, and the inferred error band is correspondingly wider. As a final observation on the GCI, note that according to eq. (10), with mesh doubling, second order spatial convergence, and a factor of safety of 3, the two-grid GCI reduces simply to the fractional error |ε|.
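The two-grid numbers quoted above follow from the same few lines, with p fixed at the theoretical value of 2 and F_s = 3 (again a sketch, assuming the Table 2 values):

```python
h1, h2, h3 = 0.16666667, 0.25, 0.5
w1, w2, w3 = 12.991657, 13.008367, 13.098739

# Two finest grids: assumed theoretical p = 2, Fs = 3
gci_12 = 3.0 * abs((w1 - w2) / w1) / ((h2 / h1) ** 2 - 1.0)
print(gci_12)   # ~0.003087

# Two coarsest grids, with grid 2 regarded as the finer solution
gci_23 = 3.0 * abs((w2 - w3) / w2) / ((h3 / h2) ** 2 - 1.0)
print(gci_23)   # ~0.0069473
```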

An immediate application of these calculation verification results is to assess the adequacy of the grid resolution used when comparing predictions of structural response to experimental measurements. As mentioned in para. 5.2, in this example we specify that the estimated numerical error be no more than 2% of the 10% validation requirement, or 0.2%. Thus the three-grid GCI of 0.001284, or 0.1284%, implies a numerical error in tip deflection no greater than 0.13% when 12 elements are used. While this would be adequate, for added safety 20 elements of length 0.100 m will be used for the validation predictions.

8 VALIDATION APPROACH 1

Validation Approach 1 considers the case where uncertainty data are not available, and suggests an approach whereby subject matter experts are relied upon to provide estimates of the expected uncertainty. The area metric is used to assess the validity of the model. The value of an unknown model parameter is first estimated from a single, special test, followed by the model prediction, measurement of the beam response in a validation experiment, and finally the validation assessment.


8.1 Obtaining the Model Result

The numerical simulation requires a value for the rotational flexibility, fr, at the clamped end of the beam. One could assume that fr is negligibly small (i.e., the rotational stiffness is infinite), but engineering experience has shown this to be a poor assumption in most applications. The parameter fr cannot be measured directly. However, it can be determined using a procedure that combines experimental measurements on a specially constructed beam and computational modeling of that beam. This procedure will be discussed in more detail in section 9. For Validation Approach 1, this procedure, when applied to a single, special test article, yielded a value of fr = 8.4 × 10⁻⁷ rad/N·m.

All the information necessary to create a finite element model of the beam is now available. Specifically, this includes

(a) the geometric and material characteristics of the beam, as listed in para. 6.3

(b) the specified load (over the outer half of the length)

(c) the parameters needed to characterize the boundary conditions

(d) the number of finite elements needed to demonstrate that the numerical solution error is negligible, as determined by the calculation verification procedure in para. 7.2

The tip deflection predicted by the model is

\[
w_{mod} = -14.2\ \mathrm{mm} \tag{11}
\]

8.2 Experimental Data

To obtain validation data, we construct a single beam with the dimensions and constraints listed in para. 6.3. After constructing the beam, it is embedded into the wall. The beam is instrumented with a displacement transducer at the tip to measure deflection. The measurement system is initialized to record displacement relative to the initial, gravitationally deformed configuration. The beam is then loaded by placing a 500 N/m distributed load on the outer half of the beam. The resulting tip deflection is recorded as

\[
w_{exp} = -15.0\ \mathrm{mm} \tag{12}
\]

8.3 Validation Assessment

The primary activities in validation assessment are assessing model accuracy by comparison with experimental validation data, and determining whether the validation accuracy requirement is satisfied. In addition, it is commonly necessary to assess the accuracy of a model's predictions for conditions where experimental data are not available (i.e., where extrapolation of the model is required). The latter activity, however, is regarded as beyond the scope of this Standard.

As stated in para. 5.3, Validation Approach 1 specifies that the model-predicted tip deflection of the beam be within 10% of the respective experimental measurements, as measured by the area metric. We describe the validation comparison procedure in detail in the following paragraphs. Before doing so, however, we emphasize again that the details of the validation requirements should be decided before performing the validation experiments and the model predictions. One of the reasons for this emphasis is that it focuses on expectations of the model for the particular application of interest, as opposed to what accuracy is determined in the validation assessment.

In Validation Approach 1, uncertainties are estimated by subject matter experts (SMEs), who are each asked to provide their estimate of the range within which all practical experimental or model outcomes are expected to fall. Specifically (see Fig. 4), each SME is asked to provide the half-width, δ, of the full uncertainty interval, which is then interpreted to be equal to three standard deviations of an assumed normal PDF. For this demonstration, the estimates provided by the respective SMEs, based on their experience with related problems, are

\[
\delta_{exp} = 0.75\ \mathrm{mm},\qquad \delta_{mod} = 0.71\ \mathrm{mm} \tag{13}
\]

From this, assumed normal PDFs and CDFs for the experiment and model can be constructed with the following parameters:

\[
\mu_{exp}\,(= w_{exp}) = -15.0\ \mathrm{mm};\qquad \sigma_{exp} = \frac{0.75}{3} = 0.25\ \mathrm{mm}
\]
\[
\mu_{mod}\,(= w_{mod}) = -14.2\ \mathrm{mm};\qquad \sigma_{mod} = \frac{0.71}{3} \approx 0.24\ \mathrm{mm} \tag{14}
\]

These inputs are now used to compute the validation metric. As shown in Fig. 8, the area between the CDFs was computed to be 0.8 mm, so according to eq. (1) the area metric is M_A^SRQ = 5.3%. (In keeping with the properties listed in para. 5.3, and because the two CDFs cross only near one extreme where they are practically equal, this value is almost exactly the absolute relative difference of the means.) Because the validation metric falls within the 10% requirement, the model is assessed as valid.
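The 0.8 mm area is easy to confirm numerically, since both CDFs here are normal. A minimal sketch (our own trapezoid-rule quadrature; math.erf supplies the normal CDF):

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu_exp, sig_exp = -15.0, 0.75 / 3.0   # eq. (14), mm
mu_mod, sig_mod = -14.2, 0.71 / 3.0

# Trapezoid-rule integration of |F_exp - F_mod| over a generous range
a, b, n = -18.0, -11.0, 7000
dx = (b - a) / n
area = 0.0
for i in range(n + 1):
    x = a + i * dx
    f = abs(normal_cdf(x, mu_exp, sig_exp) - normal_cdf(x, mu_mod, sig_mod))
    area += f * dx if 0 < i < n else 0.5 * f * dx

print(area)                 # ~0.8 mm
print(area / abs(mu_exp))   # ~0.053, the 5.3% area metric of eq. (1)
```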

9 VALIDATION APPROACH 2

Validation Approach 2 considers the case where uncertainty data are available, and employs a straightforward probabilistic analysis to relate model input uncertainties to the model output uncertainty. The area metric is then used to assess the validity of the model.

9.1 Validation Experiments

For validating a model, multiple replications of the validation experiment are highly recommended. This will account for inevitable variations that exist in the test article fabrication, experimental setup, and measurement system. For any application, the questions that must be addressed and answered in the V&V Plan include the following:

(a) How many experiments should be conducted?

(b) Should different technicians or subcontractors assemble the system to be tested?

(c) Should different experimental facilities conduct the experiment?

These types of questions must be answered on a case-by-case basis.

In this example the data used to characterize the response of the system come from validation experiments on 10 nominally identical beams, constructed with the nominal dimensions listed in para. 6.3.

After constructing each beam, the same procedure described in para. 8.2 concerning measurements relative to the gravitational equilibrium conditions of the beam is used. The uncertainty in the response of the beam is due to random and systematic uncertainty in the multiple experimental measurements as well as variability in the properties of the test article. In this example, property variations are assumed to be confined to the modulus of elasticity of the material used to construct each beam, and the support flexibility. These independent sources of uncertainty can be separated using statistical design of experiment techniques [6, 7].

Uncertainty in the experimental measurements will be due to a number of random and systematic uncertainties. Some examples of random uncertainty are transducer noise, attachment of individual transducers, and setup and calibration of all of the instrumentation for each test. Examples of systematic uncertainties are calibration of the transducers, unknown bias errors in the experimental procedures, and unknown bias errors in the experimental equipment. The measured displacements in the validation experiment are denoted as w_i^exp, i = 1, ..., 10. The measurements are given in Table 3. These data can be used to compute the sample mean and standard deviation of the experimental tip deflections:

\[
\bar{w}^{\,exp}=\frac{1}{10}\sum_{i=1}^{10}w_i^{exp}=-15.4\ \mathrm{mm},\qquad
\sigma^{exp}=\sqrt{\frac{1}{10-1}\sum_{i=1}^{10}\left(w_i^{exp}-\bar{w}^{\,exp}\right)^{2}}=0.57\ \mathrm{mm} \tag{15}
\]

For use in the area metric, an empirical CDF can be constructed from the validation experiment data. The data listed in Table 3 are sorted in ascending order and a probability value of i/N is assigned to each of the data points. The empirical CDF is shown in Fig. 9. This CDF is "stair-stepped" because of the finite number of data points, each with a single associated probability.
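A sketch of this construction, using the Table 3 deflections (it also reproduces the sample statistics of eq. (15)):

```python
import math

# Table 3: measured tip deflections, mm
w_exp = [-16.3, -15.5, -16.1, -14.8, -14.4, -15.0, -15.3, -15.4, -15.6, -15.2]

n = len(w_exp)
mean = sum(w_exp) / n
std = math.sqrt(sum((w - mean) ** 2 for w in w_exp) / (n - 1))
print(mean, std)   # ~-15.4 mm and ~0.57 mm, as in eq. (15)

# Empirical CDF: sort ascending and assign probability i/N to the i-th point
ecdf = [(w, (i + 1) / n) for i, w in enumerate(sorted(w_exp))]
print(ecdf)   # the stair-step CDF of Fig. 9
```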

Fig. 8 Area Between the Experimental and Computed CDF [CDFs of the experiment and the model vs. tip displacement, mm (−16 to −13), with the area between them indicated. GENERAL NOTE: Used in area metric.]

Table 3 Measured Beam-Tip Deflections From the Validation Experiments

Test Number, i    Tip Deflection, w_i^exp, mm
1                 −16.3
2                 −15.5
3                 −16.1
4                 −14.8
5                 −14.4
6                 −15.0
7                 −15.3
8                 −15.4
9                 −15.6
10                −15.2

Fig. 9 Empirical CDF of the Validation Experiment Data [stair-stepped CDF vs. tip deflection, m (−0.0165 to −0.014)]

9.2 Model Uncertainty Quantification

Uncertainty quantification provides the basis for quantifying and understanding the effect of uncertainties in experimental data and computational predictions, for judging the adequacy of the model for its intended use, and for quantifying the predictive accuracy of the model. As compared to traditional deterministic analysis, quantifying the effects of uncertainties requires additional effort to collect and characterize input data, to perform the uncertainty analysis, and to interpret the results.

Uncertainties enter into a computational simulation from a variety of sources, such as variability in input parameters due to inherent randomness, lack of or insufficient information pertaining to models or model inputs, and modeling assumptions and approaches. For example, a material property such as modulus of elasticity will vary from sample to sample because of the inherent variability in the manufacturing of the material. Loadings will often be random due to the uncertainty of environmental factors such as wind, wave height, and as-built conditions. Uncertainties like these are beyond our ability to control and are referred to as inherent or irreducible. Recognizing this, the approach is to assess their effects on the results of the simulation and the performance of the system.

To facilitate the numerical representation and simulation of uncertainties, some type of uncertainty model and uncertainty quantification method is required. A well-known approach to modeling uncertainties is based on the theory of probability, using what are called random variables. In simple terms, a random variable is a function that relates the possible values of the variable to the corresponding probability of occurrence. In this application, uncertainty is manifested in the material modulus and constraint flexibility, leading to uncertainty in the SRQ of interest, namely the tip deflection.

9.3 Random Variability in the Material Modulus

One source of variability in the validation experiments is the modulus of elasticity, E, of the material used in the beam. This variability is due to inherent variations in the material, and these variations cannot be eliminated in the production of the beam or in the validation experiment. Therefore, we measure the material variability from replicate experiments on coupon samples taken from the same material as used in the beam experiments. This variability will then be included in the beam model, as discussed later. A set of 10 sample measurements is given in Table 4.

Using the 10 measurements of E given in Table 4, the sample mean and standard deviation are computed to be 70.2 GPa and 3.5 GPa, respectively. A histogram illustrating the variability in E is shown in Fig. 10, illustration (a). For use in the computational model, it is convenient and usually acceptable to represent the histogram with a continuous function. Because it fits the data well, a Gaussian distribution with the same mean and standard deviation is selected, and this is shown in both PDF and CDF form in Fig. 10, illustration (b). A different distribution (e.g., lognormal or Weibull) also could be selected if it fit the data better.

Table 4 Test Measurements of the Modulus of Elasticity, E

Test    Modulus, E, GPa
1       69.1
2       68.8
3       74.4
4       72.6
5       72.9
6       67.5
7       74.1
8       68.3
9       71.0
10      63.2

Fig. 10 Random Variability in Modulus, E, Used in the Computational Model [(a) histogram of E, GPa; (b) fitted Gaussian PDF and CDF of E]

9.4 Random Variability in Support Flexibility

To characterize the random variability in the support flexibility fr, 20 experiments were performed using a specially made and instrumented beam. This special beam is not tapered and is not subjected to the same loading specified for the beams used in the validation experiments. The attachment of this special beam, however, closely replicated the method of attachment of the beams used in the validation experiments, such that the measured variability would be similar. The beam was designed so that very high confidence was attained in all of the assumptions made in the underlying mathematical model. For example, all dimensions were accurately measured and used in the special computational model mentioned below, and the elastic modulus was measured in a special coupon test using the same material as the beam. During each experiment a carefully calibrated transverse load was applied, and the rotation at the supported end of the beam was inferred as follows: a high-fidelity computational model was run with sufficient spatial resolution that a converged numerical solution was guaranteed, so that high confidence was attained in the simulation results. The only parameter in the model left unspecified was fr. For each of the experiments conducted, a measurement was made of the tip deflection at the free end of the beam. Then the fr parameter was adjusted so that the computational result matched the experimental measurement of the tip deflection at the free end of the beam. (This is a trivial example of an optimization procedure usually referred to as parameter estimation.) The results from these experiments are listed in Table 5.
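Because tip deflection is linear in fr for a model of this form, each calibration reduces to solving one linear equation. The sketch below illustrates the idea only; w_bend, M_root, and the measured deflection are hypothetical placeholders (the special-beam data are not reported here), and the closed-form expression stands in for the high-fidelity model described above:

```python
# Stand-in model: w_tip(fr) = w_bend - fr * M_root * L, where w_bend is the
# tip deflection with a perfectly rigid support and fr * M_root is the
# support rotation. All numbers are illustrative placeholders.
w_bend = -12.9e-3      # m, rigid-support tip deflection (placeholder)
M_root = 750.0         # N*m, bending moment at the support (placeholder)
L = 2.0                # m, beam length

w_measured = -14.2e-3  # m, measured tip deflection (placeholder)

fr = (w_bend - w_measured) / (M_root * L)
print(fr)   # ~8.7e-7 rad/(N*m), the estimated support flexibility
```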

Using the 20 estimates of fr given in Table 5, the sample mean and standard deviation are computed to be 8.4 × 10⁻⁷ rad/N·m and 4.3 × 10⁻⁸ rad/N·m, respectively. The variability of fr shown in the table is a combination of experimental measurement uncertainty and variability in repeatedly attaching the special beam to the test fixture. A histogram illustrating the variability in fr is shown in Fig. 11, illustration (a). Again, a Gaussian distribution is selected. A different distribution (e.g., lognormal or Weibull) also could be selected if it fit the data better. Both the PDF and the corresponding CDF are shown in Fig. 11, illustration (b).

Table 5 Test Estimates of the Support Flexibility

Test    fr, rad/N·m × 10⁻⁷
1       8.5
2       8.2
3       8.1
4       8.4
5       7.8
6       7.7
7       8.8
8       8.6
9       8.6
10      8.1
11      8.6
12      8.8
13      8.5
14      8.0
15      9.3
16      8.1
17      9.3
18      8.1
19      8.3
20      8.3

Fig. 11 Random Variability in Support Flexibility, fr, Used in the Computational Model [(a) histogram of fr, rad/N·m × 10⁻⁷; (b) fitted Gaussian PDF and CDF of fr]

9.5 Uncertainty Propagation

The process of propagating input uncertainties through the computational model is called uncertainty propagation. This process is illustrated in Fig. 12, where the input uncertainties are propagated through the beam model. If all inputs had been treated as deterministic (single valued), the beam tip displacement would also have been single valued. It is the uncertainty in all of the input parameters, and their impact on the computational model output, that yields the uncertainty in the beam tip displacement shown in the figure.

There are many different techniques for propagating input uncertainties through a computational model. These different methods have been developed primarily to propagate uncertainties in an efficient and accurate manner for different situations (e.g., static vs. dynamic responses) and for different purposes (e.g., computing overall statistics such as the mean and standard deviation vs. computing extremely small probabilities).

A well-known technique for performing uncertainty propagation is Monte Carlo simulation, which is a straightforward random sampling method. In Monte Carlo simulation, a random sample is taken from each of the input distributions. (The distributions may be Gaussian or not.) This sample is then used in the computational model to produce one output sample of the response. This process is repeated a number of times, the number of which typically depends on the magnitude of the response probability of interest. For example, if an outcome with a low probability of occurrence is of interest, then a large number of Monte Carlo samples is required to estimate this probability accurately. Once the Monte Carlo simulation is finished, the output samples are processed to produce the CDF of the tip deflection.

The CDF of the beam tip deflection resulting from a Monte Carlo simulation is shown in Fig. 13. The CDF appears as a relatively smooth curve because a large number of Monte Carlo samples were used. The computed mean and standard deviation for the tip displacement are −14.1 mm and 0.65 mm, respectively.
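A sketch of the Monte Carlo loop follows. The closed-form beam_tip_deflection is our own stand-in for the finite element model: the bending term scales as 1/E and the support term is linear in fr, which is the correct parametric behavior, but its coefficients are illustrative rather than produced by the code of para. 6.3:

```python
import random
import statistics

def beam_tip_deflection(E, fr):
    """Stand-in for the FE model (illustrative coefficients).
    Bending deflection scales as 1/E; the support contributes fr*M_root*L."""
    w_bend = -12.94e-3 * (69.1e9 / E)   # m; -12.94 mm at 69.1 GPa assumed
    w_support = -fr * 750.0 * 2.0       # m; root moment 750 N*m, length 2 m
    return w_bend + w_support

random.seed(1)
samples = []
for _ in range(100_000):
    E = random.gauss(70.2e9, 3.5e9)    # Pa; Table 4 statistics
    fr = random.gauss(8.4e-7, 4.3e-8)  # rad/(N*m); Table 5 statistics
    samples.append(beam_tip_deflection(E, fr))

print(statistics.mean(samples))    # ~-14.0e-3 m (text: -14.1 mm from the FE model)
print(statistics.stdev(samples))   # ~0.64e-3 m (text: 0.65 mm)

# Sorting the samples and assigning i/N probabilities yields the CDF of Fig. 13
samples.sort()
```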

9.6 Validation Assessment

Figure 14 shows the CDFs of the computed and measured tip deflection of the beam. The area between them, indicated by the shaded regions in the figure, is calculated to be 1.3 mm. Validation Approach 2 specifies that the area normalized by the absolute experimental mean be less than 10%. This is calculated to be 1.3/15.4 = 8.4%. Thus, the validation requirement for tip displacement is satisfied.
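The shaded area of Fig. 14 can be computed exactly, because both CDFs are step functions: between consecutive breakpoints |F_exp − F_mod| is constant. A minimal sketch, reusing w_exp from the Table 3 sketch and samples from the Monte Carlo sketch above:

```python
import bisect

def area_between_ecdfs(a, b):
    """Exact area between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    xs = sorted(set(a) | set(b))
    area = 0.0
    for x0, x1 in zip(xs, xs[1:]):
        fa = bisect.bisect_right(a, x0) / len(a)   # CDF of a on [x0, x1)
        fb = bisect.bisect_right(b, x0) / len(b)   # CDF of b on [x0, x1)
        area += abs(fa - fb) * (x1 - x0)
    return area

# w_exp in mm (Table 3); Monte Carlo samples converted from m to mm
area = area_between_ecdfs(w_exp, [s * 1000.0 for s in samples])
print(area)          # ~1.3 mm
print(area / 15.4)   # ~0.084, the 8.4% metric quoted above
```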

If the validation requirement had not been met, possible next steps could include the following:

(a) obtaining additional validation test data

(b) obtaining additional data on the model input uncertainties

(c) modifying the model and/or experiment to correct any suspected deficiencies

(d) relaxing the validation requirements

Although obtaining additional data is always desirable, (a) and (b) may or may not decrease the metric, depending on how they change the relative shapes of the CDFs. The item listed in (c) always should be performed when the validation requirements are not satisfied.


Fig. 12 Input Uncertainty Propagation Process [flowchart: PDFs of the model inputs (material stiffness and support flexibility) are propagated through the model to produce the output uncertainty, the PDF of beam tip displacement]

Fig. 13 Computed CDF of Beam Tip Deflection [CDF vs. tip deflection, m (−0.0165 to −0.0125)]


Fig. 14 CDF of the Model-Predicted Tip Deflection, Empirical CDF of the Validation Experiment Tip Deflections, and Area Between Them (Shaded Region) [experimental (stair-stepped) and computed CDFs vs. tip deflection, m (−0.0165 to −0.0125), with the area between them shaded]

10 SUMMARY

The preceding sections provide the reader with a step-by-step illustration of some of the key concepts of V&V in the setting of computational solid mechanics. Beginning with a V&V Plan, successive stages included model development, the two aspects of verification, and finally validation, illustrated using two alternative validation approaches. In addition to illustrating the V&V methodology, the reader was also provided with a framework for approaching V&V efforts on a scale larger than the simple cantilever beam example used in the illustration.

Both code verification and calculation verification were demonstrated. The code verification illustration used both direct comparison of the model result for beam tip deflection to an available analytical solution, and a more demanding test of accuracy using the computed rate of convergence as a function of mesh resolution. The convergence-rate test identified an error in the initial coding of the equations that was not revealed by the traditional test of simply comparing the deflection of the beam. The other part of verification, calculation verification, focused on estimating the amount of error inherent in the model solution due to spatial discretization. The traditional technique of Richardson extrapolation was presented and then related to the more widely used grid convergence index (GCI). Results for the GCI were presented and used to select a discretization that led to an estimated numerical error much smaller than the associated metric requirement.

Two alternative validation approaches were presented and discussed. The validation assessments were implemented by way of requirements on a validation metric based on the area between the CDFs of the experimental and computed SRQs. When only one validation experiment and one numerical simulation are performed, a comparison using estimated uncertainties between the experimental result and the corresponding model prediction is used to implement Validation Approach 1. Validation Approach 2 considers the case where the uncertainties in both the experimental and model outcomes are quantified. In the example, the validation requirement was satisfied in both validation approaches.

11 CONCLUDING REMARKS

Although the cantilever beam V&V example includes as many aspects of the V&V process as practical, it is not all inclusive. Among the significant omissions are

(a) documentation of the V&V process

(b) description of the experiment planning and execution


(c) results comparisons when system response quantities vary in time and space, and how to combine disparate system quantities of interest in an overall validation requirement (e.g., beam tip deflection and quarter-point strain)

(d) identification of uncertain parameters in both the modeling and experiments, and how to quantify their uncertainty and make practical use of the information in both the V&V Plan and the validation assessment

(e) procedures for assessing the effect of the experience and competence of the analyst performing the modeling and simulation

While it is obvious that the V&V documentation needs to report the final results, what is perhaps less obvious is the importance of reporting intermediate results, both successful and unsuccessful. Good documentation not only fulfills the reporting requirements, it also looks to the future and anticipates the needs of those using the model long after the experimental hardware and simulation software are no longer present or are obsolete. As an example, a description of the decision process used for selecting a constitutive model will have potential future value, whereas simply providing a copy of the software input for the constitutive model may have little future relevance.

Equal partners in any validation effort are the model developers and the experiment designers. The overall flowchart of the V&V process (Fig. 1) shows parallel paths for the mathematical and physical model development. It is perhaps natural for an emphasis to be placed on the mathematical modeling in a document written by a committee comprising primarily computational mechanicians, despite the recognition of the equal role of experimentalists.

The use of metrics to provide quantitative comparisons of experimental and simulation results is an active area for the V&V 10 Committee. A Task Group has been formed to draft a document addressing a range of validation metrics, including ones for comparing continuously varying data. The area of comparative metrics, especially those that involve multiple types of similar or disparate system response quantities, is an area of validation that would particularly benefit from contributions by the academic community, where mathematical concepts and ideas from areas other than mechanics could be usefully employed.

The entire topic of verification and validation has advanced considerably over the past decade. In addition to numerous research publications, mini-symposia, and workshops, there have recently appeared book-length treatments by Roache [5], Coleman and Steele [8], and Oberkampf and Roy [9]. These provide in-depth treatments of various V&V topics, in addition to offering alternative views on some topics that continue to mature along with the whole of the V&V process. Another document of interest is ASME's Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer [10].

The V&V 10 Committee has several members who are active in research areas concerned with uncertainty quantification (UQ), especially as it applies to validation. A Task Group has been formed to provide a guidance document on the role of UQ in validation. Although such a document is a needed addition to the V&V standards literature, it likely will not address the need for a much improved understanding and appreciation of UQ among analysts, their managers, and clients. Here again is an opportunity for university educators to contribute to improving the overall awareness of V&V. Introduction of the V&V process in the classroom has started but is not as widespread as the teaching of computational mechanics. Also, encouraging students to take appropriate mathematical course work will provide the student with the UQ understanding necessary to participate in future V&V activities, where UQ will certainly be the language of validation.

A significant factor in the outcome of any V&V effort is the experience and competency of the analyst performing the modeling and simulation. The expectation is that the more experienced the analyst is in using the software for the application of interest, the more likely a successful validation effort outcome. The difficulty is assessing a priori, and perhaps quantifying, the analyst's competence, and thus determining the suitability of the analyst for performing the modeling and simulation. Some formal guidance is provided in the NAFEMS Quality System Supplement (Appendix B, Personnel Competence) [11] to aid program managers in making critical staffing decisions. Although a difficult topic, and certainly widely ignored, guidance in assessing competency is welcomed, and could have a significant positive effect on the overall validation process.

12 REFERENCES

[1] ASME, 2006, Guide for Verification & Validation in Computational Solid Mechanics, ASME V&V 10-2006, The American Society of Mechanical Engineers, New York, NY.

[2] Department of Defense, 2008, Standard Practice: Documentation of Verification, Validation, and Accreditation (VV&A) for Models and Simulations, MIL-STD-3022, 29 January 2008.

[3] Ferson, S., Oberkampf, W., and Ginzburg, L., 2008, "Model Validation and Predictive Capability for the Thermal Challenge Problem," Comput. Methods Appl. Mech. Engrg., 197, pp. 2408–2430.

[4] Vallender, S. S., 1973, "Calculation of the Wasserstein Distance Between Probability Distributions on the Line," Theory of Probability and its Applications, 18, pp. 784–786.

[5] Roache, P. J., 2009, Fundamentals of Verification and Validation, Hermosa Publishers, Albuquerque, NM.

[6] Montgomery, D. C., 2000, Design and Analysis of Experiments, 5th Ed., John Wiley, Hoboken, NJ.

[7] Box, G. E. P., Hunter, J. S., and Hunter, W. G., 2005, Statistics for Experimenters: Design, Innovation, and Discovery, 2nd Ed., John Wiley, New York, NY.

[8] Coleman, H. W., and Steele, W. G., 2009, Experimentation, Validation, and Uncertainty Analysis for Engineers, 3rd Ed., John Wiley, Hoboken, NJ.

[9] Oberkampf, W. L., and Roy, C. J., 2010, Verification and Validation in Scientific Computing, Cambridge University Press.

[10] ASME, 2009, Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer, ASME V&V 20-2009, The American Society of Mechanical Engineers, New York, NY.

[11] Smith, J. M., 2007, NAFEMS Quality System Supplement 001, November 2007, http://www.nafems.org/publications/browse—buy/qa/QSS001/.
