ISO/DIS 5168 Part 2


6 Estimation and presentation of elemental errors

6.1 Summary of procedure

Obtain an estimate of each error. If the data is available to estimate the experimental standard deviation, classify the error as a random error. Otherwise, classify it as a systematic error.

Review the test objective, test duration and number of calibrations that will affect the test result. Make the final classification of elemental errors for each measurement. If an error increases the scatter in the measurement result in the defined test, it is a random error; otherwise, it is a systematic error.

6.2 Calculate the experimental standard deviation

There are many ways to calculate the experimental standard deviation:

a) If the parameter to be measured can be held constant, a number of repeated measurements can be used to evaluate equation (1):

   S = \sqrt{\frac{\sum_{i=1}^{N} (X_i - \bar{X})^2}{N - 1}}    (10)

b) If there are M redundant instruments or M redundant measurements and the parameter to be measured can be held constant to take N repeat readings, the following pooled estimate of the experimental standard deviation for individual readings can be used:

   S_{\text{pooled}} = \sqrt{\frac{\sum_{j=1}^{M} \sum_{i=1}^{N} (X_{ij} - \bar{X}_j)^2}{M(N - 1)}}    (11)

For the experimental standard deviation of the average value of the parameter:

   S_{\bar{X}} = \frac{S_{\text{pooled}}}{\sqrt{MN}}    (12)

c) If a pair of instruments (providing measurements X_{1i} and X_{2i}) which have the same experimental standard deviation are used to estimate a parameter that is not constant with time, the difference between the readings, z_i, may be used to estimate the experimental standard deviation of the individual instruments as follows:

   z_i = X_{1i} - X_{2i}

   S = \sqrt{\frac{\sum_{i=1}^{N} (z_i - \bar{z})^2}{2(N - 1)}}    (13)

If the degrees of freedom are less than 30, the small sample methods shown in annex C are required.

6.3 Estimate the systematic error limit

In spite of applying all known corrections to overcome imperfections in calibration, data acquisition and data reduction processes, some systematic errors will probably remain. To determine the exact systematic error in a measurement, it would be necessary to compare the true value and the measurements. However, as the true value is unknown, it is necessary to carry out special tests or utilize existing data that will provide systematic error information. The following examples are given in order of preference.

a) Interlaboratory or interfacility tests make it possible to obtain the distribution of systematic errors between facilities (Reference ISO 5725).

b) Comparisons of standards with instruments in the actual test environment may be used.

c) Comparison of independent measurements that depend on different principles can provide systematic error information. For example, in a gas turbine test, airflow can be measured with (1) an orifice, (2) a bellmouth nozzle, (3) compressor speed-flow rig data, (4) turbine flow parameters and (5) jet nozzle calibrations.

d) When it is known that a systematic error results from a particular cause, calibrations may be performed allowing the cause to perturb through its complete range to determine the range of systematic error.


e) If there is no source of data for systematic error, the estimate must be based on judgment. An estimate of an upper limit of the systematic error is needed. Instrumentation manufacturers' reports and other references may provide information. It is important to distinguish between the estimate of an upper limit on systematic error obtained by this method and the more reliable estimate of a random error arrived at by analyzing data. There is a general tendency to underestimate systematic uncertainties when a subjective approach is used, partly through human optimism and partly through the possibility of overlooking the existence of some sources of systematic error. Great care is therefore necessary when quoting systematic error limits.

Sometimes the physics of the measurement system provides knowledge of the sign but not the magnitude of the systematic error. For example, hot thermocouples radiate and conduct thermal energy away from the sensor to indicate lower temperatures. The systematic error limits in this case are non-symmetrical, i.e., not of the form ±B. They are of the form B+ for the upper limit and B- for the lower limit. Thus, typical systematic error limits associated with a radiating thermocouple could be:

   B+ = 0 degrees
   B- = -10 degrees

For elemental error sources, the interval from B- to B+ must include zero.

6.4 Final error classification based on the defined measurement

Uncertainty statements must be related to a well defined measurement process. The final classification of errors into systematic (bias) and random (precision) depends on the definition of the measurement process. Some of these considerations are:

a) Long versus short term testing (see 6.4.1)

b) Comparative versus absolute testing (see 6.4.2)

c) Averaging to reduce random error (see 6.4.3)

6.4.1 Long versus short term testing

The calibration histories accumulated before or during the testing period may influence the uncertainty analysis.

1) When the instrumentation is calibrated only once, all the calibration error is frozen into systematic error. The error in the calibration correction is a constant and cannot increase the scatter in a test result. Thus, the calibration error, made up in general of systematic and fossilized random errors, is considered to be all systematic error in this case.

2) If the test period is long enough that instrumentation may be calibrated several times or more and/or several test stands are involved, the random error in the calibration hierarchy (see 5.4) should be treated as contributing to the overall experimental standard deviation. The experimental standard deviations may be derived from calibration data.

6.4.2 Comparative versus absolute testing

The objective of a comparative test is to determine, with the smallest measurement uncertainty possible, the net effect of a design change. The first test is run with the standard or baseline configuration. The second test is run with the design change. The difference between the results of these tests is an indication of the effect of the design change. As long as only the difference or net effect between the two tests is considered, all systematic errors, being fixed, will cancel out. The measurement uncertainty will be composed of random errors only.

The uncertainty of the back-to-back tests can be considerably reduced by repeating them several times and averaging the differences.

All errors in a comparative test arise from random errors in data acquisition and data reduction. Systematic errors are effectively zero. Since calibration random errors have been considered systematic errors, they also are effectively zero.

The test result is the difference in flow between two test results, r_1 and r_2.

   \Delta r = r_1 - r_2    (14)

and

   S_{\Delta r} = \sqrt{S_{r_1}^2 + S_{r_2}^2} = \sqrt{2}\, S_{r_1}    (15)

where S_{r_1} is the random error of the first test, the root-sum-square of the experimental standard deviations from data acquisition and data reduction, and S_{r_2} is assumed to equal S_{r_1}.
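A short numeric check of equations (14) and (15), with hypothetical test results and an assumed common random error, might look like this in Python:

    import math

    r1, r2 = 100.0, 98.5                      # hypothetical test results
    S_r1 = 0.4                                # random error of the first test
    S_r2 = S_r1                               # assumed equal, as in the text

    delta_r = r1 - r2                         # equation (14)
    S_delta = math.sqrt(S_r1**2 + S_r2**2)    # equation (15): equals sqrt(2) * S_r1
    print(delta_r, S_delta)                   # 1.5  ~0.566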

6.4.3 Averaging to reduce random error

Averaging test results is often used to improve the random uncertainty. Careful consideration should be given to designing the test series to average as many causes of variation as possible within cost constraints. The design should be tailored to the specific situation. For example, if experience indicates time-to-time and rig-to-rig variations are significant, a design that averages multiple test measurement results on one rig on one day may produce optimistic random error estimates compared to testing several rigs, each mounted several times, over a period of weeks. The list of possibilities may include the above plus test stand-to-test stand, instrument-to-instrument, mount-to-mount and environmental, fuel, power and test crew variation. Historic data is invaluable for studying these effects.* If the pretest uncertainty analysis identifies unacceptably large error sources, special tests to measure the effects should be considered.

* A statistical technique, analysis of variance (ANOVA), is useful for partitioning total variance by cause.

6.5 Example: a calibration constant

Assume a test meter is to be compared or calibrated with a master meter at one flow level. The objective is to determine a correction, called a calibration constant, that will be added to the test meter observations when it is installed for test. This calibration constant correction will make the test meter read like the master meter. During the calibration, the master meter is used to set the flow level as it is usually more accurate than the test meter. To reduce the calibration random error, N = 13 comparisons will be made and averaged. If the data were plotted, the data might look like figure 10.

If the master meter systematic error limit from its own calibration is judged to be no larger than B_M, what will the test meter uncertainty be after calibration?

Define Δ_i = Master Meter Reading_i - Test Meter Reading_i.

The calibration constant equals the average:

   K = \bar{\Delta} = \frac{\sum_{i=1}^{13} \Delta_i}{13}    (16)

The sample standard deviation of the calibration constant K is:

   S_K = \sqrt{\frac{\sum_{i=1}^{13} (\Delta_i - \bar{\Delta})^2}{13(12)}}    (17)

The test meter is later installed in a test stand. Each observation made on the test meter is corrected by adding K. By this process, the error in K from the calibration process is propagated to the corrected data from the test stand.

Figure 10 Calibration should compensate for test meter systematic error
[Plot of master meter (M) and meter-to-be-calibrated (U) readings versus flow rate, showing the unknown true value, the master meter average and the meter-to-be-calibrated average; master meter systematic error = B_M, calibration random error = S_K.]

If the defined measurement process is short, involving a single calibration, K is constant and this error must be a constant or systematic error. It includes the systematic error in the master meter plus the random error in the calibration process. The random error is fossilized into systematic error. The fossilization is indicated by an asterisk. We can estimate an upper limit for this systematic error as:

   B_K = \sqrt{B_M^2 + \left[(t_{95}\, S_K)^*\right]^2}    (18)

where B_M is the systematic error limit of the master meter and t_{95} = 2.179 for 12 degrees of freedom (annex C).

This calibration systematic error limit would be combined with systematic error limits from data acquisition and data reduction to obtain the measurement systematic error limit. There would also be random error from these last two processes.

If the uncalibrated test meter had a systematic error limit judged to be B_T, the calibration process improved the test accuracy if B_K is less than B_T. Note that the calibration process does not change the test meter random error, which is included in the data acquisition random error. However, the test meter random error contributes to the calibration random error S_K. This contribution is reduced by averaging the calibration data.

If the test process is long and involves several calibrations, the calibration error contributes both systematic error (B_M) and random error (t_{95} S_K) to the final test result.

If the test process is comparative, the difference between two tests with a single calibration, the calibration error is all systematic error and cancels out when one result is subtracted from the other.
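The calibration-constant arithmetic of this example can be sketched in Python; the difference values and the master meter limit B_M below are hypothetical placeholders:

    import math

    # Hypothetical master-minus-test differences, N = 13 comparisons
    deltas = [0.52, 0.48, 0.55, 0.47, 0.50, 0.53, 0.49,
              0.51, 0.46, 0.54, 0.50, 0.52, 0.48]
    N = len(deltas)

    K = sum(deltas) / N                                                 # equation (16)
    S_K = math.sqrt(sum((d - K) ** 2 for d in deltas) / (N * (N - 1)))  # equation (17)

    B_M = 0.10                                   # assumed master meter systematic limit
    t95 = 2.179                                  # Student's t for 12 degrees of freedom
    B_K = math.sqrt(B_M**2 + (t95 * S_K)**2)     # equation (18), fossilized limit
    print(K, S_K, B_K)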


7 Combination and propagation of errors

7.1 Summary of procedure

Root-sum-square the systematic error limits and experimental standard deviations for each measurement. Propagate the measurement systematic error and random error limits separately all the way to the final test result, either by sensitivity factors or by finitely incrementing the data reduction program. Work consistently in either absolute units or percentages.



7.2 Combining sample standard deviations

The experimental standard deviation (S) of the measurement is the root-sum-square of the elemental experimental standard deviations from all sources, that is:

   S = \sqrt{\sum_{j} \sum_{i} S_{ij}^2}    (19)

where j defines the category, such as (1) calibration, (2) data acquisition, (3) data reduction, (4) errors of method and (5) subjective or personal, and i defines the sources within the categories.

For example, the experimental standard deviation for the calibration process in table 1 is:

   S_1 = S_{\text{calibration}} = \sqrt{S_{11}^2 + S_{21}^2 + S_{31}^2 + S_{41}^2}    (20)

The measurement experimental standard deviation is the root-sum-square of all the elemental experimental standard deviations in the measurement system:

   S_{\text{measurement}} = \sqrt{S_1^2 + S_2^2 + S_3^2}    (21)

Categories (4) or (5) are optional and may or may not be employed.

7.3 Combining elemental systematic error limits

If there were only a few sources of elemental systematic errors, it might be reasonable to add them together to obtain the overall systematic error limits. For example, if there were three sources, the probability that they would all be plus (or minus) would be one-half raised to the third power, or one eighth. However, the probability that all three will have the same sign and be at the systematic error limit is extremely small. In actual practice, most measurements will have ten, twenty or more sources of systematic error. The probability that they would all be plus (or minus) and be at their limit is close to zero, and therefore it is more appropriate to combine them by root-sum-square.

If a measurement uncertainty analysis identifies four or fewer sources of systematic error, there should be some concern that some sources have been overlooked. The analysis should be redone and expert help should be recruited to examine the calibration hierarchy, the data acquisition process and the data reduction procedure for additional sources.

Therefore, the systematic error limit will be used herein as the root-sum-square of the elemental errors from all sources:

   B = \sqrt{\sum_{j} \sum_{i} B_{ij}^2}    (22)

For example, the systematic error limit for the calibration hierarchy (table 1) is

   B_1 = B_{\text{calibration}} = \sqrt{B_{11}^2 + B_{21}^2 + B_{31}^2 + B_{41}^2}    (23)

The systematic error limit for the basic measurement process is

   B = \sqrt{B_1^2 + B_2^2 + B_3^2}    (24)

If any of the elemental systematic error limits are non-symmetrical, separate root-sum-squares are used to obtain B+ and B-. For example, assume B_{21} and B_{23} are non-symmetrical, i.e. B_{21}^+, B_{21}^-, B_{23}^+ and B_{23}^- are available. Then

   B^+ = \sqrt{B_{11}^2 + (B_{21}^+)^2 + B_{31}^2 + B_{41}^2 + B_{13}^2 + (B_{23}^+)^2 + \cdots}    (25)

   B^- = \sqrt{B_{11}^2 + (B_{21}^-)^2 + B_{31}^2 + B_{41}^2 + B_{13}^2 + (B_{23}^-)^2 + \cdots}    (26)
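A minimal sketch of the separate root-sum-squares of equations (25) and (26), with hypothetical limits; only sources 21 and 23 are non-symmetrical here:

    import math

    def rss(values):
        return math.sqrt(sum(v ** 2 for v in values))

    symmetric = [0.10, 0.05, 0.03, 0.04]   # hypothetical symmetric elemental limits

    B21_plus, B21_minus = 0.06, 0.02       # non-symmetrical source 21
    B23_plus, B23_minus = 0.08, 0.03       # non-symmetrical source 23

    B_plus  = rss(symmetric + [B21_plus, B23_plus])     # equation (25)
    B_minus = rss(symmetric + [B21_minus, B23_minus])   # equation (26)
    print(B_plus, B_minus)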

7.4 Propagation of measurement errors

Fluid flow parameters are rarely measured directly; usually more basic quantities such as temperature and pressure are measured, and the fluid flow parameter is calculated as a function of the measurements. Error in the measurements is propagated to the parameter through the function. The effect of the propagation may be approximated with Taylor's series methods. It is convenient to introduce the concept of the sensitivity of a result to a measured quantity as the error propagated to the result due to unit error in the measurement. The sensitivity coefficient (also known as influence coefficient) of each subsidiary quantity is most easily obtained in one of two ways.

a) Analytically

Where there is a known mathematical relationship between the result, R, and subsidiary quantities, Y_1, Y_2, ..., Y_K, the dimensional sensitivity coefficient, θ_i, of the quantity Y_i is obtained by partial differentiation. Thus, if R = f(Y_1, Y_2, ..., Y_K), then

   \theta_i = \frac{\partial R}{\partial Y_i}    (27)

Analogously, the relative (nondimensional) sensitivity coefficient, θ'_i, is

   \theta'_i = \frac{\partial R / R}{\partial Y_i / Y_i}    (28)

In this form, the sensitivity is expressed as percent/percent. That is, θ'_i is the percentage change in R brought about by a 1% change in Y_i. This is the form to be used if the uncertainties to be combined are expressed as percentages of their associated variables rather than absolute values.

b) Numerically

Where no mathematical relationship is available or when differentiation is difficult, finite increments may be used to evaluate θ_i. This is a convenient method with computer calculations. Here θ_i is given by

   \theta_i = \frac{\Delta R}{\Delta Y_i}    (29)

The result is calculated using Y_i to obtain R, and then recalculated using (Y_i + ΔY_i) to obtain (R + ΔR). The value of ΔY_i used should be as small as practicable.
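The finite-increment method of equation (29) is easy to mechanize. The sketch below applies it to the sonic-nozzle flow function used later in 7.5 (the function names and the step size are illustrative); for this function the numerical sensitivities reproduce the analytic partial derivatives of equation (27):

    import math

    def flow(Fa, phi, a, pt, Tt, C=1.0):
        """Sonic-nozzle flow function of 7.5: W = C*Fa*phi*a*pt/sqrt(Tt)."""
        return C * Fa * phi * a * pt / math.sqrt(Tt)

    nominal = dict(Fa=1.0, phi=0.0404, a=0.191, pt=2.54e5, Tt=303.0)

    def sensitivity(name, rel_step=1e-6):
        """theta_i = delta(R)/delta(Y_i), equation (29), for one input."""
        y0 = nominal[name]
        dy = y0 * rel_step                        # a small increment, as the text advises
        perturbed = dict(nominal, **{name: y0 + dy})
        return (flow(**perturbed) - flow(**nominal)) / dy

    for name in nominal:
        print(name, sensitivity(name))
    # Relative sensitivities are +1 for Fa, phi, a and pt, and -1/2 for Tt,
    # matching partial differentiation of equation (31).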

Care should be taken to ensure that the errors are independent. With complex parameters, the same measurement may be used more than once in the formula. This may increase or decrease the error depending on whether the sign of the measurement is the same or opposite. If the Taylor's series relates the most elementary measurements to the ultimate parameter or result, these linked relationships will be properly accounted for.


This effect can be covered by calculating a modified θ by simultaneous perturbation of all the inputs likely to be affected, thus:

   \theta_{\text{link}} = (change in output R due to simultaneous application of linked error in all inputs, y_i)

An example of this is barometric pressure, which affects all pressure inputs simultaneously in a gauge-pressure system. Another example is the use of a common working standard to calibrate all the pressure transducers.

Such linked errors can then be combined with independent ones, thus:

   S(R) = \sqrt{\left[\theta_{\text{link}}\, S(y_{\text{link}})\right]^2 + \sum_i \left[\theta_i\, S(y_i)\right]^2}    (30)

7.5 Airflow example

In this example, airflow is determined by the use of a sonic nozzle and measurements of upstream stagnation temperature and stagnation pressure (figure 11).


Figure 11 Flow through a sonic nozzle
[Sketch of a sonic nozzle with the airflow measurement W upstream of the flow.]

The flow is calculated from

   W = \frac{C\, F_a\, \varphi^*\, a\, p_{1t}}{\sqrt{T_{1t}}}    (31)

where

W is the mass flowrate of air;
F_a is the factor to account for thermal expansion of the venturi;
a is the venturi throat area;
p_{1t} is the total (stagnation) pressure upstream;
T_{1t} is the total temperature upstream;
φ* is the factor to account for the properties of the air (critical flow constant);
C is the discharge coefficient.

The experimental standard deviation for the flow (S_W) is calculated using the Taylor's series expansion. Assuming C equals 1 and has negligible error,

   S_W = \left[(\theta_{F_a} S_{F_a})^2 + (\theta_{\varphi^*} S_{\varphi^*})^2 + (\theta_a S_a)^2 + (\theta_{p_{1t}} S_{p_{1t}})^2 + (\theta_{T_{1t}} S_{T_{1t}})^2\right]^{1/2}    (32)

where \theta_{F_a} = \partial W / \partial F_a denotes the partial derivative of W with respect to F_a, and similarly for the other variables.


Taking the partial derivatives gives

   S_W = \left[\left(\frac{\varphi^* a\, p_{1t}}{\sqrt{T_{1t}}} S_{F_a}\right)^2 + \left(\frac{F_a a\, p_{1t}}{\sqrt{T_{1t}}} S_{\varphi^*}\right)^2 + \left(\frac{F_a \varphi^* p_{1t}}{\sqrt{T_{1t}}} S_a\right)^2 + \left(\frac{F_a \varphi^* a}{\sqrt{T_{1t}}} S_{p_{1t}}\right)^2 + \left(\frac{F_a \varphi^* a\, p_{1t}}{2\, T_{1t}^{3/2}} S_{T_{1t}}\right)^2\right]^{1/2}    (33)

By inserting the values and random errors from table 4 into equation (33), the random error of 0.17 kg/sec for airflow is obtained.

The systematic error in the flow calculation is propagated from the systematic error limits of the measured variables. Using the Taylor's series formula gives

   B_x = \left[(\theta_{x_1} B_{x_1})^2 + (\theta_{x_2} B_{x_2})^2 + (\theta_{x_3} B_{x_3})^2 + \cdots + (\theta_{x_m} B_{x_m})^2\right]^{1/2}    (34)

For this example,

   B_W = \left[(\theta_{F_a} B_{F_a})^2 + (\theta_{\varphi^*} B_{\varphi^*})^2 + (\theta_a B_a)^2 + (\theta_{p_{1t}} B_{p_{1t}})^2 + (\theta_{T_{1t}} B_{T_{1t}})^2\right]^{1/2}    (35)

Taking the necessary partial derivatives gives


   B_W = \left[\left(\frac{\varphi^* a\, p_{1t}}{\sqrt{T_{1t}}} B_{F_a}\right)^2 + \left(\frac{F_a a\, p_{1t}}{\sqrt{T_{1t}}} B_{\varphi^*}\right)^2 + \left(\frac{F_a \varphi^* p_{1t}}{\sqrt{T_{1t}}} B_a\right)^2 + \left(\frac{F_a \varphi^* a}{\sqrt{T_{1t}}} B_{p_{1t}}\right)^2 + \left(\frac{F_a \varphi^* a\, p_{1t}}{2\, T_{1t}^{3/2}} B_{T_{1t}}\right)^2\right]^{1/2}    (36)

By inserting the values and systematic error limits of the measured parameters from table 4 into equation (36), a systematic error limit of 0.32 kg/sec is obtained for a nominal airflow of 112.64 kg/sec.

Table 4 contains a summary of the measurement uncertainty analysis for this flow measurement. It should be noted the errors listed only apply to the nominal values.

Table 4 Flow data

Parameter   Units               Nominal value   Experimental standard deviation   Systematic error
                                                (one experimental standard
                                                deviation)
F_a         -                   1.00            0.0                               0.001
C           -                   1.0             0.0                               0.0
φ*          kg·K^{1/2}/(N·s)    0.0404          0.0                               4.04×10^{-5}
a           m²                  0.191           9.55×10^{-5}                      3.82×10^{-4}
p_{1t}      Pa                  2.54×10^5       345.0                             345.0
T_{1t}      K                   303.0           0.17                              0.17
W           kg/sec              112.64          0.17                              0.32
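The propagation of equations (33) and (36) can be checked numerically. This Python sketch combines the table 4 values, as transcribed above, with the finite-increment sensitivities of 7.4:

    import math

    def flow(Fa, phi, a, pt, Tt, C=1.0):
        return C * Fa * phi * a * pt / math.sqrt(Tt)

    nominal = dict(Fa=1.0, phi=0.0404, a=0.191, pt=2.54e5, Tt=303.0)
    S = dict(Fa=0.0, phi=0.0, a=9.55e-5, pt=345.0, Tt=0.17)        # random, table 4
    B = dict(Fa=0.001, phi=4.04e-5, a=3.82e-4, pt=345.0, Tt=0.17)  # systematic, table 4

    def sensitivity(name, rel_step=1e-6):
        y0 = nominal[name]
        dy = y0 * rel_step
        return (flow(**dict(nominal, **{name: y0 + dy})) - flow(**nominal)) / dy

    S_W = math.sqrt(sum((sensitivity(k) * S[k]) ** 2 for k in nominal))  # equation (33)
    B_W = math.sqrt(sum((sensitivity(k) * B[k]) ** 2 for k in nominal))  # equation (36)
    print(flow(**nominal), S_W, B_W)   # ~112.6 kg/sec, ~0.17 kg/sec, ~0.32 kg/sec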

8 Calculation of uncertainty

8.1 Summary of procedure

Select U_ADD and/or U_RSS and combine the systematic and random errors of the test result to obtain the uncertainty. The test result plus and minus the uncertainty is the uncertainty interval that should contain the true value with high probability.

8.2 Uncertainty intervals

For simplicity of presentation, a single number (some combination of systematic and random errors) is needed to express a reasonable limit for error. The single number should have a simple interpretation (like the largest error reasonably expected) and be useful without complex explanation. It is usually impossible to define a single rigorous statistic because the systematic error is an upper limit based on judgment which has unknown characteristics.* This function is a hybrid combination of an unknown quantity (systematic error) and a statistic (random error). If both numbers were statistics, a confidence interval would be recommended. 95% or 99% confidence levels would be available at the discretion of the analyst. Although rigorous statistical confidence levels are not available, two uncertainty intervals, approximately analogous to 95% and 99% levels, are recommended. This analogy is discussed in annex F.

* If information exists to justify the assumption that the systematic error limits have a random distribution, a rigorous statistic can be defined as shown in annex K.

8.3 Symmetrical intervals

Uncertainty (figure 12) for the symmetrical systematic error case is centered about the measurement, and the uncertainty intervals are defined as:

   R - U to R + U

where

   U_{ADD} = U_{99} = B + t_{95}\, S    (37)

   U_{RSS} = U_{95} = \sqrt{B^2 + (t_{95}\, S)^2}    (38)

If the sample standard deviation is based on small samples, the methods in annex C may be used to determine a value of Student's t_{95}. For large samples (> 30), 2 may be substituted for t_{95} in equations (37) and (38).

If the test result is an average (R̄) based on sample size N, instead of a single value (R), S/√N should be substituted for S.
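Equations (37) and (38) reduce to a few lines of Python; the sketch below uses the airflow result of 7.5 and takes t_95 = 2 for a large sample:

    import math

    def uncertainty(B, S, t95=2.0, N=1):
        """U_ADD and U_RSS per equations (37) and (38); pass N for an averaged result."""
        s = S / math.sqrt(N)
        U_add = B + t95 * s                         # U_ADD = U_99
        U_rss = math.sqrt(B**2 + (t95 * s)**2)      # U_RSS = U_95
        return U_add, U_rss

    print(uncertainty(B=0.32, S=0.17))              # (0.66, ~0.47)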

The uncertainty interval selected (equation (37) or (38)) should be provided in the presentation; the components (systematic error, random error, degrees of freedom) should be available in an appendix or in supporting documentation. These three components may be required to substantiate and explain the uncertainty value, to provide a sound technical base for improved measurements, and to propagate the uncertainty from measured parameters to fluid flow parameters and from fluid flow parameters to other more complex performance parameters (e.g., fuel flow to Thrust Specific Fuel Consumption (TSFC), TSFC to aircraft range, etc.).

Figure 12 Measurement uncertainty interval (U_99); symmetrical systematic error
[Diagram of the measurement scale showing the measurement, the largest positive error and the uncertainty interval; the true value should be within this interval.]

9 Presentation of results

9.1 Summary of requirement

The summary report should contain the nominal level of the test result, the systematic error, the sample standard deviation, the degrees of freedom and the uncertainty. The equation used to calculate uncertainty, U_ADD or U_RSS, should be stated. The summary should reference a table of the elemental errors considered and included in the uncertainty.

9.2 Reporting error summary

The definition of the components, systematic error limit, experimental standard deviation and the limit (U) suggests a summary format for reporting measurement error. The format will describe the components of error, which are necessary to estimate further propagation of the errors, and a single value (U) which is the largest error expected from the combined errors. Additional information, degrees of freedom for the estimate of S, is required to use the experimental standard deviation if small samples were used to calculate S. These summary numbers provide the information necessary to accept or reject the measurement error. The reporting format is:

a) S, the estimate of the experimental standard deviation, calculated from data.

b) For small samples, ν, the degrees of freedom associated with the estimate of the experimental standard deviation (S). The degrees of freedom for small samples (less than 30) is obtained from the Welch-Satterthwaite procedure illustrated in annex C.

c) B, the upper limit of the systematic error of the measurement process, or B- and B+ if the systematic error limit is non-symmetrical.


d) U, the uncertainty interval within which the error should fall. The uncertainty formula should be stated: U_{99} = B + t_{95} S or U_{95} = \sqrt{B^2 + (t_{95} S)^2}. If the systematic error limit is non-symmetrical, U^- = B^- - t_{95} S and U^+ = B^+ + t_{95} S. No more than two significant places should be reported. For small samples see annex C.

The model components, S, ν, B and U, are required to report the error of any measurement process. The first three components, S, ν and B, are necessary to: (1) indicate corrective action if the uncertainty is unacceptably large before the test, (2) propagate the uncertainty to more complex parameters, and (3) substantiate the uncertainty limit.

9.3 Reporting error table of elemental sources

To support the measurement uncertainty summary, a table detailing the elemental error sources is needed for several purposes. If corrective action is needed to reduce the uncertainty or to identify data validity problems, the elemental contributions are required. Further, if the uncertainty quoted in the summary appears to be optimistically small, the list of sources considered should be reviewed to identify missing sources. For this reason, it is important to list all sources considered, even if negligible.

Note that all errors in table 5 have been propagated from the basic measurement to the end result before listing and, therefore, they are expressed in units of the test result.

Table 5 Elemental error sources

Source          Measurement     Experimental standard   Degrees of      Systematic error   Source of
(subscript ij)  nominal value   deviation S_ij          freedom ν_ij    limit B_ij         systematic error
11
21
31
12
22
32
42
13
23
33

Results: nominal value;  S = \sqrt{\sum S_{ij}^2};  ν (from the Welch-Satterthwaite procedure, annex C);  B = \sqrt{\sum B_{ij}^2};  t_{95};  U_{95} = \sqrt{B^2 + (t_{95} S)^2}  (in absolute units or %)

9.4 Pre-test analysis and corrective action

Uncertainty is a function of the measurement process. It provides an estimate of the largest error that may reasonably be expected for that measurement process. Errors larger than the uncertainty should rarely occur. If the difference to be detected in an experiment is of the same size or smaller than the projected uncertainty, corrective action should be taken to reduce the uncertainty. Therefore, it is recommended that an uncertainty analysis always be done before the test or experiment. The recommended corrective action depends on whether the systematic or the random error is too large, as shown in table 6.

Table 6 Recommended corrective action if the predicted pretest measurement accuracy is unacceptable

Systematic error limit too large:
Improve calibration
Independent calibrations for redundant meters
Concomitant variable
In-place calibration

Random error too large:
Larger test sample
More precise instrumentation
Redundant instrumentation
Data smoothing (moving average, filter, regression)
Improve design of experiment

9.5 Post-test analysis and data validity

Post-test analysis is required to confirm the pretest estimates or to identify data validity problems. Comparison of measurement test results with the pretest analysis is an excellent data validity check. The random error of the repeated points or redundant instruments should not be significantly larger than the pretest measurement estimates. When redundant instrumentation or calculation methods are available, the individual uncertainty intervals should be compared for consistency with each other and with the pretest measurement uncertainty analysis.

Three cases are illustrated in figure 13.

When there is no overlap between uncertainty intervals, as in Case I, a problem exists. The true value cannot be contained within both intervals. That is, there should be a very low probability that the true value lies outside any of the uncertainty intervals. Either the uncertainty analysis is wrong or a data validity problem exists. Investigation to identify bad readings, overlooked systematic error, etc., is necessary to resolve this discrepancy. Redundant and dissimilar instrumentation should be compared. Partial overlap of the uncertainty intervals, as in Case II, also signals that a problem may exist. The magnitude of the problem depends on the amount of overlap. The only situation when one can be confident that the data is valid and the uncertainty analysis is correct is Case III, when the uncertainty intervals completely overlap.



Figure 13 Three post-test measurement uncertainty interval comparisons
[Diagram of uncertainty intervals U_1 and U_2 about measurements x_1 and x_2 for Case I (no overlap), Case II (partial overlap) and Case III (complete overlap).]
