
Introduction to Bayesian Statistics

Alyson Wilson, Ph.D., PStat®

Department of Statistics
Laboratory for Analytic Sciences
North Carolina State University

agwilso2@ncsu.edu

April 4, 2017

A. Wilson (NCSU Statistics) Introduction to Bayesian Statistics April 4, 2017 1 / 55

Background and Research Interests


When to be Bayesian?

From Meeker and Hong (2014), “Many of the applications described in this article and particularly in this concluding section will require combining information from different sources (e.g., data, inexact physics-based knowledge, and certain kinds of expert opinion). Additionally, statistical models being used will often contain multiple sources of variability. Bayesian statistical methods provide a natural approach for combining such information.”

W. Q. Meeker and Y. Hong (2014). Reliability Meets Big Data: Opportunities and Challenges. Quality Engineering 26(1): 102-116.


Bayesian Reliability
Heterogeneous Information

• DoD systems experience system design, contractor testing, developmental testing, and operational testing.

• Later in the lifecycle, we see new variants of systems, life extension programs, . . . .

• The goal of science-based stockpile stewardship is the assessment of safety and reliability in aging warheads in the absence of nuclear testing.


From these examples

• Common threads
  • We need to combine many sources of information
  • We need to use the information we have already seen to plan subsequent data collection
    • Typically more broadly than factors, experimental region, and tentative model structure

• Meeker and Hong also discuss extrapolation, a common desire in reliability analyses: “Extrapolation will be more reliable if predictions are based on a combination of science-based models of reliability (e.g., knowledge of the physics of well-understood failure modes) and data are used to develop predictive models for a failure time distribution.”


Why be Bayesian?

• Allows incorporation of prior information
• May supplement limited data
• May provide improvements in cost or precision
• Provides a formal framework to think about how to combine information
• Computational simplifications
  • Censored data
  • When framed as a Bayesian problem, complex models can often be fit relatively easily using Markov chain Monte Carlo or other computational algorithms.
  • Straightforward to produce estimates and credible intervals for complicated functions of model parameters (e.g., predictions, probability of failure, quantiles of lifetime distribution)
• Philosophical
• Pragmatic


Example

Question of Interest: How likely is the upcoming launch by country X to be successful?

Basic Statistics Problem: Unknown population parameter (θ) must be estimated.

θ = Probability of a successful launch of a new vehicle by an inexperienced agent. During the period 1980–2002, eleven launches of new vehicles were performed by companies or agencies with little launch vehicle design experience. Of these eleven, three were successful and eight were failures.


Launch Vehicle Outcome Data

Vehicle        Outcome
Pegasus        Success
Percheron      Failure
AMROC          Failure
Conestoga      Failure
Ariane 1       Success
India SLV-3    Failure
India ASLV     Failure
India PSLV     Failure
Shavit         Success
Taepodong      Failure
Brazil VLS     Failure


Example
Step 1: Careful Modeling

Step 1 of both the Bayesian and non-Bayesian formulations is to choose a statistical model (sampling distribution) for the data.

One choice for f(y | θ) is that the number of successful launches (Y) follows a binomial distribution with n launches and the probability of any one test being a “success” denoted as θ.

Equivalently, we can think about each launch as a Bernoulli trial with probability of success θ.

What are some of the problems with this model?


For a More Detailed Analysis

V. Johnson, A. Moosman, P. Cotter (2005). A hierarchical model for estimating the early reliability of complex systems. IEEE Transactions on Reliability 54(2): 224-231.


Classical/Frequentist (non-Bayesian) Analysis

1. All pertinent information enters the problem through the likelihood function in the form of data (Y1, . . . , Yn):

L(θ) = f(y | θ) = (n choose y) θ^y (1 − θ)^(n−y)

2. Software packages all have this capability

3. Maximum likelihood, unbiased estimation, etc.

4. Confidence intervals, tests of hypotheses


Classical/Frequentist Analysis
Maximum Likelihood

log f(y | θ) ∝ y log(θ) + (n − y) log(1 − θ), where y = 3, n = 11

Taking the first derivative of the log-likelihood with respect to θ and setting the result equal to 0, we see that the MLE must satisfy

0 = ∂ log f(y | θ)/∂θ = y/θ − (n − y)/(1 − θ)

Solving for θ implies that the MLE of θ, say θ̂, is given by

θ̂ = y/n = 3/11

In other words, the MLE of the success probability θ in a binomial model is simply the observed proportion of successes.
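As a numerical check on this algebra, the following short sketch (mine, not from the slides) maximizes the same binomial log-likelihood over a grid of θ values:

```python
import math

def log_lik(theta, y=3, n=11):
    # Binomial log-likelihood with the constant term dropped, as on the slide
    return y * math.log(theta) + (n - y) * math.log(1 - theta)

# A simple grid search recovers the analytic MLE y/n
grid = [i / 1000 for i in range(1, 1000)]
theta_hat = max(grid, key=log_lik)
print(round(theta_hat, 3))  # the analytic answer is 3/11 ≈ 0.273
```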


Classical/Frequentist Analysis
Maximum Likelihood


Classical/Frequentist Inference
Confidence Intervals

For the launch vehicle data, a point estimate of the success probability of a new launch system developed by an inexperienced manufacturer is provided by the MLE:

θ̂ = y/n = 3/11 = 0.272

An interval estimate for the population proportion of success can be obtained using the asymptotic normal sampling distribution of the MLE θ̂.

For θ, we have a 90% confidence interval of

(0.272 − 1.645 × 0.134, 0.272 + 1.645 × 0.134) = (0.052, 0.492)

In repeated sampling, one expects the confidence interval to include the unknown parameter with probability close to 0.90.
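The arithmetic behind this interval is easy to reproduce; a short sketch (not from the slides). Note the slide's upper endpoint of 0.492 reflects rounding 3/11 and the standard error to three digits before multiplying; full precision gives roughly 0.494:

```python
import math

y, n, z = 3, 11, 1.645                     # z: upper 95% point of the standard normal
p_hat = y / n
se = math.sqrt(p_hat * (1 - p_hat) / n)    # asymptotic standard error of the MLE
lo, hi = p_hat - z * se, p_hat + z * se    # 90% Wald confidence interval
print(round(se, 3), round(lo, 3), round(hi, 3))  # se ≈ 0.134, interval ≈ (0.052, 0.494)
```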


Bayesian Analysis
One more piece of the model

1. Data enters through the likelihood function

2. However, other information can be incorporated through the prior distribution

3. Prior distribution: Before any data collection, the view of/information about the parameter
  • Expressed as a probability distribution on θ
  • Can come from expert opinion, historical studies, previous research, computer models, similar systems, subject matter knowledge, . . .
  • There exist “noninformative” (“flat,” “diffuse”) priors that represent states of ignorance.


Bayesian Analysis

4. Bayes’ Theorem:

p(θ | y1, . . . , yn) = f(y | θ) × π(θ) / ∫ f(y | θ) × π(θ) dθ

The posterior distribution, p(θ | y), is a constant multiplied by the likelihood, f(y | θ), multiplied by the prior distribution, π(θ). (The posterior distribution is proportional to the prior times the likelihood.)

5. Posterior distribution: In light of the data, the updated view of/information about the parameter

6. All inference is based on the posterior distribution.


Thought Experiment

The Strength of Evidence versus the Power of Belief: Are We All Bayesians? Lecture by Jessica Utts at the International Conference on Teaching Statistics (2010). http://videolectures.net/icots2010_utts_awab/

• Parapsychology (“ESP”)

• Remote viewing experiment

• n = 2124 experiments, with y = 709 “successes,” where you would expect a 25% success rate by chance alone

• The point estimate is 709/2124 = 0.334, with a 95% confidence interval of (0.314, 0.354).

• Are you convinced that the probability is greater than 0.25?


Thought Experiment

Suppose that you are a skeptic. Most likely value is 0.25, and you are 95% sure that the real value is less than 0.255.


Thought Experiment

Suppose that you are a believer. Most likely value is 0.33, and you are 95% sure that the real value is below 0.36. (If this were a huge effect, we’d all know.)


Thought Experiment

Suppose that you are “open-minded.” Most likely value is 0.25, and you are 95% sure that the real value is below 0.30.


Thought Experiment

[Figures: the three prior distributions and their corresponding posterior distributions]


Bayesian Analysis

1. “Subjective” (what/whose information is contained in the prior distribution?)
  • Careful, detailed elicitation
  • Sensitivity analysis (These kinds of analyses are necessarily very explicit about assumptions and very diligent in understanding the impact of each assumption and source of information on inference and decisions.)
  • Calibration of priors
  • Experts, historical data, previous experiments, computer models, subcomponent tests

2. An increasing number of software packages have this capability (SAS PROC MCMC, OpenBUGS, JAGS, Stan, NIMBLE, R packages)

3. Result is a probability distribution

4. Credible intervals use the language that everyone wants to use. (Probability that θ is in the interval is 0.90.)


Example
Bayesian Analysis: Beta-Binomial

A convenient choice to represent prior information about the probability of a successful launch is the Beta distribution. One interpretation of the parameters defining this distribution is the number of a priori successes and failures.

For example, if an expert hypothesizes that her opinion about the probability of successful launch is worth 8 vehicle launches and further expects 6 successes, we would reflect this with a Beta(6, 2) distribution.


Example

• Non-Bayesian analysis: If our data are Binomial(n, θ), then we would calculate Y/n as our estimate and use a confidence interval formula for a proportion.

• Bayesian analysis: If our data are Binomial(n, θ) and our prior distribution is Beta(a, b), then our posterior distribution is Beta(a + y, b + n − y).
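The contrast in the second bullet is striking because the conjugate Bayesian update is just arithmetic on the Beta parameters; a minimal sketch (the function name is mine):

```python
def beta_binomial_update(a, b, y, n):
    """Conjugate update: Beta(a, b) prior + y successes in n Binomial trials."""
    return a + y, b + n - y

# Launch data (y = 3 successes in n = 11 trials) with the expert's Beta(6, 2) prior
print(beta_binomial_update(6, 2, 3, 11))  # → (9, 10)
```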


Example
Bayesian Analysis

• In this case, the posterior distribution is Beta(6 + 3, 2 + 11 − 3) = Beta(9, 10).

• This means that we can say that the probability that θ is in the interval (0.26, 0.69) is 0.95.

• Notice that we don’t have to address the problem of “in repeated sampling”; this is a direct probability statement that relies on the prior distribution.


Example
Baseline Posterior Distribution

n = 11, y = 3, a = 6, b = 2

[Figure: posterior density plotted against the proportion θ]


Example
Diffuse Prior Posterior Distribution

n = 11, y = 3, a = 1, b = 1

[Figure: posterior density plotted against the proportion θ]


Example
Large n Baseline Posterior Distribution

n = 110, y = 30, a = 6, b = 2

[Figure: posterior density plotted against the proportion θ]


Example
Large n Diffuse Prior Posterior Distribution

n = 110, y = 30, a = 1, b = 1

[Figure: posterior density plotted against the proportion θ]


Example
All Four Posterior Distributions

[Figure: the four posterior densities overlaid: Beta(6,2), n=11; Beta(1,1), n=11; Beta(6,2), n=110; Beta(1,1), n=110]


Credible Intervals

90% confidence interval: (0.052, 0.492)

90% credible intervals

• Beta(6,2), n = 11: (0.291,0.659)

• Uniform(0,1), n = 11: (0.123,0.527)

• Beta(6,2), n = 110: (0.238,0.376)

• Uniform(0,1), n = 110: (0.210,0.348)
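These credible intervals are just quantiles of the corresponding Beta posteriors, so they can be approximated by simulation with nothing beyond Python's standard library; a sketch (the function name is mine):

```python
import random

def credible_interval(a, b, level=0.90, draws=200_000, seed=1):
    # Equal-tailed interval from Monte Carlo draws of a Beta(a, b) posterior;
    # random.betavariate is the standard-library Beta sampler
    rng = random.Random(seed)
    sample = sorted(rng.betavariate(a, b) for _ in range(draws))
    tail = (1.0 - level) / 2.0
    return sample[int(tail * draws)], sample[int((1.0 - tail) * draws) - 1]

# Beta(6, 2) prior with y = 3 successes in n = 11 trials -> Beta(9, 10) posterior
lo, hi = credible_interval(9, 10)
print(round(lo, 3), round(hi, 3))  # close to the slide's (0.291, 0.659)
```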


Example

Consider an example involving continuous-valued random variables. The particular data set we consider represents viscosity breakdown times for 50 samples of a lubricant.


Example

Question of Interest: What is the time of viscosity breakdown for a particular lubricating fluid?

Basic Statistics Problem: Unknown population parameters (θ) must be estimated.

θ = Mean and standard deviation of (log) viscosity breakdown times for a particular lubricating fluid. Data are collected (in 1000s of hours) for 50 samples.


Viscosity Breakdown Times

Viscosity breakdown times (in 1000s of hours) for 50 samples of a lubricating fluid.

5.45  16.46  15.70  10.39   6.71   3.77   7.42   6.89
9.45   5.89   7.39   5.61  16.55  12.63   8.18  10.44
6.03  13.96   5.19  10.96  14.73   6.21   5.69   8.18
4.49   3.71   5.84  10.97   6.81  10.16   4.34   9.81
4.30   8.91  10.07   5.85   4.95   7.30   4.81   8.44
6.56   9.40  11.29  12.04   1.24   3.45  11.28   6.64
5.74   6.79


Example
Step 1: Careful Modeling

Step 1 of both the Bayesian and non-Bayesian formulations is to choose a statistical model (sampling distribution) for the data.

One choice for f(y | θ) is that the natural logarithm of the viscosity breakdown times (log(Y)) has a normal distribution with parameters θ = (µ, σ²).


Specifying the Model

• θ = (µ, σ²) are the mean and variance of the natural logarithm of the lubricant measurements.

• A convenient choice for π(θ) is a normal distribution for µ and an inverse gamma distribution for σ². We’ll assume mutual independence.

• System engineers who have studied the viscous properties of these lubricants believe that log(Y) should be centered at 2, but that value is known only with standard deviation 1.

• They have very little knowledge about the variability they expect to see.

• The process of getting a prior distribution from statements made by experts is called elicitation.

• How could we check this prior distribution with the engineers?
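One common answer to the last question is to simulate draws from the elicited prior and show the implied values back to the engineers. A sketch of that check, with the caveat that the inverse gamma shape and scale below are illustrative placeholders rather than values stated in the talk:

```python
import random

rng = random.Random(0)

def prior_draw():
    # mu: engineers say log(Y) is centered at 2, known only to standard deviation 1
    mu = rng.gauss(2.0, 1.0)
    # sigma^2: an inverse gamma draw via 1/Gamma; shape 2, scale 1 are placeholders
    sigma2 = 1.0 / rng.gammavariate(2.0, 1.0)
    return mu, sigma2

draws = [prior_draw() for _ in range(10_000)]
mean_mu = sum(mu for mu, _ in draws) / len(draws)
print(round(mean_mu, 1))  # should sit near the elicited center of 2
```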


Joint Posterior Distribution


Marginal Posterior Distributions


Computation
A Brief Mention

• What do we do when the posterior distribution of θ does not have a nice distributional form?

• Note that the dimension of θ is often large in real problems.

• Direct numerical integration (for example, to determine means or marginal distributions) is problematic.

• We could construct a normal approximation . . .

• The answer is: We sample. In particular, we figure out how to draw a random sample from p(θ | y1, . . . , yn). Then we can compute quantities of interest using Monte Carlo techniques.


Computation

There are a variety of algorithms that can be used to get our random samples:

• Rejection sampling

• Importance sampling

• Sampling importance resampling

• Gibbs sampling

• Metropolis-Hastings algorithm
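As a concrete illustration of the last algorithm, here is a minimal random-walk Metropolis sampler (my sketch, not code from the talk) targeting the launch-vehicle posterior from earlier, where the exact answer Beta(9, 10) is known:

```python
import math
import random

def log_post(theta, y=3, n=11, a=6, b=2):
    # Unnormalized log posterior: Beta(a, b) prior times binomial likelihood,
    # which is Beta(a + y, b + n - y) up to a constant
    if not 0.0 < theta < 1.0:
        return float("-inf")
    return (a + y - 1) * math.log(theta) + (b + n - y - 1) * math.log(1.0 - theta)

def metropolis(steps=50_000, step_size=0.2, seed=0):
    rng = random.Random(seed)
    theta, chain = 0.5, []
    for _ in range(steps):
        prop = theta + rng.gauss(0.0, step_size)   # random-walk proposal
        if rng.random() < math.exp(min(0.0, log_post(prop) - log_post(theta))):
            theta = prop                           # accept the move
        chain.append(theta)
    return chain[5_000:]                           # discard burn-in

draws = metropolis()
post_mean = sum(draws) / len(draws)
print(round(post_mean, 2))  # the Beta(9, 10) posterior mean is 9/19 ≈ 0.47
```

Out-of-range proposals get log posterior −∞ and are always rejected, which keeps the chain inside (0, 1).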


System Reliability

Statistical Areas of Interest
System Reliability

• Data
  • Multilevel
  • Multiple types: binary, lifetime, degradation, expert judgement, computer model
• Systems
  • Representation
  • Assessment: model checking and diagnostics, model fit
• Planning data collection



Multilevel Data
Multilevel Pass/Fail Data, Series System

• Information collected at C0, C1, C2, and C3

• Information at C0 provides partial information about C1, C2, and C3

• Goal: simultaneous inference about system and component reliabilities

              Successes  Failures  Trials
Component 1       8          2       10
Component 2       7          2        9
Component 3       3          1        4
System           10          2       12



Example: System Reliability
Multilevel Pass/Fail Data, Series System

[Diagram: series system with system node C0 and components C1, C2, C3]

              Successes  Failures  Units Tested
Component 1       8          2         10
Component 2       7          2          9
Component 3       3          1          4
System           10          2         12

L(p1, p2, p3; x) = p1^8 (1 − p1)^2 p2^7 (1 − p2)^2 p3^3 (1 − p3) pS^10 (1 − pS)^2
                 = p1^8 (1 − p1)^2 p2^7 (1 − p2)^2 p3^3 (1 − p3) (p1 p2 p3)^10 (1 − p1 p2 p3)^2

Prior information for p1, p2, p3, pS
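The substituted likelihood above translates directly into code; a small sketch (mine, not from the talk) that evaluates its logarithm:

```python
import math

def log_lik(p1, p2, p3):
    # Multilevel pass/fail log-likelihood with the series-system substitution
    # pS = p1 * p2 * p3 (component data: 8/10, 7/9, 3/4; system data: 10/12)
    pS = p1 * p2 * p3
    return (8 * math.log(p1) + 2 * math.log(1 - p1)
            + 7 * math.log(p2) + 2 * math.log(1 - p2)
            + 3 * math.log(p3) + 1 * math.log(1 - p3)
            + 10 * math.log(pS) + 2 * math.log(1 - pS))

# The component-only MLEs fit far better than a pessimistic guess of 0.5 each
print(log_lik(0.8, 7 / 9, 0.75) > log_lik(0.5, 0.5, 0.5))  # → True
```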




System Representations

[Diagrams: the same three-component system (A, B, C) represented as a Reliability Block Diagram, a Fault Tree, and a Bayesian Network]



Example
Multilevel Pass/Fail Data, Bayesian Network, Reliability Changing with Time

[Diagram: Bayesian network over components A, B, and C]

Age     A       B       C
 1    19/19   35/35   15/16
 2      -     47/48   14/14
 3    16/19   37/38   12/14
 4    12/12     -       -
 5      -     44/45   13/14
 6      -     35/37   11/12
 7     9/13     -       -
 8      -     33/42    5/16
 9      -       -     12/19
10     3/10   30/39    8/14



System Representations
Developing the model for a complex system



Extensions

This basic approach has been extended in many directions.

• Multiple diagnostics measured at the components (Anderson-Cook et al. (2008))

• Binary, lifetime, or degradation data at components and system (Guo and Wilson (2013))

• Parallel line of research focused on developing system models: elicitation, software, representations, model checking (e.g., Wilson et al. (2007), Anderson-Cook (2008), Zhang and Wilson (2016))

• Prior distributions (to capture knowledge about parameters “before” this experiment)
  • “Naive” specifications can lead to surprisingly bad results
  • Variable selection priors



Favorite Books

Popular Science

• S. McGrayne (2011). The Theory that Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy. Yale University Press.

Bayesian Reliability

• H. Martz, R. Waller (1982). Bayesian Reliability Analysis. John Wiley& Sons.

• N. Singpurwalla (2006). Reliability and Risk: A Bayesian Perspective.Wiley.

• M. Hamada, A. Wilson, C. S. Reese, H. Martz (2008). Bayesian Reliability. Springer.



Favorite Books

Introductory Bayesian Methods

• B. Carlin, T. Louis (2008). Bayesian Methods for Data Analysis, 3rd Edition. Chapman and Hall.

• P. Hoff (2009). A First Course in Bayesian Statistical Methods. Springer.

Eliciting Prior Distributions

• M. Meyer, J. Booker (2001). Eliciting and Analyzing Expert Judgment: A Practical Guide. ASA/SIAM.

• A. O’Hagan et al. (2006). Uncertain Judgements: Eliciting Experts’ Probabilities. Wiley.



Favorite Books

Bayesian Computing

• D. Gamerman, H. Lopes (2006). Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference, 2nd Edition. Chapman & Hall/CRC.

• J. Albert (2009). Bayesian Computation with R, 2nd Edition. Springer.

• C. Robert, G. Casella (2010). Introducing Monte Carlo Methods with R. Springer.

• C. Robert, G. Casella (2010). Monte Carlo Statistical Methods. Springer.



Bayesian Reliability Articles I

• C. Anderson-Cook (2008). Evaluating the Series or Parallel Structure Assumption for System Reliability. Quality Engineering 21(1): 88-95.

• C. M. Anderson-Cook, S. Crowder, A. V. Huzurbazar, J. Lorio, J. Ringland, A. G. Wilson (2011). Quantifying reliability uncertainty from catastrophic and margin defects: A proof of concept. Reliability Engineering and System Safety 96: 1063-1075.

• C. Anderson-Cook, T. Graves, M. Hamada, N. Hengartner, V. Johnson, C. S. Reese, A. Wilson (2007). Bayesian Stockpile Reliability Methodology for Complex Systems with Application to a Munitions Stockpile. Journal of the Military Operations Research Society 12(2): 25-38.

• C. Anderson-Cook, T. Graves, N. Hengartner, R. Klamann, A. Koehler, A. Wilson, G. Anderson, G. Lopez (2008). Reliability Modeling Using Both System Test and Quality Assurance Data. Journal of the Military Operations Research Society 13: 5-18.



Bayesian Reliability Articles II

• J. Chapman, M. Morris, C. Anderson-Cook (2012). Computationally Efficient Comparison of Experimental Designs for System Reliability Studies with Binomial Data. Technometrics 54(4): 410-424.

• T. Graves, M. Hamada, R. Klamann, A. Koehler, H. Martz (2008). Using simultaneous higher-level and partial lower-level data in reliability assessments. Reliability Engineering and System Safety 93(8): 1273-1279.

• J. Guo, A. Wilson (2013). Bayesian Methods for Estimating the Reliability of Complex Systems using Heterogeneous Multilevel Information. Technometrics 55(4): 461-472.

• M. Hamada, H. F. Martz, C. S. Reese, T. Graves, V. Johnson, A. G. Wilson (2004). A fully Bayesian approach for combining multilevel failure information in fault tree quantification and optimal follow-on resource allocation. Reliability Engineering and System Safety 86: 297-305.



Bayesian Reliability Articles III

• M. Hamada, A. Wilson, B. Weaver, R. Griffiths, H. Martz (2014). Bayesian Binomial Assurance Tests for System Reliability Using Component Data. Journal of Quality Technology 46(1): 24-32.

• V. Johnson, T. Graves, M. Hamada, and C. S. Reese (2003). A hierarchical model for estimating the reliability of complex systems. In Bayesian Statistics 7, eds. Bernardo, J., Bayarri, M., Berger, J., Dawid, A., Heckerman, D., Smith, A., and West, M., Oxford University Press, pp. 199-213.

• V. Johnson, A. Moosman, P. Cotter (2005). A Hierarchical Model for Estimating the Early Reliability of Complex Systems. IEEE Transactions on Reliability 54(2): 224-231.

• M. Li, W. Meeker (2014). Application of Bayesian Methods in Reliability Data Analyses. Journal of Quality Technology 46(1): 1-23.

• W. Meeker, Y. Hong (2014). Reliability Meets Big Data: Opportunities and Challenges. Quality Engineering 26(1): 102-116.



Bayesian Reliability Articles IV

• C. S. Reese, A. Wilson, M. Hamada, H. Martz, K. Ryan (2004). Integrated Analysis of Computer and Physical Experiments. Technometrics 46(2): 153-164.

• C. S. Reese, A. Wilson, J. Guo, M. Hamada, V. Johnson (2011). A Hierarchical Model for Estimating Reliability from Weapon System Surveillance Data. Journal of Quality Technology 43(2): 127-141.

• A. Wilson, C. Anderson-Cook, A. Huzurbazar (2011). A Case Study for Quantifying System Reliability and Uncertainty. Reliability Engineering and System Safety 96(9): 1076-1084.

• A. Wilson, T. Graves, M. Hamada, C. S. Reese (2006). Advances in Data Combination, Analysis, and Collection for System Reliability Assessment. Statistical Science 21(4): 514-531.

• A. Wilson, A. Huzurbazar (2007). Bayesian Networks for Multilevel System Reliability. Reliability Engineering and Systems Safety 92(10): 1413-1420.



Bayesian Reliability Articles V

• A. Wilson, L. McNamara, G. Wilson (2007). Information integration for complex systems. Reliability Engineering and System Safety 92: 121-130.

• X. Zhang, A. Wilson (2016). System Reliability and Component Importance under Dependence: A Copula Approach. To appear in Technometrics.

