Precision nuclear physics


Precision nuclear physics

Observable calculations are becoming increasingly precise. What are the theory errors?

Hamiltonian → Calculation → Observable

[Figure: ground-state energies for even oxygen isotopes, calculation vs. experiment; Hergert et al., PRL 110, 242501 (2013)]

Chiral effective field theory (EFT) is used to generate microscopic nuclear Hamiltonians and currents (note: plural!). Many versions (scales/schemes) are on the market.

Sources of uncertainty in EFT predictions

• Hamiltonian: truncation errors, regulator artifacts
• Low-energy constants: errors from fitting to data
• Numerics: many-body methods, basis truncation, anything else
→ Full uncertainty on the prediction

Bayesian methods treat these sources on an equal footing (a toy propagation sketch follows below).
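
The slides do not spell out how the error budget is combined; as a minimal illustration (not the BUQEYE machinery), the sketch below propagates hypothetical Gaussian estimates for the truncation, low-energy-constant, and many-body errors to a single prediction by Monte Carlo. All numerical values are placeholders.

```python
# Minimal sketch: combining independent error sources for one predicted
# observable by Monte Carlo propagation. Every number here is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

central = -168.0              # hypothetical central prediction (MeV)

# Hypothetical 1-sigma estimates for each source of uncertainty
sigma_truncation = 2.0        # EFT truncation of the Hamiltonian
sigma_lec = 1.0               # propagated low-energy-constant errors
sigma_many_body = 0.5         # many-body method / basis truncation

# Draw each source from its own pdf and add the shifts to the central value
samples = (central
           + rng.normal(0.0, sigma_truncation, n_samples)
           + rng.normal(0.0, sigma_lec, n_samples)
           + rng.normal(0.0, sigma_many_body, n_samples))

print(f"prediction: {samples.mean():.1f} +/- {samples.std():.1f} MeV")
# For independent Gaussian sources this reproduces the quadrature sum:
print(f"quadrature: {np.hypot(np.hypot(sigma_truncation, sigma_lec), sigma_many_body):.2f} MeV")
```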

[Figure: prior, posterior, and true value for parameters a0 and a1 (BUQEYE Collaboration)]

Goal: full uncertainty quantification (UQ) for effective field theory (EFT) predictions using Bayesian statistics

Some BUQEYE publications on UQ for EFT

• “A recipe for EFT uncertainty quantification in nuclear physics”, J. Phys. G 42, 034028 (2015)
• “Quantifying truncation errors in effective field theory”, Phys. Rev. C 92, 024005 (2015)
• “Bayesian parameter estimation for effective field theories”, J. Phys. G 43, 074001 (2016)
• “Bayesian truncation errors in chiral EFT: nucleon-nucleon observables”, Phys. Rev. C 96, 024003 (2017) [Editors’ Suggestion]

Bayesian Uncertainty Quantification: Errors for Your EFT

Bayesian interpretation of probability

Repeatable situations:
• Rolling dice
• Repeatable measurements
• Beta decay (image credit: The 2015 Long Range Plan for Nuclear Science)
“Based on a large number of observations of the event, here is the probability.”

Unrepeatable situations:
• Probability that it will rain in Washington, D.C. tomorrow
• Properties of the Universe (we have exactly one sample!)
• Probability of a parameter
“From the best of our knowledge and previous measurements: the probability lies in this range.”

Formulation of probability as “degree of belief”

Great introduction for physicists: “Bayes in the Sky” [arXiv:0803.4089]

Why Bayes for theory errors?

Frequentist approach: long-run relative frequency
• Outcomes of experiments treated as random variables
• Predict probabilities of observing various outcomes
• Well adapted to quantities that fluctuate statistically
• But systematic errors are problematic

Bayesian probabilities: the pdf is a measure of the state of knowledge
• Ideal for systematic/theory errors that do not behave stochastically
• Assumptions and expectations encoded in prior pdfs
• Make explicit what is usually implicit: assumptions may be applied consistently, tested, and modified in light of new information

[Figure: pdf for the uncertainty under different prior assumptions about higher-order corrections; Observable(x) vs. x with 68% bands]

Widespread application of Bayesian approaches in theoretical physics
• Interpretation of dark-matter searches; structure determination in condensed matter physics; constrained curve-fitting in lattice QCD
• Is supersymmetry a “natural” approach to the hierarchy problem?
• Estimating uncertainties in perturbative QCD (e.g., parton distributions)

Joint probability for theory parameters

pr(x|y) is read: “the probability that x is true given y”

Example: we want to “fit” parameters, so the object of interest here is

  pr(a | D, k, kmax, I)

where
• a = {a0, a1, …, ak}: vector of parameters
• D: data
• k: truncation order; kmax: omitted orders
• I: any other information

Bayesian rules of probability as principles of logic

1: Sum rule
If the set {xi} is exhaustive and exclusive,

  Σ_i pr(xi|I) = 1   (discrete)      ∫ dx pr(x|I) = 1   (continuous)

• cf. complete and orthonormal
• implies marginalization (cf. inserting a complete set of states):

  pr(x|I) = Σ_j pr(x, yj|I)      pr(x|I) = ∫ dy pr(x, y|I)
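
As a toy numerical check of the sum rule and marginalization (an illustration, not from the slides), the snippet below builds a random discrete joint table pr(xi, yj|I) and verifies that summing over y gives a normalized marginal for x.

```python
# Toy check of the sum rule and marginalization for a discrete joint pdf.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical joint probability table pr(x_i, y_j | I), normalized to 1
joint = rng.random((4, 5))
joint /= joint.sum()

# Sum rule: the exhaustive, exclusive set of outcomes sums to 1
assert np.isclose(joint.sum(), 1.0)

# Marginalization: pr(x_i | I) = sum_j pr(x_i, y_j | I)
marginal_x = joint.sum(axis=1)
assert np.isclose(marginal_x.sum(), 1.0)
print("pr(x_i | I) =", np.round(marginal_x, 3))
```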

2: Product rule
Expanding a joint probability of x and y:

  pr(x, y|I) = pr(x|y, I) pr(y|I) = pr(y|x, I) pr(x|I)

• If x and y are mutually independent, pr(x|y, I) = pr(x|I), so pr(x, y|I) → pr(x|I) × pr(y|I)
• Rearranging the equality gives Bayes’ theorem:

  pr(x|y, I) = pr(y|x, I) pr(x|I) / pr(y|I)
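
A companion toy check of the product rule and Bayes' theorem (again with a hypothetical discrete joint table): form the conditionals from the joint and verify the rearrangement numerically.

```python
# Toy check of the product rule and Bayes' theorem on a discrete joint pdf.
import numpy as np

rng = np.random.default_rng(2)

joint = rng.random((3, 4))      # hypothetical pr(x_i, y_j | I)
joint /= joint.sum()

pr_x = joint.sum(axis=1)        # pr(x|I)
pr_y = joint.sum(axis=0)        # pr(y|I)
pr_x_given_y = joint / pr_y     # pr(x|y,I) = pr(x,y|I) / pr(y|I)
pr_y_given_x = joint.T / pr_x   # pr(y|x,I) = pr(x,y|I) / pr(x|I)

# Bayes' theorem: pr(x|y,I) = pr(y|x,I) pr(x|I) / pr(y|I)
bayes = (pr_y_given_x.T * pr_x[:, None]) / pr_y[None, :]
assert np.allclose(pr_x_given_y, bayes)
print("Bayes' theorem verified on the toy table.")
```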

For the EFT parameters, Bayes’ theorem gives

  pr(a|D, k, kmax) ∝ pr(D|a, k, kmax) × pr(a|k, kmax)
       Posterior          Likelihood        Prior

[Figure: 1D posterior projections of a1 and a3 for the naturalness prior, showing the interaction between data and prior: for one coefficient the likelihood overwhelms the prior, for the other the prior suppresses an unconstrained likelihood]
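
To make this interplay concrete, here is a minimal sketch of posterior ∝ likelihood × prior for a toy polynomial model with a Gaussian "naturalness" prior on its coefficients. The model, data range, noise level, and prior width are hypothetical stand-ins, not the analyses from the publications above; the point is only that a coefficient the data pin down ends up likelihood-dominated, while a poorly constrained one stays near the prior width.

```python
# Minimal sketch: Gaussian naturalness prior a_k ~ N(0, abar^2) combined with a
# Gaussian likelihood for a toy cubic model (all inputs are hypothetical).
import numpy as np

rng = np.random.default_rng(3)

# Toy data from a cubic with natural-sized coefficients
a_true = np.array([0.5, 1.2, -0.8, 0.4])
x = np.linspace(0.0, 0.3, 10)              # small range: high orders poorly constrained
X = np.vander(x, N=4, increasing=True)     # columns 1, x, x^2, x^3
sigma = 0.01                               # hypothetical data error
data = X @ a_true + rng.normal(0.0, sigma, x.size)

abar = 1.0                                 # naturalness prior width

# Conjugate Gaussian result: posterior covariance and mean
cov = np.linalg.inv(X.T @ X / sigma**2 + np.eye(4) / abar**2)
mean = cov @ (X.T @ data) / sigma**2

for k, (m, s) in enumerate(zip(mean, np.sqrt(np.diag(cov)))):
    print(f"a{k}: {m:+.2f} +/- {s:.2f}   (prior: 0.00 +/- {abar:.2f})")
# Low-order coefficients come out much narrower than the prior (the likelihood
# dominates), while a3 stays near the prior width (the prior suppresses the
# poorly constrained direction).
```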