
ACTA UNIVERSITATIS UPSALIENSIS
UPPSALA 2015

Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1315

Nuclear data uncertainty quantification and data assimilation for a lead-cooled fast reactor

Using integral experiments for improved accuracy

ERWIN ALHASSAN

ISSN 1651-6214
ISBN 978-91-554-9407-0
urn:nbn:se:uu:diva-265502


Dissertation presented at Uppsala University to be publicly examined in Polhemsalen, Ångströmlaboratoriet, Lägerhyddsvägen 1, Uppsala, Thursday, 17 December 2015 at 09:15 for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Dr. Oscar Cabellos (OECD Nuclear Energy Agency (NEA)).

Abstract
Alhassan, E. 2015. Nuclear data uncertainty quantification and data assimilation for a lead-cooled fast reactor. Using integral experiments for improved accuracy. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1315. 85 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9407-0.

For the successful deployment of advanced nuclear systems and optimization of current reactor designs, high quality nuclear data are required. Before nuclear data can be used in applications they must first be evaluated, tested and validated against a set of integral experiments, and then converted into formats usable for applications. The evaluation process in the past was usually done by using differential experimental data which was then complemented with nuclear model calculations. This trend is fast changing due to the increase in computational power and tremendous improvements in nuclear reaction models over the last decade. Since these models have uncertain inputs, they are normally calibrated using experimental data. However, these experiments are themselves not exact. Therefore, the calculated quantities of model codes such as cross sections and angular distributions contain uncertainties. Since nuclear data are used in reactor transport codes as input for simulations, the output of transport codes contains uncertainties due to these data as well. Quantifying these uncertainties is important for setting safety margins; for providing confidence in the interpretation of results; and for deciding where additional efforts are needed to reduce these uncertainties. Also, regulatory bodies are now moving away from conservative evaluations to best estimate calculations that are accompanied by uncertainty evaluations.

In this work, the Total Monte Carlo (TMC) method was applied to study the impact of nuclear data uncertainties from basic physics to macroscopic reactor parameters for the European Lead Cooled Training Reactor (ELECTRA). As part of the work, nuclear data uncertainties of actinides in the fuel, lead isotopes within the coolant, and some structural materials have been investigated. In the case of the lead coolant it was observed that the uncertainties in keff and the coolant void worth (except in the case of 204Pb) were large, with the most significant contribution coming from 208Pb. New 208Pb and 206Pb random nuclear data libraries with realistic central values have been produced as part of this work. Also, a correlation based sensitivity method was used in this work to determine parameter-cross section correlations for different isotopes and energy groups.

Furthermore, an accept/reject method and a method of assigning file weights based on the likelihood function are proposed for uncertainty reduction using criticality benchmark experiments within the TMC method. It was observed from the study that a significant reduction in nuclear data uncertainty was obtained for some isotopes for ELECTRA after incorporating integral benchmark information. As a further objective of this thesis, a method for selecting benchmarks for code validation for specific reactor applications was developed and applied to the ELECTRA reactor. Finally, a method for combining differential experiments and integral benchmark data for nuclear data adjustments is proposed and applied for the adjustment of neutron induced 208Pb nuclear data in the fast energy region.

Keywords: Total Monte Carlo, ELECTRA, nuclear data, uncertainty propagation, integral experiments, nuclear data adjustment, uncertainty reduction

Erwin Alhassan, Department of Physics and Astronomy, Applied Nuclear Physics, Box 516, Uppsala University, SE-751 20 Uppsala, Sweden.

© Erwin Alhassan 2015

ISSN 1651-6214
ISBN 978-91-554-9407-0
urn:nbn:se:uu:diva-265502 (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-265502)


Dedicated to Mabel & Emily


List of papers

This thesis is based on the following papers, which are referred to in the text by their Roman numerals.

I E. Alhassan, H. Sjöstrand, J. Duan, C. Gustavsson, A.J. Koning, S. Pomp, D. Rochman, M. Österlund. Combining Total Monte Carlo and Benchmarks for nuclear data uncertainty propagation on a Lead Fast Reactor's Safety. Nuclear Data Sheets (2014) 118, 542-544.

My contribution: I wrote the scripts and I performed the simulations, the analyses and interpretation of results. I also wrote the paper.

II E. Alhassan, H. Sjöstrand, P. Helgesson, A.J. Koning, M. Österlund, S. Pomp, D. Rochman. Uncertainty and correlation analysis of lead nuclear data on reactor parameters for the European Lead Cooled Training Reactor. Annals of Nuclear Energy (2015) 75, 26-37.

My contribution: I wrote most of the scripts and I performed the simulations, the analyses and interpretation of results. I also wrote the paper.

III H. Sjöstrand, E. Alhassan, J. Duan, C. Gustavsson, A.J. Koning, S. Pomp, D. Rochman, M. Österlund. Propagation of nuclear data uncertainties for ELECTRA burn-up calculations. Nuclear Data Sheets (2014) 118, 527-530.

My contribution: I wrote part of the scripts, performed part of the simulations and took part in the writing of the paper.

IV E. Alhassan, H. Sjöstrand, P. Helgesson, M. Österlund, S. Pomp, A.J. Koning, D. Rochman. On the use of integral experiments for uncertainty reduction of reactor macroscopic parameters within the TMC methodology. In review process after revision, Progress in Nuclear Energy, 2015.

My contribution: I developed the method; wrote most of the scripts; and also performed the simulations, the analyses and the interpretation of the results. I also wrote the paper.


V E. Alhassan, H. Sjöstrand, P. Helgesson, M. Österlund, S. Pomp, A.J. Koning, D. Rochman. Selecting benchmarks for reactor simulations: an application to a Lead Fast Reactor. Submitted to Annals of Nuclear Energy, 2015.

My contribution: I developed the method; wrote the scripts; and also performed the simulations, the analyses and the interpretation of the results. I also wrote the paper.

Reprints were made with permission from the publishers.


Other papers not included in this thesis

List of papers related to this thesis but not included in the comprehensive summary. I am first author or co-author of all listed papers.

1. E. Alhassan, H. Sjöstrand, J. Duan, P. Helgesson, S. Pomp, M. Österlund, D. Rochman, A.J. Koning. Selecting benchmarks for reactor calculations. In proc. PHYSOR 2014 International Conference, Kyoto, Japan, 28 September - 3 October 2014.

2. H. Sjöstrand, E. Alhassan, S. Conroy, J. Duan, C. Hellesen, S. Pomp, M. Österlund, A.J. Koning, D. Rochman. Total Monte Carlo evaluation for dose calculations. Radiation Protection Dosimetry (2013) 161 (1-4), 312-315.

3. R. Della, E. Alhassan, N.A. Adoo, C.Y. Bansah, E.H.K. Akaho, B.J.B. Nyarko. Stability analysis of the Ghana Research Reactor-1 (GHARR-1). Energy Conversion and Management (2013) 74, 587-593.

4. C.Y. Bansah, E.H.K. Akaho, A. Ayensu, N.A. Adoo, V.Y. Agbodemegbe, E. Alhassan, R. Della. Theoretical model for predicting the relative timings of potential failures in steam generator tubes of a PWR during a severe accident. Annals of Nuclear Energy (2013) 59, 10-15.

5. J. Duan, S. Pomp, H. Sjöstrand, E. Alhassan, C. Gustavsson, M. Österlund, D. Rochman, A.J. Koning. Uncertainty Study of Nuclear Model Parameters for the n+56Fe Reactions in the Fast Neutron Region Below 20 MeV. Nuclear Data Sheets (2014) 118, 346-348.

6. P. Helgesson, D. Rochman, H. Sjöstrand, E. Alhassan, A.J. Koning. UO2 vs MOX: propagated nuclear data uncertainty for keff, with burnup. Nuclear Science and Engineering (2014) 3, 321-336.

7. E. Alhassan, H. Sjöstrand, J. Duan, C. Gustavsson, A.J. Koning, S. Pomp, D. Rochman, M. Österlund. Uncertainty analysis of Lead cross sections on reactor safety for ELECTRA. In proc. SNA + MC 2013, 02401 (2014), EDP Sciences.

8. E.K. Boafo, E. Alhassan, E.H.K. Akaho. Utilizing the burnup capability in MCNPX to perform depletion analysis of an MNSR fuel. Annals of Nuclear Energy (2014) 73, 478-483.

9. P. Helgesson, H. Sjöstrand, A.J. Koning, D. Rochman, E. Alhassan, S. Pomp. Incorporating experimental information in the TMC methodology using file weights. Nuclear Data Sheets (2015) 123, 214-219.


10. S. Pomp, A. Al-Adili, E. Alhassan, C. Gustavsson, P. Helgesson, C. Hellesen, A.J. Koning, M. Lantz, M. Österlund, D. Rochman, V. Simutkin, H. Sjöstrand, A. Solders. Experiments and theoretical data for studying the impact of fission yield uncertainties on the nuclear fuel cycle with TALYS/GEF and the TMC method. Nuclear Data Sheets (2015) 123, 220-224.

11. A. Al-Adili, E. Alhassan, C. Gustavsson, P. Helgesson, K. Jansson, A.J. Koning, M. Lantz, A. Mattera, A.V. Prokofiev, V. Rakopoulos, H. Sjöstrand, A. Solders, D. Tarrio, M. Österlund, S. Pomp. Fission activities of the nuclear reactions group in Uppsala. In proc., Scientific Workshop on Nuclear Fission Dynamics and the Emission of Prompt Neutrons and Gamma Rays, THEORY-3. Physics Procedia (2015) 64, 145-149.

12. P. Helgesson, H. Sjöstrand, A.J. Koning, J. Ryden, D. Rochman, E. Alhassan, S. Pomp. Sampling of systematic errors to compute likelihood weights in nuclear data uncertainty propagation. In Press, Nuclear Instruments and Methods A, 2015.


Contents

1 Introduction
  1.1 Background
  1.2 Nuclear data needs
  1.3 Outline of thesis
2 Nuclear data
  2.1 Experimental data
    2.1.1 Differential data
    2.1.2 Integral data (benchmarks)
  2.2 Model calculations
  2.3 Nuclear data evaluation
  2.4 Nuclear data libraries
  2.5 Nuclear data definitions
  2.6 Resonance parameters
  2.7 Covariance data
3 Uncertainty Quantification
  3.1 Sources of uncertainties
  3.2 Statistical estimation
  3.3 Uncertainty quantification approaches
    3.3.1 Deterministic methods
    3.3.2 Stochastic methods
4 Methodology
  4.1 Simulation tools
    4.1.1 TALYS based code system
    4.1.2 Processing codes
    4.1.3 Neutron transport codes
  4.2 Model calculations with TALYS
  4.3 Nuclear data uncertainty calculation
    4.3.1 Global uncertainty analyses
    4.3.2 Local uncertainty analyses - Partial TMC
  4.4 Reactor Physics
    4.4.1 Reactor description
    4.4.2 Reactor neutronic parameters
    4.4.3 Convergence of the first 4 moments
  4.5 Nuclear data uncertainty reduction
    4.5.1 Binary Accept/Reject method
    4.5.2 Reducing uncertainty using file weights
    4.5.3 Combined benchmark uncertainty
  4.6 Benchmark selection method
    4.6.1 Benchmark cases
  4.7 Correlation based sensitivity analysis
  4.8 Burnup calculations
  4.9 Nuclear data adjustments
    4.9.1 Production of random nuclear data libraries
    4.9.2 Nuclear data adjustment of 208Pb in the fast region
5 Results and Discussions
  5.1 Model calculations
  5.2 Global uncertainties
  5.3 Partial variations
  5.4 Nuclear data uncertainty reduction
  5.5 Similarity Index
  5.6 Correlation based sensitivity measure
  5.7 Nuclear data adjustments
6 Conclusion and outlook
7 Sammanfattning
Acknowledgment
References


1. Introduction

’If you would be a real seeker after truth, it is necessary that at least once in your life you doubt, as far as possible, all things.’

- René Descartes

1.1 Background

Today, about 1.4 billion people globally still have no access to electricity, and an additional one billion only have access to an unreliable supply of electricity [1]. As a result of population growth and economic development, especially in developing countries, global energy demands are projected to rise [2]. With increasing energy consumption worldwide, nuclear power is expected to play an increasing role in providing the energy needs of the future [3]. Due to public acceptance issues, the next generations of nuclear power reactors must not only be economically competitive with other energy sources but must also address waste management, proliferation and safety concerns. The GEN-IV International Forum (GIF) was therefore initiated with very challenging technology goals, which include sustainability, economics, safety, reliability, proliferation resistance and physical protection, as reported in the GEN-IV Technology Roadmap [4]. The six reactor concepts identified by the GEN-IV International Forum as the most promising advanced reactor systems are [4]: the gas-cooled fast reactor (GFR), the lead-cooled fast reactor (LFR), the molten salt reactor (MSR), the sodium fast reactor (SFR), the very-high-temperature reactor (VHTR) and the supercritical water-cooled reactor (SCWR).

The Lead Fast Reactor concept was ranked top in sustainability by the GIF because it uses a closed fuel cycle for the conversion of fertile isotopes, and in proliferation resistance and physical protection because of its long-life core [5]. Its safety features are enhanced by the choice of a relatively inert coolant which is capable of retaining hazardous radionuclides such as iodine and cesium even in the event of a severe accident. As part of GEN-IV development in Sweden, the GENIUS project, a collaboration between Chalmers University of Technology, the Royal Institute of Technology and Uppsala University, was initiated [6]. The development of a lead-cooled fast reactor called ELECTRA (European Lead-Cooled Training Reactor), which will permit full recycling of plutonium and americium in the core, was proposed within the project.

For the design and successful implementation of GEN-IV reactor concepts, high quality and accurate nuclear data are required. For several decades, reactor design has been supported by computer simulations for the investigation of reactor behavior under both steady state and transient conditions. The physical models implemented in these simulation codes depend on the underlying nuclear data used, implying that uncertainties in nuclear data are propagated to the outputs of these codes. Before nuclear data can be used in applications, they are first evaluated, benchmarked against integral experiments and then converted into formats usable for applications. The evaluation of neutron induced reactions usually involves the combination of nuclear reaction models with experimental data (nuclear reaction models are adjusted to reproduce experimental data). However, these experiments are themselves not exact and therefore the calculated quantities of model codes, such as cross sections and angular distributions, contain uncertainties.

In the past, nuclear data uncertainties within the Reactor Physics community were mostly propagated using deterministic methods. With this approach, the local sensitivities of a particular response parameter, such as keff, to variations in input parameters (nuclear data in our case) are determined using generalized perturbation theory [7]. Once the sensitivity coefficient matrix has been determined, it is combined with the covariance matrix to obtain the corresponding uncertainty on any response parameter of interest. For example, sensitivity profiles obtained by using the so-called perturbation card in MCNP [8] are combined with covariance data using the SUSD code [9] to obtain the nuclear data uncertainty on the reactor response parameter of interest. This approach relies on the assumption that the output parameter depends linearly on the variation in each input parameter [10]. With the increase in computational power, however, Monte Carlo methods are now possible. At the Nuclear Research and Consultancy Group (NRG), Petten, The Netherlands, a method called ’Total Monte Carlo’ (TMC) was developed for nuclear data evaluation and uncertainty propagation. An advantage of this approach is that it eliminates the use of covariances and the assumption of linearity used in the perturbation approach. Quantifying nuclear data uncertainties is important for reactor safety assessment, reliability and risk assessment, and for deciding where additional efforts need to be taken to reduce these uncertainties.
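The deterministic propagation just described can be summarized by the standard first-order ’sandwich’ formula (generic notation, not taken verbatim from this thesis): if $S$ is the vector of sensitivity coefficients of a response $R$ to the nuclear data $\sigma$ and $C_{\sigma}$ is the nuclear data covariance matrix, then

$$\mathrm{var}(R) \approx S^{T} C_{\sigma} S, \qquad S_i = \frac{\partial R}{\partial \sigma_i},$$

which makes explicit both the need for covariance data and the linearity assumption that the TMC approach avoids.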

In this work, the TMC method was applied to study the impact of nuclear data uncertainties from basic nuclear physics to macroscopic reactor parameters for the ELECTRA reactor with respect to major and minor actinides, structural materials and the coolant. This work is important because, if all uncertainties (including nuclear data uncertainties) are not taken into account in reactor design, safety margins could be under-assigned to key reactor safety parameters, which may lead to severe accidents. As part of the work, the impact of nuclear data uncertainties of some of the actinides in the fuel (PAPER I and IV), lead isotopes in the coolant (PAPER II) and some structural materials at beginning of life (BOL) have been estimated. In addition, the propagation of 239Pu transport data uncertainties in reactor burnup calculations was carried out in PAPER III. A further objective has been to develop methodologies for selecting benchmarks that can help in reactor code validation (PAPER V). Also, methods for reducing nuclear data uncertainties using integral benchmarks have been developed and are presented in more detail in PAPER I and IV. Finally, a method for combining differential experiments and integral benchmark data for data assimilation and nuclear data adjustments using file weights based on the likelihood function is proposed in section 4.9. The proposed method is applied for the adjustment of neutron induced reactions of 208Pb in the fast energy region and the preliminary results are presented in section 5.7.
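To make the TMC workflow concrete, the following minimal Python sketch shows the basic loop: each iteration uses one random nuclear data file, runs a transport calculation of the reactor model, and stores the resulting keff; the spread of the collected values then reflects the nuclear data uncertainty. The file names, the transport_code call and the keff.out parsing are hypothetical placeholders, not the actual scripts used in this work.

import statistics
import subprocess

def run_transport(random_file, case="electra.inp"):
    # Run the transport code with one random nuclear data file (hypothetical call)
    # and read back the calculated keff from an assumed one-number output file.
    subprocess.run(["transport_code", "-i", case, "-xs", random_file], check=True)
    with open("keff.out") as f:
        return float(f.read())

# One random ENDF/ACE-formatted file per TMC sample (placeholder names).
random_files = ["Pb208_random_%04d.ace" % i for i in range(300)]
keff_values = [run_transport(f) for f in random_files]

mean_keff = statistics.mean(keff_values)
observed_spread = statistics.stdev(keff_values)
print("mean keff = %.5f, observed spread = %.5f" % (mean_keff, observed_spread))

In the full TMC method, the statistical uncertainty of the transport calculation itself is subtracted in quadrature from this observed spread to isolate the nuclear data component.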

1.2 Nuclear data needs

For the successful deployment of advanced nuclear systems and for the optimization of current reactor designs, accurate information about the nuclear reactions taking place within the reactor core is required. Using sensitivity and uncertainty analyses, a list of priorities for the improvement of nuclear data has been determined [11, 12], and this list includes both lead and plutonium isotopes.

From the application side, a preliminary attempt to assign target uncertainties for some GEN-IV systems has been carried out by the Organisation for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) expert group [11]. In Ref. [11], the following target uncertainties (1-sigma, given in brackets) have been identified for fast reactors: multiplication factor at Beginning of Life (BOL) (0.3%), power peaking factor (2%) and reactivity coefficients at BOL (7%), while nuclide densities at the end of life should be within 10% uncertainty. To fulfil these targets, the scientific community must develop both theoretical nuclear physics models and uncertainty quantification and uncertainty reduction methods. In parallel, the initiation of high-quality differential experimental measurements is necessary.

To achieve the goal of further reducing nuclear data uncertainties in macroscopic reactor parameters, the reduction of nuclear data uncertainties using integral benchmark data, and nuclear data adjustments using both differential and integral data, are presented as part of this work.

1.3 Outline of thesis

The thesis is structured as follows: the background, nuclear data needs for advanced nuclear systems and the outline of the thesis are presented in Chapter 1. In Chapter 2, nuclear data definitions, differential and integral benchmark experimental data, the nuclear data evaluation process, which involves the combination of differential experimental data and model calculations, as well as nuclear covariance data are described. Sources of uncertainties, statistical estimation, uncertainty quantification approaches and uncertainty analysis in Reactor Physics modeling are given in Chapter 3. In Chapter 4, the simulation tools used in this work and the application of the TMC method for uncertainty quantification of macroscopic reactor parameters are presented for global and local (partial) variations of nuclear data. Also, methods developed for benchmark selection, for nuclear data uncertainty reduction, and for nuclear data adjustments in the fast energy region are described. Furthermore, model calculations and the generation of random nuclear data using the TALYS based code system (T6) are presented. In Chapter 5, the results obtained are presented and discussed. Chapter 6 contains the conclusion and the outlook for future work and, finally, a summary of the thesis in Swedish is presented in Chapter 7.

2. Nuclear data

’It is nice to know that the computer understands the problem. But I would like to understand it too.’

- Eugene P. Wigner

Nuclear data are physical parameters that describe the properties of atomic nuclei and the fundamental physical relationships that govern their interactions [13]. These include atomic data, nuclear reaction data, thermal scattering data, radioactive decay data and fission yield data. The data are important for the development of theoretical nuclear models and for applications involving radiation and nuclear technology [14]. Because of the wide variety of applications, nuclear data can be divided into three types [15]. The first is transport data, which describe the interactions of various projectiles, such as neutrons and protons, with a target nucleus. Transport data are usually associated with, e.g., cross sections and angular distributions [15]. These data are utilized for both transport and depletion calculations. The second is fission yield data, which are, e.g., used for the calculation of waste disposal inventories and decay heat, for depletion calculations and in the calculation of beta and gamma ray spectra of fission product inventories [16]. The third is decay data, which describe, among other things, nuclear levels, half-lives, Q-values and decay schemes [13, 15]. These data are used for, e.g., dosimetry calculations and for estimating decay heat in nuclear repositories.

This chapter presents a description of experimental data, which include both differential and integral data; model calculations; nuclear data definitions; as well as resonance parameters. Also, the nuclear data evaluation process, nuclear data libraries containing large sets of nuclear data, and covariance data in the ENDF formatted libraries are presented.

2.1 Experimental data

Experimental data can be divided into differential and integral data. Differential data are microscopic quantities that describe the properties of nuclei and their interactions with particles, while integral data mostly reflect the global behavior of a macroscopic system. Experimental data are needed for fine tuning nuclear reaction models and for nuclear data assimilation. In the following subsections, differential experimental data and the integral benchmarks used for nuclear data evaluation are presented.


2.1.1 Differential data

Microscopic quantities such as cross sections, fission yields and angular distributions are measured at a large number of experimental facilities, such as accelerators, world-wide. Typically, these data are measured as a function of the energy of an incoming particle, e.g., a neutron. These data are collected, compiled and stored in the EXFOR database (Exchange among nuclear reaction data centers) [17]. The EXFOR database is maintained by the International Network of Nuclear Reaction Data Centres (NRDC) and coordinated by the Nuclear Data Section of the International Atomic Energy Agency (IAEA). The database contains experimental and bibliographic information on experiments for neutron, charged particle and photon-induced reactions on a wide range of isotopes and incident energies [18]. In Fig. 2.1, an example of differential experimental data for the 208Pb(n,2n) cross section as a function of incident neutron energy is presented. Usually, when discrepancies exist between similar experiments for the same reaction, a comparison of the original publications containing the measurements is carried out. The measurements are sometimes compared with theoretical model calculations [18].

Figure 2.1. An example of experimental data for the 208Pb(n,2n) cross section as a function of incident neutron energy. The data were obtained from the EXFOR database [17].

Accurate experimental data are important for nuclear data evaluation and the calibration of nuclear reaction model codes. For these reasons, a careful assessment of possible systematic and statistical experimental uncertainties is needed [19]. The sources of experimental uncertainties are presented in section 3.2. Users of nuclear data, e.g., the nuclear reactor community, usually give feedback which helps in prioritizing measurements of particular isotopes and reactions.


2.1.2 Integral data (benchmarks)

Integral experiments are used to measure macroscopic quantities such as the flux or keff. An integral experiment is not used for measuring microscopic quantities such as cross sections. However, the integral data obtained from these experiments are used for testing and validating nuclear data libraries. An example of extensive testing of nuclear data libraries with a large set of criticality safety and shielding benchmarks is presented in Ref. [20]. Integral benchmarks (referred to simply as benchmarks in this thesis) are integral experiments which have gone through a strict validation process. For example, the ICSBEP Handbook contains models (normally in MCNP) of the integral experiment together with an evaluated benchmark value. The evaluated benchmark value is the experimental observable obtained from the benchmark experiment (e.g. keff) corrected for the simplifications made in the modeling of the experiment. The evaluated benchmark value is also associated with an evaluated uncertainty.

In Fig. 2.2, a diagram of the cylindrical benchmark model for the hmf57 (case 3) benchmark is presented. The diagram shows the central cylinder made of HEU (yellow), surrounded by a lead reflector (blue), with a source placed in the middle of the HEU core. The evaluated benchmark value of hmf57 case 3 is keff = 1.0000 ± 0.0032.

Figure 2.2. The cylindrical benchmark model for hmf57 (case 3) showing the lead reflector (blue), the HEU fuel (yellow) and the source (red). The figure was taken from the ICSBEP Handbook [21]. Note that the diagram is not to scale.

In the past, benchmark testing of nuclear data was done after the evaluation process; in most modern evaluations, however, the benchmark testing step is carried out as an integral part of the evaluation process [22]. There are a number of international efforts geared towards providing the nuclear community with qualified benchmark data. One such project is the International Criticality Safety Benchmark Evaluation Project (ICSBEP) mentioned above, which contains criticality safety benchmarks derived from experiments that were performed at various nuclear critical facilities around the world [21]. These benchmarks are categorized according to their fissile media (plutonium, HEU, LEU, etc.), their physical form (metal, compound, solution, etc.), their neutron energy spectrum (thermal, intermediate, fast and mixed spectra) and a three digit reference number. Other benchmarks used for nuclear data and reactor applications are the Evaluated Reactor Physics Benchmark Experiments (IRPHE) [23], which contain a set of reactor physics-related integral data, and the radiation shielding experiments database (SINBAD) [24], which contains a compilation of reactor shielding benchmarks, fusion neutronics and accelerator shielding experiments. For existing reactor technology, these benchmarks can be used to validate computer codes, to test and validate nuclear data libraries, and for nuclear data adjustments and uncertainty reduction [21, 25]. In this work, benchmark data are used for nuclear data testing, data adjustment and nuclear data uncertainty reduction.
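As a small illustration of how such a benchmark is used, the sketch below compares a calculated keff for one random nuclear data file with an evaluated benchmark value and expresses the deviation in units of the combined benchmark and statistical uncertainty. This is only a schematic of the kind of comparison underlying the accept/reject and file-weighting ideas discussed later; the calculated values are invented for the example.

import math

# Evaluated benchmark value and its evaluated uncertainty (hmf57 case 3 values from above).
E, dE = 1.0000, 0.0032
# Calculated keff for one random nuclear data file and its Monte Carlo statistical
# uncertainty (illustrative numbers only).
C, dC = 0.9967, 0.0005

ce_ratio = C / E                                       # the usual C/E comparison
n_sigma = abs(C - E) / math.sqrt(dE ** 2 + dC ** 2)    # deviation in combined uncertainties

print("C/E = %.4f, deviation = %.1f sigma" % (ce_ratio, n_sigma))
# A random file could, for example, be rejected or down-weighted when this deviation is large.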

2.2 Model calculations

Modern nuclear data evaluation in the fast region is performed using nuclear reaction models [26]. These models are used for providing data where experimental data are scarce or unavailable [27]. The use of nuclear reaction models also has the added advantage that the various partial cross sections can be automatically summed up to the total cross section. This leads to internal consistency within evaluated files [27, 28]. Where experimental data are available, they are used for constraining and fine-tuning the model parameters used as inputs to nuclear model codes.

Nuclear reaction theories have been implemented in well-known nuclear reaction codes such as TALYS [26, 29] and EMPIRE [30] for theoretical model calculations. In this work, nuclear model calculations have been performed using the TALYS code. More details on the TALYS code can be found in section 4.1.1.

2.3 Nuclear data evaluation

Several approaches exist for nuclear data evaluation. These include: experimental data interpolation, Bayesian methods, re-normalization of existing evaluations, copy and paste from other nuclear data evaluations, and nuclear reaction modeling [31].

The basic steps involved in a nuclear data evaluation process based on nuclear reaction modeling are:

1. The selection and careful analysis of differential experimental data, mostly obtained from the EXFOR database.

2. Analysis of the low energy region, which includes thermal energy and the resolved and unresolved resonance regions. Analyses of neutron resonances are usually performed using R-matrix codes for light-nucleus reactions and for heavy nuclides in the low incident energy region [32]. These codes are complemented with data from the Atlas of Neutron Resonances [33], which contains a compilation of resonance parameters, thermal neutron cross sections and other quantities. In Ref. [33], the resonance parameters are analyzed in terms of the multilevel Breit-Wigner (MLBW) formalism. In new evaluations, however, the use of the Reich-Moore approximation is encouraged [34].

3. Theoretical model calculations using nuclear reaction codes such as EMPIRE [30] or TALYS [26, 29] for the fast energy region. Where available, parameters and their uncertainties as recommended in the Reference Input Parameter Library (RIPL) [27] are used as inputs to these nuclear reaction codes. The RIPL database contains reference input parameters for nuclear model calculations. In the fast region, the Hauser-Feshbach, pre-equilibrium and fission models are used for modeling medium- and heavy-nucleus reactions [22, 32]. Here, model parameters in these codes are adjusted and fine-tuned to fit selected experimental data. These model adjustments are important because current models are deficient and hence are not able to reproduce available experimental data. Also, even if models do reproduce differential experimental data, final adjustments are sometimes needed to reproduce integral data. By adding resonance information as explained in step 2 above, and other quantities such as the average number of fission neutrons, the fission neutron spectrum and the (n,f) cross section for actinides, a complete ENDF file covering thermal to high energies can be produced.

4. The data produced are then checked and tested using utility codes such as CHECKR, FIZCON and PSYCHE [35] to verify that the data conform to current formats and procedures. Once checked, the data are validated against a large set of integral benchmark experiments. The end product is a nuclear data library which contains information on a large number of incident particles for a large set of isotopes and materials. Feedback from model adjustments to fit both differential and integral data, and from applications, is needed for the improvement of theoretical models and for identifying energy regions where additional experimental efforts are needed.

2.4 Nuclear data libraries

After successful evaluation, nuclear data are compiled and stored in nuclear data libraries. The ENDF format is the accepted format for data storage. The content of an evaluated nuclear data file includes the following: general information such as the number of fission neutrons (nubar) and delayed neutron data; resonance parameters; reaction cross sections; decay schemes; fission neutron multiplicities; and nuclear data covariance information. These are represented by so-called MF numbers, and the MF-numbered files are subdivided into different reaction channels, such as (n,tot), (n,el) and (n,inl), which are represented by MT numbers [34]. A number of national and international evaluated nuclear data libraries are currently available from various nuclear data centers. These are the Japanese Evaluated Nuclear Data Library (JENDL) [36], the Evaluated Nuclear Data Library (ENDF/B) [31] from the USA, the TALYS Evaluated Nuclear Data Library (TENDL) [37], the Joint Evaluated Fission and Fusion File (JEFF) [38] from the OECD/NEA data bank, the Chinese Nuclear Data Library (CENDL) [39] and the Russian Nuclear Evaluated Data library (BROND) [40]. Other libraries for use in certain applications are the International Reactor Dosimetry File (IRDF) for reactor dosimetry applications [41], the European Activation File (EAF) [42] and the Fusion Evaluated Nuclear Data Library (FENDL) from the International Atomic Energy Agency (IAEA) [43].

2.5 Nuclear data definitions

The relationships between the different reaction cross sections in the fast energy region in the evaluated nuclear data files (ENDF) are presented in this section. The total cross section ($\sigma_{tot}$) is given as the sum of the elastic scattering cross section ($\sigma_{el}$) and the non-elastic cross section ($\sigma_{non-el}$):

$$\sigma_{tot}(MT=1) = \sigma_{el}(MT=2) + \sigma_{non-el}(MT=3) \qquad (2.1)$$

where MT = 1, 2 and 3 denote the total, elastic and non-elastic cross sections in ENDF nomenclature. The non-elastic cross section ($\sigma_{non-el}$), which represents all cross sections other than the elastic scattering cross section, can be expressed as:

$$\begin{aligned}
\sigma_{non-el}(MT=3) = {} & \sigma_{inl}(MT=4) + \sigma_{2n}(MT=16) + \sigma_{3n}(MT=17) \\
& + \sigma_{fission}(MT=18) + \sigma_{n\alpha}(MT=22) + \sigma_{np}(MT=28) \\
& + \sigma_{\gamma}(MT=102) + \sigma_{charged\;particle}(MT=103\text{-}107)
\end{aligned} \qquad (2.2)$$

where $\sigma_{inl}$ is the (n,inl), $\sigma_{2n}$ the (n,2n), $\sigma_{3n}$ the (n,3n), $\sigma_{fission}$ the (n,f), $\sigma_{n\alpha}$ the (n,n$\alpha$), $\sigma_{np}$ the (n,np) and $\sigma_{\gamma}$ the (n,$\gamma$) cross section, and $\sigma_{charged\;particle}$ represents the cross sections for producing charged particles. The MT numbers given in Eq. 2.2 are the corresponding MT numbers of the cross sections in ENDF nomenclature. The inelastic cross section (MT=4) is given as the sum of the cross sections for the 1st-40th excited states. In the situation where the elastic channel is fed by compound nucleus decay in addition to shape elastic scattering, the elastic cross section is expressed as [26, 29]:

$$\sigma_{el}(MT=2) = \sigma_{shape-el} + \sigma_{comp-el} \qquad (2.3)$$

where $\sigma_{shape-el}$ and $\sigma_{comp-el}$ are the shape elastic and compound elastic cross sections. The shape elastic part comes from the optical model while the compound elastic part comes from compound nucleus theory. The total fission cross section ($\sigma_{fission}$) can be found in MT=18 in the ENDF formatted file and is given as the sum of the first, second, third and fourth chance fission cross sections:

$$\sigma_{fission}(MT=18) = \sigma_{n,f}(MT=19) + \sigma_{n,nf}(MT=20) + \sigma_{n,2nf}(MT=21) + \sigma_{n,3nf}(MT=38) \qquad (2.4)$$

where $\sigma_{n,f}$ is the first chance fission cross section and $\sigma_{n,nf}$, $\sigma_{n,2nf}$ and $\sigma_{n,3nf}$ are the second, third and fourth chance fission cross sections, respectively. The cross sections discussed above can be found in MF=3.

The angular distributions, which are made up of elastic and inelastic angular distributions, can be found in MF=4. The elastic angular distribution $d\sigma_{el}/d\Omega$ has two components, the direct and compound parts. The direct (shape-elastic) part comes directly from the optical model while the compound part comes from compound nucleus theory [26, 29]:

$$\frac{d\sigma_{el}}{d\Omega} = \frac{d\sigma_{shape-el}}{d\Omega} + \frac{d\sigma_{comp-el}}{d\Omega} \qquad (2.5)$$

Similarly, the inelastic angular distribution to a single discrete state $i$, $d\sigma^{i}_{n,n'}/d\Omega$, can be given as the sum of the direct and compound parts [26, 29]:

$$\frac{d\sigma^{i}_{n,n'}}{d\Omega} = \frac{d\sigma^{i,direct}_{n,n'}}{d\Omega} + \frac{d\sigma^{i,compound}_{n,n'}}{d\Omega} \qquad (2.6)$$
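The sum rules above are exactly the kind of internal consistency an evaluated file is expected to satisfy, and they are easy to check numerically. The short NumPy sketch below verifies Eq. 2.1 and Eq. 2.4 on a toy three-point energy grid; the arrays are illustrative stand-ins rather than data read from an actual ENDF file.

import numpy as np

# Toy partial cross sections (barns) on a common three-point energy grid.
sigma_el = np.array([4.9, 4.1, 3.2])       # elastic (MT=2)
sigma_non_el = np.array([0.1, 0.9, 2.1])   # non-elastic (MT=3)
sigma_tot = np.array([5.0, 5.0, 5.3])      # total (MT=1)

# Eq. 2.1: total = elastic + non-elastic
assert np.allclose(sigma_tot, sigma_el + sigma_non_el)

# Eq. 2.4: total fission (MT=18) = sum of first- to fourth-chance fission (MT=19, 20, 21, 38)
sigma_chance = np.array([[1.2, 0.3, 0.05, 0.01],
                         [1.1, 0.4, 0.10, 0.02],
                         [1.0, 0.5, 0.20, 0.05]])
sigma_fission = sigma_chance.sum(axis=1)
print("sigma_fission (b):", sigma_fission)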

2.6 Resonance parameters

The resolved and unresolved resonance parameters are found in File 2 (MF2-MT151) of the evaluated nuclear data file. These parameters include $E_{\lambda}$, $\Gamma$, $\Gamma_n$, $\Gamma_{\gamma}$ and $\Gamma_f$, where $E_{\lambda}$ is the resonance energy and $\Gamma$, $\Gamma_n$, $\Gamma_{\gamma}$, $\Gamma_f$ are the total, neutron, radiative and fission widths, respectively. These parameters cannot be predicted from theory but can be determined from experiments. In Fig. 2.3, an example of a single resonance of the 239Pu capture cross section at the resonance energy $E_{\lambda}$ = 35.5 eV is presented. The plot was generated from the ENDF/B-VII.1 nuclear data library using the JANIS nuclear data tool [44].

Figure 2.3. An example of the 239Pu (n,γ) cross section at $E_{\lambda}$ = 35.5 eV. The plot was generated from the ENDF/B-VII.1 nuclear data library using the JANIS nuclear data tool [44].

Here $\sigma_0$ is the cross section for the formation of the compound nucleus, $\sigma_0 \Gamma_{\gamma}/\Gamma$ is the height of the resonance peak for the (n,$\gamma$) cross section, and $\Gamma_{\gamma}/\Gamma$ is the probability of decay via $\gamma$-emission (capture). The total width $\Gamma$, given as the sum of the partial widths, is expressed as:

$$\Gamma = \Gamma_n + \Gamma_{\gamma} + \Gamma_f \qquad (2.7)$$

The resonance parameters are needed for the calculation of cross sections at reactor operating temperatures due to the Doppler broadening of neutron resonances.
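To connect these parameters to an actual resonance shape, the sketch below evaluates a simple single-level, Breit-Wigner-like capture resonance whose peak height at $E = E_{\lambda}$ is $\sigma_0 \Gamma_{\gamma}/\Gamma$, as stated above. It deliberately neglects the 1/v behaviour, interference terms and Doppler broadening, and the numbers are illustrative rather than evaluated 239Pu parameters.

import numpy as np

def capture_xs(E, E_res, sigma0, gamma_n, gamma_g, gamma_f):
    # Lorentzian (single-level Breit-Wigner-like) capture cross section in barns;
    # the peak height is sigma0 * gamma_g / gamma_tot at E = E_res.
    gamma_tot = gamma_n + gamma_g + gamma_f            # Eq. 2.7
    lorentz = (gamma_tot / 2.0) ** 2 / ((E - E_res) ** 2 + (gamma_tot / 2.0) ** 2)
    return sigma0 * (gamma_g / gamma_tot) * lorentz

# Illustrative parameters (energies and widths in eV, sigma0 in barns).
E = np.linspace(30.0, 41.0, 221)
xs = capture_xs(E, E_res=35.5, sigma0=5000.0, gamma_n=0.08, gamma_g=0.04, gamma_f=0.03)
print("peak capture cross section ~ %.0f b near 35.5 eV" % xs.max())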

2.7 Covariance data

Covariance data specify nuclear data uncertainties and their correlations. These data are needed for the assessment of uncertainties of design and safety parameters in nuclear technology applications [22]. Several methods are available for the computation of nuclear data covariances. These methods can be classified under three categories: deterministic, Monte Carlo and hybrid approaches [45]. In this work, the Monte Carlo method was used to implicitly determine the covariance of nuclear data.

Covariance data are usually stored in MF31-35 and MF40 of the evaluated nuclear data file. Once covariance data are available in an evaluated nuclear data file, uncertainty propagation for current and advanced reactor applications can be performed, as discussed in more detail in section 3.3.
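The sketch below illustrates what determining the covariance ’implicitly’ with Monte Carlo means in practice: given many random realizations of a cross section on a common group structure (for instance from random TALYS files), the covariance and correlation matrices follow directly from the sample statistics. The ensemble here is synthetic, with a shared perturbation added only so that the groups are visibly correlated; it is not data from this work.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic ensemble: 500 random realizations of a cross section in 5 energy groups.
mean_xs = np.array([2.0, 1.5, 1.0, 0.5, 0.2])       # barns, illustrative
rel_unc = np.array([0.02, 0.03, 0.05, 0.08, 0.10])  # relative uncertainty per group

# A shared perturbation mimics how varying one model parameter correlates all groups.
shared = rng.standard_normal((500, 1))
independent = rng.standard_normal((500, 5))
samples = mean_xs * (1.0 + rel_unc * (0.7 * shared + 0.7 * independent))

cov = np.cov(samples, rowvar=False)        # implicit 5 x 5 covariance matrix
corr = np.corrcoef(samples, rowvar=False)  # corresponding correlation matrix

print("group standard deviations (b):", np.sqrt(np.diag(cov)).round(3))
print("correlation matrix:")
print(corr.round(2))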


3. Uncertainty Quantification

’It is in the admission of ignorance and the admission of uncertainty that there is a hope for the continuous motion of human beings in some direction that doesn’t get confined, permanently blocked, as it has so many times before in various periods in the history of man.’

- Richard P. Feynman

This chapter presents uncertainty quantification approaches, made up of deterministic and stochastic methods, currently utilized for nuclear data sensitivity and uncertainty analysis. Also, the sources of uncertainties and a brief discussion of the statistical estimators used to make statistical deductions are presented.

Uncertainty propagation involves the quantification of the effects of uncertain input parameters on model outputs, i.e., studying the impact of the variability of the input parameters on the model outputs. A closely related field, known as sensitivity analysis, involves evaluating how much the variation of each input parameter contributes to the output uncertainty. With sensitivity analysis, the model inputs that cause significant uncertainty in the output, and the relationships between the input and output parameters, are identified. Fig. 3.1 illustrates the relation between the input, the model and the model output.

Figure 3.1. A flowchart showing the propagation of uncertainty in input parameters through a model. The figure shows the input $X = (x_1, x_2, ..., x_n)$, the model $g = f(x_1, x_2, ..., x_n)$ and the model output $g(X)$.

If a model is $g = f(x_1, x_2, ..., x_n)$, then the uncertainties in the input parameters, i.e., a collection of random variables $X = (x_1, x_2, ..., x_n)$, can be propagated through the model to obtain a distribution of $g(X)$. From this distribution, the first four moments can be determined, as presented later in section 3.2. $g(X)$ could be the neutron cross sections, in which case the input variables $X$ are the distributions of the nuclear reaction model parameters and the model represents the nuclear reaction models, such as the optical and Hauser-Feshbach models. $g(X)$ could also represent any reactor quantity of interest such as keff; in this case, the input parameters $X$ represent the distributions of the neutron cross sections and the model represents the geometrical and physical model of a neutron transport code, as presented in section 4.1.3.

In the TMC method, discussed in more detail in section 3.3.2, the model includes both the nuclear physics and the neutron transport codes, consequently linking the reaction model parameter distributions, $X$, all the way to the macroscopic reactor parameter distribution, $g(X)$, of interest.
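A minimal numerical illustration of Fig. 3.1 is given below: random samples of the inputs $X$ are pushed through a toy model $g(X)$ and the resulting output distribution is summarized by the first four moments defined in section 3.2. The model and the input distributions are arbitrary placeholders chosen only to show the mechanics of the propagation.

import numpy as np

rng = np.random.default_rng(42)

def g(x1, x2, x3):
    # Toy model standing in for a nuclear reaction model or a transport calculation.
    return x1 * np.exp(-x2) + 0.1 * x3 ** 2

# Sample the uncertain inputs X = (x1, x2, x3); the distributions are illustrative.
n = 10000
x1 = rng.normal(1.00, 0.05, n)
x2 = rng.normal(0.50, 0.10, n)
x3 = rng.uniform(0.0, 1.0, n)

out = g(x1, x2, x3)

mean = out.mean()
std = out.std(ddof=1)
skew = np.mean(((out - mean) / std) ** 3)   # cf. Eq. 3.4
kurt = np.mean(((out - mean) / std) ** 4)
print("mean = %.4f, std = %.4f, skewness = %.3f, kurtosis = %.3f" % (mean, std, skew, kurt))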

3.1 Sources of uncertainties

As mentioned earlier, nuclear data evaluation benefits from both experimental data and nuclear reaction modeling. Since the models have uncertain inputs, they are normally calibrated using experimental data. However, since experiments are not exact, measured quantities contain uncertainties. The errors in measurements come from a variety of sources, such as the method of measurement, errors due to the measuring instrument used and errors coming from the person performing the experiment [46]. If an error is defined as the difference between the measured and the true value, the uncertainty is defined as the estimate of the magnitude of the error. Measurement errors are usually classified as random or systematic errors. Random errors are caused by unknown and unpredictable changes in the experiment, e.g., detector noise. These errors can be estimated by repeating a particular measurement under the same conditions; the mean value of the measurements then becomes the best estimate of the measured quantity. Systematic errors, on the other hand, are reproducible inaccuracies that shift all measurements within an experiment in a systematic way. An example of a systematic uncertainty is the detector efficiency. These errors can be identified by using a more accurate measuring instrument or by comparing a given result with a measurement of the same quantity performed with a different method [47]. In many cases, the errors cannot be directly inferred from measurements; their magnitude is rather obtained from uncertainty propagation.

Similar to experimental uncertainties, the sources of uncertainties in modeling

can generally be classified into two categories:

1. Aleatory or random uncertainty, which arises from the random nature of

the system under investigation or by the random process in the model-

ing, e.g., Monte Carlo modeling.


2. The epistemic uncertainty [46], which results in a systematic shift between the modeled response parameter and the response parameter's true value, e.g., a model defect.

In Reactor Physics, a response parameter can be defined as any quantity of

interest such as the keff or reaction rates, that can be measured or predicted in

a simulation [48]. Normally, epistemic uncertainties can arise from one of the

following sources [46, 10]:

1. From the data and the parameters used in the model, e.g., cross sections,

boundary conditions.

2. Uncertainties in the physics of modeled processes as a result of ’incom-

plete knowledge’.

3. Assumptions and simplifications of the methods used for solving model

equations.

4. From uncertainties arising from our inability to model complex geome-

tries accurately.

3.2 Statistical estimation
Since evaluated nuclear data are obtained from models in combination with

experiments which come with their corresponding uncertainties, statistical de-

ductions are used to extract valuable information. Statistical information is

normally provided by moments of the respective probability distributions [47].

The mean is commonly used to describe the best estimate of a parameter in a

normal distribution while the spread is best described using the standard devi-

ation and the variance. The uncertainties in physical observations are usually

interpreted as standard deviations.

If we consider a collection of random variables X = (x1, ...,xn) and g is a func-

tion of X given as g = f (x1,x2,x3, ...,xn), then the expectation of g(X), which

is the first central moment, denoted by E(g(X)) can be defined as [47]:

$$E(g(X)) \equiv \int_{S_X} g(X)\,p(X)\,dX \qquad (3.1)$$

where p(X) is the probability density function and S_X is the n-dimensional space formed by all the possible values of X.

The second central moment, the variance of g(X), is the dispersion of a ran-

dom variable around its mean, and can be expressed as:

$$V(g(X)) \equiv E\left[(g(X)-E(g(X)))^2\right] \qquad (3.2)$$

From Eq. 3.2, the standard deviation which is the positive square root of the

variance denoted by σ can be expressed as:

$$\sigma \equiv [V(g(X))]^{1/2} \equiv \left[E\left[(g(X)-E(g(X)))^2\right]\right]^{1/2} \qquad (3.3)$$


Other higher moments are the skewness and the kurtosis. The skewness or the

third central moment, S(g(X)), is a measure of the degree of asymmetry of a

particular distribution and can be given as:

$$S(g(X)) \equiv \frac{E\left([g(X)-E(g(X))]^3\right)}{\sigma(g(X))^3} \qquad (3.4)$$

A perfectly symmetric distribution (such as a Gaussian) has a skewness of zero, while a negative skewness value implies a long tail of data to the left of

the mean. A positive skewness value indicates a long tail of data towards the

right of the mean. The kurtosis which is the fourth central moment, can be

given as:

$$K(g(X)) \equiv \frac{E\left([g(X)-E(g(X))]^4\right)}{\sigma(g(X))^4} \qquad (3.5)$$

The kurtosis measures the ’peakedness’ of a probability distribution of ran-

dom variables. It determines whether the data are peaked or flat, relative to a

normal distribution.
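As a minimal illustration of these estimators, the sketch below computes the sample mean, standard deviation, skewness and kurtosis of a set of output values. The keff samples are synthetic and purely hypothetical; they only serve to show the arithmetic behind Eqs. 3.1-3.5.

```python
import numpy as np

def first_four_moments(samples):
    """Sample estimates of the mean, standard deviation,
    skewness (Eq. 3.4) and kurtosis (Eq. 3.5) of a distribution."""
    g = np.asarray(samples, dtype=float)
    mean = g.mean()
    sigma = g.std(ddof=1)
    skew = np.mean((g - mean) ** 3) / sigma ** 3
    kurt = np.mean((g - mean) ** 4) / sigma ** 4   # 3 for a normal distribution
    return mean, sigma, skew, kurt

# hypothetical keff samples, e.g. one value per random nuclear data file
keff = np.random.default_rng(1).normal(1.010, 0.003, size=500)
print(first_four_moments(keff))
```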

For a multivariate probability distribution X = (x1,x2...,xn), the second or-

der central moments comprise not only the variances but also the covariance.

Covariance can be defined as a measure of how much two random variables

change together; it measures the strength of the correlation between two vari-

ables. The covariances between different input variables can be described by

the so-called covariance matrix. Covariance matrices are symmetric, comprising off-diagonal covariances and variances along the diagonal. The sample

covariance of N observations between two random variables x j and xk, can be

expressed as:

$$\mathrm{cov}(x_j, x_k) = \frac{1}{N-1}\sum_{i=1}^{N}(x_{ij}-\bar{x}_j)(x_{ik}-\bar{x}_k) \qquad (3.6)$$

where x̄_j and x̄_k are the mean values of the x_j and x_k variables respectively.

If all the variables in X are uncorrelated the covariance matrix is made up of

only the diagonal elements.

Using Eq. 3.6, the correlation coefficient (ρ_{x_j x_k}), which is a measure of the strength of the linear relationship between two variables, can be given as:

$$\rho_{x_j x_k} = \frac{\mathrm{cov}(x_j, x_k)}{\sigma_{x_j}\sigma_{x_k}} \qquad (3.7)$$

where σ_{x_j} and σ_{x_k} are the standard deviations of x_j and x_k respectively. A

perfect negative correlation is represented by the value -1, while a 0 indicates

no correlation and a +1 indicates a perfect positive correlation.
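A short sketch of these two estimators, applied to two hypothetical, partially correlated input parameters; the data are synthetic and only illustrate Eqs. 3.6-3.7.

```python
import numpy as np

def sample_covariance(xj, xk):
    """Sample covariance of two equally long series (Eq. 3.6)."""
    xj, xk = np.asarray(xj, float), np.asarray(xk, float)
    return np.sum((xj - xj.mean()) * (xk - xk.mean())) / (len(xj) - 1)

def correlation(xj, xk):
    """Pearson correlation coefficient (Eq. 3.7)."""
    return sample_covariance(xj, xk) / (np.std(xj, ddof=1) * np.std(xk, ddof=1))

# two hypothetical, partially correlated input parameters
rng = np.random.default_rng(2)
x1 = rng.normal(0.0, 1.0, 1000)
x2 = 0.6 * x1 + rng.normal(0.0, 0.8, 1000)
print(sample_covariance(x1, x2), correlation(x1, x2))
```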


3.3 Uncertainty quantification approaches
Methods for nuclear data uncertainty quantification in best-estimate model

predictions are usually either stochastic such as the TMC method used in

this work or deterministic such as the perturbation approach utilized in, e.g.,

Ref. [49]. There are recently also hybrid methods that combine the strengths

of the Monte Carlo and the deterministic approaches for uncertainty analy-

sis [45].

3.3.1 Deterministic methods

With the deterministic approach, local sensitivities of particular response pa-

rameters such as keff to variations in input parameters (nuclear data in our

case), can be determined by using the generalized perturbation theory, e.g., [49].

The local sensitivities of the model’s response to input parameter variation are

accomplished by computing the response with input parameter values per-

turbed usually within 1% from the nominal parameter values, i.e., by varying

x1,x2, ...,xn, individually by 1%, the response of g(x) is computed. The sensi-

tivity vector, S_x = (S_{x_1}, ..., S_{x_n}), can be computed using:

$$S_{x_1} = \frac{\partial g(X)}{\partial x_1} \qquad (3.8)$$

The variable, X, is typically comprised of cross sections in multi-group for-

mat. This method relies on the assumption that the sensitivity of the output

parameter depends linearly on the variation in each input parameter [10].

Given that Vx denotes the covariance matrix for the parameters (x1, ...,xn), then

by using the ’sandwich rule’, the variance of the response can be expressed

as [46]:

$$\mathrm{var}(g(X)) = S_x V_x S_x^{T} \qquad (3.9)$$

where the superscript ’T’ indicates a transposed matrix and the covariance

matrix (Vx) contains both diagonal and off-diagonal elements for correlated

parameters. In the case where the parameters are uncorrelated, Eq. 3.9 be-

comes:

$$\mathrm{var}(g(X)) = \sum_{i=1}^{n} S_{x_i}^{2}\,\mathrm{var}(x_i) \qquad (3.10)$$

This approach has been extensively used to evaluate the impact of neutron

cross-section uncertainties on some significant reactor response parameters

related to the reactor fuel cycle [49].
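The sketch below illustrates the two steps of this approach, finite-difference sensitivities followed by the sandwich rule, on a deliberately simple, hypothetical response (a weighted sum of three "cross sections"). It is not the generalized perturbation theory implementation of Ref. [49], only the arithmetic of Eqs. 3.8-3.10.

```python
import numpy as np

def sensitivities(model, x0, rel_step=0.01):
    """Finite-difference sensitivity vector S_xi = dg/dx_i (Eq. 3.8),
    obtained by perturbing each input by 1% of its nominal value."""
    x0 = np.asarray(x0, dtype=float)
    g0 = model(x0)
    S = np.zeros_like(x0)
    for i in range(len(x0)):
        xp = x0.copy()
        xp[i] *= 1.0 + rel_step
        S[i] = (model(xp) - g0) / (x0[i] * rel_step)
    return S

def sandwich_variance(S, Vx):
    """Sandwich rule, var(g) = S Vx S^T (Eq. 3.9)."""
    return S @ Vx @ S

# hypothetical response: a weighted sum of three multigroup cross sections
weights = np.array([0.5, 1.5, 0.8])
model = lambda x: weights @ x
x0 = np.array([2.0, 1.0, 3.0])                          # nominal values
Vx = np.diag((np.array([0.02, 0.05, 0.01]) * x0) ** 2)  # uncorrelated 2%, 5%, 1%
S = sensitivities(model, x0)
print(np.sqrt(sandwich_variance(S, Vx)))                # standard deviation of g(X)
```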

3.3.2 Stochastic methods

The focus in this thesis is on Monte Carlo or stochastic methods used for

nuclear data uncertainty propagation. These methods are based on random


sampling of input parameters, X. For each random set, m, of input parame-

ters, X_m = (x_{1,m}, ..., x_{n,m}), a set of output responses g(X_m) of interest is pro-

duced [10]. Stochastic methods used for nuclear data uncertainty propagation

can generally be classified into two types:

1. Propagation of nuclear data uncertainties from basic nuclear physics.

2. Uncertainty propagation from existing covariance information that comes with modern nuclear data evaluations.

Within category (1), the Total Monte Carlo (TMC) method is used in this work and

described in detail in the next subsection.

Total Monte Carlo (TMC)
The TMC methodology (here called 'original TMC') was first proposed by

Koning and Rochman in 2008 [50] for nuclear data uncertainty propagation.

With the method, inputs to a nuclear reaction code such as TALYS [26, 29],

are created after sampling from nuclear model parameter distributions to cre-

ate random nuclear data files [26]. Experimental data and their uncertainties

are accounted for by using an accept/reject approach where calculations that

fall within an acceptance band determined from comparison with experimen-

tal data, are accepted and those that do not fulfil this criterion are rejected.

The acceptance band was obtained by visual evaluation methods [51]. This

approach has been criticised for not including experimental data in a more rig-

orous way. There is, however, on-going work with the goal of incorporating

both differential [51, 52] and integral experiments (PAPER IV) into the TMC

methodology in a more rigorous way. The inclusion of integral experimental

data in a rigorous way is also one of the main objectives of this thesis and is

described in detail in section 4.5 and in PAPER IV.

To create a complete ENDF file covering from thermal to fast neutron ener-

gies, non-TALYS data such as the neutron resonance data, total (n,tot), elastic

(n,el), capture (n,γ) or fission (n,f) cross sections at low neutron energies, av-

erage number of fission neutrons, and fission neutron spectra (for fissionable

nuclei) are added to the results obtained from the TALYS code using other

auxiliary codes [26].

A summary of the TMC method is depicted in a flow chart in Fig. 3.2. From

the figure, parameters in both phenomenological and microscopic models im-

plemented in nuclear reaction codes such as the optical model, pre-equilibrium,

compound nucleus models are adjusted to reproduce experimental data. These

parameters could be the real central radius (rv) or the real central diffuseness

(av) of the optical model for example. The output of the codes such as cross

sections, fission yields and angular distributions are compared with differen-

tial experimental data by defining an uncertainty band which covers most of


Figure 3.2. A flowchart depicting the Total Monte Carlo approach for uncertainty analysis. Random files generated using the TALYS based code system [26] are processed and used to propagate nuclear data uncertainties in reactor calculations. (The flowchart connects physical models and their parameters, via comparison with experimental data, to a large set of accepted random ENDF files, simulations, observables such as cross sections, fission yields and angular distributions, and applications such as reactor calculations, depletion studies and transient analysis; an inset histogram shows the resulting distribution of keff values.)

the experimental data available. Data that fall within this uncertainty band are

accepted while those that do not fulfil this criterion are rejected. The accepted

files are then processed into the ENDF format using the TEFAL code [53].

These ENDF formatted files are translated into usable formats and fed into

neutron transport codes to obtain distributions in reactor parameters of inter-

est. From these distributions, statistical information such as the moments pre-

sented in subsection 3.2 can be inferred.

In Fig. 3.3, the (n,el) and (n,γ) cross sections of 50 random 208Pb files are plotted as a func-

tion of incident neutron energy. A spread in data can be observed for the

entire energy region as presented in Fig. 3.3. This is expected as each file con-

tains a unique set of nuclear data obtained from the distribution of the nuclear

model parameters. Due to the variation of nuclear data, different distributions

with their corresponding mean values and standard deviations can be obtained

for different response parameters such as keff, temperature feedback coeffi-

cients and kinetic parameters [54]. By varying nuclear data using the TMC

method for a particular response parameter, the observed total variance of the

response parameter (σ²_obs) in the case of Monte Carlo neutron transport codes, e.g., MCNP or SERPENT, can be expressed as:

$$\sigma^2_{obs} = \sigma^2_{ND} + \sigma^2_{stat} \qquad (3.11)$$

where σ²_ND is the variance of the response parameter under study due to nuclear data uncertainties and σ²_stat is the variance due to statistics from the

Monte Carlo code. In the case of deterministic codes, there are no statistical


Figure 3.3. 50 random 208Pb cross sections (in mb) plotted as a function of incident neutron energy. Left: 208Pb(n,el); right: 208Pb(n,γ). Note that each random file contains a unique set of nuclear data.

uncertainties and Eq. 3.11 becomes:

$$\sigma^2_{obs} = \sigma^2_{ND} \qquad (3.12)$$

With the ’original TMC’ described above, the time taken for a single calcula-

tion is increased by a factor of n, where n (the number of samples or random

files) ≥ 500, making it unsuitable for some applications. As a solution, a faster

method called the ’Fast TMC’ was developed [55]. By changing the seed of

the random number generator within the Monte Carlo code and changing nu-

clear data at the same time, a spread in the data that is due to both statistics

and nuclear data is obtained (same as in Eq. 3.11). However, by using different

seeds for each simulation, a more accurate estimate of the spread due to statis-

tics is obtained and, therefore, the statistical requirement on each run could be

lowered, thereby reducing the computational time involved for each calcula-

tion. The usual rule of thumb for the original TMC is σ_stat ≤ 0.05·σ_obs; for fast TMC, however, σ_stat ≤ 0.5·σ_obs [55]. A detailed presentation of fast

TMC methodology is found in Refs. [55, 56, 57]. In this work, the fast TMC

was used and from here on, only referred to as the TMC method.
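A minimal sketch of how the nuclear data component is separated from the Monte Carlo statistics in practice, assuming one keff value and one reported statistical sigma per random file; the numbers are synthetic and only illustrate Eq. 3.11.

```python
import numpy as np

def nuclear_data_uncertainty(keff, keff_stat_sigma):
    """Separate the nuclear data spread from the Monte Carlo statistics (Eq. 3.11):
    sigma_ND = sqrt(sigma_obs^2 - sigma_stat^2), with sigma_stat^2 taken here as
    the mean of the per-run statistical variances."""
    keff = np.asarray(keff, dtype=float)
    var_obs = np.var(keff, ddof=1)
    var_stat = np.mean(np.asarray(keff_stat_sigma, dtype=float) ** 2)
    return np.sqrt(max(var_obs - var_stat, 0.0))

# hypothetical fast-TMC output: one keff and one statistical sigma per random file
rng = np.random.default_rng(3)
nd = rng.normal(0.0, 0.004, 600)        # spread caused by nuclear data
mc = rng.normal(0.0, 0.0015, 600)       # spread caused by Monte Carlo statistics
keff = 1.010 + nd + mc
stat = np.full(600, 0.0015)             # reported statistical sigma per run
print(nuclear_data_uncertainty(keff, stat))   # should recover roughly 0.004
```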

Other Monte Carlo methods from basic nuclear physics
Other Monte Carlo uncertainty propagation methods from basic nuclear physics

include, the Unified Monte Carlo (UMC) approach proposed by D.L. Smith [58]

and the Backward-Forward Monte Carlo (BFMC) proposed in Ref. [59]. The

UMC method is based on the applications of Bayes theorem and the principle

of maximum entropy as well as on fundamental definitions from probability

theory [58]. The method seeks to incorporate experimental data into model

calculations in a more rigorous and consistent manner [45].

The Backward-Forward Monte Carlo (BFMC) method involves the Backward


and the Forward Monte Carlo steps. In the Backward step, a covariance matrix

of model parameters is obtained by using a generalized χ² with differential data as constraints, leading to observables consistent with experimental data.

The generalized χ2 is used to quantify the likelihood of model calculation

with respect to a set of experimental constraints. Starting from the covariance

matrix of model parameters obtained from the backward step, a sampling of

the Backward Monte Carlo parameter distribution is performed. The result-

ing distribution of ND observables hence includes experimental uncertainty

information [45].

Sampling from the evaluated covariance matrix
Several other approaches are based on Monte Carlo sampling of nuclear data

inputs based on covariance information that come with new nuclear data eval-

uations. The disadvantage of these approaches is that they rely on the assump-

tion of normal distributions of the input parameters. The covariance data avail-

able are, in addition, often neither comprehensive nor complete [26]. One such method

has been implemented in the AREVA GmbH code NUDUNA (NUclear Data

UNcertainty Analysis) [60]. With this method, nuclear input parameters are

first randomly sampled according to a multivariate distribution model based

on covariance data. A large set of random data is generated and used for

the computation of different response parameters. This method can include

uncertainties of multiplicities, resonance parameters, fast neutron cross sec-

tions and angular distributions. Another method is the GRS (Gesellschaft für

Anlagen-und Reaktorsicherheit, Germany) method implemented in the SUSA

(Software for Uncertainty and Sensitivity Analysis) code which depends on

randomly grouped cross sections generated from existing covariance files [61]

which are propagated to macroscopic reactor parameters.

Similarly, in Ref. [62], a stochastic sampling method for quantifying nuclear

data uncertainties is accomplished by utilizing perturbed ACE formatted nu-

clear data generated using multigroup nuclear data covariance information.

This approach has been implemented successfully in the NUSS tool [62]. In

another study, the SharkX tool [63] under development at the PSI, is used

in combination with the CASMO-5 code [64] for uncertainty quantification

and sensitivity analysis. Cross sections, fission spectrum, neutron multiplic-

ities, decay constants as well as fission yields are perturbed based on statis-

tical sampling methods. Also, uncertainty and sensitivity analyses applied to

lattice calculations using perturbed multigroup cross section libraries with the

DRAGON lattice code [65] have been presented in Refs. [66, 67]. In this work,

however, the random nuclear data were produced using the TMC method.
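For contrast with TMC, the sketch below shows the basic sampling step that such covariance-based methods share: drawing correlated cross-section sets from a multivariate normal distribution. The three-group cross section and its relative covariance matrix are hypothetical and serve only to illustrate the idea, not any particular tool mentioned above.

```python
import numpy as np

def sample_cross_sections(mean_xs, covariance, n_samples, seed=0):
    """Draw random cross-section sets from a multivariate normal distribution,
    the assumption underlying covariance-based sampling approaches."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean_xs, covariance, size=n_samples)

# hypothetical 3-group cross section with 2% uncertainty and
# positive correlation between neighbouring groups
mean_xs = np.array([4.2, 2.8, 1.9])                 # barns
rel_cov = np.array([[1.0, 0.5, 0.1],
                    [0.5, 1.0, 0.5],
                    [0.1, 0.5, 1.0]]) * 0.02 ** 2
cov = rel_cov * np.outer(mean_xs, mean_xs)
samples = sample_cross_sections(mean_xs, cov, n_samples=1000)
print(samples.mean(axis=0), samples.std(axis=0) / mean_xs)   # ~2% relative spread
```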


4. Methodology

'With the current and future computer technology, and the accumulated amount of knowledge in the nuclear data community, we believe that this methodology is technologically condemned to succeed.' - D. Rochman and A.J. Koning

In this chapter, the methodology used is presented. First, in section 4.1, the

simulation tools used in this work are briefly presented. In section 4.2, model

calculations performed with the TALYS code are presented. Methods for com-

puting uncertainties due to both global and partial (local) variations of nu-

clear data on neutronic parameters are discussed in section 4.3. In section 4.4, the

description of the ELECTRA reactor together with the propagation of nuclear

data uncertainties for some macroscopic reactor parameters using the TMC

method is presented. Section 4.5 describes methods developed as part of this

work for nuclear data uncertainty reduction using integral experiments. This

includes an accept/reject method and a method of assigning file weights based

on the likelihood function. In addition, a methodology developed for select-

ing benchmark experiments for reactor simulations, is presented in section 4.6.

Also, in section 4.7, a correlation based sensitivity method is used to determine

the sensitivity of benchmarks and application cases to different cross sections

for particular isotopes and energy groups. Finally, a method for uncertainty

propagation of nuclear data uncertainties in burnup calculations as well as a

method for combining differential and integral experimental data for nuclear

data adjustments are explained in sections 4.8 and 4.9.

4.1 Simulation tools

The work in this thesis was done using a number of computer codes. These

codes include the TALYS based code system used for nuclear reaction cal-

culations and the production of random nuclear data files; the NJOY [68] and PREPRO [69] codes for nuclear data processing; and the MCNP5/X [8] and SERPENT [70] codes for reactor core calculations. Brief descriptions of these

codes are presented in the subsequent subsections.


4.1.1 TALYS based code system

The TALYS based code system is made up of a group of codes coupled to-

gether with a script called AutoTALYS [26]. This code system is used for

nuclear data evaluation and the production of random nuclear data libraries.

An example is the TALYS Evaluated Nuclear Data Library (TENDL) [37]

which contains complete ENDF formatted nuclear data libraries including co-

variance matrices for many isotopes, particles, energies, reaction channels and

secondary quantities. The codes included in the TALYS based code system

are the TALYS code [26, 29], TASMAN [71], TAFIS [72], TANES [73],

TARES [74] and TEFAL [53] codes. These codes are sometimes referred

to as the T6 code package.

The TALYS code which forms the main basis for the TMC methodology is a

state of the art nuclear physics code used for the predictions and analysis of

nuclear reactions [26, 29]. In the TMC methodology, the TALYS code is used

to generate nuclear data for all open channels in the fast neutron energy re-

gion, i.e., beyond the resonance region. This is achieved by fine tuning model

parameters of various nuclear reaction models within the code so that model

calculations reproduce differential experimental data. In situations where ex-

perimental data are unavailable TALYS is used for the prediction and extrapo-

lation of data [26]. The output of TALYS include total, elastic, inelastic cross

sections, elastic and inelastic angular distributions and other reaction channels

such as (n,2n), (n,np). To create a complete ENDF file covering from ther-

mal to fast neutron energies, non-TALYS data such as the neutron resonance

data, cross sections at low neutron energies, average number of fission neu-

trons, and fission neutron spectra are added to the results obtained from the

TALYS code. This is achieved by using other auxiliary codes [26] such as

the TARES code [74] for resonance parameters, the TAFIS and TANES codes

for the average number of fission neutrons and the fission neutron spectrum

respectively [72, 73]. The TASMAN code [71], is used to create input files to

TALYS and the other codes by generating random distributions of input pa-

rameters by randomly sampling each input parameter from a distribution with

a specific width for each parameter.

The uncertainty distribution in nuclear model parameters is often assumed to

be either Gaussian or uniform shaped [26]. The different input files created

by the TASMAN code are then run multiple times with the TALYS code, each

time with a different set of model parameters, to obtain distributions in calcu-

lated quantities. From the distributions obtained, statistical information such

as the mean, standard deviations and variances and a full covariance matrix

which includes both diagonal and off-diagonal elements can be obtained. Fi-

nally, the TEFAL code is used to translate the nuclear reaction results obtained

from all the different modules within the T6 code package into ENDF format-

ted nuclear data libraries [53]. It must be noted that even though these codes


are coupled together, each code can work as a standalone simulation tool. Be-

cause of the huge amount of time involved in the production of these random

nuclear data files, most of the random files used in this work were obtained

from the TENDL project [75]. However, random 208Pb and 206Pb files were

produced as part of this work as further described in section 4.2.

4.1.2 Processing codes

Between the ENDF formatted evaluated nuclear data and the users of nuclear

data are a set of data-processing codes. These codes prepare nuclear data from

the ENDF format to a variety of usable formats used in application codes [51].

Even though these codes are often overlooked, the accuracy of transport and

reactor calculations depend to a large extent on the assumptions and approxi-

mations introduced by these processing codes [51]. One widely used code is

the NJOY nuclear data processing code [68] which is used to convert ENDF

format files into useful forms for practical applications. To reflect the tem-

peratures in real systems, for instance, energy dependent cross sections have

to be reconstructed from resonance parameters, and then Doppler broadened

to defined temperatures. In this subsection the processing of the nuclear data

using the NJOY and PREPRO codes are presented.

NJOY processing code
The NJOY processing code [68] is used for preparing ENDF formatted nuclear

data into usable formats for use in deterministic and Monte Carlo transport

codes which are used for reactor calculations and analyses. In this work, the

NJOY code was used to process random ENDF files into the ACE format at

defined temperatures using the following module sequence:

MODER → RECONR → BROADR → UNRESR → HEATR → PURR → ACER    (4.1)

The MODER module was used to convert ENDF input data into NJOY blocked

binary mode. These data were then reconstructed into pointwise cross sections with the RECONR module and Doppler broadened to the defined temperatures using the BROADR module. The UNRESR

module is used to calculate effective self-shielded pointwise cross sections in

the unresolved resonance region while the HEATR module is used to gener-

ate pointwise heat production and radiation damage production cross sections.

PURR is used to prepare unresolved region probability tables mostly used by

MCNP and finally the ACER module converts the libraries into ACE format.

PREPRO code
Similar to the NJOY code, PREPRO is a collection of modular codes designed

to prepare data in the ENDF format into usable formats for applications [69].


In this work, the following module sequence was used:

LINEAR → RECENT → SIGMA1 → GROUPIE → FIXUP    (4.2)

The LINEAR module was used to convert ENDF random nuclear data files

into a linear-linear interpolable form. The RECENT module was used to

reconstruct the resonance contributions into a linear interpolable form. The

SIGMA1 module was used to Doppler broaden cross sections to defined tem-

peratures for use in applications while the GROUPIE module can be used to

calculate self-shielded cross sections and for collapsing pointwise ENDF files

into multigroup cross sections. In this work, the GROUPIE module was used

to calculate multigroup cross sections for random ENDF formatted nuclear

data for use in cross section-parameter correlation calculations discussed in

section 4.7. The FIXUP module was used for testing data formats from the

different modules for consistency.

4.1.3 Neutron transport codes

Neutron transport codes are used to simulate the transport of neutrons in mate-

rials. Some of these codes, such as SERPENT and MCNP, can also calculate

the criticality. Two transport codes were used in this work: the SERPENT

code version 1.1.17 [70] was used for all simulations involving the ELECTRA

reactor. For the benchmark cases, criticality calculations were performed us-

ing the MCNPX code version 2.5 [8]. These codes are briefly presented in this

subsection.

SERPENT Monte Carlo code
The 3-D continuous-energy Reactor Physics code Serpent (version 1.1.17) [70]

developed at VTT Technical Research Centre in Finland was used for simula-

tions in this work. Serpent is specialized in 2-D lattice physics calculations but

has the capability of modeling complicated 3-D geometries as well. It also has

a built-in burnup capability for reactor analyses. SERPENT uses the universe-

based geometrical modeling for describing two or three dimensional fuel and

reactor core configurations [70].

SERPENT utilizes the following types of data for simulations:

1. Continuous energy neutron data which are made up of reaction cross sec-

tions, energy and angular distributions, fission yields and delayed neu-

tron data used for transport calculations. For this thesis, the uncertainties

in these quantities are considered.

2. Thermal scattering data for moderator materials such as hydrogen bound

in light water (H2O), deuterium bound in heavy water (D2O) and carbon

in graphite, which are used to account for the chemical binding of the


moderator nuclei [70]. This data was not used in this work since only

fast systems were considered.

3. For burnup calculations, radioactive decay and fission yield data are used

in combination with data from (1). These data were only used in PAPER III, and were taken from the JEFF-3.1 nuclear data library. The uncertainties

in these data were not considered.

The SERPENT code was used in this work because the geometry input file for

the ELECTRA reactor was created using the SERPENT code. Furthermore,

SERPENT is an open source code and therefore access to the nuclear data

inputs used in the code was straightforward. The code is also well validated

with an in-built burnup capability.

MCNPX code
MCNPX, which stands for Monte Carlo N-Particle eXtended, is a continuous-

energy, general-purpose Monte Carlo radiation transport code used for track-

ing many particle types over a broad range of energies [8]. MCNPX is fully

three-dimensional. MCNPX was used for simulations for all the criticality

benchmarks used in this work. The MCNP code was used because it is a

robust and widely used code. Furthermore, the benchmarks obtained from

Ref. [21] came with corresponding MCNP geometry input files for criticality

calculations.

4.2 Model calculations with TALYS

A successful TMC uncertainty propagation relies on realistic central values

from the output of the TALYS based code system. These central values are

complete evaluations with their corresponding ’best’ input parameter sets. The

selection of these ’central values’ is guided by high-quality experimental data,

complemented by the nuclear model codes such as TALYS and the experience

of the evaluator [28, 75]. Once the central value is obtained, an uncertainty

band is defined for the model parameters using experimental data as a guide.

In Table 4.1, typical uncertainties of some of the model parameters for 208Pb

used in this work, are given as a fraction (%) of their absolute values.

The central value used in this work was provided by the TENDL team. The

newer versions of TALYS come with a directory known as ’best’ which con-

tains the ’best’ set of model input parameters per nuclide. This directory

contains, for example, adjusted optical model and level density parameters

and, therefore, saves the inexperienced user from conducting an entire evalua-

tion in the fast region. By adding best y to the TALYS input file, TALYS is

prompted to use the ’best’ parameter sets for calculation instead of the default


parameters. The uncertainties in Table 4.1 were derived taking differential ex-

perimental data into consideration. As mentioned earlier, this approach has

been criticized because experimental information was not included in a statis-

tically rigorous way. There is, however, on-going work with the objective of

incorporating experimental information including their correlations in a more

rigorous way [51, 76, 77]. The approach defined in [51] has been applied for

the production of nuclear data libraries in TENDL-2015 [78].

Table 4.1. Uncertainty of some nuclear model parameters of TALYS for 208Pb, given as a fraction (%) of the absolute value. A complete list of all the model parameters can be found in Ref. [26].

Parameter      Uncertainty (%)   Parameter      Uncertainty (%)
r_V^n          1.5               a_V^n          2.0
v_1^n          1.9               v_2^n          3.0
v_3^n          3.1               v_4^n          5.0
w_1^n          9.7               w_2^n          10.0
d_1^n          9.4               d_2^n          10.0
d_3^n          9.4               r_D^n          3.5
a_D^n          4.0               r_SO^n         9.7
a_SO^n         10.0              v_so1^n        5.0
v_so2^n        10.0              w_so1^n        20.0
w_so2^n        20.0              Γγ             5.0
a(207Pb)       4.5               a(206Pb)       6.5
a(208Pb)       5.0               a(205Pb)       6.5
σ2             19.0              M2             21.0
gπ(207Pb)      6.5               gν(207Pb)      6.5
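As a sketch of how such parameter uncertainties feed the random-file generation, the snippet below draws random parameter sets uniformly within the fractional uncertainties of a few Table 4.1 entries. The nominal values are placeholders, not the actual 'best' TALYS parameter set, and the uniform sampling is a simplified stand-in for what the TASMAN code does.

```python
import numpy as np

# Fractional uncertainties (in %) taken from Table 4.1 for a few 208Pb parameters;
# the nominal values below are hypothetical placeholders.
parameters = {              # name: (nominal value, uncertainty in %)
    "rv_n":   (1.22, 1.5),  # real central radius (fm), hypothetical nominal
    "av_n":   (0.66, 2.0),  # real central diffuseness (fm), hypothetical nominal
    "gamgam": (0.50, 5.0),  # average radiative width (eV), hypothetical nominal
}

def sample_parameter_set(params, rng):
    """Draw one random parameter set, sampling each parameter uniformly
    within +/- its fractional uncertainty (a uniform prior being one of the
    distribution shapes mentioned in the text)."""
    sample = {}
    for name, (nominal, unc_percent) in params.items():
        half_width = nominal * unc_percent / 100.0
        sample[name] = rng.uniform(nominal - half_width, nominal + half_width)
    return sample

rng = np.random.default_rng(4)
for _ in range(5):
    print(sample_parameter_set(parameters, rng))
```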

In this thesis, methods for the inclusion of integral experimental information

into the TMC methodology are presented. In PAPER IV, uncertainty reduc-

tion methods using benchmark information are presented. Also, a method for

combining differential and integral experimental data for nuclear data adjust-

ments is presented in sections 4.9 and 5.7.

A large set of random 208Pb and 206Pb nuclear data libraries were produced

using the TALYS based code system. The inputs to the TALYS based code

system were obtained by random sampling of nuclear model parameters from

a uniform distribution. The resonance parameters used were adopted from the

JEFF-3.1 library together with their corresponding background cross sections

found on MF3 using the TARES code. Uncertainties in the resonance region

used in this work are default uncertainties within the TARES code that repre-

sent the best effort of the code developer at the time [74]. As a final step, the

TEFAL code was used to translate the output from the TALYS and TARES

codes into the ENDF format. These files, after data checking, have been ac-

cepted by the TENDL team and can be obtained from TENDL-2014 [37]. The


random ENDF nuclear data produced were processed into the ACE format

using the NJOY processing code and used for reactor core calculations.

4.3 Nuclear data uncertainty calculation

In the following subsections, the application of the TMC methodology for

the computation of isotope specific global and reaction specific local (partial

TMC) uncertainties due to nuclear data is presented. For the variation of nu-

clear data, the one-at-a-time approach [79] was used. This was done by vary-

ing one isotope at a time while keeping all other isotopes at their JEFF-3.1 or ENDF/B-VII.0 library values. The one-isotope-at-a-time approach was used because the goal is to quantify and identify which nuclear data inputs have a significant impact on the uncertainty of the output.

4.3.1 Global uncertainty analyses

The TMC approach was used to assess the global effect of nuclear data un-

certainty (per isotope) on reactor parameters as discussed later in section 4.4.

Random files in ENDF format, produced using the TALYS based code system

are processed into the ACE format with the NJOY processing code for reactor

calculations. Each random nuclear data file contains a unique set of resonance

parameters (MF2), reaction cross-sections (MF3), angular and energy distri-

butions (MF4 and MF5 respectively), double differential distributions (MF6),

among others. A bash script is used to run the SERPENT code multiple times,

each time with a different nuclear data input file, and distributions in any reac-

tor quantity of interest can be obtained. The uncertainties inferred from these

distributions are therefore due to the global variation of nuclear data which

comes from the variation in model parameters.

In Fig. 4.1, a diagram showing the practical implementation of the Total Monte

Carlo method is presented. As can be seen in the diagram, the process can be

divided into three major sections:

1. Model calculations using the TALYS based code system which as men-

tioned earlier, involves comparing model calculations with differential

experimental data to obtain a specific uncertainty for each nuclear model

parameter. Subsequently, by running the TALYS based code system a

large number of times, each time with a different unique set of model

parameters, a set of random nuclear data files are obtained.

2. The nuclear data files generated from (1) are processed into usable for-

mats for neutron transport codes using the NJOY code. A bash script


Figure 4.1. Practical implementation of the TMC methodology for nuclear data uncertainty propagation. The method is composed of model calculations, nuclear data processing, reactor core simulations and statistical analyses. (The flowchart links the T6 code package, EXFOR data and resonance parameters, via a large set of random ENDF files from the TENDL project, through NJOY processing into ACE files, to SERPENT simulations using the JEFF-3.1 base library, with a feedback loop to the model calculations.)

is used to handle the interaction between the SERPENT code, the ran-

dom nuclear data files database and the base library. The base library is

the library used for the isotopes not under investigation. In the case of

the SERPENT calculations of ELECTRA, JEFF-3.1 was used as the base

library while in the case of criticality benchmarks, ENDF/B-VII.0 was

used.

3. Reactor calculations are performed using the processed random files

from (2) to obtain distributions in any macroscopic reactor parameter of interest. Statistical information is inferred from the probability distributions for the reactor parameters under consideration using statistical deductions. The information obtained is important, not only for

uncertainty and sensitivity analysis, but can be used to prioritize which

cross section data should be focused on for improvement of both model

calculations and experiments as shown in the feedback loop in Fig. 4.1.

The outputs from the different codes are handled automatically by a suite

of bash scripts.

As seen from the figure, uncertainties from basic nuclear physics are propa-

gated all the way to reactor parameters. Nuclear data uncertainties on some

neutronic parameters are presented in more detail in PAPERS I, II and IV.
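A minimal sketch of the driver loop behind steps (2) and (3) above. All script names, paths and output conventions here are hypothetical stand-ins for the bash scripts actually used in this work; only the structure of the loop, one NJOY processing step and one SERPENT run per random file, is illustrated.

```python
import subprocess
from pathlib import Path

RANDOM_ENDF_DIR = Path("random_endf")   # random ENDF files, e.g. from the TENDL project
keff_values = []

for i, endf_file in enumerate(sorted(RANDOM_ENDF_DIR.glob("*.endf"))):
    case = f"electra_{i:04d}"

    # 1) process the random ENDF file into ACE format with NJOY (hypothetical wrapper)
    subprocess.run(["./run_njoy.sh", str(endf_file), f"{case}.ace"], check=True)

    # 2) build a SERPENT input that uses this ACE file together with the JEFF-3.1
    #    base library for all other isotopes (hypothetical helper script)
    subprocess.run(["./make_serpent_input.sh", f"{case}.ace", case], check=True)

    # 3) run SERPENT; the helper is assumed to write "keff sigma_stat" to a text file
    subprocess.run(["./run_serpent.sh", case], check=True)
    keff, sigma_stat = map(float, Path(f"{case}.keff").read_text().split())
    keff_values.append((keff, sigma_stat))

# the resulting distribution of keff values is then analysed as in section 3.2
```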


4.3.2 Local uncertainty analyses - Partial TMC

In section 4.3.1, the methods for computing isotope specific global uncertain-

ties due to nuclear data for some reactor safety parameters were presented.

However, for the purpose of giving informed feedback to model calculations

and differential measurements, it is of interest to quantify the contributions of

different partial cross sections and parts of the ENDF file to the global uncer-

tainties obtained. To achieve this goal, perturbed random files were produced

by varying specific parts of the ENDF file while keeping other parts constant

for a large number of random nuclear data. To investigate the impact of only

resonance parameters on reactor parameters, for example, only MF2 was per-

turbed. This implies that each complete ENDF file after perturbation con-

tains a unique set of resonance parameters such as the scattering radius, the

average level spacing and the average reduced neutron width. For practical

implementation, the first file or central file (i.e., random file zero, produced or obtained from TENDL-2012 [75]) was kept as the unperturbed

file while different sections of the random ENDF files are perturbed and a

unique set of random files produced. By using this method of partial variation

(also called ’partial TMC’), the impact of uncertainties due to angular (MF4)

and energy (MF5) distributions, and double-differential (MF6) distributions

which are normally difficult to propagate using other uncertainty quantifica-

tion methods, can be quantified.

In Fig. 4.2, perturbed random ACE 208Pb cross sections are plotted as a func-

tion of incident neutron energy. In the top left and top right, the (n,el) and

(n,γ) cross sections are presented respectively, after perturbing only resonance

parameter data. As can be observed, the partial variation of only resonance pa-

rameters, affect both 208Pb(n,el) (top left) and 208Pb(n,γ) (top right) cross sec-

tions from thermal up to about 5 MeV and 1 MeV for the (n,el) and the (n,γ)

cross sections respectively. In the bottom left and bottom right of Fig. 4.2, the 208Pb(n,el) and 208Pb(n,γ) are presented for the partial variation of the (n,el)

cross section in the fast energy range respectively. A spread is observed in the

fast region for the partial variation of 208Pb(n,el) cross section (bottom left) as

can be observed from Fig. 4.2. Since the results in the fast energy region were

obtained with the TALYS code, the spread can be attributed to the variation

of model parameters within the TALYS code. The lack of spread observed for

the (n,γ) is expected as the variation of the (n,el) cross section has no impact

on the (n,γ) cross section.

All the perturbed random files were processed into ACE files with the NJOY

processing code at 600 K and used in the SERPENT code for reactor core

calculations to obtain distributions in any reactor quantity of interest. The

variance of the response parameter (reactor quantity of interest) due to the

partial variation (σ²_(n,xn),obs) can be expressed as:

$$\sigma^2_{(n,xn),obs} = \sigma^2_{(n,xn),ND} + \sigma^2_{stat} \qquad (4.3)$$


where σ²_(n,xn),obs is the variance of the observable due to the partial variation, σ²_stat is the mean value of the variance due to statistics and σ²_(n,xn),ND is the variance due to nuclear data as a result of the partial variation, with (n,xn) = (n,γ), (n,el), (n,inl), (n,2n), resonance parameters or angular distributions. In this way,

nuclear data uncertainties due to specific reaction channels or specific parts of

the ENDF files were studied and quantified. A detailed presentation of this

method and its application to lead isotopes has been presented in PAPER II.

In principle, the impact of energy groups on macroscopic reactor parameters

for different cross sections can be studied.
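A sketch of how Eq. 4.3 can be applied in practice to compare the contributions of different partial variations; the per-channel keff distributions below are synthetic and only illustrate the variance subtraction.

```python
import numpy as np

def partial_nd_uncertainty(keff_samples, sigma_stat):
    """sigma_(n,xn),ND from Eq. 4.3: subtract the mean statistical variance
    from the observed variance of the partial-variation keff distribution."""
    var_obs = np.var(keff_samples, ddof=1)
    return np.sqrt(max(var_obs - np.mean(np.square(sigma_stat)), 0.0))

# hypothetical keff spreads from perturbing one part of the ENDF file at a time
rng = np.random.default_rng(5)
channels = {"MF2 (resonances)": 150e-5, "(n,el)": 300e-5,
            "(n,inl)": 500e-5, "(n,gamma)": 100e-5}
stat = 50e-5   # per-run statistical sigma
for name, sigma_nd in channels.items():
    keff = 1.0 + rng.normal(0.0, sigma_nd, 300) + rng.normal(0.0, stat, 300)
    print(f"{name:20s} sigma_ND ~ {1e5 * partial_nd_uncertainty(keff, stat):.0f} pcm")
```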

Figure 4.2. Random ACE 208Pb cross sections (in mb) plotted as a function of incident neutron energy for varying only resonance parameter data (two top panels) and then only elastic scattering cross sections (two bottom panels). For the top left, 208Pb(n,el), and top right, 208Pb(n,γ), only MF2 (resonance parameters) was varied, while for the bottom left, 208Pb(n,el), and bottom right, 208Pb(n,γ), only the elastic scattering cross sections in the fast energy range were varied.

4.4 Reactor Physics

In the following subsections, the description of the ELECTRA reactor is presented

together with the propagation of nuclear data uncertainties for some macro-

scopic reactor parameters using the TMC method. The impact of nuclear data


uncertainties on the multiplication factor and the coolant temperature coeffi-

cient is presented.

4.4.1 Reactor description

ELECTRA, the European Lead-Cooled Training Reactor, is a conceptual 0.5 MW lead-cooled reactor fuelled with (Pu,Zr)N, with an estimated average neutron flux at beginning of life (BOL) of 6.3×10¹³ n/cm²s [6]. The fuel com-

position was chosen such that the Pu vector resembles a typical spent fuel of

a pressurized water reactor UOX fuel with a burnup of 43 GWd/tonne, which

was allowed to cool for four years before reprocessing with an additional two

years storage before loading into the ELECTRA core.

Figure 4.3. Radial view of the ELECTRA core showing the hexagonal fuel assembly

in the center made up of 397 fuel rods, the lead coolant (pink), the control assembly

showing the six rotating control drums around the fuel assembly with the control rods

fully inserted.

The extra storage time after reprocessing gives the initial fuel vector realistic

levels of Am, which is a product from beta decay of 241Pu [6]. The fuel com-

position is made up of 60% mol of ZrN and 40% mol of PuN. ELECTRA is

cooled by pure lead. Fig. 4.3 shows the radial configurations of the ELECTRA

core. The objective is to achieve a 100 % heat removal via natural convection

while ensuring enough power density to keep the coolant in a liquid state. The

core is hexagonally shaped with an active core height of 30 cm and consists of

397 fuel rods. Reactivity compensation is achieved by the rotation of absorb-

ing drums made up of B4C enriched to 90% in 10B [6]. Because of the hard


spectrum, ELECTRA has a relatively small negative Doppler constant, how-

ever, the presence of a large negative coolant temperature coefficient makes

temperature control a possible way of managing reactivity transients.

In Fig. 4.4, the neutron flux spectrum in the fuel as a function of neutron en-

ergy using the SERPENT code is presented. The neutron flux in the fuel was

estimated by defining a detector within the fuel material with user defined en-

ergy boundaries from 1e-5 to 20 MeV. As seen in the figure, the peak of the

spectrum occurs at about 700 keV. The relatively hard spectrum allows for an

efficient use of both the fissile and fertile isotopes within the ELECTRA core.

A detailed description of the reactor has been presented in Ref [6].

Figure 4.4. Neutron flux per lethargy in the fuel against neutron energy. The flux was normalized with the total flux. The peak of the spectrum occurs at about 700 keV.

4.4.2 Reactor neutronic parameters

The propagation of nuclear data uncertainties to reactor neutronic parameters

such as the keff and the coolant temperature coefficient is presented in this

subsection. Each quantity of interest is evaluated by simulating the reference

core or perturbed configurations and the criticality change was determined

while varying nuclear data for the isotope of interest. The JEFF-3.1 nuclear

data library was used as the base library. The reference temperature of the

fuel and coolant were 1200 K and 600 K respectively. More details have been

presented in PAPER II.

Neutron multiplication factor (keff)
To quantify the uncertainty in keff due to uncertainties in nuclear data (ND),

ND for specific isotopes were varied while computing the keff each time,

and the corresponding probability distributions were obtained. From these distributions,

corresponding mean values and standard deviations were determined. Using

Eq. 3.11, the uncertainty due to nuclear data was extracted. In Paper II, the


results for the variation of 204,206,207,208Pb nuclear data for the keff and other

safety parameters are presented in more detail. More results on the nuclear

data uncertainty of some actinides and structural materials are presented in

PAPER I, IV and in Chapter 5.

Coolant Temperature Coefficient (CTC)
The impact of nuclear data uncertainty on the CTC was quantified by per-

forming criticality calculations with the SERPENT Monte Carlo code (version

1.1.17) [80] at two different coolant densities corresponding to the tempera-

tures T1 = 600 K and T2 = 1800 K and varying lead nuclear data each time

while keeping all other isotopes as the JEFF-3.1 library. Since the density ef-

fect is dominant in the CTC, all lead cross sections used in the calculation of

the CTC were processed with the NJOY99.336 code at 600 K. Since the keff is

≈ 1 for both configurations, the CTC for a temperature change from T1 to T2

can be expressed as:

$$CTC = \frac{k_{eff}(T_1) - k_{eff}(T_2)}{T_1 - T_2} \qquad (4.4)$$

The nuclear data uncertainty in the CTC is propagated here similar to equa-

tion 3.11. If the statistical uncertainties on the keff at T1 and T2 are σ_stat,T1 and σ_stat,T2 respectively, the combined statistical uncertainty (σ_stat,comb) for

the computation of CTC assuming that the statistical errors at T1 and T2 are

uncorrelated, was computed using:

$$\sigma^2_{stat,comb} = \sigma^2_{stat,T_1} + \sigma^2_{stat,T_2} \qquad (4.5)$$

From the square of the total spread (σobs) of the CTC distribution which is

equal to the quadratic sum of the nuclear data uncertainty (σND) and the com-

bined statistical uncertainty (σstat,comb), the uncertainty due to nuclear data can

be extracted as follows:

$$\sigma_{ND} = \left[\sigma^2_{obs} - \sigma^2_{stat,comb}\right]^{1/2} \qquad (4.6)$$

Since the difference between keff(T1) and keff(T2) is usually small, the CTC distribution can easily be dominated by statistics; hence, longer computing time is needed in the Monte Carlo simulations to obtain a statistical uncertainty that is as small as possible. The usual rule of thumb used for fast TMC is σ_stat ≤ 0.5·σ_obs [55].

More information on uncertainty propagation for other reactor parameters

such as the coolant void worth and the delayed neutron fraction, can be found

in PAPER II.
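A minimal sketch of Eqs. 4.4-4.6, assuming paired keff samples (one pair per random nuclear data file) at the two coolant temperatures; the values are synthetic and the resulting CTC is purely illustrative.

```python
import numpy as np

def ctc_nd_uncertainty(keff_T1, keff_T2, stat_T1, stat_T2, T1=600.0, T2=1800.0):
    """Coolant temperature coefficient (Eq. 4.4) and its nuclear data uncertainty.
    One CTC value is formed per random file; the combined statistical variance
    (Eq. 4.5) is subtracted from the observed variance as in Eq. 4.6."""
    keff_T1 = np.asarray(keff_T1, dtype=float)
    keff_T2 = np.asarray(keff_T2, dtype=float)
    ctc = (keff_T1 - keff_T2) / (T1 - T2)
    var_stat_keff = np.mean(np.square(stat_T1)) + np.mean(np.square(stat_T2))
    var_stat_ctc = var_stat_keff / (T1 - T2) ** 2
    sigma_nd = np.sqrt(max(np.var(ctc, ddof=1) - var_stat_ctc, 0.0))
    return ctc.mean(), sigma_nd

# hypothetical paired keff samples at 600 K and 1800 K coolant conditions
rng = np.random.default_rng(6)
k600 = rng.normal(1.0050, 0.0020, 300)
k1800 = k600 - 0.0040 + rng.normal(0.0, 0.0004, 300)   # about -0.3 pcm/K, illustrative
print(ctc_nd_uncertainty(k600, k1800, 0.0003, 0.0003))
```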

4.4.3 Convergence of the first 4 moments

Convergence and consistency verifications of cross-section probability distri-

butions are important in the TMC method to ensure that the final keff converges


and that enough random nuclear data libraries are used. Therefore, the con-

vergence of the keff distributions obtained were determined by computing the

first four moments of the distribution as a function of the random sampling of

nuclear data. In Fig. 4.5, an illustration of the convergence of the keff obtained

in the case of varying 239Pu nuclear data is presented. From the figure, small

fluctuations can be observed in the convergence toward the final keff values

(top left), the associated standard deviation σ(keff) (top right), the skewness

(bottom left) and the kurtosis (bottom right) as a function of sample numbers.

Figure 4.5. An illustration of convergence for the keff in the case of varying 239Pu nuclear data for the pmf5c1 (case 1) benchmark. The first four moments of the distribution are presented as a function of the number of random files: the mean (top left), the standard deviation σ(keff) in pcm (top right), the skewness (bottom left) and the kurtosis (bottom right).

Another approach is to compute the uncertainty of the uncertainty in nuclear

data for each isotope and for each reactor response parameter as presented in

Ref. [56]. This approach has also been used in PAPERS II and IV.
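A sketch of the running-moment check used for such convergence plots, applied to a synthetic keff sample; real input would be the keff values obtained from the random files.

```python
import numpy as np

def running_moments(samples, start=10):
    """First four moments of the keff distribution as a function of the number of
    random files, used to judge convergence as in Fig. 4.5."""
    samples = np.asarray(samples, dtype=float)
    out = []
    for n in range(start, len(samples) + 1):
        g = samples[:n]
        m, s = g.mean(), g.std(ddof=1)
        skew = np.mean((g - m) ** 3) / s ** 3
        kurt = np.mean((g - m) ** 4) / s ** 4
        out.append((n, m, s, skew, kurt))
    return out

# hypothetical keff samples from varying 239Pu nuclear data
keff = np.random.default_rng(7).normal(1.0010, 0.0025, 250)
for n, m, s, skew, kurt in running_moments(keff)[::60]:
    print(f"n={n:3d}  mean={m:.5f}  sigma={1e5*s:.0f} pcm  skew={skew:+.2f}  kurt={kurt:.2f}")
```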

4.5 Nuclear data uncertainty reduction

Even though information on differential measurements together with their un-

certainties is included (implicitly) in the production of random files in the

TMC methodology, wide spreads have been observed in parameter distribu-

tions (known here as our ’prior distribution’) leading to large uncertainties in


reactor parameters for some nuclides for the European Lead-Cooled Training

Reactor (PAPER I and II). To meet integral uncertainty requirements, the

present nuclear data uncertainties for reactor systems must be reduced signif-

icantly [7, 11]. This can be achieved by comparing the random files against

the results from integral experiments (benchmarks). Using benchmark infor-

mation as a constraint, weights are assigned to the nuclear data libraries de-

pending on their agreement with the benchmark observable (e.g., keff). The

benchmark observable could vary depending on the type of benchmark. In

this work, benchmarks from the ICSBEP Handbook [21] were used, and consequently only the keff was used as the benchmark observable. The weighted nu-

clear data libraries are used for a specific reactor system, to reduce the nuclear

data uncertainty in a reactor response parameter such as the keff. A binary

accept/reject method and a method of assigning file weights based on the like-

lihood function for nuclear data uncertainty reduction are presented in this

section, with more details in PAPER IV.

4.5.1 Binary Accept/Reject method

It was demonstrated earlier in PAPER I that, by setting a more stringent cri-

terion for accepting random files based on integral benchmark information,

nuclear data uncertainty could be reduced further. In PAPER I, however,

arbitrary limits were set on accepting random files using criticality bench-

marks without including the evaluated benchmark uncertainty information.

The method was improved by including benchmark uncertainty information in

the computation of an acceptance interval which is proportional to the bench-

mark uncertainty, as presented in PAPER IV. In the paper, an acceptance in-

terval (F_E) directly proportional to the combined benchmark uncertainty (σ_B,j) was used:

$$F_E \propto \sigma_{B,j} \qquad (4.7)$$

The combined benchmark uncertainty is presented and discussed in more de-

tail in section 4.5.3. By introducing a proportionality constant κ which defines

the magnitude of the spread and given as the inverse of the Pearson correla-

tion coefficient (R) computed between the benchmark and the application case,

Eq. 4.7 becomes:

$$F_E = \kappa\,\sigma_{B,j} \qquad (4.8)$$

where κ is expressed as:

$$\kappa = \frac{1}{|R|} \qquad (4.9)$$

In theory, κ could have been set to 1 for all benchmarks. However, by letting

κ > 1, a more conservative acceptance band is obtained and, in practice, less weight is given to benchmarks with weak correlation to the application case. In Fig. 4.6, a correlation

plot example between the application case (ELECTRA) and the 239Pu Jezebel


benchmark is presented showing the evaluated benchmark keff value and the

corresponding acceptance band (FE). For the practical implementation of the

Figure 4.6. keff correlation plot between ELECTRA and the 239Pu Jezebel benchmark showing the acceptance band (F_E). A correlation coefficient of R = 0.84 was obtained, signifying a strong linear relationship between the two systems.

For the practical implementation of the binary accept/reject method, the following
is considered: given that i denotes the random files (random nuclear data) and
keff(i) is the calculated benchmark keff for the random file i, the maximum accepted
value of keff(i) is kBeff,exp + FE and the minimum value is kBeff,exp − FE, where
kBeff,exp is the evaluated experimental benchmark value. Hence an acceptance range
is defined as kBeff,exp − FE ≤ keff(i) ≤ kBeff,exp + FE, and any random file i that
falls within this range is accepted and therefore assigned a binary value of one,
while those that do not meet this criterion take a binary value of zero and are
therefore rejected.

Using only the accepted files for an application, a posterior distribution in a pa-

rameter of interest, e.g., keff can be obtained. The posterior distribution should

normally be narrower in spread than the prior distribution.
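To make the procedure concrete, a minimal sketch of the binary accept/reject step is given below (Python with NumPy). All arrays and numerical values are illustrative placeholders; in practice the benchmark and application keff values would come from the transport calculations described above.

```python
import numpy as np

rng = np.random.default_rng(1)
k_bench = rng.normal(1.000, 0.006, size=500)        # benchmark keff per random file i (placeholder)
k_app = k_bench + rng.normal(0.0, 0.002, size=500)  # application (ELECTRA) keff per file (placeholder)
k_exp, sigma_B = 1.0000, 0.0020                     # evaluated benchmark value and sigma_B,j (placeholders)

R = np.corrcoef(k_bench, k_app)[0, 1]               # correlation between benchmark and application case
F_E = sigma_B / abs(R)                              # acceptance interval, Eqs. 4.8-4.9
accepted = np.abs(k_bench - k_exp) <= F_E           # binary weight: 1 inside the band, 0 outside

prior_std = k_app.std(ddof=1)
posterior_std = k_app[accepted].std(ddof=1)         # spread of the accepted (posterior) files only
print(f"{accepted.sum()} files accepted; prior std {prior_std:.5f}, posterior std {posterior_std:.5f}")
```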

4.5.2 Reducing uncertainty using file weights

A more rigorous method is to base the uncertainty reduction on the likeli-

hood function using Bayesian theory. According to Bayes' rule, the posterior

distribution which is the distribution of parameters after taking into account

new data, is proportional to the likelihood function multiplied by the prior

distribution. The prior represents the distribution of parameters before the in-

clusion of new information [32]. Calibration of nuclear data using differential

information has been performed by many authors (Refs. [28, 52, 51, 81]). In

Ref. [52], file weights proportional to the likelihood function were assigned


to the TENDL random nuclear data files depending on how well they agreed

with differential cross-sections:

wi = exp(−χ²i / 2) / exp(−χ²min / 2) (4.10)

where i is the random file number, wi is the weight for the random file i. Ex-

perimental uncertainties and their correlations were included by computing

a generalized χ2i which takes into consideration the differential experimental

covariance matrix and their correlations for the random file i. The method was

also used in this thesis, as discussed in section 4.9.2.

A similar approach is applied to nuclear data uncertainty reduction for reactor

safety parameters by introducing integral benchmark experimental informa-

tion as an additional constraint in the Total Monte Carlo chain. Using TENDL

random nuclear data libraries as our prior, file weights are assigned to each

random file depending on their quality with respect to a benchmark value us-

ing a modified likelihood function. Similar to the binary accept/reject case,

the correlation between the benchmark and the application case is taken into

account by introducing the Pearson correlation (R) as presented in Eq. 4.11:

wi,j = exp(−χ²i |R| / 2) / exp(−χ²min |R| / 2) (4.11)

where χ2 is expressed by:

χ²i,j = (kBeff(i) − kBeff,exp)² / σ²B,j (4.12)

where kBeff(i) is the calculated benchmark value for the ith random file and
kBeff,exp is the evaluated experimental benchmark value; σ²B,j, the combined
benchmark uncertainty for benchmark B and the jth isotope whose nuclear data
uncertainty is being reduced, is described in the next section in Eq. 4.13.
Introducing the correlation coefficient (R) into Eq. 4.11 is a compromise between
acquiring good accuracy and still preserving random files in the case where there
is a weak correlation between the systems. After the computation of the weights,
the weighted moments of the distributions can be calculated. More details on
uncertainty reduction using file weights are presented in PAPER IV.
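A corresponding sketch of the weighting scheme of Eqs. 4.11 and 4.12, including the weighted moments, is given below; the arrays are the same illustrative placeholders as in the accept/reject example and do not represent the production scripts used in this work.

```python
import numpy as np

rng = np.random.default_rng(1)
k_bench = rng.normal(1.000, 0.006, size=500)     # benchmark keff per random file i (placeholder)
k_app = k_bench + rng.normal(0.0, 0.002, 500)    # application (ELECTRA) keff per file (placeholder)
k_exp, sigma_B = 1.0000, 0.0020                  # evaluated benchmark value and sigma_B,j (placeholders)
R = np.corrcoef(k_bench, k_app)[0, 1]

chi2 = (k_bench - k_exp) ** 2 / sigma_B ** 2     # Eq. 4.12, one value per random file
w = np.exp(-0.5 * chi2 * abs(R)) / np.exp(-0.5 * chi2.min() * abs(R))  # Eq. 4.11, relative to the best file

mean_post = np.average(k_app, weights=w)         # weighted moments of the application-case distribution
var_post = np.average((k_app - mean_post) ** 2, weights=w)
print(f"prior std {k_app.std(ddof=1):.5f}, posterior std {np.sqrt(var_post):.5f}")
```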

4.5.3 Combined benchmark uncertainty

The evaluated benchmark uncertainty normally contains information on uncer-

tainties in geometry, material compositions and experimental setup. However,


nuclear data uncertainties are not taken into account in the evaluated bench-

mark uncertainty. Uncertainties from geometrical modeling of the benchmark

in, e.g., MCNPX, the calculation bias, the uncertainties from statistics (in the

case of a Monte Carlo code) and the uncertainties in nuclear data have an

impact on the calculation of the response parameter of the benchmark. To

use a benchmark to reduce uncertainties, all these uncertainties can be taken

into account by computing a combined benchmark uncertainty for a particular

benchmark (B) and isotope (j), given as:

σ²B,j = σ²E + σ²C,j (4.13)

where σC,j, the uncertainty in the calculation, which takes into account the un-

certainties in nuclear data of all isotopes within the benchmark other than the

isotope whose uncertainty is being reduced, the uncertainties from geometri-

cal modeling, the computational bias and the uncertainties from statistics, is

expressed as:

σ²C,j = Σ (over all p, p ≠ j) σ²ND,p + σ²calc,bias + σ²geo,mod + σ²stat (4.14)

where p is the index for the different isotopes contained in the benchmark,
σND,p is the nuclear data uncertainty of the benchmark for the pth isotope,
and j is the isotope whose nuclear data uncertainty is being reduced; σcalc,bias,
the computational bias, takes into account the uncertainty from the numerical
methods used to solve the transport equation; σgeo,mod is the geometrical
modeling uncertainty; and σstat is the statistical uncertainty in the case where a
Monte Carlo code is used. More details have been presented in PAPER IV. It can
be noted that in this work, the nuclear data uncertainties for the benchmarks, for
the different isotopes, are computed based on the uncertainties contained in the
TENDL library, using the TMC method.
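A minimal sketch of Eqs. 4.13 and 4.14 is given below; the isotope list and all uncertainty values (in pcm) are placeholders chosen only for illustration, not values from this work.

```python
import math

sigma_E = 200.0                                                  # evaluated benchmark uncertainty (placeholder)
sigma_ND = {"235U": 600.0, "238U": 250.0, "208Pb": 900.0}        # per-isotope ND uncertainty (placeholders)
sigma_calc_bias, sigma_geo_mod, sigma_stat = 50.0, 40.0, 30.0    # other terms of Eq. 4.14 (placeholders)

def combined_uncertainty(j):
    """sigma_B,j: quadratic sum of all terms except the ND term of isotope j (Eqs. 4.13-4.14)."""
    var_C = sum(s ** 2 for iso, s in sigma_ND.items() if iso != j)
    var_C += sigma_calc_bias ** 2 + sigma_geo_mod ** 2 + sigma_stat ** 2
    return math.sqrt(sigma_E ** 2 + var_C)

print(f"sigma_B,208Pb = {combined_uncertainty('208Pb'):.0f} pcm")
```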

4.6 Benchmark selection method

In PAPER I, a method for selecting benchmarks for reactor code validation,

based on the Total Monte Carlo method is proposed. Such a method is useful

for the code user to choose which benchmark could be used when validating

the reactor code for a particular application. This method has been described

in more detail in PAPER V.

A flow diagram that illustrates the benchmark selection method is presented

in Fig. 4.7. The method involves first, the production of random nuclear data

libraries. It must be noted that, even though the random nuclear data used in

this work were obtained based on the TMC method [26], other approaches also

exist for random nuclear data production as presented earlier in section 3.3.2.


[Figure 4.7 flowchart steps: (1) random nuclear data (ND) libraries for isotope j (TENDL project); (2) process random ND with, e.g., NJOY; (3a) perform a transport calculation with the random ND files for the reactor application (application case); (3b) perform a transport calculation with the random ND files for the benchmark B (benchmark case); (4a) is SB > 0.3?; (4b) if yes, reactor calculation and code validation with benchmark B.]

Figure 4.7. Flow chart diagram for the benchmark selection process. Random nuclear

data files obtained from the TENDL project for the isotope j, are processed into the

ACE format and used for calculations with the benchmark and the application case.

Similarities between benchmarks and application cases are quantified using a similar-

ity index (SB). As a rule of thumb, a limit of SB > 0.3 is set for the similarity index

(the limit has been chosen arbitrarily).

Using the same processed random files, reactor simulations are performed for

the benchmark under consideration and an application case. The application

case is defined as the reactor system under consideration - for this, a model

of the system with full geometry, concentrations, isotopic compositions is

required. The benchmark case can be either a criticality, reactor physics or

shielding benchmark which are available in various handbooks such as the

ICSBEP [21], IRPHE [23] and SINBAD [24]. In this work, however, only

criticality benchmarks were used. From this, the Pearson correlation coeffi-

cient can be computed between the application case and one or several bench-

marks. The computation of the correlation coefficient is the first step towards

the computation of the similarity index.

The goal is to identify benchmarks with similarity in neutron spectrum, iso-

topic composition and consequently a similarity in reaction rates with a reactor

application system. To include the contribution from both the isotopic compo-


sition and neutron spectrum, a Similarity Index (SB) expressed as the product

of the Pearson correlation coefficient (R) and the Variance Ratio (VR), com-
puted as the ratio of the variance of the benchmark to the variance of the
application case for a particular varied isotope, is proposed.

Consider two sequences of reactor calculations, X = (xi : i = 1, ..., n)
and Y = (yi : i = 1, ..., n), where X and Y are collections of reactor response
parameter values computed for the application case and the benchmark,
respectively, with n random files. The similarity index (SB), which is a measure
that quantifies the relationship between X and Y, is expressed as:

SB = R × (Var(Y) / Var(X)) for Var(Y) ≤ Var(X) (4.15)

where Var(Y), the variance of the benchmark due to the variation of nuclear

data of the isotope under consideration, is used here as a measure of the impor-

tance of that isotope in contributing to the overall uncertainty in the response

parameter such as the keff for example. The ratio between Var(Y) and Var(X)

is used for comparing the sensitivities between the application and the bench-

mark to the variation of the isotope of interest. A high value signifies similar

sensitivity to the variation of a particular isotope for both the application and

the benchmark case. Eq. 4.15 is valid for Var(Y) ≤ Var(X). In a case where

Var(Y) ≥ Var(X), Eq. 4.15 is modified as follows:

SB = R × (Var(X) / Var(Y)) for Var(Y) ≥ Var(X) (4.16)

The similarity index is interpreted as a measure that quantifies the similarity

between the two systems and its value is given between +1 and -1. For inter-

preting SB, we propose the following:

1. Very strong similarity: 0.7 ≤ SB ≤ 1.00

2. Strong similarity: 0.5 ≤ SB ≤ 0.69

3. Weak similarity: 0.2 ≤ SB ≤ 0.49

4. Very weak similarity: SB ≤ 0.19

It should be noted that the ranges presented have been chosen arbitrarily. A

high SB computed between the application case and the benchmark case sig-

nifies a strong similarity between the two systems for the particular isotope of

interest and thus, the benchmark under consideration can be selected as a good

representation of the reactor system under investigation for the validation of

reactor codes using the nuclear data of that particular isotope as input.
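The computation of SB can be sketched as follows (Python with NumPy); X and Y are synthetic stand-ins for the response parameter values obtained with the same n random files for the application case and a benchmark.

```python
import numpy as np

def similarity_index(X, Y):
    """Similarity index of Eqs. 4.15-4.16: Pearson correlation times the variance ratio."""
    R = np.corrcoef(X, Y)[0, 1]
    vx, vy = np.var(X, ddof=1), np.var(Y, ddof=1)
    ratio = vy / vx if vy <= vx else vx / vy          # always the smaller variance over the larger one
    return R * ratio

rng = np.random.default_rng(2)
X = rng.normal(1.000, 0.007, 300)                     # application case, e.g. ELECTRA (placeholder)
Y = 0.9 * (X - 1.0) + 1.0 + rng.normal(0, 0.002, 300) # benchmark with similar sensitivity (placeholder)
print(f"S_B = {similarity_index(X, Y):.2f}")
```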

4.6.1 Benchmark cases

Criticality safety benchmarks used in this work were taken from the Inter-

national Handbook of Evaluated Criticality Safety Benchmark Experiments


(ICSBEP) [21]. The benchmarks used in this work cover four categories: the

Plutonium Metallic Fast (pmf), Plutonium Metallic Intermediate (pmi), Highly

enriched uranium Metallic Fast (hmf) and Low enriched uranium Compound

Thermal (lct) systems. Each benchmark comes with an evaluated benchmark

keff with a corresponding evaluated benchmark uncertainty.

4.7 Correlation based sensitivity analysis

To determine the sensitivity of benchmarks to various partial cross sections

for particular isotopes and energy groups, a correlation based sensitivity anal-

ysis was performed. This is also important for understanding the underlying

physics implemented in reactor simulation codes and for nuclear data adjust-

ments. In Fig. 4.8, the correlation based Monte Carlo sensitivity method is

presented in a flowchart.

[Figure 4.8 flowchart: random files from the TENDL project are processed with the PREPRO code — cross sections are reconstructed from resonance parameters (module RECENT), linearized (module LINEAR), Doppler broadened (module SIGMA1) and group averaged (module GROUPIE) — before (cross section, parameter) correlation computations are performed in the Monte Carlo sensitivity analysis.]

Figure 4.8. A flowchart depicting the correlation based Monte Carlo sensitivity

method. Correlation coefficients are computed between random cross sections av-

eraged over an energy group and any reactor response parameter of interest.

Given a set of n random files, Pearson correlation coefficients are computed

between a reactor response parameter such as the keff and a particular par-

tial cross section averaged over a specific energy group. Random files ob-

tained from the TENDL project were first linearized and reconstructed from


resonance parameters using the LINEAR and RECENT modules of the PRE-

PRO processing code respectively [69]. The cross sections were then Doppler

broadened and collapsed into 187 energy groups (from thermal energy to 20

MeV) using the SIGMA1 and the GROUPIE modules of the PREPRO code

respectively. Correlation coefficients were computed between the cross sec-

tion for each energy group and the reactor response parameter in the 0 to 20
MeV energy range for different reaction channels. These correlations are

interpreted as representing the relationship between a particular partial cross

section and a reactor response parameter of interest. A high positive correla-

tion coefficient signifies a strong sensitivity/importance of a particular partial

cross section to the variance of a particular response parameter for a designated

incident energy group. The method was utilized and presented in PAPER II and in section 5.6 for the correlation based sensitivity analysis of ELECTRA.
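A minimal sketch of this correlation computation is given below; the group-averaged cross sections and keff values are synthetic stand-ins for the PREPRO and transport-code outputs described above.

```python
import numpy as np

rng = np.random.default_rng(3)
n_files, n_groups = 300, 187
xs_group = rng.normal(1.0, 0.05, size=(n_files, n_groups))            # group-averaged partial cross sections (placeholders)
keff = 1.0 + 0.02 * (xs_group[:, 120] - 1.0) + rng.normal(0, 0.001, n_files)  # response parameter (placeholder)

# Pearson correlation between each energy group and the response parameter
corr = np.array([np.corrcoef(xs_group[:, g], keff)[0, 1] for g in range(n_groups)])
g_max = int(np.argmax(np.abs(corr)))
print(f"strongest correlation {corr[g_max]:+.2f} in energy group {g_max}")
```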

4.8 Burnup calculations

In steady state calculations as presented in section 4.4, the total uncertainty

in a physical observable was estimated as the sum of the statistical uncer-

tainty from the Monte Carlo code and the uncertainty due to nuclear data.

However, statistical uncertainties are not propagated for all quantities in bur-

nup calculations. Therefore, to propagate nuclear data uncertainties in bur-

nup calculations, we performed two sets of transport calculations with the

SERPENT code: (1) SERPENT calculations with constant nuclear data and

random seeds and (2) SERPENT calculations with random nuclear data and

random seeds. In Fig. 4.9, keff distributions for simulations with constant nu-

clear data (light bars) and simulations with random 239Pu nuclear data (blue)

are presented.


Figure 4.9. keff distributions for two sets of simulations: The blue bars represent the

740 random files, where each simulation has unique 239Pu nuclear data and the light

bars represent 700 simulations with unique seeds but constant nuclear data.

From (1) above, since it was only the pseudo random numbers of the code


that were changed, the uncertainty obtained is a result of statistics. In this

work, the TMC method was used to estimate decay heat and inventory uncer-

tainties due to the variation of 239Pu transport data as a function of time and

presented in more detail in PAPER III. It was observed that, by using criti-

cality benchmarks as presented in PAPER I to select acceptable nuclear data

files for reactor burnup calculations, inventory uncertainties could be reduced

further.
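Assuming that the statistical and nuclear data contributions add quadratically, as in the steady state case, the separation of the two contributions from the two sets of runs can be sketched as follows; the arrays are synthetic stand-ins for, e.g., keff or a nuclide inventory at one burnup step, not results from PAPER III.

```python
import numpy as np

rng = np.random.default_rng(4)
obs_seed_only = rng.normal(0.915, 0.0005, 700)     # set (1): constant nuclear data, random seeds (placeholder)
obs_random_nd = rng.normal(0.915, 0.0030, 740)     # set (2): random 239Pu nuclear data, random seeds (placeholder)

var_stat = np.var(obs_seed_only, ddof=1)
var_total = np.var(obs_random_nd, ddof=1)
sigma_nd = np.sqrt(max(var_total - var_stat, 0.0)) # ND uncertainty by quadratic subtraction (assumption)
print(f"sigma_stat = {np.sqrt(var_stat):.5f}, sigma_ND = {sigma_nd:.5f}")
```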

4.9 Nuclear data adjustments

As mentioned earlier, the evaluation process in the past was done by using dif-

ferential experimental data which was then complemented with nuclear model

calculations [26]; post adjustments to fit a set of integral measurements were

treated as a separate entity from the evaluation process [82]. In Ref. [83], a

method to couple differential and integral data for the evaluation and adjust-

ment of resonance parameters was proposed and utilized. Feedback from
integral benchmark experiments was used to adjust resonance parameters in
the resolved resonance region. In Refs. [82, 84], a method that recombines
the evaluation, which uses differential data implicitly, with a posteriori adjustments
to integral measurements, referred to as the 'Petten method', was developed

and utilized. A random search for the best possible nuclear data file which

contains the ’best’ possible combination of nuclear model parameters is per-

formed based on distance minimization. In this way, a file which gives a good

description of integral benchmark data while implicitly taking differential data
into consideration can be identified. In Ref. [82], similar to what is
proposed in section 4.9.2, both differential and integral data were combined
for the evaluation and adjustment of neutron induced reactions of 63,65Cu. In

Refs. [82, 84], however, experimental data were not handled in a statistically

rigorous way.

In this work, an approach for combining differential and integral experimental

data for nuclear data adjustments using file weights based on the likelihood

function is proposed. Once the ’best’ file is identified, it follows the normal

course of events from data checking to data testing and validation to the final

product (an evaluated nuclear data file usually in ENDF format). A flowchart

showing the proposed method of combining differential and integral bench-

mark data in nuclear data evaluation is presented in Fig. 4.10. As can be seen

from the figure, model parameters in a nuclear reaction code such as TALYS
are adjusted to reproduce differential (A) and integral (B) experimental data.

Starting from the TALYS database of model parameters which also contains the

RIPL database (Reference Input Parameter Library) [27], uncertainties are as-

signed to model parameters in the nuclear reaction code, TALYS. The TALYS

code is run multiple times, each time with a different set of model parameters


[Figure 4.10 flowchart elements: TALYS database of model parameters; model parameters; nuclear reaction code, e.g., TALYS; physical observables, e.g., cross sections, fission yields etc.; differential experiments; integral experiments; model adjustments to fit experimental data (A); adjustment to reproduce integral data (B); data processing; data checking (BNL codes, e.g., CHECKR, FIZCON, PSYCHE); data testing and validation; evaluated nuclear data; application, e.g., transport calculations; feedback paths (1)-(3).]

Figure 4.10. Flowchart showing the proposed method of combining differential and

integral benchmark data in nuclear data evaluation based on nuclear reaction model-

ing. Models are adjusted to reproduce both differential (A) and integral (B) experi-

mental data by assigning weights based on the likelihood function.

to obtain physical observables such as cross sections and angular distributions.

These observables are adjusted to fit differential data by computing weights

proportional to the likelihood function as given in Eq. 4.18, using differential

experimental information for each reaction channel. The files are again ad-

justed to fit integral benchmark experiments by computing weights based on

the likelihood function using integral benchmark information using Eq. 4.19.

The methodology for combining the weights from differential and integral

data is presented in section 4.9.2. These weights are then combined and the

’best’ file selected. To verify that the selected file conforms to ENDF format,

the file is tested using the Brookhaven National Laboratory (BNL) utility codes
CHECKR, FIZCON and PSYCHE [35]. Once this is done, the data
are validated against a large set of integral benchmarks and the final file is an

evaluated nuclear data file. For the file to be used in applications, it must be

processed into usable formats using processing codes such as NJOY. Several

feedbacks are possible as shown in Fig. 4.10:

1. Feedback from model adjustment to fit differential experimental data can

be given to model calculations for the adjustment of model parameters

and their uncertainties.

2. Feedback from integral data adjustments can be given to differential ex-
perimentalists, to identify energy regions where discrepancies exist and


consequently new data are needed. Furthermore, feedback to model cal-
culations can be provided, to enable the adjustment of a priori model
parameters and their uncertainties.

3. By identifying subsets of the nuclear data which contribute most sig-
nificantly to the nuclear data uncertainty in the application, one can se-
lect which experimental nuclear data are most needed. In addition, this
process can also guide which parts of the model calculations should

be prioritized, i.e., feedback from the application to both model calcula-

tions and experimental measurements.

This study is, therefore, an attempt to present an automatic approach for an

optimal search for the best file based on differential and integral data, and

nuclear reaction models using the likelihood function.

4.9.1 Production of random nuclear data libraries

The first step is to identify the optimal set of parameters (nominal file) which

reproduces differential experimental data. A good starting point is to use ref-

erence input parameters from the RIPL database [27] for nuclear model cal-

culations as shown in Fig. 4.10. The nominal file (with its parameter set) used

in this work was obtained from the TENDL-2014 [37] evaluation. To obtain a
prior that is as non-informative as possible, the model parameter uncertainties

presented earlier in Table 4.1 were all multiplied by a factor of three. The

parameters were also sampled from a uniform distribution.

The elastic angular distributions (MF4-MT2) were adopted from the ENDF/B-
VII.1 nuclear data library while the resonance parameters were adopted from

the JEFF-3.1 library together with their corresponding background cross sec-

tions using the TARES code. All the parameters were randomly varied within

these uncertainties, and a total of 3000 random nuclear data files were pro-

duced. These files were processed into ACE files at 293 K using NJOY99

version 336 and then processed into XY tables by means of a bash script.

Experimental information was included by computing file weights using the

likelihood function. In this work, the weights were computed based on ex-

perimental data from four cross sections: (n,tot), (n,el), (n,inl) and (n,γ) using

Eq. 4.10 and integral experiments using Eq. 4.19.
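A minimal sketch of this prior sampling is given below; the parameter names, nominal values and relative uncertainties are placeholders and do not correspond to the actual TALYS/TENDL-2014 parameter set or the Table 4.1 values.

```python
import numpy as np

rng = np.random.default_rng(5)
nominal = {"rv_adjust": 1.00, "av_adjust": 1.00, "gamma_strength": 1.00}   # placeholder parameters
rel_unc = {"rv_adjust": 0.02, "av_adjust": 0.03, "gamma_strength": 0.10}   # placeholder relative uncertainties

def sample_parameter_set(factor=3.0):
    """Draw one parameter set from uniform distributions with the widths widened by a factor of three."""
    return {p: rng.uniform(nominal[p] * (1 - factor * rel_unc[p]),
                           nominal[p] * (1 + factor * rel_unc[p])) for p in nominal}

random_sets = [sample_parameter_set() for _ in range(3000)]   # one parameter set per random file
print(random_sets[0])
```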

4.9.2 Nuclear data adjustment of 208Pb in the fast region

The following were used for the adjustment procedure: differential experi-

mental data from the EXFOR database (Exchange among nuclear reaction

data centers) [17], criticality benchmarks from the ICSBEP Handbook [21],

the state-of-the-art reaction code TALYS, the TARES code [74], the MCNPX


code and a Monte Carlo adjustment procedure based on the likelihood func-

tion. Since the adjustment was done for the fast incident neutron energy re-

gion, only lead sensitive benchmarks that exhibit fast spectra were used. In

order to include assumptions on the magnitude of the uncertainties and the

correlation between the different differential experiments, the default EXFOR

interpretation rules as presented in Ref. [77] were used in this work. In Ta-

ble 4.2, the benchmark experiments used in this work, showing the evalu-

ated benchmark keff, the evaluated benchmark uncertainty, and the combined

benchmark uncertainty are presented.

Table 4.2. Criticality safety benchmarks used in this work with their case numbers, the evaluated benchmark keff and the evaluated benchmark uncertainty. These benchmarks were obtained from the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP) [21]. HEU-MET-FAST stands for Highly Enriched Uranium (HEU) Metallic Fast and LEU-COMP-THERM for Low Enriched Uranium (LEU) Compound Thermal benchmarks. The combined benchmark uncertainty given in column 5 was computed using Eq. 4.13.

Benchmark category   Case   Evaluated benchmark keff   Evaluated benchmark uncertainty [pcm]   Combined benchmark uncertainty [pcm]
PMF035               1      1.0000                     160                                     1183
HMF027               1      1.0000                     250                                     1163
HMF057               1      1.0000                     200                                     1110
HMF057               2      1.0000                     230                                     1149
HMF057               3      1.0000                     320                                     1122
HMF057               4      1.0000                     400                                     1183
HMF057               5      1.0000                     190                                     1087
HMF064               1      0.9996                     80                                      635

More details on how weights are assigned to random nuclear data libraries
based on differential experimental data using the likelihood function can be
found in Refs. [52, 77]. The methods presented in Refs. [52, 77] were used in
this work for nuclear data adjustments.

Given that CE is the differential covariance matrix which is made up of both

diagonal and off-diagonal elements, a generalized χ2k(E) can be computed us-

ing:

χ²k(E) = (x − τ(p(k)))ᵀ CE⁻¹ (x − τ(p(k))) (4.17)

where τ(p(k)) is a vector of the calculated observables found in the kth random

file, p(k) is the parameter set of the kth random file and x is a vector of exper-

imental observables. The χ2k(E) estimator is used to quantify the likelihood of

model calculations with respect to a set of experimental constraints using the

likelihood function:

wk(E) = exp(−χ²k(E) / 2) (4.18)


where wk(E) is the weight assigned to the kth random file based on differential

experimental data (E) (the weights are computed based on (n,tot), (n,el), (n,inl)

and (n,γ) cross sections). More details on how differential data are incorpo-

rated into the TMC methodology and how outliers are treated are presented in
Refs. [52, 77].

Once these weights have been computed, they can be combined with weights

computed from integral benchmark data:

wk(B) = exp(−χ²k(B) / 2) (4.19)

where the χ2 of the benchmark for the kth random file, assuming that the

benchmarks are uncorrelated (since we use one benchmark at a time), χ2k(B), is

given by:

χ²k(B) = (kBeff(k) − kBeff,exp)² / σ²B,j (4.20)

where kBeff(k) is the calculated benchmark value for the kth random file, kBeff,exp
is the evaluated experimental benchmark value and σ²B,j is the combined bench-

mark uncertainty given in Eq. 4.13. Since the adjustment is made for a general

purpose file, the correlation coefficient (R) as utilized in Eq. 4.11 was not in-

cluded in the χ2 computation.

If we assume that the differential and integral experimental data are uncorre-

lated, a combined weight can be computed from:

wT,k = wk(E)×wk(B) (4.21)

where wT,k is the total weight (combined weight) for the kth random file, wk(E)

and wk(B) are the weights assigned to the kth random file, using differential and

integral benchmark data respectively. To compare wk(E) and wk(B), the weights

were both normalized with the maximum weights computed for the differen-

tial and the integral experimental data respectively. Since experimental covari-

ance data for benchmarks were not readily available, the ’one-benchmark-at-

a-time’ approach was used.

In the case where covariance data including their correlations are available

for the integral experiments, the generalized χ2 as given in Eq. 4.17 can be

used for χ2k(B). The inclusion of multiple benchmarks with their correlations

is recommended for future work. By combining these weights, a new ’best’

file (random file with the largest weight) is selected as the file which com-

pares best with both differential and integral benchmark data. This file was

compared with the major nuclear data libraries as presented in Table 5.7 and

validated by comparing its performance against integral benchmark results.
It should be noted that the same benchmarks used for adjustment should not
be used for validation.


5. Results and Discussions

'With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.'
- John von Neumann

In this chapter, a summary of the results obtained is presented and discussed.

First, results from model calculations using the TALYS code are presented.

Subsequently, nuclear data uncertainties and keff distributions for global and

partial variations of nuclear data are presented. Furthermore, results from the

benchmark selection method, nuclear data uncertainty reduction and correla-

tion based sensitivity measure are also discussed. Finally, results from the

adjustment of 208Pb nuclear data in the fast region using both differential and

integral benchmark information are discussed.

5.1 Model calculations

Model calculations were carried out with the TALYS code in combination with

other auxiliary codes as presented earlier in section 4.2 to generate 208Pb and 206Pb random nuclear data. A total of 550 each of 208Pb and 206Pb nuclear

data libraries were produced. The random files generated were processed into

the ACE format with the NJOY code at 600 K. Fig. 5.1 shows 50 random ACE 208Pb cross sections as a function of energy for the following neutron induced

reactions: (n,tot), (n,el), (n,2n) and (n,γ). These cross sections were compared

with the 208Pb nuclear data from the ENDF/B-VII.1 nuclear data library. The

ENDF/B-VII.1 data were processed at the same temperature and with the same

version of the NJOY code as the random files.

From Fig. 5.1, a spread can be observed for the entire energy region for all the

studied reactions. This is not surprising as each line represents a random nu-

clear data file with a unique set of model parameters. Using the TMC method

described earlier in section 3.3.2, the spread in nuclear data observed here is

propagated all the way through to macroscopic reactor parameters (PAPER II). A detailed and extensive presentation of the choice of parameters and models

used for the TALYS calculations is, however, beyond the scope of this work.


Figure 5.1. Random ACE cross sections as a function of energy for some neutron-

induced reactions on 208Pb. Results are compared with ENDF/B-VII.1 nuclear data

library. Top left: (n,tot), top right: (n,el), bottom left: (n,2n) and bottom right: (n,γ).

5.2 Global uncertainties

In Fig. 5.2, the keff distributions for varying the nuclear data of some of the ac-

tinides contained within the fuel: 238Pu, 239Pu, 240Pu and 241Am are presented.

Gaussian distributions are observed for all the actinides under consideration

except for 238Pu and 241Am which exhibited asymmetric behaviors with tails

in the high keff region; skewness values of 0.73 and 0.28 were recorded for 241Am and 238Pu respectively. The skewness has been described in more de-

tail in section 3.2. Large deviations between the JEFF-3.1 and TENDL-2012

keff values were obtained for the variation of 241Pu and 242Pu nuclear data.

The reason for the deviation has to be investigated further. Delayed neutron

data were not varied in the random actinide data. Hence no computations were

performed to assess the impact of actinide data uncertainties on the βeff.

In Table 5.1, the global nuclear data uncertainty in the keff is presented for 238,239,240Pu, 241Am, 56Fe and 58Ni. Large uncertainties were obtained for 239,240Pu; however, the global uncertainties of 238Pu and 241Am were observed

to be relatively small but significant. Since partial variations were not per-

formed for these isotopes, it is difficult to ascertain the contributions from


Figure 5.2. keff distribution for ELECTRA due to the variation of actinide nuclear

data. Top left: 239Pu, top right: 240Pu, bottom left: 241Am and bottom right: 238Pu.

partial channels to the global uncertainty observed. Small uncertainties were

observed for most structural materials on the keff except for 56Fe and 58Ni,

which had significant impact and are therefore presented in Table 5.1.

Table 5.1. The impact of nuclear data on the keff for the variation of actinide nuclear data and some structural materials for ELECTRA. The uncertainty of the computed nuclear data uncertainties is also presented.

Isotope   σND(keff) [pcm]
238Pu     295±12
239Pu     748±19
240Pu     1015±32
241Am     70±3
56Fe      185±6
58Ni      20±1

The probability distributions of the keff for the variation of the lead nuclear
data 204,206,207,208Pb are depicted in Fig. 5.3. The random nuclear data files for 207Pb and 204Pb were obtained from TENDL-2012 [75] while 208Pb and 206Pb

random files were produced as part of this work and can be obtained from

the TENDL-2014 nuclear data library [37]. In Table 5.2, the global nuclear

data uncertainties together with their uncertainties for 204,206,207,208Pb are pre-

sented for keff, coolant temperature coefficient, and the coolant void worth.


Figure 5.3. keff distribution for ELECTRA for varying lead nuclear data at 600 K

coolant temperature. Top left: 208Pb, top right: 207Pb, bottom left: 206Pb and bottom

right: 204Pb. The keff distribution for 208Pb, 207Pb and 206Pb slightly deviate from

Gaussian distribution with tails in the high keff region.

Table 5.2. Nuclear data uncertainty (global) in some macroscopic reactor parameters for ELECTRA due to the variation of 204,206,207,208Pb nuclear data. The results are all given in pcm. The values quoted in the sixth row are the quadratic sum of the ND uncertainties coming from 204,206,207,208Pb (σND,Pb,tot). The uncertainty of the computed nuclear data uncertainties is also presented.

Isotope                       σND(keff)   σND(CTC)   σND(CVW)
208Pb                         896±28      61±2       890±28
207Pb                         118±4       -          117±4
206Pb                         136±5       -          136±5
204Pb                         12±2        -          12±2
Total (σND,Pb,tot) [pcm]      914         61         907
Relative uncertainties (%)    0.9         2.6        3.3

It was assumed in Table 5.2 that the uncertainties between the lead isotopes

were uncorrelated. It should be noted, that to obtain the ND uncertainty for the

CTC in pcm/K the values in column three must be divided by the difference

in temperature (1200 K) between the two perturbed states.

Large uncertainties in the keff were observed for 208Pb indicating that the

ELECTRA core is highly sensitive to 208Pb nuclear data variation and hence

its uncertainties. Relatively large uncertainties in the keff were recorded for 206Pb and 207Pb. The uncertainty from 204Pb was, however, small. The ob-


served spread in the CTC for 204,206,207Pb was dominated by statistics. Except

for 204Pb, the impact of nuclear data uncertainty for all lead isotopes on the

CVW was relatively high.

5.3 Partial variations

In Fig. 5.4, we present the distribution in keff due to the variation of elastic

scattering, resonance parameters, angular distributions and neutron capture

cross sections of 208Pb. Non-Gaussian shapes are observed for the variation


Figure 5.4. keff distribution for ELECTRA for varying resonance parameters only (top

left), the elastic scattering only (top right) and neutron capture only (bottom left) in

the fast region, and angular distributions only (bottom right) of 208Pb.

in the elastic scattering, resonance parameters and the angular distributions as

can be seen from the figure. Their skewness values are presented in Table 5.4.

High tails are observed in the high keff regions for the elastic scattering cross

section and the resonance parameter variations. A tail in the low keff region

was, however, observed for the angular distributions. Nuclear data uncertainty

in keff due to partial variations of 206,207,208Pb nuclear data are presented in

Table 5.3. The major contribution to the global uncertainty comes from the
resonance parameters; however, the (n,el) cross section and the angular distributions
also had a significant impact, as can be seen in Table 5.3.


Table 5.3. Nuclear data uncertainty in keff due to partial variations of 208Pb, 207Pb and 206Pb nuclear data. All lead files used here were obtained from TENDL-2012 [75]. The uncertainty of the computed nuclear data uncertainties is also presented.

Nuclear data varied     208Pb σND(keff) [pcm]   207Pb σND(keff) [pcm]   206Pb σND(keff) [pcm]
(n,el)                  289±12                  58±3                    50±2
(n,2n)                  7±3                     4±6                     5±4
(n,γ)                   83±4                    10±2                    10±2
(n,inl)                 8±3                     30±2                    23±2
Resonance parameters    862±35                  55±3                    145±6
Angular distributions   226±9                   101±4                   107±5

The bulk contribution to the nuclear data uncertainty on the keff and the CVW comes from uncertainties in the resonance parameters, the elastic scattering

cross section and from the angular distributions. Uncertainties due to (n,2n)

and (n,inl) were found to be small for all isotopes. The impact from the (n,γ)

on the keff was also observed to be small as expected since fast reactors like

ELECTRA have a small fraction of capture reactions in the core. The small

impact of the 208Pb (n,inl) cross section observed can be attributed to the small

fraction of inelastic reactions in the ELECTRA core (the inelastic channel of 208Pb opens at about 2.7 MeV which is well above the peak of the ELECTRA

spectrum, 700 keV).

In Table 5.4, the skewness values of keff distributions for the partial variation

of 208,207,206Pb are presented. High skewness values are recorded for 207Pb

and 206Pb (n,el) cross sections as can be observed from the table.

Table 5.4. Skewness values for the keff distribution due to partial variation of 208Pb, 207Pb and 206Pb nuclear data.

keff skewness values
Nuclear data varied     208Pb   207Pb   206Pb
Resonance parameters    0.75    0.12    -0.31
(n,el) cross section    0.98    0.86    0.73
Angular distributions   -0.48   -0.18   -0.16
(n,γ) cross section     0.08    -0.12   0.04
Global keff             0.58    0.37    0.33

5.4 Nuclear data uncertainty reduction

In this section the results from the uncertainty reduction methods presented

in section 4.5 and PAPER IV are presented. For a benchmark to have a high


ability to reduce the ND uncertainty for a particular application, the bench-

mark should have: (1) high sensitivity to the isotope under investigation, (2)

a low combined benchmark uncertainty and (3) a high absolute value of the

correlation coefficient computed between the benchmark observable and the

application response parameter.

In Fig. 5.5, keff distributions due to the variation of 239Pu nuclear data after

implementing the method of assigning file weights based on the likelihood

function are presented using benchmarks for the ELECTRA reactor. It can be

observed from the figure that the posterior distributions have narrower spreads

compared to the prior distributions. This can be attributed to the high sensi-

tivity to the variation of 239Pu nuclear data for the pmf benchmarks and the

strong correlation obtained between the pmf benchmarks and ELECTRA due

to 239Pu nuclear data variation. The distributions of weights assigned to ran-


Figure 5.5. keff distributions due to the variation of 239Pu nuclear data after combining

prior information with integral benchmark information from pmf1c1 (top left), pmf9c1

(top right), pmf10c1 (bottom left) and pmf11c1 (bottom right) benchmarks for the

ELECTRA reactor using the method of assigning file weights based on the likelihood

function. ’c’ denotes the case of the benchmark.

dom 239Pu and 240Pu nuclear data after combining the prior information with

integral benchmark information from pmf1c1 (top left), pmf8c1 (top right),

pmf9c1 (bottom left) and pmf10c1 (bottom right) benchmarks for the ELEC-

TRA reactor are presented in Fig. 5.6. It can be observed from the figure that

a large number of random nuclear data files were assigned low weights

and, therefore, contributed less to the final uncertainty obtained. This resulted

in narrower posterior distributions.


Figure 5.6. Distributions of weights assigned to random 239Pu nuclear data after com-

bining prior information with integral benchmark information from pmf1c1 (top left),

pmf8c1 (top right), pmf9c1 (bottom left) and pmf10c1 (bottom right) benchmarks for

the ELECTRA reactor. ’c’ denotes the case of the benchmark.

5.5 Similarity Index

As proof of concept, similarities between different benchmarks calculated

with the method presented in section 4.6 are presented. These similarities are

important for classifying benchmarks for reactor calculations and code vali-

dation purposes. In Fig. 5.7, a similarity matrix computed between different
plutonium metallic fast benchmarks (denoted by "pmf") due to the variation of 239Pu nuclear data is presented. The values displayed in the matrix are the computed
similarity indices. Very strong positive similarity indices are recorded

in Fig. 5.7 for all benchmarks. This signifies a strong similarity in neutron

spectrum and isotopic concentration between these benchmarks.

The highest similarity index of 0.97 was recorded between the pmf5c1 and pmf8c1

benchmarks. The main factors that contributed to the high similarity index ob-

tained are the high correlation coefficient (R=0.997) and high variance ratio

of 0.968 computed between the two systems. As can be observed, the pmf5c1

benchmark, with the following composition (239−241Pu: 94.79 / 4.90 / 0.31
[at.%]), has a similar plutonium composition to the pmf8c1 benchmark
(239−241Pu: 93.59 / 5.10 / 0.30 [at.%]); both benchmarks also have a fast
spectrum.

In Table 5.5, a summary of similarity indices computed between ELECTRA

and a set of lead sensitive benchmarks using 208,207,206Pb nuclear data is pre-


Figure 5.7. Similarity matrix between selected plutonium sensitive benchmarks due

to the variation of 239Pu nuclear data. (pmf) denotes plutonium metallic fast bench-

marks. ’c’ denotes the case of the benchmark. The values displayed in the matrix are

similarity indices.

Table 5.5. Similarity indices computed between ELECTRA and a set of lead sensitive benchmarks due to the variation of 208,207,206Pb nuclear data. 300 random nuclear data files were used for each isotope.

Benchmark category (case)        SB(208Pb)   SB(207Pb)   SB(206Pb)
PU-MET-FAST-035 (case 1)         0.23 (3)    -0.06 (4)   -0.03 (4)
HEU-MET-FAST-027 (case 1)        0.25 (3)    -0.01 (4)   -0.02 (4)
HEU-MET-FAST-064 (case 1)        0.64 (2)    -0.03 (4)   -0.001 (4)
HEU-MET-FAST-057 (case 1)        0.80 (1)    -0.06 (4)   -0.04 (4)
HEU-MET-FAST-057 (case 2)        0.76 (1)    -0.05 (4)   -0.02 (4)
HEU-MET-FAST-057 (case 3)        0.68 (2)    -0.03 (4)   -0.02 (4)
HEU-MET-FAST-057 (case 4)        0.67 (2)    -0.08 (4)   0.0004 (4)
HEU-MET-FAST-057 (case 5)        0.59 (2)    -0.03 (4)   -0.005 (4)
LEU-COMP-THERM-010 (case 1)      0.17 (4)    0.04 (4)    0.01 (4)
LEU-COMP-THERM-017 (case 1)      0.04 (4)    0.06 (4)    0.01 (4)

sented. Results in brackets represent the strength of similarity between the ap-

plication and the benchmarks (1, 2, 3 and 4 represent very strong, strong, weak

and very weak similarities respectively). Strong similarity indices are recorded

between ELECTRA and the hmf57c1 (SB=0.80), hmf57c2 (SB=0.76) and the

hmf64c1 (SB=0.64) benchmarks due to the variation of 208Pb nuclear data.

This is not surprising since high variance ratios of 0.8, 0.77 and 0.64 were

obtained between ELECTRA and the hmf57c1, hmf57c2 and the hmf64c1

benchmarks respectively. High variance ratios (close to 1) signify that the


systems exhibit similar sensitivity to the variation of nuclear data for the iso-

tope of interest. Since the lct10c1 and lct17c1 are thermal benchmarks, very

weak similarity indices were obtained between these benchmarks and ELEC-

TRA. Since the highest similarity indices were recorded between ELECTRA

and the hmf57c1 and hmf57c2 benchmarks, these benchmarks can be consid-

ered as good candidates for reactor code validation using 208Pb nuclear data

as input. Very weak similarities were, however, obtained between ELECTRA

and the benchmarks in the case of 207Pb and 206Pb nuclear data variation as

can be seen from the table. This is expected since all the benchmarks have

weak sensitivities to the variation of 207Pb and 206Pb nuclear data.

5.6 Correlation based sensitivity measure

In this section, results from the correlation based sensitivity method presented

in section 4.7 and PAPER II are presented and discussed. In Fig. 5.8, an

example of the cross section-keff correlations (not included in the papers) is

presented as a function of incident neutron energy for ELECTRA, using a

187 energy group structure. Four partial cross sections: (n,f), (n,el), (n,γ) and

(n,inl) cross sections are presented for 241Am nuclear data.


Figure 5.8. Cross section-keff correlation against incident neutron energy for 241Am.

High correlations are observed for the 241Am (n,f) cross section in the high
energy region at about 1 MeV. This could be attributed to the relatively large 241Am (n,f) cross sections in the high energy region. Also, the ELECTRA

reactor exhibits a hard spectrum. Weak correlations were, however, obtained

for the (n,γ), (n,inl) and (n,el) cross sections. The correlation based sensitivity

measure has been applied to lead isotopes and presented in PAPER II.


5.7 Nuclear data adjustments

In section 4.9, a new method for nuclear data adjustment based on combining

information from both integral and differential nuclear data is proposed. In

this section, the results from the method are reported.

The weights for the differential data were computed based on the (n,tot) and
(n,el) cross sections, and on the (n,tot), (n,el), (n,inl) and (n,γ) cross sections, in

the energy region of 1 to 20 MeV. From the 3000 prior random files, a new

best file (the file with the highest weight) based on the differential data was

selected. In Fig. 5.9, this new best file, together with 50 random files and

the prior central file is compared with the ENDF/B-VII.1 evaluation for the 208Pb(n,el) cross section. Similar comparisons have been made for the other

reaction channels. A comparison of file weights computed between different


Figure 5.9. Comparing the prior adjustments and central file with ENDF/B-VII.1

nuclear data library for 208Pb(n,el) cross section in the energy range of 5 MeV to 20

MeV. The central file represents the evaluators best effort before the adjustment and

the ’best file’ is the file selected after adjustment with differential data.

evaluations and this work is presented in Table 5.6. The weights are com-

pared with ENDF/B-VII.1, JEFF-3.1, JENDL-4.0, CENDL-3.1, the central

file and this work (the new best file). From the table, the weights (based on

differential experimental data), show that our best file performed better com-

pared to the other nuclear data libraries. This is expected since our best file

is chosen based on the agreement with these experiments. It should also be

noted that other channels, such as (n,α), (n,p), and angular distributions, were

not included in the computation of the weights. Furthermore, since the outliers

were identified by considering the deviation of experiments to model calcula-

tions [77], this could be a possible reason why high weights were obtained for

our best files.


Table 5.6. Comparison of file weights between different nuclear data libraries and this work for prior adjustments with only differential data. The weight for the new best file is the maximum weight computed using only differential experimental data.

Nuclear data     Weights based on (n,tot) and (n,el)   Weights based on (n,tot), (n,el), (n,inl), (n,γ)
ENDF/B-VII.1     1.00e-08                              5.15e-56
JEFF-3.1         1.02e-08                              1.30e-50
JENDL-4.0        7.33e-08                              2.42e-60
CENDL-3.1        9.93e-09                              3.48e-49
Central file     2.72e-12                              9.85e-111
New best file    1.40e-7                               6.83e-42

In Table 5.7, ratios of calculated to experimental values (C/E) for different lead

sensitive benchmarks for adjustments using differential data only, the hmf57

case 1 benchmark only, and a combination of differential and hmf57 case 1

benchmark data are compared with the ENDF/B-VII.0 evaluation.

Table 5.7. The ratios of calculated to experimental values (C/E) for different lead sensitive benchmarks for adjustments using differential data only, the hmf57 case 1 benchmark only, and a combination of differential and hmf57 case 1 benchmark data. For the benchmark adjustment, ENDF/B-VII.0 was used as the reference library for all isotopes except 208Pb, which was varied. In the case of ENDF/B-VII.0, all isotopes were maintained as the ENDF/B-VII.0 nuclear data library. In the case of the benchmarks, only HMF057-01 was used for adjustment; the other benchmarks were used for testing and validation. The statistical uncertainties obtained range from 39 to 44 pcm.

Benchmark    Differential data only   Benchmark (HMF057-01)   Differential data + HMF057-01   ENDF/B-VII.0
PMF035-01    0.99873                  1.00635                 0.99873                         0.99856
HMF027-01    1.00222                  1.00676                 1.00222                         1.00182
HMF057-02    1.00109                  1.00851                 1.00109                         0.99888
HMF057-03    1.02020                  1.03023                 1.02020                         1.01726
HMF057-04    0.99049                  0.99793                 0.99049                         0.98784
HMF057-05    1.02470                  1.03580                 1.02470                         1.02188
HMF057-06    0.99992                  1.00909                 0.99992                         0.99713
HMF064-01    0.99835                  1.00896                 0.99796                         0.99460
HMF064-02    0.99970                  1.01225                 0.99970                         0.99610

As outlined in section 4.9, the 3000 208Pb files adjusted with differential data

were further adjusted by using the hmf57 case 1 benchmark. The adjustment

was carried out in the fast region, above the unresolved resonance region. The

reason for choosing this benchmark is that the hmf57 case 1 benchmark is lead
sensitive and also exhibits a fast neutron spectrum. Consequently, three differ-

61

Page 72: Nuclear data uncertainty quantification and data ...865971/FULLTEXT01.pdf · Nuclear data uncertainty quantification and data assimilation ... uncertainty reduction of reactor macroscopic

ent files were obtained: one from adjustment with differential data, one from

adjustment with the hmf57 case 1 benchmark, and one from the combination

of the weights obtained from differential and benchmark adjustments. These

files were validated against nine different benchmarks using the MCNPX ver-

sion 2.5 code and their performance are compared to the ENDF/BVII.0 eval-

uation as presented in Table 5.7. The benchmarks, as shown in the table,

were used for testing and validation. This was done in order not to use the

same benchmarks for both calibration and validation. For example, the best

file obtained after adjustment with hmf57 case 1 (random file 0534 with a keff

= 1.00000±0.00038), was validated against benchmarks as presented in Ta-

ble 5.7.
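
The combination of the two adjustments can be sketched as follows. The per-file keff values, the benchmark uncertainty and the differential weights below are placeholders rather than results from this work, and the combined weight is here simply taken as the normalized product of the two likelihood-based weights.

    import numpy as np

    def benchmark_weight(keff_calc, keff_exp, unc_total):
        # Gaussian likelihood weight of each random file for one benchmark,
        # with unc_total combining benchmark and statistical uncertainties.
        return np.exp(-0.5 * ((keff_calc - keff_exp) / unc_total) ** 2)

    rng = np.random.default_rng(0)
    keff_files = rng.normal(1.000, 0.004, size=3000)  # placeholder per-file keff values
    w_diff = rng.random(3000)                         # placeholder differential weights
    w_diff /= w_diff.sum()

    keff_exp, unc = 1.0000, 0.0010                    # illustrative benchmark value and unc.
    w_bench = benchmark_weight(keff_files, keff_exp, unc)
    w_bench /= w_bench.sum()

    w_comb = w_diff * w_bench                         # combined weight (product of the two)
    w_comb /= w_comb.sum()
    print("best file index:", int(np.argmax(w_comb)))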

In Fig. 5.10, the ratio of the best file obtained from the adjustment with combined differential and integral experimental data to the ENDF/B-VII.1 evaluation is presented for the 208Pb(n,el) cross section. From the figure, a non-smooth curve is observed for the ENDF/B-VII.1 evaluation at about 10 MeV, which is difficult to explain from reaction theory. The evaluation (best file) from this work, however, gives a smoother curve, as can be seen from Fig. 5.10. However, for a complete evaluation of a general purpose file, more channels, such as (n,α) and (n,p), and a large set of benchmarks with different spectra and compositions will have to be used.

Figure 5.10. Comparing adjustments with combined differential and integral experimental data with ENDF/B-VII.1. The ratio between the two evaluations against incident energy is presented below the plot.

In the case of adjustment with only differential data, very few files were assigned significant weights. This explains why the adjustment based on only differential data gave the same results (the same file was selected) as the adjustment with combined differential and benchmark data, as can be observed from Table 5.7. The few significant weights obtained are in agreement with the observations made earlier in Ref. [77]. This could be a result of the sampling of model parameters from a rather wide parameter space, or could be attributed to model defects. This, however, must be investigated further in future research. Also, to take full advantage of integral information, multiple benchmarks together with their uncertainties and correlations must be used. Multiple benchmarks have, however, not been used in this work.
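
One simple way to express how strongly the weights are concentrated on a few files is an effective sample size of the weight distribution. The short sketch below is an illustration only and not a quantity reported in this work.

    import numpy as np

    def effective_sample_size(weights):
        # Kish effective sample size: N for uniform weights, close to 1
        # when a single file dominates the weight distribution.
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return 1.0 / np.sum(w ** 2)

    print(effective_sample_size([0.25, 0.25, 0.25, 0.25]))  # 4.0
    print(effective_sample_size([0.97, 0.01, 0.01, 0.01]))  # about 1.06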


6. Conclusion and outlook

'I think; therefore I am.' - René Descartes

The Total Monte Carlo method was used to study the impact of nuclear data uncertainties of 238,239,240Pu, 241Am, 204,206,207,208Pb and some structural materials of importance to the conceptual ELECTRA reactor. ELECTRA is a plutonium-fuelled and lead-cooled fast reactor. For the actinides studied, large uncertainties were observed in the keff due to 239,240Pu nuclear data uncertainties. Relatively small uncertainties were, however, recorded for 238Pu and 241Am nuclear data. In the case of the lead coolant, it was observed that the uncertainties in the keff and the coolant void worth were large, with the most significant contribution coming from 208Pb nuclear data uncertainties. The dominant contributions to the uncertainty in the keff came from uncertainties in the resonance parameters in the case of 208Pb; however, the elastic scattering cross section and the angular distributions also had significant impacts. New 208Pb and 206Pb random files with realistic central values have also been produced as part of this work.
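
For reference, the core of the TMC propagation used throughout this work can be summarized in the short sketch below, assuming that a keff value and its Monte Carlo statistical uncertainty are available for each random nuclear data file; the numbers are illustrative only.

    import numpy as np

    def tmc_nuclear_data_uncertainty(keff_values, stat_unc):
        # Spread in keff due to nuclear data: observed variance over the random
        # files minus the average statistical (Monte Carlo) variance.
        keff = np.asarray(keff_values, dtype=float)
        sig  = np.asarray(stat_unc, dtype=float)
        var_nd = np.var(keff, ddof=1) - np.mean(sig ** 2)
        return np.sqrt(max(var_nd, 0.0))

    # Illustrative numbers only, not results from this work
    rng  = np.random.default_rng(1)
    keff = rng.normal(1.000, 0.007, size=500)        # 500 random nuclear data files
    sig  = np.full(500, 0.0004)                      # about 40 pcm statistics per run
    print(tmc_nuclear_data_uncertainty(keff, sig))   # roughly 0.007, i.e. ~700 pcm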

In this thesis, methods for the inclusion of integral experiments for nuclear data uncertainty reduction of reactor parameters within the TMC method have been developed. It was observed from the study that significant reductions of uncertainties due to nuclear data were obtained for some isotopes for ELECTRA after incorporating integral benchmark information.

In addition, a method for combining differential experiments and integral benchmark data for data assimilation and nuclear data adjustment using file weights based on the likelihood function is proposed. The proposed method was applied for the adjustment of neutron-induced 208Pb nuclear data in the fast energy region.

Furthermore, a method is proposed for computing the similarity between benchmarks and a specific application. This method can be used to select benchmarks for code validation for a specific application. Also, a correlation-based sensitivity method was used to study cross section-parameter correlations for different energy groups and reactions.
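
As an illustration of such a similarity measure, the sketch below uses the Pearson correlation between application and benchmark keff values obtained with the same set of random nuclear data files; the data are synthetic and this simple index stands in for the measure actually proposed in Paper V.

    import numpy as np

    def similarity_index(keff_app, keff_bench):
        # Pearson correlation of application and benchmark keff over the same
        # set of random nuclear data files (an illustrative similarity measure).
        return float(np.corrcoef(keff_app, keff_bench)[0, 1])

    rng = np.random.default_rng(2)
    shared = rng.normal(0.0, 0.005, size=1000)                 # shared nuclear data effect
    keff_app   = 1.000 + shared + rng.normal(0, 0.001, 1000)
    keff_bench = 0.999 + 0.8 * shared + rng.normal(0, 0.002, 1000)
    print(similarity_index(keff_app, keff_bench))              # close to 1: similar sensitivity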

As an outlook, the following are recommended for future research:


1. Uncertainty quantification for specific isotopes in this work was performed using the 'one-isotope-at-a-time' approach, where each isotope was varied one after the other. This approach was used because the main goal was to identify the nuclear data inputs with a significant impact on the uncertainty of the output. This approach, however, does not take into consideration the interaction between different nuclear data files. To take these interactions into consideration, the random nuclear data libraries for all isotopes could be varied simultaneously; this gives the global effect. Also, cross-correlations between isotopes were not taken into account in this work. This is recommended for future work.

2. More detailed work on the resonance region and the angular distributions of the lead isotopes is needed, taking into consideration differential experiments as well as integral benchmark data. This is important because it was observed that the resonance region and the angular distributions had a significant impact, especially in the case of 208Pb. Furthermore, a significant amount of work is needed to obtain better angular distributions [85].

3. Also, the 'one-benchmark-at-a-time' approach was used for the reduction of nuclear data uncertainties. However, to benefit fully from all the benchmark information available, multiple benchmarks must be used, taking into consideration their covariance data and correlations. The use of multiple benchmarks is also recommended for the combined adjustment methodology based on differential and integral experiments proposed in this work.

4. The uncertainty quantification carried out in this thesis was performed at steady state for the ELECTRA reactor and, to some extent, for burnup. A comprehensive study to assess the impact of nuclear data uncertainties on reactor burnup calculations, by varying both transport and fission yield data, is recommended.

5. Feedback from nuclear data adjustments can be used to inform the distributions of model parameters and their uncertainties. This will help improve nuclear data as well as nuclear reaction models. For example, from the few files with significant weights obtained from the adjustment with differential data (as observed in section 5.7), feedback could be provided to model calculations for the adjustment of model parameters and their uncertainties.


7. Sammanfattning (Summary in Swedish)

"Human progress is neither automatic nor inevitable ..."- Martin Luther King, Jr.

Today, about 1.4 billion people worldwide still lack access to electricity, and another billion have only sporadic access [1]. As a result of population growth and economic development, particularly in developing countries, global energy needs are expected to increase sharply [2]. With energy use rising strongly worldwide, nuclear power is expected to play an increasingly important role in meeting this demand. To gain public acceptance, the next generation of nuclear reactors must be economically competitive with other energy sources while also satisfying society's requirements on waste management, safeguards and safety. The Generation IV International Forum (GIF) was therefore launched with very challenging technical goals, covering sustainability, economics, safety, reliability, safeguards and physical protection. The six reactor concepts identified as the most promising fourth-generation (GEN-IV) reactor systems are [4]: the gas-cooled fast reactor, the lead-cooled fast reactor, the sodium-cooled fast reactor, the molten salt reactor, the high temperature reactor and the supercritical water-cooled reactor.

GIF has ranked the lead-cooled fast reactor concept highest with respect to sustainability, since it uses a closed fuel cycle for the conversion of fertile isotopes. It is also ranked highly with regard to non-proliferation and physical protection, partly because the core has a long lifetime [5]. The safety of the reactor is reinforced by the coolant being relatively inert and able to retain hazardous radionuclides, such as iodine and caesium, even if a severe accident were to occur. As part of the GEN-IV development in Sweden, the GENIUS project was launched in 2009 to develop the GEN-IV concept in a Swedish context [6]. The project was a collaboration between Chalmers University of Technology, KTH Royal Institute of Technology and Uppsala University. Within the project, the development of a lead-cooled fast reactor called ELECTRA, the European Lead-Cooled Training Reactor, was proposed, designed for full recycling of plutonium and americium in the core.

The successful deployment of GEN-IV reactors requires nuclear data of high quality. Nuclear data can, for example, be the probability of a nuclear reaction; this probability is called a cross section. One example is the fission cross section, i.e. the probability that an atomic nucleus will split.

Before nuclear data can be used in applications they must be evaluated, validated against integral experiments and converted into formats usable for different applications. The evaluation process was previously carried out using differential experimental data which were then complemented with nuclear model calculations. This practice is changing thanks to considerable improvements in nuclear reaction theories and the availability of increased computing power. Since the nuclear model codes are not perfect, they are usually validated against an extensive set of experimental data. These data are, however, not exact, and therefore quantities calculated with the model codes, such as cross sections, also contain uncertainties. Since nuclear data are used as input to reactor codes for simulations, the results from the reactor codes contain uncertainties due to these data. To be able to assess reactor safety it is therefore important to quantify these uncertainties. We also want to determine where additional effort is needed to reduce them.

Until recently, uncertainty propagation was mostly performed using deterministic methods. With this approach, uncertainties in specific reactor parameters, such as the reactor criticality, can be determined using perturbation calculations [7]. The method involves a number of assumptions and simplifications [10]. With access to increased computing power, more exact methods based on Monte Carlo calculations can be used. At the Nuclear Research and Consultancy Group (NRG), Petten, the Netherlands, a method called 'Total Monte Carlo (TMC)' has been developed for nuclear data evaluation and uncertainty propagation.

In my work I have applied the TMC method to study uncertainties in reactor safety parameters of the ELECTRA reactor due to uncertainties in nuclear data; the nuclear data can in turn be linked to different nuclear physics parameters. I have studied the importance of the actinides in the fuel, the structural materials and the coolant. This work is important because the safety margins of important reactor safety parameters can be incorrect if not all uncertainties, including nuclear data uncertainties, are taken into account in the design of the reactor. This thesis also includes developments within the TMC method. A summary of the papers included in this thesis is presented below.

In PAPER I, I combine the Total Monte Carlo method with integral experiments (benchmarks) for the propagation of nuclear data uncertainties. I observed that nuclear data uncertainties can be reduced considerably by introducing an acceptance/rejection criterion based on the integral experiments.

In PAPER II, I used the TMC method to study the impact of nuclear data uncertainties of the lead isotopes on reactor safety parameters of ELECTRA. As part of the work I studied both the uncertainties at the isotope level and which parts of the nuclear data contributed to the uncertainty. I also studied correlations between cross sections and reactor parameters using a correlation-based sensitivity method. Both the coolant void worth and the criticality were studied. Large uncertainties were observed in the criticality for all isotopes (with the exception of 204Pb), with a significant contribution from 208Pb data. The dominant contribution to the uncertainty in the reactor parameters originated from uncertainties in the resonance parameters.

In PAPER III, the TMC method was used to propagate uncertainties in 239Pu nuclear data to uncertainties in the composition, i.e. the isotopic inventory, of the ELECTRA fuel over the lifetime of the reactor. The results show that the uncertainties for some minor actinides were large enough that future research should continue to investigate their impact on safety parameters during reprocessing. It was also observed that integral experiments can be used to reduce the uncertainties.

In PAPER IV, I propose two different methods for reducing nuclear data uncertainties using a set of integral experiments. After applying the two methods, I observed a considerable reduction in the criticality uncertainty due to uncertainties in 239Pu and 208Pb nuclear data.

In PAPER V, I propose a method for selecting integral experiments for reactor calculations that will help validate reactor codes by quantifying the similarity between a reactor application and one or more integral experiments. The study is of particular importance since one generally wants to replace expensive full-scale experiments with simpler integral experiments. The method was applied to ELECTRA using a set of integral criticality experiments.

As part of this thesis, I also propose a method for combining differential experiments and integral experiments for the adjustment of nuclear data using file weights based on the likelihood function. The proposed method was applied to the adjustment of neutron-induced 208Pb nuclear data in the fast energy region. This method opens up several new possibilities for how both differential and integral experiments can be used in nuclear data evaluation.


Acknowledgment

I would like to express my sincere gratitude and appreciation to my team of supervisors made up of Henrik Sjöstrand, Dimitri Rochman, Michael Österlund and Stephan Pomp. Without your careful guidance, patience and support, this thesis would not have been possible. To Henrik, thanks for your great interest in my work and your infectious enthusiasm; it would have been difficult without your continuous stream of ideas. Thanks also for the late nights ... To Dimitri, thanks for your generosity with your knowledge, and for your patience to listen to my crazy ideas that sometimes didn't make sense. Thanks also for teaching me 'Total Monte Carlo'; Michael, I am grateful for the several discussions and also for your help with my visa and residence permit matters. To the man who hired me, Stephan, the 'supreme leader' (I should reference Cecilia G. here, I think), thanks for believing in me.

I would also like to acknowledge financial support from the Swedish Research

Council through the GENIUS project.

Special thanks to Arjan J. Koning from the IAEA, Vienna and Steven van der Marck from NRG, Petten, for the great discussions and regular feedback that this thesis has benefited from.

It is also my pleasure to have made acquaintance with the following people

with whom I have had discussions on various interesting subjects and issues:

Temitope Taiwo, Jan Blomgren, Andreas Pautz, Hakim Ferroukhi, Ernst van

Groningen and Mattias Klintenberg.

I wish also to express my gratitude to the staff, colleagues and friends at the Division of Applied Nuclear Physics who have made my stay in Uppsala fun: Mattias L., Cecilia, Ali, Tom, Augusto, Petter, Andrea, Iwona, Federico, Matilda, Peter, Bill, Andreas, Vasily, Kaj, Diego, Eric, Carl, Sophie, ..., and also to Inger and Marja from the department administration. Ali, Andrea, Kaj and Bill; it was fun to have you guys around at some conferences and summer schools, you should visit me in Ghana :-). To my beer buddies Tom and Augusto, thanks for the time out at O'Learys.

To my African team here in Uppsala: Sidi from Mauritania, Francisco from

Mozambique, Baba from Senegal and Gerald from Uganda, thanks to y’all for

the beers and exciting night outs; Francisco, keep the tradition alive.

To the doctoral board of the Faculty of Science and Technology, Uppsala University (TNDR) for 2014/2015, you guys were awesome; you made my tenure as chair much fun. Good luck in your future careers.

Family now: To my parents and siblings: Emelia, Elliot, Elvis and Eva; thanks for your support and encouragement. I am also grateful to our 'Viennese family' (the Scherhak family) for being a wonderful family to me and my wife here in Europe.

Finally, to Mabel & Emily, thanks for making my life complete.


References

[1] K. Kaygusuz, “Energy for sustainable development: A case of developing

countries,” Renewable and Sustainable Energy Reviews, 16, 2, 1116-1126

(2012).

[2] J. Conti et al., “Annual energy outlook 2007, with Projections to 2030,” Energy Information Administration, US Department of Energy, Washington, DC, DOE/EIA-0383 (2007).

[3] R. Rhodes and D. Beller, “The need for nuclear power,” Foreign Affairs (2000).

[4] GIF, “Technology roadmap update for generation IV nuclear energy systems.

US DOE Nuclear Energy Research Advisory Committee and the Generation IV

International Forum (GIF),”

https://www.gen-4.org/gif/upload/docs/application/pdf/2014-03/gif-tru2014.pdf (2014).

[5] L. Cinotti, C.F. Smith, C. Artioli, G. Grasso, and G. Corsini, “Lead-Cooled Fast Reactor (LFR) Design: Safety, Neutronics, Thermal Hydraulics, Structural Mechanics, Fuel, Core, and Plant Design,” in Handbook of Nuclear Engineering, p. 2749–2840, Springer, 2010.

[6] J. Wallenius, E. Suvdantsetseg, and A. Fokau, “European Lead-Cooled Training

Reactor,” Nuclear Technology, 177, 12, 303 - 313 (2012).

[7] G. Aliberti et al., “Nuclear data sensitivity, uncertainty and target accuracy

assessment for future nuclear systems,” Annals of Nuclear Energy, 33, 8,

700-733 (2006).

[8] J. Briesmeister, “MCNP - a general Monte Carlo n-particle transport code,

version 4c,” Tech. rep. LA-13709-M, Los Alamos National Laboratory, Los Alamos, New Mexico, USA (2000).

[9] K. Furuta, Y. Oka, and S. Kondo, “SUSD: A computer code for cross section

sensitivity and uncertainty analysis including secondary neutron energy and

angular distributions,” Draft of UTNL-R0185 (English translation) (1986).

[10] L.L. Briggs, “Status uncertainty quantification approaches for advanced reactor

analyses,” (2008), Nuclear Engineering Division, Argonne National Laboratory,

Report ANL-GenIV-110.

[11] M. Salvatores et al., “Volume 26 Uncertainty and target accuracy assessment for

innovative systems using recent covariance data evaluations,” (2008), Tech.

Rep. WPEC Subgroup 26 final report, OECD/NEA Nuclear Data Bank, Paris,

France.

[12] A.J. Koning et al., “CANDIDE: Nuclear data for sustainable nuclear energy,”

(2009), Final report of a Coordinated Action on Nuclear Data for Industrial

Development in Europe (CANDIDE). European Commission Joint Research

Centre Institute for Reference Materials and Measurements.

[13] O. Schwerer and P. Obložinsky, “Nuclear Data Services Provided by the

IAEA,” Proc. Workshop on Nuclear Data and Nuclear Reactors, Trieste, Italy, March 13 - April 14, 2000.


[14] A. Trkov, “Status and Perspective of Nuclear Data Production, Evaluation and

Validation,” Nuclear Engineering and Technology, 37, 1, 11 (2005).

[15] D. Rochman and C.M. Sciolla, “Total Monte Carlo uncertainty propagation

applied to the Phase I-1 burnup calculation,” (2012), A report for the Pin-Cell

Physics of TMI-1 PWR unit cell of the OECD/UAM working group, NRG

Report 113696, April (2012).

[16] T.R. England and B.F. Rider, “Evaluation and compilation of fission product

yields,” (1994), ENDF-349, LA-UR-94-3106, Los Alamos National Laboratory

(United States).

[17] H. Henriksson, O. Schwerer, D. Rochman, M. Mikhaylyukova, and N. Otuka,

“The art of collecting experimental data internationally: EXFOR, CINDA and

the NRDC network,” Proc. International Nuclear Data Conference for Science and Technology, 2007, Nice, France, April 22-27.

[18] A.J. Koning and A. Mengoni, “WPEC Subgroup 30: Quality improvement of

the EXFOR database,” (2009), NEA report number NEA/NSC/WPEC/DOC,

Vol. 416. OECD/NEA Nuclear Data Bank, Paris, France.

[19] R. Capote, D.L. Smith, and A. Trkov, “Nuclear data evaluation methodology

including estimates of covariances,” Proc. EPJ Web of Conferences, volume 8,

p. 4001, 2010.

[20] S.C. van der Marck, “Benchmarking ENDF/B-VII.1, JENDL-4.0 and

JEFF-3.1.1 with MCNP6,” Nuclear Data Sheets, 113, 2935-3005 (2012).

[21] J. Briggs et al., “International Handbook of Evaluated Criticality Safety Benchmark Experiments,” Report NEA/DOC (95), 4 (2004).

[22] P. Obložinsky, M. Herman, and S.F. Mughabghab, “Evaluated Nuclear Data,”

in Handbook of Nuclear Engineering, p. 83–187, Springer, 2010.

[23] J.B. Briggs, “International Handbook of Evaluated Reactor Physics Benchmark

Experiments,” NEA/NSC/DOC(95)03/I, Nuclear Energy Agency, Paris,

September (2010).

[24] I. Kodeli, E. Sartori, and B. Kirk, “SINBAD shielding benchmark experiments

status and planned activities,” Proc. ANS 14th Top. Meeting of Rad. Prot. and Shielding Division, Carlsbad, USA, 2006.

[25] T. Kawano et al., “Evaluation and propagation of the 239Pu fission cross-section

uncertainties using a Monte Carlo technique,” Nuclear Science and Engineering, 153, 1, 1-7 (2006).

[26] A.J. Koning and D. Rochman, “Modern Nuclear Data Evaluation with TALYS

code system,” Nuclear Data Sheets, 113, 2841-2934 (2012).

[27] R. Capote et al., “RIPL-Reference Input Parameter Library for calculation of

nuclear reactions and nuclear data evaluations,” Nuclear Data Sheets, 110, 12,

3107-3214 (2009).

[28] D.L. Smith, “Covariance matrices for nuclear cross-sections derived from

nuclear model calculations,” (2004), Report ANL/NDM-159, Argonne National

Laboratory, USA.

[29] A.J. Koning, S. Hilaire, and M.C. Duijvestijn, “TALYS-1.0: Making nuclear

data libraries using TALYS,” Proc. International Nuclear Data Conference for Science and Technology, p. 211–214, 2007, Nice, France, April 22-27.

[30] M. Herman et al., “EMPIRE: nuclear reaction model code system for data

evaluation,” Nuclear Data Sheets, 108, 12, 2655-2715 (2007).


[31] M.B. Chadwick et al., “ENDF/B-VII.1 nuclear data for science and

technology: cross sections, covariances, fission product yields and decay data,”

Nuclear Data Sheets, 112, 12, 2887-2996 (2011).

[32] M.B. Chadwick, P. Talou, and T. Kawano, “Reducing Uncertainty in Nuclear

Data,” (2005), Los Alamos Science, Number 29.

[33] S.F. Mughabghab, Atlas of Neutron Resonances: Resonance Parameters and Thermal Cross Sections, Z = 1-100, Elsevier (2006).

[34] A. Trkov, M. Herman, and D.A. Brown, “ENDF-6 Formats Manual,” Technical

report, Report BNL-90365-2009 Rev. 2, Brookhaven National Laboratory,

Upton, New York, 173 (2011).

[35] C. Dunford, “ENDF Utility Codes Release 7.01/02,” (2005), Los Alamos

National Laboratory, Los Alamos, NM, USA.

[36] K. Shibata et al., “JENDL-4.0: a new library for nuclear science and

engineering,” Journal of Nuclear Science and Technology, 48, 1, 1-30 (2011).

[37] A.J. Koning et al., “TENDL-2014: TALYS-based evaluated nuclear data

library,” http://www.talys.eu/tendl-2014.html (2014).

[38] A.J. Koning et al., “The JEFF-3.1 nuclear data library,” JEFF report, 21 (2006).

[39] Z.G. Ge et al., “The updated version of Chinese evaluated nuclear data library

(CENDL-3.1),” J. Korean Phys. Soc., 59, 2, 1052-1056 (2011).

[40] A. Blokhin et al., “Brond-2.2: Current Status of Russian Nuclear Data

Libraries,” Proc. Nuclear Data for Science and Technology, volume 2, p. 695,

1992.

[41] P.J. Griffin and R. Paviotti-Corcuera, “Summary Report of the Final Technical

Meeting on International Reactor Dosimetry File: IRDF-2002,” INDC(NDS)-448, IAEA, Vienna, October 2003.

[42] R. Forrest et al., The European Activation File: EAF-2005 cross section library,

EURATOM/UKAEA Fusion Association (2005).

[43] D.L. Aldama and A. Trkov, “FENDL-2.1, Update of an evaluated nuclear data

library for fusion applications,” Report INDC(NDS)-467, International Atomic Energy Agency (2004).

[44] A. Nouri et al., “JANIS: A new software for nuclear data services,” Journal of Nuclear Science and Technology, 39, sup2, 1480-1483 (2002).

[45] M. Herman et al., “Covariance Data in the Fast Neutron Region,” (2011), Tech.

Rep. WPEC Subgroup 24 final report, OECD/NEA Nuclear Data Bank, Paris,

France.

[46] D.G. Cacuci, “Sensitivity and uncertainty analysis of models and data,” in

Nuclear Computational Science, p. 291–353, Springer, 2010.

[47] D.G. Cacuci and M. Ionescu-Bujor, “Sensitivity and Uncertainty Analysis, Data

Assimilation, and Predictive Best-Estimate Model Calibration,” in Handbook of Nuclear Engineering, p. 1913–2051, Springer, 2010.

[48] J.A. Roberts, B.T. Rearden, and P.H.P. Wilson, “Determination and Application of Partial Biases in Criticality Safety Validation,” Nuclear Science and Engineering, 173, 1, 43-57 (2013).

[49] G. Aliberti et al., “Nuclear data sensitivity, uncertainty and target accuracy

assessment for future nuclear systems,” Annals of Nuclear Energy, 33, 8,

700-733 (2006).

[50] A.J. Koning and D. Rochman, “Towards sustainable nuclear energy: Putting nuclear physics to work,” Annals of Nuclear Energy, 35, 2024-2030 (2008).

[51] A.J. Koning, “Bayesian Monte Carlo Method for Nuclear Data Evaluation,”

Nuclear Data Sheets, 123, 207-213 (2015).

[52] P. Helgesson et al., “Incorporating experimental information in the TMC

methodology using file weights,” Nuclear Data Sheets, 123, 214-219 (2015).

[53] A.J. Koning, “TEFAL-1.26: Making nuclear data libraries using TALYS,”

(2010), User manual, Nuclear Research and Consultancy Group (NRG),

unpublished.

[54] D. Rochman, A.J. Koning, and D. F. Da Cruz, “Uncertainties for the Kalimer

Sodium Fast Reactor: Void Reactivity Coefficient, keff, βeff, Depletion and

Radiotoxicity,” Journal of Nuclear Science and Technology, 48, 8, 1193-1205 (2011).

[55] D. Rochman et al., “Efficient use of Monte Carlo: uncertainty propagation,”

Nuclear Science and Engineering, 177, 3, 337-349 (2014).

[56] P. Helgesson, D. Rochman, H. Sjöstrand, E. Alhassan, and A.J. Koning, “UO2

vs MOX: propagated nuclear data uncertainty for keff, with burnup,” Nuclear Science and Engineering, 177, 321-336 (2014).

[57] D. Rochman et al., “Uncertainty Propagation with Fast Monte Carlo

Techniques,” Nuclear Data Sheets, 118, 367-369 (2014).

[58] D.L. Smith, “A Unified Monte Carlo Approach to Fast Neutron Cross Section

Data Evaluation,” Proc. 8th International Topical Mtg. on Nucl. Applics. and Util. of Accelerators, Pocatello, July, p. 736, 2008.

[59] E. Bauge, S. Hilaire, and P. Dossantos-Uzarralde, “Evaluation of the covariance

matrix of neutronic cross sections with the Backward-Forward Monte Carlo

method,” Proc. International Conference on Nuclear Data for Science and Technology, April 22-27, 2007.

[60] O. Buss, A. Hoefer, and J. C. Neuber, “NUDUNA - Nuclear Data Uncertainty

Analysis,” Proc. Cross Section Evaluation Working Group (CSEWG), 2010,

Santa Fe, November, 1-5.

[61] W. Zwermann, B. Krzykacz-Hausmann, L. Gallner, A. Pautz, and M. Mattes,

“Uncertainty analyses with nuclear covariance data in reactor core calculations,”

Journal of the Korean Physical Society, 59, 2, 1256-1259 (2011).

[62] T. Zhu, A. Vasiliev, H. Ferroukhi, and A. Pautz, “NUSS: A tool for propagating

multigroup nuclear data covariances in pointwise ACE-formatted nuclear data

using stochastic sampling method,” Annals of Nuclear Energy, 75, 713-722

(2015).

[63] O. Leray, P. Grimm, M. Hursin, H. Ferroukhi, and A. Pautz, “Uncertainty

quantification of spent fuel nuclide compositions due to cross sections, decay

constants and fission yields,” Proc. PHYSOR 2014 - The Role of Reactor Physics toward a Sustainable Future, Kyoto, Japan, Sep. 28 - Oct. 3, 2014.

[64] J. Rhodes, K. Smith, and D. Lee, “CASMO-5 development and applications,”

Proc. ANS Topical Meeting on Reactor Physics (PHYSOR-2006), p. 10–14,

2006.

[65] G. Marleau, A. Hébert, and R. Roy, “A user guide for DRAGON 3.06,” Report IGE-174 Rev. 7 (2008).

[66] A. Hernández-Solís, C. Demaziere, and C. Ekberg, “Uncertainty and sensitivity

analyses applied to the DRAGONv4.05 code lattice calculations and based on

JENDL-4 data,” Annals of Nuclear Energy, 57, 230-245 (2013).


[67] A. Hernández-Solís, C. Demaziere, and C. Ekberg, “Uncertainty Analyses

Applied to the UAM/TMI-1 Lattice Calculations Using the DRAGON (Version

4.05) Code and Based on JENDL-4 and ENDF/B-VII.1 Covariance Data,”

Annals of Nuclear Energy, 57, 230-245 (2013).

[68] R.E MacFarlane and A.C. Kahler, “Methods for Processing ENDF/B-VII with

NJOY,” Nuclear Data Sheets, 111, 12, 2739-2890 (2010).

[69] D.E. Cullen, “PREPRO 2012 ENDF/B Pre-processing Codes,”

https://www-nds.iaea.org/public/endf/prepro/ (2012).

[70] J. Leppänen, “PSG2/Serpent - a Continuous-energy Monte Carlo Reactor

Physics Burnup Calculation Code,” (2015), User manual, VTT Technical

Research Centre of Finland.

[71] A.J. Koning, “TASMAN-1.26: Statistical software for TALYS: Uncertainties,

sensitivities and optimization,” (2011), User manual, Nuclear Research and

Consultancy Group (NRG), unpublished.

[72] D. Rochman, “TAFIS-1.0: Generation of fission yields, nu-bar and

covariances,” (2011), User manual, Nuclear Research and Consultancy Group

(NRG), unpublished.

[73] D. Rochman, “TANES-1.0: Generation of fission neutron spectra and

covariances,” (2011), User manual, Nuclear Research and Consultancy Group

(NRG), unpublished.

[74] D. Rochman, “TARES-1.1: Generation of resonance data and uncertainties,”

(2011), User manual, Nuclear Research and Consultancy Group (NRG),

unpublished.

[75] A.J. Koning et al., “TENDL-2012: TALYS-based evaluated nuclear data

library,” http://www.talys.eu/tendl-2012.html (2012).

[76] E. Alhassan et al., “Selecting benchmarks for reactor calculations,” Proc. PHYSOR 2014 - The Role of Reactor Physics toward a Sustainable Future, Kyoto, Japan, Sep. 28 - Oct. 3, 2014.

[77] P. Helgesson et al., “Including experimental information in TMC using file

weights from automatically generated experimental covariance matrices,” In manuscript (2015).

[78] A.J. Koning et al., “TENDL-2015: TALYS-based evaluated nuclear data

library,” ftp://tendl.psi.ch/tendl2015/tendl2015.html (2015).

[79] D.G. Cacuci, “Sensitivity and uncertainty analysis of models and data,” in

Nuclear Computational Science, p. 291–353, Springer, 2010.

[80] J. Leppänen, “Development of a New Monte Carlo Reactor Physics Code,”

D.Sc. Thesis, Helsinki University of Technology (2007).

[81] J. Duan et al., “Uncertainty study of nuclear model parameters for the n+56Fe

reactions in the fast neutron region below 20 MeV,” Nuclear Data Sheets, 118,

346-348 (2014).

[82] D. Rochman and A.J. Koning, “Evaluation and adjustment of the

neutron-induced reactions of 63,65Cu,” Nuclear Science and Engineering, 170,

3, 265-279 (2012).

[83] V. Sobes, Coupled differential and integral data analysis for improved uncertainty quantification of the 63,65Cu cross section evaluations, PhD thesis,

Massachusetts Institute of Technology, 2014.

[84] D. Rochman and A.J. Koning, “How to randomly evaluate nuclear data: A new data adjustment method applied to 239Pu,” Nuclear Science and Engineering, 169, 68-80 (2011).

[85] D. Rochman and A.J. Koning, “Pb and Bi neutron data libraries with full

covariance evaluation and improved integral tests,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 589, 1, 85-108 (2008).


Acta Universitatis Upsaliensis
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1315

Editor: The Dean of the Faculty of Science and Technology

A doctoral dissertation from the Faculty of Science and Technology, Uppsala University, is usually a summary of a number of papers. A few copies of the complete dissertation are kept at major Swedish research libraries, while the summary alone is distributed internationally through the series Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology. (Prior to January, 2005, the series was published under the title “Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology”.)

Distribution: publications.uu.se
urn:nbn:se:uu:diva-265502

ACTA UNIVERSITATIS UPSALIENSIS
UPPSALA 2015